Posted to commits@geode.apache.org by kl...@apache.org on 2017/08/19 00:09:56 UTC

[01/51] [abbrv] geode git commit: GEODE-3395 Variable-ize product version and name in user guide - Reference section [Forced Update!]

Repository: geode
Updated Branches:
  refs/heads/feature/GEODE-1279 742cb6178 -> 23c4126a6 (forced update)


http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/reference/topics/gfe_cache_xml.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/reference/topics/gfe_cache_xml.html.md.erb b/geode-docs/reference/topics/gfe_cache_xml.html.md.erb
index dba7b6a..3a941eb 100644
--- a/geode-docs/reference/topics/gfe_cache_xml.html.md.erb
+++ b/geode-docs/reference/topics/gfe_cache_xml.html.md.erb
@@ -22,9 +22,9 @@ limitations under the License.
 
 # <cache> Element Reference
 
-This section documents the `cache.xml` sub-elements used for Geode server configuration. All elements are sub-elements of the `<cache>` element.
+This section documents the `cache.xml` sub-elements used for <%=vars.product_name%> server configuration. All elements are sub-elements of the `<cache>` element.
 
-For Geode client configuration, see [&lt;client-cache&gt; Element Reference](client-cache.html#cc-client-cache).
+For <%=vars.product_name%> client configuration, see [&lt;client-cache&gt; Element Reference](client-cache.html#cc-client-cache).
 
 **API**:`org.apache.geode.cache.CacheFactory`
 
@@ -237,7 +237,7 @@ Deprecated
 
 ## <a id="gateway-sender" class="no-quick-link"></a>&lt;gateway-sender&gt;
 
-Configures a gateway sender to distribute region events to another Geode site. See [Configuring a Multi-site (WAN) System](../../topologies_and_comm/multi_site_configuration/setting_up_a_multisite_system.html#setting_up_a_multisite_system).
+Configures a gateway sender to distribute region events to another <%=vars.product_name%> site. See [Configuring a Multi-site (WAN) System](../../topologies_and_comm/multi_site_configuration/setting_up_a_multisite_system.html#setting_up_a_multisite_system).
 
 **API:** `GatewaySender`
 
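Pulling the attributes documented below together, a gateway sender declaration in `cache.xml` might look like the following minimal sketch (the sender `id`, disk store name, and remote system id are illustrative values, not part of this patch):

```xml
<cache>
  <!-- A parallel, persistent sender targeting the remote cluster
       whose locators declare distributed-system-id="2" -->
  <gateway-sender id="sender1"
                  parallel="true"
                  remote-distributed-system-id="2"
                  enable-persistence="true"
                  disk-store-name="diskStoreA"/>
</cache>
```

As the section intro notes, `<gateway-sender>` is a sub-element of `<cache>`, and `remote-distributed-system-id` is the one required attribute.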
@@ -260,7 +260,7 @@ Configures a gateway sender to distribute region events to another Geode site. S
 <tbody>
 <tr class="odd">
 <td>parallel</td>
-<td>Value of &quot;true&quot; or &quot;false&quot; that specifies the type of gateway sender that Geode creates.</td>
+<td>Value of &quot;true&quot; or &quot;false&quot; that specifies the type of gateway sender that <%=vars.product_name%> creates.</td>
 <td>false</td>
 </tr>
 <tr class="even">
@@ -277,7 +277,7 @@ When distributing region events from the local queue, multiple dispatcher thread
 <span class="keyword option">thread</span>
 When distributing region events from the local queue, multiple dispatcher threads preserve the order in which a given thread added region events to the queue.
 <span class="keyword option">partition</span>
-When distributing region events from the local queue, multiple dispatcher threads preserve the order in which region events were added to the local queue. For a partitioned region, this means that all region events delivered to a specific partition are delivered in the same order to the remote Geode site. For a distributed region, this means that all key updates delivered to the local gateway sender queue are distributed to the remote site in the same order.
+When distributing region events from the local queue, multiple dispatcher threads preserve the order in which region events were added to the local queue. For a partitioned region, this means that all region events delivered to a specific partition are delivered in the same order to the remote <%=vars.product_name%> site. For a distributed region, this means that all key updates delivered to the local gateway sender queue are distributed to the remote site in the same order.
 </div>
 <p>You cannot configure the <code class="ph codeph">order-policy</code> for a parallel event queue, because parallel queues cannot preserve event ordering for regions. Only the ordering of events for a given partition (or in a given queue of a distributed region) can be preserved.</p></td>
 <td>key</td>
@@ -289,7 +289,7 @@ When distributing region events from the local queue, multiple dispatcher thread
 </tr>
 <tr class="odd">
 <td>remote-distributed-system-id</td>
-<td>Integer that uniquely identifies the remote Geode cluster to which this gateway sender will send region events. This value corresponds to the <code class="ph codeph">distributed-system-id</code> property specified in locators for the remote cluster. This attribute is required.</td>
+<td>Integer that uniquely identifies the remote <%=vars.product_name%> cluster to which this gateway sender will send region events. This value corresponds to the <code class="ph codeph">distributed-system-id</code> property specified in locators for the remote cluster. This attribute is required.</td>
 <td>null</td>
 </tr>
 <tr class="even">
@@ -309,7 +309,7 @@ When distributing region events from the local queue, multiple dispatcher thread
 </tr>
 <tr class="odd">
 <td>enable-batch-conflation</td>
-<td>Boolean value that determines whether Geode should conflate messages.</td>
+<td>Boolean value that determines whether <%=vars.product_name%> should conflate messages.</td>
 <td>false</td>
 </tr>
 <tr class="even">
@@ -324,12 +324,12 @@ When distributing region events from the local queue, multiple dispatcher thread
 </tr>
 <tr class="even">
 <td>enable-persistence</td>
-<td>Boolean value that determines whether Geode persists the gateway queue.</td>
+<td>Boolean value that determines whether <%=vars.product_name%> persists the gateway queue.</td>
 <td>false</td>
 </tr>
 <tr class="odd">
 <td>disk-store-name</td>
-<td>Named disk store to use for storing the queue overflow, or for persisting the queue. If you specify a value, the named disk store must exist. If you specify a null value, Geode uses the default disk store for overflow and queue persistence.</td>
+<td>Named disk store to use for storing the queue overflow, or for persisting the queue. If you specify a value, the named disk store must exist. If you specify a null value, <%=vars.product_name%> uses the default disk store for overflow and queue persistence.</td>
 <td> </td>
 </tr>
 <tr class="even">
@@ -344,7 +344,7 @@ When distributing region events from the local queue, multiple dispatcher thread
 </tr>
 <tr class="even">
 <td>alert-threshold</td>
-<td>Maximum number of milliseconds that a region event can remain in the gateway sender queue before Geode logs an alert.</td>
+<td>Maximum number of milliseconds that a region event can remain in the gateway sender queue before <%=vars.product_name%> logs an alert.</td>
 <td>0</td>
 </tr>
 </tbody>
@@ -416,7 +416,7 @@ Specify the Java class and its initialization parameters with the `<class-name>`
 
 ## <a id="gateway-transport-filter" class="no-quick-link"></a>&lt;gateway-transport-filter&gt;
 
-Use a GatewayTransportFilter implementation to process the TCP stream that sends a batch of events that is distributed from one Geode cluster to another over a WAN. A GatewayTransportFilter is typically used to perform encryption or compression on the data that distributed. You install the same GatewayTransportFilter implementation on both a gateway sender and gateway receiver.
+Use a GatewayTransportFilter implementation to process the TCP stream that sends a batch of events that is distributed from one <%=vars.product_name%> cluster to another over a WAN. A GatewayTransportFilter is typically used to perform encryption or compression on the data that is distributed. You install the same GatewayTransportFilter implementation on both a gateway sender and gateway receiver.
 
 Specify the Java class and its initialization parameters with the `<class-name>` and `<parameter>` sub-elements. See [&lt;class-name&gt; and &lt;parameter&gt;](#class-name_parameter).
 
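Declared inside a sender, the filter follows the usual `<class-name>`/`<parameter>` pattern; a hedged sketch (the filter class and its parameter are hypothetical examples, not a shipped implementation):

```xml
<gateway-sender id="sender1" remote-distributed-system-id="2">
  <gateway-transport-filter>
    <!-- Hypothetical filter class; must implement GatewayTransportFilter -->
    <class-name>com.example.MyCompressionFilter</class-name>
    <parameter name="compression-level" value="9"/>
  </gateway-transport-filter>
</gateway-sender>
```

The same declaration would be repeated under the receiving site's `<gateway-receiver>` so both ends of the WAN connection apply the same transformation.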
@@ -434,7 +434,7 @@ Specify the Java class and its initialization parameters with the `<class-name>`
 
 ## <a id="gateway-receiver" class="no-quick-link"></a>&lt;gateway-receiver&gt;
 
-Configures a gateway receiver to receive and apply region events that were distributed from another Geode site. You can only specify one gateway receiver on a member. See [Configuring a Multi-site (WAN) System](../../topologies_and_comm/multi_site_configuration/setting_up_a_multisite_system.html#setting_up_a_multisite_system).
+Configures a gateway receiver to receive and apply region events that were distributed from another <%=vars.product_name%> site. You can only specify one gateway receiver on a member. See [Configuring a Multi-site (WAN) System](../../topologies_and_comm/multi_site_configuration/setting_up_a_multisite_system.html#setting_up_a_multisite_system).
 
 **API:** `GatewayReceiverFactory`, `GatewayTransportFilter`
 
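Using the port values from the attribute descriptions below, a receiver declaration can be sketched as:

```xml
<cache>
  <!-- Receiver binds to one unused port in [50510, 50520);
       only one gateway receiver is allowed per member -->
  <gateway-receiver start-port="50510" end-port="50520"/>
</cache>
```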
@@ -457,14 +457,14 @@ Configures a gateway receiver to receive and apply region events that were distr
 <tbody>
 <tr class="odd">
 <td>start-port</td>
-<td><p>Starting port number to use when specifying the range of possible port numbers this gateway receiver will use to connects to gateway senders in other sites. Geode chooses an unused port number in the specified port number range to start the receiver. If no port numbers in the range are available, an exception is thrown.</p>
-<p>The <code class="ph codeph">STARTPORT</code> value is inclusive while the <code class="ph codeph">ENDPORT</code> value is exclusive. For example, if you specify <code class="ph codeph">STARTPORT=&quot;50510&quot;</code> and <code class="ph codeph">ENDPOINT=&quot;50520&quot;</code>, Geode chooses a port value from 50510 to 50519.</p></td>
+<td><p>Starting port number to use when specifying the range of possible port numbers this gateway receiver will use to connect to gateway senders in other sites. <%=vars.product_name%> chooses an unused port number in the specified port number range to start the receiver. If no port numbers in the range are available, an exception is thrown.</p>
+<p>The <code class="ph codeph">STARTPORT</code> value is inclusive while the <code class="ph codeph">ENDPORT</code> value is exclusive. For example, if you specify <code class="ph codeph">STARTPORT=&quot;50510&quot;</code> and <code class="ph codeph">ENDPORT=&quot;50520&quot;</code>, <%=vars.product_name%> chooses a port value from 50510 to 50519.</p></td>
 <td>5000</td>
 </tr>
 <tr class="even">
 <td>end-port</td>
-<td><p>Defines the upper bound port number to use when specifying the range of possible port numbers this gateway receiver will use to for connections from gateway senders in other sites. Geode chooses an unused port number in the specified port number range to start the receiver. If no port numbers in the range are available, an exception is thrown.</p>
-<p>The <code class="ph codeph">ENDPORT</code> value is exclusive while the <code class="ph codeph">STARTPORT</code> value is inclusive. For example, if you specify <code class="ph codeph">STARTPORT=&quot;50510&quot;</code> and <code class="ph codeph">ENDPOINT=&quot;50520&quot;</code>, Geode chooses a port value from 50510 to 50519.</p></td>
+<td><p>Defines the upper bound port number to use when specifying the range of possible port numbers this gateway receiver will use for connections from gateway senders in other sites. <%=vars.product_name%> chooses an unused port number in the specified port number range to start the receiver. If no port numbers in the range are available, an exception is thrown.</p>
+<p>The <code class="ph codeph">ENDPORT</code> value is exclusive while the <code class="ph codeph">STARTPORT</code> value is inclusive. For example, if you specify <code class="ph codeph">STARTPORT=&quot;50510&quot;</code> and <code class="ph codeph">ENDPORT=&quot;50520&quot;</code>, <%=vars.product_name%> chooses a port value from 50510 to 50519.</p></td>
 <td>5500</td>
 </tr>
 <tr class="odd">
@@ -515,7 +515,7 @@ Configures a gateway receiver to receive and apply region events that were distr
 
 ## <a id="gateway-receiver_gateway-transport-filter" class="no-quick-link"></a>&lt;gateway-transport-filter&gt;
 
-Use a GatewayTransportFilter implementation to process the TCP stream that sends a batch of events that is distributed from one Geode cluster to another over a WAN. A GatewayTransportFilter is typically used to perform encryption or compression on the data that distributed. You install the same GatewayTransportFilter implementation on both a gateway sender and gateway receiver.
+Use a GatewayTransportFilter implementation to process the TCP stream that sends a batch of events that is distributed from one <%=vars.product_name%> cluster to another over a WAN. A GatewayTransportFilter is typically used to perform encryption or compression on the data that is distributed. You install the same GatewayTransportFilter implementation on both a gateway sender and gateway receiver.
 
 Specify the Java class and its initialization parameters with the `<class-name>` and `<parameter>` sub-elements. See [&lt;class-name&gt; and &lt;parameter&gt;](#class-name_parameter).
 
@@ -579,7 +579,7 @@ Configures a queue for sending region events to an AsyncEventListener implementa
 </tr>
 <tr class="even">
 <td>parallel</td>
-<td>Value of &quot;true&quot; or &quot;false&quot; that specifies the type of queue that Geode creates.</td>
+<td>Value of &quot;true&quot; or &quot;false&quot; that specifies the type of queue that <%=vars.product_name%> creates.</td>
 <td>false</td>
 </tr>
 <tr class="odd">
@@ -594,12 +594,12 @@ Configures a queue for sending region events to an AsyncEventListener implementa
 </tr>
 <tr class="odd">
 <td>enable-batch-conflation</td>
-<td>Boolean value that determines whether Geode should conflate messages.</td>
+<td>Boolean value that determines whether <%=vars.product_name%> should conflate messages.</td>
 <td>false</td>
 </tr>
 <tr class="even">
 <td>disk-store-name</td>
-<td>Named disk store to use for storing queue overflow, or for persisting the queue. If you specify a value, the named disk store must exist. If you specify a null value, Geode uses the default disk store for overflow and queue persistence.</td>
+<td>Named disk store to use for storing queue overflow, or for persisting the queue. If you specify a value, the named disk store must exist. If you specify a null value, <%=vars.product_name%> uses the default disk store for overflow and queue persistence.</td>
 <td>null specifies the default disk store</td>
 </tr>
 <tr class="odd">
@@ -628,13 +628,13 @@ Configures a queue for sending region events to an AsyncEventListener implementa
 <ul>
 <li><strong>key</strong>. When distributing region events from the local queue, multiple dispatcher threads preserve the order of key updates.</li>
 <li><strong>thread</strong>. When distributing region events from the local queue, multiple dispatcher threads preserve the order in which a given thread added region events to the queue.</li>
-<li><strong>partition</strong>. This option is valid for parallel event queues. When distributing region events from the local queue, multiple dispatcher threads preserve the order in which region events were added to the local queue. For a partitioned region, this means that all region events delivered to a specific partition are delivered in the same order to the remote Geode site. For a distributed region, this means that all key updates delivered to the local queue are distributed to the remote site in the same order.</li>
+<li><strong>partition</strong>. This option is valid for parallel event queues. When distributing region events from the local queue, multiple dispatcher threads preserve the order in which region events were added to the local queue. For a partitioned region, this means that all region events delivered to a specific partition are delivered in the same order to the remote <%=vars.product_name%> site. For a distributed region, this means that all key updates delivered to the local queue are distributed to the remote site in the same order.</li>
 </ul></td>
 <td>key</td>
 </tr>
 <tr class="even">
 <td>persistent</td>
-<td>Boolean value that determines whether Geode persists this queue.</td>
+<td>Boolean value that determines whether <%=vars.product_name%> persists this queue.</td>
 <td>False</td>
 </tr>
 </tbody>
@@ -795,7 +795,7 @@ The `cacheserver` process uses only `cache.xml` configuration. For application s
 </tr>
 <tr class="odd">
 <td>tcp-no-delay</td>
-<td>When set to true, enables TCP_NODELAY for Geode server connections to clients.</td>
+<td>When set to true, enables TCP_NODELAY for <%=vars.product_name%> server connections to clients.</td>
 <td>false</td>
 </tr>
 </tbody>
@@ -850,7 +850,7 @@ Application plug-in used to provide current and predicted server load informatio
 
 Specify the Java class and its initialization parameters with the `<class-name>` and `<parameter>` sub-elements. See [&lt;class-name&gt; and &lt;parameter&gt;](cache_xml.html#class-name_parameter).
 
-**Default:** If this is not defined, the default Geode load probe is used.
+**Default:** If this is not defined, the default <%=vars.product_name%> load probe is used.
 
 **API:** `org.apache.geode.cache.server.setLoadProbe`
 
@@ -1259,7 +1259,7 @@ Specifies the configuration for the Portable Data eXchange (PDX) method of seria
 
 ## <a id="id_td2_ydq_rr" class="no-quick-link"></a>&lt;pdx-serializer&gt;
 
-Allows you to configure the PdxSerializer for this Geode member.
+Allows you to configure the PdxSerializer for this <%=vars.product_name%> member.
 
 Specify the Java class and its initialization parameters with the `<class-name>` and `<parameter>` sub-elements. See [&lt;class-name&gt; and &lt;parameter&gt;](cache_xml.html#class-name_parameter).
 
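A common serializer configuration registers Geode's `ReflectionBasedAutoSerializer` with the `<class-name>`/`<parameter>` pattern; a minimal sketch, assuming the `com.example.domain.*` class pattern stands in for your own domain classes:

```xml
<pdx>
  <pdx-serializer>
    <class-name>org.apache.geode.pdx.ReflectionBasedAutoSerializer</class-name>
    <!-- "classes" takes patterns naming the classes to auto-serialize -->
    <parameter name="classes">
      <string>com.example.domain.*</string>
    </parameter>
  </pdx-serializer>
</pdx>
```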
@@ -1305,7 +1305,7 @@ Specifies a region attributes template that can be named (by `id`) and reference
 <tbody>
 <tr class="odd">
 <td>concurrency-level</td>
-<td>Gives an estimate of the maximum number of application threads that will concurrently access a region entry at one time. This attribute does not apply to partitioned regions. This attribute helps Geode optimize the use of system resources and reduce thread contention. This sets an initial parameter on the underlying <code class="ph codeph">java.util.ConcurrentHashMap</code> used for storing region entries.
+<td>Gives an estimate of the maximum number of application threads that will concurrently access a region entry at one time. This attribute does not apply to partitioned regions. This attribute helps <%=vars.product_name%> optimize the use of system resources and reduce thread contention. This sets an initial parameter on the underlying <code class="ph codeph">java.util.ConcurrentHashMap</code> used for storing region entries.
 <div class="note note">
 **Note:**
 <p>Before you modify this, read the concurrency level description, then see the Java API documentation for <code class="ph codeph">java.util.ConcurrentHashMap</code>.</p>
@@ -1407,7 +1407,7 @@ Used only with GemFire version 6.x gateway configurations. For GemFire 7.0 confi
 </tr>
 <tr class="even">
 <td>gateway-sender-ids</td>
-<td><p>Specifies one or more gateway sender IDs to use for distributing region events to remote Geode sites. Specify multiple IDs as a comma-separated list.</p>
+<td><p>Specifies one or more gateway sender IDs to use for distributing region events to remote <%=vars.product_name%> sites. Specify multiple IDs as a comma-separated list.</p>
 <p><strong>API:</strong> <code class="ph codeph">addGatewaySenderId</code></p>
 <p><strong>Example:</strong></p>
 <pre class="pre codeblock language-xml"><code>&lt;region-attributes 
@@ -1584,7 +1584,7 @@ Used only with GemFire version 6.x gateway configurations. For GemFire 7.0 confi
 <td><p>Definition: Determines how updates to region entries are distributed to the other caches in the distributed system where the region and entry are defined. Scope also determines whether to allow remote invocation of some of the region’s event handlers, and whether to use region entry versions to provide consistent updates across replicated regions.</p>
 <div class="note note">
 **Note:**
-<p>You can configure the most common of these options with Geode’s region shortccuts in <code class="ph codeph">RegionShortcut</code> and <code class="ph codeph">ClientRegionShortcut</code>.</p>
+<p>You can configure the most common of these options with <%=vars.product_name%> region shortcuts in <code class="ph codeph">RegionShortcut</code> and <code class="ph codeph">ClientRegionShortcut</code>.</p>
 </div>
 <div class="note note">
 **Note:**
@@ -1622,7 +1622,7 @@ Used only with GemFire version 6.x gateway configurations. For GemFire 7.0 confi
 </tr>
 <tr class="even">
 <td>statistics-enabled</td>
-<td>Boolean specifying whether to gather statistics on the region. Must be true to use expiration on the region. Geode provides a standard set of statistics for cached regions and region entries, which give you information for fine-tuning your distributed system. Unlike other Geode statistics, statistics for local and distributed regions are not archived and cannot be charted. They are kept in instances of <code class="ph codeph">org.apache.geode.cache.CacheStatistics</code> and made available through the region and its entries through the <code class="ph codeph">Region.getStatistics</code> and <code class="ph codeph">Region.Entry.getStatistics</code> methods.
+<td>Boolean specifying whether to gather statistics on the region. Must be true to use expiration on the region. <%=vars.product_name%> provides a standard set of statistics for cached regions and region entries, which give you information for fine-tuning your distributed system. Unlike other <%=vars.product_name%> statistics, statistics for local and distributed regions are not archived and cannot be charted. They are kept in instances of <code class="ph codeph">org.apache.geode.cache.CacheStatistics</code> and made available through the region and its entries through the <code class="ph codeph">Region.getStatistics</code> and <code class="ph codeph">Region.Entry.getStatistics</code> methods.
 <p><strong>API:</strong> <code class="ph codeph">setStatisticsEnabled</code></p>
 <p><strong>Example:</strong></p>
 <pre class="pre codeblock language-xml"><code>&lt;region-attributes 
@@ -1645,7 +1645,7 @@ Used only with GemFire version 6.x gateway configurations. For GemFire 7.0 confi
 <td><p>Determines whether members perform checks to provide consistent handling for concurrent or out-of-order updates to distributed regions. See [Consistency for Region Updates](../../developing/distributed_regions/region_entry_versions.html#topic_CF2798D3E12647F182C2CEC4A46E2045).</p>
 <div class="note note">
 **Note:**
-<p>Applications that use a <code class="ph codeph">client-cache</code> may want to disable concurrency checking in order to see all events for a region. Geode server members can continue using concurrency checks for the region, but they will pass all events to the client cache. This configuration ensures that the client sees all events, but it does not prevent the client cache from becoming out-of-sync with the server cache.</p>
+<p>Applications that use a <code class="ph codeph">client-cache</code> may want to disable concurrency checking in order to see all events for a region. <%=vars.product_name%> server members can continue using concurrency checks for the region, but they will pass all events to the client cache. This configuration ensures that the client sees all events, but it does not prevent the client cache from becoming out-of-sync with the server cache.</p>
 </div>
 <p><strong>API:</strong> <code class="ph codeph">setConcurrencyChecksEnabled</code></p>
 <p><strong>Example:</strong></p>
@@ -2209,7 +2209,7 @@ With the exception of `local-max-memory`, all members defining a partitioned reg
 | Attribute              | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    | Default              |
 |------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------|
 | colocated-with         | The full name of a region to colocate with this region. The named region must exist before this region is created.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             | null                 |
-| local-max-memory       | Maximum megabytes of memory set aside for this region in the local member. This is all memory used for this partitioned region - for primary buckets and any redundant copies. This value must be smaller than the Java settings for the initial or maximum JVM heap. When the memory use goes above this value, Geode issues a warning, but operation continues. Besides setting the maximum memory to use for the member, this setting also tells Geode how to balance the load between members where the region is defined. For example, if one member sets this value to twice the value of another member’s setting, Geode works to keep the ratio between the first and the second at two-to-one, regardless of how little memory the region consumes. This is a local parameter that applies only to the local member. A value of 0 disables local data caching. | 90% (of local heap)  |
+| local-max-memory       | Maximum megabytes of memory set aside for this region in the local member. This is all memory used for this partitioned region - for primary buckets and any redundant copies. This value must be smaller than the Java settings for the initial or maximum JVM heap. When the memory use goes above this value, <%=vars.product_name%> issues a warning, but operation continues. Besides setting the maximum memory to use for the member, this setting also tells <%=vars.product_name%> how to balance the load between members where the region is defined. For example, if one member sets this value to twice the value of another member’s setting, <%=vars.product_name%> works to keep the ratio between the first and the second at two-to-one, regardless of how little memory the region consumes. This is a local parameter that applies only to the local member. A value of 0 disables local data caching. | 90% (of local heap)  |
 | recovery-delay         | Applies when `redundant-copies` is greater than zero. The number of milliseconds to wait after a member crashes before reestablishing redundancy for the region. A setting of -1 disables automatic recovery of redundancy after member failure.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               | -1                   |
 | redundant-copies       | Number of extra copies that the partitioned region must maintain for each entry. Range: 0-3. If you specify 1, this partitioned region maintains the original and one backup, for a total of two copies. A value of 0 disables redundancy.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     | 0                    |
 | startup-recovery-delay | Applies when `redundant-copies` is greater than zero. The number of milliseconds a newly started member should wait before trying to satisfy redundancy of region data stored on other members. A setting of -1 disables automatic recovery of redundancy after new members join.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              | 0                    |
@@ -2527,7 +2527,7 @@ An event-handler plug-in that allows you to receive before-event notification fo
 
 ## <a id="id_xt4_m2q_rr" class="no-quick-link"></a>&lt;cache-listener&gt;
 
-An event-handler plug-in that receives after-event notification of changes to the region and its entries. Any number of cache listeners can be defined for a region in any member. Geode offers several listener types with callbacks to handle data and process events. Depending on the `data-policy` and the `interest-policy` subscription attributes, a cache listener may receive only events that originate in the local cache, or it may receive those events along with events that originate remotely.
+An event-handler plug-in that receives after-event notification of changes to the region and its entries. Any number of cache listeners can be defined for a region in any member. <%=vars.product_name%> offers several listener types with callbacks to handle data and process events. Depending on the `data-policy` and the `interest-policy` subscription attributes, a cache listener may receive only events that originate in the local cache, or it may receive those events along with events that originate remotely.
 
 Specify the Java class for the cache listener and its initialization parameters with the `<class-name>` and `<parameter>` sub-elements. See [&lt;class-name&gt; and &lt;parameter&gt;](cache_xml.html#class-name_parameter).
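
As a sketch (the listener class and parameter are hypothetical), such a declaration might look like:

``` pre
<region name="exampleRegion">
  <region-attributes>
    <cache-listener>
      <class-name>com.example.ExampleCacheListener</class-name>
      <parameter name="loggingLevel">
        <string>warning</string>
      </parameter>
    </cache-listener>
  </region-attributes>
</region>
```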
 
@@ -2564,7 +2564,7 @@ A compressor registers a custom class that extends `Compressor` to support compr
 
 ## <a id="id_kc4_n2q_rr" class="no-quick-link"></a>&lt;eviction-attributes&gt;
 
-Specifies whether and how to control a region’s size. Size is controlled by removing least recently used (LRU) entries to make space for new ones. This may be done through destroy or overflow actions. You can configure your region for lru-heap-percentage with an eviction action of local-destroy using Geode’s stored region attributes.
+Specifies whether and how to control a region’s size. Size is controlled by removing least recently used (LRU) entries to make space for new ones. This may be done through destroy or overflow actions. You can configure your region for lru-heap-percentage with an eviction action of local-destroy using stored region attributes.
 
 **Default:** Uses the lru-entry-count algorithm.
 
@@ -2634,7 +2634,7 @@ Using the maximum attribute, specifies maximum region capacity based on entry co
 
 ## <a id="id_gpn_42q_rr" class="no-quick-link"></a>&lt;lru-heap-percentage&gt;
 
-Runs evictions when the Geode resource manager says to. The manager orders evictions when the total cache size is over the heap percentage limit specified in the manager configuration. You can declare a Java class that implements the ObjectSizer interface to measure the size of objects in the Region.
+Runs evictions when the <%=vars.product_name%> resource manager says to. The manager orders evictions when the total cache size is over the heap percentage limit specified in the manager configuration. You can declare a Java class that implements the ObjectSizer interface to measure the size of objects in the Region.
 
 Specify the Java class and its initialization parameters with the `<class-name>` and `<parameter>` sub-elements. See [&lt;class-name&gt; and &lt;parameter&gt;](cache_xml.html#class-name_parameter).
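
A sketch of such a declaration, assuming a hypothetical `ObjectSizer` implementation class:

``` pre
<region-attributes>
  <eviction-attributes>
    <lru-heap-percentage action="local-destroy">
      <class-name>com.example.ExampleObjectSizer</class-name>
    </lru-heap-percentage>
  </eviction-attributes>
</region-attributes>
```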
 
@@ -2771,7 +2771,7 @@ Specifies the binding for a data-source used in transaction management. See [Con
 
 ## <a id="id_jrf_q2q_rr" class="no-quick-link"></a>&lt;jndi-binding&gt;
 
-For every datasource that is bound to the JNDI tree, there should be one `<jndi-binding>` element. This element describes the property and the configuration of the datasource. Geode uses the attributes of the `<jndi-binding>` element for configuration. Use the `<config-property>` element to configure properties for the datasource.
+For every datasource that is bound to the JNDI tree, there should be one `<jndi-binding>` element. This element describes the property and the configuration of the datasource. <%=vars.product_name%> uses the attributes of the `<jndi-binding>` element for configuration. Use the `<config-property>` element to configure properties for the datasource.
 
 We recommend that you set the username and password with the `user-name` and `password` jndi-binding attributes rather than using the `<config-property>` element.
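
For illustration only (the driver, URL, and credentials here are hypothetical), a binding might be declared as:

``` pre
<jndi-bindings>
  <jndi-binding jndi-name="myDatabase"
                type="SimpleDataSource"
                jdbc-driver-class="org.postgresql.Driver"
                connection-url="jdbc:postgresql://localhost:5432/exampledb"
                user-name="dbuser"
                password="dbpassword">
    <config-property>
      <config-property-name>loginTimeout</config-property-name>
      <config-property-type>java.lang.Integer</config-property-type>
      <config-property-value>30</config-property-value>
    </config-property>
  </jndi-binding>
</jndi-bindings>
```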
 
@@ -2828,7 +2828,7 @@ We recommend that you set the username and password with the `user-name` and `pa
 </tr>
 <tr class="odd">
 <td>jndi-name</td>
-<td>The <code class="ph codeph">jndi-name</code> attribute is the key binding parameter. If the value of jndi-name is a DataSource, it is bound as java:/myDatabase, where myDatabase is the name you assign to your data source. If the data source cannot be bound to JNDI at runtime, Geode logs a warning.</td>
+<td>The <code class="ph codeph">jndi-name</code> attribute is the key binding parameter. If the value of jndi-name is a DataSource, it is bound as java:/myDatabase, where myDatabase is the name you assign to your data source. If the data source cannot be bound to JNDI at runtime, <%=vars.product_name%> logs a warning.</td>
 <td> </td>
 </tr>
 <tr class="even">
@@ -2862,7 +2862,7 @@ We recommend that you set the username and password with the `user-name` and `pa
 <tbody>
 <tr class="odd">
 <td>XATransaction</td>
-<td>Select this option when you want to use a<span class="keyword apiname">ManagedConnection</span> interface with a Java Transaction Manager to define transaction boundries. This option allows a <span class="keyword apiname">ManagedDataSource</span> to participate in a transaction with a Geode cache.</td>
+<td>Select this option when you want to use a <span class="keyword apiname">ManagedConnection</span> interface with a Java Transaction Manager to define transaction boundaries. This option allows a <span class="keyword apiname">ManagedDataSource</span> to participate in a transaction with a <%=vars.product_name%> cache.</td>
 </tr>
 <tr class="even">
 <td>NoTransaction</td>
@@ -3276,7 +3276,7 @@ Set of serializer or instantiator tags to register customer DataSerializer exten
 
 ## <a id="id_jsk_y2q_rr" class="no-quick-link"></a>&lt;serializer&gt;
 
-Allows you to configure the DataSerializer for this Geode member. It registers a custom class which extends DataSerializer to support custom serialization of non-modifiable object types inside Geode.
+Allows you to configure the DataSerializer for this <%=vars.product_name%> member. It registers a custom class which extends DataSerializer to support custom serialization of non-modifiable object types inside <%=vars.product_name%>.
 
 Specify the Java class for the `DataSerializer` and its initialization parameters with the `<class-name>` sub-element.
 
@@ -3284,7 +3284,7 @@ Specify the Java class for the `DataSerializer` and its initialization parameter
 
 ## <a id="id_p5t_y2q_rr" class="no-quick-link"></a>&lt;instantiator&gt;
 
-An Instantiator registers a custom class which implements the `DataSerializable` interface to support custom object serialization inside Geode.
+An Instantiator registers a custom class which implements the `DataSerializable` interface to support custom object serialization inside <%=vars.product_name%>.
 
 Specify the Java class and its initialization parameters with the `<class-name>` sub-element.
 
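A sketch registering both, with hypothetical class names (the instantiator `id` is the class ID under which the `DataSerializable` type is registered):

``` pre
<serialization-registration>
  <serializer>
    <class-name>com.example.ExampleDataSerializer</class-name>
  </serializer>
  <instantiator id="30">
    <class-name>com.example.ExampleDataSerializableType</class-name>
  </instantiator>
</serialization-registration>
```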

http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/reference/topics/handling_exceptions_and_failures.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/reference/topics/handling_exceptions_and_failures.html.md.erb b/geode-docs/reference/topics/handling_exceptions_and_failures.html.md.erb
index 8d46db5..45fc7eb 100644
--- a/geode-docs/reference/topics/handling_exceptions_and_failures.html.md.erb
+++ b/geode-docs/reference/topics/handling_exceptions_and_failures.html.md.erb
@@ -19,14 +19,14 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Your application needs to catch certain classes to handle all the exceptions and system failures thrown by Apache Geode.
+Your application needs to catch certain classes to handle all the exceptions and system failures thrown by <%=vars.product_name_long%>.
 
--   `GemFireCheckedException`. This class is the abstract superclass of exceptions that are thrown and declared. Wherever possible, GemFire exceptions are checked exceptions. `GemFireCheckedException` is a Geode version of `java.lang.Exception`.
--   `GemFireException`. This class is the abstract superclass of unchecked exceptions that are thrown to indicate conditions for which the developer should not normally need to check. You can look at the subclasses of `GemFireException` to see all the runtime exceptions in the GemFire system; see the class hierarchy in the online Java API documentation. `GemFireException` is a Geode version of java.lang.`RuntimeException`. You can also look at the method details in the `Region` API javadocs for Geode exceptions you may want to catch.
--   `SystemFailure`. In addition to exception management, Geode provides a class to help you manage catastrophic failure in your distributed system, particularly in your application. The Javadocs for this class provide extensive guidance for managing failures in your system and your application. See `SystemFailure` in the `org.apache.geode` package.
+-   `GemFireCheckedException`. This class is the abstract superclass of exceptions that are thrown and declared. Wherever possible, GemFire exceptions are checked exceptions. `GemFireCheckedException` is a <%=vars.product_name%> version of `java.lang.Exception`.
+-   `GemFireException`. This class is the abstract superclass of unchecked exceptions that are thrown to indicate conditions for which the developer should not normally need to check. You can look at the subclasses of `GemFireException` to see all the runtime exceptions in the GemFire system; see the class hierarchy in the online Java API documentation. `GemFireException` is a <%=vars.product_name%> version of java.lang.`RuntimeException`. You can also look at the method details in the `Region` API javadocs for <%=vars.product_name%> exceptions you may want to catch.
+-   `SystemFailure`. In addition to exception management, <%=vars.product_name%> provides a class to help you manage catastrophic failure in your distributed system, particularly in your application. The Javadocs for this class provide extensive guidance for managing failures in your system and your application. See `SystemFailure` in the `org.apache.geode` package.
 
 To see the exceptions thrown by a specific method, refer to the method's online Java documentation.
 
-A Geode system member can also throw exceptions generated by third-party software such as JGroups or `java.lang` classes. For assistance in handling these exceptions, see the vendor documentation.
+A <%=vars.product_name%> system member can also throw exceptions generated by third-party software such as JGroups or `java.lang` classes. For assistance in handling these exceptions, see the vendor documentation.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/reference/topics/memory_requirements_for_cache_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/reference/topics/memory_requirements_for_cache_data.html.md.erb b/geode-docs/reference/topics/memory_requirements_for_cache_data.html.md.erb
index 150814a..4fa57d9 100644
--- a/geode-docs/reference/topics/memory_requirements_for_cache_data.html.md.erb
+++ b/geode-docs/reference/topics/memory_requirements_for_cache_data.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode solutions architects need to estimate resource requirements for meeting application performance, scalability and availability goals.
+<%=vars.product_name%> solutions architects need to estimate resource requirements for meeting application performance, scalability and availability goals.
 
 These requirements include estimates for the following resources:
 
@@ -27,12 +27,12 @@ These requirements include estimates for the following resources:
 -   number of machines
 -   network bandwidth
 
-The information here is only a guideline, and assumes a basic understanding of Geode. While no two applications or use cases are exactly alike, the information here should be a solid starting point, based on real-world experience. Much like with physical database design, ultimately the right configuration and physical topology for deployment is based on the performance requirements, application data access characteristics, and resource constraints (i.e., memory, CPU, and network bandwidth) of the operating environment.
+The information here is only a guideline, and assumes a basic understanding of <%=vars.product_name%>. While no two applications or use cases are exactly alike, the information here should be a solid starting point, based on real-world experience. Much like with physical database design, ultimately the right configuration and physical topology for deployment is based on the performance requirements, application data access characteristics, and resource constraints (i.e., memory, CPU, and network bandwidth) of the operating environment.
 
 
 <a id="topic_ipt_dqz_j4"></a>
 
-# Core Guidelines for Geode Data Region Design
+# Core Guidelines for <%=vars.product_name%> Data Region Design
 
 The following guidelines apply to region design:
 
@@ -46,9 +46,9 @@ The following guidelines apply to region design:
 
 The following guidelines should provide a rough estimate of the amount of memory consumed by your system.
 
-Memory calculation about keys and entries (objects) and region overhead for them can be divided by the number of members of the distributed system for data placed in partitioned regions only. For other regions, the calculation is for each member that hosts the region. Memory used by sockets, threads, and the small amount of application overhead for Geode is per member.
+The memory calculation for keys and entries (objects), and the region overhead for them, can be divided by the number of members of the distributed system for data placed in partitioned regions only. For other regions, the calculation is for each member that hosts the region. Memory used by sockets, threads, and the small amount of application overhead for <%=vars.product_name%> is per member.
 
-For each entry added to a region, the Geode cache API consumes a certain amount of memory to store and manage the data. This overhead is required even when an entry is overflowed or persisted to disk. Thus objects on disk take up some JVM memory, even when they are paged to disk. The Java cache overhead introduced by a region, using a 32-bit JVM, can be approximated as listed below.
+For each entry added to a region, the <%=vars.product_name%> cache API consumes a certain amount of memory to store and manage the data. This overhead is required even when an entry is overflowed or persisted to disk. Thus objects on disk take up some JVM memory, even when they are paged to disk. The Java cache overhead introduced by a region, using a 32-bit JVM, can be approximated as listed below.
 
 Actual memory use varies based on a number of factors, including the JVM you are using and the platform you are running on. For 64-bit JVMs, the usage will usually be larger than with 32-bit JVMs. As much as 80% more memory may be required for 64-bit JVMs, due to object references and headers using more memory.
 
@@ -56,7 +56,7 @@ There are several additional considerations for calculating your memory requirem
 
 -   **Size of your stored data.** To estimate the size of your stored data, determine first whether you are storing the data in serialized or non-serialized form. In general, the non-serialized form will be the larger of the two. See [Determining Object Serialization Overhead](#topic_psn_5tz_j4)
 
-    Objects in Geode are serialized for storage into partitioned regions and for all distribution activities, including moving data to disk for overflow and persistence. For optimum performance, Geode tries to reduce the number of times an object is serialized and deserialized, so your objects may be stored in serialized or non-serialized form in the cache.
+    Objects in <%=vars.product_name%> are serialized for storage into partitioned regions and for all distribution activities, including moving data to disk for overflow and persistence. For optimum performance, <%=vars.product_name%> tries to reduce the number of times an object is serialized and deserialized, so your objects may be stored in serialized or non-serialized form in the cache.
 
 -   **Application object overhead for your data.** When calculating application overhead, make sure to count the key as well as the value, and to count every object if the key and/or value is a composite object.
 
@@ -105,7 +105,7 @@ Keys are stored in object form except for certain classes where the storage of k
 
 **When to disable inline key storage.** In some cases, storing keys inline may introduce extra memory or CPU usage. If all of your keys are also referenced from some other object, then it is better to not inline the key. If you frequently ask for the key from the region, then you may want to keep the object form stored in the cache so that you do not need to recreate the object form constantly. Note that the basic operation of checking whether a key is in a region does not require the object form but uses the inline primitive data.
 
-The key inlining feature can be disabled by specifying the following Geode property upon member startup:
+The key inlining feature can be disabled by specifying the following <%=vars.product_name%> property upon member startup:
 
 ``` pre
 -Dgemfire.DISABLE_INLINE_REGION_KEYS=true
@@ -180,19 +180,19 @@ The other index overhead estimates listed here also apply to Lucene indexes.
 
 ## <a id="topic_i1m_stz_j4" class="no-quick-link"></a>Estimating Management and Monitoring Overhead
 
-Geode's JMX management and monitoring system contributes to memory overhead and should be accounted for when establishing the memory requirements for your deployment. Specifically, the memory footprint of any processes (such as locators) that are running as JMX managers can increase.
+The <%=vars.product_name%> JMX management and monitoring system contributes to memory overhead and should be accounted for when establishing the memory requirements for your deployment. Specifically, the memory footprint of any processes (such as locators) that are running as JMX managers can increase.
 
 For each resource in the distributed system that is being managed and monitored by the JMX Manager (for example, each MXBean such as MemberMXBean, RegionMXBean, DiskStoreMXBean, LockServiceMXBean and so on), you should add 10 KB of required memory to the JMX Manager node.
 
 ## <a id="topic_psn_5tz_j4" class="no-quick-link"></a>Determining Object Serialization Overhead
 
-Geode PDX serialization can provide significant space savings over Java Serializable in addition to better performance. In some cases we have seen savings of up to 65%, but the savings will vary depending on the domain objects. PDX serialization is most likely to provide the most space savings of all available options. DataSerializable is more compact, but it requires that objects are deserialized on access, so that should be taken into account. On the other hand, PDX serializable does not require deserialization for most operations, and because of that, it may provide greater space savings.
+<%=vars.product_name%> PDX serialization can provide significant space savings over Java Serializable in addition to better performance. In some cases we have seen savings of up to 65%, but the savings will vary depending on the domain objects. PDX serialization is most likely to provide the most space savings of all available options. DataSerializable is more compact, but it requires that objects are deserialized on access, so that should be taken into account. On the other hand, PDX serialization does not require deserialization for most operations, and because of that, it may provide greater space savings.
 
-In any case, the kinds and volumes of operations that would be done on the server side should be considered in the context of data serialization, as Geode has to deserialize data for some types of operations (access). For example, if a function invokes a get operation on the server side, the value returned from the get operation will be deserialized in most cases (the only time it will not be deserialized is when PDX serialization is used and the read-serialized attribute is set). The only way to find out the actual overhead is by running tests, and examining the memory usage.
+In any case, the kinds and volumes of operations that would be done on the server side should be considered in the context of data serialization, as <%=vars.product_name%> has to deserialize data for some types of operations (access). For example, if a function invokes a get operation on the server side, the value returned from the get operation will be deserialized in most cases (the only time it will not be deserialized is when PDX serialization is used and the read-serialized attribute is set). The only way to find out the actual overhead is by running tests, and examining the memory usage.
 
 Some additional serialization guidelines and tips:
 
--   If you are using compound objects, do not mix using standard Java serialization with with Geode serialization (either DataSerializable or PDX). Standard Java serialization functions correctly when mixed with Geode serialization, but it can end up producing many more serialized bytes.
+-   If you are using compound objects, do not mix standard Java serialization with <%=vars.product_name%> serialization (either DataSerializable or PDX). Standard Java serialization functions correctly when mixed with <%=vars.product_name%> serialization, but it can end up producing many more serialized bytes.
 
     To determine if you are using standard Java serialization, specify the `-DDataSerializer.DUMP_SERIALIZED=true` upon process execution. Then check your log for messages of this form:
 
@@ -224,7 +224,7 @@ A note of caution-- if the domain object contains many domain objects as member
 
 Servers always maintain two outgoing connections to each of their peers. So for each peer a server has, there are four total connections: two going out to the peer and two coming in from the peer.
 
-The server threads that service client requests also communicate with peers to distribute events and forward client requests. If the server's Geode connection property *conserve-sockets* is set to true (the default), these threads use the already-established peer connections for this communication.
+The server threads that service client requests also communicate with peers to distribute events and forward client requests. If the server's <%=vars.product_name%> connection property *conserve-sockets* is set to true (the default), these threads use the already-established peer connections for this communication.
 
 If *conserve-sockets* is false, each thread that services clients establishes two of its own individual connections to its server peers, one to send, and one to receive. Each socket uses a file descriptor, so the number of available sockets is governed by two operating system settings:
 
@@ -236,7 +236,7 @@ In servers with many threads servicing clients, if *conserve-sockets* is set to
 Each client connection takes one server socket on a thread to handle the connection, and for partitioned regions the server acts as a proxy to get results or execute the function service on behalf of the client. If conserve-sockets is set to false, this also opens a new socket on the server to each peer, so N sockets are opened, where N is the number of peers. A large number of clients simultaneously connecting to a large set of peers through a partitioned region with conserve-sockets set to false can cause a huge amount of memory to be consumed by sockets. Set conserve-sockets to true in these instances.
 
 **Note:**
-There is also JVM overhead for the thread stack for each client connection being processed, set at 256KB or 512KB for most JVMs . On some JVMs you can reduce it to 128KB. You can use the Geode `max-threads` property or the Geode `max-connections` property to limit the number of client threads and thus both thread overhead and socket overhead.
+There is also JVM overhead for the thread stack for each client connection being processed, set at 256KB or 512KB for most JVMs. On some JVMs you can reduce it to 128KB. You can use the <%=vars.product_name%> `max-threads` property or the <%=vars.product_name%> `max-connections` property to limit the number of client threads and thus both thread overhead and socket overhead.
 
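As a sketch, the per-server caps mentioned in the note above could be declared on the cache server in `cache.xml` (the port and limits here are illustrative), with `conserve-sockets=true` set separately in `gemfire.properties`:

``` pre
<cache-server port="40404" max-connections="800" max-threads="100"/>
```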
 The following table lists the memory requirements based on connections.
 
@@ -286,7 +286,7 @@ The following table lists the memory requirements based on connections.
 <td>1 MB +</td>
 </tr>
 <tr class="even">
-<td><strong>Geode classes and JVM overhead</strong></td>
+<td><strong><%=vars.product_name%> classes and JVM overhead</strong></td>
 <td>Roughly 50MB</td>
 </tr>
 <tr class="odd">
@@ -300,4 +300,4 @@ The following table lists the memory requirements based on connections.
 </tbody>
 </table>
 
-<a id="topic_eww_rvz_j4"></a>
+

http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/reference/topics/non-ascii_strings_in_config_files.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/reference/topics/non-ascii_strings_in_config_files.html.md.erb b/geode-docs/reference/topics/non-ascii_strings_in_config_files.html.md.erb
index 1f4e091..045140a 100644
--- a/geode-docs/reference/topics/non-ascii_strings_in_config_files.html.md.erb
+++ b/geode-docs/reference/topics/non-ascii_strings_in_config_files.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Using Non-ASCII Strings in Apache Geode Property Files
----
+<% set_title("Using Non-ASCII Strings in", product_name_long, "Property Files") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -19,7 +17,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-You can specify Unicode (non-ASCII) characters in Apache Geode property files by using a `\uXXXX` escape sequence.
+You can specify Unicode (non-ASCII) characters in <%=vars.product_name_long%> property files by using a `\uXXXX` escape sequence.
 
 For a supplementary character, you need two escape sequences, one for each of the two UTF-16 code units. The XXXX denotes the 4 hexadecimal digits for the value of the UTF-16 code unit. For example, a properties file might have the following entries:
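
One representative entry (illustrative; `\u65e5\u5fd7` encodes a two-character non-ASCII string, one escape per UTF-16 code unit):

``` pre
log-file=\u65e5\u5fd7.log
```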
 

http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/reference/topics/region_shortcuts_reference.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/reference/topics/region_shortcuts_reference.html.md.erb b/geode-docs/reference/topics/region_shortcuts_reference.html.md.erb
index dfdaa39..1535bfa 100644
--- a/geode-docs/reference/topics/region_shortcuts_reference.html.md.erb
+++ b/geode-docs/reference/topics/region_shortcuts_reference.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-This topic describes the various region shortcuts you can use to configure Geode regions.
+This topic describes the various region shortcuts you can use to configure <%=vars.product_name%> regions.
 
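Each shortcut is applied through a region's `refid` attribute; as a sketch (region name illustrative):

``` pre
<region name="exampleRegion" refid="LOCAL"/>
```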
 ## <a id="reference_w2h_3cd_lk" class="no-quick-link"></a>LOCAL
 


[28/51] [abbrv] geode git commit: GEODE-2886 : 1. optimized imports to the correct order and to eliminate the use of the wildcard imports after applying style file located in `geode/etc/intellij-java-modified-google-style.xml'

Posted by kl...@apache.org.
GEODE-2886 : 1. optimized imports to the correct order and to eliminate
the use of the wildcard imports after applying style file located in
`geode/etc/intellij-java-modified-google-style.xml'


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/88abd31a
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/88abd31a
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/88abd31a

Branch: refs/heads/feature/GEODE-1279
Commit: 88abd31aae2ca45989d973935fe10e09f98c5cb9
Parents: 11971d5
Author: Amey Barve <ab...@apache.org>
Authored: Mon Jul 3 11:49:34 2017 +0530
Committer: Amey Barve <ab...@apache.org>
Committed: Thu Aug 17 15:47:30 2017 +0530

----------------------------------------------------------------------
 .../lucene/internal/LuceneServiceImpl.java      | 26 ++++++++++++--------
 .../distributed/WaitUntilFlushedFunction.java   |  5 ++--
 2 files changed, 18 insertions(+), 13 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/88abd31a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java
index 258b8a4..7280d66 100644
--- a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java
+++ b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java
@@ -15,20 +15,18 @@
 
 package org.apache.geode.cache.lucene.internal;
 
-import java.util.*;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
 import java.util.concurrent.TimeUnit;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 import java.util.stream.Collectors;
 
-import org.apache.geode.cache.lucene.LuceneIndexExistsException;
-import org.apache.geode.cache.lucene.internal.distributed.LuceneQueryFunction;
-import org.apache.geode.cache.lucene.internal.management.LuceneServiceMBean;
-import org.apache.geode.cache.lucene.internal.management.ManagementIndexListener;
-import org.apache.geode.cache.lucene.internal.results.LuceneGetPageFunction;
-import org.apache.geode.cache.lucene.internal.results.PageResults;
-import org.apache.geode.internal.cache.InternalCache;
-import org.apache.geode.management.internal.beans.CacheServiceMBeanBase;
 import org.apache.logging.log4j.Logger;
 import org.apache.lucene.analysis.Analyzer;
 import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper;
@@ -41,10 +39,12 @@ import org.apache.geode.cache.execute.Execution;
 import org.apache.geode.cache.execute.FunctionService;
 import org.apache.geode.cache.execute.ResultCollector;
 import org.apache.geode.cache.lucene.LuceneIndex;
+import org.apache.geode.cache.lucene.LuceneIndexExistsException;
 import org.apache.geode.cache.lucene.LuceneQueryFactory;
 import org.apache.geode.cache.lucene.internal.directory.DumpDirectoryFiles;
 import org.apache.geode.cache.lucene.internal.distributed.EntryScore;
 import org.apache.geode.cache.lucene.internal.distributed.LuceneFunctionContext;
+import org.apache.geode.cache.lucene.internal.distributed.LuceneQueryFunction;
 import org.apache.geode.cache.lucene.internal.distributed.TopEntries;
 import org.apache.geode.cache.lucene.internal.distributed.TopEntriesCollector;
 import org.apache.geode.cache.lucene.internal.distributed.TopEntriesCollectorManager;
@@ -52,15 +52,21 @@ import org.apache.geode.cache.lucene.internal.distributed.WaitUntilFlushedFuncti
 import org.apache.geode.cache.lucene.internal.distributed.WaitUntilFlushedFunctionContext;
 import org.apache.geode.cache.lucene.internal.filesystem.ChunkKey;
 import org.apache.geode.cache.lucene.internal.filesystem.File;
+import org.apache.geode.cache.lucene.internal.management.LuceneServiceMBean;
+import org.apache.geode.cache.lucene.internal.management.ManagementIndexListener;
+import org.apache.geode.cache.lucene.internal.results.LuceneGetPageFunction;
+import org.apache.geode.cache.lucene.internal.results.PageResults;
 import org.apache.geode.cache.lucene.internal.xml.LuceneServiceXmlGenerator;
 import org.apache.geode.internal.DSFIDFactory;
 import org.apache.geode.internal.DataSerializableFixedID;
-import org.apache.geode.internal.cache.extension.Extensible;
 import org.apache.geode.internal.cache.CacheService;
+import org.apache.geode.internal.cache.InternalCache;
 import org.apache.geode.internal.cache.RegionListener;
+import org.apache.geode.internal.cache.extension.Extensible;
 import org.apache.geode.internal.cache.xmlcache.XmlGenerator;
 import org.apache.geode.internal.i18n.LocalizedStrings;
 import org.apache.geode.internal.logging.LogService;
+import org.apache.geode.management.internal.beans.CacheServiceMBeanBase;
 
 /**
  * Implementation of LuceneService to create lucene index and query.

http://git-wip-us.apache.org/repos/asf/geode/blob/88abd31a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java
index 6c14fcd..0fecc41 100644
--- a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java
+++ b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java
@@ -18,14 +18,13 @@ package org.apache.geode.cache.lucene.internal.distributed;
 import java.util.concurrent.TimeUnit;
 
 import org.apache.geode.cache.Cache;
-import org.apache.geode.cache.execute.Function;
-import org.apache.geode.cache.lucene.internal.LuceneServiceImpl;
-
 import org.apache.geode.cache.Region;
 import org.apache.geode.cache.asyncqueue.internal.AsyncEventQueueImpl;
+import org.apache.geode.cache.execute.Function;
 import org.apache.geode.cache.execute.FunctionContext;
 import org.apache.geode.cache.execute.RegionFunctionContext;
 import org.apache.geode.cache.execute.ResultSender;
+import org.apache.geode.cache.lucene.internal.LuceneServiceImpl;
 import org.apache.geode.internal.InternalEntity;
 
 /**


[08/51] [abbrv] geode git commit: GEODE-3402: Mark ProtoBuf interface as experimental. This now closes #698

Posted by kl...@apache.org.
GEODE-3402: Mark ProtoBuf interface as experimental. This now closes #698

Signed-off-by: Alexander Murmann <am...@pivotal.io>


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/a6000684
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/a6000684
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/a6000684

Branch: refs/heads/feature/GEODE-1279
Commit: a600068477bb07abb04206ab9540dfef09e89506
Parents: 87bee08
Author: Bruce Schuchardt <bs...@pivotal.io>
Authored: Tue Aug 8 16:29:29 2017 -0700
Committer: Udo Kohlmeyer <uk...@pivotal.io>
Committed: Mon Aug 14 17:06:13 2017 -0700

----------------------------------------------------------------------
 .../InvalidProtocolMessageException.java         |  3 +++
 .../protocol/operations/OperationHandler.java    |  2 ++
 .../protobuf/EncodingTypeTranslator.java         |  2 ++
 .../apache/geode/protocol/protobuf/Failure.java  |  3 +++
 .../protocol/protobuf/OperationContext.java      |  6 ++++--
 .../protocol/protobuf/ProtobufOpsProcessor.java  |  2 ++
 .../protobuf/ProtobufSerializationService.java   |  2 ++
 .../protobuf/ProtobufStreamProcessor.java        |  2 ++
 .../apache/geode/protocol/protobuf/Result.java   |  3 +++
 .../apache/geode/protocol/protobuf/Success.java  |  3 +++
 .../GetAllRequestOperationHandler.java           | 10 ++++++----
 .../GetAvailableServersOperationHandler.java     | 19 +++++++++++--------
 .../GetRegionNamesRequestOperationHandler.java   |  2 ++
 .../GetRegionRequestOperationHandler.java        |  2 ++
 .../operations/GetRequestOperationHandler.java   |  2 ++
 .../PutAllRequestOperationHandler.java           | 13 ++++++++-----
 .../operations/PutRequestOperationHandler.java   |  2 ++
 .../RemoveRequestOperationHandler.java           |  7 +++++--
 .../registry/OperationContextRegistry.java       |  2 ++
 .../serializer/ProtobufProtocolSerializer.java   |  2 ++
 .../utilities/ProtobufPrimitiveTypes.java        |  2 ++
 .../utilities/ProtobufRequestUtilities.java      |  4 ++++
 .../utilities/ProtobufResponseUtilities.java     |  9 ++++++---
 .../protobuf/utilities/ProtobufUtilities.java    |  2 ++
 .../exception/UnknownProtobufPrimitiveType.java  |  3 +++
 .../protocol/serializer/ProtocolSerializer.java  |  2 ++
 .../serialization/SerializationService.java      |  2 ++
 .../geode/serialization/SerializationType.java   |  2 ++
 .../apache/geode/serialization/TypeCodec.java    |  3 +++
 .../geode/serialization/codec/BinaryCodec.java   |  2 ++
 .../geode/serialization/codec/BooleanCodec.java  |  6 ++++--
 .../geode/serialization/codec/ByteCodec.java     |  2 ++
 .../geode/serialization/codec/DoubleCodec.java   |  2 ++
 .../geode/serialization/codec/FloatCodec.java    |  2 ++
 .../geode/serialization/codec/IntCodec.java      |  2 ++
 .../geode/serialization/codec/JSONCodec.java     |  2 ++
 .../geode/serialization/codec/LongCodec.java     |  2 ++
 .../geode/serialization/codec/ShortCodec.java    |  2 ++
 .../geode/serialization/codec/StringCodec.java   |  2 ++
 .../UnsupportedEncodingTypeException.java        |  3 +++
 .../registry/SerializationCodecRegistry.java     |  2 ++
 .../CodecAlreadyRegisteredForTypeException.java  |  3 +++
 .../CodecNotRegisteredForTypeException.java      |  3 +++
 geode-protobuf/src/main/proto/basicTypes.proto   |  6 ++++++
 .../src/main/proto/clientProtocol.proto          |  5 +++++
 geode-protobuf/src/main/proto/region_API.proto   |  5 +++++
 geode-protobuf/src/main/proto/server_API.proto   |  5 +++++
 47 files changed, 148 insertions(+), 26 deletions(-)
----------------------------------------------------------------------
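The diffs that follow all apply the same one-line change: each public type in the geode-protobuf module gains an `@Experimental` marker annotation to signal that the new protobuf-based client protocol API may change incompatibly in a future release. As a hedged sketch of that pattern (the real annotation is `org.apache.geode.annotations.Experimental`; the `Experimental` declaration and the `Demo`/`ProtobufOpsProcessorSketch` names below are stand-ins invented for illustration, not Geode source):

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Stand-in for org.apache.geode.annotations.Experimental: a marker
// annotation that carries no behavior, only a visible API-stability signal.
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD, ElementType.PACKAGE})
@interface Experimental {
  String value() default "";
}

// The commit's pattern: annotate the public type, leave its body unchanged.
@Experimental
class ProtobufOpsProcessorSketch {
  // request-dispatch logic would live here, as in ProtobufOpsProcessor
}

public class Demo {
  public static void main(String[] args) {
    // Because the sketch uses RUNTIME retention, tooling (or a test) can
    // check the marker reflectively.
    boolean marked = ProtobufOpsProcessorSketch.class.isAnnotationPresent(Experimental.class);
    System.out.println(marked);
  }
}
```

A marker annotation like this changes no runtime behavior; its value is that javadoc, IDEs, and reviewers can see at a glance which types are outside the project's compatibility guarantees.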


http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/exception/InvalidProtocolMessageException.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/exception/InvalidProtocolMessageException.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/exception/InvalidProtocolMessageException.java
index 8903b8a..29f5a01 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/exception/InvalidProtocolMessageException.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/exception/InvalidProtocolMessageException.java
@@ -14,9 +14,12 @@
  */
 package org.apache.geode.protocol.exception;
 
+import org.apache.geode.annotations.Experimental;
+
 /**
  * Indicates that a message didn't properly follow its protocol specification.
  */
+@Experimental
 public class InvalidProtocolMessageException extends Exception {
   public InvalidProtocolMessageException(String message) {
     super(message);

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/operations/OperationHandler.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/operations/OperationHandler.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/operations/OperationHandler.java
index 92a844f..aa6d79e 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/operations/OperationHandler.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/operations/OperationHandler.java
@@ -14,6 +14,7 @@
  */
 package org.apache.geode.protocol.operations;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.cache.Cache;
 import org.apache.geode.protocol.protobuf.ProtobufOpsProcessor;
 import org.apache.geode.protocol.protobuf.Result;
@@ -25,6 +26,7 @@ import org.apache.geode.serialization.SerializationService;
  *
  * See {@link ProtobufOpsProcessor}
  */
+@Experimental
 public interface OperationHandler<Req, Resp> {
   /**
    * Decode the message, deserialize contained values using the serialization service, do the work

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/EncodingTypeTranslator.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/EncodingTypeTranslator.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/EncodingTypeTranslator.java
index ec12661..2a3bf54 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/EncodingTypeTranslator.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/EncodingTypeTranslator.java
@@ -16,6 +16,7 @@ package org.apache.geode.protocol.protobuf;
 
 import java.util.HashMap;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.pdx.JSONFormatter;
 import org.apache.geode.pdx.PdxInstance;
 import org.apache.geode.serialization.SerializationType;
@@ -24,6 +25,7 @@ import org.apache.geode.serialization.exception.UnsupportedEncodingTypeException
 /**
  * This class maps protobuf specific encoding types and the corresponding serialization types.
  */
+@Experimental
 public abstract class EncodingTypeTranslator {
   static final HashMap<Class, BasicTypes.EncodingType> typeToEncodingMap = intializeTypeMap();
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/Failure.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/Failure.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/Failure.java
index fcbbb50..f8de911 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/Failure.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/Failure.java
@@ -16,6 +16,9 @@ package org.apache.geode.protocol.protobuf;
 
 import java.util.function.Function;
 
+import org.apache.geode.annotations.Experimental;
+
+@Experimental
 public class Failure<SuccessType> implements Result<SuccessType> {
   private final BasicTypes.ErrorResponse errorResponse;
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/OperationContext.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/OperationContext.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/OperationContext.java
index 72e4d75..5191007 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/OperationContext.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/OperationContext.java
@@ -15,10 +15,12 @@
 
 package org.apache.geode.protocol.protobuf;
 
-import org.apache.geode.protocol.operations.OperationHandler;
-
 import java.util.function.Function;
 
+import org.apache.geode.annotations.Experimental;
+import org.apache.geode.protocol.operations.OperationHandler;
+
+@Experimental
 public class OperationContext<OperationRequest, OperationResponse> {
   private final OperationHandler<OperationRequest, OperationResponse> operationHandler;
   private final Function<ClientProtocol.Request, OperationRequest> fromRequest;

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufOpsProcessor.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufOpsProcessor.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufOpsProcessor.java
index c11b534..7d75b4a 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufOpsProcessor.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufOpsProcessor.java
@@ -14,6 +14,7 @@
  */
 package org.apache.geode.protocol.protobuf;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.cache.Cache;
 import org.apache.geode.protocol.protobuf.registry.OperationContextRegistry;
 import org.apache.geode.serialization.SerializationService;
@@ -22,6 +23,7 @@ import org.apache.geode.serialization.SerializationService;
  * This handles protobuf requests by determining the operation type of the request and dispatching
  * it to the appropriate handler.
  */
+@Experimental
 public class ProtobufOpsProcessor {
 
   private final OperationContextRegistry operationContextRegistry;

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufSerializationService.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufSerializationService.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufSerializationService.java
index 38bf56a..8246f1c 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufSerializationService.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufSerializationService.java
@@ -14,6 +14,7 @@
  */
 package org.apache.geode.protocol.protobuf;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.serialization.SerializationService;
 import org.apache.geode.serialization.SerializationType;
 import org.apache.geode.serialization.TypeCodec;
@@ -22,6 +23,7 @@ import org.apache.geode.serialization.registry.SerializationCodecRegistry;
 import org.apache.geode.serialization.registry.exception.CodecAlreadyRegisteredForTypeException;
 import org.apache.geode.serialization.registry.exception.CodecNotRegisteredForTypeException;
 
+@Experimental
 public class ProtobufSerializationService implements SerializationService<BasicTypes.EncodingType> {
   private SerializationCodecRegistry serializationCodecRegistry = new SerializationCodecRegistry();
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufStreamProcessor.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufStreamProcessor.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufStreamProcessor.java
index 118ccc4..648ab3c 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufStreamProcessor.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufStreamProcessor.java
@@ -19,6 +19,7 @@ import java.io.IOException;
 import java.io.InputStream;
 import java.io.OutputStream;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.cache.Cache;
 import org.apache.geode.internal.cache.InternalCache;
 import org.apache.geode.internal.cache.tier.sockets.ClientProtocolMessageHandler;
@@ -33,6 +34,7 @@ import org.apache.geode.serialization.registry.exception.CodecAlreadyRegisteredF
  * messages, hands the requests to an appropriate handler, wraps the response in a protobuf message,
  * and then pushes it to the output stream.
  */
+@Experimental
 public class ProtobufStreamProcessor implements ClientProtocolMessageHandler {
   private final ProtobufProtocolSerializer protobufProtocolSerializer;
   private final ProtobufOpsProcessor protobufOpsProcessor;

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/Result.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/Result.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/Result.java
index 14168bc..5f62997 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/Result.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/Result.java
@@ -16,6 +16,9 @@ package org.apache.geode.protocol.protobuf;
 
 import java.util.function.Function;
 
+import org.apache.geode.annotations.Experimental;
+
+@Experimental
 public interface Result<SuccessType> {
   <T> T map(Function<SuccessType, T> successFunction,
       Function<BasicTypes.ErrorResponse, T> errorFunction);

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/Success.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/Success.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/Success.java
index 2c409dd..63f8b3f 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/Success.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/Success.java
@@ -16,6 +16,9 @@ package org.apache.geode.protocol.protobuf;
 
 import java.util.function.Function;
 
+import org.apache.geode.annotations.Experimental;
+
+@Experimental
 public class Success<SuccessType> implements Result<SuccessType> {
   private final SuccessType successResponse;
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAllRequestOperationHandler.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAllRequestOperationHandler.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAllRequestOperationHandler.java
index 7c8685f..7f2ffe4 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAllRequestOperationHandler.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAllRequestOperationHandler.java
@@ -14,6 +14,11 @@
  */
 package org.apache.geode.protocol.protobuf.operations;
 
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.cache.Cache;
 import org.apache.geode.cache.Region;
 import org.apache.geode.protocol.operations.OperationHandler;
@@ -28,10 +33,7 @@ import org.apache.geode.serialization.SerializationService;
 import org.apache.geode.serialization.exception.UnsupportedEncodingTypeException;
 import org.apache.geode.serialization.registry.exception.CodecNotRegisteredForTypeException;
 
-import java.util.HashSet;
-import java.util.Map;
-import java.util.Set;
-
+@Experimental
 public class GetAllRequestOperationHandler
     implements OperationHandler<RegionAPI.GetAllRequest, RegionAPI.GetAllResponse> {
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAvailableServersOperationHandler.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAvailableServersOperationHandler.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAvailableServersOperationHandler.java
index 39c837a..e58c8cd 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAvailableServersOperationHandler.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAvailableServersOperationHandler.java
@@ -14,7 +14,17 @@
  */
 package org.apache.geode.protocol.protobuf.operations;
 
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.Properties;
+import java.util.StringTokenizer;
+import java.util.stream.Collectors;
+
 import org.apache.commons.lang.StringUtils;
+
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.cache.Cache;
 import org.apache.geode.cache.client.internal.locator.GetAllServersRequest;
 import org.apache.geode.cache.client.internal.locator.GetAllServersResponse;
@@ -32,14 +42,7 @@ import org.apache.geode.protocol.protobuf.ServerAPI;
 import org.apache.geode.protocol.protobuf.Success;
 import org.apache.geode.serialization.SerializationService;
 
-import java.io.IOException;
-import java.net.InetSocketAddress;
-import java.util.Collection;
-import java.util.HashSet;
-import java.util.Properties;
-import java.util.StringTokenizer;
-import java.util.stream.Collectors;
-
+@Experimental
 public class GetAvailableServersOperationHandler implements
     OperationHandler<ServerAPI.GetAvailableServersRequest, ServerAPI.GetAvailableServersResponse> {
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRegionNamesRequestOperationHandler.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRegionNamesRequestOperationHandler.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRegionNamesRequestOperationHandler.java
index 50e121e..e5d216a 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRegionNamesRequestOperationHandler.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRegionNamesRequestOperationHandler.java
@@ -16,6 +16,7 @@ package org.apache.geode.protocol.protobuf.operations;
 
 import java.util.Set;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.cache.Cache;
 import org.apache.geode.cache.Region;
 import org.apache.geode.protocol.operations.OperationHandler;
@@ -25,6 +26,7 @@ import org.apache.geode.protocol.protobuf.Success;
 import org.apache.geode.protocol.protobuf.utilities.ProtobufResponseUtilities;
 import org.apache.geode.serialization.SerializationService;
 
+@Experimental
 public class GetRegionNamesRequestOperationHandler
     implements OperationHandler<RegionAPI.GetRegionNamesRequest, RegionAPI.GetRegionNamesResponse> {
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRegionRequestOperationHandler.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRegionRequestOperationHandler.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRegionRequestOperationHandler.java
index 5ad0cc1..3814bf6 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRegionRequestOperationHandler.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRegionRequestOperationHandler.java
@@ -14,6 +14,7 @@
  */
 package org.apache.geode.protocol.protobuf.operations;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.cache.Cache;
 import org.apache.geode.cache.Region;
 import org.apache.geode.protocol.operations.OperationHandler;
@@ -26,6 +27,7 @@ import org.apache.geode.protocol.protobuf.Success;
 import org.apache.geode.protocol.protobuf.utilities.ProtobufUtilities;
 import org.apache.geode.serialization.SerializationService;
 
+@Experimental
 public class GetRegionRequestOperationHandler
     implements OperationHandler<RegionAPI.GetRegionRequest, RegionAPI.GetRegionResponse> {
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRequestOperationHandler.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRequestOperationHandler.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRequestOperationHandler.java
index 861e518..1086bca 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRequestOperationHandler.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRequestOperationHandler.java
@@ -14,6 +14,7 @@
  */
 package org.apache.geode.protocol.protobuf.operations;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.cache.Cache;
 import org.apache.geode.cache.Region;
 import org.apache.geode.protocol.operations.OperationHandler;
@@ -28,6 +29,7 @@ import org.apache.geode.serialization.SerializationService;
 import org.apache.geode.serialization.exception.UnsupportedEncodingTypeException;
 import org.apache.geode.serialization.registry.exception.CodecNotRegisteredForTypeException;
 
+@Experimental
 public class GetRequestOperationHandler
     implements OperationHandler<RegionAPI.GetRequest, RegionAPI.GetResponse> {
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutAllRequestOperationHandler.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutAllRequestOperationHandler.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutAllRequestOperationHandler.java
index 49fd811..33e3ade 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutAllRequestOperationHandler.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutAllRequestOperationHandler.java
@@ -14,6 +14,13 @@
  */
 package org.apache.geode.protocol.protobuf.operations;
 
+import java.util.Objects;
+import java.util.stream.Collectors;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.cache.Cache;
 import org.apache.geode.cache.Region;
 import org.apache.geode.protocol.operations.OperationHandler;
@@ -28,12 +35,8 @@ import org.apache.geode.protocol.protobuf.utilities.ProtobufUtilities;
 import org.apache.geode.serialization.SerializationService;
 import org.apache.geode.serialization.exception.UnsupportedEncodingTypeException;
 import org.apache.geode.serialization.registry.exception.CodecNotRegisteredForTypeException;
-import org.apache.logging.log4j.LogManager;
-import org.apache.logging.log4j.Logger;
-
-import java.util.Objects;
-import java.util.stream.Collectors;
 
+@Experimental
 public class PutAllRequestOperationHandler
     implements OperationHandler<RegionAPI.PutAllRequest, RegionAPI.PutAllResponse> {
   private static Logger logger = LogManager.getLogger();

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutRequestOperationHandler.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutRequestOperationHandler.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutRequestOperationHandler.java
index 9c51c87..637d8f1 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutRequestOperationHandler.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutRequestOperationHandler.java
@@ -14,6 +14,7 @@
  */
 package org.apache.geode.protocol.protobuf.operations;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.cache.Cache;
 import org.apache.geode.cache.Region;
 import org.apache.geode.protocol.operations.OperationHandler;
@@ -28,6 +29,7 @@ import org.apache.geode.serialization.SerializationService;
 import org.apache.geode.serialization.exception.UnsupportedEncodingTypeException;
 import org.apache.geode.serialization.registry.exception.CodecNotRegisteredForTypeException;
 
+@Experimental
 public class PutRequestOperationHandler
     implements OperationHandler<RegionAPI.PutRequest, RegionAPI.PutResponse> {
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/RemoveRequestOperationHandler.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/RemoveRequestOperationHandler.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/RemoveRequestOperationHandler.java
index 296f8b2..dbc58bf 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/RemoveRequestOperationHandler.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/RemoveRequestOperationHandler.java
@@ -14,6 +14,10 @@
  */
 package org.apache.geode.protocol.protobuf.operations;
 
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.cache.Cache;
 import org.apache.geode.cache.Region;
 import org.apache.geode.protocol.operations.OperationHandler;
@@ -28,9 +32,8 @@ import org.apache.geode.protocol.protobuf.utilities.ProtobufUtilities;
 import org.apache.geode.serialization.SerializationService;
 import org.apache.geode.serialization.exception.UnsupportedEncodingTypeException;
 import org.apache.geode.serialization.registry.exception.CodecNotRegisteredForTypeException;
-import org.apache.logging.log4j.LogManager;
-import org.apache.logging.log4j.Logger;
 
+@Experimental
 public class RemoveRequestOperationHandler
     implements OperationHandler<RegionAPI.RemoveRequest, RegionAPI.RemoveResponse> {
   private static Logger logger = LogManager.getLogger();

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/registry/OperationContextRegistry.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/registry/OperationContextRegistry.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/registry/OperationContextRegistry.java
index b160adc..dfe975c 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/registry/OperationContextRegistry.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/registry/OperationContextRegistry.java
@@ -18,6 +18,7 @@ package org.apache.geode.protocol.protobuf.registry;
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.protocol.protobuf.ClientProtocol;
 import org.apache.geode.protocol.protobuf.ClientProtocol.Request.RequestAPICase;
 import org.apache.geode.protocol.protobuf.OperationContext;
@@ -30,6 +31,7 @@ import org.apache.geode.protocol.protobuf.operations.PutAllRequestOperationHandl
 import org.apache.geode.protocol.protobuf.operations.PutRequestOperationHandler;
 import org.apache.geode.protocol.protobuf.operations.RemoveRequestOperationHandler;
 
+@Experimental
 public class OperationContextRegistry {
   private Map<RequestAPICase, OperationContext> operationContexts = new ConcurrentHashMap<>();
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/serializer/ProtobufProtocolSerializer.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/serializer/ProtobufProtocolSerializer.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/serializer/ProtobufProtocolSerializer.java
index 1c6e847..d82f9b9 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/serializer/ProtobufProtocolSerializer.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/serializer/ProtobufProtocolSerializer.java
@@ -18,10 +18,12 @@ import java.io.IOException;
 import java.io.InputStream;
 import java.io.OutputStream;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.protocol.exception.InvalidProtocolMessageException;
 import org.apache.geode.protocol.protobuf.ClientProtocol;
 import org.apache.geode.protocol.serializer.ProtocolSerializer;
 
+@Experimental
 public class ProtobufProtocolSerializer implements ProtocolSerializer<ClientProtocol.Message> {
   @Override
   public ClientProtocol.Message deserialize(InputStream inputStream)

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufPrimitiveTypes.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufPrimitiveTypes.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufPrimitiveTypes.java
index b26de20..90ce308 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufPrimitiveTypes.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufPrimitiveTypes.java
@@ -14,8 +14,10 @@
  */
 package org.apache.geode.protocol.protobuf.utilities;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.protocol.protobuf.utilities.exception.UnknownProtobufPrimitiveType;
 
+@Experimental
 public enum ProtobufPrimitiveTypes {
 
   STRING(String.class),

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufRequestUtilities.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufRequestUtilities.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufRequestUtilities.java
index e184592..520daef 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufRequestUtilities.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufRequestUtilities.java
@@ -14,6 +14,9 @@
  */
 package org.apache.geode.protocol.protobuf.utilities;
 
+import java.util.Set;
+
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.protocol.protobuf.BasicTypes;
 import org.apache.geode.protocol.protobuf.ClientProtocol;
 import org.apache.geode.protocol.protobuf.RegionAPI;
@@ -27,6 +30,7 @@ import java.util.Set;
  * Response building helpers can be found in {@link ProtobufResponseUtilities}, while more general
  * purpose helpers can be found in {@link ProtobufUtilities}
  */
+@Experimental
 public abstract class ProtobufRequestUtilities {
   /**
    * Creates a request object containing a RegionAPI.GetRequest

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufResponseUtilities.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufResponseUtilities.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufResponseUtilities.java
index bb3ef98..7bc766e 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufResponseUtilities.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufResponseUtilities.java
@@ -14,13 +14,15 @@
  */
 package org.apache.geode.protocol.protobuf.utilities;
 
+import java.util.Set;
+
+import org.apache.logging.log4j.Logger;
+
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.cache.Region;
 import org.apache.geode.protocol.protobuf.BasicTypes;
 import org.apache.geode.protocol.protobuf.ProtocolErrorCode;
 import org.apache.geode.protocol.protobuf.RegionAPI;
-import org.apache.logging.log4j.Logger;
-
-import java.util.Set;
 
 /**
  * This class contains helper functions for generating ClientProtocol.Response objects.
@@ -28,6 +30,7 @@ import java.util.Set;
  * Request building helpers can be found in {@link ProtobufRequestUtilities}, while more general
  * purpose helpers can be found in {@link ProtobufUtilities}
  */
+@Experimental
 public abstract class ProtobufResponseUtilities {
 
   /**

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufUtilities.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufUtilities.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufUtilities.java
index fd35803..8310632 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufUtilities.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufUtilities.java
@@ -16,6 +16,7 @@ package org.apache.geode.protocol.protobuf.utilities;
 
 import com.google.protobuf.ByteString;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.cache.Region;
 import org.apache.geode.cache.RegionAttributes;
 import org.apache.geode.protocol.protobuf.BasicTypes;
@@ -37,6 +38,7 @@ import org.apache.geode.serialization.registry.exception.CodecNotRegisteredForTy
  * {@link ProtobufResponseUtilities} Helper functions specific to creating ClientProtocol.Requests
  * can be found at {@link ProtobufRequestUtilities}
  */
+@Experimental
 public abstract class ProtobufUtilities {
   /**
    * Creates a object containing the type and value encoding of a piece of data

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/exception/UnknownProtobufPrimitiveType.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/exception/UnknownProtobufPrimitiveType.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/exception/UnknownProtobufPrimitiveType.java
index 675a2f0..ca1dc72 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/exception/UnknownProtobufPrimitiveType.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/exception/UnknownProtobufPrimitiveType.java
@@ -14,6 +14,9 @@
  */
 package org.apache.geode.protocol.protobuf.utilities.exception;
 
+import org.apache.geode.annotations.Experimental;
+
+@Experimental
 public class UnknownProtobufPrimitiveType extends Exception {
   public UnknownProtobufPrimitiveType(String message) {
     super(message);

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/protocol/serializer/ProtocolSerializer.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/serializer/ProtocolSerializer.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/serializer/ProtocolSerializer.java
index 0a48e1b..36fc8ec 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/serializer/ProtocolSerializer.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/serializer/ProtocolSerializer.java
@@ -18,6 +18,7 @@ import java.io.IOException;
 import java.io.InputStream;
 import java.io.OutputStream;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.protocol.exception.InvalidProtocolMessageException;
 
 /**
@@ -25,6 +26,7 @@ import org.apache.geode.protocol.exception.InvalidProtocolMessageException;
  * 
  * @param <T> The message type of the protocol.
  */
+@Experimental
 public interface ProtocolSerializer<T> {
   T deserialize(InputStream inputStream) throws InvalidProtocolMessageException;
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/serialization/SerializationService.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/serialization/SerializationService.java b/geode-protobuf/src/main/java/org/apache/geode/serialization/SerializationService.java
index cdeb170..3373d44 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/serialization/SerializationService.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/serialization/SerializationService.java
@@ -14,6 +14,7 @@
  */
 package org.apache.geode.serialization;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.serialization.exception.UnsupportedEncodingTypeException;
 import org.apache.geode.serialization.registry.exception.CodecNotRegisteredForTypeException;
 
@@ -23,6 +24,7 @@ import org.apache.geode.serialization.registry.exception.CodecNotRegisteredForTy
  *
  * @param <T> the enumeration of types known to a particular protocol
  */
+@Experimental
 public interface SerializationService<T> {
   Object decode(T encodingTypeValue, byte[] value)
       throws UnsupportedEncodingTypeException, CodecNotRegisteredForTypeException;

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/serialization/SerializationType.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/serialization/SerializationType.java b/geode-protobuf/src/main/java/org/apache/geode/serialization/SerializationType.java
index 91466a1..10a3e51 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/serialization/SerializationType.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/serialization/SerializationType.java
@@ -14,11 +14,13 @@
  */
 package org.apache.geode.serialization;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.pdx.PdxInstance;
 
 /**
  * Enumerates the serialization types currently available to wire protocols.
  */
+@Experimental
 public enum SerializationType {
   STRING(String.class),
   BINARY(byte[].class),

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/serialization/TypeCodec.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/serialization/TypeCodec.java b/geode-protobuf/src/main/java/org/apache/geode/serialization/TypeCodec.java
index 8506a53..f9edc09 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/serialization/TypeCodec.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/serialization/TypeCodec.java
@@ -14,6 +14,8 @@
  */
 package org.apache.geode.serialization;
 
+import org.apache.geode.annotations.Experimental;
+
 /**
  * This interface converts a particular type to and from its binary representation.
  *
@@ -21,6 +23,7 @@ package org.apache.geode.serialization;
  *
  * @param <T> the type this codec knows how to convert
  */
+@Experimental
 public interface TypeCodec<T> {
   T decode(byte[] incoming);
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/BinaryCodec.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/BinaryCodec.java b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/BinaryCodec.java
index c1bee43..cca88dd 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/BinaryCodec.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/BinaryCodec.java
@@ -14,9 +14,11 @@
  */
 package org.apache.geode.serialization.codec;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.serialization.SerializationType;
 import org.apache.geode.serialization.TypeCodec;
 
+@Experimental
 public class BinaryCodec implements TypeCodec<byte[]> {
   @Override
   public byte[] decode(byte[] incoming) {

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/BooleanCodec.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/BooleanCodec.java b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/BooleanCodec.java
index e3e234d..ca0443c 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/BooleanCodec.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/BooleanCodec.java
@@ -14,11 +14,13 @@
  */
 package org.apache.geode.serialization.codec;
 
+import java.nio.ByteBuffer;
+
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.serialization.SerializationType;
 import org.apache.geode.serialization.TypeCodec;
 
-import java.nio.ByteBuffer;
-
+@Experimental
 public class BooleanCodec implements TypeCodec<Boolean> {
   @Override
   public Boolean decode(byte[] incoming) {

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/ByteCodec.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/ByteCodec.java b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/ByteCodec.java
index 8e71149..847d210 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/ByteCodec.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/ByteCodec.java
@@ -16,9 +16,11 @@ package org.apache.geode.serialization.codec;
 
 import java.nio.ByteBuffer;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.serialization.SerializationType;
 import org.apache.geode.serialization.TypeCodec;
 
+@Experimental
 public class ByteCodec implements TypeCodec<Byte> {
   @Override
   public Byte decode(byte[] incoming) {

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/DoubleCodec.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/DoubleCodec.java b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/DoubleCodec.java
index ab09537..8f01639 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/DoubleCodec.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/DoubleCodec.java
@@ -16,9 +16,11 @@ package org.apache.geode.serialization.codec;
 
 import java.nio.ByteBuffer;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.serialization.SerializationType;
 import org.apache.geode.serialization.TypeCodec;
 
+@Experimental
 public class DoubleCodec implements TypeCodec<Double> {
   @Override
   public Double decode(byte[] incoming) {

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/FloatCodec.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/FloatCodec.java b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/FloatCodec.java
index 5ff79ce..75c1e0d 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/FloatCodec.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/FloatCodec.java
@@ -16,9 +16,11 @@ package org.apache.geode.serialization.codec;
 
 import java.nio.ByteBuffer;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.serialization.SerializationType;
 import org.apache.geode.serialization.TypeCodec;
 
+@Experimental
 public class FloatCodec implements TypeCodec<Float> {
   @Override
   public Float decode(byte[] incoming) {

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/IntCodec.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/IntCodec.java b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/IntCodec.java
index ae4e4da..4366c84 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/IntCodec.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/IntCodec.java
@@ -16,9 +16,11 @@ package org.apache.geode.serialization.codec;
 
 import java.nio.ByteBuffer;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.serialization.SerializationType;
 import org.apache.geode.serialization.TypeCodec;
 
+@Experimental
 public class IntCodec implements TypeCodec<Integer> {
   @Override
   public Integer decode(byte[] incoming) {

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/JSONCodec.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/JSONCodec.java b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/JSONCodec.java
index eb1ebc3..b481375 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/JSONCodec.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/JSONCodec.java
@@ -14,11 +14,13 @@
  */
 package org.apache.geode.serialization.codec;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.pdx.JSONFormatter;
 import org.apache.geode.pdx.PdxInstance;
 import org.apache.geode.serialization.SerializationType;
 import org.apache.geode.serialization.TypeCodec;
 
+@Experimental
 public class JSONCodec implements TypeCodec<PdxInstance> {
   @Override
   public PdxInstance decode(byte[] incoming) {

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/LongCodec.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/LongCodec.java b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/LongCodec.java
index 7691db2..b6b8053 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/LongCodec.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/LongCodec.java
@@ -16,9 +16,11 @@ package org.apache.geode.serialization.codec;
 
 import java.nio.ByteBuffer;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.serialization.SerializationType;
 import org.apache.geode.serialization.TypeCodec;
 
+@Experimental
 public class LongCodec implements TypeCodec<Long> {
   @Override
   public Long decode(byte[] incoming) {

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/ShortCodec.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/ShortCodec.java b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/ShortCodec.java
index e927b11..df79fb0 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/ShortCodec.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/ShortCodec.java
@@ -16,9 +16,11 @@ package org.apache.geode.serialization.codec;
 
 import java.nio.ByteBuffer;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.serialization.SerializationType;
 import org.apache.geode.serialization.TypeCodec;
 
+@Experimental
 public class ShortCodec implements TypeCodec<Short> {
   @Override
   public Short decode(byte[] incoming) {

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/StringCodec.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/StringCodec.java b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/StringCodec.java
index b137ad5..027f4ca 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/StringCodec.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/serialization/codec/StringCodec.java
@@ -16,9 +16,11 @@ package org.apache.geode.serialization.codec;
 
 import java.nio.charset.Charset;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.serialization.SerializationType;
 import org.apache.geode.serialization.TypeCodec;
 
+@Experimental
 public class StringCodec implements TypeCodec<String> {
   private static final Charset UTF8 = Charset.forName("UTF-8");
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/serialization/exception/UnsupportedEncodingTypeException.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/serialization/exception/UnsupportedEncodingTypeException.java b/geode-protobuf/src/main/java/org/apache/geode/serialization/exception/UnsupportedEncodingTypeException.java
index 1056002..4a18619 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/serialization/exception/UnsupportedEncodingTypeException.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/serialization/exception/UnsupportedEncodingTypeException.java
@@ -14,9 +14,12 @@
  */
 package org.apache.geode.serialization.exception;
 
+import org.apache.geode.annotations.Experimental;
+
 /**
  * This indicates an encoding type that we don't know how to handle.
  */
+@Experimental
 public class UnsupportedEncodingTypeException extends Exception {
   public UnsupportedEncodingTypeException(String message) {
     super(message);

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/serialization/registry/SerializationCodecRegistry.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/serialization/registry/SerializationCodecRegistry.java b/geode-protobuf/src/main/java/org/apache/geode/serialization/registry/SerializationCodecRegistry.java
index ec93a72..387d33f 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/serialization/registry/SerializationCodecRegistry.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/serialization/registry/SerializationCodecRegistry.java
@@ -17,11 +17,13 @@ package org.apache.geode.serialization.registry;
 import java.util.HashMap;
 import java.util.ServiceLoader;
 
+import org.apache.geode.annotations.Experimental;
 import org.apache.geode.serialization.SerializationType;
 import org.apache.geode.serialization.TypeCodec;
 import org.apache.geode.serialization.registry.exception.CodecAlreadyRegisteredForTypeException;
 import org.apache.geode.serialization.registry.exception.CodecNotRegisteredForTypeException;
 
+@Experimental
 public class SerializationCodecRegistry {
   private HashMap<SerializationType, TypeCodec> codecRegistry = new HashMap<>();
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/serialization/registry/exception/CodecAlreadyRegisteredForTypeException.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/serialization/registry/exception/CodecAlreadyRegisteredForTypeException.java b/geode-protobuf/src/main/java/org/apache/geode/serialization/registry/exception/CodecAlreadyRegisteredForTypeException.java
index 66ae850..dcab478 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/serialization/registry/exception/CodecAlreadyRegisteredForTypeException.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/serialization/registry/exception/CodecAlreadyRegisteredForTypeException.java
@@ -14,10 +14,13 @@
  */
 package org.apache.geode.serialization.registry.exception;
 
+import org.apache.geode.annotations.Experimental;
+
 /**
  * This indicates that we're attempting to register a codec for a type which we already have a
  * handler for.
  */
+@Experimental
 public class CodecAlreadyRegisteredForTypeException extends Exception {
   public CodecAlreadyRegisteredForTypeException(String message) {
     super(message);

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/java/org/apache/geode/serialization/registry/exception/CodecNotRegisteredForTypeException.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/serialization/registry/exception/CodecNotRegisteredForTypeException.java b/geode-protobuf/src/main/java/org/apache/geode/serialization/registry/exception/CodecNotRegisteredForTypeException.java
index 58cb691..18f255b 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/serialization/registry/exception/CodecNotRegisteredForTypeException.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/serialization/registry/exception/CodecNotRegisteredForTypeException.java
@@ -14,9 +14,12 @@
  */
 package org.apache.geode.serialization.registry.exception;
 
+import org.apache.geode.annotations.Experimental;
+
 /**
  * This indicates we're attempting to handle a type for which we don't have a registered codec.
  */
+@Experimental
 public class CodecNotRegisteredForTypeException extends Exception {
   public CodecNotRegisteredForTypeException(String message) {
     super(message);

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/proto/basicTypes.proto
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/proto/basicTypes.proto b/geode-protobuf/src/main/proto/basicTypes.proto
index 330b53b..684e4c8 100644
--- a/geode-protobuf/src/main/proto/basicTypes.proto
+++ b/geode-protobuf/src/main/proto/basicTypes.proto
@@ -13,6 +13,12 @@
  * the License.
  */
 
+/*
+* These ProtoBuf files are part of an experimental interface.
+* Use this interface at your own risk.
+*/
+
+
 syntax = "proto3";
 package org.apache.geode.protocol.protobuf;
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/proto/clientProtocol.proto
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/proto/clientProtocol.proto b/geode-protobuf/src/main/proto/clientProtocol.proto
index c64d4de..8203c43 100644
--- a/geode-protobuf/src/main/proto/clientProtocol.proto
+++ b/geode-protobuf/src/main/proto/clientProtocol.proto
@@ -13,6 +13,11 @@
  * the License.
  */
 
+/*
+* These ProtoBuf files are part of an experimental interface.
+* Use this interface at your own risk.
+*/
+
 syntax = "proto3";
 package org.apache.geode.protocol.protobuf;
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/proto/region_API.proto
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/proto/region_API.proto b/geode-protobuf/src/main/proto/region_API.proto
index 2a93a7d..40bf882 100644
--- a/geode-protobuf/src/main/proto/region_API.proto
+++ b/geode-protobuf/src/main/proto/region_API.proto
@@ -13,6 +13,11 @@
  * the License.
  */
 
+/*
+* These ProtoBuf files are part of an experimental interface.
+* Use this interface at your own risk.
+*/
+
 syntax = "proto3";
 package org.apache.geode.protocol.protobuf;
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a6000684/geode-protobuf/src/main/proto/server_API.proto
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/proto/server_API.proto b/geode-protobuf/src/main/proto/server_API.proto
index 81622cc..201db8f 100644
--- a/geode-protobuf/src/main/proto/server_API.proto
+++ b/geode-protobuf/src/main/proto/server_API.proto
@@ -13,6 +13,11 @@
  * the License.
  */
 
+/*
+* These ProtoBuf files are part of an experimental interface.
+* Use this interface at your own risk.
+*/
+
 syntax = "proto3";
 package org.apache.geode.protocol.protobuf;
 

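The hunks above annotate each codec with `@Experimental`. The `TypeCodec` hunk shows only `T decode(byte[] incoming);`, so the sketch below assumes a mirrored `encode` method; the interface and class names here are illustrative stand-ins for the pattern the codec classes follow, not the actual Geode API.

```java
import java.nio.ByteBuffer;

// Hypothetical stand-in for org.apache.geode.serialization.TypeCodec<T>.
// Only decode(byte[]) is visible in the hunks above; encode is assumed.
interface TypeCodec<T> {
  T decode(byte[] incoming);

  byte[] encode(T value);
}

// Sketch of a fixed-width codec in the style of IntCodec/ShortCodec above.
public class IntCodecSketch implements TypeCodec<Integer> {
  @Override
  public Integer decode(byte[] incoming) {
    // Reads a big-endian 4-byte value, matching ByteBuffer's default order.
    return ByteBuffer.wrap(incoming).getInt();
  }

  @Override
  public byte[] encode(Integer value) {
    return ByteBuffer.allocate(Integer.BYTES).putInt(value).array();
  }

  public static void main(String[] args) {
    IntCodecSketch codec = new IntCodecSketch();
    byte[] wire = codec.encode(42);
    System.out.println(codec.decode(wire)); // round-trips to 42
  }
}
```

Each real codec in the diffs pairs such a converter with a `SerializationType` constant so the registry can look it up by type.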

[36/51] [abbrv] geode git commit: Add test to expose GEODE-3429

Posted by kl...@apache.org.
Add test to expose GEODE-3429


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/1a67d462
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/1a67d462
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/1a67d462

Branch: refs/heads/feature/GEODE-1279
Commit: 1a67d46278ea519a1bfe185a7da11247e9771a4b
Parents: 64f33c3
Author: Jared Stewart <js...@pivotal.io>
Authored: Thu Aug 10 11:21:59 2017 -0700
Committer: Jared Stewart <js...@pivotal.io>
Committed: Thu Aug 17 15:57:59 2017 -0700

----------------------------------------------------------------------
 .../deployment/FunctionScannerTest.java         | 17 ++++++++++
 .../internal/deployment/AbstractFunction.java   | 33 --------------------
 .../internal/deployment/AnnotatedFunction.java  | 23 ++++++++++++++
 .../apache/geode/test/compiler/JarBuilder.java  | 10 ++++--
 .../geode/test/compiler/JavaCompiler.java       | 11 +++++--
 5 files changed, 57 insertions(+), 37 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/1a67d462/geode-core/src/test/java/org/apache/geode/management/internal/deployment/FunctionScannerTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/management/internal/deployment/FunctionScannerTest.java b/geode-core/src/test/java/org/apache/geode/management/internal/deployment/FunctionScannerTest.java
index af9ffdf..d46b801 100644
--- a/geode-core/src/test/java/org/apache/geode/management/internal/deployment/FunctionScannerTest.java
+++ b/geode-core/src/test/java/org/apache/geode/management/internal/deployment/FunctionScannerTest.java
@@ -23,6 +23,7 @@ import java.net.URL;
 import java.util.Collection;
 
 import org.junit.Before;
+import org.junit.Ignore;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -95,6 +96,22 @@ public class FunctionScannerTest {
         "org.apache.geode.management.internal.deployment.AbstractImplementsFunction");
   }
 
+  @Test
+  @Ignore("Fails due to GEODE-3429")
+  public void registerFunctionHierarchySplitAcrossTwoJars() throws Exception {
+    File sourceFileOne = loadTestResource("AbstractImplementsFunction.java");
+    File abstractJar = new File(temporaryFolder.getRoot(), "abstract.jar");
+    jarBuilder.buildJar(abstractJar, sourceFileOne);
+
+    jarBuilder.addToClasspath(abstractJar);
+    File sourceFileTwo = loadTestResource("AnnotatedFunction.java");
+
+    jarBuilder.buildJar(outputJar, sourceFileTwo);
+    Collection<String> functionsFoundInJar = functionScanner.findFunctionsInJar(outputJar);
+    assertThat(functionsFoundInJar).containsExactlyInAnyOrder(
+        "org.apache.geode.management.internal.deployment.AnnotatedFunction");
+  }
+
   private File loadTestResource(String fileName) throws URISyntaxException {
     URL resourceFileURL = this.getClass().getResource(fileName);
     assertThat(resourceFileURL).isNotNull();

http://git-wip-us.apache.org/repos/asf/geode/blob/1a67d462/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AbstractFunction.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AbstractFunction.java b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AbstractFunction.java
deleted file mode 100644
index afc83ab..0000000
--- a/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AbstractFunction.java
+++ /dev/null
@@ -1,33 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
- * agreements. See the NOTICE file distributed with this work for additional information regarding
- * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License. You may obtain a
- * copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed under the License
- * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
- * or implied. See the License for the specific language governing permissions and limitations under
- * the License.
- */
-package org.apache.geode.management.internal.deployment;
-
-import org.apache.geode.cache.execute.FunctionContext;
-
-public class AbstractFunction implements Function {
-  public void execute(FunctionContext context) {
-    context.getResultSender().lastResult("ConcreteResult");
-  }
-
-  public static abstract class AbstractImplementsFunction implements Function {
-    public abstract void execute(FunctionContext context);
-  }
-
-  public static class Concrete extends AbstractImplementsFunction {
-    public void execute(FunctionContext context) {
-      context.getResultSender().lastResult("ConcreteResult");
-    }
-  }
-}

http://git-wip-us.apache.org/repos/asf/geode/blob/1a67d462/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AnnotatedFunction.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AnnotatedFunction.java b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AnnotatedFunction.java
new file mode 100644
index 0000000..612b498
--- /dev/null
+++ b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AnnotatedFunction.java
@@ -0,0 +1,23 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.management.internal.deployment;
+
+import org.apache.geode.cache.execute.FunctionContext;
+
+public class AnnotatedFunction extends AbstractImplementsFunction {
+  public void execute(FunctionContext context) {
+    context.getResultSender().lastResult("AnnotatedFunctionResult");
+  }
+}

http://git-wip-us.apache.org/repos/asf/geode/blob/1a67d462/geode-junit/src/main/java/org/apache/geode/test/compiler/JarBuilder.java
----------------------------------------------------------------------
diff --git a/geode-junit/src/main/java/org/apache/geode/test/compiler/JarBuilder.java b/geode-junit/src/main/java/org/apache/geode/test/compiler/JarBuilder.java
index beea476..db1eb58 100644
--- a/geode-junit/src/main/java/org/apache/geode/test/compiler/JarBuilder.java
+++ b/geode-junit/src/main/java/org/apache/geode/test/compiler/JarBuilder.java
@@ -24,8 +24,6 @@ import java.util.List;
 import java.util.jar.JarEntry;
 import java.util.jar.JarOutputStream;
 
-import org.assertj.core.api.Assertions;
-
 
 /**
  * This class accepts java source code in the format of .java source files or strings containing the
@@ -76,6 +74,14 @@ import org.assertj.core.api.Assertions;
 public class JarBuilder {
   private final JavaCompiler javaCompiler = new JavaCompiler();
 
+  /**
+   * Adds the given jarFile to the classpath that will be used for compilation by the buildJar
+   * methods.
+   */
+  public void addToClasspath(File jarFile) {
+    javaCompiler.addToClasspath(jarFile);
+  }
+
   public void buildJarFromClassNames(File outputJarFile, String... classNames) throws IOException {
     UncompiledSourceCode[] uncompiledSourceCodes = Arrays.stream(classNames)
         .map(UncompiledSourceCode::fromClassName).toArray(UncompiledSourceCode[]::new);
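
The `addToClasspath` method added above simply delegates to the compiler, which accumulates jar entries onto the JVM's own classpath string. A minimal self-contained sketch of that pattern (illustration only, not Geode code — class and method names here are hypothetical):

```java
import java.io.File;

public class ClasspathSketch {
    // Starts from the JVM's current classpath, as the revised JavaCompiler does.
    private String classpath = System.getProperty("java.class.path");

    // Appends a jar using the platform path separator (':' on Unix, ';' on Windows).
    public void addToClasspath(File jarFile) {
        classpath += File.pathSeparator + jarFile.getAbsolutePath();
    }

    public String getClasspath() {
        return classpath;
    }
}
```

Delegating from the builder keeps the classpath state in one place, so every subsequent `buildJar` call compiles against all previously added jars.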

http://git-wip-us.apache.org/repos/asf/geode/blob/1a67d462/geode-junit/src/main/java/org/apache/geode/test/compiler/JavaCompiler.java
----------------------------------------------------------------------
diff --git a/geode-junit/src/main/java/org/apache/geode/test/compiler/JavaCompiler.java b/geode-junit/src/main/java/org/apache/geode/test/compiler/JavaCompiler.java
index 8449605..6039e87 100644
--- a/geode-junit/src/main/java/org/apache/geode/test/compiler/JavaCompiler.java
+++ b/geode-junit/src/main/java/org/apache/geode/test/compiler/JavaCompiler.java
@@ -32,10 +32,16 @@ import org.apache.commons.io.FileUtils;
 
 public class JavaCompiler {
   private File tempDir;
+  private String classpath;
 
   public JavaCompiler() {
     this.tempDir = Files.createTempDir();
     tempDir.deleteOnExit();
+    this.classpath = System.getProperty("java.class.path");
+  }
+
+  public void addToClasspath(File jarFile) {
+    classpath += File.pathSeparator + jarFile.getAbsolutePath();
   }
 
   public List<CompiledSourceCode> compile(File... sourceFiles) throws IOException {
@@ -57,8 +63,9 @@ public class JavaCompiler {
     File temporarySourcesDirectory = createSubdirectory(tempDir, "sources");
     File temporaryClassesDirectory = createSubdirectory(tempDir, "classes");
 
-    List<String> options = Stream.of("-d", temporaryClassesDirectory.getAbsolutePath(),
-        "-classpath", System.getProperty("java.class.path")).collect(toList());
+    List<String> options =
+        Stream.of("-d", temporaryClassesDirectory.getAbsolutePath(), "-classpath", classpath)
+            .collect(toList());
 
     try {
       for (UncompiledSourceCode sourceCode : uncompiledSources) {
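
The hunk above threads the mutable `classpath` field into the `-classpath` option handed to javac. A small runnable sketch of that invocation using the standard `javax.tools` API (the class and helper name here are hypothetical, not part of Geode):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CompileSketch {
    // Compiles a trivial class with an explicit -classpath, mirroring how the
    // revised geode-junit JavaCompiler passes its classpath field to javac.
    public static boolean compileWithClasspath(String classpath) throws Exception {
        Path dir = Files.createTempDirectory("compile-sketch");
        Path source = dir.resolve("Hello.java");
        Files.write(source, "public class Hello {}".getBytes("UTF-8"));
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        int exitCode = javac.run(null, null, null,
            "-d", dir.toString(), "-classpath", classpath, source.toString());
        return exitCode == 0 && Files.exists(dir.resolve("Hello.class"));
    }
}
```

Because the classpath is now a field rather than read fresh from `java.class.path` on each compile, jars added via `addToClasspath` are visible to later compilations — which is what the split-across-two-jars test relies on.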


[29/51] [abbrv] geode git commit: GEODE-2886 : 1. renamed testcase name as suggested in the PR review.'

Posted by kl...@apache.org.
GEODE-2886 : 1. renamed testcase name as suggested in the PR review.'


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/40185e8b
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/40185e8b
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/40185e8b

Branch: refs/heads/feature/GEODE-1279
Commit: 40185e8bc18e6fa36077ebd9a59bd8bc3160655e
Parents: 88abd31
Author: Amey Barve <ab...@apache.org>
Authored: Fri Jul 28 17:28:14 2017 +0530
Committer: Amey Barve <ab...@apache.org>
Committed: Thu Aug 17 15:47:30 2017 +0530

----------------------------------------------------------------------
 .../apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/40185e8b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java
index 779b12a..2044c68 100644
--- a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java
+++ b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java
@@ -331,8 +331,8 @@ public class LuceneQueriesIntegrationTest extends LuceneIntegrationTest {
   }
 
   @Test()
-  public void testWaitUntilFlushedForException() throws Exception {
-    Map<String, Analyzer> fields = new HashMap<String, Analyzer>();
+  public void waitUntilFlushThrowsIllegalStateExceptionWhenAEQNotFound() throws Exception {
+    Map<String, Analyzer> fields = new HashMap<>();
     fields.put("name", null);
     fields.put("lastName", null);
     fields.put("address", null);
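
Besides the rename, the hunk above replaces `new HashMap<String, Analyzer>()` with the diamond operator; the two forms are equivalent, since the compiler infers the type arguments from the declared target type. A self-contained illustration (String stands in for Lucene's `Analyzer` so the sketch has no Lucene dependency):

```java
import java.util.HashMap;
import java.util.Map;

public class DiamondSketch {
    public static Map<String, String> buildFields() {
        // Diamond operator: <String, String> is inferred from the target type.
        Map<String, String> fields = new HashMap<>();
        fields.put("name", null);
        fields.put("lastName", null);
        fields.put("address", null);
        return fields;
    }
}
```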


[02/51] [abbrv] geode git commit: GEODE-3395 Variable-ize product version and name in user guide - Reference section

Posted by kl...@apache.org.
http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/reference/topics/cache-elements-list.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/reference/topics/cache-elements-list.html.md.erb b/geode-docs/reference/topics/cache-elements-list.html.md.erb
index 2b1c035..3f4872a 100644
--- a/geode-docs/reference/topics/cache-elements-list.html.md.erb
+++ b/geode-docs/reference/topics/cache-elements-list.html.md.erb
@@ -1,4 +1,4 @@
----
+--
 title: "&lt;cache&gt; Element Hierarchy"
 ---
 
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-This section shows the hierarchy of `<cache>` element sub-elements that you use to configure Geode caches and servers.
+This section shows the hierarchy of `<cache>` element sub-elements that you use to configure <%=vars.product_name%> caches and servers.
 
 For details, see [&lt;cache&gt; Element Reference](cache_xml.html#cache_xml_cache).
 

http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/reference/topics/cache_xml.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/reference/topics/cache_xml.html.md.erb b/geode-docs/reference/topics/cache_xml.html.md.erb
index cf5d2b3..a8acd89 100644
--- a/geode-docs/reference/topics/cache_xml.html.md.erb
+++ b/geode-docs/reference/topics/cache_xml.html.md.erb
@@ -21,9 +21,9 @@ limitations under the License.
 <a id="cache_xml_cache"></a>
 
 
-This section documents the `cache.xml` sub-elements used for Geode server configuration. All elements are sub-elements of the `<cache>` element.
+This section documents the `cache.xml` sub-elements used for <%=vars.product_name%> server configuration. All elements are sub-elements of the `<cache>` element.
 
-For Geode client configuration, see [&lt;client-cache&gt; Element Reference](client-cache.html#cc-client-cache).
+For <%=vars.product_name%> client configuration, see [&lt;client-cache&gt; Element Reference](client-cache.html#cc-client-cache).
 
 **API**:`org.apache.geode.cache.CacheFactory`
 
@@ -244,7 +244,7 @@ Configures a queue for sending region events to an AsyncEventListener implementa
 </tr>
 <tr class="even">
 <td>parallel</td>
-<td>Value of &quot;true&quot; or &quot;false&quot; that specifies the type of queue that Geode creates.</td>
+<td>Value of &quot;true&quot; or &quot;false&quot; that specifies the type of queue that <%=vars.product_name%> creates.</td>
 <td>false</td>
 </tr>
 <tr class="odd">
@@ -259,12 +259,12 @@ Configures a queue for sending region events to an AsyncEventListener implementa
 </tr>
 <tr class="odd">
 <td>enable-batch-conflation</td>
-<td>Boolean value that determines whether Geode should conflate messages.</td>
+<td>Boolean value that determines whether <%=vars.product_name%> should conflate messages.</td>
 <td>false</td>
 </tr>
 <tr class="even">
 <td>disk-store-name</td>
-<td>Named disk store to use for storing queue overflow, or for persisting the queue. If you specify a value, the named disk store must exist. If you specify a null value, Geode uses the default disk store for overflow and queue persistence.</td>
+<td>Named disk store to use for storing queue overflow, or for persisting the queue. If you specify a value, the named disk store must exist. If you specify a null value, <%=vars.product_name%> uses the default disk store for overflow and queue persistence.</td>
 <td>null specifies the default disk store</td>
 </tr>
 <tr class="odd">
@@ -293,13 +293,13 @@ Configures a queue for sending region events to an AsyncEventListener implementa
 <ul>
 <li><strong>key</strong>. When distributing region events from the local queue, multiple dispatcher threads preserve the order of key updates.</li>
 <li><strong>thread</strong>. When distributing region events from the local queue, multiple dispatcher threads preserve the order in which a given thread added region events to the queue.</li>
-<li><strong>partition</strong>. This option is valid for parallel event queues. When distributing region events from the local queue, multiple dispatcher threads preserve the order in which region events were added to the local queue. For a partitioned region, this means that all region events delivered to a specific partition are delivered in the same order to the remote Geode site. For a distributed region, this means that all key updates delivered to the local queue are distributed to the remote site in the same order.</li>
+<li><strong>partition</strong>. This option is valid for parallel event queues. When distributing region events from the local queue, multiple dispatcher threads preserve the order in which region events were added to the local queue. For a partitioned region, this means that all region events delivered to a specific partition are delivered in the same order to the remote <%=vars.product_name%> site. For a distributed region, this means that all key updates delivered to the local queue are distributed to the remote site in the same order.</li>
 </ul></td>
 <td>key</td>
 </tr>
 <tr class="even">
 <td>persistent</td>
-<td>Boolean value that determines whether Geode persists this queue.</td>
+<td>Boolean value that determines whether <%=vars.product_name%> persists this queue.</td>
 <td>False</td>
 </tr>
 </tbody>
@@ -494,7 +494,7 @@ The `cacheserver` process uses only `cache.xml` configuration. For application s
 </tr>
 <tr class="odd">
 <td>tcp-no-delay</td>
-<td>When set to true, enables TCP_NODELAY for Geode server connections to clients.</td>
+<td>When set to true, enables TCP_NODELAY for <%=vars.product_name%> server connections to clients.</td>
 <td>false</td>
 </tr>
 </tbody>
@@ -549,7 +549,7 @@ Application plug-in used to provide current and predicted server load informatio
 
 Specify the Java class and its initialization parameters with the `<class-name>` and `<parameter>` sub-elements. See [&lt;class-name&gt; and &lt;parameter&gt;](#class-name_parameter).
 
-**Default:** If this is not defined, the default Geode load probe is used.
+**Default:** If this is not defined, the default <%=vars.product_name%> load probe is used.
 
 **API:** `org.apache.geode.cache.server.setLoadProbe`
 
@@ -958,7 +958,7 @@ Specifies the configuration for the Portable Data eXchange (PDX) method of seria
 
 ## <a id="pdx-serializer_24898989679" class="no-quick-link"></a>&lt;pdx-serializer&gt;
 
-Allows you to configure the PdxSerializer for this Geode member.
+Allows you to configure the PdxSerializer for this <%=vars.product_name%> member.
 
 Specify the Java class and its initialization parameters with the `<class-name>` and `<parameter>` sub-elements. See [&lt;class-name&gt; and &lt;parameter&gt;](#class-name_parameter).
 
@@ -1004,7 +1004,7 @@ Specifies a region attributes template that can be named (by `id`) and reference
 <tbody>
 <tr class="odd">
 <td>concurrency-level</td>
-<td>Gives an estimate of the maximum number of application threads that will concurrently access a region entry at one time. This attribute does not apply to partitioned regions. This attribute helps Geode optimize the use of system resources and reduce thread contention. This sets an initial parameter on the underlying <code class="ph codeph">java.util.ConcurrentHashMap</code> used for storing region entries.
+<td>Gives an estimate of the maximum number of application threads that will concurrently access a region entry at one time. This attribute does not apply to partitioned regions. This attribute helps <%=vars.product_name%> optimize the use of system resources and reduce thread contention. This sets an initial parameter on the underlying <code class="ph codeph">java.util.ConcurrentHashMap</code> used for storing region entries.
 <div class="note note">
 <b>Note:</b>
 <p>Before you modify this, read the concurrency level description, then see the Java API documentation for <code class="ph codeph">java.util.ConcurrentHashMap</code>.</p>
@@ -1100,7 +1100,7 @@ Specifies a region attributes template that can be named (by `id`) and reference
 </tr>
 <tr class="even">
 <td>gateway-sender-ids</td>
-<td><p>Specifies one or more gateway sender IDs to use for distributing region events to remote Geode sites. Specify multiple IDs as a comma-separated list.</p>
+<td><p>Specifies one or more gateway sender IDs to use for distributing region events to remote <%=vars.product_name%> sites. Specify multiple IDs as a comma-separated list.</p>
 <p><strong>API:</strong> <code class="ph codeph">addGatewaySenderId</code></p>
 <p><strong>Example:</strong></p>
 <pre class="pre codeblock language-xml"><code>&lt;region-attributes 
@@ -1277,7 +1277,7 @@ Specifies a region attributes template that can be named (by `id`) and reference
 <td><p>Definition: Determines how updates to region entries are distributed to the other caches in the distributed system where the region and entry are defined. Scope also determines whether to allow remote invocation of some of the region’s event handlers, and whether to use region entry versions to provide consistent updates across replicated regions.</p>
 <div class="note note">
 <b>Note:</b>
-<p>You can configure the most common of these options with Geode’s region shortccuts in <code class="ph codeph">RegionShortcut</code> and <code class="ph codeph">ClientRegionShortcut</code>.</p>
+<p>You can configure the most common of these options with <%=vars.product_name%> region shortcuts in <code class="ph codeph">RegionShortcut</code> and <code class="ph codeph">ClientRegionShortcut</code>.</p>
 </div>
 <div class="note note">
 <b>Note:</b>
@@ -1315,7 +1315,7 @@ Specifies a region attributes template that can be named (by `id`) and reference
 </tr>
 <tr class="even">
 <td>statistics-enabled</td>
-<td>Boolean specifying whether to gather statistics on the region. Must be true to use expiration on the region. Geode provides a standard set of statistics for cached regions and region entries, which give you information for fine-tuning your distributed system. Unlike other Geode statistics, statistics for local and distributed regions are not archived and cannot be charted. They are kept in instances of <code class="ph codeph">org.apache.geode.cache.CacheStatistics</code> and made available through the region and its entries through the <code class="ph codeph">Region.getStatistics</code> and <code class="ph codeph">Region.Entry.getStatistics</code> methods.
+<td>Boolean specifying whether to gather statistics on the region. Must be true to use expiration on the region. <%=vars.product_name%> provides a standard set of statistics for cached regions and region entries, which give you information for fine-tuning your distributed system. Unlike other <%=vars.product_name%> statistics, statistics for local and distributed regions are not archived and cannot be charted. They are kept in instances of <code class="ph codeph">org.apache.geode.cache.CacheStatistics</code> and made available through the region and its entries through the <code class="ph codeph">Region.getStatistics</code> and <code class="ph codeph">Region.Entry.getStatistics</code> methods.
 <p><strong>API:</strong> <code class="ph codeph">setStatisticsEnabled</code></p>
 <p><strong>Example:</strong></p>
 <pre class="pre codeblock language-xml"><code>&lt;region-attributes 
@@ -1338,7 +1338,7 @@ Specifies a region attributes template that can be named (by `id`) and reference
 <td><p>Determines whether members perform checks to provide consistent handling for concurrent or out-of-order updates to distributed regions. See [Consistency for Region Updates](../../developing/distributed_regions/region_entry_versions.html#topic_CF2798D3E12647F182C2CEC4A46E2045).</p>
 <div class="note note">
 <b>Note:</b>
-<p>Applications that use a <code class="ph codeph">client-cache</code> may want to disable concurrency checking in order to see all events for a region. Geode server members can continue using concurrency checks for the region, but they will pass all events to the client cache. This configuration ensures that the client sees all events, but it does not prevent the client cache from becoming out-of-sync with the server cache.</p>
+<p>Applications that use a <code class="ph codeph">client-cache</code> may want to disable concurrency checking in order to see all events for a region. <%=vars.product_name%> server members can continue using concurrency checks for the region, but they will pass all events to the client cache. This configuration ensures that the client sees all events, but it does not prevent the client cache from becoming out-of-sync with the server cache.</p>
 </div>
 <p><strong>API:</strong> <code class="ph codeph">setConcurrencyChecksEnabled</code></p>
 <p><strong>Example:</strong></p>
@@ -1902,7 +1902,7 @@ With the exception of `local-max-memory`, all members defining a partitioned reg
 | Attribute              | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    | Default              |
 |------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------|
 | colocated-with         | The full name of a region to colocate with this region. The named region must exist before this region is created.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             | null                 |
-| local-max-memory       | Maximum megabytes of memory set aside for this region in the local member. This is all memory used for this partitioned region - for primary buckets and any redundant copies. This value must be smaller than the Java settings for the initial or maximum JVM heap. When the memory use goes above this value, Geode issues a warning, but operation continues. Besides setting the maximum memory to use for the member, this setting also tells Geode how to balance the load between members where the region is defined. For example, if one member sets this value to twice the value of another member’s setting, Geode works to keep the ratio between the first and the second at two-to-one, regardless of how little memory the region consumes. This is a local parameter that applies only to the local member. A value of 0 disables local data caching. | 90% (of local heap)  |
+| local-max-memory       | Maximum megabytes of memory set aside for this region in the local member. This is all memory used for this partitioned region - for primary buckets and any redundant copies. This value must be smaller than the Java settings for the initial or maximum JVM heap. When the memory use goes above this value, <%=vars.product_name%> issues a warning, but operation continues. Besides setting the maximum memory to use for the member, this setting also tells <%=vars.product_name%> how to balance the load between members where the region is defined. For example, if one member sets this value to twice the value of another member’s setting, <%=vars.product_name%> works to keep the ratio between the first and the second at two-to-one, regardless of how little memory the region consumes. This is a local parameter that applies only to the local member. A value of 0 disables local data caching. | 90% (of local heap)  |
 | recovery-delay         | Applies when `redundant-copies` is greater than zero. The number of milliseconds to wait after a member crashes before reestablishing redundancy for the region. A setting of -1 disables automatic recovery of redundancy after member failure.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               | -1                   |
 | redundant-copies       | Number of extra copies that the partitioned region must maintain for each entry. Range: 0-3. If you specify 1, this partitioned region maintains the original and one backup, for a total of two copies. A value of 0 disables redundancy.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     | 0                    |
 | startup-recovery-delay | Applies when `redundant-copies` is greater than zero. The number of milliseconds a newly started member should wait before trying to satisfy redundancy of region data stored on other members. A setting of -1 disables automatic recovery of redundancy after new members join.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              | 0                    |
@@ -2220,7 +2220,7 @@ An event-handler plug-in that allows you to receive before-event notification fo
 
 ## <a id="cache-listener" class="no-quick-link"></a>&lt;cache-listener&gt;
 
-An event-handler plug-in that receives after-event notification of changes to the region and its entries. Any number of cache listeners can be defined for a region in any member. Geode offers several listener types with callbacks to handle data and process events. Depending on the `data-policy` and the `interest-policy` subscription attributes, a cache listener may receive only events that originate in the local cache, or it may receive those events along with events that originate remotely.
+An event-handler plug-in that receives after-event notification of changes to the region and its entries. Any number of cache listeners can be defined for a region in any member. <%=vars.product_name%> offers several listener types with callbacks to handle data and process events. Depending on the `data-policy` and the `interest-policy` subscription attributes, a cache listener may receive only events that originate in the local cache, or it may receive those events along with events that originate remotely.
 
 Specify the Java class for the cache listener and its initialization parameters with the `<class-name>` and `<parameter>` sub-elements. See [&lt;class-name&gt; and &lt;parameter&gt;](#class-name_parameter).
 
@@ -2257,7 +2257,7 @@ A compressor registers a custom class that extends `Compressor` to support compr
 
 ## <a id="eviction-attributes" class="no-quick-link"></a>&lt;eviction-attributes&gt;
 
-Specifies whether and how to control a region’s size. Size is controlled by removing least recently used (LRU) entries to make space for new ones. This may be done through destroy or overflow actions. You can configure your region for lru-heap-percentage with an eviction action of local-destroy using Geode’s stored region attributes.
+Specifies whether and how to control a region’s size. Size is controlled by removing least recently used (LRU) entries to make space for new ones. This may be done through destroy or overflow actions. You can configure your region for lru-heap-percentage with an eviction action of local-destroy using stored region attributes.
 
 **Default:** Uses the lru-entry-count algorithm.
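As a minimal sketch of the default algorithm (the maximum value and the overflow action shown here are illustrative choices, not defaults):

``` pre
<region-attributes>
  <eviction-attributes>
    <!-- evict LRU entries once the region holds 1000 entries,
         writing evicted values to disk instead of destroying them -->
    <lru-entry-count maximum="1000" action="overflow-to-disk"/>
  </eviction-attributes>
</region-attributes>
```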
 
@@ -2327,7 +2327,7 @@ Using the maximum attribute, specifies maximum region capacity based on entry co
 
 ## <a id="lru-heap-percentage" class="no-quick-link"></a>&lt;lru-heap-percentage&gt;
 
-Runs evictions when the Geode resource manager says to. The manager orders evictions when the total cache size is over the heap percentage limit specified in the manager configuration. You can declare a Java class that implements the ObjectSizer interface to measure the size of objects in the Region.
+Runs evictions when the <%=vars.product_name%> resource manager says to. The manager orders evictions when the total cache size is over the heap percentage limit specified in the manager configuration. You can declare a Java class that implements the ObjectSizer interface to measure the size of objects in the Region.
 
 Specify the Java class and its initialization parameters with the `<class-name>` and `<parameter>` sub-elements. See [&lt;class-name&gt; and &lt;parameter&gt;](#class-name_parameter).
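For example, a sketch of an `lru-heap-percentage` configuration with a custom sizer (the class name `com.example.MyObjectSizer` is hypothetical):

``` pre
<region-attributes>
  <eviction-attributes>
    <!-- evict when the resource manager signals heap pressure;
         size entries with a custom ObjectSizer implementation -->
    <lru-heap-percentage action="local-destroy">
      <class-name>com.example.MyObjectSizer</class-name>
    </lru-heap-percentage>
  </eviction-attributes>
</region-attributes>
```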
 
@@ -2464,7 +2464,7 @@ Specifies the binding for a data-source used in transaction management. See [Con
 
 ## <a id="jndi-binding" class="no-quick-link"></a>&lt;jndi-binding&gt;
 
-For every datasource that is bound to the JNDI tree, there should be one `<jndi-binding>` element. This element describes the property and the configuration of the datasource. Geode uses the attributes of the `<jndi-binding>` element for configuration. Use the `<config-property>` element to configure properties for the datasource.
+For every datasource that is bound to the JNDI tree, there should be one `<jndi-binding>` element. This element describes the property and the configuration of the datasource. <%=vars.product_name%> uses the attributes of the `<jndi-binding>` element for configuration. Use the `<config-property>` element to configure properties for the datasource.
 
 We recommend that you set the username and password with the `user-name` and `password` jndi-binding attributes rather than using the `<config-property>` element.
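A minimal sketch of this layout (the datasource name, driver class, and URL are placeholders, not defaults):

``` pre
<jndi-bindings>
  <jndi-binding jndi-name="myDatabase"
                type="SimpleDataSource"
                jdbc-driver-class="org.postgresql.Driver"
                connection-url="jdbc:postgresql://dbhost:5432/mydb"
                user-name="dbuser"
                password="dbpassword">
    <!-- additional driver properties go in config-property elements -->
    <config-property>
      <config-property-name>loginTimeout</config-property-name>
      <config-property-type>java.lang.String</config-property-type>
      <config-property-value>30</config-property-value>
    </config-property>
  </jndi-binding>
</jndi-bindings>
```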
 
@@ -2521,7 +2521,7 @@ We recommend that you set the username and password with the `user-name` and `pa
 </tr>
 <tr class="odd">
 <td>jndi-name</td>
-<td>The <code class="ph codeph">jndi-name</code> attribute is the key binding parameter. If the value of jndi-name is a DataSource, it is bound as java:/myDatabase, where myDatabase is the name you assign to your data source. If the data source cannot be bound to JNDI at runtime, Geode logs a warning.</td>
+<td>The <code class="ph codeph">jndi-name</code> attribute is the key binding parameter. If the value of jndi-name is a DataSource, it is bound as java:/myDatabase, where myDatabase is the name you assign to your data source. If the data source cannot be bound to JNDI at runtime, <%=vars.product_name%> logs a warning.</td>
 <td> </td>
 </tr>
 <tr class="even">
@@ -2555,7 +2555,7 @@ We recommend that you set the username and password with the `user-name` and `pa
 <tbody>
 <tr class="odd">
 <td>XATransaction</td>
-<td>Select this option when you want to use a<span class="keyword apiname">ManagedConnection</span> interface with a Java Transaction Manager to define transaction boundries. This option allows a <span class="keyword apiname">ManagedDataSource</span> to participate in a transaction with a Geode cache.</td>
+<td>Select this option when you want to use a <span class="keyword apiname">ManagedConnection</span> interface with a Java Transaction Manager to define transaction boundaries. This option allows a <span class="keyword apiname">ManagedDataSource</span> to participate in a transaction with a <%=vars.product_name%> cache.</td>
 </tr>
 <tr class="even">
 <td>NoTransaction</td>
@@ -2733,7 +2733,7 @@ Describes an index to be created on a region. The index node, if any, should all
 ## <a id="luceneindex" class="no-quick-link"></a>&lt;lucene:index&gt;
 
 Describes a Lucene index to be created on a region. The `lucene` namespace
-and the scoping operator (`:`) must be specified, as the Geode `cache`
+and the scoping operator (`:`) must be specified, as the <%=vars.product_name%> `cache`
 namespace also defines an `index` element (for OQL indexes).
 
 **API:** `org.apache.geode.cache.lucene` package
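A sketch of the required namespace declaration and scoping (the region, index, and field names are illustrative):

``` pre
<cache xmlns="http://geode.apache.org/schema/cache"
       xmlns:lucene="http://geode.apache.org/schema/lucene"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       version="1.0">
  <region name="myRegion" refid="PARTITION">
    <!-- the lucene: prefix distinguishes this from the OQL index element -->
    <lucene:index name="myIndex">
      <lucene:field name="title"/>
    </lucene:index>
  </region>
</cache>
```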
@@ -3032,7 +3032,7 @@ Set of serializer or instantiator tags to register customer DataSerializer exten
 
 ## <a id="serializer" class="no-quick-link"></a>&lt;serializer&gt;
 
-Allows you to configure the DataSerializer for this Geode member. It registers a custom class which extends DataSerializer to support custom serialization of non-modifiable object types inside Geode.
+Allows you to configure the DataSerializer for this <%=vars.product_name%> member. It registers a custom class which extends DataSerializer to support custom serialization of non-modifiable object types inside <%=vars.product_name%>.
 
 Specify the Java class for the `DataSerializer` and its initialization parameters with the `<class-name>` sub-element.
 
@@ -3040,7 +3040,7 @@ Specify the Java class for the `DataSerializer` and its initialization parameter
 
 ## <a id="instantiator" class="no-quick-link"></a>&lt;instantiator&gt;
 
-An Instantiator registers a custom class which implements the `DataSerializable` interface to support custom object serialization inside Geode.
+An Instantiator registers a custom class which implements the `DataSerializable` interface to support custom object serialization inside <%=vars.product_name%>.
 
 Specify the Java class and its initialization parameters with the `<class-name>` sub-element.
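Both registrations nest under `<serialization-registration>`; a minimal sketch (the class names and the instantiator `id` are hypothetical):

``` pre
<serialization-registration>
  <!-- custom DataSerializer for types you cannot modify -->
  <serializer>
    <class-name>com.example.MyDataSerializer</class-name>
  </serializer>
  <!-- registers a DataSerializable class under a fixed id -->
  <instantiator id="30">
    <class-name>com.example.MyDataSerializableType</class-name>
  </instantiator>
</serialization-registration>
```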
 

http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/reference/topics/chapter_overview_cache_xml.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/reference/topics/chapter_overview_cache_xml.html.md.erb b/geode-docs/reference/topics/chapter_overview_cache_xml.html.md.erb
index 2e47eb9..379d320 100644
--- a/geode-docs/reference/topics/chapter_overview_cache_xml.html.md.erb
+++ b/geode-docs/reference/topics/chapter_overview_cache_xml.html.md.erb
@@ -30,18 +30,18 @@ You can configure most elements of the cache.xml file and apply it to your entir
 
 -   **[&lt;cache&gt; Element Hierarchy](../../reference/topics/cache-elements-list.html)**
 
-    This section shows the hierarchy of `<cache>` element sub-elements that you use to configure Geode caches and servers.
+    This section shows the hierarchy of `<cache>` element sub-elements that you use to configure <%=vars.product_name%> caches and servers.
 
 -   **[&lt;cache&gt; Element Reference](../../reference/topics/cache_xml.html#cache_xml_cache)**
 
-    This section documents the `cache.xml` sub-elements used for Geode server configuration. All elements are sub-elements of the `<cache>` element.
+    This section documents the `cache.xml` sub-elements used for <%=vars.product_name%> server configuration. All elements are sub-elements of the `<cache>` element.
 
 -   **[&lt;client-cache&gt; Element Hierarchy](../../reference/topics/client-cache-elements-list.html)**
 
-    This section shows the hierarchy of `<client-cache>` element sub-elements that you use to configure Geode caches and clients.
+    This section shows the hierarchy of `<client-cache>` element sub-elements that you use to configure <%=vars.product_name%> caches and clients.
 
 -   **[&lt;client-cache&gt; Element Reference](../../reference/topics/client-cache.html)**
 
-    This section documents all `cache.xml` elements that you use to configure Geode clients. All elements are sub-elements of the `<client-cache>` element.
+    This section documents all `cache.xml` elements that you use to configure <%=vars.product_name%> clients. All elements are sub-elements of the `<client-cache>` element.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/reference/topics/chapter_overview_regionshortcuts.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/reference/topics/chapter_overview_regionshortcuts.html.md.erb b/geode-docs/reference/topics/chapter_overview_regionshortcuts.html.md.erb
index 4a107ec..1b266e9 100644
--- a/geode-docs/reference/topics/chapter_overview_regionshortcuts.html.md.erb
+++ b/geode-docs/reference/topics/chapter_overview_regionshortcuts.html.md.erb
@@ -19,9 +19,9 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-This topic describes the various region shortcuts you can use to configure Geode regions.
+This topic describes the various region shortcuts you can use to configure <%=vars.product_name%> regions.
 
-Region shortcuts are groupings of pre-configured attributes that define the characteristics of a region. You can use a region shortcut as a starting point when configuring regions and you can add additional configurations to customize your application. To reference a region shortcut in a Geode `cache.xml` file, use the `refid` attribute of the `<region>` element. For example:
+Region shortcuts are groupings of pre-configured attributes that define the characteristics of a region. You can use a region shortcut as a starting point when configuring regions and you can add additional configurations to customize your application. To reference a region shortcut in a <%=vars.product_name%> `cache.xml` file, use the `refid` attribute of the `<region>` element. For example:
 
 ``` pre
 <region name="myRegion" refid="PARTITION_REDUNDANT"/>
@@ -52,56 +52,56 @@ If you change the cache.xml file that defines a region, you must restart the mem
 
 For more information about configuring regions, see [Region Management](../../basic_config/data_regions/managing_data_regions.html).
 
-For more information about using the various types of Geode regions and when to use them, see [Region Types](../../developing/region_options/region_types.html#region_types).
+For more information about using the various types of <%=vars.product_name%> regions and when to use them, see [Region Types](../../developing/region_options/region_types.html#region_types).
 
--   **[Region Shortcuts Quick Reference](../../reference/topics/region_shortcuts_table.html)**
+-   **[Region Shortcuts Quick Reference](region_shortcuts_table.html)**
 
     This section provides a quick reference for all region shortcuts.
 
--   **[LOCAL](../../reference/topics/region_shortcuts_reference.html#reference_w2h_3cd_lk)**
+-   **[LOCAL](region_shortcuts_reference.html#reference_w2h_3cd_lk)**
 
--   **[LOCAL\_HEAP\_LRU](../../reference/topics/region_shortcuts_reference.html#reference_wd5_lpy_lk)**
+-   **[LOCAL\_HEAP\_LRU](region_shortcuts_reference.html#reference_wd5_lpy_lk)**
 
--   **[LOCAL\_OVERFLOW](../../reference/topics/region_shortcuts_reference.html#reference_adk_y4y_lk)**
+-   **[LOCAL\_OVERFLOW](region_shortcuts_reference.html#reference_adk_y4y_lk)**
 
--   **[LOCAL\_PERSISTENT](../../reference/topics/region_shortcuts_reference.html#reference_l5r_y4y_lk)**
+-   **[LOCAL\_PERSISTENT](region_shortcuts_reference.html#reference_l5r_y4y_lk)**
 
--   **[LOCAL\_PERSISTENT\_OVERFLOW](../../reference/topics/region_shortcuts_reference.html#reference_a45_y4y_lk)**
+-   **[LOCAL\_PERSISTENT\_OVERFLOW](region_shortcuts_reference.html#reference_a45_y4y_lk)**
 
--   **[PARTITION](../../reference/topics/region_shortcuts_reference.html#reference_ow5_4qy_lk)**
+-   **[PARTITION](region_shortcuts_reference.html#reference_ow5_4qy_lk)**
 
--   **[PARTITION\_HEAP\_LRU](../../reference/topics/region_shortcuts_reference.html#reference_twx_y4y_lk)**
+-   **[PARTITION\_HEAP\_LRU](region_shortcuts_reference.html#reference_twx_y4y_lk)**
 
--   **[PARTITION\_OVERFLOW](../../reference/topics/region_shortcuts_reference.html#reference_js1_z4y_lk)**
+-   **[PARTITION\_OVERFLOW](region_shortcuts_reference.html#reference_js1_z4y_lk)**
 
--   **[PARTITION\_PERSISTENT](../../reference/topics/region_shortcuts_reference.html#reference_d4k_jpy_lk)**
+-   **[PARTITION\_PERSISTENT](region_shortcuts_reference.html#reference_d4k_jpy_lk)**
 
--   **[PARTITION\_PERSISTENT\_OVERFLOW](../../reference/topics/region_shortcuts_reference.html#reference_v5l_jpy_lk)**
+-   **[PARTITION\_PERSISTENT\_OVERFLOW](region_shortcuts_reference.html#reference_v5l_jpy_lk)**
 
--   **[PARTITION\_PROXY](../../reference/topics/region_shortcuts_reference.html#reference_v4m_jpy_lk)**
+-   **[PARTITION\_PROXY](region_shortcuts_reference.html#reference_v4m_jpy_lk)**
 
--   **[PARTITION\_PROXY\_REDUNDANT](../../reference/topics/region_shortcuts_reference.html#reference_c1n_jpy_lk)**
+-   **[PARTITION\_PROXY\_REDUNDANT](region_shortcuts_reference.html#reference_c1n_jpy_lk)**
 
--   **[PARTITION\_REDUNDANT](../../reference/topics/region_shortcuts_reference.html#reference_shn_jpy_lk)**
+-   **[PARTITION\_REDUNDANT](region_shortcuts_reference.html#reference_shn_jpy_lk)**
 
--   **[PARTITION\_REDUNDANT\_HEAP\_LRU](../../reference/topics/region_shortcuts_reference.html#reference_m4n_jpy_lk)**
+-   **[PARTITION\_REDUNDANT\_HEAP\_LRU](region_shortcuts_reference.html#reference_m4n_jpy_lk)**
 
--   **[PARTITION\_REDUNDANT\_OVERFLOW](../../reference/topics/region_shortcuts_reference.html#reference_own_jpy_lk)**
+-   **[PARTITION\_REDUNDANT\_OVERFLOW](region_shortcuts_reference.html#reference_own_jpy_lk)**
 
--   **[PARTITION\_REDUNDANT\_PERSISTENT](../../reference/topics/region_shortcuts_reference.html#reference_bd4_jpy_lk)**
+-   **[PARTITION\_REDUNDANT\_PERSISTENT](region_shortcuts_reference.html#reference_bd4_jpy_lk)**
 
--   **[PARTITION\_REDUNDANT\_PERSISTENT\_OVERFLOW](../../reference/topics/region_shortcuts_reference.html#reference_xqq_tvc_lk)**
+-   **[PARTITION\_REDUNDANT\_PERSISTENT\_OVERFLOW](region_shortcuts_reference.html#reference_xqq_tvc_lk)**
 
--   **[REPLICATE](../../reference/topics/region_shortcuts_reference.html#reference_wq4_jpy_lk)**
+-   **[REPLICATE](region_shortcuts_reference.html#reference_wq4_jpy_lk)**
 
--   **[REPLICATE\_HEAP\_LRU](../../reference/topics/region_shortcuts_reference.html#reference_xx4_jpy_lk)**
+-   **[REPLICATE\_HEAP\_LRU](region_shortcuts_reference.html#reference_xx4_jpy_lk)**
 
--   **[REPLICATE\_OVERFLOW](../../reference/topics/region_shortcuts_reference.html#reference_t2p_jpy_lk)**
+-   **[REPLICATE\_OVERFLOW](region_shortcuts_reference.html#reference_t2p_jpy_lk)**
 
--   **[REPLICATE\_PERSISTENT](../../reference/topics/region_shortcuts_reference.html#reference_emp_jpy_lk)**
+-   **[REPLICATE\_PERSISTENT](region_shortcuts_reference.html#reference_emp_jpy_lk)**
 
--   **[REPLICATE\_PERSISTENT\_OVERFLOW](../../reference/topics/region_shortcuts_reference.html#reference_tsp_jpy_lk)**
+-   **[REPLICATE\_PERSISTENT\_OVERFLOW](region_shortcuts_reference.html#reference_tsp_jpy_lk)**
 
--   **[REPLICATE\_PROXY](../../reference/topics/region_shortcuts_reference.html#reference_n1q_jpy_lk)**
+-   **[REPLICATE\_PROXY](region_shortcuts_reference.html#reference_n1q_jpy_lk)**
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/reference/topics/client-cache-elements-list.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/reference/topics/client-cache-elements-list.html.md.erb b/geode-docs/reference/topics/client-cache-elements-list.html.md.erb
index cd99f65..0d26303 100644
--- a/geode-docs/reference/topics/client-cache-elements-list.html.md.erb
+++ b/geode-docs/reference/topics/client-cache-elements-list.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-This section shows the hierarchy of `<client-cache>` element sub-elements that you use to configure Geode caches and clients.
+This section shows the hierarchy of `<client-cache>` element sub-elements that you use to configure <%=vars.product_name%> caches and clients.
 
 For details, see [&lt;client-cache&gt; Element Reference.](client-cache.html)
 

http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/reference/topics/client-cache.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/reference/topics/client-cache.html.md.erb b/geode-docs/reference/topics/client-cache.html.md.erb
index 99e5b39..83043f7 100644
--- a/geode-docs/reference/topics/client-cache.html.md.erb
+++ b/geode-docs/reference/topics/client-cache.html.md.erb
@@ -20,9 +20,9 @@ limitations under the License.
 -->
 <a id="cc-client-cache"></a>
 
-This section documents all `cache.xml` elements that you use to configure Geode clients. All elements are sub-elements of the `<client-cache>` element.
+This section documents all `cache.xml` elements that you use to configure <%=vars.product_name%> clients. All elements are sub-elements of the `<client-cache>` element.
 
-For Geode server configuration, see [&lt;cache&gt; Element Reference](cache_xml.html).
+For <%=vars.product_name%> server configuration, see [&lt;cache&gt; Element Reference](cache_xml.html).
 
 API: `org.apache.geode.cache.client.ClientCacheFactory` and `PoolFactory` interfaces.
 
@@ -511,7 +511,7 @@ Specifies the configuration for the Portable Data eXchange (PDX) method of seria
 
 ## <a id="cc-pdx-serializer" class="no-quick-link"></a>&lt;pdx-serializer&gt;
 
-Allows you to configure the PdxSerializer for this Geode member.
+Allows you to configure the PdxSerializer for this <%=vars.product_name%> member.
 
 Specify the Java class and its initialization parameters with the `<class-name>` and `<parameter>` sub-elements. See [&lt;class-name&gt; and &lt;parameter&gt;](cache_xml.html#class-name_parameter).
 
@@ -557,7 +557,7 @@ Specifies a region attributes template that can be named (by `id`) and reference
 <tbody>
 <tr class="odd">
 <td>concurrency-level</td>
-<td>Gives an estimate of the maximum number of application threads that will concurrently access a region entry at one time. This attribute does not apply to partitioned regions. This attribute helps Geode optimize the use of system resources and reduce thread contention. This sets an initial parameter on the underlying <code class="ph codeph">java.util.ConcurrentHashMap</code> used for storing region entries.
+<td>Gives an estimate of the maximum number of application threads that will concurrently access a region entry at one time. This attribute does not apply to partitioned regions. This attribute helps <%=vars.product_name%> optimize the use of system resources and reduce thread contention. This sets an initial parameter on the underlying <code class="ph codeph">java.util.ConcurrentHashMap</code> used for storing region entries.
 <div class="note note">
 **Note:**
 <p>Before you modify this, read the concurrency level description, then see the Java API documentation for <code class="ph codeph">java.util.ConcurrentHashMap</code>.</p>
@@ -659,7 +659,7 @@ Used only with GemFire version 6.x gateway configurations. For GemFire 7.0 confi
 </tr>
 <tr class="even">
 <td>gateway-sender-ids</td>
-<td><p>Specifies one or more gateway sender IDs to use for distributing region events to remote Geode sites. Specify multiple IDs as a comma-separated list.</p>
+<td><p>Specifies one or more gateway sender IDs to use for distributing region events to remote <%=vars.product_name%> sites. Specify multiple IDs as a comma-separated list.</p>
 <p><strong>API:</strong> <code class="ph codeph">addGatewaySenderId</code></p>
 <p><strong>Example:</strong></p>
 <pre class="pre codeblock language-xml"><code>&lt;region-attributes 
@@ -836,7 +836,7 @@ Used only with GemFire version 6.x gateway configurations. For GemFire 7.0 confi
 <td><p>Definition: Determines how updates to region entries are distributed to the other caches in the distributed system where the region and entry are defined. Scope also determines whether to allow remote invocation of some of the region’s event handlers, and whether to use region entry versions to provide consistent updates across replicated regions.</p>
 <div class="note note">
 **Note:**
-<p>You can configure the most common of these options with Geode’s region shortccuts in <code class="ph codeph">RegionShortcut</code> and <code class="ph codeph">ClientRegionShortcut</code>.</p>
+<p>You can configure the most common of these options with <%=vars.product_name%> region shortcuts in <code class="ph codeph">RegionShortcut</code> and <code class="ph codeph">ClientRegionShortcut</code>.</p>
 </div>
 <div class="note note">
 **Note:**
@@ -874,7 +874,7 @@ Used only with GemFire version 6.x gateway configurations. For GemFire 7.0 confi
 </tr>
 <tr class="even">
 <td>statistics-enabled</td>
-<td>Boolean specifying whether to gather statistics on the region. Must be true to use expiration on the region. Geode provides a standard set of statistics for cached regions and region entries, which give you information for fine-tuning your distributed system. Unlike other Geode statistics, statistics for local and distributed regions are not archived and cannot be charted. They are kept in instances of <code class="ph codeph">org.apache.geode.cache.CacheStatistics</code> and made available through the region and its entries through the <code class="ph codeph">Region.getStatistics</code> and <code class="ph codeph">Region.Entry.getStatistics</code> methods.
+<td>Boolean specifying whether to gather statistics on the region. Must be true to use expiration on the region. <%=vars.product_name%> provides a standard set of statistics for cached regions and region entries, which give you information for fine-tuning your distributed system. Unlike other <%=vars.product_name%> statistics, statistics for local and distributed regions are not archived and cannot be charted. They are kept in instances of <code class="ph codeph">org.apache.geode.cache.CacheStatistics</code> and made available through the region and its entries through the <code class="ph codeph">Region.getStatistics</code> and <code class="ph codeph">Region.Entry.getStatistics</code> methods.
 <p><strong>API:</strong> <code class="ph codeph">setStatisticsEnabled</code></p>
 <p><strong>Example:</strong></p>
 <pre class="pre codeblock language-xml"><code>&lt;region-attributes 
@@ -897,7 +897,7 @@ Used only with GemFire version 6.x gateway configurations. For GemFire 7.0 confi
 <td><p>Determines whether members perform checks to provide consistent handling for concurrent or out-of-order updates to distributed regions. See [Consistency for Region Updates](../../developing/distributed_regions/region_entry_versions.html#topic_CF2798D3E12647F182C2CEC4A46E2045).</p>
 <div class="note note">
 **Note:**
-<p>Applications that use a <code class="ph codeph">client-cache</code> may want to disable concurrency checking in order to see all events for a region. Geode server members can continue using concurrency checks for the region, but they will pass all events to the client cache. This configuration ensures that the client sees all events, but it does not prevent the client cache from becoming out-of-sync with the server cache.</p>
+<p>Applications that use a <code class="ph codeph">client-cache</code> may want to disable concurrency checking in order to see all events for a region. <%=vars.product_name%> server members can continue using concurrency checks for the region, but they will pass all events to the client cache. This configuration ensures that the client sees all events, but it does not prevent the client cache from becoming out-of-sync with the server cache.</p>
 </div>
 <p><strong>API:</strong> <code class="ph codeph">setConcurrencyChecksEnabled</code></p>
 <p><strong>Example:</strong></p>
@@ -1483,7 +1483,7 @@ An event-handler plug-in that allows you to receive before-event notification fo
 
 ## <a id="cc-cache-listener" class="no-quick-link"></a>&lt;cache-listener&gt;
 
-An event-handler plug-in that receives after-event notification of changes to the region and its entries. Any number of cache listeners can be defined for a region in any member. Geode offers several listener types with callbacks to handle data and process events. Depending on the `data-policy` and the `interest-policy` subscription attributes, a cache listener may receive only events that originate in the local cache, or it may receive those events along with events that originate remotely.
+An event-handler plug-in that receives after-event notification of changes to the region and its entries. Any number of cache listeners can be defined for a region in any member. <%=vars.product_name%> offers several listener types with callbacks to handle data and process events. Depending on the `data-policy` and the `interest-policy` subscription attributes, a cache listener may receive only events that originate in the local cache, or it may receive those events along with events that originate remotely.
 
 Specify the Java class for the cache listener and its initialization parameters with the `<class-name>` and `<parameter>` sub-elements. See [&lt;class-name&gt; and &lt;parameter&gt;](cache_xml.html#class-name_parameter).
 
@@ -1503,7 +1503,7 @@ Specify the Java class for the cache listener and its initialization parameters
 
 ## <a id="cc-eviction-attributes" class="no-quick-link"></a>&lt;eviction-attributes&gt;
 
-Specifies whether and how to control a region’s size. Size is controlled by removing least recently used (LRU) entries to make space for new ones. This may be done through destroy or overflow actions. You can configure your region for lru-heap-percentage with an eviction action of local-destroy using Geode’s stored region attributes.
+Specifies whether and how to control a region’s size. Size is controlled by removing least recently used (LRU) entries to make space for new ones. This may be done through destroy or overflow actions. You can configure your region for lru-heap-percentage with an eviction action of local-destroy using stored region attributes.
 
 **Default:** Uses the lru-entry-count algorithm.
 
@@ -1573,7 +1573,7 @@ Using the maximum attribute, specifies maximum region capacity based on entry co
 
 ## <a id="cc-lru-heap-percentage" class="no-quick-link"></a>&lt;lru-heap-percentage&gt;
 
-Runs evictions when the Geode resource manager says to. The manager orders evictions when the total cache size is over the heap percentage limit specified in the manager configuration. You can declare a Java class that implements the ObjectSizer interface to measure the size of objects in the Region.
+Runs evictions when the <%=vars.product_name%> resource manager says to. The manager orders evictions when the total cache size is over the heap percentage limit specified in the manager configuration. You can declare a Java class that implements the ObjectSizer interface to measure the size of objects in the Region.
 
 Specify the Java class and its initialization parameters with the `<class-name>` and `<parameter>` sub-elements. See [&lt;class-name&gt; and &lt;parameter&gt;](cache_xml.html#class-name_parameter).
 
@@ -1710,7 +1710,7 @@ Specifies the binding for a data-source used in transaction management. See [Con
 
 ## <a id="cc-jndi-binding" class="no-quick-link"></a>&lt;jndi-binding&gt;
 
-For every datasource that is bound to the JNDI tree, there should be one `<jndi-binding>` element. This element describes the property and the configuration of the datasource. Geode uses the attributes of the `<jndi-binding>` element for configuration. Use the `<config-property>` element to configure properties for the datasource.
+For every datasource that is bound to the JNDI tree, there should be one `<jndi-binding>` element. This element describes the property and the configuration of the datasource. <%=vars.product_name%> uses the attributes of the `<jndi-binding>` element for configuration. Use the `<config-property>` element to configure properties for the datasource.
 
 We recommend that you set the username and password with the `user-name` and `password` jndi-binding attributes rather than using the `<config-property>` element.
 
@@ -1767,7 +1767,7 @@ We recommend that you set the username and password with the `user-name` and `pa
 </tr>
 <tr class="odd">
 <td>jndi-name</td>
-<td>The <code class="ph codeph">jndi-name</code> attribute is the key binding parameter. If the value of jndi-name is a DataSource, it is bound as java:/myDatabase, where myDatabase is the name you assign to your data source. If the data source cannot be bound to JNDI at runtime, Geode logs a warning.</td>
+<td>The <code class="ph codeph">jndi-name</code> attribute is the key binding parameter. If the value of jndi-name is a DataSource, it is bound as java:/myDatabase, where myDatabase is the name you assign to your data source. If the data source cannot be bound to JNDI at runtime, <%=vars.product_name%> logs a warning.</td>
 <td> </td>
 </tr>
 <tr class="even">
@@ -1801,7 +1801,7 @@ We recommend that you set the username and password with the `user-name` and `pa
 <tbody>
 <tr class="odd">
 <td>XATransaction</td>
-<td>Select this option when you want to use a<span class="keyword apiname">ManagedConnection</span> interface with a Java Transaction Manager to define transaction boundries. This option allows a <span class="keyword apiname">ManagedDataSource</span> to participate in a transaction with a Geode cache.</td>
+<td>Select this option when you want to use a <span class="keyword apiname">ManagedConnection</span> interface with a Java Transaction Manager to define transaction boundaries. This option allows a <span class="keyword apiname">ManagedDataSource</span> to participate in a transaction with a <%=vars.product_name%> cache.</td>
 </tr>
 <tr class="even">
 <td>NoTransaction</td>
@@ -1956,7 +1956,7 @@ Specifies a region attributes template that can be named (by `id`) and reference
 <tbody>
 <tr class="odd">
 <td>concurrency-level</td>
-<td>Gives an estimate of the maximum number of application threads that will concurrently access a region entry at one time. This attribute does not apply to partitioned regions. This attribute helps Geode optimize the use of system resources and reduce thread contention. This sets an initial parameter on the underlying <code class="ph codeph">java.util.ConcurrentHashMap</code> used for storing region entries.
+<td>Gives an estimate of the maximum number of application threads that will concurrently access a region entry at one time. This attribute does not apply to partitioned regions. This attribute helps <%=vars.product_name%> optimize the use of system resources and reduce thread contention. This sets an initial parameter on the underlying <code class="ph codeph">java.util.ConcurrentHashMap</code> used for storing region entries.
 <div class="note note">
 **Note:**
 <p>Before you modify this, read the concurrency level description, then see the Java API documentation for <code class="ph codeph">java.util.ConcurrentHashMap</code>.</p>
@@ -2058,7 +2058,7 @@ Used only with GemFire version 6.x gateway configurations. For GemFire 7.0 confi
 </tr>
 <tr class="even">
 <td>gateway-sender-ids</td>
-<td><p>Specifies one or more gateway sender IDs to use for distributing region events to remote Geode sites. Specify multiple IDs as a comma-separated list.</p>
+<td><p>Specifies one or more gateway sender IDs to use for distributing region events to remote <%=vars.product_name%> sites. Specify multiple IDs as a comma-separated list.</p>
 <p><strong>API:</strong> <code class="ph codeph">addGatewaySenderId</code></p>
 <p><strong>Example:</strong></p>
 <pre class="pre codeblock language-xml"><code>&lt;region-attributes 
@@ -2235,7 +2235,7 @@ Used only with GemFire version 6.x gateway configurations. For GemFire 7.0 confi
 <td><p>Definition: Determines how updates to region entries are distributed to the other caches in the distributed system where the region and entry are defined. Scope also determines whether to allow remote invocation of some of the region’s event handlers, and whether to use region entry versions to provide consistent updates across replicated regions.</p>
 <div class="note note">
 **Note:**
-<p>You can configure the most common of these options with Geode’s region shortccuts in <code class="ph codeph">RegionShortcut</code> and <code class="ph codeph">ClientRegionShortcut</code>.</p>
+<p>You can configure the most common of these options with region shortcuts in <code class="ph codeph">RegionShortcut</code> and <code class="ph codeph">ClientRegionShortcut</code>.</p>
 </div>
 <div class="note note">
 **Note:**
@@ -2273,7 +2273,7 @@ Used only with GemFire version 6.x gateway configurations. For GemFire 7.0 confi
 </tr>
 <tr class="even">
 <td>statistics-enabled</td>
-<td>Boolean specifying whether to gather statistics on the region. Must be true to use expiration on the region. Geode provides a standard set of statistics for cached regions and region entries, which give you information for fine-tuning your distributed system. Unlike other Geode statistics, statistics for local and distributed regions are not archived and cannot be charted. They are kept in instances of <code class="ph codeph">org.apache.geode.cache.CacheStatistics</code> and made available through the region and its entries through the <code class="ph codeph">Region.getStatistics</code> and <code class="ph codeph">Region.Entry.getStatistics</code> methods.
+<td>Boolean specifying whether to gather statistics on the region. Must be true to use expiration on the region. <%=vars.product_name%> provides a standard set of statistics for cached regions and region entries, which give you information for fine-tuning your distributed system. Unlike other <%=vars.product_name%> statistics, statistics for local and distributed regions are not archived and cannot be charted. They are kept in instances of <code class="ph codeph">org.apache.geode.cache.CacheStatistics</code> and made available for the region and its entries through the <code class="ph codeph">Region.getStatistics</code> and <code class="ph codeph">Region.Entry.getStatistics</code> methods.
 <p><strong>API:</strong> <code class="ph codeph">setStatisticsEnabled</code></p>
 <p><strong>Example:</strong></p>
 <pre class="pre codeblock language-xml"><code>&lt;region-attributes 
@@ -2296,7 +2296,7 @@ Used only with GemFire version 6.x gateway configurations. For GemFire 7.0 confi
 <td><p>Determines whether members perform checks to provide consistent handling for concurrent or out-of-order updates to distributed regions. See [Consistency for Region Updates](../../developing/distributed_regions/region_entry_versions.html#topic_CF2798D3E12647F182C2CEC4A46E2045).</p>
 <div class="note note">
 **Note:**
-<p>Applications that use a <code class="ph codeph">client-cache</code> may want to disable concurrency checking in order to see all events for a region. Geode server members can continue using concurrency checks for the region, but they will pass all events to the client cache. This configuration ensures that the client sees all events, but it does not prevent the client cache from becoming out-of-sync with the server cache.</p>
+<p>Applications that use a <code class="ph codeph">client-cache</code> may want to disable concurrency checking in order to see all events for a region. <%=vars.product_name%> server members can continue using concurrency checks for the region, but they will pass all events to the client cache. This configuration ensures that the client sees all events, but it does not prevent the client cache from becoming out-of-sync with the server cache.</p>
 </div>
 <p><strong>API:</strong> <code class="ph codeph">setConcurrencyChecksEnabled</code></p>
 <p><strong>Example:</strong></p>
@@ -2637,7 +2637,7 @@ Set of serializer or instantiator tags to register customer DataSerializer exten
 
 ## <a id="cc-serializer" class="no-quick-link"></a>&lt;serializer&gt;
 
-Allows you to configure the DataSerializer for this Geode member. It registers a custom class which extends DataSerializer to support custom serialization of non-modifiable object types inside Geode.
+Allows you to configure the DataSerializer for this <%=vars.product_name%> member. It registers a custom class which extends DataSerializer to support custom serialization of non-modifiable object types inside <%=vars.product_name%>.
 
 Specify the Java class for the `DataSerializer` and its initialization parameters with the `<class-name>` sub-element.
 
@@ -2645,7 +2645,7 @@ Specify the Java class for the `DataSerializer` and its initialization parameter
 
 ## <a id="cc-instantiator" class="no-quick-link"></a>&lt;instantiator&gt;
 
-An Instantiator registers a custom class which implements the `DataSerializable` interface to support custom object serialization inside Geode.
+An Instantiator registers a custom class which implements the `DataSerializable` interface to support custom object serialization inside <%=vars.product_name%>.
 
 Specify the Java class and its initialization parameters with the `<class-name>` sub-element.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/reference/topics/gemfire_properties.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/reference/topics/gemfire_properties.html.md.erb b/geode-docs/reference/topics/gemfire_properties.html.md.erb
index 238803e..a226618 100644
--- a/geode-docs/reference/topics/gemfire_properties.html.md.erb
+++ b/geode-docs/reference/topics/gemfire_properties.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  gemfire.properties and gfsecurity.properties (Geode Properties)
----
+<% set_title("gemfire.properties and gfsecurity.properties:", product_name, "Properties") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -19,13 +17,13 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-You use the `gemfire.properties` settings to join a distributed system and configure system member behavior. Distributed system members include applications, the cache server, the locator, and other Geode processes.
+You use the `gemfire.properties` settings to join a distributed system and configure system member behavior. Distributed system members include applications, the cache server, the locator, and other <%=vars.product_name%> processes.
 
 You can place any security-related (properties that begin with `security-*`) configuration properties in `gemfire.properties` into a separate `gfsecurity.properties` file. Placing these configuration settings in a separate file allows you to restrict access to security configuration data. This way, you can still allow read or write access for your `gemfire.properties` file.
 
 You can also define provider-specific properties ("ssl" properties) in `gfsecurity.properties` instead of defining them at the command-line or in your environment.
 
-You can specify non-ASCII text in your properties files by using Unicode escape sequences. See [Using Non-ASCII Strings in Apache Geode Property Files](non-ascii_strings_in_config_files.html) for more details.
+You can specify non-ASCII text in your properties files by using Unicode escape sequences. See [Using Non-ASCII Strings in <%=vars.product_name_long%> Property Files](non-ascii_strings_in_config_files.html) for more details.
 
 **Note:**
 Unless otherwise indicated, these settings only affect activities within this distributed system - not activities between clients and servers or between a gateway sender and gateway receiver in a multi-site installation.
@@ -106,7 +104,7 @@ Valid values are in the range 0...2147483647</td>
 </tr>
 <tr class="even">
 <td>bind-address</td>
-<td>Relevant only for multi-homed hosts - machines with multiple network interface cards. Specifies the adapter card the cache binds to for peer-to-peer communication. Also specifies the default location for Geode servers to listen on, which is used unless overridden by the <code class="ph codeph">server-bind-address</code>. An empty string causes the member to listen on the default card for the machine. This is a machine-wide attribute used for system member and client/server communication. It has no effect on locator location, unless the locator is embedded in a member process.
+<td>Relevant only for multi-homed hosts - machines with multiple network interface cards. Specifies the adapter card the cache binds to for peer-to-peer communication. Also specifies the default location for <%=vars.product_name%> servers to listen on, which is used unless overridden by the <code class="ph codeph">server-bind-address</code>. An empty string causes the member to listen on the default card for the machine. This is a machine-wide attribute used for system member and client/server communication. It has no effect on locator location, unless the locator is embedded in a member process.
 <p>Specify the IP address, not the hostname, because each network card may not have a unique hostname. An empty string (the default) causes the member to listen on the default card for the machine.</p></td>
 <td>S, L</td>
 <td><em>not set</em></td>
@@ -131,7 +129,7 @@ Valid values are in the range 0...2147483647</td>
 </tr>
 <tr class="odd">
 <td>conserve-sockets</td>
-<td>Specifies whether sockets are shared by the system member’s threads. If true, threads share, and a minimum number of sockets are used to connect to the distributed system. If false, every application thread has its own sockets for distribution purposes. You can override this setting for individual threads inside your application. Where possible, it is better to set conserve-sockets to true and enable the use of specific extra sockets in the application code if needed. WAN deployments increase the messaging demands on a Geode system. To avoid hangs related to WAN messaging, always set <code class="ph codeph">conserve-sockets=false</code> for Geode members that participate in a WAN deployment.</td>
+<td>Specifies whether sockets are shared by the system member’s threads. If true, threads share, and a minimum number of sockets are used to connect to the distributed system. If false, every application thread has its own sockets for distribution purposes. You can override this setting for individual threads inside your application. Where possible, it is better to set conserve-sockets to true and enable the use of specific extra sockets in the application code if needed. WAN deployments increase the messaging demands on a <%=vars.product_name%> system. To avoid hangs related to WAN messaging, always set <code class="ph codeph">conserve-sockets=false</code> for <%=vars.product_name%> members that participate in a WAN deployment.</td>
 <td>S, L</td>
 <td>true</td>
 </tr>
@@ -144,13 +142,13 @@ Valid values are in the range 0...2147483647</td>
 <tr class="odd">
 <td>deploy-working-dir</td>
 <td>Working directory used when deploying JAR application files to distributed system members. This directory can be local and unique to the member or a shared resource. 
-See <a href="../../configuring/cluster_config/deploying_application_jars.html">Deploying Application JARs to Apache Geode Members</a> for more information.</td>
+See <a href="../../configuring/cluster_config/deploying_application_jars.html">Deploying Application JARs to <%=vars.product_name_long%> Members</a> for more information.</td>
 <td>S</td>
 <td>. (current directory)</td>
 </tr>
 <tr class="even">
 <td>disable-auto-reconnect</td>
-<td>By default, a Geode member (both locators and servers) will attempt to reconnect and reinitialize the cache after it has been forced out of the distributed system by a network partition event or has otherwise been shunned by other members. Use this property to turn off the autoreconnect behavior. 
+<td>By default, a <%=vars.product_name%> member (both locators and servers) will attempt to reconnect and reinitialize the cache after it has been forced out of the distributed system by a network partition event or has otherwise been shunned by other members. Use this property to turn off the autoreconnect behavior. 
 See <a href="../../managing/autoreconnect/member-reconnect.html">Handling Forced Cache Disconnection Using Autoreconnect</a> for more details.</td>
 <td>S, L</td>
 <td>false</td>
@@ -205,7 +203,7 @@ This setting must be the same for every member of a given distributed system and
 </tr>
 <tr class="even">
 <td>enforce-unique-host</td>
-<td>Whether partitioned regions will put redundant copies of the same data in different members running on the same physical machine. By default, Geode tries to put redundant copies on different machines, but it will put them on the same machine if no other machines are available. Setting this property to true prevents this and requires different machines for redundant copies.</td>
+<td>Whether partitioned regions will put redundant copies of the same data in different members running on the same physical machine. By default, <%=vars.product_name%> tries to put redundant copies on different machines, but it will put them on the same machine if no other machines are available. Setting this property to true prevents this and requires different machines for redundant copies.</td>
 <td>S</td>
 <td>false</td>
 </tr>
@@ -218,13 +216,13 @@ See <a href="../../configuring/cluster_config/using_member_groups.html">Using Me
 </tr>
 <tr class="even">
 <td>http-service-bind-address</td>
-<td>If set, then the Geode member binds the embedded HTTP service to the specified address. If this property is not set but the HTTP service is enabled using <code class="ph codeph">http-service-port</code>, then Geode binds the HTTP service to the member's local address. Used by the Geode Pulse Web application and the developer REST API service.</td>
+<td>If set, then the <%=vars.product_name%> member binds the embedded HTTP service to the specified address. If this property is not set but the HTTP service is enabled using <code class="ph codeph">http-service-port</code>, then <%=vars.product_name%> binds the HTTP service to the member's local address. Used by the <%=vars.product_name%> Pulse Web application and the developer REST API service.</td>
 <td>S</td>
 <td><em>not set</em></td>
 </tr>
 <tr class="odd">
 <td>http-service-port</td>
-<td>If non-zero, then Geode starts an embedded HTTP service that listens on this port. The HTTP service is used to host the Geode Pulse Web application and the development REST API service. If you are hosting the Pulse web app on your own Web server and are not using the development REST API service, then disable this embedded HTTP service by setting this property to zero. Ignored if <code class="ph codeph">jmx-manager</code> and <code class="ph codeph">start-dev-rest-api</code> are both set to false.</td>
+<td>If non-zero, then <%=vars.product_name%> starts an embedded HTTP service that listens on this port. The HTTP service is used to host the <%=vars.product_name%> Pulse Web application and the development REST API service. If you are hosting the Pulse web app on your own Web server and are not using the development REST API service, then disable this embedded HTTP service by setting this property to zero. Ignored if <code class="ph codeph">jmx-manager</code> and <code class="ph codeph">start-dev-rest-api</code> are both set to false.</td>
 <td>S</td>
 <td>7070</td>
 </tr>
@@ -254,7 +252,7 @@ See <a href="../../configuring/cluster_config/using_member_groups.html">Using Me
 </tr>
 <tr class="even">
 <td>jmx-manager-port</td>
-<td>The port this JMX Manager will listen to for client connections. If this property is set to zero then Geode will not allow remote client connections but you can alternatively use the standard system properties supported by the JVM for configuring access from remote JMX clients. Ignored if <code class="ph codeph">jmx-manager</code> is false.</td>
+<td>The port this JMX Manager will listen to for client connections. If this property is set to zero then <%=vars.product_name%> will not allow remote client connections but you can alternatively use the standard system properties supported by the JVM for configuring access from remote JMX clients. Ignored if <code class="ph codeph">jmx-manager</code> is false.</td>
 <td>S, L</td>
 <td>1099</td>
 </tr>
@@ -266,7 +264,7 @@ See <a href="../../configuring/cluster_config/using_member_groups.html">Using Me
 </tr>
 <tr class="even">
 <td>jmx-manager-update-rate</td>
-<td>The rate, in milliseconds, at which this member will push updates to any JMX Managers. Currently this value should be greater than or equal to the statistic-sample-rate. Setting this value too high will cause stale values to be seen by gfsh and Geode Pulse.</td>
+<td>The rate, in milliseconds, at which this member will push updates to any JMX Managers. Currently this value should be greater than or equal to the statistic-sample-rate. Setting this value too high will cause stale values to be seen by gfsh and <%=vars.product_name%> Pulse.</td>
 <td>S, L</td>
 <td>2000</td>
 </tr>
@@ -348,7 +346,7 @@ See <a href="../../configuring/cluster_config/using_member_groups.html">Using Me
 <td>mcast-address</td>
 <td>Address used to discover other members of the distributed system. Only used if mcast-port is non-zero. This attribute must be consistent across the distributed system. Select different multicast addresses and different ports for different distributed systems. Do not just use different addresses. Some operating systems may not keep communication separate between systems that use unique addresses but the same port number.
 <p>This default multicast address was assigned by IANA
-(<a href="http://www.iana.org/assignments/multicast-addresses">http://www.iana.org/assignments/multicast-addresses</a>). Consult the IANA chart when selecting another multicast address to use with Geode.</p>
+(<a href="http://www.iana.org/assignments/multicast-addresses">http://www.iana.org/assignments/multicast-addresses</a>). Consult the IANA chart when selecting another multicast address to use with <%=vars.product_name%>.</p>
 <div class="note note">
 **Note:**
 <p>This setting controls only peer-to-peer communication and does not apply to client/server or multi-site communication. If multicast is enabled, distributed regions use it for most communication. Partitioned regions only use multicast for a few purposes, and mainly use either TCP or UDP unicast.</p>
@@ -422,7 +420,7 @@ See <a href="../../configuring/cluster_config/using_member_groups.html">Using Me
 </tr>
 <tr class="even">
 <td>member-timeout</td>
-<td>Geode uses the <code class="ph codeph">member-timeout</code> server configuration, specified in milliseconds, to detect the abnormal termination of members. The configuration setting is used in two ways: 1) First it is used during the UDP heartbeat detection process. When a member detects that a heartbeat datagram is missing from the member that it is monitoring after the time interval of 2 * the value of <code class="ph codeph">member-timeout</code>, the detecting member attempts to form a TCP/IP stream-socket connection with the monitored member as described in the next case. 2) The property is then used again during the TCP/IP stream-socket connection. If the suspected process does not respond to the <em>are you alive</em> datagram within the time period specified in <code class="ph codeph">member-timeout</code>, the membership coordinator sends out a new membership view that notes the member's failure.
+<td><%=vars.product_name%> uses the <code class="ph codeph">member-timeout</code> server configuration, specified in milliseconds, to detect the abnormal termination of members. The configuration setting is used in two ways: 1) First it is used during the UDP heartbeat detection process. When a member detects that a heartbeat datagram is missing from the member that it is monitoring after the time interval of 2 * the value of <code class="ph codeph">member-timeout</code>, the detecting member attempts to form a TCP/IP stream-socket connection with the monitored member as described in the next case. 2) The property is then used again during the TCP/IP stream-socket connection. If the suspected process does not respond to the <em>are you alive</em> datagram within the time period specified in <code class="ph codeph">member-timeout</code>, the membership coordinator sends out a new membership view that notes the member's failure.
 <p>Valid values are in the range 1000..600000.</p></td>
 <td>S, L</td>
 <td>5000</td>
@@ -430,10 +428,10 @@ See <a href="../../configuring/cluster_config/using_member_groups.html">Using Me
 <tr class="odd">
 <td>membership-port-range</td>
 <td>The range of ports available for unicast UDP messaging and for TCP failure detection. This is specified as two integers separated by a hyphen. Different members can use different ranges.
-<p>Geode randomly chooses at least two unique integers from this range for the member, one for UDP unicast messaging and the other for TCP failure detection messaging. If tcp-port is configured to 0, it will also randomly select a port from this range for TCP sockets used for peer-to-peer communication only.</p>
+<p><%=vars.product_name%> randomly chooses at least two unique integers from this range for the member, one for UDP unicast messaging and the other for TCP failure detection messaging. If tcp-port is configured to 0, it will also randomly select a port from this range for TCP sockets used for peer-to-peer communication only.</p>
 <p>Therefore, the specified range must include at least three available port numbers (UDP, FD_SOCK, and TCP DirectChannel).</p>
 <p>The system uniquely identifies the member using the combined host IP address and UDP port number.</p>
-<p>You may want to restrict the range of ports that Geode uses so the product can run in an environment where routers only allow traffic on certain ports.</p></td>
+<p>You may want to restrict the range of ports that <%=vars.product_name%> uses so the product can run in an environment where routers only allow traffic on certain ports.</p></td>
 <td>S, L</td>
 <td>1024-65535</td>
 </tr>
@@ -465,7 +463,7 @@ off-heap-memory-size=120g</code></pre></td>
 </tr>
 <tr class="even">
 <td>redundancy-zone</td>
-<td>Defines this member's redundancy zone. Used to separate member's into different groups for satisfying partitioned region redundancy. If this property is set, Geode will not put redundant copies of data in members with the same redundancy zone setting. 
+<td>Defines this member's redundancy zone. Used to separate members into different groups for satisfying partitioned region redundancy. If this property is set, <%=vars.product_name%> will not put redundant copies of data in members with the same redundancy zone setting. 
 See <a href="../../developing/partitioned_regions/configuring_ha_for_pr.html">Configure High Availability for a Partitioned Region</a> for more details.</td>
 <td>S</td>
 <td><em>not set</em></td>
@@ -569,7 +567,7 @@ Any security-related (properties that begin with <code class="ph codeph">securit
 </tr>
 <tr class="even">
 <td>server-bind-address</td>
-<td>Relevant only for multi-homed hosts - machines with multiple network interface cards. Network adapter card a Geode server binds to for client/server communication. You can use this to separate the server’s client/server communication from its peer-to-peer communication, spreading the traffic load.
+<td>Relevant only for multi-homed hosts - machines with multiple network interface cards. Network adapter card a <%=vars.product_name%> server binds to for client/server communication. You can use this to separate the server’s client/server communication from its peer-to-peer communication, spreading the traffic load.
 <p>This is a machine-wide attribute used for communication with clients in client/server and multi-site installations. This setting has no effect on locator configuration.</p>
 <p>Specify the IP address, not the hostname, because each network card may not have a unique hostname.</p>
 <p>An empty string causes the servers to listen on the same card used for peer-to-peer communication. This is either the <code class="ph codeph">bind-address</code> or, if that is not set, the machine’s default card.</p></td>
@@ -703,14 +701,14 @@ If you only specify the port, the address assigned to the member is used for the
 </tr>
 <tr class="odd">
 <td>tcp-port</td>
-<td>The TCP port to listen on for cache communications. If set to zero, the operating system selects an available port. Each process on a machine must have its own TCP port. Note that some operating systems restrict the range of ports usable by non-privileged users, and using restricted port numbers can cause runtime errors in Geode startup.
+<td>The TCP port to listen on for cache communications. If set to zero, the operating system selects an available port. Each process on a machine must have its own TCP port. Note that some operating systems restrict the range of ports usable by non-privileged users, and using restricted port numbers can cause runtime errors in <%=vars.product_name%> startup.
 <p>Valid values are in the range 0..65535.</p></td>
 <td>S, L</td>
 <td>0</td>
 </tr>
 <tr class="even">
 <td>tombstone-gc-threshold</td>
-<td>The number of tombstones that can accumulate before the Geode member triggers garbage collection for tombstones. 
+<td>The number of tombstones that can accumulate before the <%=vars.product_name%> member triggers garbage collection for tombstones. 
 See <a href="../../developing/distributed_regions/how_region_versioning_works.html#topic_321B05044B6641FCAEFABBF5066BD399">How Destroy and Clear Operations Are Resolved</a>.</td>
 <td>S</td>
 <td>100000</td>
@@ -752,6 +750,6 @@ See <a href="../../developing/distributed_regions/how_region_versioning_works.ht
 </tbody>
 </table>
 
--   **[Using Non-ASCII Strings in Apache Geode Property Files](../../reference/topics/non-ascii_strings_in_config_files.html)**
+-   **[Using Non-ASCII Strings in <%=vars.product_name_long%> Property Files](../../reference/topics/non-ascii_strings_in_config_files.html)**
 
-    You can specify Unicode (non-ASCII) characters in Apache Geode property files by using a `\uXXXX` escape sequence.
+    You can specify Unicode (non-ASCII) characters in <%=vars.product_name_long%> property files by using a `\uXXXX` escape sequence.
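
As a rough illustration of how several of the properties in this table combine, here is a hedged sample `gemfire.properties` file. The values are illustrative only, not recommendations, and the `locators` entry is an assumption not documented in this excerpt:

```
# Peer-to-peer discovery: disable multicast, use a locator (address is a placeholder)
mcast-port=0
locators=localhost[10334]

# Failure detection: milliseconds before a silent member is suspected (default 5000)
member-timeout=5000

# Restrict UDP/TCP failure-detection ports to a router-friendly range
membership-port-range=10000-10200

# Per-thread sockets; the table above advises false for WAN deployments
conserve-sockets=false

# Embedded HTTP service hosting Pulse and the developer REST API (default 7070)
http-service-port=7070
```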


[18/51] [abbrv] geode git commit: GEODE-3395 Variable-ize product version and name in user guide - Tools & Modules

Posted by kl...@apache.org.
http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/quick_ref_commands_by_area.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/quick_ref_commands_by_area.html.md.erb b/geode-docs/tools_modules/gfsh/quick_ref_commands_by_area.html.md.erb
index 14636d8..33dd44e 100644
--- a/geode-docs/tools_modules/gfsh/quick_ref_commands_by_area.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/quick_ref_commands_by_area.html.md.erb
@@ -1,6 +1,4 @@
----
-title: Basic Geode gfsh Commands
----
+<% set_title("Basic", product_name, "gfsh Commands")%>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -38,7 +36,7 @@ limitations under the License.
 | [sleep](command-pages/sleep.html)                                                           | Delay `gfsh` command execution.                                                                                                                                 | online, offline |
 | [version](command-pages/version.html)                                                | Display product version information.                                                                                                                            | online, offline |
 
-<span class="tablecap">Table 1. Basic Geode gfsh Commands</span>
+<span class="tablecap">Table 1. Basic <%=vars.product_name%> gfsh Commands</span>
 
 ## <a id="topic_EB854534301A477BB01058B3B142AE1D" class="no-quick-link"></a>Configuration Commands
 
@@ -88,7 +86,7 @@ limitations under the License.
 </tr>
 <tr class="even">
 <td><a href="command-pages/export.html#topic_mdv_jgz_ck">export cluster-configuration</a></td>
-<td>Exports a shared configuration zip file that contains cache.xml files, gemfire.properties files and jar files needed to configure and operate a Geode distributed system.</td>
+<td>Exports a shared configuration zip file that contains cache.xml files, gemfire.properties files and jar files needed to configure and operate a <%=vars.product_name%> distributed system.</td>
 <td>online</td>
 </tr>
 <tr class="odd">
@@ -117,7 +115,7 @@ limitations under the License.
 | [import data](command-pages/import.html#topic_jw2_2ld_2l)                       |                                                                 | online       |
 | [locate entry](command-pages/locate.html#concept_73B980C1138743DDBBFACE68009BD1E3__section_04BD7EC0032147DFA9CCD1331EE3B694)      | Locate a region entry on a member.                              | online       |
 | [put](command-pages/put.html)                                                               | Add or update a region entry.                                   | online       |
-| [query](command-pages/query.html)                                                      | Run queries against Geode regions. | online       |
+| [query](command-pages/query.html)                                                      | Run queries against <%=vars.product_name%> regions. | online       |
 | [remove](command-pages/remove.html)                                                        | Remove an entry from a region.                                  | online       |
 
 <span class="tablecap">Table 3. Data Commands</span>
@@ -140,7 +138,7 @@ limitations under the License.
 
 | Command                                                                                                                                                                                                                                                 | Description                                                                                                                              | Availability    |
 |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------|-----------------|
-| [alter disk-store](command-pages/alter.html#topic_99BCAD98BDB5470189662D2F308B68EB)                                                                               | Modify an existing Geode resource.                                                                          | online          |
+| [alter disk-store](command-pages/alter.html#topic_99BCAD98BDB5470189662D2F308B68EB)                                                                               | Modify an existing <%=vars.product_name%> resource.                                                                          | online          |
 | [backup disk-store](command-pages/backup.html#topic_E74ED23CB60342538B2175C326E7D758)                                                              | Back up persistent data from all members to the specified directory.                                                                     | online          |
 | [compact disk-store](command-pages/compact.html#topic_F113C95C076F424E9AA8AC4F1F6324CC)                                                                       | Compact online disk-stores.                                                                                                              | online          |
 | [compact offline-disk-store](command-pages/compact.html#topic_9CCFCB2FA2154E16BD775439C8ABC8FB)                                                                                          | Compact an offline disk store.                                                                                                           | online, offline |
@@ -148,7 +146,7 @@ limitations under the License.
 | [describe disk-store](command-pages/describe.html#topic_C635B500BE6A4F1D9572D0BC98A224F2)                                                                          | Display information about a member's disk store.                                                                                         | online          |
 | [describe offline-disk-store](command-pages/describe.html#topic_kys_yvk_2l)                                                                               | Display information about an offline member's disk store.                                                                                | online, offline |
 | [destroy disk-store](command-pages/destroy.html#topic_yfr_l2z_ck)              | Deletes a disk store and all files on disk used by the disk store. Data for closed regions that previously used this disk store is lost. | online          |
-| [list disk-stores](command-pages/list.html#topic_BC14AD57EA304FB3845766898D01BD04)                                                                              | List all available disk stores in a Geode cluster.                                                          | online          |
+| [list disk-stores](command-pages/list.html#topic_BC14AD57EA304FB3845766898D01BD04)                                                                              | List all available disk stores in a <%=vars.product_name%> cluster.                                                          | online          |
 | [revoke missing-disk-store](command-pages/revoke.html)                                                                 | Instruct the member(s) of a distributed system to stop waiting for a disk store to be available.                                         | online          |
 | [show missing-disk-stores](command-pages/show.html#topic_7B3D624D5B4F41D1A0F8A9C3C8B2E780)                                   | Display a summary of the disk stores that are currently missing from a distributed system.                                               | online          |
 | [validate offline-disk-store](command-pages/validate.html)                                                                                                                                | Validate offline disk stores.                                                                                                            | online, offline |
@@ -204,7 +202,7 @@ limitations under the License.
 
 <span class="tablecap">Table 8. Gateway (WAN) Commands</span>
 
-## <a id="topic_F0AE5CE40D6D49BF92247F5EF4F871D3" class="no-quick-link"></a>GeodeAsyncEventQueue Commands
+## <a id="topic_F0AE5CE40D6D49BF92247F5EF4F871D3" class="no-quick-link"></a><%=vars.product_name%> AsyncEventQueue Commands
 
 <a id="topic_F0AE5CE40D6D49BF92247F5EF4F871D3__table_vp5_mz1_3l"></a>
 
@@ -213,9 +211,9 @@ limitations under the License.
 | [create async-event-queue](command-pages/create.html#topic_ryz_pb1_dk) | Creates an asynchronous event queue.                  | online       |
 | [list async-event-queues](command-pages/list.html#topic_j22_kzk_2l)                                                     | Display a list of async event queues for all members. | online       |
 
-<span class="tablecap">Table 9. GeodeAsyncEventQueue Commands</span>
+<span class="tablecap">Table 9. <%=vars.product_name%> AsyncEventQueue Commands</span>
 
-## <a id="topic_B742E9E862BA457082E2346581C97D03" class="no-quick-link"></a>Geode Monitoring Commands
+## <a id="topic_B742E9E862BA457082E2346581C97D03" class="no-quick-link"></a><%=vars.product_name%> Monitoring Commands
 
 <a id="topic_B742E9E862BA457082E2346581C97D03__table_pkf_nz1_3l"></a>
 
@@ -235,9 +233,9 @@ limitations under the License.
 | [shutdown](command-pages/shutdown.html)                                                                                                                                                                         | Shut down all members that have a cache.                                                                                                               | online          |
 | [start jconsole](command-pages/start.html#topic_D00507416F3944DFAB48D2FA2B9E4A31)                                                                            | Start the JDK JConsole monitoring application in a separate process. JConsole automatically connects to a running JMX Manager node if one is available. | online, offline |
 | [start jvisualvm](command-pages/start.html#topic_5B5BF8BEE905463D8B7762B89E2D65E7)                                                                | Start the JDK's Java VisualVM monitoring application in a separate process.                                                                            | online, offline |
-| [start pulse](command-pages/start.html#topic_E906BA7D9E7F4C5890FEFA7ECD40DD77) | Launch the Geode Pulse monitoring dashboard tool in the user's default system browser.                                    | online, offline |
+| [start pulse](command-pages/start.html#topic_E906BA7D9E7F4C5890FEFA7ECD40DD77) | Launch the <%=vars.product_name%> Pulse monitoring dashboard tool in the user's default system browser.                                    | online, offline |
 
-<span class="tablecap">Table 10. Geode Monitoring Commands</span>
+<span class="tablecap">Table 10. <%=vars.product_name%> Monitoring Commands</span>
 
 ## <a id="topic_688C66526B4649AFA51C0F72F34FA45E" class="no-quick-link"></a>Index Commands
 
@@ -313,7 +311,7 @@ limitations under the License.
 | [create region](command-pages/create.html#topic_54B0985FEC5241CA9D26B0CE0A5EA863)                                                                         | Create and configure a region.                                                                                                                             | online       |
 | [describe region](command-pages/describe.html#topic_DECF7D3D33F54071B6B8AD4EA7E3F90B)                                                                | Display the attributes and key information of a region.                                                                                                    | online       |
 | [destroy region](command-pages/destroy.html#topic_BEDACECF4599407794ACBC0E56B30F65)                                                                                              | Destroy or remove a region.                                                                                                                                | online       |
-| [list regions](command-pages/list.html#topic_F0ECEFF26086474498598035DD83C588) | Display regions of a member or members. If no parameter is specified, all regions in the Geode distributed system are listed. | online       |
+| [list regions](command-pages/list.html#topic_F0ECEFF26086474498598035DD83C588) | Display regions of a member or members. If no parameter is specified, all regions in the <%=vars.product_name%> distributed system are listed. | online       |
 | [rebalance](command-pages/rebalance.html)                                                                                                                                     | Rebalance partitioned regions.                                                                                                                             | online       |
 
 <span class="tablecap">Table 16. Region Commands</span>
@@ -324,9 +322,9 @@ limitations under the License.
 
 | Command                                                                                                                                                                   | Description                                                                          | Availability    |
 |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------|-----------------|
-| [start server](command-pages/start.html#topic_3764EE2DB18B4AE4A625E0354471738A)                       | Start a Geode cache server process.                     | online, offline |
-| [status server](command-pages/status.html#topic_E5DB49044978404D9D6B1971BF5D400D) | Display the status of the specified Geode cache server. | online, offline |
-| [stop server](command-pages/stop.html#topic_723EE395A63A40D6819618AFC2902115)                                  | Stop a Geode cache server.                              | online, offline |
+| [start server](command-pages/start.html#topic_3764EE2DB18B4AE4A625E0354471738A)                       | Start a <%=vars.product_name%> cache server process.                     | online, offline |
+| [status server](command-pages/status.html#topic_E5DB49044978404D9D6B1971BF5D400D) | Display the status of the specified <%=vars.product_name%> cache server. | online, offline |
+| [stop server](command-pages/stop.html#topic_723EE395A63A40D6819618AFC2902115)                                  | Stop a <%=vars.product_name%> cache server.                              | online, offline |
 
 <span class="tablecap">Table 17. Server Commands</span>
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/starting_gfsh.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/starting_gfsh.html.md.erb b/geode-docs/tools_modules/gfsh/starting_gfsh.html.md.erb
index 21b22a2..209614c 100644
--- a/geode-docs/tools_modules/gfsh/starting_gfsh.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/starting_gfsh.html.md.erb
@@ -24,7 +24,7 @@ Before you start gfsh, confirm that you have set JAVA\_HOME and that your PATH v
 **Note:**
 On Windows, you must have the JAVA\_HOME environment variable set properly to use start, stop and status commands for both locators and servers.
 
-To launch the gfsh command-line interface, execute the following command at the prompt on any machine that is currently installed with Apache Geode:
+To launch the gfsh command-line interface, execute the following command at the prompt on any machine that is currently installed with <%=vars.product_name_long%>:
 
 **Start gfsh on Windows:**
 
@@ -32,7 +32,7 @@ To launch the gfsh command-line interface, execute the following command at the
 <product_directory>\bin\gfsh.bat 
 ```
 
-where &lt;*product\_directory*&gt; corresponds to the location where you installed Apache Geode.
+where &lt;*product\_directory*&gt; corresponds to the location where you installed <%=vars.product_name_long%>.
 
 **Start gfsh on Unix:**
 
@@ -40,19 +40,19 @@ where &lt;*product\_directory*&gt; corresponds to the location where you install
 <product_directory>/bin/gfsh
 ```
 
-where &lt;*product\_directory*&gt; corresponds to the location where you installed Apache Geode. Upon execution, the `gfsh` script appends the required Apache Geode and JDK Jar libraries to your existing CLASSPATH.
+where &lt;*product\_directory*&gt; corresponds to the location where you installed <%=vars.product_name_long%>. Upon execution, the `gfsh` script appends the required <%=vars.product_name_long%> and JDK Jar libraries to your existing CLASSPATH.
 
 If you have successfully started `gfsh`, the `gfsh` splash screen and prompt appears.
 
 ``` pre
-c:\Geode\Latest>gfsh.bat
+c:\<%=vars.product_name%>\Latest>gfsh.bat
     _________________________     __
    / _____/ ______/ ______/ /____/ /
   / /  __/ /___  /_____  / _____  /
  / /__/ / ____/  _____/ / /    / /
 /______/_/      /______/_/    /_/
 
-Monitor and Manage Geode
+Monitor and Manage <%=vars.product_name%>
 gfsh>
 ```
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/tour_of_gfsh.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/tour_of_gfsh.html.md.erb b/geode-docs/tools_modules/gfsh/tour_of_gfsh.html.md.erb
index 759fa99..9ae206e 100644
--- a/geode-docs/tools_modules/gfsh/tour_of_gfsh.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/tour_of_gfsh.html.md.erb
@@ -39,7 +39,7 @@ $ gfsh
  / /__/ / ____/  _____/ / /    / /
 /______/_/      /______/_/    /_/
 
-Monitor and Manage Geode
+Monitor and Manage <%=vars.product_name%>
 gfsh>
 ```
 
@@ -60,7 +60,7 @@ Locator in /home/username/gfsh_tutorial/locator1 on 192.0.2.0[10334]
 as locator1 is currently online.
 Process ID: 67666
 Uptime: 6 seconds
-Geode Version: 1.2.0
+<%=vars.product_name%> Version: <%=vars.product_version%>
 Java Version: 1.8.0_92
 Log File: /home/username/gfsh_tutorial/locator1.log
 JVM Arguments: -Dgemfire.enable-cluster-configuration=true
@@ -176,13 +176,13 @@ If the server starts successfully, the following output appears:
 
 ``` pre
 gfsh>start server --name=server1 --locators=localhost[10334]
-Starting a Geode Server in /home/username/gfsh_tutorial/server1/server1.log...
+Starting a <%=vars.product_name%> Server in /home/username/gfsh_tutorial/server1/server1.log...
 ...
 Server in /home/username/gfsh_tutorial/server1 on 192.0.2.0[40404] as server1
 is currently online.
 Process ID: 68375
 Uptime: 4 seconds
-Geode Version: 1.2.0
+<%=vars.product_name%> Version: <%=vars.product_version%>
 Java Version: 1.8.0_92
 Log File: /home/username//gfsh_tutorial/server1/server1.log
 JVM Arguments: -Dgemfire.locators=localhost[10334]
@@ -203,7 +203,7 @@ directory (named after the server), and within that working directory, it has cr
 a .pid (containing the server's process ID) for this cache server. In addition, it has also written
 log files.
 
-**Step 7: List members.** Use the `list members` command to view the current members of the Apache Geode distributed system you have just created.
+**Step 7: List members.** Use the `list members` command to view the current members of the <%=vars.product_name_long%> distributed system you have just created.
 
 ``` pre
 gfsh>list members
@@ -291,13 +291,13 @@ Because only one server is in the distributed system at the moment, the command
 
 ``` pre
 gfsh>start server --name=server2 --server-port=40405
-Starting a Geode Server in /home/username/gfsh_tutorial/server2...
+Starting a <%=vars.product_name%> Server in /home/username/gfsh_tutorial/server2...
 ...
 Server in /home/username/gfsh_tutorial/server2 on 192.0.2.0[40405] as
 server2 is currently online.
 Process ID: 68423
 Uptime: 4 seconds
-Geode Version: 1.2.0
+<%=vars.product_name%> Version: <%=vars.product_version%>
 Java Version: 1.8.0_92
 Log File: /home/username/gfsh_tutorial/server2/server2.log
 JVM Arguments: -Dgemfire.default.locators=192.0.2.0[10334]

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/useful_gfsh_shell_variables.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/useful_gfsh_shell_variables.html.md.erb b/geode-docs/tools_modules/gfsh/useful_gfsh_shell_variables.html.md.erb
index e355e9f..4b10e83 100644
--- a/geode-docs/tools_modules/gfsh/useful_gfsh_shell_variables.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/useful_gfsh_shell_variables.html.md.erb
@@ -46,7 +46,7 @@ gfsh>echo --string=${SYS_CLASSPATH}
 |                          |                                                                                                    |
 |--------------------------|----------------------------------------------------------------------------------------------------|
 | SYS\_CLASSPATH           | CLASSPATH of the gfsh JVM (read only).                                                             |
-| SYS\_GEMFIRE\_DIR        | Product directory where Geode has been installed (read only). |
+| SYS\_GEMFIRE\_DIR        | Product directory where <%=vars.product_name%> has been installed (read only). |
 | SYS\_HOST\_NAME          | Host from which gfsh is started (read only).                                                       |
 | SYS\_JAVA\_VERSION       | Java version used (read only).                                                                     |
 | SYS\_OS                  | OS name (read only).                                                                               |

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/chapter_overview.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/chapter_overview.html.md.erb
index e6808c8..71115e4 100644
--- a/geode-docs/tools_modules/http_session_mgmt/chapter_overview.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/chapter_overview.html.md.erb
@@ -19,17 +19,17 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-The Apache Geode HTTP Session Management modules provide fast, scalable, and reliable session replication for HTTP servers without requiring application changes.
+The <%=vars.product_name_long%> HTTP Session Management modules provide fast, scalable, and reliable session replication for HTTP servers without requiring application changes.
 
-Apache Geode offers HTTP session management modules for tc Server, Tomcat, and AppServers.
+<%=vars.product_name_long%> offers HTTP session management modules for tc Server, Tomcat, and AppServers.
 
-These modules are included with the Apache Geode product distribution, and installation .zip files can be found in the `tools/Modules` directory of your product installation.
+These modules are included with the <%=vars.product_name_long%> product distribution, and installation .zip files can be found in the `tools/Modules` directory of your product installation.
 
 -   **[HTTP Session Management Quick Start](../../tools_modules/http_session_mgmt/quick_start.html)**
 
     In this section you download, install, and set up the HTTP Session Management modules.
 
--   **[Advantages of Using Geode for Session Management](../../tools_modules/http_session_mgmt/http_why_use_gemfire.html)**
+-   **[Advantages of Using <%=vars.product_name%> for Session Management](../../tools_modules/http_session_mgmt/http_why_use_gemfire.html)**
 
     The HTTP Session Management Module enables you to customize how you manage your session data.
 
@@ -39,7 +39,7 @@ These modules are included with the Apache Geode product distribution, and insta
 
 -   **[General Information on HTTP Session Management](../../tools_modules/http_session_mgmt/tc_additional_info.html)**
 
-    This section provides information on sticky load balancers, session expiration, additional Geode property changes, serialization and more.
+    This section provides information on sticky load balancers, session expiration, additional <%=vars.product_name%> property changes, serialization and more.
 
 -   **[Session State Log Files](../../tools_modules/http_session_mgmt/session_state_log_files.html)**
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/common_gemfire_topologies.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/common_gemfire_topologies.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/common_gemfire_topologies.html.md.erb
index f0bf050..e0bf3a9 100644
--- a/geode-docs/tools_modules/http_session_mgmt/common_gemfire_topologies.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/common_gemfire_topologies.html.md.erb
@@ -33,4 +33,4 @@ In a peer-to-peer configuration, each instance within an application server cont
 
 <img src="../../images_svg/http_module_cs_with_locator.svg" id="common_gemfire_topologies__image_oss_zyw_rv" class="image" />
 
-In a client/server configuration, the Tomcat or tc Server instance operates as a Geode client, which must communicate with one or more Geode servers to acquire session data. The client maintains its own local cache and will communicate with the server to satisfy cache misses. A client/server configuration is useful when you want to separate the application server instance from the cached session data. In this configuration, you can reduce the memory consumption of the application server since session data is stored in separate Geode server processes.
+In a client/server configuration, the Tomcat or tc Server instance operates as a <%=vars.product_name%> client, which must communicate with one or more <%=vars.product_name%> servers to acquire session data. The client maintains its own local cache and will communicate with the server to satisfy cache misses. A client/server configuration is useful when you want to separate the application server instance from the cached session data. In this configuration, you can reduce the memory consumption of the application server since session data is stored in separate <%=vars.product_name%> server processes.

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/http_why_use_gemfire.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/http_why_use_gemfire.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/http_why_use_gemfire.html.md.erb
index f47d625..01cd563 100644
--- a/geode-docs/tools_modules/http_session_mgmt/http_why_use_gemfire.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/http_why_use_gemfire.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Advantages of Using Geode for Session Management
----
+<% set_title("Advantages of Using", product_name, "for Session Management") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -27,19 +25,19 @@ Depending on your usage model, the HTTP Session Management Module enables you to
 -   Partition data across multiple servers.
 -   Manage your session data in many other customizable ways.
 
-Using Geode for session management has many advantages:
+Using <%=vars.product_name%> for session management has many advantages:
 
 <dt>**tc Server integration**</dt>
-<dd>The Geode module offers clean integration into the tc Server environment with minimal configuration changes necessary.</dd>
+<dd>The <%=vars.product_name%> module offers clean integration into the tc Server environment with minimal configuration changes necessary.</dd>
 
 <dt>**Scalability**</dt>
-<dd>Applications with a small number of frequently-accessed sessions can **replicate** session information on all members in the system. However, when the number of concurrent sessions being managed is large, data can be **partitioned** across any number of servers (either embedded within your application server process or managed by Geode cache servers), which allows for **linear scaling**. Additionally, capacity can be **dynamically added or removed** in a running system and Geode re-executes a non-blocking, rebalancing logic to migrate data from existing members to the newly added members. When the session state memory requirements exceed available memory, each partition host can **overflow to disk**.</dd>
+<dd>Applications with a small number of frequently-accessed sessions can **replicate** session information on all members in the system. However, when the number of concurrent sessions being managed is large, data can be **partitioned** across any number of servers (either embedded within your application server process or managed by <%=vars.product_name%> cache servers), which allows for **linear scaling**. Additionally, capacity can be **dynamically added or removed** in a running system and <%=vars.product_name%> re-executes a non-blocking, rebalancing logic to migrate data from existing members to the newly added members. When the session state memory requirements exceed available memory, each partition host can **overflow to disk**.</dd>
 
 <dt>**Server-managed session state**</dt>
 <dd>Session state can be managed independent of the application server cluster. This allows applications or servers to come and go without impacting session lifetimes.</dd>
 
 <dt>**Shared nothing cluster-wide persistence**</dt>
-<dd>Session state can be persisted (and recovered) - invaluable for scenarios where sessions manage critical application state or have long lifetimes. Geode uses a shared nothing persistence model where each member can continuously append to rolling log files without ever needing to seek on disk, providing very high disk throughput. When data is partitioned, the total disk throughput can come close to the aggregate disk transfer rates across each of the members storing data on disk.</dd>
+<dd>Session state can be persisted (and recovered) - invaluable for scenarios where sessions manage critical application state or have long lifetimes. <%=vars.product_name%> uses a shared nothing persistence model where each member can continuously append to rolling log files without ever needing to seek on disk, providing very high disk throughput. When data is partitioned, the total disk throughput can come close to the aggregate disk transfer rates across each of the members storing data on disk.</dd>
 
 <dt>**Session deltas**</dt>
 <dd>When session attributes are updated, only the updated state is sent over the wire (to cache servers and to replicas). This provides fast updates even for large sessions. Session state is always managed in a serialized state on the servers, avoiding the need for the cache servers to be aware of the application classes.</dd>
@@ -50,7 +48,7 @@ Using Geode for session management has many advantages:
 <dt>**Application server sizing**</dt>
 <dd>Another aspect of tiered-caching functionality is that session replication can be configured so that session objects are stored external to the application server process. This allows the heap settings on the application server to be much smaller than they would otherwise need to be.</dd>
 
-<dt>**High availability (HA), disk-based overflow, synchronization to backend data store, other Geode features**</dt>
-<dd>All the popular Geode features are available. For example: more than one synchronous copy of the session state can be maintained providing high availability (HA); the session cache can overflow to disk if the memory capacity in the cache server farm becomes insufficient; state information can be written to a backend database in a synchronous or asynchronous manner.</dd>
+<dt>**High availability (HA), disk-based overflow, synchronization to backend data store, other <%=vars.product_name%> features**</dt>
+<dd>All the popular <%=vars.product_name%> features are available. For example: more than one synchronous copy of the session state can be maintained providing high availability (HA); the session cache can overflow to disk if the memory capacity in the cache server farm becomes insufficient; state information can be written to a backend database in a synchronous or asynchronous manner.</dd>
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/interactive_mode_ref.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/interactive_mode_ref.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/interactive_mode_ref.html.md.erb
index 05a3ab4..da4d336 100644
--- a/geode-docs/tools_modules/http_session_mgmt/interactive_mode_ref.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/interactive_mode_ref.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-This section describes each prompt when entering into interactive configuration mode of the Geode HTTP Session Management Module for tc Server.
+This section describes each prompt when entering into interactive configuration mode of the <%=vars.product_name%> HTTP Session Management Module for tc Server.
 
 ``` pre
   Please enter a value for 'geode-cs.maximum.vm.heap.size.mb'. Default '512':
@@ -30,30 +30,30 @@ This section describes each prompt when entering into interactive configuration
 The above properties allow you to fine-tune your JVM heap and garbage collector. For more information, refer to [Managing Heap and Off-heap Memory](../../managing/heap_use/heap_management.html#resource_manager).
 
 ``` pre
-  Please specify whether to enable a Geode listener that logs session create, 
+  Please specify whether to enable a <%=vars.product_name%> listener that logs session create, 
   update, destroy and expiration events. Default 'false':
 ```
 
-The above property determines whether a debug cache listener is added to the session region. When true, info-level messages are logged to the Geode log when sessions are created, updated, invalidated, or expired.
+The above property determines whether a debug cache listener is added to the session region. When true, info-level messages are logged to the <%=vars.product_name%> log when sessions are created, updated, invalidated, or expired.
 
 ``` pre
 With the geode-p2p template:
-  Please specify whether to maintain a local Geode cache. Default 'false':
+  Please specify whether to maintain a local <%=vars.product_name%> cache. Default 'false':
   
 With the geode-cs template:
-  Please specify whether to maintain a local Geode cache. Default 'true':
+  Please specify whether to maintain a local <%=vars.product_name%> cache. Default 'true':
 ```
 
 The above property determines whether a local cache is enabled; if this parameter is set to true, the app server load balancer should be configured for sticky session mode.
 
 ``` pre
 With the geode-p2p template:
-  Please enter the id of the attributes of the Geode region used to cache 
+  Please enter the id of the attributes of the <%=vars.product_name%> region used to cache 
       sessions.
   Default 'REPLICATE':
 
 With the geode-cs template:
-  Please enter the id of the attributes of the Geode region used to cache 
+  Please enter the id of the attributes of the <%=vars.product_name%> region used to cache 
       sessions.
   Default 'PARTITION_REDUNDANT':
 ```
@@ -61,11 +61,11 @@ With the geode-cs template:
 The above property determines the ID of the attributes for the cache region; possible values include PARTITION, PARTITION\_REDUNDANT, PARTITION\_PERSISTENT, REPLICATE, REPLICATE\_PERSISTENT, and any other region shortcut that can be found in [Region Shortcuts and Custom Named Region Attributes](../../basic_config/data_regions/region_shortcuts.html). When using a partitioned region attribute, it is recommended that you use PARTITION\_REDUNDANT (rather than PARTITION) to ensure that the failure of a server does not result in lost session data.
 
 ``` pre
-  Please enter the name of the Geode region used to cache sessions. 
+  Please enter the name of the <%=vars.product_name%> region used to cache sessions. 
   Default 'gemfire_modules_sessions':
 ```
 
-The above property determines the Geode region name.
+The above property determines the <%=vars.product_name%> region name.
 
 ``` pre
   Please enter the port that Tomcat Shutdown should listen on. Default '-1':
@@ -84,11 +84,11 @@ tc Server requires information about connector ports. `bio.http.port` is the htt
 
 ``` pre
 With the geode-p2p template:
-  Please enter the name of the Geode cache configuration file. 
+  Please enter the name of the <%=vars.product_name%> cache configuration file. 
   Default 'cache-peer.xml':
   
 With the geode-cs template:
-  Please enter the name of the Geode cache configuration file. 
+  Please enter the name of the <%=vars.product_name%> cache configuration file. 
   Default 'cache-client.xml':
 ```
 
@@ -104,36 +104,36 @@ You can change the name of the cache configuration file with the above property.
 The above properties allow you to control the critical and eviction watermarks for the heap. By default, the critical watermark is disabled (set to 0.0) and the eviction watermark is set to 80%.
 
 ``` pre
-Please enter the list of locators used by Geode members to discover each other. 
+Please enter the list of locators used by <%=vars.product_name%> members to discover each other. 
 The format is a comma-separated list of host[port]. Default ' ':
 ```
 
 The above property specifies the list of locators.
 
 ``` pre
-  Please enter the name of the file used to log Geode messages. 
+  Please enter the name of the file used to log <%=vars.product_name%> messages. 
   Default 'gemfire_modules.log':
 ```
 
-The above property determines the file name for the Geode log file.
+The above property determines the file name for the <%=vars.product_name%> log file.
 
 ``` pre
 Applicable to the geode-p2p template ONLY:
-  Please specify whether to rebalance the Geode cache at startup.
+  Please specify whether to rebalance the <%=vars.product_name%> cache at startup.
   Default 'false':
 ```
 
-This property allows you to rebalance a partitioned Geode cache when a new Geode peer starts up.
+This property allows you to rebalance a partitioned <%=vars.product_name%> cache when a new <%=vars.product_name%> peer starts up.
 
 ``` pre
-  Please enter the name of the file used to store Geode statistics. 
+  Please enter the name of the file used to store <%=vars.product_name%> statistics. 
   Default 'gemfire_modules.gfs':
 ```
 
-The above property determines the filename for the Geode statistics file.
+The above property determines the filename for the <%=vars.product_name%> statistics file.
 
 ``` pre
-  Please specify whether Geode statistic sampling should be enabled. 
+  Please specify whether <%=vars.product_name%> statistic sampling should be enabled. 
   Default 'false':
 ```
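
The interactive prompts above map onto standard distributed system properties. As an illustrative sketch only (the hostname is a placeholder; the other values are the defaults quoted above), the equivalent `gemfire.properties` entries would be:

``` pre
locators=locator1.example.com[10334]
log-file=gemfire_modules.log
statistic-archive-file=gemfire_modules.gfs
statistic-sampling-enabled=false
```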
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/quick_start.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/quick_start.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/quick_start.html.md.erb
index fcbb138..4d63da0 100644
--- a/geode-docs/tools_modules/http_session_mgmt/quick_start.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/quick_start.html.md.erb
@@ -49,7 +49,7 @@ In this section you download, install, and set up the HTTP Session Management mo
 These steps provide a basic starting point for using the tc Server module. For more configuration options, see [HTTP Session Management Module for Pivotal tc Server](session_mgmt_tcserver.html). As a prerequisite, module setup requires a JAVA\_HOME environment variable set to the Java installation.
 
 1.  Navigate to the root directory of tc Server.
-2.  Create a Geode instance using one of the provided templates and start the instance after starting up a locator. For example:
+2.  Create a <%=vars.product_name%> instance using one of the provided templates and start the instance after starting up a locator. For example:
 
     ``` pre
     $ gfsh start locator --name=locator1
@@ -57,7 +57,7 @@ These steps provide a basic starting point for using the tc Server module. For m
     $ ./tcruntime-ctl.sh my_instance_name start
     ```
 
-    This will create and run a Geode instance using the peer-to-peer topology and default configuration values. Another Geode instance on another system can be created and started in the same way.
+    This will create and run a <%=vars.product_name%> instance using the peer-to-peer topology and default configuration values. Another <%=vars.product_name%> instance on another system can be created and started in the same way.
 
     If you need to pin your tc Server instance to a specific tc Server runtime version, use the `--version` option when creating the instance.
 
@@ -92,13 +92,13 @@ These steps provide a basic starting point for using the AppServers module with
 
 **Note:**
 
--   The `modify_war` script relies upon a GEODE environment variable. Set the GEODE environment variable to the Geode product directory; this is the parent directory of `bin`.
+-   The `modify_war` script relies upon a GEODE environment variable. Set the GEODE environment variable to the <%=vars.product_name%> product directory; this is the parent directory of `bin`.
 -   The `modify_war` script, described below, relies on files within the distribution tree and should not be run outside of a complete distribution.
 -   The `modify_war` script is a `bash` script and does not run on Windows.
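
The note above says GEODE must point at the product directory, that is, the parent of `bin`. A small preflight sketch (the directory layout here is a placeholder; this script is not part of the shipped distribution) can verify that before running `modify_war`:

```shell
#!/usr/bin/env bash
# Preflight sketch: confirm a candidate GEODE value points at the
# product directory, i.e. the parent of bin/ (where modify_war lives).
check_geode() {
  # Accept a directory only if it is non-empty and contains bin/.
  [ -n "$1" ] && [ -d "$1/bin" ]
}

# Simulate a product tree for the demonstration (placeholder layout).
tmp=$(mktemp -d)
mkdir -p "$tmp/bin"

if check_geode "$tmp"; then
  echo "GEODE looks valid: $tmp"
else
  echo "GEODE is unset or has no bin/ subdirectory" >&2
fi
```

In practice you would run `check_geode "$GEODE"` against your real environment variable before invoking the script.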
 
 To set up the AppServers module, perform the following steps:
 
-1.  Run the `modify_war` script against an existing `.war` or `.ear` file to integrate the necessary components. The example below will create a configuration suitable for a peer-to-peer Geode system, placing the necessary libraries into `WEB-INF/lib` for wars and `lib` for ears and modifying any `web.xml` files:
+1.  Run the `modify_war` script against an existing `.war` or `.ear` file to integrate the necessary components. The example below will create a configuration suitable for a peer-to-peer <%=vars.product_name%> system, placing the necessary libraries into `WEB-INF/lib` for wars and `lib` for ears and modifying any `web.xml` files:
 
     ``` pre
     $ bin/modify_war -w my-app.war -p gemfire.property.locators=localhost[10334] \

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/session_mgmt_tcserver.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/session_mgmt_tcserver.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/session_mgmt_tcserver.html.md.erb
index 833a918..8af8a03 100644
--- a/geode-docs/tools_modules/http_session_mgmt/session_mgmt_tcserver.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/session_mgmt_tcserver.html.md.erb
@@ -31,8 +31,8 @@ If you would prefer to manually change the `server.xml` and `context.xml` files
 
     To set up the HTTP Module for tc Server, start a tc Server instance with the appropriate tc Server template based on your preferred topology.
 
--   **[Changing the Default Geode Configuration in the tc Server Module](../../tools_modules/http_session_mgmt/tc_changing_gf_default_cfg.html)**
+-   **[Changing the Default <%=vars.product_name%> Configuration in the tc Server Module](../../tools_modules/http_session_mgmt/tc_changing_gf_default_cfg.html)**
 
-    By default, the tc Server HTTP module will run Geode automatically with pre-configured settings. You can change these Geode settings.
+    By default, the tc Server HTTP module will run <%=vars.product_name%> automatically with pre-configured settings. You can change these <%=vars.product_name%> settings.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/session_mgmt_tomcat.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/session_mgmt_tomcat.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/session_mgmt_tomcat.html.md.erb
index aef1c3a..e8c9d5a 100644
--- a/geode-docs/tools_modules/http_session_mgmt/session_mgmt_tomcat.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/session_mgmt_tomcat.html.md.erb
@@ -29,10 +29,10 @@ For instructions specific to SpringSource tc Server templates, refer to [HTTP Se
 
 -   **[Setting Up the HTTP Module for Tomcat](../../tools_modules/http_session_mgmt/tomcat_setting_up_the_module.html)**
 
-    To use the Geode HTTP module with Tomcat application servers, you will need to modify Tomcat's `server.xml` and `context.xml` files.
+    To use the <%=vars.product_name%> HTTP module with Tomcat application servers, you will need to modify Tomcat's `server.xml` and `context.xml` files.
 
--   **[Changing the Default Geode Configuration in the Tomcat Module](../../tools_modules/http_session_mgmt/tomcat_changing_gf_default_cfg.html)**
+-   **[Changing the Default <%=vars.product_name%> Configuration in the Tomcat Module](../../tools_modules/http_session_mgmt/tomcat_changing_gf_default_cfg.html)**
 
-    By default, the Tomcat module will run Geode automatically with pre-configured settings. You can change these Geode settings.
+    By default, the Tomcat module will run <%=vars.product_name%> automatically with pre-configured settings. You can change these <%=vars.product_name%> settings.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/session_mgmt_weblogic.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/session_mgmt_weblogic.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/session_mgmt_weblogic.html.md.erb
index 075c9fa..fc1be0c 100644
--- a/geode-docs/tools_modules/http_session_mgmt/session_mgmt_weblogic.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/session_mgmt_weblogic.html.md.erb
@@ -21,14 +21,14 @@ limitations under the License.
 
 You implement session caching with the HTTP Session Management Module for AppServers with a special filter, defined in the `web.xml`, which is configured to intercept and wrap all requests.
 
-You can use this HTTP module with a variety of application servers. Wrapping each request allows the interception of `getSession()` calls to be handled by Geode instead of the native container. This approach is a generic solution, which is supported by any container that implements the Servlet 2.4 specification.
+You can use this HTTP module with a variety of application servers. Wrapping each request allows the interception of `getSession()` calls to be handled by <%=vars.product_name%> instead of the native container. This approach is a generic solution, which is supported by any container that implements the Servlet 2.4 specification.
 
 -   **[Setting Up the HTTP Module for AppServers](../../tools_modules/http_session_mgmt/weblogic_setting_up_the_module.html)**
 
     To use the module, you need to modify your application's `web.xml` files. Configuration is slightly different depending on the topology you are setting up.
 
--   **[Changing the Default Geode Configuration in the AppServers Module](../../tools_modules/http_session_mgmt/weblogic_changing_gf_default_cfg.html)**
+-   **[Changing the Default <%=vars.product_name%> Configuration in the AppServers Module](../../tools_modules/http_session_mgmt/weblogic_changing_gf_default_cfg.html)**
 
-    By default, the AppServers module will run Geode automatically with preconfigured settings. You can change these Geode settings.
+    By default, the AppServers module will run <%=vars.product_name%> automatically with preconfigured settings. You can change these <%=vars.product_name%> settings.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/session_state_log_files.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/session_state_log_files.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/session_state_log_files.html.md.erb
index 4a84e99..44679cb 100644
--- a/geode-docs/tools_modules/http_session_mgmt/session_state_log_files.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/session_state_log_files.html.md.erb
@@ -22,12 +22,12 @@ limitations under the License.
 Several log files are written by the various parts of the session management code.
 
 -   `catalina.log`. Log file written by the tc server
--   `cacheserver.log`. Log file written by the Geode server process.
--   `gemfire_modules.log`. Log file written by the Geode cache client.
+-   `cacheserver.log`. Log file written by the <%=vars.product_name%> server process.
+-   `gemfire_modules.log`. Log file written by the <%=vars.product_name%> cache client.
 
 ## <a id="concept_33F73F78783D4994B721486243827E15__section_A547F9C7AA4541ED9B99CF0DEAC1417A" class="no-quick-link"></a>Adding FINE Debug Logging to catalina.log
 
-To add Geode-specific FINE logging to the `catalina.log` file, add the following lines to your `<instance>/conf/logging.properties` file:
+To add <%=vars.product_name%>-specific FINE logging to the `catalina.log` file, add the following lines to your `<instance>/conf/logging.properties` file:
 
 ``` pre
 org.apache.geode.modules.session.catalina.DeltaSessionManager.level = FINE
@@ -50,9 +50,9 @@ Created session region: org.apache.geode.internal.cache.LocalRegion[path='/gemfi
 scope=LOCAL';dataPolicy=EMPTY; gatewayEnabled=false]
 ```
 
-## <a id="concept_33F73F78783D4994B721486243827E15__section_CF950FC81CC046838F42A3E6783985BD" class="no-quick-link"></a>Add Session State Logging to the Geode Server Log
+## <a id="concept_33F73F78783D4994B721486243827E15__section_CF950FC81CC046838F42A3E6783985BD" class="no-quick-link"></a>Add Session State Logging to the <%=vars.product_name%> Server Log
 
-To add session-state-specific logging to the Geode server log file, add the following property to the `catalina.properties` file for the tc Server instance:
+To add session-state-specific logging to the <%=vars.product_name%> server log file, add the following property to the `catalina.properties` file for the tc Server instance:
 
 ``` pre
 geode-cs.enable.debug.listener=true
@@ -73,9 +73,9 @@ sessionRegionName=gemfire_modules_sessions; operatingRegionName=unset]
 key=5782ED83A3D9F101BBF8D851CE4E798E
 ```
 
-## <a id="concept_33F73F78783D4994B721486243827E15__section_B446063292F0447CA178DB67245B72C1" class="no-quick-link"></a>Adding Additional Debug Logging to the Geode Server Log
+## <a id="concept_33F73F78783D4994B721486243827E15__section_B446063292F0447CA178DB67245B72C1" class="no-quick-link"></a>Adding Additional Debug Logging to the <%=vars.product_name%> Server Log
 
-To add fine-level logging to the Geode cache server, add the 'log-level' property to the server process. For example:
+To add fine-level logging to the <%=vars.product_name%> cache server, add the `log-level` property to the server process. For example:
 
 ``` pre
 gfsh> start server --name=server1 --cache-xml-file=../conf/cache-server.xml 
@@ -85,11 +85,11 @@ gfsh> start server --name=server1 --cache-xml-file=../conf/cache-server.xml
 This will add fine-level logging to the `server.log` file.
 
 **Note:**
-This will help debug Geode server issues, but it adds a lot of logging to the file.
+This will help debug <%=vars.product_name%> server issues, but it adds a lot of logging to the file.
 
 ## <a id="concept_33F73F78783D4994B721486243827E15__section_D36A81360D904450B8BE7334897C5685" class="no-quick-link"></a>Add Debug Logging to gemfire\_modules.log
 
-To add fine-level logging to the Geode Cache Client, add the 'log-level' property to the Listener element in the tc Server or Tomcat `server.xml` file. For example:
+To add fine-level logging to the <%=vars.product_name%> cache client, add the `log-level` property to the `Listener` element in the tc Server or Tomcat `server.xml` file. For example:
 
 ``` pre
 <Listener log-level="fine" 
@@ -106,6 +106,6 @@ statistic-sampling-enabled="${geode-cs.statistic.sampling.enabled}"/>
 This will add fine-level logging to the file defined by the `${geode-cs.log.file}` property. The default log file name is `gemfire_modules.log`.
 
 **Note:**
-This will help debug Geode client issues, but it adds a lot of logging to the file.
+This will help debug <%=vars.product_name%> client issues, but it adds a lot of logging to the file.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/tc_additional_info.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/tc_additional_info.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/tc_additional_info.html.md.erb
index 991fd94..c4adcd6 100644
--- a/geode-docs/tools_modules/http_session_mgmt/tc_additional_info.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/tc_additional_info.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-This section provides information on sticky load balancers, session expiration, additional Geode property changes, serialization and more.
+This section provides information on sticky load balancers, session expiration, additional <%=vars.product_name%> property changes, serialization, and more.
 
 ## <a id="tc_additional_info__section_78F53B3F4301466EA0E5DF277CF33A71" class="no-quick-link"></a>Sticky Load Balancers
 
@@ -30,20 +30,20 @@ as a possible [load balancing](http://static.springsource.com/projects/ers/4.0/g
 ## <a id="tc_additional_info__section_C7C4365EA2D84636AE1586F187007EC4" class="no-quick-link"></a>Session Expiration
 
 To set the session expiration value, you must change the `session-timeout` value specified in your application server's `WEB-INF/web.xml` file. 
-This value will override the Geode inactive interval, which is specified in Tomcat, for example, by `maxInactiveInterval` within `context.xml`.
+This value will override the <%=vars.product_name%> inactive interval, which is specified in Tomcat, for example, by `maxInactiveInterval` within `context.xml`.
 
-When a session expires, it gets removed from the application server and from all Geode servers when running in client-server mode.
+When a session expires, it gets removed from the application server and from all <%=vars.product_name%> servers when running in client-server mode.
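
As a sketch of the `web.xml` change described above, the `session-timeout` value is set, in minutes, inside the standard Servlet `session-config` element (the value shown is illustrative):

``` pre
<session-config>
  <!-- Session expiration in minutes; overrides the inactive interval -->
  <session-timeout>30</session-timeout>
</session-config>
```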
 
-## <a id="tc_additional_info__section_5CE5FF6F55DB462E8B2A336A0AF7515E" class="no-quick-link"></a>Making Additional Geode Property Changes
+## <a id="tc_additional_info__section_5CE5FF6F55DB462E8B2A336A0AF7515E" class="no-quick-link"></a>Making Additional <%=vars.product_name%> Property Changes
 
-If you want to change additional Geode property values, refer to instructions on manually changing property values as specified in the Geode module documentation for Tomcat ([Changing the Default Geode Configuration in the Tomcat Module](tomcat_changing_gf_default_cfg.html#tomcat_changing_gf_default_cfg)) and Application Servers ([Changing the Default Geode Configuration in the AppServers Module](weblogic_changing_gf_default_cfg.html#weblogic_changing_gf_default_cfg)).
+If you want to change additional <%=vars.product_name%> property values, refer to instructions on manually changing property values as specified in the <%=vars.product_name%> module documentation for Tomcat ([Changing the Default <%=vars.product_name%> Configuration in the Tomcat Module](tomcat_changing_gf_default_cfg.html#tomcat_changing_gf_default_cfg)) and Application Servers ([Changing the Default <%=vars.product_name%> Configuration in the AppServers Module](weblogic_changing_gf_default_cfg.html#weblogic_changing_gf_default_cfg)).
 
 ## <a id="tc_additional_info__section_0013BDC875A44344B7B062F46AFA073C" class="no-quick-link"></a>Module Version Information
 
-To acquire Geode module version information, look in the web server's log file for a message similar to:
+To acquire <%=vars.product_name%> module version information, look in the web server's log file for a message similar to:
 
 ``` pre
-INFO: Initializing Geode Modules
+INFO: Initializing <%=vars.product_name%> Modules
 Java version:   1.0.0 user1 041216 2016-11-12 11:18:37 -0700
           javac 1.8.0_92
 Native version: native code unavailable

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/tc_changing_gf_default_cfg.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/tc_changing_gf_default_cfg.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/tc_changing_gf_default_cfg.html.md.erb
index a49009f..2198d2b 100644
--- a/geode-docs/tools_modules/http_session_mgmt/tc_changing_gf_default_cfg.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/tc_changing_gf_default_cfg.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Changing the Default Geode Configuration in the tc Server Module
----
+<% set_title("Changing the Default", product_name, "Configuration in the tc Server Module")%>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -19,14 +17,14 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-By default, the tc Server HTTP module will run Geode automatically with pre-configured settings. You can change these Geode settings.
+By default, the tc Server HTTP module will run <%=vars.product_name%> automatically with pre-configured settings. You can change these <%=vars.product_name%> settings.
 
-Here are the default Geode settings:
+Here are the default <%=vars.product_name%> settings:
 
--   Geode peer-to-peer members use locators for discovery.
+-   <%=vars.product_name%> peer-to-peer members use locators for discovery.
 -   The region name is set to `gemfire_modules_sessions`.
 -   The cache region is replicated for peer-to-peer configurations and partitioned (with redundancy turned on) for client/server configurations.
--   Geode clients have local caching turned on and when the local cache needs to evict data, it will evict least-recently-used (LRU) data first.
+-   <%=vars.product_name%> clients have local caching turned on and when the local cache needs to evict data, it will evict least-recently-used (LRU) data first.
 
 **Note:**
 On the application server side, the default inactive interval for session expiration is set to 30 minutes. To change this value, refer to [Session Expiration](tc_additional_info.html#tc_additional_info__section_C7C4365EA2D84636AE1586F187007EC4).
@@ -65,7 +63,7 @@ For information on setting up your instance for the most common types of configu
 
 ## <a id="tc_changing_gf_default_cfg__use_a_diff_mc_port" class="no-quick-link"></a>Using a Different Locator Port
 
-For a Geode peer-to-peer member to communicate on a different port than the default (10334), answer the following question in the tc Server HTTP module's interactive mode:
+For a <%=vars.product_name%> peer-to-peer member to communicate on a different port than the default (10334), answer the following question in the tc Server HTTP module's interactive mode:
 
 ``` pre
 Please enter the list of locators used by GemFire members to discover each other. 
@@ -93,6 +91,6 @@ Then on the cache server side, reference the modified region attributes template
 
 -   **[Interactive Configuration Reference for the tc Server Module](../../tools_modules/http_session_mgmt/interactive_mode_ref.html)**
 
-    This section describes each prompt when entering into interactive configuration mode of the Geode HTTP Session Management Module for tc Server.
+    This section describes each prompt when entering into interactive configuration mode of the <%=vars.product_name%> HTTP Session Management Module for tc Server.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/tc_installing_the_module.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/tc_installing_the_module.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/tc_installing_the_module.html.md.erb
index a247e82..1090642 100644
--- a/geode-docs/tools_modules/http_session_mgmt/tc_installing_the_module.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/tc_installing_the_module.html.md.erb
@@ -22,7 +22,7 @@ limitations under the License.
 This topic describes how to install the HTTP session management module with tc Server templates.
 
 1.  If you do not already have tc Server, download and install the product from the [Pivotal tc Server download page](https://network.pivotal.io/products/pivotal-tcserver). These instructions require **tc Server 2.9** or later.
-2.  The HTTP Session Management Module for tc Server is included in the Apache Geode installation package. After you install Apache Geode, you will find the module in the `tools/Modules` directory of the installation.
+2.  The HTTP Session Management Module for tc Server is included in the <%=vars.product_name_long%> installation package. After you install <%=vars.product_name_long%>, you will find the module in the `tools/Modules` directory of the installation.
 
 3.  Unzip the module into the Pivotal tc Server `$CATALINA_HOME/templates` directory so that it creates `geode-p2p` and `geode-cs` subdirectories within the tc Server `templates` directory.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/tc_setting_up_the_module.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/tc_setting_up_the_module.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/tc_setting_up_the_module.html.md.erb
index bf64219..c0f8f5d 100644
--- a/geode-docs/tools_modules/http_session_mgmt/tc_setting_up_the_module.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/tc_setting_up_the_module.html.md.erb
@@ -98,17 +98,17 @@ In Windows:
   $ ./tcruntime-ctl.bat my_instance_name start
 ```
 
-Refer to the [tc Server](http://www.vmware.com/products/vfabric-tcserver) documentation for more information. Once started, Geode will automatically launch within the application server process.
+Refer to the [tc Server](http://www.vmware.com/products/vfabric-tcserver) documentation for more information. Once the instance is started, <%=vars.product_name%> launches automatically within the application server process.
 
 **Note:**
-Geode session state management provides its own clustering functionality. If you are using Geode, you should NOT turn on Tomcat clustering as well.
+<%=vars.product_name%> session state management provides its own clustering functionality. If you are using <%=vars.product_name%>, you should NOT turn on Tomcat clustering as well.
 
 To verify that the system is running, check the log file for a message similar to:
 
 ``` pre
 Mar 29, 2016 8:38:31 AM org.apache.geode.modules.session.bootstrap.AbstractCache
 lifecycleEvent
-INFO: Initializing Geode Modules
+INFO: Initializing <%=vars.product_name%> Modules
 Modules version: 1.0.0
 Java version:   1.0.0 user1 032916 2016-11-29 07:49:26 -0700
 javac 1.8.0_92
@@ -118,7 +118,7 @@ Source repository: develop
 Running on: /192.0.2.0, 8 cpu(s), x86_64 Mac OS X 10.11.4
 ```
 
-Information is also logged within the Geode log file, which by default is named `gemfire_modules.log`.
+Information is also logged within the <%=vars.product_name%> log file, which by default is named `gemfire_modules.log`.
 
 ## <a id="tc_setting_up_the_module__section_B2396FB0879248DBA85ADFDBBEFA987E" class="no-quick-link"></a>Configuring Non-Sticky Session Replication
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/tomcat_changing_gf_default_cfg.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/tomcat_changing_gf_default_cfg.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/tomcat_changing_gf_default_cfg.html.md.erb
index a6a5b46..539a992 100644
--- a/geode-docs/tools_modules/http_session_mgmt/tomcat_changing_gf_default_cfg.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/tomcat_changing_gf_default_cfg.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Changing the Default Geode Configuration in the Tomcat Module
----
+<% set_title("Changing the Default", product_name, "Configuration in the Tomcat Module") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -19,21 +17,21 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-By default, the Tomcat module will run Geode automatically with pre-configured settings. You can change these Geode settings.
+By default, the Tomcat module will run <%=vars.product_name%> automatically with pre-configured settings. You can change these <%=vars.product_name%> settings.
 
 Here are the default settings:
 
 -   Locators are used for member discovery.
 -   The region name is set to `gemfire_modules_sessions`.
 -   The cache region is replicated for peer-to-peer configurations and partitioned (with redundancy turned on) for client/server configurations.
--   Geode clients have local caching turned on and when the local cache needs to evict data, it will evict least-recently-used (LRU) data first.
+-   <%=vars.product_name%> clients have local caching turned on and when the local cache needs to evict data, it will evict least-recently-used (LRU) data first.
 
 **Note:**
 On the application server side, the default inactive interval for session expiration is set to 30 minutes. To change this value, refer to [Session Expiration](tc_additional_info.html#tc_additional_info__section_C7C4365EA2D84636AE1586F187007EC4).
 
-## <a id="tomcat_changing_gf_default_cfg__section_changing_sys_props" class="no-quick-link"></a>Changing Geode Distributed System Properties
+## <a id="tomcat_changing_gf_default_cfg__section_changing_sys_props" class="no-quick-link"></a>Changing <%=vars.product_name%> Distributed System Properties
 
-Geode system properties must be set by adding properties to Tomcat's `server.xml` file. When setting properties, use the following syntax:
+<%=vars.product_name%> system properties must be set by adding properties to Tomcat's `server.xml` file. When setting properties, use the following syntax:
 
 ``` pre
 <Listener 
@@ -55,19 +53,19 @@ If the `xxxLifecycleListener` is a `PeerToPeerCacheLifecycleListener`, then a mi
  /> 
 ```
 
-The list of configurable Tomcat's `server.xml` system properties include any of the properties that can be specified in Geode's `gemfire.properties` file. The following list contains some of the properties that can be configured.
+The list of configurable properties in Tomcat's `server.xml` file includes any of the properties that can be specified in <%=vars.product_name%>'s `gemfire.properties` file. The following list contains some of the properties that can be configured.
 
 | Parameter                                 | Description                                                                                                                                                                                 | Default                                                                 |
 |-------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------|
 | cache-xml-file                            | name of the cache configuration file                                                                                                                                                        | `cache-peer.xml` for peer-to-peer, `cache-client.xml` for client/server |
-| locators (only for peer-to-peer topology) | (required) list of locators (host\[port\]) used by Geode members; if a single locator listens on its default port, then set this value to `"localhost[10334]"` | empty string                                                            |
-| log-file                                  | name of the Geode log file                                                                                                                                     | `gemfire_modules.log`                                                   |
-| statistic-archive-file                    | name of the Geode statistics file                                                                                                                              | `gemfire_modules.gfs`                                                   |
-| statistic-sampling-enabled                | whether Geode statistics sampling is enabled                                                                                                                   | false                                                                   |
+| locators (only for peer-to-peer topology) | (required) list of locators (host\[port\]) used by <%=vars.product_name%> members; if a single locator listens on its default port, then set this value to `"localhost[10334]"` | empty string                                                            |
+| log-file                                  | name of the <%=vars.product_name%> log file                                                                                                                                     | `gemfire_modules.log`                                                   |
+| statistic-archive-file                    | name of the <%=vars.product_name%> statistics file                                                                                                                              | `gemfire_modules.gfs`                                                   |
+| statistic-sampling-enabled                | whether <%=vars.product_name%> statistics sampling is enabled                                                                                                                   | false                                                                   |
 
 For more information on these properties, along with the full list of properties, see the [Reference](../../reference/book_intro.html#reference).
 
-In addition to the standard Geode system properties, the following cache-specific properties can also be configured with the `LifecycleListener`.
+In addition to the standard <%=vars.product_name%> system properties, the following cache-specific properties can also be configured with the `LifecycleListener`.
 
 | Parameter              | Description                                                                                     | Default      |
 |------------------------|-------------------------------------------------------------------------------------------------|--------------|
@@ -75,11 +73,11 @@ In addition to the standard Geode system properties, the following cache-specifi
 | evictionHeapPercentage | percentage of heap at which session eviction begins                                             | 80.0         |
 | rebalance              | whether a rebalance of the cache should be done when the application server instance is started | false        |
 
-Although these properties are not part of the standard Geode system properties, they apply to the entire JVM instance and are therefore also handled by the `LifecycleListener`. For more information about managing the heap, refer to [Managing Heap and Off-heap Memory](../../managing/heap_use/heap_management.html#resource_manager).
+Although these properties are not part of the standard <%=vars.product_name%> system properties, they apply to the entire JVM instance and are therefore also handled by the `LifecycleListener`. For more information about managing the heap, refer to [Managing Heap and Off-heap Memory](../../managing/heap_use/heap_management.html#resource_manager).
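Taken together, the system properties and cache-specific properties above all appear as attributes on the same listener entry. A hypothetical sketch of a combined peer-to-peer configuration (attribute values are illustrative, not defaults):

``` pre
<Listener
  className="org.apache.geode.modules.session.catalina.PeerToPeerCacheLifecycleListener"
  cache-xml-file="cache-peer.xml"
  locators="localhost[10334]"
  log-file="gemfire_modules.log"
  statistic-archive-file="gemfire_modules.gfs"
  statistic-sampling-enabled="true"
  evictionHeapPercentage="80.0"
  rebalance="false"
/>
```

As with any `server.xml` change, the application server must be restarted for the listener attributes to take effect.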
 
 ## <a id="tomcat_changing_gf_default_cfg__section_changing_cache_config_props" class="no-quick-link"></a>Changing Cache Configuration Properties
 
-To edit Geode cache properties such as the name and the characteristics of the cache region, you add these properties to Tomcat's `context.xml` file. When adding properties, unless otherwise specified, use the following syntax:
+To edit <%=vars.product_name%> cache properties such as the name and the characteristics of the cache region, you add these properties to Tomcat's `context.xml` file. When adding properties, unless otherwise specified, use the following syntax:
 
 ``` pre
 <Manager 
@@ -105,7 +103,7 @@ This example creates a partitioned region by the name of "my\_region".
 The following parameters are the cache configuration parameters that can be added to Tomcat's `context.xml` file.
 
 <dt>**CommitSessionValve**</dt>
-<dd>Whether to wait until the end of the HTTP request to save all session attribute changes to the Geode cache; if the configuration line is present in the application's `context.xml` file, then only one put will be performed into the cache for the session per HTTP request. If the configuration line is not included, then the session is saved each time the `setAttribute` or `removeAttribute` method is invoked. As a consequence, multiple puts are performed into the cache during a single session. This configuration setting is recommended for any applications that modify the session frequently during a single HTTP request.</dd>
+<dd>Whether to wait until the end of the HTTP request to save all session attribute changes to the <%=vars.product_name%> cache; if the configuration line is present in the application's `context.xml` file, then only one put will be performed into the cache for the session per HTTP request. If the configuration line is not included, then the session is saved each time the `setAttribute` or `removeAttribute` method is invoked. As a consequence, multiple puts are performed into the cache during a single session. This configuration setting is recommended for any applications that modify the session frequently during a single HTTP request.</dd>
 
 Default: Set
 
@@ -120,7 +118,7 @@ To disable this configuration, remove or comment out the following line from Tom
 
 Default: `false`
 
-The Geode API equivalent to setting this parameter:
+The <%=vars.product_name%> API equivalent to setting this parameter:
 
 ``` pre
 // Create factory
@@ -134,7 +132,7 @@ factory.addCacheListener(new DebugCacheListener());
 
 Default: `false` for peer-to-peer, `true` for client/server
 
-The Geode API equivalent to setting this parameter:
+The <%=vars.product_name%> API equivalent to setting this parameter:
 
 ``` pre
 // For peer-to-peer members: 
@@ -148,7 +146,7 @@ ClientCache.createClientRegionFactory(CACHING_PROXY_HEAP_LRU);
 
 Default: REPLICATE for peer-to-peer, PARTITION\_REDUNDANT for client/server
 
-The Geode API equivalent to setting this parameter:
+The <%=vars.product_name%> API equivalent to setting this parameter:
 
 ``` pre
 // Creates a region factory for the specified region shortcut 
@@ -160,7 +158,7 @@ Cache.createRegionFactory(regionAttributesId);
 
 Default: gemfire\_modules\_sessions
 
-The Geode API equivalent to setting this parameter:
+The <%=vars.product_name%> API equivalent to setting this parameter:
 
 ``` pre
 // Creates a region with the specified name 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/tomcat_installing_the_module.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/tomcat_installing_the_module.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/tomcat_installing_the_module.html.md.erb
index de5d3ab..f3fc35c 100644
--- a/geode-docs/tools_modules/http_session_mgmt/tomcat_installing_the_module.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/tomcat_installing_the_module.html.md.erb
@@ -22,7 +22,7 @@ limitations under the License.
 This topic describes how to install the HTTP session management module for Tomcat.
 
 1.  If you do not already have Tomcat installed, download the desired version from the [Apache Website](http://tomcat.apache.org/).
-2.  The HTTP Session Management Module for Tomcat is included in the Geode installation package. After you install Apache Geode, you will find the module in the `tools/Modules` directory of the installation.
+2.  The HTTP Session Management Module for Tomcat is included in the <%=vars.product_name%> installation package. After you install <%=vars.product_name_long%>, you will find the module in the `tools/Modules` directory of the installation.
 
 3.  Unzip the module into the `$CATALINA_HOME` directory or wherever you installed the application server.
 4.  Copy the following jar files to the `lib` directory of your Tomcat server (`$CATALINA_HOME/lib`):

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/tomcat_setting_up_the_module.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/tomcat_setting_up_the_module.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/tomcat_setting_up_the_module.html.md.erb
index 73e104c..4707f3a 100644
--- a/geode-docs/tools_modules/http_session_mgmt/tomcat_setting_up_the_module.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/tomcat_setting_up_the_module.html.md.erb
@@ -19,14 +19,14 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-To use the Geode HTTP module with Tomcat application servers, you will need to modify Tomcat's `server.xml` and `context.xml` files.
+To use the <%=vars.product_name%> HTTP module with Tomcat application servers, you will need to modify Tomcat's `server.xml` and `context.xml` files.
 
 Configuration is slightly different depending on the topology you are setting up. Refer to [Common Topologies for HTTP Session Management](common_gemfire_topologies.html#common_gemfire_topologies) for more information.
 
 ## <a id="tomcat_setting_up_the_module__section_20294A39368D4402AEFB3D074E8D5887" class="no-quick-link"></a>Peer-to-Peer Setup
 
 <img src="../../images_svg/http_module_p2p_with_locator.svg" id="tomcat_setting_up_the_module__image_bsm_2gf_sv" class="image" />
-To run Geode in a peer-to-peer configuration, add the following line to Tomcat's `$CATALINA_HOME$/conf/server.xml` within the `<Server>` tag:
+To run <%=vars.product_name%> in a peer-to-peer configuration, add the following line to Tomcat's `$CATALINA_HOME/conf/server.xml` within the `<Server>` tag:
 
 ``` pre
 <Listener className="org.apache.geode.modules.session.catalina.
@@ -60,7 +60,7 @@ For Tomcat 8.0 and 8.5:
 
 <img src="../../images_svg/http_module_cs_with_locator.svg" id="tomcat_setting_up_the_module__image_aqn_jjf_sv" class="image" />
 
-To run Geode in a client/server configuration, the application server will operate as a Geode client. To do this, add the following line to `$CATALINA_HOME$/conf/server.xml` within the `<Server>` tag:
+To run <%=vars.product_name%> in a client/server configuration, the application server operates as a <%=vars.product_name%> client. To configure this, add the following line to `$CATALINA_HOME/conf/server.xml` within the `<Server>` tag:
 
 ``` pre
 <Listener className="org.apache.geode.modules.session.catalina.
@@ -90,7 +90,7 @@ For Tomcat 8.0 and 8.5:
                          Tomcat8DeltaSessionManager"/> 
 ```
 
-The application server operates as a Geode client in this configuration. With a similar environment to this example that is for a client/server set up,
+The application server operates as a <%=vars.product_name%> client in this configuration. Given an environment similar to the following client/server setup,
 
 ``` pre
 TC_VER=tomcat-8.0.30.C.RELEASE
@@ -112,14 +112,14 @@ $ gfsh start server --name=server1 --locators=localhost[10334] --server-port=0 \
 
 ## <a id="tomcat_setting_up_the_module__section_2B97047AB30A4C549D91AD258657FBA6" class="no-quick-link"></a>Starting the Application Server
 
-Once you've updated the configuration, you are now ready to start your tc Server or Tomcat instance. Refer to your application server documentation for starting the application server. Once started, Geode will automatically launch within the application server process.
+Once you've updated the configuration, you are ready to start your tc Server or Tomcat instance. Refer to your application server documentation for instructions on starting it. Once started, <%=vars.product_name%> launches automatically within the application server process.
 
 **Note:**
-Geode session state management provides its own clustering functionality. If you are using Geode, you should NOT turn on Tomcat clustering as well.
+<%=vars.product_name%> session state management provides its own clustering functionality. If you are using <%=vars.product_name%>, you should NOT turn on Tomcat clustering as well.
 
-## <a id="tomcat_setting_up_the_module__section_3E186713737E4D5383E23B41CDFED59B" class="no-quick-link"></a>Verifying that Geode Started
+## <a id="tomcat_setting_up_the_module__section_3E186713737E4D5383E23B41CDFED59B" class="no-quick-link"></a>Verifying that <%=vars.product_name%> Started
 
-You can verify that Geode has successfully started by inspecting the Tomcat log file. For example:
+You can verify that <%=vars.product_name%> has successfully started by inspecting the Tomcat log file. For example:
 
 ``` pre
 Nov 8, 2010 12:12:12 PM
@@ -130,4 +130,4 @@ INFO: Created GemFireCache[id = 2066231378; isClosing = false;
    copyOnRead = false; lockLease = 120; lockTimeout = 60]
 ```
 
-Information is also logged within the Geode log file, which by default is named `gemfire_modules.log`.
+Information is also logged within the <%=vars.product_name%> log file, which by default is named `gemfire_modules.log`.


[06/51] [abbrv] geode git commit: GEODE-3393: One-way SSL commit failing with userHome/.keystore not found. This now closes #682

Posted by kl...@apache.org.
GEODE-3393: One-way SSL commit failing with userHome/.keystore not found. This now closes #682

Signed-off-by: Galen O'Sullivan <go...@pivotal.io>


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/684f85d2
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/684f85d2
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/684f85d2

Branch: refs/heads/feature/GEODE-1279
Commit: 684f85d2881dd1b0b68bc49b303fb45a8b17452d
Parents: c1129c7
Author: Udo Kohlmeyer <uk...@pivotal.io>
Authored: Thu Aug 3 14:13:06 2017 -0700
Committer: Udo Kohlmeyer <uk...@pivotal.io>
Committed: Mon Aug 14 15:31:36 2017 -0700

----------------------------------------------------------------------
 .../apache/geode/internal/admin/SSLConfig.java  |  5 ++-
 .../geode/internal/net/SocketCreator.java       | 38 ++++++-----------
 .../net/SSLConfigurationFactoryJUnitTest.java   |  6 ++-
 .../internal/net/SocketCreatorJUnitTest.java    | 43 ++++++++++++++++++++
 4 files changed, 62 insertions(+), 30 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/684f85d2/geode-core/src/main/java/org/apache/geode/internal/admin/SSLConfig.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/admin/SSLConfig.java b/geode-core/src/main/java/org/apache/geode/internal/admin/SSLConfig.java
index 0171933..65e4694 100755
--- a/geode-core/src/main/java/org/apache/geode/internal/admin/SSLConfig.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/admin/SSLConfig.java
@@ -16,6 +16,7 @@ package org.apache.geode.internal.admin;
 
 import static org.apache.geode.distributed.ConfigurationProperties.*;
 
+import java.security.KeyStore;
 import java.util.Iterator;
 import java.util.Properties;
 
@@ -33,11 +34,11 @@ public class SSLConfig {
   private String ciphers = DistributionConfig.DEFAULT_SSL_CIPHERS;
   private boolean requireAuth = DistributionConfig.DEFAULT_SSL_REQUIRE_AUTHENTICATION;
   private String keystore = DistributionConfig.DEFAULT_SSL_KEYSTORE;
-  private String keystoreType = DistributionConfig.DEFAULT_CLUSTER_SSL_KEYSTORE_TYPE;
+  private String keystoreType = KeyStore.getDefaultType();
   private String keystorePassword = DistributionConfig.DEFAULT_SSL_KEYSTORE_PASSWORD;
   private String truststore = DistributionConfig.DEFAULT_SSL_TRUSTSTORE;
   private String truststorePassword = DistributionConfig.DEFAULT_SSL_TRUSTSTORE_PASSWORD;
-  private String truststoreType = DistributionConfig.DEFAULT_CLUSTER_SSL_KEYSTORE_TYPE;
+  private String truststoreType = KeyStore.getDefaultType();
   private String alias = null;
   private SecurableCommunicationChannel securableCommunicationChannel = null;
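The change above swaps the hard-coded `DEFAULT_CLUSTER_SSL_KEYSTORE_TYPE` for `KeyStore.getDefaultType()`, which resolves the JRE-wide `keystore.type` security property at runtime instead of pinning the format. A minimal stdlib-only sketch of what that call yields (the exact value depends on the JDK in use — historically `jks`, `pkcs12` on newer releases — so no specific value is asserted here):

```java
import java.security.KeyStore;
import java.security.Security;

public class DefaultKeystoreType {
    public static void main(String[] args) {
        // KeyStore.getDefaultType() reads the "keystore.type" security
        // property, falling back to "jks" if the property is unset
        String fromApi = KeyStore.getDefaultType();
        String fromProperty = Security.getProperty("keystore.type");
        System.out.println("getDefaultType(): " + fromApi);
        System.out.println("keystore.type property: " + fromProperty);
    }
}
```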
 

http://git-wip-us.apache.org/repos/asf/geode/blob/684f85d2/geode-core/src/main/java/org/apache/geode/internal/net/SocketCreator.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/net/SocketCreator.java b/geode-core/src/main/java/org/apache/geode/internal/net/SocketCreator.java
index dbe18a9..47fd766 100755
--- a/geode-core/src/main/java/org/apache/geode/internal/net/SocketCreator.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/net/SocketCreator.java
@@ -333,7 +333,6 @@ public class SocketCreator {
    * <p>
    * Caller must synchronize on the SocketCreator instance.
    */
-  @SuppressWarnings("hiding")
   private void initialize() {
     try {
       // set p2p values...
@@ -384,7 +383,7 @@ public class SocketCreator {
 
   /**
    * Creates & configures the SSLContext when SSL is enabled.
-   * 
+   *
    * @return new SSLContext configured using the given protocols & properties
    *
    * @throws GeneralSecurityException if security information can not be found
@@ -402,7 +401,7 @@ public class SocketCreator {
 
   /**
    * Used by CacheServerLauncher and SystemAdmin to read the properties from console
-   * 
+   *
    * @param env Map in which the properties are to be read from console.
    */
   public static void readSSLProperties(Map<String, String> env) {
@@ -413,7 +412,7 @@ public class SocketCreator {
    * Used to read the properties from console. AgentLauncher calls this method directly & ignores
    * gemfire.properties. CacheServerLauncher and SystemAdmin call this through
    * {@link #readSSLProperties(Map)} and do NOT ignore gemfire.properties.
-   * 
+   *
    * @param env Map in which the properties are to be read from console.
    * @param ignoreGemFirePropsFile if <code>false</code> existing gemfire.properties file is read,
    *        if <code>true</code>, properties from gemfire.properties file are ignored.
@@ -537,6 +536,10 @@ public class SocketCreator {
       NoSuchAlgorithmException, CertificateException, UnrecoverableKeyException {
     GfeConsoleReader consoleReader = GfeConsoleReaderFactory.getDefaultConsoleReader();
 
+    if (sslConfig.getKeystore() == null) {
+      return null;
+    }
+
     KeyManager[] keyManagers = null;
     String keyStoreType = sslConfig.getKeystoreType();
     if (StringUtils.isEmpty(keyStoreType)) {
@@ -611,7 +614,7 @@ public class SocketCreator {
 
     /**
      * Constructor.
-     * 
+     *
      * @param mgr The X509KeyManager used as a delegate
      * @param keyAlias The alias name of the server's keypair and supporting certificate chain
      */
@@ -791,7 +794,7 @@ public class SocketCreator {
   /**
    * Creates or bind server socket to a random port selected from tcp-port-range which is same as
    * membership-port-range.
-   * 
+   *
    * @param ba
    * @param backlog
    * @param isBindAddress
@@ -811,7 +814,7 @@ public class SocketCreator {
   /**
    * Creates or bind server socket to a random port selected from tcp-port-range which is same as
    * membership-port-range.
-   * 
+   *
    * @param ba
    * @param backlog
    * @param isBindAddress
@@ -1021,14 +1024,6 @@ public class SocketCreator {
               ex);
           throw ex;
         }
-      } catch (SSLException ex) {
-        logger
-            .fatal(
-                LocalizedMessage.create(
-                    LocalizedStrings.SocketCreator_SSL_ERROR_IN_CONNECTING_TO_PEER_0_1,
-                    new Object[] {socket.getInetAddress(), Integer.valueOf(socket.getPort())}),
-                ex);
-        throw ex;
       }
     }
   }
@@ -1108,16 +1103,7 @@ public class SocketCreator {
               .create(LocalizedStrings.SocketCreator_SSL_ERROR_IN_AUTHENTICATING_PEER), ex);
           throw ex;
         }
-      } catch (SSLException ex) {
-        logger
-            .fatal(
-                LocalizedMessage.create(
-                    LocalizedStrings.SocketCreator_SSL_ERROR_IN_CONNECTING_TO_PEER_0_1,
-                    new Object[] {socket.getInetAddress(), Integer.valueOf(socket.getPort())}),
-                ex);
-        throw ex;
       }
-
     }
   }
 
@@ -1219,7 +1205,7 @@ public class SocketCreator {
 
   /**
    * This method uses JNDI to look up an address in DNS and return its name
-   * 
+   *
    * @param addr
    *
    * @return the host name associated with the address or null if lookup isn't possible or there is
@@ -1295,7 +1281,7 @@ public class SocketCreator {
    * Fails Assertion if the conversion would result in <code>java.lang.UnknownHostException</code>.
    * <p>
    * Any leading slashes on host will be ignored.
-   * 
+   *
    * @param host string version the InetAddress
    *
    * @return the host converted to InetAddress instance

http://git-wip-us.apache.org/repos/asf/geode/blob/684f85d2/geode-core/src/test/java/org/apache/geode/internal/net/SSLConfigurationFactoryJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/net/SSLConfigurationFactoryJUnitTest.java b/geode-core/src/test/java/org/apache/geode/internal/net/SSLConfigurationFactoryJUnitTest.java
index 47f0d2b..cd7585c 100644
--- a/geode-core/src/test/java/org/apache/geode/internal/net/SSLConfigurationFactoryJUnitTest.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/net/SSLConfigurationFactoryJUnitTest.java
@@ -51,6 +51,8 @@ import org.apache.geode.internal.security.SecurableCommunicationChannel;
 import org.apache.geode.test.junit.categories.MembershipTest;
 import org.apache.geode.test.junit.categories.UnitTest;
 
+import java.security.KeyStore;
+
 @Category({UnitTest.class, MembershipTest.class})
 public class SSLConfigurationFactoryJUnitTest {
 
@@ -216,11 +218,11 @@ public class SSLConfigurationFactoryJUnitTest {
     properties.setProperty(CLUSTER_SSL_ENABLED, "true");
     properties.setProperty(MCAST_PORT, "0");
     System.setProperty(SSLConfigurationFactory.JAVAX_KEYSTORE, "keystore");
-    System.setProperty(SSLConfigurationFactory.JAVAX_KEYSTORE_TYPE, "JKS");
+    System.setProperty(SSLConfigurationFactory.JAVAX_KEYSTORE_TYPE, KeyStore.getDefaultType());
     System.setProperty(SSLConfigurationFactory.JAVAX_KEYSTORE_PASSWORD, "keystorePassword");
     System.setProperty(SSLConfigurationFactory.JAVAX_TRUSTSTORE, "truststore");
     System.setProperty(SSLConfigurationFactory.JAVAX_TRUSTSTORE_PASSWORD, "truststorePassword");
-    System.setProperty(SSLConfigurationFactory.JAVAX_TRUSTSTORE_TYPE, "JKS");
+    System.setProperty(SSLConfigurationFactory.JAVAX_TRUSTSTORE_TYPE, KeyStore.getDefaultType());
     DistributionConfigImpl distributionConfig = new DistributionConfigImpl(properties);
     SSLConfigurationFactory.setDistributionConfig(distributionConfig);
     SSLConfig sslConfig =

http://git-wip-us.apache.org/repos/asf/geode/blob/684f85d2/geode-core/src/test/java/org/apache/geode/internal/net/SocketCreatorJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/net/SocketCreatorJUnitTest.java b/geode-core/src/test/java/org/apache/geode/internal/net/SocketCreatorJUnitTest.java
new file mode 100644
index 0000000..b258ee1
--- /dev/null
+++ b/geode-core/src/test/java/org/apache/geode/internal/net/SocketCreatorJUnitTest.java
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.net;
+
+import org.apache.geode.internal.admin.SSLConfig;
+import org.apache.geode.test.junit.categories.UnitTest;
+import org.apache.geode.util.test.TestUtil;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category(UnitTest.class)
+public class SocketCreatorJUnitTest {
+
+  @Test
+  public void testCreateSocketCreatorWithKeystoreUnset() throws Exception {
+    SSLConfig testSSLConfig = new SSLConfig();
+    testSSLConfig.setEnabled(true);
+    testSSLConfig.setKeystore(null);
+    testSSLConfig.setKeystorePassword("");
+    testSSLConfig.setTruststore(getSingleKeyKeystore());
+    testSSLConfig.setTruststorePassword("password");
+    // GEODE-3393: This would fail with java.io.FileNotFoundException: $USER_HOME/.keystore
+    new SocketCreator(testSSLConfig);
+
+  }
+
+  private String getSingleKeyKeystore() {
+    return TestUtil.getResourcePath(getClass(), "/ssl/trusted.keystore");
+  }
+
+}
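The fix makes `getKeyManagers` return `null` when no keystore is configured, rather than letting the JDK attempt to open `$USER_HOME/.keystore`. Passing `null` key managers (and trust managers) to `SSLContext.init` is legal per the JDK API and simply means the context supplies no key material of its own. A quick stdlib sketch of that behavior (not Geode code):

```java
import javax.net.ssl.SSLContext;

public class NullKeyManagersDemo {
    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLS");
        // null KeyManagers and TrustManagers: the JDK substitutes defaults
        // without requiring a user keystore file on disk
        ctx.init(null, null, null);
        System.out.println("protocol = " + ctx.getProtocol());
        System.out.println("socket factory ready = " + (ctx.getSocketFactory() != null));
    }
}
```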


[30/51] [abbrv] geode git commit: GEODE-2886 : Updated testcase to fail if expected exception is not thrown. This closes #609

Posted by kl...@apache.org.
GEODE-2886 : Updated testcase to fail if expected exception is not
thrown.
This closes #609


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/83c19160
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/83c19160
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/83c19160

Branch: refs/heads/feature/GEODE-1279
Commit: 83c19160d2b9e5f31a9ae6e48c4b5f59a271a300
Parents: 3720151
Author: Amey Barve <ab...@apache.org>
Authored: Fri Aug 11 11:07:05 2017 +0530
Committer: Amey Barve <ab...@apache.org>
Committed: Thu Aug 17 15:48:36 2017 +0530

----------------------------------------------------------------------
 .../apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java   | 3 +++
 1 file changed, 3 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/83c19160/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java
index 2c46b4c..de5ad76 100644
--- a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java
+++ b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java
@@ -21,6 +21,7 @@ import static org.apache.geode.cache.lucene.test.LuceneTestUtilities.verifyQuery
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.util.ArrayList;
 import java.util.HashMap;
@@ -346,6 +347,8 @@ public class LuceneQueriesIntegrationTest extends LuceneIntegrationTest {
     try {
       result = luceneService.waitUntilFlushed(nonCreatedIndex, REGION_NAME, 60000,
           TimeUnit.MILLISECONDS);
+      fail(
+          "Should have thrown an exception because the queue does not exist for the non-created index");
     } catch (Exception ex) {
       assertEquals(ex.getMessage(),
           "java.lang.IllegalStateException: The AEQ does not exist for the index index2 region /index");
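The added `fail(...)` guards against the test passing vacuously when `waitUntilFlushed` never throws. Stripped of JUnit, the pattern looks like the following sketch (the `mustThrow` helper is hypothetical, standing in for the call under test):

```java
public class ExpectedExceptionPattern {
    // Stand-in for the call that is expected to fail
    static void mustThrow() {
        throw new IllegalStateException("The AEQ does not exist for the index");
    }

    public static void main(String[] args) {
        String outcome;
        try {
            mustThrow();
            // Reached only if no exception was thrown: the test must record a
            // failure here, otherwise a broken implementation passes silently
            outcome = "missing exception";
        } catch (IllegalStateException expected) {
            outcome = expected.getMessage();
        }
        System.out.println(outcome);
    }
}
```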


[32/51] [abbrv] geode git commit: GEODE-3329: Changed logging output of modify_war script

Posted by kl...@apache.org.
GEODE-3329: Changed logging output of modify_war script

Changed the modify_war script so that its output and error streams write to a log file instead of to standard out.


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/04c446ae
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/04c446ae
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/04c446ae

Branch: refs/heads/feature/GEODE-1279
Commit: 04c446aef2b12befe91a31a69ad8c4f2116d5c26
Parents: 7cbbf67
Author: David Anuta <da...@gmail.com>
Authored: Fri Jul 28 10:49:36 2017 -0700
Committer: Dan Smith <up...@apache.org>
Committed: Thu Aug 17 14:03:15 2017 -0700

----------------------------------------------------------------------
 .../geode/session/tests/GenericAppServerContainer.java    | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/04c446ae/geode-assembly/src/test/java/org/apache/geode/session/tests/GenericAppServerContainer.java
----------------------------------------------------------------------
diff --git a/geode-assembly/src/test/java/org/apache/geode/session/tests/GenericAppServerContainer.java b/geode-assembly/src/test/java/org/apache/geode/session/tests/GenericAppServerContainer.java
index 0694e6f..7a2cfaf 100644
--- a/geode-assembly/src/test/java/org/apache/geode/session/tests/GenericAppServerContainer.java
+++ b/geode-assembly/src/test/java/org/apache/geode/session/tests/GenericAppServerContainer.java
@@ -39,6 +39,7 @@ import org.junit.Assume;
  */
 public class GenericAppServerContainer extends ServerContainer {
   private final File modifyWarScript;
+  private final File modifyWarScriptLog;
 
   private static final String DEFAULT_GENERIC_APPSERVER_WAR_DIR = "/tmp/cargo_wars/";
 
@@ -58,6 +59,10 @@ public class GenericAppServerContainer extends ServerContainer {
     modifyWarScript = new File(install.getModulePath() + "/bin/modify_war");
     modifyWarScript.setExecutable(true);
 
+    // Setup modify_war script logging file
+    modifyWarScriptLog = new File(logDir + "/warScript.log");
+    modifyWarScriptLog.createNewFile();
+
     // Ignore tests that are running on windows, since they can't run the modify war script
     Assume.assumeFalse(System.getProperty("os.name").toLowerCase().contains("win"));
 
@@ -116,7 +121,7 @@ public class GenericAppServerContainer extends ServerContainer {
    * {@link #buildCommand()}
    *
    * The modified WAR file is sent to {@link #warFile}.
-   * 
+   *
    * @throws IOException If the command executed returns with a non-zero exit code.
    */
   private void modifyWarFile() throws IOException, InterruptedException {
@@ -126,6 +131,9 @@ public class GenericAppServerContainer extends ServerContainer {
     builder.inheritIO();
     // Setup the environment builder with the command
     builder.command(buildCommand());
+    // Redirect the command line logging to a file
+    builder.redirectError(modifyWarScriptLog);
+    builder.redirectOutput(modifyWarScriptLog);
     logger.info("Running command: " + String.join(" ", builder.command()));
 
     // Run the command
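The hunk above redirects the modify_war script's stdout and stderr to a log file via `ProcessBuilder`. A minimal standalone sketch of that pattern (the command and file names are illustrative, not the actual modify_war invocation; assumes a POSIX `sh` is available):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class RedirectDemo {
    // Mirrors the logging change above: stdout and stderr of a child process
    // are redirected to a log file instead of the parent's console.
    static String runAndCapture(String... command) throws IOException, InterruptedException {
        File log = File.createTempFile("warScript", ".log");
        log.deleteOnExit();
        ProcessBuilder builder = new ProcessBuilder(command);
        builder.redirectError(log);   // stderr -> log file
        builder.redirectOutput(log);  // stdout -> same log file
        builder.start().waitFor();
        return new String(Files.readAllBytes(log.toPath()));
    }

    public static void main(String[] args) throws Exception {
        // Illustrative command, not the modify_war script itself.
        System.out.print(runAndCapture("sh", "-c", "echo hello"));
    }
}
```

One detail of the patch worth noting: `inheritIO()` is called before the redirects, and the later `redirectOutput`/`redirectError` calls win for those two streams, so effectively only stdin remains inherited from the parent.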


[43/51] [abbrv] geode git commit: GEODE-3395 Variable-ize product version and name in user guide - Developing

Posted by kl...@apache.org.
http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/rebalancing_pr_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/rebalancing_pr_data.html.md.erb b/geode-docs/developing/partitioned_regions/rebalancing_pr_data.html.md.erb
index d40a4f2..66057a4 100644
--- a/geode-docs/developing/partitioned_regions/rebalancing_pr_data.html.md.erb
+++ b/geode-docs/developing/partitioned_regions/rebalancing_pr_data.html.md.erb
@@ -34,7 +34,7 @@ If you have transactions running in your system, be careful in planning your reb
 
 Kick off a rebalance using one of the following:
 
--   `gfsh` command. First, starting a `gfsh` prompt and connect to the Geode distributed system. Then type the following command:
+-   `gfsh` command. First, starting a `gfsh` prompt and connect to the <%=vars.product_name%> distributed system. Then type the following command:
 
     ``` pre
     gfsh>rebalance
@@ -70,11 +70,11 @@ The rebalancing operation runs asynchronously.
 
 By default, rebalancing is performed on one partitioned region at a time. For regions that have colocated data, the rebalancing works on the regions as a group, maintaining the data colocation between the regions.
 
-You can optionally rebalance multiple regions in parallel by setting the `gemfire.resource.manager.threads` system property. Setting this property to a value greater than 1 enables Geode to rebalance multiple regions in parallel, any time a rebalance operation is initiated using the API.
+You can optionally rebalance multiple regions in parallel by setting the `gemfire.resource.manager.threads` system property. Setting this property to a value greater than 1 enables <%=vars.product_name%> to rebalance multiple regions in parallel, any time a rebalance operation is initiated using the API.
 
 You can continue to use your partitioned regions normally while rebalancing is in progress. Read operations, write operations, and function executions continue while data is moving. If a function is executing on a local data set, you may see a performance degradation if that data moves to another host during function execution. Future function invocations are routed to the correct member.
 
-Geode tries to ensure that each member has the same percentage of its available space used for each partitioned region. The percentage is configured in the `partition-attributes` `local-max-memory` setting.
+<%=vars.product_name%> tries to ensure that each member has the same percentage of its available space used for each partitioned region. The percentage is configured in the `partition-attributes` `local-max-memory` setting.
 
 Partitioned region rebalancing:
 
@@ -89,7 +89,7 @@ You typically want to trigger rebalancing when capacity is increased or reduced
 
 You may also need to rebalance when:
 
--   You use redundancy for high availability and have configured your region to not automatically recover redundancy after a loss. In this case, Geode only restores redundancy when you invoke a rebalance. See [Configure High Availability for a Partitioned Region](configuring_ha_for_pr.html).
+-   You use redundancy for high availability and have configured your region to not automatically recover redundancy after a loss. In this case, <%=vars.product_name%> only restores redundancy when you invoke a rebalance. See [Configure High Availability for a Partitioned Region](configuring_ha_for_pr.html).
 -   You have uneven hashing of data. Uneven hashing can occur if your keys do not have a hash code method, which ensures uniform distribution, or if you use a `PartitionResolver` to colocate your partitioned region data (see [Colocate Data from Different Partitioned Regions](colocating_partitioned_region_data.html#colocating_partitioned_region_data)). In either case, some buckets may receive more data than others. Rebalancing can be used to even out the load between data stores by putting fewer buckets on members that are hosting large buckets.
 
 ## <a id="rebalancing_pr_data__section_495FEE48ED60433BADB7D36C73279C89" class="no-quick-link"></a>How to Simulate Region Rebalancing
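The rebalancing options described in this page's hunks can be sketched as a gfsh session (server name and thread count are illustrative; `--simulate` reports what a rebalance would move without moving it):

``` pre
gfsh>start server --name=server1 --J=-Dgemfire.resource.manager.threads=2
gfsh>rebalance --simulate
gfsh>rebalance
```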

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/set_enforce_unique_host.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/set_enforce_unique_host.html.md.erb b/geode-docs/developing/partitioned_regions/set_enforce_unique_host.html.md.erb
index 000216c..fdfaf5a 100644
--- a/geode-docs/developing/partitioned_regions/set_enforce_unique_host.html.md.erb
+++ b/geode-docs/developing/partitioned_regions/set_enforce_unique_host.html.md.erb
@@ -19,11 +19,11 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Configure Geode to use only unique physical machines for redundant copies of partitioned region data.
+Configure <%=vars.product_name%> to use only unique physical machines for redundant copies of partitioned region data.
 
 Understand how to set a member's `gemfire.properties` settings. See [Reference](../../reference/book_intro.html#reference).
 
-Configure your members so Geode always uses different physical machines for redundant copies of partitioned region data using the `gemfire.properties` setting `enforce-unique-host`. The default for this setting is false. 
+Configure your members so <%=vars.product_name%> always uses different physical machines for redundant copies of partitioned region data using the `gemfire.properties` setting `enforce-unique-host`. The default for this setting is false. 
 
 Example:
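The example itself falls outside the quoted hunk; a hedged `gemfire.properties` sketch of the setting this page describes:

``` pre
# gemfire.properties -- keep redundant copies on distinct physical machines
enforce-unique-host=true
```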
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/set_redundancy_zones.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/set_redundancy_zones.html.md.erb b/geode-docs/developing/partitioned_regions/set_redundancy_zones.html.md.erb
index e08be5d..d4d3838 100644
--- a/geode-docs/developing/partitioned_regions/set_redundancy_zones.html.md.erb
+++ b/geode-docs/developing/partitioned_regions/set_redundancy_zones.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Group members into redundancy zones so Geode will separate redundant data copies into different zones.
+Group members into redundancy zones so <%=vars.product_name%> will separate redundant data copies into different zones.
 
 Understand how to set a member's `gemfire.properties` settings. See [Reference](../../reference/book_intro.html#reference).
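As with `enforce-unique-host`, the zone assignment is a member-level property; a sketch (the zone name is illustrative):

``` pre
# gemfire.properties on every member in rack A
redundancy-zone=rackA
```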
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/using_custom_partition_resolvers.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/using_custom_partition_resolvers.html.md.erb b/geode-docs/developing/partitioned_regions/using_custom_partition_resolvers.html.md.erb
index 40b2237..44b45d8 100644
--- a/geode-docs/developing/partitioned_regions/using_custom_partition_resolvers.html.md.erb
+++ b/geode-docs/developing/partitioned_regions/using_custom_partition_resolvers.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-By default, Geode partitions each data entry into a bucket using a hashing policy on the key. Additionally, the physical location of the key-value pair is abstracted away from the application. You can change these policies for a partitioned region. You can provide your own data partitioning resolver and you can additionally specify which members host which data buckets.
+By default, <%=vars.product_name%> partitions each data entry into a bucket using a hashing policy on the key. Additionally, the physical location of the key-value pair is abstracted away from the application. You can change these policies for a partitioned region. You can provide your own data partitioning resolver and you can additionally specify which members host which data buckets.
 
 <a id="custom_partition_region_data__section_CF05CE974C9C4AF78430DA55601D2158"></a>
 **Note:**
@@ -40,7 +40,7 @@ For standard partitioning, use `org.apache.geode.cache.PartitionResolver`. To im
 
 **Procedure**
 
-1.  If using `org.apache.geode.cache.PartitionResolver` (standard partitioning) or `org.apache.geode.cache.FixedPartitionResolver` (fixed partitioning), implement the standard partitioning resolver or the fixed partitioning resolver in one of the following locations, listed here in the search order used by Geode:
+1.  If using `org.apache.geode.cache.PartitionResolver` (standard partitioning) or `org.apache.geode.cache.FixedPartitionResolver` (fixed partitioning), implement the standard partitioning resolver or the fixed partitioning resolver in one of the following locations, listed here in the search order used by <%=vars.product_name%>:
     -   **Custom class**. You provide this class as the partition resolver to the region creation.
     -   **Entry key**. You use the implementing key object for every operation on the region entries.
     -   **Cache callback argument**. This implementation restricts you to using methods that accept a cache callback argument to manage the region entries. For a full list of the methods that take a callback argument, see the `Region` Javadocs.
@@ -54,7 +54,7 @@ function that partitions the entry.
 2.  If you need the resolver's `getName` method, program that.
 3.  If *not* using the default implementation of the string-based
 partition resolver,
-program the resolver's `getRoutingObject` method to return the routing object for each entry, based on how you want to group the entries. Give the same routing object to entries you want to group together. Geode will place the entries in the same bucket.
+program the resolver's `getRoutingObject` method to return the routing object for each entry, based on how you want to group the entries. Give the same routing object to entries you want to group together. <%=vars.product_name%> will place the entries in the same bucket.
 
     **Note:**
     Only fields on the key should be used when creating the routing object. Do not use the value or additional metadata for this purpose.
@@ -130,7 +130,7 @@ program the resolver's `getRoutingObject` method to return the routing object fo
 
             You cannot specify a partition resolver using gfsh.
 
-    2.  Program the `FixedPartitionResolver` `getPartitionName` method to return the name of the partition for each entry, based on where you want the entries to reside. Geode uses `getPartitionName` and `getRoutingObject` to determine where an entry is placed.
+    2.  Program the `FixedPartitionResolver` `getPartitionName` method to return the name of the partition for each entry, based on where you want the entries to reside. <%=vars.product_name%> uses `getPartitionName` and `getRoutingObject` to determine where an entry is placed.
 
         **Note:**
         To group entries, assign every entry in the group the same routing object and the same partition name.
@@ -188,7 +188,7 @@ program the resolver's `getRoutingObject` method to return the routing object fo
         }
         ```
 
-5.  Configure or program the region so Geode finds your resolver for every operation that you perform on the region's entries. How you do this depends on where you chose to program your custom partitioning implementation (step 1).
+5.  Configure or program the region so <%=vars.product_name%> finds your resolver for every operation that you perform on the region's entries. How you do this depends on where you chose to program your custom partitioning implementation (step 1).
     -   **Custom class**. Define the class for the region at creation. The resolver will be used for every entry operation. Use one of these methods:
 
         **XML:**

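Step 3 of the procedure above says to return the same routing object for entries you want grouped in one bucket. A pure-Java sketch of such a grouping rule, shown as a standalone function for clarity (the key format `customerId|orderId` and all names here are hypothetical; in a real implementation this logic would live in `org.apache.geode.cache.PartitionResolver.getRoutingObject`):

```java
public class RoutingSketch {
    // Derive a routing object from the key only (per the note above, never
    // from the value): everything before '|' identifies the customer, so all
    // of one customer's orders share a routing object and land in one bucket.
    static String routingObjectFor(String key) {
        int sep = key.indexOf('|');
        return sep < 0 ? key : key.substring(0, sep);
    }

    public static void main(String[] args) {
        System.out.println(routingObjectFor("customer42|order7")); // customer42
    }
}
```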
http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/query_additional/advanced_querying.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_additional/advanced_querying.html.md.erb b/geode-docs/developing/query_additional/advanced_querying.html.md.erb
index 7054868..ce758a2 100644
--- a/geode-docs/developing/query_additional/advanced_querying.html.md.erb
+++ b/geode-docs/developing/query_additional/advanced_querying.html.md.erb
@@ -21,27 +21,27 @@ limitations under the License.
 
 This section includes advanced querying topics such as using query indexes, using query bind parameters, querying partitioned regions and query debugging.
 
--   **[Performance Considerations](../../developing/querying_basics/performance_considerations.html)**
+-   **[Performance Considerations](../querying_basics/performance_considerations.html)**
 
     This topic covers considerations for improving query performance.
 
--   **[Monitoring Queries for Low Memory](../../developing/querying_basics/monitor_queries_for_low_memory.html)**
+-   **[Monitoring Queries for Low Memory](../querying_basics/monitor_queries_for_low_memory.html)**
 
     The query monitoring feature prevents out-of-memory exceptions from occurring when you execute queries or create indexes.
 
--   **[Using Query Bind Parameters](../../developing/query_additional/using_query_bind_parameters.html)**
+-   **[Using Query Bind Parameters](../query_additional/using_query_bind_parameters.html)**
 
-    Using query bind parameters in Geode queries is similar to using prepared statements in SQL where parameters can be set during query execution. This allows user to build a query once and execute it multiple times by passing the query conditions during run time.
+    Using query bind parameters in <%=vars.product_name%> queries is similar to using prepared statements in SQL where parameters can be set during query execution. This allows user to build a query once and execute it multiple times by passing the query conditions during run time.
 
--   **[Working with Indexes](../../developing/query_index/query_index.html)**
+-   **[Working with Indexes](../query_index/query_index.html)**
 
-    The Geode query engine supports indexing. An index can provide significant performance gains for query execution.
+    The <%=vars.product_name%> query engine supports indexing. An index can provide significant performance gains for query execution.
 
--   **[Querying Partitioned Regions](../../developing/querying_basics/querying_partitioned_regions.html)**
+-   **[Querying Partitioned Regions](../querying_basics/querying_partitioned_regions.html)**
 
-    Geode allows you to manage and store large amounts of data across distributed nodes using partitioned regions. The basic unit of storage for a partitioned region is a bucket, which resides on a Geode node and contains all the entries that map to a single hashcode. In a typical partitioned region query, the system distributes the query to all buckets across all nodes, then merges the result sets and sends back the query results.
+    <%=vars.product_name%> allows you to manage and store large amounts of data across distributed nodes using partitioned regions. The basic unit of storage for a partitioned region is a bucket, which resides on a <%=vars.product_name%> node and contains all the entries that map to a single hashcode. In a typical partitioned region query, the system distributes the query to all buckets across all nodes, then merges the result sets and sends back the query results.
 
--   **[Query Debugging](../../developing/query_additional/query_debugging.html)**
+-   **[Query Debugging](../query_additional/query_debugging.html)**
 
     You can debug a specific query at the query level by adding the `<trace>` keyword before the query string that you want to debug.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/query_additional/literals.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_additional/literals.html.md.erb b/geode-docs/developing/query_additional/literals.html.md.erb
index e86371c..40c4434 100644
--- a/geode-docs/developing/query_additional/literals.html.md.erb
+++ b/geode-docs/developing/query_additional/literals.html.md.erb
@@ -21,7 +21,7 @@ limitations under the License.
 
 ## <a id="literals__section_BA2D0AC444EB45088F00D9E2C8A1DD06" class="no-quick-link"></a>Comparing Values With java.util.Date
 
-Geode supports the following literal types:
+<%=vars.product_name%> supports the following literal types:
 
 <dt>**boolean**</dt>
 <dd>A `boolean` value, either TRUE or FALSE</dd>
@@ -51,7 +51,7 @@ You can compare temporal literal values `DATE`, `TIME`, and `TIMESTAMP` with `ja
 
 ## <a id="literals__section_9EE6CFC410D2409188EDEAA43AC85851" class="no-quick-link"></a>Type Conversion
 
-The Geode query processor performs implicit type conversions and promotions under certain cases in order to evaluate expressions that contain different types. The query processor performs binary numeric promotion, method invocation conversion, and temporal type conversion.
+The <%=vars.product_name%> query processor performs implicit type conversions and promotions under certain cases in order to evaluate expressions that contain different types. The query processor performs binary numeric promotion, method invocation conversion, and temporal type conversion.
 
 ## <a id="literals__section_F5A3FC509FD04E09B5468BA94B814701" class="no-quick-link"></a>Binary Numeric Promotion
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/query_additional/operators.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_additional/operators.html.md.erb b/geode-docs/developing/query_additional/operators.html.md.erb
index e8cca37..a4a3d8d 100644
--- a/geode-docs/developing/query_additional/operators.html.md.erb
+++ b/geode-docs/developing/query_additional/operators.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode supports comparison, logical, unary, map, index, dot, and right arrow operators.
+<%=vars.product_name%> supports comparison, logical, unary, map, index, dot, and right arrow operators.
 
 ## <a id="operators__section_A3FB372F85D840D7A49CB95BD7FCA7C6" class="no-quick-link"></a>Comparison Operators
 
@@ -42,7 +42,7 @@ The logical operators AND and OR allow you to create more complex expressions by
 
 ## <a id="operators__section_A970AE75B0D24E0B9E1B61BE2D9842D8" class="no-quick-link"></a>Unary Operators
 
-Unary operators operate on a single value or expression, and have lower precedence than comparison operators in expressions. Geode supports the unary operator NOT. NOT is the negation operator, which changes the value of the operand to its opposite. So if an expression evaluates to TRUE, NOT changes it to FALSE. The operand must be a boolean.
+Unary operators operate on a single value or expression, and have lower precedence than comparison operators in expressions. <%=vars.product_name%> supports the unary operator NOT. NOT is the negation operator, which changes the value of the operand to its opposite. So if an expression evaluates to TRUE, NOT changes it to FALSE. The operand must be a boolean.
 
 ## <a id="operators__section_E78FB4FB3703471C8186A0E26D25F01F" class="no-quick-link"></a>Map and Index Operators
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/query_additional/query_debugging.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_additional/query_debugging.html.md.erb b/geode-docs/developing/query_additional/query_debugging.html.md.erb
index c404d6b..8ec8703 100644
--- a/geode-docs/developing/query_additional/query_debugging.html.md.erb
+++ b/geode-docs/developing/query_additional/query_debugging.html.md.erb
@@ -33,7 +33,7 @@ You can also write:
 <TRACE> select * from /exampleRegion
 ```
 
-When the query is executed, Geode will log a message in `$GEMFIRE_DIR/system.log` with the following information:
+When the query is executed, <%=vars.product_name%> will log a message in `$GEMFIRE_DIR/system.log` with the following information:
 
 ``` pre
 [info 2011/08/29 11:24:35.472 PDT CqServer <main> tid=0x1] Query Executed in 9.619656 ms; rowCount = 99; indexesUsed(0) "select *  from /exampleRegion" 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/query_additional/query_language_features.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_additional/query_language_features.html.md.erb b/geode-docs/developing/query_additional/query_language_features.html.md.erb
index 10ab0c6..e9d3602 100644
--- a/geode-docs/developing/query_additional/query_language_features.html.md.erb
+++ b/geode-docs/developing/query_additional/query_language_features.html.md.erb
@@ -22,20 +22,20 @@ limitations under the License.
 <a id="concept_5B8BA904DF2A41BEAA057017777D4E90__section_33F0FD791A2448CB812E8397828B33C2"></a>
 This section covers the following querying language features:
 
--   **[Supported Character Sets](../../developing/querying_basics/supported_character_sets.html)**
+-   **[Supported Character Sets](../querying_basics/supported_character_sets.html)**
 
--   **[Supported Keywords](../../developing/query_additional/supported_keywords.html)**
+-   **[Supported Keywords](supported_keywords.html)**
 
--   **[Case Sensitivity](../../developing/query_additional/case_sensitivity.html)**
+-   **[Case Sensitivity](case_sensitivity.html)**
 
--   **[Comments in Query Strings](../../developing/querying_basics/comments_in_query_strings.html)**
+-   **[Comments in Query Strings](../querying_basics/comments_in_query_strings.html)**
 
--   **[Query Language Grammar](../../developing/querying_basics/query_grammar_and_reserved_words.html)**
+-   **[Query Language Grammar](../querying_basics/query_grammar_and_reserved_words.html)**
 
--   **[Operators](../../developing/query_additional/operators.html)**
+-   **[Operators](operators.html)**
 
--   **[Reserved Words](../../developing/querying_basics/reserved_words.html)**
+-   **[Reserved Words](../querying_basics/reserved_words.html)**
 
--   **[Supported Literals](../../developing/query_additional/literals.html)**
+-   **[Supported Literals](literals.html)**
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/query_additional/using_query_bind_parameters.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_additional/using_query_bind_parameters.html.md.erb b/geode-docs/developing/query_additional/using_query_bind_parameters.html.md.erb
index 8fee56b..880d186 100644
--- a/geode-docs/developing/query_additional/using_query_bind_parameters.html.md.erb
+++ b/geode-docs/developing/query_additional/using_query_bind_parameters.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Using query bind parameters in Geode queries is similar to using prepared statements in SQL where parameters can be set during query execution. This allows user to build a query once and execute it multiple times by passing the query conditions during run time.
+Using query bind parameters in <%=vars.product_name%> queries is similar to using prepared statements in SQL where parameters can be set during query execution. This allows user to build a query once and execute it multiple times by passing the query conditions during run time.
 
 Query objects are thread-safe.
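The prepared-statement analogy above can be made concrete: the query string carries positional placeholders `$1`, `$2`, and the values are supplied at execution time. A sketch (region and field names are illustrative; the commented lines show how execution would look against a live cache and are not run here):

```java
public class BindParamsSketch {
    // OQL with positional bind parameters, built once and reusable with
    // different parameter arrays -- the Geode analogue of a SQL prepared statement.
    static String statusAboveIdQuery() {
        return "SELECT DISTINCT * FROM /exampleRegion p WHERE p.status = $1 AND p.id > $2";
    }

    public static void main(String[] args) {
        Object[] params = { "active", 100 };
        // Against a live cache (sketch only):
        //   Query query = cache.getQueryService().newQuery(statusAboveIdQuery());
        //   SelectResults results = (SelectResults) query.execute(params);
        System.out.println(statusAboveIdQuery() + "  // params supplied: " + params.length);
    }
}
```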
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/query_index/create_multiple_indexes.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_index/create_multiple_indexes.html.md.erb b/geode-docs/developing/query_index/create_multiple_indexes.html.md.erb
index e5a15c0..0f8f8af 100644
--- a/geode-docs/developing/query_index/create_multiple_indexes.html.md.erb
+++ b/geode-docs/developing/query_index/create_multiple_indexes.html.md.erb
@@ -61,7 +61,7 @@ Message : Region ' /r3' not found: from  /r3Occurred on following members
     List<Index> indexes = queryService.createDefinedIndexes();
 ```
 
-If one or more index population fails, Geode collect the Exceptions and continues to populate the rest of the indexes. The collected `Exceptions` are stored in a Map of index names and exceptions that can be accessed through `MultiIndexCreationException`.
+If one or more index population fails, <%=vars.product_name%> collect the Exceptions and continues to populate the rest of the indexes. The collected `Exceptions` are stored in a Map of index names and exceptions that can be accessed through `MultiIndexCreationException`.
 
 Index definitions are stored locally on the `gfsh` client. If you want to create a new set of indexes or if one or more of the index creations fail, you might want to clear the definitions stored by using `clear defined indexes`command. The defined indexes can be cleared by using the Java API:
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/query_index/creating_an_index.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_index/creating_an_index.html.md.erb b/geode-docs/developing/query_index/creating_an_index.html.md.erb
index 2438447..abac6f3 100644
--- a/geode-docs/developing/query_index/creating_an_index.html.md.erb
+++ b/geode-docs/developing/query_index/creating_an_index.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-The Geode `QueryService` API provides methods to create, list and remove the index. You can also use `gfsh` command-line interface to create, list and remove indexes, and use cache.xml to create an index.
+The <%=vars.product_name%> `QueryService` API provides methods to create, list and remove the index. You can also use `gfsh` command-line interface to create, list and remove indexes, and use cache.xml to create an index.
 
 ## <a id="indexing__section_565C080FBDD0443C8504DF372E3C32C8" class="no-quick-link"></a>Creating Indexes
 
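The gfsh side of index management that this page introduces can be sketched as (index name, expression, and region are illustrative):

``` pre
gfsh>create index --name=statusIdx --expression=status --region=/exampleRegion
gfsh>list indexes
gfsh>destroy index --name=statusIdx --region=/exampleRegion
```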

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/query_index/creating_hash_indexes.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_index/creating_hash_indexes.html.md.erb b/geode-docs/developing/query_index/creating_hash_indexes.html.md.erb
index bd97e6f..8ee8167 100644
--- a/geode-docs/developing/query_index/creating_hash_indexes.html.md.erb
+++ b/geode-docs/developing/query_index/creating_hash_indexes.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode supports the creation of hash indexes for the purposes of performing equality-based queries.
+<%=vars.product_name%> supports the creation of hash indexes for the purposes of performing equality-based queries.
 
 ## <a id="concept_5C7614F71F394C62ACA1BDC5684A7AC4__section_8A927DFB29364DA7856E7FE122FC1654" class="no-quick-link"></a>Why Create a HashIndex
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/query_index/indexing_guidelines.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_index/indexing_guidelines.html.md.erb b/geode-docs/developing/query_index/indexing_guidelines.html.md.erb
index 88c14b0..4470d97 100644
--- a/geode-docs/developing/query_index/indexing_guidelines.html.md.erb
+++ b/geode-docs/developing/query_index/indexing_guidelines.html.md.erb
@@ -31,7 +31,7 @@ When creating indexes, keep in mind the following:
 
 ## <a id="indexing_guidelines__section_A8AFAA243B5C43DD9BB9F9235A48AF53" class="no-quick-link"></a>Tips for Writing Queries that Use Indexes
 
-As with query processors that run against relational databases, the way a query is written can greatly affect execution performance. Among other things, whether indexes are used depends on how each query is stated. These are some of the things to consider when optimizing your Geode queries for performance:
+As with query processors that run against relational databases, the way a query is written can greatly affect execution performance. Among other things, whether indexes are used depends on how each query is stated. These are some of the things to consider when optimizing your <%=vars.product_name%> queries for performance:
 
 -   In general an index will improve query performance if the FROM clauses of the query and index match exactly.
 -   The query evaluation engine does not have a sophisticated cost-based optimizer. It has a simple optimizer which selects best index (one) or multiple indexes based on the index size and the operator that is being evaluated.

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/query_index/maintaining_indexes.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_index/maintaining_indexes.html.md.erb b/geode-docs/developing/query_index/maintaining_indexes.html.md.erb
index 838f380..8214076 100644
--- a/geode-docs/developing/query_index/maintaining_indexes.html.md.erb
+++ b/geode-docs/developing/query_index/maintaining_indexes.html.md.erb
@@ -52,7 +52,7 @@ Flight {
 }
 ```
 
-An index on the Passenger name field will have different memory space requirements in the cache than the Flight origin field even though they are both String field types. The internal data structure selected by Geode for index storage will depend on the field's level in the object. In this example, name is a top-level field and an index on name can be stored as a compact index. Since origin is a second-level field, any index that uses origin as the indexed expression will be stored as a non-compact index.
+An index on the Passenger name field will have different memory space requirements in the cache than the Flight origin field even though they are both String field types. The internal data structure selected by <%=vars.product_name%> for index storage will depend on the field's level in the object. In this example, name is a top-level field and an index on name can be stored as a compact index. Since origin is a second-level field, any index that uses origin as the indexed expression will be stored as a non-compact index.
 
 **Compact Index**
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/query_index/query_index.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_index/query_index.html.md.erb b/geode-docs/developing/query_index/query_index.html.md.erb
index 0f2c698..3d53e55 100644
--- a/geode-docs/developing/query_index/query_index.html.md.erb
+++ b/geode-docs/developing/query_index/query_index.html.md.erb
@@ -19,60 +19,60 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-The Geode query engine supports indexing. An index can provide significant performance gains for query execution.
+The <%=vars.product_name%> query engine supports indexing. An index can provide significant performance gains for query execution.
 
 <a id="indexing__section_565C080FBDD0443C8504DF372E3C32C8"></a>
 A query run without the aid of an index iterates through every object in the collection. If an index is available that matches part or all of the query specification, the query iterates only over the indexed set, and query processing time can be reduced.
 
--   **[Tips and Guidelines on Using Indexes](../../developing/query_index/indexing_guidelines.html)**
+-   **[Tips and Guidelines on Using Indexes](indexing_guidelines.html)**
 
     Optimizing your queries with indexes requires a cycle of careful planning, testing, and tuning. Poorly-defined indexes can degrade the performance of your queries instead of improving it. This section gives guidelines for index usage in the query service.
 
--   **[Creating, Listing and Removing Indexes](../../developing/query_index/creating_an_index.html)**
+-   **[Creating, Listing and Removing Indexes](creating_an_index.html)**
 
-    The Geode `QueryService` API provides methods to create, list and remove the index. You can also use `gfsh` command-line interface to create, list and remove indexes, and use cache.xml to create an index.
+    The <%=vars.product_name%> `QueryService` API provides methods to create, list, and remove indexes. You can also use the `gfsh` command-line interface to create, list, and remove indexes, and use `cache.xml` to create an index.
 
--   **[Creating Key Indexes](../../developing/query_index/creating_key_indexes.html)**
+-   **[Creating Key Indexes](creating_key_indexes.html)**
 
     Creating a key index is a good way to improve query performance when data is partitioned using a key or a field value. You can create key indexes by using the `createKeyIndex` method of the QueryService or by defining the index in `cache.xml`. Creating a key index makes the query service aware of the relationship between the values in the region and the keys in the region.
 
--   **[Creating Hash Indexes](../../developing/query_index/creating_hash_indexes.html)**
+-   **[Creating Hash Indexes](creating_hash_indexes.html)**
 
-    Geode supports the creation of hash indexes for the purposes of performing equality-based queries.
+    <%=vars.product_name%> supports the creation of hash indexes for the purposes of performing equality-based queries.
 
--   **[Creating Indexes on Map Fields ("Map Indexes")](../../developing/query_index/creating_map_indexes.html)**
+-   **[Creating Indexes on Map Fields ("Map Indexes")](creating_map_indexes.html)**
 
     To assist with the quick lookup of multiple values in a Map (or HashMap) type field, you can create an index (sometimes referred to as a "map index") on specific (or all) keys in that field.
 
--   **[Creating Multiple Indexes at Once](../../developing/query_index/create_multiple_indexes.html)**
+-   **[Creating Multiple Indexes at Once](create_multiple_indexes.html)**
 
    To speed index creation and promote efficiency, you can define multiple indexes and then create them all at once.
 
--   **[Maintaining Indexes (Synchronously or Asynchronously) and Index Storage](../../developing/query_index/maintaining_indexes.html)**
+-   **[Maintaining Indexes (Synchronously or Asynchronously) and Index Storage](maintaining_indexes.html)**
 
     Indexes are automatically kept current with the region data they reference. The region attribute `IndexMaintenanceSynchronous` specifies whether the region indexes are updated synchronously when a region is modified or asynchronously in a background thread.
 
--   **[Using Query Index Hints](../../developing/query_index/query_index_hints.html)**
+-   **[Using Query Index Hints](query_index_hints.html)**
 
-    You can use the hint keyword to allow Geode's query engine to prefer certain indexes.
+    You can use the hint keyword to allow <%=vars.product_name%>'s query engine to prefer certain indexes.
 
--   **[Using Indexes on Single Region Queries](../../developing/query_index/indexes_on_single_region_queries.html)**
+-   **[Using Indexes on Single Region Queries](indexes_on_single_region_queries.html)**
 
     Queries with one comparison operation may be improved with either a key or range index, depending on whether the attribute being compared is also the primary key.
 
--   **[Using Indexes with Equi-Join Queries](../../developing/query_index/using_indexes_with_equijoin_queries.html)**
+-   **[Using Indexes with Equi-Join Queries](using_indexes_with_equijoin_queries.html)**
 
     Equi-join queries are queries in which two regions are joined through an equality condition in the WHERE clause.
 
--   **[Using Indexes with Overflow Regions](../../developing/query_index/indexes_with_overflow_regions.html)**
+-   **[Using Indexes with Overflow Regions](indexes_with_overflow_regions.html)**
 
     You can use indexes when querying on overflow regions; however, there are caveats.
 
--   **[Using Indexes on Equi-Join Queries using Multiple Regions](../../developing/query_index/using_indexes_with_equijoin_queries_multiple_regions.html)**
+-   **[Using Indexes on Equi-Join Queries using Multiple Regions](using_indexes_with_equijoin_queries_multiple_regions.html)**
 
     To query across multiple regions, identify all equi-join conditions. Then, create as few indexes for the equi-join conditions as you can while still joining all regions.
 
--   **[Index Samples](../../developing/query_index/index_samples.html)**
+-   **[Index Samples](index_samples.html)**
 
     This topic provides code samples for creating query indexes.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/query_index/query_index_hints.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_index/query_index_hints.html.md.erb b/geode-docs/developing/query_index/query_index_hints.html.md.erb
index 9911014..e461367 100644
--- a/geode-docs/developing/query_index/query_index_hints.html.md.erb
+++ b/geode-docs/developing/query_index/query_index_hints.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-You can use the hint keyword to allow Geode's query engine to prefer certain indexes.
+You can use the hint keyword to allow <%=vars.product_name%>'s query engine to prefer certain indexes.
 
 In cases where one index is hinted in a query, the query engine filters off the hinted index (if possible) and then iterates and filters from the resulting values.
 
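The hint syntax covered in the hunk above places the hint before the SELECT keyword, and multiple index names may be supplied, comma-separated. The index, region, and field names here are hypothetical:

``` pre
<HINT 'nameIndex'> SELECT * FROM /exampleRegion p WHERE p.name = 'Bob Smith'

<HINT 'nameIndex', 'idIndex'> SELECT * FROM /exampleRegion p
WHERE p.name = 'Bob Smith' AND p.id > 10
```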

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/query_select/the_select_statement.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_select/the_select_statement.html.md.erb b/geode-docs/developing/query_select/the_select_statement.html.md.erb
index baaf6c1..d472889 100644
--- a/geode-docs/developing/query_select/the_select_statement.html.md.erb
+++ b/geode-docs/developing/query_select/the_select_statement.html.md.erb
@@ -80,7 +80,7 @@ When a struct is returned, the name of each field in the struct is determined fo
 
 ## <a id="concept_85AE7D6B1E2941ED8BD2A8310A81753E__section_972EE73A6F3E4427B6A99DB4EDF5860D" class="no-quick-link"></a>DISTINCT
 
-Use the DISTINCT keyword if you want to limit the results set to unique rows. Note that in the current version of Geode you are no longer required to use the DISTINCT keyword in your SELECT statement.
+Use the DISTINCT keyword if you want to limit the result set to unique rows. Note that in the current version of <%=vars.product_name%> you are no longer required to use the DISTINCT keyword in your SELECT statement.
 
 ``` pre
 SELECT DISTINCT * FROM /exampleRegion
@@ -124,7 +124,7 @@ If you are using ORDER BY queries, you must implement the equals and hashCode me
 
 ## <a id="concept_85AE7D6B1E2941ED8BD2A8310A81753E__section_69DCAD624E9640028BC86FD67649DEB2" class="no-quick-link"></a>Preset Query Functions
 
-Geode provides several built-in functions for evaluating or filtering data returned from a query. They include the following:
+<%=vars.product_name%> provides several built-in functions for evaluating or filtering data returned from a query. They include the following:
 
 <table>
 <colgroup>

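The built-in query functions referred to in the last hunk above include ELEMENT, IS_DEFINED, IS_UNDEFINED, NVL, and TO_DATE. Two hedged examples against a hypothetical region:

``` pre
ELEMENT(SELECT DISTINCT * FROM /exampleRegion p WHERE p.id = 1).status

SELECT DISTINCT * FROM /exampleRegion p WHERE IS_UNDEFINED(p.status)
```

The first extracts the single element of a one-element result set so that a field can be accessed on it directly; the second selects entries whose status field evaluates to UNDEFINED.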
http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/query_select/the_where_clause.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_select/the_where_clause.html.md.erb b/geode-docs/developing/query_select/the_where_clause.html.md.erb
index 834bae9..6fe4498 100644
--- a/geode-docs/developing/query_select/the_where_clause.html.md.erb
+++ b/geode-docs/developing/query_select/the_where_clause.html.md.erb
@@ -162,7 +162,7 @@ SELECT * FROM /exampleRegion portfolio1, portfolio1.positions.values positions1,
 
 ## <a id="the_where_clause__section_D91E0B06FFF6431490CC0BFA369425AD" class="no-quick-link"></a>LIKE
 
-Geode offers limited support for the LIKE predicate. LIKE can be used to mean 'equals to'. If you terminate the string with a wildcard ('%'), it behaves like 'starts with'. You can also place a wildcard (either '%' or '\_') at any other position in the comparison string. You can escape the wildcard characters to represent the characters themselves.
+<%=vars.product_name%> offers limited support for the LIKE predicate. LIKE can be used to mean 'equals to'. If you terminate the string with a wildcard ('%'), it behaves like 'starts with'. You can also place a wildcard (either '%' or '\_') at any other position in the comparison string. You can escape the wildcard characters to represent the characters themselves.
 
 **Note:**
 The '\*' wildcard is not supported in OQL LIKE predicates.
@@ -318,7 +318,7 @@ One problem is that you cannot create indexes on Set or List types (collection t
 
 ## <a id="the_where_clause__section_E7206D045BEC4F67A8D2B793922BF213" class="no-quick-link"></a>Double.NaN and Float.NaN Comparisons
 
-The comparison behavior of Double.NaN and Float.NaN within Geode queries follow the semantics of the JDK methods Float.compareTo and Double.compareTo.
+The comparison behavior of Double.NaN and Float.NaN within <%=vars.product_name%> queries follows the semantics of the JDK methods Float.compareTo and Double.compareTo.
 
 In summary, the comparisons differ in the following ways from those performed by the Java language numerical comparison operators (<, <=, ==, >=, >) when applied to primitive double [float] values:
 
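The LIKE behaviors described in the first hunk above, sketched against a hypothetical region:

``` pre
SELECT * FROM /exampleRegion p WHERE p.status LIKE 'active'

SELECT * FROM /exampleRegion p WHERE p.status LIKE 'act%'

SELECT * FROM /exampleRegion p WHERE p.status LIKE '_ctive'
```

The first behaves as 'equals to', the second as 'starts with', and the third matches any single character in the first position.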

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/querying_basics/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/querying_basics/chapter_overview.html.md.erb b/geode-docs/developing/querying_basics/chapter_overview.html.md.erb
index 328cc46..b7291c8 100644
--- a/geode-docs/developing/querying_basics/chapter_overview.html.md.erb
+++ b/geode-docs/developing/querying_basics/chapter_overview.html.md.erb
@@ -27,11 +27,11 @@ Since Geode regions are key-value stores where values can range from simple byte
 
     This topic answers some frequently asked questions on querying functionality. It provides examples to help you get started with Geode querying.
 
--   **[Basic Querying](../../developing/querying_basics/query_basics.html)**
+-   **[Basic Querying](query_basics.html)**
 
     This section provides a high-level introduction to Geode querying such as building a query string and describes query language features.
 
--   **[Advanced Querying](../../developing/query_additional/advanced_querying.html)**
+-   **[Advanced Querying](../query_additional/advanced_querying.html)**
 
     This section includes advanced querying topics such as using query indexes, using query bind parameters, querying partitioned regions and query debugging.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/querying_basics/query_basics.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/querying_basics/query_basics.html.md.erb b/geode-docs/developing/querying_basics/query_basics.html.md.erb
index 4121140..b2928ff 100644
--- a/geode-docs/developing/querying_basics/query_basics.html.md.erb
+++ b/geode-docs/developing/querying_basics/query_basics.html.md.erb
@@ -19,12 +19,12 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-This section provides a high-level introduction to Geode querying such as building a query string and describes query language features.
+This section provides a high-level introduction to <%=vars.product_name%> querying such as building a query string and describes query language features.
 
 <a id="querying_with_oql__section_828A9660B5014DCAA883A58A45E6B51A"></a>
-Geode provides a SQL-like querying language that allows you to access data stored in Geode regions. Since Geode regions are key-value stores where values can range from simple byte arrays to complex nested objects, Geode uses a query syntax based on OQL (Object Query Language) to query region data. OQL and SQL have many syntactical similarities, however they have significant differences. For example, while OQL does not offer all of the capabilities of SQL like aggregates, OQL does allow you to execute queries on complex object graphs, query object attributes and invoke object methods.
+<%=vars.product_name%> provides a SQL-like querying language that allows you to access data stored in <%=vars.product_name%> regions. Since <%=vars.product_name%> regions are key-value stores where values can range from simple byte arrays to complex nested objects, <%=vars.product_name%> uses a query syntax based on OQL (Object Query Language) to query region data. OQL and SQL have many syntactical similarities; however, they have significant differences. For example, while OQL does not offer all of the capabilities of SQL, such as aggregates, OQL does allow you to execute queries on complex object graphs, query object attributes, and invoke object methods.
 
-The syntax of a typical Geode OQL query is:
+The syntax of a typical <%=vars.product_name%> OQL query is:
 
 ``` pre
 [IMPORT package]
@@ -34,24 +34,24 @@ FROM collection1, [collection2, …]
 [ORDER BY order_criteria [desc]]
 ```
 
-Therefore, a simple Geode OQL query resembles the following:
+Therefore, a simple <%=vars.product_name%> OQL query resembles the following:
 
 ``` pre
 SELECT DISTINCT * FROM /exampleRegion WHERE status = ‘active’
 ```
 
-An important characteristic of Geode querying to note is that by default, Geode queries on the values of a region and not on keys. To obtain keys from a region, you must use the keySet path expression on the queried region. For example, `/exampleRegion.keySet`.
+An important characteristic of <%=vars.product_name%> querying to note is that by default, <%=vars.product_name%> queries on the values of a region and not on keys. To obtain keys from a region, you must use the keySet path expression on the queried region. For example, `/exampleRegion.keySet`.
 
-For those new to the Geode querying, see also the [Geode Querying FAQ and Examples](../../getting_started/querying_quick_reference.html#reference_D5CE64F5FD6F4A808AEFB748C867189E).
+For those new to <%=vars.product_name%> querying, see also the [<%=vars.product_name%> Querying FAQ and Examples](../../getting_started/querying_quick_reference.html#reference_D5CE64F5FD6F4A808AEFB748C867189E).
 
--   **[Advantages of OQL](../../developing/querying_basics/oql_compared_to_sql.html)**
+-   **[Advantages of OQL](oql_compared_to_sql.html)**
 
--   **[Writing and Executing a Query in Geode](../../developing/querying_basics/running_a_query.html)**
+-   **[Writing and Executing a Query in <%=vars.product_name%>](running_a_query.html)**
 
--   **[Building a Query String](../../developing/querying_basics/what_is_a_query_string.html)**
+-   **[Building a Query String](what_is_a_query_string.html)**
 
--   **[OQL Syntax and Semantics](../../developing/query_additional/query_language_features.html)**
+-   **[OQL Syntax and Semantics](../query_additional/query_language_features.html)**
 
--   **[Query Language Restrictions and Unsupported Features](../../developing/querying_basics/restrictions_and_unsupported_features.html)**
+-   **[Query Language Restrictions and Unsupported Features](restrictions_and_unsupported_features.html)**
 
 
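The values-by-default behavior noted in the hunk above can be shown with the keySet path expression (region name hypothetical):

``` pre
SELECT DISTINCT * FROM /exampleRegion

SELECT DISTINCT * FROM /exampleRegion.keySet
```

The first query iterates over the region's values; the second returns the region's keys.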

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/querying_basics/querying_partitioned_regions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/querying_basics/querying_partitioned_regions.html.md.erb b/geode-docs/developing/querying_basics/querying_partitioned_regions.html.md.erb
index 0105d82..882fb9a 100644
--- a/geode-docs/developing/querying_basics/querying_partitioned_regions.html.md.erb
+++ b/geode-docs/developing/querying_basics/querying_partitioned_regions.html.md.erb
@@ -19,23 +19,23 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode allows you to manage and store large amounts of data across distributed nodes using partitioned regions. The basic unit of storage for a partitioned region is a bucket, which resides on a Geode node and contains all the entries that map to a single hashcode. In a typical partitioned region query, the system distributes the query to all buckets across all nodes, then merges the result sets and sends back the query results.
+<%=vars.product_name%> allows you to manage and store large amounts of data across distributed nodes using partitioned regions. The basic unit of storage for a partitioned region is a bucket, which resides on a <%=vars.product_name%> node and contains all the entries that map to a single hashcode. In a typical partitioned region query, the system distributes the query to all buckets across all nodes, then merges the result sets and sends back the query results.
 
 <a id="querying_partitioned_regions__section_4C603563DEDC4303818FB8F894470457"></a>
-The following list summarizes the querying functionality supported by Geode for partitioned regions:
+The following list summarizes the querying functionality supported by <%=vars.product_name%> for partitioned regions:
 
 -   **Ability to target specific nodes in a query**. If you know that a specific bucket contains the data that you want to query, you can use a function to ensure that your query only runs on the specific node that holds the data. This can greatly improve query efficiency. The ability to query data on a specific node is only available if you are using functions and if the function is executed on one single region. In order to do this, you need to use `Query.execute(RegionFunctionContext context)`. See the [Java API](/releases/latest/javadoc/org/apache/geode/cache/query/Query.html) and [Querying a Partitioned Region on a Single Node](../query_additional/query_on_a_single_node.html#concept_30B18A6507534993BD55C2C9E0544A97) for more details.
--   **Ability to optimize partitioned region query performance using key indexes**. You can improve query performance on data that is partitioned by key or a field value by creating a key index and then executing the query using use `Query.execute(RegionFunctionContext                         context)` with the key or field value used as filter. See the [Java API](/releases/latest/javadoc/org/apache/geode/cache/query/Query.html) and [Optimizing Queries on Data Partitioned by a Key or Field Value](../query_additional/partitioned_region_key_or_field_value.html#concept_3010014DFBC9479783B2B45982014454) for more details.
+-   **Ability to optimize partitioned region query performance using key indexes**. You can improve query performance on data that is partitioned by key or a field value by creating a key index and then executing the query using `Query.execute(RegionFunctionContext context)` with the key or field value used as the filter. See the [Java API](/releases/latest/javadoc/org/apache/geode/cache/query/Query.html) and [Optimizing Queries on Data Partitioned by a Key or Field Value](../query_additional/partitioned_region_key_or_field_value.html#concept_3010014DFBC9479783B2B45982014454) for more details.
 -   **Ability to perform equi-join queries between partitioned regions and between partitioned regions and replicated regions**. Join queries between partitioned regions and between partitioned regions and replicated regions are supported through the function service. In order to perform equi-join operations on partitioned regions or partitioned regions and replicated regions, the partitioned regions must be colocated, and you need to use `Query.execute(RegionFunctionContext context)`. See the [Java API](/releases/latest/javadoc/org/apache/geode/cache/query/Query.html) and [Performing an Equi-Join Query on Partitioned Regions](../partitioned_regions/join_query_partitioned_regions.html#concept_B930D276F49541F282A2CFE639F107DD) for more details.
 
--   **[Using ORDER BY on Partitioned Regions](../../developing/query_additional/order_by_on_partitioned_regions.html)**
+-   **[Using ORDER BY on Partitioned Regions](../query_additional/order_by_on_partitioned_regions.html)**
 
--   **[Querying a Partitioned Region on a Single Node](../../developing/query_additional/query_on_a_single_node.html)**
+-   **[Querying a Partitioned Region on a Single Node](../query_additional/query_on_a_single_node.html)**
 
--   **[Optimizing Queries on Data Partitioned by a Key or Field Value](../../developing/query_additional/partitioned_region_key_or_field_value.html)**
+-   **[Optimizing Queries on Data Partitioned by a Key or Field Value](../query_additional/partitioned_region_key_or_field_value.html)**
 
--   **[Performing an Equi-Join Query on Partitioned Regions](../../developing/partitioned_regions/join_query_partitioned_regions.html)**
+-   **[Performing an Equi-Join Query on Partitioned Regions](../partitioned_regions/join_query_partitioned_regions.html)**
 
--   **[Partitioned Region Query Restrictions](../../developing/query_additional/partitioned_region_query_restrictions.html)**
+-   **[Partitioned Region Query Restrictions](../query_additional/partitioned_region_query_restrictions.html)**
 
 
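A hedged Java sketch of the `Query.execute(RegionFunctionContext)` pattern referenced in the hunk above. The class, region, and field names are hypothetical, and the fragment assumes the Geode function and query APIs are on the classpath:

``` pre
public class ActiveEntriesFunction implements Function {
  public void execute(FunctionContext context) {
    RegionFunctionContext rfc = (RegionFunctionContext) context;
    QueryService qs = CacheFactory.getAnyInstance().getQueryService();
    Query query = qs.newQuery(
        "SELECT * FROM /exampleRegion WHERE status = 'active'");
    try {
      // Passing the RegionFunctionContext restricts execution to the
      // buckets targeted by the function call's filter keys.
      SelectResults results = (SelectResults) query.execute(rfc);
      context.getResultSender().lastResult(results.asList());
    } catch (Exception e) {
      throw new FunctionException(e);
    }
  }
  public String getId() { return "ActiveEntriesFunction"; }
}
```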

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/querying_basics/reserved_words.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/querying_basics/reserved_words.html.md.erb b/geode-docs/developing/querying_basics/reserved_words.html.md.erb
index 7a23f91..67829b9 100644
--- a/geode-docs/developing/querying_basics/reserved_words.html.md.erb
+++ b/geode-docs/developing/querying_basics/reserved_words.html.md.erb
@@ -21,7 +21,7 @@ limitations under the License.
 
 ## <a id="concept_4F288B1F9579422FA481FBE2C3ADD007__section_3415163C3EFB46A6BE873E2606C9DE0F" class="no-quick-link"></a>Reserved Words
 
-These words are reserved for the query language and may not be used as identifiers. The words with asterisk (`*`) after them are not currently used by Geode, but are reserved for future implementation.
+These words are reserved for the query language and may not be used as identifiers. The words with an asterisk (`*`) after them are not currently used by <%=vars.product_name%>, but are reserved for future implementation.
 
 <table>
 <colgroup>

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/querying_basics/restrictions_and_unsupported_features.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/querying_basics/restrictions_and_unsupported_features.html.md.erb b/geode-docs/developing/querying_basics/restrictions_and_unsupported_features.html.md.erb
index d315461..0927a04 100644
--- a/geode-docs/developing/querying_basics/restrictions_and_unsupported_features.html.md.erb
+++ b/geode-docs/developing/querying_basics/restrictions_and_unsupported_features.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-At a high level, Geode does not support the following querying features:
+At a high level, <%=vars.product_name%> does not support the following querying features:
 
 -   Indexes targeted for joins across more than one region are not supported
 -   Static method invocations. For example, the following query is invalid:

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/querying_basics/running_a_query.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/querying_basics/running_a_query.html.md.erb b/geode-docs/developing/querying_basics/running_a_query.html.md.erb
index 6ddb1de..985a231 100644
--- a/geode-docs/developing/querying_basics/running_a_query.html.md.erb
+++ b/geode-docs/developing/querying_basics/running_a_query.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Writing and Executing a Query in Geode
----
+<% set_title("Writing and Executing a Query in", product_name) %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -20,7 +18,7 @@ limitations under the License.
 -->
 
 <a id="running_a_querying__section_C285160AF91C4486A39444C3A22D6475"></a>
-The Geode QueryService provides methods to create the Query object. You can then use the Query object to perform query-related operations.
+The <%=vars.product_name%> QueryService provides methods to create the Query object. You can then use the Query object to perform query-related operations.
 
 The QueryService instance you should use depends on whether you are querying the local cache of an application or if you want your application to query the server cache.
 
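The two QueryService sources described above, sketched as alternatives (the pool name is hypothetical, and only one of the two lookups would appear in real code):

``` pre
// Querying the application's local cache:
QueryService queryService = cache.getQueryService();

// Querying the server cache from a client, through a pool:
QueryService queryService = PoolManager.find("serverPool").getQueryService();

Query query = queryService.newQuery(
    "SELECT DISTINCT * FROM /exampleRegion WHERE status = 'active'");
Object results = query.execute();
```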

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/querying_basics/supported_character_sets.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/querying_basics/supported_character_sets.html.md.erb b/geode-docs/developing/querying_basics/supported_character_sets.html.md.erb
index 50b9c87..e5db399 100644
--- a/geode-docs/developing/querying_basics/supported_character_sets.html.md.erb
+++ b/geode-docs/developing/querying_basics/supported_character_sets.html.md.erb
@@ -19,6 +19,6 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode query language supports the full ASCII and Unicode character sets.
+The <%=vars.product_name%> query language supports the full ASCII and Unicode character sets.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/querying_basics/what_is_a_query_string.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/querying_basics/what_is_a_query_string.html.md.erb b/geode-docs/developing/querying_basics/what_is_a_query_string.html.md.erb
index eb79645..f12729b 100644
--- a/geode-docs/developing/querying_basics/what_is_a_query_string.html.md.erb
+++ b/geode-docs/developing/querying_basics/what_is_a_query_string.html.md.erb
@@ -37,14 +37,14 @@ A query string follows the rules specified by the query language and grammar. It
 
 The components listed above can all be part of the query string, but none of the components are required. At a minimum, a query string contains an expression that can be evaluated against specified data.
 
-The following sections provide guidelines for the query language building blocks that are used when writing typical Geode queries.
+The following sections provide guidelines for the query language building blocks that are used when writing typical <%=vars.product_name%> queries.
 
--   **[IMPORT Statement](../../developing/query_select/the_import_statement.html)**
+-   **[IMPORT Statement](../query_select/the_import_statement.html)**
 
--   **[FROM Clause](../../developing/query_select/the_from_clause.html)**
+-   **[FROM Clause](../query_select/the_from_clause.html)**
 
--   **[WHERE Clause](../../developing/query_select/the_where_clause.html)**
+-   **[WHERE Clause](../query_select/the_where_clause.html)**
 
--   **[SELECT Statement](../../developing/query_select/the_select_statement.html)**
+-   **[SELECT Statement](../query_select/the_select_statement.html)**
 
 
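A hedged example assembling the components listed in the hunk above; the imported class and the region are hypothetical:

``` pre
IMPORT org.apache.geode.examples.Portfolio;
SELECT DISTINCT p.id, p.status
FROM /exampleRegion p TYPE Portfolio
WHERE p.status = 'active'
ORDER BY p.id
```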

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/region_options/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/region_options/chapter_overview.html.md.erb b/geode-docs/developing/region_options/chapter_overview.html.md.erb
index e48ac79..be1ac36 100644
--- a/geode-docs/developing/region_options/chapter_overview.html.md.erb
+++ b/geode-docs/developing/region_options/chapter_overview.html.md.erb
@@ -19,21 +19,21 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-The Apache Geode data storage and distribution models put your data in the right place at the right time. You should understand all the options for data storage in Geode before you configure your data regions.
+The <%=vars.product_name_long%> data storage and distribution models put your data in the right place at the right time. You should understand all the options for data storage in <%=vars.product_name%> before you configure your data regions.
 
--   **[Storage and Distribution Options](../../developing/region_options/storage_distribution_options.html)**
+-   **[Storage and Distribution Options](storage_distribution_options.html)**
 
-    Geode provides several models for data storage and distribution, including partitioned or replicated regions as well as distributed or non-distributed regions (local cache storage).
+    <%=vars.product_name%> provides several models for data storage and distribution, including partitioned or replicated regions as well as distributed or non-distributed regions (local cache storage).
 
--   **[Region Types](../../developing/region_options/region_types.html)**
+-   **[Region Types](region_types.html)**
 
     Region types define region behavior within a single distributed system. You have various options for region data storage and distribution.
 
--   **[Region Data Stores and Data Accessors](../../developing/region_options/data_hosts_and_accessors.html)**
+-   **[Region Data Stores and Data Accessors](data_hosts_and_accessors.html)**
 
     Understand the difference between members that store data for a region and members that act only as data accessors to the region.
 
--   **[Creating Regions Dynamically](../../developing/region_options/dynamic_region_creation.html)**
+-   **[Creating Regions Dynamically](dynamic_region_creation.html)**
 
     You can dynamically create regions in your application code and automatically instantiate them on members of a distributed system.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/region_options/dynamic_region_creation.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/region_options/dynamic_region_creation.html.md.erb b/geode-docs/developing/region_options/dynamic_region_creation.html.md.erb
index 2652e3d..c26422f 100644
--- a/geode-docs/developing/region_options/dynamic_region_creation.html.md.erb
+++ b/geode-docs/developing/region_options/dynamic_region_creation.html.md.erb
@@ -25,7 +25,7 @@ If your application does not require partitioned regions, you can use the <span
 
 Due to the number of options involved, most developers use functions to create regions dynamically in their applications, as described in this topic. Dynamic regions can also be created from the `gfsh` command line.
 
-For a complete discussion of using Geode functions, see [Function Execution](../function_exec/chapter_overview.html). Functions use the <span class="keyword apiname">org.apache.geode.cache.execute.FunctionService</span> class.
+For a complete discussion of using <%=vars.product_name%> functions, see [Function Execution](../function_exec/chapter_overview.html). Functions use the <span class="keyword apiname">org.apache.geode.cache.execute.FunctionService</span> class.
 
 For example, the following Java classes define and use a function for dynamic region creation:
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/region_options/region_types.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/region_options/region_types.html.md.erb b/geode-docs/developing/region_options/region_types.html.md.erb
index 57e565e..e12bb24 100644
--- a/geode-docs/developing/region_options/region_types.html.md.erb
+++ b/geode-docs/developing/region_options/region_types.html.md.erb
@@ -22,7 +22,7 @@ limitations under the License.
 Region types define region behavior within a single distributed system. You have various options for region data storage and distribution.
 
 <a id="region_types__section_E3435ED1D0D142538B99FA69A9E449EF"></a>
-Within a Geode distributed system, you can define distributed regions and non-distributed regions, and you can define regions whose data is spread across the distributed system, and regions whose data is entirely contained in a single member.
+Within a <%=vars.product_name%> distributed system, you can define distributed regions and non-distributed regions, and you can define regions whose data is spread across the distributed system, and regions whose data is entirely contained in a single member.
 
 Your choice of region type is governed in part by the type of application you are running. In particular, you need to use specific region types for your servers and clients for effective communication between the two tiers:
 
@@ -102,8 +102,8 @@ Partitioned regions group your data into buckets, each of which is stored on a s
 
 Use partitioning for:
 
--   **Large data sets**. Store data sets that are too large to fit into a single member, and all members will see the same logical data set. Partitioned regions divide the data into units of storage called buckets that are split across the members hosting the partitioned region data, so no member needs to host all of the region’s data. Geode provides dynamic redundancy recovery and rebalancing of partitioned regions, making them the choice for large-scale data containers. More members in the system can accommodate more uniform balancing of the data across all host members, allowing system throughput (both gets and puts) to scale as new members are added.
--   **High availability**. Partitioned regions allow you configure the number of copies of your data that Geode should make. If a member fails, your data will be available without interruption from the remaining members. Partitioned regions can also be persisted to disk for additional high availability.
+-   **Large data sets**. Store data sets that are too large to fit into a single member, and all members will see the same logical data set. Partitioned regions divide the data into units of storage called buckets that are split across the members hosting the partitioned region data, so no member needs to host all of the region’s data. <%=vars.product_name%> provides dynamic redundancy recovery and rebalancing of partitioned regions, making them the choice for large-scale data containers. More members in the system can accommodate more uniform balancing of the data across all host members, allowing system throughput (both gets and puts) to scale as new members are added.
+-   **High availability**. Partitioned regions allow you to configure the number of copies of your data that <%=vars.product_name%> should make. If a member fails, your data will be available without interruption from the remaining members. Partitioned regions can also be persisted to disk for additional high availability.
 -   **Scalability**. Partitioned regions can scale to large amounts of data because the data is divided between the members available to host the region. Increase your data capacity dynamically by simply adding new members. Partitioned regions also allow you to scale your processing capacity. Because your entries are spread out across the members hosting the region, reads and writes to those entries are also spread out across those members.
 -   **Good write performance**. You can configure the number of copies of your data. The amount of data transmitted per write does not increase with the number of members. By contrast, with replicated regions, each write must be sent to every member that has the region replicated, so the amount of data transmitted per write increases with the number of members.
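
The bucket mechanics described above can be sketched in plain Java. This is a toy model only, not Geode's implementation: the member names are hypothetical, though 113 is Geode's default total bucket count for a partitioned region.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BucketDemo {
    // Toy model of partitioned-region bucketing: each key hashes into one
    // of a fixed number of buckets, and buckets are spread across members
    // so that no single member must host all of the region's data.
    public static int bucketFor(Object key, int totalBuckets) {
        return Math.abs(key.hashCode() % totalBuckets);
    }

    public static void main(String[] args) {
        int totalBuckets = 113; // Geode's default total-num-buckets
        String[] members = {"server1", "server2", "server3"};

        // Assign buckets to members round-robin (real assignment also
        // accounts for redundancy and rebalancing).
        Map<String, List<Integer>> assignment = new HashMap<>();
        for (String m : members) {
            assignment.put(m, new ArrayList<>());
        }
        for (int b = 0; b < totalBuckets; b++) {
            assignment.get(members[b % members.length]).add(b);
        }

        // Each member hosts roughly totalBuckets / members.length buckets.
        System.out.println(assignment.get("server1").size()); // 38
        System.out.println(bucketFor("key-42", totalBuckets));
    }
}
```

Adding a member shrinks each member's share of buckets, which is why throughput for both gets and puts scales as members are added.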
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/region_options/storage_distribution_options.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/region_options/storage_distribution_options.html.md.erb b/geode-docs/developing/region_options/storage_distribution_options.html.md.erb
index f30135e..6cbafc6 100644
--- a/geode-docs/developing/region_options/storage_distribution_options.html.md.erb
+++ b/geode-docs/developing/region_options/storage_distribution_options.html.md.erb
@@ -19,11 +19,11 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode provides several models for data storage and distribution, including partitioned or replicated regions as well as distributed or non-distributed regions (local cache storage).
+<%=vars.product_name%> provides several models for data storage and distribution, including partitioned or replicated regions as well as distributed or non-distributed regions (local cache storage).
 
 ## <a id="concept_B18B7754E7C7485BA6D66F2DDB7A11FB__section_787D674A64244871AE49CBB58475088E" class="no-quick-link"></a>Peer-to-Peer Region Storage and Distribution
 
-At its most general, data management means having current data available when and where your applications need it. In a properly configured Geode installation, you store your data in your local members and Geode automatically distributes it to the other members that need it according to your cache configuration settings. You may be storing very large data objects that require special consideration, or you may have a high volume of data requiring careful configuration to safeguard your application's performance or memory use. You may need to be able to explicitly lock some data during particular operations. Most data management features are available as configuration options, which you can specify either using the `gfsh` cluster configuration service, `cache.xml` file or the API. Once configured, Geode manages the data automatically. For example, this is how you manage data distribution, disk storage, data expiration activities, and data partitioning. A few features are managed at run-time through the API.
+At its most general, data management means having current data available when and where your applications need it. In a properly configured <%=vars.product_name%> installation, you store your data in your local members and <%=vars.product_name%> automatically distributes it to the other members that need it according to your cache configuration settings. You may be storing very large data objects that require special consideration, or you may have a high volume of data requiring careful configuration to safeguard your application's performance or memory use. You may need to be able to explicitly lock some data during particular operations. Most data management features are available as configuration options, which you can specify either using the `gfsh` cluster configuration service, `cache.xml` file or the API. Once configured, <%=vars.product_name%> manages the data automatically. For example, this is how you manage data distribution, disk storage, data expiration activities, and data partitioning. A few features are managed at run-time through the API.
 
 At the architectural level, data distribution runs between peers in a single system and between clients and servers.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/storing_data_on_disk/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/storing_data_on_disk/chapter_overview.html.md.erb b/geode-docs/developing/storing_data_on_disk/chapter_overview.html.md.erb
index 002fb53..d41b009 100644
--- a/geode-docs/developing/storing_data_on_disk/chapter_overview.html.md.erb
+++ b/geode-docs/developing/storing_data_on_disk/chapter_overview.html.md.erb
@@ -24,17 +24,17 @@ You can persist data on disk for backup purposes and overflow it to disk to free
 **Note:**
 This supplements the general steps for managing data regions provided in [Basic Configuration and Programming](../../basic_config/book_intro.html).
 
-All disk storage uses Apache Geode[Disk Storage](../../managing/disk_storage/chapter_overview.html).
+All disk storage uses <%=vars.product_name_long%> [Disk Storage](../../managing/disk_storage/chapter_overview.html).
 
--   **[How Persistence and Overflow Work](../../developing/storing_data_on_disk/how_persist_overflow_work.html)**
+-   **[How Persistence and Overflow Work](how_persist_overflow_work.html)**
 
-    To use Geode persistence and overflow, you should understand how they work with your data.
+    To use <%=vars.product_name%> persistence and overflow, you should understand how they work with your data.
 
--   **[Configure Region Persistence and Overflow](../../developing/storing_data_on_disk/storing_data_on_disk.html)**
+-   **[Configure Region Persistence and Overflow](storing_data_on_disk.html)**
 
     Plan persistence and overflow for your data regions and configure them accordingly.
 
--   **[Overflow Configuration Examples](../../developing/storing_data_on_disk/overflow_config_examples.html)**
+-   **[Overflow Configuration Examples](overflow_config_examples.html)**
 
     The `cache.xml` examples show configuration of region and server subscription queue overflows.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/storing_data_on_disk/how_persist_overflow_work.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/storing_data_on_disk/how_persist_overflow_work.html.md.erb b/geode-docs/developing/storing_data_on_disk/how_persist_overflow_work.html.md.erb
index 1a1cc10..89f63a3 100644
--- a/geode-docs/developing/storing_data_on_disk/how_persist_overflow_work.html.md.erb
+++ b/geode-docs/developing/storing_data_on_disk/how_persist_overflow_work.html.md.erb
@@ -19,14 +19,14 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-To use Geode persistence and overflow, you should understand how they work with your data.
+To use <%=vars.product_name%> persistence and overflow, you should understand how they work with your data.
 
 <a id="how_persist_overflow_work__section_jzl_wwb_pr"></a>
-Geode persists and overflows several types of data. You can persist or overflow the application data in your regions. In addition, Geode persists and overflows messaging queues between distributed systems, to manage memory consumption and provide high availability.
+<%=vars.product_name%> persists and overflows several types of data. You can persist or overflow the application data in your regions. In addition, <%=vars.product_name%> persists and overflows messaging queues between distributed systems, to manage memory consumption and provide high availability.
 
 Persistent data outlives the member where the region resides and can be used to initialize the region at creation. Overflow acts only as an extension of the region in memory.
 
-The data is written to disk according to the configuration of Geode disk stores. For any disk option, you can specify the name of the disk store to use or use the Geode default disk store. See [Disk Storage](../../managing/disk_storage/chapter_overview.html).
+The data is written to disk according to the configuration of <%=vars.product_name%> disk stores. For any disk option, you can specify the name of the disk store to use or use the <%=vars.product_name%> default disk store. See [Disk Storage](../../managing/disk_storage/chapter_overview.html).
 
 ## <a id="how_persist_overflow_work__section_78F2D1820B6C48859A0E5411CE360105" class="no-quick-link"></a>How Data Is Persisted and Overflowed
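
To illustrate the disk-store naming described above, a named disk store and a persistent region that uses it can be declared in `cache.xml` roughly as follows; the store name and directory path are placeholders.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical example: disk-store name and path are placeholders -->
<cache xmlns="http://geode.apache.org/schema/cache"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://geode.apache.org/schema/cache
                           http://geode.apache.org/schema/cache/cache-1.0.xsd"
       version="1.0">
  <disk-store name="myDiskStore">
    <disk-dirs>
      <disk-dir>/var/geode/data</disk-dir>
    </disk-dirs>
  </disk-store>
  <!-- persistent region writing through the named disk store;
       omit disk-store-name to use the default disk store -->
  <region name="data" refid="REPLICATE_PERSISTENT">
    <region-attributes disk-store-name="myDiskStore"/>
  </region>
</cache>
```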
 


[39/51] [abbrv] geode git commit: GEODE-3169: Decoupling of DiskStore and backups This closes #715 * move backup logic away from DiskStore and into BackupManager * refactor code into smaller methods * improve test code clarity

Posted by kl...@apache.org.
http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/test/java/org/apache/geode/internal/cache/BackupJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/cache/BackupJUnitTest.java b/geode-core/src/test/java/org/apache/geode/internal/cache/BackupJUnitTest.java
index caa2ce5..28dc662 100644
--- a/geode-core/src/test/java/org/apache/geode/internal/cache/BackupJUnitTest.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/cache/BackupJUnitTest.java
@@ -23,18 +23,15 @@ import static org.junit.Assert.*;
 import org.apache.commons.io.FileUtils;
 import org.apache.commons.io.filefilter.DirectoryFileFilter;
 import org.apache.commons.io.filefilter.RegexFileFilter;
+
 import org.apache.geode.cache.CacheFactory;
 import org.apache.geode.cache.DataPolicy;
 import org.apache.geode.cache.DiskStore;
 import org.apache.geode.cache.DiskStoreFactory;
-import org.apache.geode.cache.DiskWriteAttributesFactory;
 import org.apache.geode.cache.EvictionAction;
 import org.apache.geode.cache.EvictionAttributes;
 import org.apache.geode.cache.Region;
 import org.apache.geode.cache.RegionFactory;
-import org.apache.geode.distributed.DistributedSystem;
-import org.apache.geode.internal.cache.persistence.BackupManager;
-import org.apache.geode.internal.cache.persistence.RestoreScript;
 import org.apache.geode.test.junit.categories.IntegrationTest;
 import org.junit.After;
 import org.junit.Before;
@@ -54,16 +51,17 @@ import java.util.Collection;
 import java.util.Collections;
 import java.util.Properties;
 import java.util.Random;
+import java.util.concurrent.CompletableFuture;
 
 @Category(IntegrationTest.class)
 public class BackupJUnitTest {
 
-  protected GemFireCacheImpl cache = null;
+  private static final String DISK_STORE_NAME = "diskStore";
+  private GemFireCacheImpl cache = null;
   private File tmpDir;
-  protected File cacheXmlFile;
+  private File cacheXmlFile;
 
-  protected DistributedSystem ds = null;
-  protected Properties props = new Properties();
+  private Properties props = new Properties();
 
   private File backupDir;
   private File[] diskDirs;
@@ -103,7 +101,6 @@ public class BackupJUnitTest {
 
   private void createCache() throws IOException {
     cache = (GemFireCacheImpl) new CacheFactory(props).create();
-    ds = cache.getDistributedSystem();
   }
 
   @After
@@ -123,33 +120,26 @@ public class BackupJUnitTest {
 
   @Test
   public void testBackupAndRecover() throws IOException, InterruptedException {
-    backupAndRecover(new RegionCreator() {
-      public Region createRegion() {
-        DiskStoreImpl ds = createDiskStore();
-        return BackupJUnitTest.this.createRegion();
-      }
+    backupAndRecover(() -> {
+      createDiskStore();
+      return BackupJUnitTest.this.createRegion();
     });
   }
 
   @Test
   public void testBackupAndRecoverOldConfig() throws IOException, InterruptedException {
-    backupAndRecover(new RegionCreator() {
-      public Region createRegion() {
-        DiskStoreImpl ds = createDiskStore();
-        RegionFactory rf = new RegionFactory();
-        rf.setDataPolicy(DataPolicy.PERSISTENT_REPLICATE);
-        rf.setDiskDirs(diskDirs);
-        DiskWriteAttributesFactory daf = new DiskWriteAttributesFactory();
-        daf.setMaxOplogSize(1);
-        rf.setDiskWriteAttributes(daf.create());
-        return rf.create("region");
-      }
+    backupAndRecover(() -> {
+      createDiskStore();
+      RegionFactory regionFactory = cache.createRegionFactory();
+      regionFactory.setDataPolicy(DataPolicy.PERSISTENT_REPLICATE);
+      regionFactory.setDiskStoreName(DISK_STORE_NAME);
+      return regionFactory.create("region");
     });
   }
 
-  public void backupAndRecover(RegionCreator regionFactory)
+  private void backupAndRecover(RegionCreator regionFactory)
       throws IOException, InterruptedException {
-    Region region = regionFactory.createRegion();
+    Region<Object, Object> region = regionFactory.createRegion();
 
     // Put enough data to roll some oplogs
     for (int i = 0; i < 1024; i++) {
@@ -193,8 +183,8 @@ public class BackupJUnitTest {
 
     BackupManager backup =
         cache.startBackup(cache.getInternalDistributedSystem().getDistributedMember());
-    backup.prepareBackup();
-    backup.finishBackup(backupDir, null, false);
+    backup.prepareForBackup();
+    backup.doBackup(backupDir, null, false);
 
     // Put another key to make sure we restore
     // from a backup that doesn't contain this key
@@ -238,19 +228,19 @@ public class BackupJUnitTest {
 
   @Test
   public void testBackupEmptyDiskStore() throws IOException, InterruptedException {
-    DiskStoreImpl ds = createDiskStore();
+    createDiskStore();
 
     BackupManager backup =
         cache.startBackup(cache.getInternalDistributedSystem().getDistributedMember());
-    backup.prepareBackup();
-    backup.finishBackup(backupDir, null, false);
+    backup.prepareForBackup();
+    backup.doBackup(backupDir, null, false);
     assertEquals("No backup files should have been created", Collections.emptyList(),
         Arrays.asList(backupDir.list()));
   }
 
   @Test
   public void testBackupOverflowOnlyDiskStore() throws IOException, InterruptedException {
-    DiskStoreImpl ds = createDiskStore();
+    createDiskStore();
     Region region = createOverflowRegion();
     // Put another key to make sure we restore
     // from a backup that doesn't contain this key
@@ -258,8 +248,8 @@ public class BackupJUnitTest {
 
     BackupManager backup =
         cache.startBackup(cache.getInternalDistributedSystem().getDistributedMember());
-    backup.prepareBackup();
-    backup.finishBackup(backupDir, null, false);
+    backup.prepareForBackup();
+    backup.doBackup(backupDir, null, false);
 
 
     assertEquals("No backup files should have been created", Collections.emptyList(),
@@ -275,51 +265,54 @@ public class BackupJUnitTest {
     dsf.setAutoCompact(false);
     dsf.setAllowForceCompaction(true);
     dsf.setCompactionThreshold(20);
-    String name = "diskStore";
-    DiskStoreImpl ds = (DiskStoreImpl) dsf.create(name);
+    DiskStoreImpl ds = (DiskStoreImpl) dsf.create(DISK_STORE_NAME);
 
-    Region region = createRegion();
+    Region<Object, Object> region = createRegion();
 
     // Put enough data to roll some oplogs
     for (int i = 0; i < 1024; i++) {
       region.put(i, getBytes(i));
     }
 
-    RestoreScript script = new RestoreScript();
-    ds.startBackup(backupDir, null, script);
-
-    for (int i = 2; i < 1024; i++) {
-      assertTrue(region.destroy(i) != null);
-    }
-    assertTrue(ds.forceCompaction());
-    // Put another key to make sure we restore
-    // from a backup that doesn't contain this key
-    region.put("A", "A");
-
-    ds.finishBackup(
-        new BackupManager(cache.getInternalDistributedSystem().getDistributedMember(), cache));
-    script.generate(backupDir);
+    BackupManager backupManager =
+        cache.startBackup(cache.getInternalDistributedSystem().getDistributedMember());
+    backupManager.validateRequestingAdmin();
+    backupManager.prepareForBackup();
+    final Region theRegion = region;
+    final DiskStore theDiskStore = ds;
+    CompletableFuture.runAsync(() -> destroyAndCompact(theRegion, theDiskStore));
+    backupManager.doBackup(backupDir, null, false);
 
     cache.close();
     destroyDiskDirs();
     restoreBackup(false);
     createCache();
-    ds = createDiskStore();
+    createDiskStore();
     region = createRegion();
     validateEntriesExist(region, 0, 1024);
 
     assertNull(region.get("A"));
   }
 
+  private void destroyAndCompact(Region<Object, Object> region, DiskStore diskStore) {
+    for (int i = 2; i < 1024; i++) {
+      assertTrue(region.destroy(i) != null);
+    }
+    assertTrue(diskStore.forceCompaction());
+    // Put another key to make sure we restore
+    // from a backup that doesn't contain this key
+    region.put("A", "A");
+  }
+
   @Test
   public void testBackupCacheXml() throws Exception {
-    DiskStoreImpl ds = createDiskStore();
+    createDiskStore();
     createRegion();
 
     BackupManager backup =
         cache.startBackup(cache.getInternalDistributedSystem().getDistributedMember());
-    backup.prepareBackup();
-    backup.finishBackup(backupDir, null, false);
+    backup.prepareForBackup();
+    backup.doBackup(backupDir, null, false);
     Collection<File> fileCollection = FileUtils.listFiles(backupDir,
         new RegexFileFilter("cache.xml"), DirectoryFileFilter.DIRECTORY);
     assertEquals(1, fileCollection.size());
@@ -337,12 +330,9 @@ public class BackupJUnitTest {
     // The cache xml file should be small enough to fit in one byte array
     int size = (int) file.length();
     byte[] contents = new byte[size];
-    FileInputStream fis = new FileInputStream(file);
-    try {
+    try (FileInputStream fis = new FileInputStream(file)) {
       assertEquals(size, fis.read(contents));
       assertEquals(-1, fis.read());
-    } finally {
-      fis.close();
     }
     return contents;
   }
@@ -406,36 +396,35 @@ public class BackupJUnitTest {
 
   }
 
-  protected Region createRegion() {
-    RegionFactory rf = new RegionFactory();
-    rf.setDiskStoreName("diskStore");
-    rf.setDataPolicy(DataPolicy.PERSISTENT_REPLICATE);
-    return rf.create("region");
+  private Region createRegion() {
+    RegionFactory regionFactory = cache.createRegionFactory();
+    regionFactory.setDiskStoreName(DISK_STORE_NAME);
+    regionFactory.setDataPolicy(DataPolicy.PERSISTENT_REPLICATE);
+    return regionFactory.create("region");
   }
 
   private Region createOverflowRegion() {
-    RegionFactory rf = new RegionFactory();
-    rf.setDiskStoreName("diskStore");
-    rf.setEvictionAttributes(
+    RegionFactory regionFactory = cache.createRegionFactory();
+    regionFactory.setDiskStoreName(DISK_STORE_NAME);
+    regionFactory.setEvictionAttributes(
         EvictionAttributes.createLIFOEntryAttributes(1, EvictionAction.OVERFLOW_TO_DISK));
-    rf.setDataPolicy(DataPolicy.NORMAL);
-    return rf.create("region");
+    regionFactory.setDataPolicy(DataPolicy.NORMAL);
+    return regionFactory.create("region");
   }
 
   private DiskStore findDiskStore() {
-    return cache.findDiskStore("diskStore");
+    return cache.findDiskStore(DISK_STORE_NAME);
   }
 
-  private DiskStoreImpl createDiskStore() {
-    DiskStoreFactory dsf = cache.createDiskStoreFactory();
-    dsf.setDiskDirs(diskDirs);
-    dsf.setMaxOplogSize(1);
-    String name = "diskStore";
-    return (DiskStoreImpl) dsf.create(name);
+  private void createDiskStore() {
+    DiskStoreFactory diskStoreFactory = cache.createDiskStoreFactory();
+    diskStoreFactory.setDiskDirs(diskDirs);
+    diskStoreFactory.setMaxOplogSize(1);
+    diskStoreFactory.create(DISK_STORE_NAME);
   }
 
   private interface RegionCreator {
-    Region createRegion();
+    Region<Object, Object> createRegion();
   }
 
 }
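
The refactored test above drives region mutations concurrently with the backup via `CompletableFuture.runAsync` (see `destroyAndCompact`). Stripped of the Geode specifics, the pattern reduces to the following plain-JDK sketch; the method and class names here are hypothetical.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncDuringBackupDemo {
    // Run n mutations on a background thread while the "backup" proceeds
    // on the calling thread, then wait for the mutator to finish.
    public static int runMutationsWhileBackingUp(int n) {
        AtomicInteger mutations = new AtomicInteger();
        CompletableFuture<Void> mutator = CompletableFuture.runAsync(() -> {
            for (int i = 0; i < n; i++) {
                mutations.incrementAndGet();
            }
        });
        // In the real test, BackupManager.doBackup(...) runs here while
        // the mutations proceed concurrently; the BackupLock coordinates.
        mutator.join();
        return mutations.get();
    }

    public static void main(String[] args) {
        System.out.println(runMutationsWhileBackingUp(1000)); // prints 1000
    }
}
```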

http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/test/java/org/apache/geode/internal/cache/IncrementalBackupDUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/cache/IncrementalBackupDUnitTest.java b/geode-core/src/test/java/org/apache/geode/internal/cache/IncrementalBackupDUnitTest.java
index ee3d7f7..f31f17b 100644
--- a/geode-core/src/test/java/org/apache/geode/internal/cache/IncrementalBackupDUnitTest.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/cache/IncrementalBackupDUnitTest.java
@@ -55,7 +55,6 @@ import org.apache.geode.distributed.DistributedSystem;
 import org.apache.geode.internal.ClassBuilder;
 import org.apache.geode.internal.ClassPathLoader;
 import org.apache.geode.internal.DeployedJar;
-import org.apache.geode.internal.cache.persistence.BackupManager;
 import org.apache.geode.internal.util.IOUtils;
 import org.apache.geode.internal.util.TransformUtils;
 import org.apache.geode.test.dunit.Host;
@@ -615,7 +614,7 @@ public class IncrementalBackupDUnitTest extends JUnit4CacheTestCase {
     File backupDir = getBackupDirForMember(getBaselineDir(), getMemberId(vm));
     assertTrue(backupDir.exists());
 
-    File incomplete = new File(backupDir, BackupManager.INCOMPLETE_BACKUP);
+    File incomplete = new File(backupDir, BackupManager.INCOMPLETE_BACKUP_FILE);
     incomplete.createNewFile();
   }
 

http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/test/java/org/apache/geode/internal/cache/persistence/BackupPrepareAndFinishMsgDUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/cache/persistence/BackupPrepareAndFinishMsgDUnitTest.java b/geode-core/src/test/java/org/apache/geode/internal/cache/persistence/BackupPrepareAndFinishMsgDUnitTest.java
index 39c5c3c..e0fea77 100644
--- a/geode-core/src/test/java/org/apache/geode/internal/cache/persistence/BackupPrepareAndFinishMsgDUnitTest.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/cache/persistence/BackupPrepareAndFinishMsgDUnitTest.java
@@ -22,11 +22,18 @@ import static org.junit.Assert.fail;
 
 import java.io.File;
 import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Future;
 import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import java.util.stream.Collectors;
 
 import org.apache.geode.admin.internal.FinishBackupRequest;
 import org.apache.geode.admin.internal.PrepareBackupRequest;
@@ -46,490 +53,151 @@ import org.apache.geode.distributed.internal.DM;
 import org.apache.geode.internal.cache.BackupLock;
 import org.apache.geode.internal.cache.DiskStoreImpl;
 import org.apache.geode.internal.cache.GemFireCacheImpl;
-import org.apache.geode.internal.cache.InternalCache;
 import org.apache.geode.test.junit.categories.DistributedTest;
 import org.awaitility.Awaitility;
-import org.junit.After;
+import org.junit.Before;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
 @Category({DistributedTest.class})
-public class BackupPrepareAndFinishMsgDUnitTest extends CacheTestCase {
+public abstract class BackupPrepareAndFinishMsgDUnitTest extends CacheTestCase {
   // Although this test does not make use of other members, the current member needs to be
   // a distributed member (rather than local) because it sends prepare and finish backup messages
-  File[] diskDirs = null;
+  private static final String TEST_REGION_NAME = "TestRegion";
+  private File[] diskDirs = null;
   private int waitingForBackupLockCount = 0;
+  private Region<Integer, Integer> region;
 
-  @After
-  public void after() throws Exception {
-    waitingForBackupLockCount = 0;
-    diskDirs = null;
-  }
-
-  @Test
-  public void testCreateWithParReg() throws Throwable {
-    doCreate(RegionShortcut.PARTITION_PERSISTENT, true);
-  }
-
-  @Test
-  public void testCreateWithReplicate() throws Throwable {
-    doCreate(RegionShortcut.REPLICATE_PERSISTENT, true);
-  }
-
-  @Test
-  public void testPutAsCreateWithParReg() throws Throwable {
-    doCreate(RegionShortcut.PARTITION_PERSISTENT, false);
-  }
-
-  @Test
-  public void testPutAsCreateWithReplicate() throws Throwable {
-    doCreate(RegionShortcut.REPLICATE_PERSISTENT, false);
-  }
-
-  @Test
-  public void testUpdateWithParReg() throws Throwable {
-    doUpdate(RegionShortcut.PARTITION_PERSISTENT);
-  }
-
-  @Test
-  public void testUpdateWithReplicate() throws Throwable {
-    doUpdate(RegionShortcut.REPLICATE_PERSISTENT);
-  }
-
-  @Test
-  public void testInvalidateWithParReg() throws Throwable {
-    doInvalidate(RegionShortcut.PARTITION_PERSISTENT);
-  }
-
-  @Test
-  public void testInvalidateWithReplicate() throws Throwable {
-    doInvalidate(RegionShortcut.REPLICATE_PERSISTENT);
-  }
-
-  @Test
-  public void testDestroyWithParReg() throws Throwable {
-    doDestroy(RegionShortcut.PARTITION_PERSISTENT);
-  }
-
-  @Test
-  public void testDestroyWithReplicate() throws Throwable {
-    doDestroy(RegionShortcut.REPLICATE_PERSISTENT);
-  }
-
-  @Test
-  public void testGetWithParReg() throws Throwable {
-    doRead(RegionShortcut.PARTITION_PERSISTENT, "get");
-  }
-
-  @Test
-  public void testGetWithReplicate() throws Throwable {
-    doRead(RegionShortcut.REPLICATE_PERSISTENT, "get");
-  }
-
-  @Test
-  public void testContainsKeyWithParReg() throws Throwable {
-    doRead(RegionShortcut.PARTITION_PERSISTENT, "containsKey");
-  }
-
-  @Test
-  public void testContainsKeyWithReplicate() throws Throwable {
-    doRead(RegionShortcut.REPLICATE_PERSISTENT, "containsKey");
-  }
-
-  @Test
-  public void testContainsValueWithParReg() throws Throwable {
-    doRead(RegionShortcut.PARTITION_PERSISTENT, "containsValue");
-  }
-
-  @Test
-  public void testContainsValueWithReplicate() throws Throwable {
-    doRead(RegionShortcut.REPLICATE_PERSISTENT, "containsValue");
-  }
-
-  @Test
-  public void testContainsValueForKeyWithParReg() throws Throwable {
-    doRead(RegionShortcut.PARTITION_PERSISTENT, "containsValueForKey");
-  }
-
-  @Test
-  public void testContainsValueForKeyWithReplicate() throws Throwable {
-    doRead(RegionShortcut.REPLICATE_PERSISTENT, "containsValueForKey");
-  }
-
-  @Test
-  public void testEntrySetWithParReg() throws Throwable {
-    doRead(RegionShortcut.PARTITION_PERSISTENT, "entrySet");
-  }
-
-  @Test
-  public void testEntrySetWithReplicate() throws Throwable {
-    doRead(RegionShortcut.REPLICATE_PERSISTENT, "entrySet");
-  }
-
-  @Test
-  public void testGetAllWithParReg() throws Throwable {
-    doRead(RegionShortcut.PARTITION_PERSISTENT, "getAll");
-  }
-
-  @Test
-  public void testGetAllWithReplicate() throws Throwable {
-    doRead(RegionShortcut.REPLICATE_PERSISTENT, "getAll");
-  }
-
-  @Test
-  public void testGetEntryWithParReg() throws Throwable {
-    doRead(RegionShortcut.PARTITION_PERSISTENT, "getEntry");
-  }
-
-  @Test
-  public void testGetEntryWithReplicate() throws Throwable {
-    doRead(RegionShortcut.REPLICATE_PERSISTENT, "getEntry");
-  }
-
-  @Test
-  public void testIsEmptyWithParReg() throws Throwable {
-    doRead(RegionShortcut.PARTITION_PERSISTENT, "isEmpty");
-  }
-
-  @Test
-  public void testIsEmptyWithReplicate() throws Throwable {
-    doRead(RegionShortcut.REPLICATE_PERSISTENT, "isEmpty");
-  }
-
-  @Test
-  public void testKeySetWithParReg() throws Throwable {
-    doRead(RegionShortcut.PARTITION_PERSISTENT, "keySet");
-  }
-
-  @Test
-  public void testKeySetWithReplicate() throws Throwable {
-    doRead(RegionShortcut.REPLICATE_PERSISTENT, "keySet");
-  }
-
-  @Test
-  public void testSizeWithParReg() throws Throwable {
-    doRead(RegionShortcut.PARTITION_PERSISTENT, "size");
-  }
+  protected abstract Region<Integer, Integer> createRegion();
 
-  @Test
-  public void testSizeWithReplicate() throws Throwable {
-    doRead(RegionShortcut.REPLICATE_PERSISTENT, "size");
+  @Before
+  public void setup() {
+    region = createRegion();
   }
 
   @Test
-  public void testValuesWithParReg() throws Throwable {
-    doRead(RegionShortcut.PARTITION_PERSISTENT, "values");
+  public void createWaitsForBackupTest() throws Throwable {
+    doActionAndVerifyWaitForBackup(() -> region.create(1, 1));
+    verifyKeyValuePair(1, 1);
   }
 
   @Test
-  public void testValuesWithReplicate() throws Throwable {
-    doRead(RegionShortcut.REPLICATE_PERSISTENT, "values");
+  public void putThatCreatesWaitsForBackupTest() throws Throwable {
+    doActionAndVerifyWaitForBackup(() -> region.put(1, 1));
+    verifyKeyValuePair(1, 1);
   }
 
   @Test
-  public void testQueryWithParReg() throws Throwable {
-    doRead(RegionShortcut.PARTITION_PERSISTENT, "query");
+  public void putWaitsForBackupTest() throws Throwable {
+    region.put(1, 1);
+    doActionAndVerifyWaitForBackup(() -> region.put(1, 2));
+    verifyKeyValuePair(1, 2);
   }
 
   @Test
-  public void testQueryWithReplicate() throws Throwable {
-    doRead(RegionShortcut.REPLICATE_PERSISTENT, "query");
+  public void invalidateWaitsForBackupTest() throws Throwable {
+    region.put(1, 1);
+    doActionAndVerifyWaitForBackup(() -> region.invalidate(1));
+    verifyKeyValuePair(1, null);
   }
 
   @Test
-  public void testExistsValueWithParReg() throws Throwable {
-    doRead(RegionShortcut.PARTITION_PERSISTENT, "existsValue");
+  public void destroyWaitsForBackupTest() throws Throwable {
+    region.put(1, 1);
+    doActionAndVerifyWaitForBackup(() -> region.destroy(1));
+    assertFalse(region.containsKey(1));
   }
 
   @Test
-  public void testExistsValueWithReplicate() throws Throwable {
-    doRead(RegionShortcut.REPLICATE_PERSISTENT, "existsValue");
-  }
+  public void putAllWaitsForBackupTest() throws Throwable {
+    Map<Integer, Integer> entries = new HashMap<>();
+    entries.put(1, 1);
+    entries.put(2, 2);
 
-  @Test
-  public void testPutAllWithParReg() throws Throwable {
-    doPutAll(RegionShortcut.PARTITION_PERSISTENT);
+    doActionAndVerifyWaitForBackup(() -> region.putAll(entries));
+    verifyKeyValuePair(1, 1);
+    verifyKeyValuePair(2, 2);
   }
 
   @Test
-  public void testPutAllWithReplicate() throws Throwable {
-    doPutAll(RegionShortcut.REPLICATE_PERSISTENT);
-  }
+  public void removeAllWaitsForBackupTest() throws Throwable {
+    region.put(1, 1);
+    region.put(2, 2);
 
-  @Test
-  public void testRemoveAllWithParReg() throws Throwable {
-    doRemoveAll(RegionShortcut.PARTITION_PERSISTENT);
+    List<Integer> keys = Arrays.asList(1, 2);
+    doActionAndVerifyWaitForBackup(() -> region.removeAll(keys));
+    assertTrue(region.isEmpty());
   }
 
   @Test
-  public void testRemoveAllWithReplicate() throws Throwable {
-    doRemoveAll(RegionShortcut.REPLICATE_PERSISTENT);
-  }
-
-  /**
-   * Test that a create waits for backup
-   * 
-   * @param shortcut The region shortcut to use to create the region
-   * @throws InterruptedException
-   */
-  private void doCreate(RegionShortcut shortcut, boolean useCreate) throws InterruptedException {
-    Region aRegion = createRegion(shortcut);
-    Runnable runnable = new Runnable() {
-      public void run() {
-        if (useCreate) {
-          aRegion.create(1, 1);
-        } else {
-          aRegion.put(1, 1);
-        }
-      }
-    };
-
-    verifyWaitForBackup(runnable);
-    assertTrue(aRegion.containsKey(1));
-    assertEquals(aRegion.get(1), 1);
-  }
-
-  /**
-   * Test that an update waits for backup
-   * 
-   * @param shortcut The region shortcut to use to create the region
-   * @throws InterruptedException
-   */
-  private void doUpdate(RegionShortcut shortcut) throws InterruptedException {
-    Region aRegion = createRegion(shortcut);
-    aRegion.put(1, 1);
-
-    Runnable runnable = new Runnable() {
-      public void run() {
-        aRegion.put(1, 2);
-      }
-    };
-
-    verifyWaitForBackup(runnable);
-    assertTrue(aRegion.containsKey(1));
-    assertEquals(aRegion.get(1), 2);
-  }
-
-  /**
-   * Test that an invalidate waits for backup
-   * 
-   * @param shortcut The region shortcut to use to create the region
-   * @throws InterruptedException
-   */
-  private void doInvalidate(RegionShortcut shortcut) throws InterruptedException {
-    Region aRegion = createRegion(shortcut);
-    aRegion.put(1, 1);
-
-    Runnable runnable = (new Runnable() {
-      public void run() {
-        aRegion.invalidate(1);
-      }
-    });
-
-    verifyWaitForBackup(runnable);
-    assertTrue(aRegion.containsKey(1));
-    assertNull(aRegion.get(1));
+  public void readActionsDoNotBlockDuringBackup() {
+    region.put(1, 1);
+    doReadActionsAndVerifyCompletion();
   }
 
-  /**
-   * Test that a destroy waits for backup
-   * 
-   * @param shortcut The region shortcut to use to create the region
-   * @throws InterruptedException
-   */
-  private void doDestroy(RegionShortcut shortcut) throws InterruptedException {
-    Region aRegion = createRegion(shortcut);
-    aRegion.put(1, 1);
-
-    Runnable runnable = new Runnable() {
-      public void run() {
-        aRegion.destroy(1);
-      }
-    };
-
-    verifyWaitForBackup(runnable);
-    assertFalse(aRegion.containsKey(1));
-  }
-
-  /**
-   * Test that a read op does NOT wait for backup
-   * 
-   * @param shortcut The region shortcut to use to create the region
-   * @throws InterruptedException
-   */
-  private void doRead(RegionShortcut shortcut, String op) throws Exception {
-    Region aRegion = createRegion(shortcut);
-    aRegion.put(1, 1);
-
-    Runnable runnable = new Runnable() {
-      public void run() {
-        switch (op) {
-          case "get": {
-            aRegion.get(1);
-            break;
-          }
-          case "containsKey": {
-            aRegion.containsKey(1);
-            break;
-          }
-          case "containsValue": {
-            aRegion.containsValue(1);
-            break;
-          }
-          case "containsValueForKey": {
-            aRegion.containsValue(1);
-            break;
-          }
-          case "entrySet": {
-            aRegion.entrySet();
-            break;
-          }
-          case "existsValue": {
-            try {
-              aRegion.existsValue("value = 1");
-            } catch (FunctionDomainException | TypeMismatchException | NameResolutionException
-                | QueryInvocationTargetException e) {
-              fail(e.toString());
-            }
-            break;
-          }
-          case "getAll": {
-            aRegion.getAll(new ArrayList());
-            break;
-          }
-          case "getEntry": {
-            aRegion.getEntry(1);
-            break;
-          }
-          case "isEmpty": {
-            aRegion.isEmpty();
-            break;
-          }
-          case "keySet": {
-            aRegion.keySet();
-            break;
-          }
-          case "query": {
-            try {
-              aRegion.query("select *");
-            } catch (FunctionDomainException | TypeMismatchException | NameResolutionException
-                | QueryInvocationTargetException e) {
-              fail(e.toString());
-            }
-            break;
-          }
-          case "size": {
-            aRegion.size();
-            break;
-          }
-          case "values": {
-            aRegion.values();
-            break;
-          }
-          default: {
-            fail("Unknown operation " + op);
-          }
-        }
-      }
-    };
-
-    verifyNoWaitForBackup(runnable);
-  }
-
-  /**
-   * Test that a putAll waits for backup
-   * 
-   * @param shortcut The region shortcut to use to create the region
-   * @throws InterruptedException
-   */
-  private void doPutAll(RegionShortcut shortcut) throws InterruptedException {
-    Region aRegion = createRegion(shortcut);
-    Runnable runnable = new Runnable() {
-      public void run() {
-        Map<Object, Object> putAllMap = new HashMap<Object, Object>();
-        putAllMap.put(1, 1);
-        putAllMap.put(2, 2);
-        aRegion.putAll(putAllMap);
-      }
-    };
-
-    verifyWaitForBackup(runnable);
-    assertTrue(aRegion.containsKey(1));
-    assertEquals(aRegion.get(1), 1);
-    assertTrue(aRegion.containsKey(2));
-    assertEquals(aRegion.get(2), 2);
-  }
-
-  /**
-   * Test that a removeAll waits for backup
-   * 
-   * @param shortcut The region shortcut to use to create the region
-   * @throws InterruptedException
-   */
-  private void doRemoveAll(RegionShortcut shortcut) throws InterruptedException {
-    Region aRegion = createRegion(shortcut);
-    aRegion.put(1, 2);
-    aRegion.put(2, 3);
-
-    Runnable runnable = new Runnable() {
-      public void run() {
-        List<Object> keys = new ArrayList();
-        keys.add(1);
-        keys.add(2);
-        aRegion.removeAll(keys);
-      }
-    };
-
-    verifyWaitForBackup(runnable);
-    assertEquals(aRegion.size(), 0);
+  private void doActionAndVerifyWaitForBackup(Runnable function)
+      throws InterruptedException, TimeoutException, ExecutionException {
+    DM dm = GemFireCacheImpl.getInstance().getDistributionManager();
+    Set recipients = dm.getOtherDistributionManagerIds();
+    Future<Void> future = null;
+    PrepareBackupRequest.send(dm, recipients);
+    waitingForBackupLockCount = 0;
+    future = CompletableFuture.runAsync(function);
+    Awaitility.await().atMost(5, TimeUnit.SECONDS)
+        .until(() -> assertTrue(waitingForBackupLockCount == 1));
+    FinishBackupRequest.send(dm, recipients, diskDirs[0], null, false);
+    future.get(5, TimeUnit.SECONDS);
   }
 
-  /**
-   * Test that executing the given runnable waits for backup completion to proceed
-   * 
-   * @param runnable The code that should wait for backup.
-   * @throws InterruptedException
-   */
-  private void verifyWaitForBackup(Runnable runnable) throws InterruptedException {
-    DM dm = ((InternalCache) GemFireCacheImpl.getInstance()).getDistributionManager();
+  private void doReadActionsAndVerifyCompletion() {
+    DM dm = GemFireCacheImpl.getInstance().getDistributionManager();
     Set recipients = dm.getOtherDistributionManagerIds();
-    boolean abort = true;
-    Thread aThread = new Thread(runnable);
+    PrepareBackupRequest.send(dm, recipients);
+    waitingForBackupLockCount = 0;
+    List<CompletableFuture<?>> futureList = doReadActions();
+    CompletableFuture.allOf(futureList.toArray(new CompletableFuture<?>[futureList.size()]));
+    assertTrue(waitingForBackupLockCount == 0);
+    FinishBackupRequest.send(dm, recipients, diskDirs[0], null, false);
+  }
+
+  private void verifyKeyValuePair(Integer key, Integer expectedValue) {
+    assertTrue(region.containsKey(key));
+    assertEquals(expectedValue, region.get(key));
+  }
+
+  private List<CompletableFuture<?>> doReadActions() {
+    List<Runnable> actions = new ArrayList<>();
+    actions.add(() -> region.get(1));
+    actions.add(() -> region.containsKey(1));
+    actions.add(() -> region.containsValue(1));
+    actions.add(region::entrySet);
+    actions.add(this::valueExistsCheck);
+    actions.add(() -> region.getAll(Collections.emptyList()));
+    actions.add(() -> region.getEntry(1));
+    actions.add(region::isEmpty);
+    actions.add(region::keySet);
+    actions.add(region::size);
+    actions.add(region::values);
+    actions.add(this::queryCheck);
+    return actions.stream().map(runnable -> CompletableFuture.runAsync(runnable))
+        .collect(Collectors.toList());
+  }
+
+  private void valueExistsCheck() {
     try {
-      PrepareBackupRequest.send(dm, recipients);
-      abort = false;
-      waitingForBackupLockCount = 0;
-      aThread.start();
-      Awaitility.await().atMost(30, TimeUnit.SECONDS)
-          .until(() -> assertTrue(waitingForBackupLockCount == 1));
-    } finally {
-      FinishBackupRequest.send(dm, recipients, diskDirs[0], null, abort);
-      aThread.join(30000);
-      assertFalse(aThread.isAlive());
+      region.existsValue("value = 1");
+    } catch (FunctionDomainException | TypeMismatchException | NameResolutionException
+        | QueryInvocationTargetException e) {
+      throw new RuntimeException(e);
     }
   }
 
-  /**
-   * Test that executing the given runnable does NOT wait for backup completion to proceed
-   * 
-   * @param runnable The code that should not wait for backup.
-   * @throws InterruptedException
-   */
-  private void verifyNoWaitForBackup(Runnable runnable) throws InterruptedException {
-    DM dm = ((InternalCache) GemFireCacheImpl.getInstance()).getDistributionManager();
-    Set recipients = dm.getOtherDistributionManagerIds();
-    boolean abort = true;
-    Thread aThread = new Thread(runnable);
+  private void queryCheck() {
     try {
-      PrepareBackupRequest.send(dm, recipients);
-      abort = false;
-      waitingForBackupLockCount = 0;
-      aThread.start();
-      aThread.join(30000);
-      assertFalse(aThread.isAlive());
-      assertTrue(waitingForBackupLockCount == 0);
-    } finally {
-      FinishBackupRequest.send(dm, recipients, diskDirs[0], null, abort);
+      region.query("select * from /" + TEST_REGION_NAME);
+    } catch (FunctionDomainException | TypeMismatchException | NameResolutionException
+        | QueryInvocationTargetException e) {
+      throw new RuntimeException(e);
     }
   }
 
@@ -549,7 +217,7 @@ public class BackupPrepareAndFinishMsgDUnitTest extends CacheTestCase {
    * @param shortcut The region shortcut to use to create the region
    * @return The newly created region.
    */
-  private Region<?, ?> createRegion(RegionShortcut shortcut) {
+  protected Region<Integer, Integer> createRegion(RegionShortcut shortcut) {
     Cache cache = getCache();
     DiskStoreFactory diskStoreFactory = cache.createDiskStoreFactory();
     diskDirs = getDiskDirs();
@@ -557,7 +225,7 @@ public class BackupPrepareAndFinishMsgDUnitTest extends CacheTestCase {
     DiskStore diskStore = diskStoreFactory.create(getUniqueName());
     ((DiskStoreImpl) diskStore).getBackupLock().setBackupLockTestHook(new BackupLockHook());
 
-    RegionFactory<String, String> regionFactory = cache.createRegionFactory(shortcut);
+    RegionFactory<Integer, Integer> regionFactory = cache.createRegionFactory(shortcut);
     regionFactory.setDiskStoreName(diskStore.getName());
     regionFactory.setDiskSynchronous(true);
     if (shortcut.equals(RegionShortcut.PARTITION_PERSISTENT)) {
@@ -565,7 +233,7 @@ public class BackupPrepareAndFinishMsgDUnitTest extends CacheTestCase {
       prFactory.setTotalNumBuckets(1);
       regionFactory.setPartitionAttributes(prFactory.create());
     }
-    return regionFactory.create("TestRegion");
+    return regionFactory.create(TEST_REGION_NAME);
   }
 
 }
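The refactored helper above verifies blocking by running the cache operation on a CompletableFuture and polling (via Awaitility) until the backup-lock hook counter shows the writer is waiting, then releasing the backup and joining the future with a timeout. The same shape can be sketched with only the JDK, using a ReentrantLock in place of Geode's disk-store backup lock — all names below are illustrative stand-ins, not Geode APIs:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class WaitForBackupSketch {
  // Stand-ins for the disk store's backup lock and the test-hook counter
  // (waitingForBackupLockCount in the diff).
  static final ReentrantLock backupLock = new ReentrantLock();
  static final AtomicInteger waitingForLock = new AtomicInteger();

  static void write() {
    waitingForLock.incrementAndGet(); // test hook: writer is about to block
    backupLock.lock();                // blocks while the "backup" holds the lock
    try {
      waitingForLock.decrementAndGet();
      // ... the region operation would be applied here ...
    } finally {
      backupLock.unlock();
    }
  }

  public static void main(String[] args) throws Exception {
    backupLock.lock();                                   // "PrepareBackupRequest": take the lock
    CompletableFuture<Void> op = CompletableFuture.runAsync(WaitForBackupSketch::write);
    while (waitingForLock.get() != 1) {                  // poll, as Awaitility.await() does
      Thread.sleep(10);
    }
    backupLock.unlock();                                 // "FinishBackupRequest": release it
    op.get(5, TimeUnit.SECONDS);                         // the blocked write now completes
    System.out.println("write completed after backup finished");
  }
}
```

The timeout on `get()` keeps a regression (an operation that never unblocks) from hanging the test, which is why the diff bounds both the Awaitility poll and the future join.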

http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/test/java/org/apache/geode/internal/cache/persistence/PartitionedBackupPrepareAndFinishMsgDUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/cache/persistence/PartitionedBackupPrepareAndFinishMsgDUnitTest.java b/geode-core/src/test/java/org/apache/geode/internal/cache/persistence/PartitionedBackupPrepareAndFinishMsgDUnitTest.java
new file mode 100644
index 0000000..4b42c21
--- /dev/null
+++ b/geode-core/src/test/java/org/apache/geode/internal/cache/persistence/PartitionedBackupPrepareAndFinishMsgDUnitTest.java
@@ -0,0 +1,28 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.cache.persistence;
+
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.RegionShortcut;
+
+public class PartitionedBackupPrepareAndFinishMsgDUnitTest
+    extends BackupPrepareAndFinishMsgDUnitTest {
+  private static final RegionShortcut REGION_TYPE = RegionShortcut.PARTITION_PERSISTENT;
+
+  @Override
+  public Region<Integer, Integer> createRegion() {
+    return createRegion(REGION_TYPE);
+  }
+}

http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/test/java/org/apache/geode/internal/cache/persistence/ReplicateBackupPrepareAndFinishMsgDUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/cache/persistence/ReplicateBackupPrepareAndFinishMsgDUnitTest.java b/geode-core/src/test/java/org/apache/geode/internal/cache/persistence/ReplicateBackupPrepareAndFinishMsgDUnitTest.java
new file mode 100644
index 0000000..3f0ba7d
--- /dev/null
+++ b/geode-core/src/test/java/org/apache/geode/internal/cache/persistence/ReplicateBackupPrepareAndFinishMsgDUnitTest.java
@@ -0,0 +1,28 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.cache.persistence;
+
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.RegionShortcut;
+
+public class ReplicateBackupPrepareAndFinishMsgDUnitTest
+    extends BackupPrepareAndFinishMsgDUnitTest {
+  private static final RegionShortcut REGION_TYPE = RegionShortcut.REPLICATE_PERSISTENT;
+
+  @Override
+  public Region<Integer, Integer> createRegion() {
+    return createRegion(REGION_TYPE);
+  }
+}
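The two one-line subclasses above are the point of the refactor: the shared test body moves into an abstract base class, and each region type supplies only the `createRegion()` factory. That template-method shape can be shown in miniature with plain JDK types (the class and method names here are illustrative, not Geode classes):

```java
import java.util.HashMap;
import java.util.Map;

// Abstract base: the shared check logic lives here exactly once,
// mirroring BackupPrepareAndFinishMsgDUnitTest's abstract createRegion().
abstract class StoreContractCheck {
  protected abstract Map<Integer, Integer> createStore(); // subclasses pick the implementation

  final boolean roundTripWorks() {
    Map<Integer, Integer> store = createStore();
    store.put(1, 1);
    return Integer.valueOf(1).equals(store.get(1));
  }
}

// A concrete variant differs only in its factory, as the
// Partitioned/Replicate subclasses in the diff differ only in REGION_TYPE.
class HashStoreCheck extends StoreContractCheck {
  @Override
  protected Map<Integer, Integer> createStore() {
    return new HashMap<>();
  }
}

public class TemplateMethodSketch {
  public static void main(String[] args) {
    System.out.println(new HashStoreCheck().roundTripWorks()); // prints "true"
  }
}
```

Compared with the replaced per-shortcut test methods (testGetWithParReg, testGetWithReplicate, ...), this removes the duplicated region-type dimension from every test name and body.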

http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/test/java/org/apache/geode/management/internal/beans/DistributedSystemBridgeJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/management/internal/beans/DistributedSystemBridgeJUnitTest.java b/geode-core/src/test/java/org/apache/geode/management/internal/beans/DistributedSystemBridgeJUnitTest.java
index bdf097e..60fb859 100644
--- a/geode-core/src/test/java/org/apache/geode/management/internal/beans/DistributedSystemBridgeJUnitTest.java
+++ b/geode-core/src/test/java/org/apache/geode/management/internal/beans/DistributedSystemBridgeJUnitTest.java
@@ -32,7 +32,7 @@ import org.apache.geode.admin.internal.PrepareBackupRequest;
 import org.apache.geode.distributed.internal.DM;
 import org.apache.geode.distributed.internal.locks.DLockService;
 import org.apache.geode.internal.cache.GemFireCacheImpl;
-import org.apache.geode.internal.cache.persistence.BackupManager;
+import org.apache.geode.internal.cache.BackupManager;
 import org.apache.geode.internal.cache.persistence.PersistentMemberManager;
 import org.apache.geode.test.fake.Fakes;
 import org.apache.geode.test.junit.categories.UnitTest;
@@ -74,9 +74,9 @@ public class DistributedSystemBridgeJUnitTest {
 
     InOrder inOrder = inOrder(dm, backupManager);
     inOrder.verify(dm).putOutgoing(isA(PrepareBackupRequest.class));
-    inOrder.verify(backupManager).prepareBackup();
+    inOrder.verify(backupManager).prepareForBackup();
     inOrder.verify(dm).putOutgoing(isA(FinishBackupRequest.class));
-    inOrder.verify(backupManager).finishBackup(any(), any(), eq(false));
+    inOrder.verify(backupManager).doBackup(any(), any(), eq(false));
   }
 
   @Test
@@ -99,6 +99,6 @@ public class DistributedSystemBridgeJUnitTest {
     }
 
     verify(dm).putOutgoing(isA(FinishBackupRequest.class));
-    verify(backupManager).finishBackup(any(), any(), eq(true));
+    verify(backupManager).doBackup(any(), any(), eq(true));
   }
 }


[16/51] [abbrv] geode git commit: GEODE-3383: Refactor deploy tests

Posted by kl...@apache.org.
GEODE-3383: Refactor deploy tests


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/c5dd26b7
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/c5dd26b7
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/c5dd26b7

Branch: refs/heads/feature/GEODE-1279
Commit: c5dd26b7f7317c93666583b405006e1bb7c9255e
Parents: e07b5c1
Author: Jared Stewart <js...@pivotal.io>
Authored: Tue Aug 8 10:32:32 2017 -0700
Committer: Jared Stewart <js...@pivotal.io>
Committed: Tue Aug 15 15:42:11 2017 -0700

----------------------------------------------------------------------
 .../ClassPathLoaderIntegrationTest.java         | 171 ++++++++
 .../geode/internal/DeployedJarJUnitTest.java    | 400 ++-----------------
 .../geode/internal/JarDeployerDeadlockTest.java | 131 ++++++
 .../geode/management/DeployJarTestSuite.java    |   4 +-
 .../cli/commands/DeployCommandsDUnitTest.java   | 303 --------------
 .../cli/commands/DeployWithGroupsDUnitTest.java | 303 ++++++++++++++
 .../cli/commands/CommandOverHttpDUnitTest.java  |   2 +-
 7 files changed, 636 insertions(+), 678 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/c5dd26b7/geode-core/src/test/java/org/apache/geode/internal/ClassPathLoaderIntegrationTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/ClassPathLoaderIntegrationTest.java b/geode-core/src/test/java/org/apache/geode/internal/ClassPathLoaderIntegrationTest.java
index 34d8a23..2fdc085 100644
--- a/geode-core/src/test/java/org/apache/geode/internal/ClassPathLoaderIntegrationTest.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/ClassPathLoaderIntegrationTest.java
@@ -27,9 +27,11 @@ import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.InputStream;
 import java.io.OutputStream;
+import java.lang.reflect.Method;
 import java.net.URL;
 import java.util.Enumeration;
 import java.util.List;
+import java.util.Random;
 import java.util.Vector;
 
 import org.apache.bcel.Constants;
@@ -44,10 +46,14 @@ import org.junit.experimental.categories.Category;
 import org.junit.rules.TemporaryFolder;
 
 import org.apache.geode.cache.execute.Execution;
+import org.apache.geode.cache.execute.Function;
+import org.apache.geode.cache.execute.FunctionContext;
 import org.apache.geode.cache.execute.FunctionService;
 import org.apache.geode.cache.execute.ResultCollector;
+import org.apache.geode.cache.execute.ResultSender;
 import org.apache.geode.distributed.DistributedSystem;
 import org.apache.geode.internal.cache.GemFireCacheImpl;
+import org.apache.geode.internal.cache.execute.FunctionContextImpl;
 import org.apache.geode.test.dunit.rules.ServerStarterRule;
 import org.apache.geode.test.junit.categories.IntegrationTest;
 import org.apache.geode.test.junit.rules.RestoreTCCLRule;
@@ -64,6 +70,7 @@ public class ClassPathLoaderIntegrationTest {
 
   private File tempFile;
   private File tempFile2;
+  private ClassBuilder classBuilder = new ClassBuilder();
 
   @Rule
   public RestoreTCCLRule restoreTCCLRule = new RestoreTCCLRule();
@@ -452,6 +459,145 @@ public class ClassPathLoaderIntegrationTest {
     }
   }
 
+  @Test
+  public void testDeclarableFunctionsWithNoCacheXml() throws Exception {
+    final String jarName = "JarClassLoaderJUnitNoXml.jar";
+
+    // Add a Declarable Function without parameters for the class to the Classpath
+    String functionString =
+        "import java.util.Properties;" + "import org.apache.geode.cache.Declarable;"
+            + "import org.apache.geode.cache.execute.Function;"
+            + "import org.apache.geode.cache.execute.FunctionContext;"
+            + "public class JarClassLoaderJUnitFunctionNoXml implements Function, Declarable {"
+            + "public String getId() {return \"JarClassLoaderJUnitFunctionNoXml\";}"
+            + "public void init(Properties props) {}"
+            + "public void execute(FunctionContext context) {context.getResultSender().lastResult(\"NOPARMSv1\");}"
+            + "public boolean hasResult() {return true;}"
+            + "public boolean optimizeForWrite() {return false;}"
+            + "public boolean isHA() {return false;}}";
+
+    byte[] jarBytes = this.classBuilder
+        .createJarFromClassContent("JarClassLoaderJUnitFunctionNoXml", functionString);
+
+    ClassPathLoader.getLatest().getJarDeployer().deploy(jarName, jarBytes);
+
+    ClassPathLoader.getLatest().forName("JarClassLoaderJUnitFunctionNoXml");
+
+    // Check to see if the function without parameters executes correctly
+    Function function = FunctionService.getFunction("JarClassLoaderJUnitFunctionNoXml");
+    assertThat(function).isNotNull();
+    TestResultSender resultSender = new TestResultSender();
+    function.execute(new FunctionContextImpl(null, function.getId(), null, resultSender));
+    assertThat((String) resultSender.getResults()).isEqualTo("NOPARMSv1");
+  }
+
+  @Test
+  public void testDependencyBetweenJars() throws Exception {
+    final File parentJarFile = temporaryFolder.newFile("JarClassLoaderJUnitParent.jar");
+    final File usesJarFile = temporaryFolder.newFile("JarClassLoaderJUnitUses.jar");
+
+    // Write out a JAR files.
+    StringBuffer stringBuffer = new StringBuffer();
+    stringBuffer.append("package jcljunit.parent;");
+    stringBuffer.append("public class JarClassLoaderJUnitParent {");
+    stringBuffer.append("public String getValueParent() {");
+    stringBuffer.append("return \"PARENT\";}}");
+
+    byte[] jarBytes = this.classBuilder.createJarFromClassContent(
+        "jcljunit/parent/JarClassLoaderJUnitParent", stringBuffer.toString());
+    writeJarBytesToFile(parentJarFile, jarBytes);
+    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnitParent.jar", jarBytes);
+
+    stringBuffer = new StringBuffer();
+    stringBuffer.append("package jcljunit.uses;");
+    stringBuffer.append("public class JarClassLoaderJUnitUses {");
+    stringBuffer.append("public String getValueUses() {");
+    stringBuffer.append("return \"USES\";}}");
+
+    jarBytes = this.classBuilder.createJarFromClassContent("jcljunit/uses/JarClassLoaderJUnitUses",
+        stringBuffer.toString());
+    writeJarBytesToFile(usesJarFile, jarBytes);
+    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnitUses.jar", jarBytes);
+
+    stringBuffer = new StringBuffer();
+    stringBuffer.append("package jcljunit.function;");
+    stringBuffer.append("import jcljunit.parent.JarClassLoaderJUnitParent;");
+    stringBuffer.append("import jcljunit.uses.JarClassLoaderJUnitUses;");
+    stringBuffer.append("import org.apache.geode.cache.execute.Function;");
+    stringBuffer.append("import org.apache.geode.cache.execute.FunctionContext;");
+    stringBuffer.append(
+        "public class JarClassLoaderJUnitFunction  extends JarClassLoaderJUnitParent implements Function {");
+    stringBuffer.append("private JarClassLoaderJUnitUses uses = new JarClassLoaderJUnitUses();");
+    stringBuffer.append("public boolean hasResult() {return true;}");
+    stringBuffer.append(
+        "public void execute(FunctionContext context) {context.getResultSender().lastResult(getValueParent() + \":\" + uses.getValueUses());}");
+    stringBuffer.append("public String getId() {return \"JarClassLoaderJUnitFunction\";}");
+    stringBuffer.append("public boolean optimizeForWrite() {return false;}");
+    stringBuffer.append("public boolean isHA() {return false;}}");
+
+    ClassBuilder functionClassBuilder = new ClassBuilder();
+    functionClassBuilder.addToClassPath(parentJarFile.getAbsolutePath());
+    functionClassBuilder.addToClassPath(usesJarFile.getAbsolutePath());
+    jarBytes = functionClassBuilder.createJarFromClassContent(
+        "jcljunit/function/JarClassLoaderJUnitFunction", stringBuffer.toString());
+
+    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnitFunction.jar",
+        jarBytes);
+
+    Function function = FunctionService.getFunction("JarClassLoaderJUnitFunction");
+    assertThat(function).isNotNull();
+    TestResultSender resultSender = new TestResultSender();
+    FunctionContext functionContext =
+        new FunctionContextImpl(null, function.getId(), null, resultSender);
+    function.execute(functionContext);
+    assertThat((String) resultSender.getResults()).isEqualTo("PARENT:USES");
+  }
+
+  @Test
+  public void testFindResource() throws IOException, ClassNotFoundException {
+    final String fileName = "file.txt";
+    final String fileContent = "FILE CONTENT";
+
+    byte[] jarBytes = this.classBuilder.createJarFromFileContent(fileName, fileContent);
+    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnitResource.jar",
+        jarBytes);
+
+    InputStream inputStream = ClassPathLoader.getLatest().getResourceAsStream(fileName);
+    assertThat(inputStream).isNotNull();
+
+    final byte[] fileBytes = new byte[fileContent.length()];
+    inputStream.read(fileBytes);
+    inputStream.close();
+    assertThat(fileContent).isEqualTo(new String(fileBytes));
+  }
+
+
+  @Test
+  public void testUpdateClassInJar() throws Exception {
+    // First use of the JAR file
+    byte[] jarBytes = this.classBuilder.createJarFromClassContent("JarClassLoaderJUnitTestClass",
+        "public class JarClassLoaderJUnitTestClass { public Integer getValue5() { return new Integer(5); } }");
+    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnitUpdate.jar", jarBytes);
+
+    Class<?> clazz = ClassPathLoader.getLatest().forName("JarClassLoaderJUnitTestClass");
+    Object object = clazz.newInstance();
+    Method getValue5Method = clazz.getMethod("getValue5");
+    Integer value = (Integer) getValue5Method.invoke(object);
+    assertThat(value).isEqualTo(5);
+
+    // Now create an updated JAR file and make sure that the method from the new
+    // class is available.
+    jarBytes = this.classBuilder.createJarFromClassContent("JarClassLoaderJUnitTestClass",
+        "public class JarClassLoaderJUnitTestClass { public Integer getValue10() { return new Integer(10); } }");
+    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnitUpdate.jar", jarBytes);
+
+    clazz = ClassPathLoader.getLatest().forName("JarClassLoaderJUnitTestClass");
+    object = clazz.newInstance();
+    Method getValue10Method = clazz.getMethod("getValue10");
+    value = (Integer) getValue10Method.invoke(object);
+    assertThat(value).isEqualTo(10);
+  }
+
   private void writeJarBytesToFile(File jarFile, byte[] jarBytes) throws IOException {
     final OutputStream outStream = new FileOutputStream(jarFile);
     outStream.write(jarBytes);
@@ -538,4 +684,29 @@ public class ClassPathLoaderIntegrationTest {
     return new ClassBuilder().createJarFromClassContent("integration/parent/" + className,
         stringBuilder);
   }
+
+  private static class TestResultSender implements ResultSender<Object> {
+    private Object result;
+
+    public TestResultSender() {}
+
+    protected Object getResults() {
+      return this.result;
+    }
+
+    @Override
+    public void lastResult(final Object lastResult) {
+      this.result = lastResult;
+    }
+
+    @Override
+    public void sendResult(final Object oneResult) {
+      throw new UnsupportedOperationException();
+    }
+
+    @Override
+    public void sendException(final Throwable t) {
+      throw new UnsupportedOperationException();
+    }
+  }
 }

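An aside on the `testFindResource` test retained above: it drains the deployed resource with a single `InputStream.read(byte[])` call, which the JDK does not guarantee will fill the buffer in one pass. A more defensive pattern loops until end of stream. A minimal JDK-only sketch (class and method names here are illustrative, not part of the Geode test code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ReadFully {
  // Drains an InputStream completely. A single read() may return fewer bytes
  // than requested, so loop until the stream signals EOF with -1.
  static byte[] readAllBytes(InputStream in) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buffer = new byte[4096];
    int n;
    while ((n = in.read(buffer)) != -1) {
      out.write(buffer, 0, n);
    }
    return out.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    String fileContent = "FILE CONTENT";
    InputStream in = new ByteArrayInputStream(fileContent.getBytes(StandardCharsets.UTF_8));
    String roundTripped = new String(readAllBytes(in), StandardCharsets.UTF_8);
    if (!roundTripped.equals(fileContent)) {
      throw new AssertionError("expected [" + fileContent + "] but got [" + roundTripped + "]");
    }
    System.out.println(roundTripped);
  }
}
```

The single-shot `read` happens to pass for a short in-memory JAR entry, but the loop is the portable form.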
http://git-wip-us.apache.org/repos/asf/geode/blob/c5dd26b7/geode-core/src/test/java/org/apache/geode/internal/DeployedJarJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/DeployedJarJUnitTest.java b/geode-core/src/test/java/org/apache/geode/internal/DeployedJarJUnitTest.java
index 178dbae..853696a 100644
--- a/geode-core/src/test/java/org/apache/geode/internal/DeployedJarJUnitTest.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/DeployedJarJUnitTest.java
@@ -18,411 +18,67 @@ package org.apache.geode.internal;
 import static org.assertj.core.api.Assertions.assertThat;
 import static org.assertj.core.api.Assertions.assertThatThrownBy;
 
-import org.apache.geode.cache.execute.Function;
-import org.apache.geode.cache.execute.FunctionContext;
-import org.apache.geode.cache.execute.FunctionService;
-import org.apache.geode.cache.execute.ResultSender;
-import org.apache.geode.internal.cache.execute.FunctionContextImpl;
+import org.apache.geode.test.compiler.JarBuilder;
 import org.apache.geode.test.junit.categories.IntegrationTest;
-import org.awaitility.Awaitility;
-import org.junit.After;
+
+import org.apache.commons.io.FileUtils;
 import org.junit.Before;
 import org.junit.Rule;
 import org.junit.Test;
-import org.junit.contrib.java.lang.system.RestoreSystemProperties;
 import org.junit.experimental.categories.Category;
 import org.junit.rules.TemporaryFolder;
 
 import java.io.File;
-import java.io.FileOutputStream;
 import java.io.IOException;
-import java.io.InputStream;
-import java.io.OutputStream;
-import java.lang.management.ManagementFactory;
-import java.lang.management.ThreadInfo;
-import java.lang.management.ThreadMXBean;
-import java.lang.reflect.Method;
-import java.util.Random;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-import java.util.concurrent.TimeUnit;
 
 @Category(IntegrationTest.class)
 public class DeployedJarJUnitTest {
+  private static final String JAR_NAME = "test.jar";
   @Rule
   public TemporaryFolder temporaryFolder = new TemporaryFolder();
 
-  @Rule
-  public RestoreSystemProperties restoreSystemProperties = new RestoreSystemProperties();
-
-  private ClassBuilder classBuilder;
+  private JarBuilder jarBuilder;
+  private File jarFile;
+  private byte[] expectedJarBytes;
 
   @Before
   public void setup() throws Exception {
-    File workingDir = temporaryFolder.newFolder();
-    ClassPathLoader.setLatestToDefault(workingDir);
-    classBuilder = new ClassBuilder();
-  }
-
-  @After
-  public void tearDown() throws Exception {
-    for (String functionName : FunctionService.getRegisteredFunctions().keySet()) {
-      FunctionService.unregisterFunction(functionName);
-    }
-
-    ClassPathLoader.setLatestToDefault();
+    jarBuilder = new JarBuilder();
+    jarFile = new File(temporaryFolder.getRoot(), JAR_NAME);
+    jarBuilder.buildJarFromClassNames(jarFile, "ExpectedClass");
+    expectedJarBytes = FileUtils.readFileToByteArray(jarFile);
   }
 
   @Test
-  public void testIsValidJarContent() throws IOException {
-    assertThat(
-        DeployedJar.hasValidJarContent(this.classBuilder.createJarFromName("JarClassLoaderJUnitA")))
-            .isTrue();
+  public void validJarContentDoesNotThrow() throws Exception {
+    new DeployedJar(jarFile, JAR_NAME, expectedJarBytes);
   }
 
   @Test
-  public void testIsInvalidJarContent() {
-    assertThat(DeployedJar.hasValidJarContent("INVALID JAR CONTENT".getBytes())).isFalse();
-  }
-
-  @Test
-  public void testClassOnClasspath() throws Exception {
-    // Deploy the first JAR file and make sure the class is on the Classpath
-    byte[] jarBytes =
-        this.classBuilder.createJarFromClassContent("com/jcljunit/JarClassLoaderJUnitA",
-            "package com.jcljunit; public class JarClassLoaderJUnitA {}");
-    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnit.jar", jarBytes);
-
-    ClassPathLoader.getLatest().forName("com.jcljunit.JarClassLoaderJUnitA");
+  public void unexpectedContentThrowsException() throws Exception {
+    givenUnexpectedJarFileContents();
 
-    // Update the JAR file and make sure the first class is no longer on the Classpath
-    // and the second one is.
-    jarBytes = this.classBuilder.createJarFromClassContent("com/jcljunit/JarClassLoaderJUnitB",
-        "package com.jcljunit; public class JarClassLoaderJUnitB {}");
-    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnit.jar", jarBytes);
-
-    ClassPathLoader.getLatest().forName("com.jcljunit.JarClassLoaderJUnitB");
-    assertThatThrownBy(
-        () -> ClassPathLoader.getLatest().forName("com.jcljunit.JarClassLoaderJUnitA"))
-            .isInstanceOf(ClassNotFoundException.class);
+    assertThatThrownBy(() -> new DeployedJar(jarFile, JAR_NAME, expectedJarBytes))
+        .isInstanceOf(IllegalStateException.class);
   }
 
   @Test
-  public void testFailingCompilation() throws Exception {
-    String functionString = "import org.apache.geode.cache.Declarable;"
-        + "import org.apache.geode.cache.execute.Function;"
-        + "import org.apache.geode.cache.execute.FunctionContext;"
-        + "public class JarClassLoaderJUnitFunction implements Function {}";
+  public void invalidContentThrowsException() throws Exception {
+    byte[] invalidJarBytes = givenInvalidJarBytes();
 
-    assertThatThrownBy(() -> this.classBuilder
-        .createJarFromClassContent("JarClassLoaderJUnitFunction", functionString)).isNotNull();
+    assertThatThrownBy(() -> new DeployedJar(jarFile, JAR_NAME, invalidJarBytes))
+        .isInstanceOf(IllegalArgumentException.class);
   }
 
-  @Test
-  public void testFunctions() throws Exception {
-    // Test creating a JAR file with a function
-    String functionString =
-        "import java.util.Properties;" + "import org.apache.geode.cache.Declarable;"
-            + "import org.apache.geode.cache.execute.Function;"
-            + "import org.apache.geode.cache.execute.FunctionContext;"
-            + "public class JarClassLoaderJUnitFunction implements Function {"
-            + "public void init(Properties props) {}" + "public boolean hasResult() {return true;}"
-            + "public void execute(FunctionContext context) {context.getResultSender().lastResult(\"GOODv1\");}"
-            + "public String getId() {return \"JarClassLoaderJUnitFunction\";}"
-            + "public boolean optimizeForWrite() {return false;}"
-            + "public boolean isHA() {return false;}}";
-
-    byte[] jarBytes =
-        this.classBuilder.createJarFromClassContent("JarClassLoaderJUnitFunction", functionString);
-
-    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnit.jar", jarBytes);
-
-    Function function = FunctionService.getFunction("JarClassLoaderJUnitFunction");
-    assertThat(function).isNotNull();
-    TestResultSender resultSender = new TestResultSender();
-    FunctionContext functionContext =
-        new FunctionContextImpl(null, function.getId(), null, resultSender);
-    function.execute(functionContext);
-    assertThat(resultSender.getResults()).isEqualTo("GOODv1");
-
-    // Test updating the function with a new JAR file
-    functionString = functionString.replace("v1", "v2");
-    jarBytes =
-        this.classBuilder.createJarFromClassContent("JarClassLoaderJUnitFunction", functionString);
-    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnit.jar", jarBytes);
-
-    function = FunctionService.getFunction("JarClassLoaderJUnitFunction");
-    assertThat(function).isNotNull();
-    resultSender = new TestResultSender();
-    functionContext = new FunctionContextImpl(null, function.getId(), null, resultSender);
-    function.execute(functionContext);
-    assertThat(resultSender.getResults()).isEqualTo("GOODv2");
-
-    // Test returning null for the Id
-    String functionNullIdString =
-        functionString.replace("return \"JarClassLoaderJUnitFunction\"", "return null");
-    jarBytes = this.classBuilder.createJarFromClassContent("JarClassLoaderJUnitFunction",
-        functionNullIdString);
-    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnit.jar", jarBytes);
-
-    assertThat(FunctionService.getFunction("JarClassLoaderJUnitFunction")).isNull();
-
-    // Test removing the JAR
-    ClassPathLoader.getLatest().getJarDeployer().undeploy("JarClassLoaderJUnit.jar");
-    assertThat(FunctionService.getFunction("JarClassLoaderJUnitFunction")).isNull();
+  private void givenUnexpectedJarFileContents() throws IOException {
+    FileUtils.deleteQuietly(jarFile);
+    jarBuilder.buildJarFromClassNames(jarFile, "UnexpectedClass");
   }
 
-  /**
-   * Ensure that abstract functions aren't added to the Function Service.
-   */
-  @Test
-  public void testAbstractFunction() throws Exception {
-    // Add an abstract Function to the Classpath
-    String functionString = "import org.apache.geode.cache.execute.Function;"
-        + "public abstract class JarClassLoaderJUnitFunction implements Function {"
-        + "public String getId() {return \"JarClassLoaderJUnitFunction\";}}";
-
-    byte[] jarBytes =
-        this.classBuilder.createJarFromClassContent("JarClassLoaderJUnitFunction", functionString);
-    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnitFunction.jar",
-        jarBytes);
-
-    ClassPathLoader.getLatest().forName("JarClassLoaderJUnitFunction");
-
-    Function function = FunctionService.getFunction("JarClassLoaderJUnitFunction");
-    assertThat(function).isNull();
-  }
-
-  @Test
-  public void testDeclarableFunctionsWithNoCacheXml() throws Exception {
-    final String jarName = "JarClassLoaderJUnitNoXml.jar";
-
-    // Add a Declarable Function without parameters for the class to the Classpath
-    String functionString =
-        "import java.util.Properties;" + "import org.apache.geode.cache.Declarable;"
-            + "import org.apache.geode.cache.execute.Function;"
-            + "import org.apache.geode.cache.execute.FunctionContext;"
-            + "public class JarClassLoaderJUnitFunctionNoXml implements Function, Declarable {"
-            + "public String getId() {return \"JarClassLoaderJUnitFunctionNoXml\";}"
-            + "public void init(Properties props) {}"
-            + "public void execute(FunctionContext context) {context.getResultSender().lastResult(\"NOPARMSv1\");}"
-            + "public boolean hasResult() {return true;}"
-            + "public boolean optimizeForWrite() {return false;}"
-            + "public boolean isHA() {return false;}}";
-
-    byte[] jarBytes = this.classBuilder
-        .createJarFromClassContent("JarClassLoaderJUnitFunctionNoXml", functionString);
-
-    ClassPathLoader.getLatest().getJarDeployer().deploy(jarName, jarBytes);
-
-    ClassPathLoader.getLatest().forName("JarClassLoaderJUnitFunctionNoXml");
-
-    // Check to see if the function without parameters executes correctly
-    Function function = FunctionService.getFunction("JarClassLoaderJUnitFunctionNoXml");
-    assertThat(function).isNotNull();
-    TestResultSender resultSender = new TestResultSender();
-    function.execute(new FunctionContextImpl(null, function.getId(), null, resultSender));
-    assertThat((String) resultSender.getResults()).isEqualTo("NOPARMSv1");
-  }
-
-  @Test
-  public void testDependencyBetweenJars() throws Exception {
-    final File parentJarFile = temporaryFolder.newFile("JarClassLoaderJUnitParent.jar");
-    final File usesJarFile = temporaryFolder.newFile("JarClassLoaderJUnitUses.jar");
-
-    // Write out a JAR files.
-    StringBuffer stringBuffer = new StringBuffer();
-    stringBuffer.append("package jcljunit.parent;");
-    stringBuffer.append("public class JarClassLoaderJUnitParent {");
-    stringBuffer.append("public String getValueParent() {");
-    stringBuffer.append("return \"PARENT\";}}");
-
-    byte[] jarBytes = this.classBuilder.createJarFromClassContent(
-        "jcljunit/parent/JarClassLoaderJUnitParent", stringBuffer.toString());
-    writeJarBytesToFile(parentJarFile, jarBytes);
-    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnitParent.jar", jarBytes);
-
-    stringBuffer = new StringBuffer();
-    stringBuffer.append("package jcljunit.uses;");
-    stringBuffer.append("public class JarClassLoaderJUnitUses {");
-    stringBuffer.append("public String getValueUses() {");
-    stringBuffer.append("return \"USES\";}}");
-
-    jarBytes = this.classBuilder.createJarFromClassContent("jcljunit/uses/JarClassLoaderJUnitUses",
-        stringBuffer.toString());
-    writeJarBytesToFile(usesJarFile, jarBytes);
-    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnitUses.jar", jarBytes);
-
-    stringBuffer = new StringBuffer();
-    stringBuffer.append("package jcljunit.function;");
-    stringBuffer.append("import jcljunit.parent.JarClassLoaderJUnitParent;");
-    stringBuffer.append("import jcljunit.uses.JarClassLoaderJUnitUses;");
-    stringBuffer.append("import org.apache.geode.cache.execute.Function;");
-    stringBuffer.append("import org.apache.geode.cache.execute.FunctionContext;");
-    stringBuffer.append(
-        "public class JarClassLoaderJUnitFunction  extends JarClassLoaderJUnitParent implements Function {");
-    stringBuffer.append("private JarClassLoaderJUnitUses uses = new JarClassLoaderJUnitUses();");
-    stringBuffer.append("public boolean hasResult() {return true;}");
-    stringBuffer.append(
-        "public void execute(FunctionContext context) {context.getResultSender().lastResult(getValueParent() + \":\" + uses.getValueUses());}");
-    stringBuffer.append("public String getId() {return \"JarClassLoaderJUnitFunction\";}");
-    stringBuffer.append("public boolean optimizeForWrite() {return false;}");
-    stringBuffer.append("public boolean isHA() {return false;}}");
-
-    ClassBuilder functionClassBuilder = new ClassBuilder();
-    functionClassBuilder.addToClassPath(parentJarFile.getAbsolutePath());
-    functionClassBuilder.addToClassPath(usesJarFile.getAbsolutePath());
-    jarBytes = functionClassBuilder.createJarFromClassContent(
-        "jcljunit/function/JarClassLoaderJUnitFunction", stringBuffer.toString());
-
-    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnitFunction.jar",
-        jarBytes);
-
-    Function function = FunctionService.getFunction("JarClassLoaderJUnitFunction");
-    assertThat(function).isNotNull();
-    TestResultSender resultSender = new TestResultSender();
-    FunctionContext functionContext =
-        new FunctionContextImpl(null, function.getId(), null, resultSender);
-    function.execute(functionContext);
-    assertThat((String) resultSender.getResults()).isEqualTo("PARENT:USES");
-  }
-
-  @Test
-  public void testFindResource() throws IOException, ClassNotFoundException {
-    final String fileName = "file.txt";
-    final String fileContent = "FILE CONTENT";
-
-    byte[] jarBytes = this.classBuilder.createJarFromFileContent(fileName, fileContent);
-    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnitResource.jar",
-        jarBytes);
-
-    InputStream inputStream = ClassPathLoader.getLatest().getResourceAsStream(fileName);
-    assertThat(inputStream).isNotNull();
-
-    final byte[] fileBytes = new byte[fileContent.length()];
-    inputStream.read(fileBytes);
-    inputStream.close();
-    assertThat(fileContent).isEqualTo(new String(fileBytes));
-  }
-
-  @Test
-  public void testUpdateClassInJar() throws Exception {
-    // First use of the JAR file
-    byte[] jarBytes = this.classBuilder.createJarFromClassContent("JarClassLoaderJUnitTestClass",
-        "public class JarClassLoaderJUnitTestClass { public Integer getValue5() { return new Integer(5); } }");
-    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnitUpdate.jar", jarBytes);
-
-    Class<?> clazz = ClassPathLoader.getLatest().forName("JarClassLoaderJUnitTestClass");
-    Object object = clazz.newInstance();
-    Method getValue5Method = clazz.getMethod("getValue5");
-    Integer value = (Integer) getValue5Method.invoke(object);
-    assertThat(value).isEqualTo(5);
-
-    // Now create an updated JAR file and make sure that the method from the new
-    // class is available.
-    jarBytes = this.classBuilder.createJarFromClassContent("JarClassLoaderJUnitTestClass",
-        "public class JarClassLoaderJUnitTestClass { public Integer getValue10() { return new Integer(10); } }");
-    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnitUpdate.jar", jarBytes);
-
-    clazz = ClassPathLoader.getLatest().forName("JarClassLoaderJUnitTestClass");
-    object = clazz.newInstance();
-    Method getValue10Method = clazz.getMethod("getValue10");
-    value = (Integer) getValue10Method.invoke(object);
-    assertThat(value).isEqualTo(10);
-  }
-
-  @Test
-  public void testMultiThreadingDoesNotCauseDeadlock() throws Exception {
-    // Add two JARs to the classpath
-    byte[] jarBytes = this.classBuilder.createJarFromName("JarClassLoaderJUnitA");
-    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnitA.jar", jarBytes);
-
-    jarBytes = this.classBuilder.createJarFromClassContent("com/jcljunit/JarClassLoaderJUnitB",
-        "package com.jcljunit; public class JarClassLoaderJUnitB {}");
-    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnitB.jar", jarBytes);
-
-    String[] classNames = new String[] {"JarClassLoaderJUnitA", "com.jcljunit.JarClassLoaderJUnitB",
-        "NON-EXISTENT CLASS"};
-
-    final int threadCount = 10;
-    ExecutorService executorService = Executors.newFixedThreadPool(threadCount);
-    for (int i = 0; i < threadCount; i++) {
-      executorService.submit(new ForNameExerciser(classNames));
-    }
-
-    executorService.shutdown();
-    Awaitility.await().atMost(60, TimeUnit.SECONDS).until(executorService::isTerminated);
-
-    ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
-    long[] threadIds = threadMXBean.findDeadlockedThreads();
-
-    if (threadIds != null) {
-      StringBuilder deadLockTrace = new StringBuilder();
-      for (long threadId : threadIds) {
-        ThreadInfo threadInfo = threadMXBean.getThreadInfo(threadId, 100);
-        deadLockTrace.append(threadInfo.getThreadName()).append("\n");
-        for (StackTraceElement stackTraceElem : threadInfo.getStackTrace()) {
-          deadLockTrace.append("\t").append(stackTraceElem).append("\n");
-        }
-      }
-      System.out.println(deadLockTrace);
-    }
-    assertThat(threadIds).isNull();
-  }
-
-  private void writeJarBytesToFile(File jarFile, byte[] jarBytes) throws IOException {
-    final OutputStream outStream = new FileOutputStream(jarFile);
-    outStream.write(jarBytes);
-    outStream.close();
-  }
-
-  private static class TestResultSender implements ResultSender<Object> {
-    private Object result;
-
-    public TestResultSender() {}
-
-    protected Object getResults() {
-      return this.result;
-    }
-
-    @Override
-    public void lastResult(final Object lastResult) {
-      this.result = lastResult;
-    }
-
-    @Override
-    public void sendResult(final Object oneResult) {
-      throw new UnsupportedOperationException();
-    }
-
-    @Override
-    public void sendException(final Throwable t) {
-      throw new UnsupportedOperationException();
-    }
-  }
-
-  static final Random random = new Random();
-
-  private class ForNameExerciser implements Runnable {
-    private final int numLoops = 1000;
-    private final String[] classNames;
-
-    ForNameExerciser(final String[] classNames) {
-      this.classNames = classNames;
-    }
+  private byte[] givenInvalidJarBytes() throws IOException {
+    byte[] invalidJarBytes = "INVALID JAR CONTENT".getBytes();
+    FileUtils.writeByteArrayToFile(jarFile, invalidJarBytes);
 
-    @Override
-    public void run() {
-      for (int i = 0; i < this.numLoops; i++) {
-        try {
-          // Random select a name from the list of class names and try to load it
-          String className = this.classNames[random.nextInt(this.classNames.length)];
-          ClassPathLoader.getLatest().forName(className);
-        } catch (ClassNotFoundException expected) { // expected
-        } catch (Exception e) {
-          throw new RuntimeException(e);
-        }
-      }
-    }
+    return invalidJarBytes;
   }
 }

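The `TestResultSender` deleted from `DeployedJarJUnitTest` above exercised deployed Functions by capturing only the terminal result and rejecting everything else. The pattern is easy to reuse; the sketch below illustrates it against a simplified stand-in interface (`SimpleResultSender` is an assumption for self-containment, not Geode's actual `org.apache.geode.cache.execute.ResultSender`):

```java
// Simplified stand-in for Geode's ResultSender, used only to illustrate the
// capture-stub pattern from TestResultSender; not the real Geode API.
interface SimpleResultSender<T> {
  void lastResult(T lastResult);
  void sendResult(T oneResult);
}

public class ResultCaptureDemo {
  static class CapturingSender implements SimpleResultSender<Object> {
    private Object result;

    Object getResult() {
      return result;
    }

    @Override
    public void lastResult(Object lastResult) {
      this.result = lastResult; // remember only the terminal result
    }

    @Override
    public void sendResult(Object oneResult) {
      // streaming intermediate results is out of scope for these tests,
      // so any attempt to use it fails loudly
      throw new UnsupportedOperationException();
    }
  }

  public static void main(String[] args) {
    CapturingSender sender = new CapturingSender();
    // Simulate a function writing its final result to the sender.
    sender.lastResult("GOODv1");
    if (!"GOODv1".equals(sender.getResult())) {
      throw new AssertionError("unexpected result: " + sender.getResult());
    }
    System.out.println(sender.getResult());
  }
}
```

Throwing from the unused methods (rather than silently ignoring them) means a function under test that unexpectedly streams results fails the test immediately.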
http://git-wip-us.apache.org/repos/asf/geode/blob/c5dd26b7/geode-core/src/test/java/org/apache/geode/internal/JarDeployerDeadlockTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/JarDeployerDeadlockTest.java b/geode-core/src/test/java/org/apache/geode/internal/JarDeployerDeadlockTest.java
new file mode 100644
index 0000000..7ff1774
--- /dev/null
+++ b/geode-core/src/test/java/org/apache/geode/internal/JarDeployerDeadlockTest.java
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.internal;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+import java.io.File;
+import java.lang.management.ManagementFactory;
+import java.lang.management.ThreadInfo;
+import java.lang.management.ThreadMXBean;
+import java.util.Random;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+
+import org.awaitility.Awaitility;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.contrib.java.lang.system.RestoreSystemProperties;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TemporaryFolder;
+
+import org.apache.geode.cache.execute.FunctionService;
+import org.apache.geode.test.compiler.JarBuilder;
+import org.apache.geode.test.junit.categories.IntegrationTest;
+
+@Category(IntegrationTest.class)
+public class JarDeployerDeadlockTest {
+  @Rule
+  public TemporaryFolder temporaryFolder = new TemporaryFolder();
+
+  @Rule
+  public RestoreSystemProperties restoreSystemProperties = new RestoreSystemProperties();
+
+  private ClassBuilder classBuilder;
+
+  @Before
+  public void setup() throws Exception {
+    File workingDir = temporaryFolder.newFolder();
+    ClassPathLoader.setLatestToDefault(workingDir);
+    classBuilder = new ClassBuilder();
+  }
+
+  @After
+  public void tearDown() throws Exception {
+    for (String functionName : FunctionService.getRegisteredFunctions().keySet()) {
+      FunctionService.unregisterFunction(functionName);
+    }
+
+    ClassPathLoader.setLatestToDefault();
+  }
+
+  @Test
+  public void testMultiThreadingDoesNotCauseDeadlock() throws Exception {
+    // Add two JARs to the classpath
+    byte[] jarBytes = this.classBuilder.createJarFromName("JarClassLoaderJUnitA");
+    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnitA.jar", jarBytes);
+
+    jarBytes = this.classBuilder.createJarFromClassContent("com/jcljunit/JarClassLoaderJUnitB",
+        "package com.jcljunit; public class JarClassLoaderJUnitB {}");
+    ClassPathLoader.getLatest().getJarDeployer().deploy("JarClassLoaderJUnitB.jar", jarBytes);
+
+    String[] classNames = new String[] {"JarClassLoaderJUnitA", "com.jcljunit.JarClassLoaderJUnitB",
+        "NON-EXISTENT CLASS"};
+
+    final int threadCount = 10;
+    ExecutorService executorService = Executors.newFixedThreadPool(threadCount);
+    for (int i = 0; i < threadCount; i++) {
+      executorService.submit(new ForNameExerciser(classNames));
+    }
+
+    executorService.shutdown();
+    Awaitility.await().atMost(60, TimeUnit.SECONDS).until(executorService::isTerminated);
+
+    ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
+    long[] threadIds = threadMXBean.findDeadlockedThreads();
+
+    if (threadIds != null) {
+      StringBuilder deadLockTrace = new StringBuilder();
+      for (long threadId : threadIds) {
+        ThreadInfo threadInfo = threadMXBean.getThreadInfo(threadId, 100);
+        deadLockTrace.append(threadInfo.getThreadName()).append("\n");
+        for (StackTraceElement stackTraceElem : threadInfo.getStackTrace()) {
+          deadLockTrace.append("\t").append(stackTraceElem).append("\n");
+        }
+      }
+      System.out.println(deadLockTrace);
+    }
+    assertThat(threadIds).isNull();
+  }
+
+  private class ForNameExerciser implements Runnable {
+    private final Random random = new Random();
+
+    private final int numLoops = 1000;
+    private final String[] classNames;
+
+    ForNameExerciser(final String[] classNames) {
+      this.classNames = classNames;
+    }
+
+    @Override
+    public void run() {
+      for (int i = 0; i < this.numLoops; i++) {
+        try {
+          // Randomly select a name from the list of class names and try to load it
+          String className = this.classNames[random.nextInt(this.classNames.length)];
+          ClassPathLoader.getLatest().forName(className);
+        } catch (ClassNotFoundException expected) { // expected
+        } catch (Exception e) {
+          throw new RuntimeException(e);
+        }
+      }
+    }
+  }
+
+}

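The new `JarDeployerDeadlockTest` above asserts the absence of a deadlock via `ThreadMXBean.findDeadlockedThreads()`, which returns `null` when no threads are deadlocked. To see the positive case, the JDK-only sketch below deliberately manufactures a two-lock monitor deadlock and waits for the JVM to report it (the threads are daemons so the VM can still exit):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.CountDownLatch;

public class DeadlockDetectionDemo {
  public static void main(String[] args) throws InterruptedException {
    final Object lockA = new Object();
    final Object lockB = new Object();
    final CountDownLatch bothHoldFirstLock = new CountDownLatch(2);

    // Two daemon threads acquire the locks in opposite order; once both hold
    // their first lock, each blocks forever waiting for the other's.
    Thread t1 = new Thread(() -> {
      synchronized (lockA) {
        bothHoldFirstLock.countDown();
        try { bothHoldFirstLock.await(); } catch (InterruptedException ignored) {}
        synchronized (lockB) {}
      }
    });
    Thread t2 = new Thread(() -> {
      synchronized (lockB) {
        bothHoldFirstLock.countDown();
        try { bothHoldFirstLock.await(); } catch (InterruptedException ignored) {}
        synchronized (lockA) {}
      }
    });
    t1.setDaemon(true);
    t2.setDaemon(true);
    t1.start();
    t2.start();

    // Poll until the deadlock is visible to the JVM; it may take a moment
    // for both threads to reach the blocked state.
    ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
    long[] threadIds = null;
    for (int i = 0; i < 100 && threadIds == null; i++) {
      Thread.sleep(50);
      threadIds = threadMXBean.findDeadlockedThreads();
    }
    System.out.println("deadlocked threads: " + (threadIds == null ? 0 : threadIds.length));
  }
}
```

The deadlock test in the diff runs the same check in reverse: after the `ForNameExerciser` threads finish, `findDeadlockedThreads()` must return `null`, proving the classloading paths never take the two locks in conflicting order.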
http://git-wip-us.apache.org/repos/asf/geode/blob/c5dd26b7/geode-core/src/test/java/org/apache/geode/management/DeployJarTestSuite.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/management/DeployJarTestSuite.java b/geode-core/src/test/java/org/apache/geode/management/DeployJarTestSuite.java
index 6dfab66..9a8348e 100644
--- a/geode-core/src/test/java/org/apache/geode/management/DeployJarTestSuite.java
+++ b/geode-core/src/test/java/org/apache/geode/management/DeployJarTestSuite.java
@@ -19,7 +19,7 @@ import org.apache.geode.internal.ClassPathLoaderTest;
 import org.apache.geode.internal.DeployedJarJUnitTest;
 import org.apache.geode.internal.JarDeployerIntegrationTest;
 import org.apache.geode.management.internal.cli.commands.DeployCommandRedeployDUnitTest;
-import org.apache.geode.management.internal.cli.commands.DeployCommandsDUnitTest;
+import org.apache.geode.management.internal.cli.commands.DeployWithGroupsDUnitTest;
 import org.apache.geode.management.internal.configuration.ClusterConfigDeployJarDUnitTest;
 import org.junit.Ignore;
 import org.junit.runner.RunWith;
@@ -28,7 +28,7 @@ import org.junit.runners.Suite;
 
 @Ignore
 @RunWith(Suite.class)
-@Suite.SuiteClasses({DeployedJarJUnitTest.class, DeployCommandsDUnitTest.class,
+@Suite.SuiteClasses({DeployedJarJUnitTest.class, DeployWithGroupsDUnitTest.class,
     JarDeployerIntegrationTest.class, ClassPathLoaderIntegrationTest.class,
     ClassPathLoaderTest.class, DeployCommandRedeployDUnitTest.class,
     ClusterConfigDeployJarDUnitTest.class})

http://git-wip-us.apache.org/repos/asf/geode/blob/c5dd26b7/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/DeployCommandsDUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/DeployCommandsDUnitTest.java b/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/DeployCommandsDUnitTest.java
deleted file mode 100644
index 89148d7..0000000
--- a/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/DeployCommandsDUnitTest.java
+++ /dev/null
@@ -1,303 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
- * agreements. See the NOTICE file distributed with this work for additional information regarding
- * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License. You may obtain a
- * copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed under the License
- * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
- * or implied. See the License for the specific language governing permissions and limitations under
- * the License.
- */
-package org.apache.geode.management.internal.cli.commands;
-
-import static org.apache.geode.distributed.ConfigurationProperties.GROUPS;
-import static org.assertj.core.api.Assertions.assertThat;
-import static org.assertj.core.api.Assertions.assertThatThrownBy;
-
-import java.io.File;
-import java.io.Serializable;
-import java.util.Properties;
-
-import org.junit.Before;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-
-import org.apache.geode.internal.ClassBuilder;
-import org.apache.geode.internal.ClassPathLoader;
-import org.apache.geode.test.dunit.rules.GfshShellConnectionRule;
-import org.apache.geode.test.dunit.rules.LocatorServerStartupRule;
-import org.apache.geode.test.dunit.rules.MemberVM;
-import org.apache.geode.test.junit.categories.DistributedTest;
-import org.apache.geode.test.junit.rules.serializable.SerializableTemporaryFolder;
-
-/**
- * Unit tests for the DeployCommands class
- * 
- * @since GemFire 7.0
- */
-@SuppressWarnings("serial")
-@Category(DistributedTest.class)
-public class DeployCommandsDUnitTest implements Serializable {
-  private static final String GROUP1 = "Group1";
-  private static final String GROUP2 = "Group2";
-
-  private final String class1 = "DeployCommandsDUnitA";
-  private final String class2 = "DeployCommandsDUnitB";
-  private final String class3 = "DeployCommandsDUnitC";
-  private final String class4 = "DeployCommandsDUnitD";
-
-  private final String jarName1 = "DeployCommandsDUnit1.jar";
-  private final String jarName2 = "DeployCommandsDUnit2.jar";
-  private final String jarName3 = "DeployCommandsDUnit3.jar";
-  private final String jarName4 = "DeployCommandsDUnit4.jar";
-
-  private File jar1;
-  private File jar2;
-  private File jar3;
-  private File jar4;
-  private File subdirWithJars3and4;
-
-  private MemberVM locator;
-  private MemberVM server1;
-  private MemberVM server2;
-
-  @Rule
-  public SerializableTemporaryFolder temporaryFolder = new SerializableTemporaryFolder();
-
-  @Rule
-  public LocatorServerStartupRule lsRule = new LocatorServerStartupRule();
-
-  @Rule
-  public transient GfshShellConnectionRule gfshConnector = new GfshShellConnectionRule();
-
-  @Before
-  public void setup() throws Exception {
-    ClassBuilder classBuilder = new ClassBuilder();
-    File jarsDir = temporaryFolder.newFolder();
-    jar1 = new File(jarsDir, jarName1);
-    jar2 = new File(jarsDir, jarName2);
-
-    subdirWithJars3and4 = new File(jarsDir, "subdir");
-    subdirWithJars3and4.mkdirs();
-    jar3 = new File(subdirWithJars3and4, jarName3);
-    jar4 = new File(subdirWithJars3and4, jarName4);
-
-    classBuilder.writeJarFromName(class1, jar1);
-    classBuilder.writeJarFromName(class2, jar2);
-    classBuilder.writeJarFromName(class3, jar3);
-    classBuilder.writeJarFromName(class4, jar4);
-
-    locator = lsRule.startLocatorVM(0);
-
-    Properties props = new Properties();
-    props.setProperty(GROUPS, GROUP1);
-    server1 = lsRule.startServerVM(1, props, locator.getPort());
-
-    props.setProperty(GROUPS, GROUP2);
-    server2 = lsRule.startServerVM(2, props, locator.getPort());
-
-    gfshConnector.connectAndVerify(locator);
-  }
-
-  @Test
-  public void deployJarToOneGroup() throws Exception {
-    // Deploy a jar to a single group
-    gfshConnector.executeAndVerifyCommand("deploy --jar=" + jar2 + " --group=" + GROUP1);
-    String resultString = gfshConnector.getGfshOutput();
-
-    assertThat(resultString).contains(server1.getName());
-    assertThat(resultString).doesNotContain(server2.getName());
-    assertThat(resultString).contains(jarName2);
-
-    server1.invoke(() -> assertThatCanLoad(jarName2, class2));
-    server2.invoke(() -> assertThatCannotLoad(jarName2, class2));
-  }
-
-  @Test
-  public void deployJarsInDirToOneGroup() throws Exception {
-    // Deploy of multiple JARs to a single group
-    gfshConnector.executeAndVerifyCommand(
-        "deploy --group=" + GROUP1 + " --dir=" + subdirWithJars3and4.getCanonicalPath());
-    String resultString = gfshConnector.getGfshOutput();
-
-    assertThat(resultString).describedAs(resultString).contains(server1.getName());
-    assertThat(resultString).doesNotContain(server2.getName());
-    assertThat(resultString).contains(jarName3);
-    assertThat(resultString).contains(jarName4);
-
-    server1.invoke(() -> {
-      assertThatCanLoad(jarName3, class3);
-      assertThatCanLoad(jarName4, class4);
-    });
-    server2.invoke(() -> {
-      assertThatCannotLoad(jarName3, class3);
-      assertThatCannotLoad(jarName4, class4);
-    });
-
-    // Undeploy of multiple jars by specifying group
-    gfshConnector.executeAndVerifyCommand("undeploy --group=" + GROUP1);
-    server1.invoke(() -> {
-      assertThatCannotLoad(jarName3, class3);
-      assertThatCannotLoad(jarName4, class4);
-    });
-    server2.invoke(() -> {
-      assertThatCannotLoad(jarName3, class3);
-      assertThatCannotLoad(jarName4, class4);
-    });
-  }
-
-  @Test
-  public void deployMultipleJarsToOneGroup() throws Exception {
-    // Deploy of multiple JARs to a single group
-    gfshConnector.executeAndVerifyCommand("deploy --group=" + GROUP1 + " --jars="
-        + jar3.getAbsolutePath() + "," + jar4.getAbsolutePath());
-    String resultString = gfshConnector.getGfshOutput();
-
-    assertThat(resultString).describedAs(resultString).contains(server1.getName());
-    assertThat(resultString).doesNotContain(server2.getName());
-    assertThat(resultString).contains(jarName3);
-    assertThat(resultString).contains(jarName4);
-
-    server1.invoke(() -> {
-      assertThatCanLoad(jarName3, class3);
-      assertThatCanLoad(jarName4, class4);
-    });
-    server2.invoke(() -> {
-      assertThatCannotLoad(jarName3, class3);
-      assertThatCannotLoad(jarName4, class4);
-    });
-
-    // Undeploy of multiple jars by specifying group
-    gfshConnector.executeAndVerifyCommand("undeploy --jars=" + jarName3 + "," + jarName4);
-    server1.invoke(() -> {
-      assertThatCannotLoad(jarName3, class3);
-      assertThatCannotLoad(jarName4, class4);
-    });
-    server2.invoke(() -> {
-      assertThatCannotLoad(jarName3, class3);
-      assertThatCannotLoad(jarName4, class4);
-    });
-  }
-
-
-  @Test
-  public void deployJarToAllServers() throws Exception {
-    // Deploy a jar to all servers
-    gfshConnector.executeAndVerifyCommand("deploy --jar=" + jar1);
-    String resultString = gfshConnector.getGfshOutput();
-
-    assertThat(resultString).contains(server1.getName());
-    assertThat(resultString).contains(server2.getName());
-    assertThat(resultString).contains(jarName1);
-
-    server1.invoke(() -> assertThatCanLoad(jarName1, class1));
-    server2.invoke(() -> assertThatCanLoad(jarName1, class1));
-
-    // Undeploy of jar by specifying group
-    gfshConnector.executeAndVerifyCommand("undeploy --group=" + GROUP1);
-    server1.invoke(() -> assertThatCannotLoad(jarName1, class1));
-    server2.invoke(() -> assertThatCanLoad(jarName1, class1));
-  }
-
-  @Test
-  public void deployMultipleJarsToAllServers() throws Exception {
-    gfshConnector.executeAndVerifyCommand("deploy --dir=" + subdirWithJars3and4.getCanonicalPath());
-
-    server1.invoke(() -> {
-      assertThatCanLoad(jarName3, class3);
-      assertThatCanLoad(jarName4, class4);
-    });
-    server2.invoke(() -> {
-      assertThatCanLoad(jarName3, class3);
-      assertThatCanLoad(jarName4, class4);
-    });
-
-    gfshConnector.executeAndVerifyCommand("undeploy");
-
-    server1.invoke(() -> {
-      assertThatCannotLoad(jarName3, class3);
-      assertThatCannotLoad(jarName4, class4);
-    });
-    server2.invoke(() -> {
-      assertThatCannotLoad(jarName3, class3);
-      assertThatCannotLoad(jarName4, class4);
-    });
-  }
-
-  @Test
-  public void undeployOfMultipleJars() throws Exception {
-    gfshConnector.executeAndVerifyCommand("deploy --dir=" + subdirWithJars3and4.getCanonicalPath());
-
-    server1.invoke(() -> {
-      assertThatCanLoad(jarName3, class3);
-      assertThatCanLoad(jarName4, class4);
-    });
-    server2.invoke(() -> {
-      assertThatCanLoad(jarName3, class3);
-      assertThatCanLoad(jarName4, class4);
-    });
-
-    gfshConnector
-        .executeAndVerifyCommand("undeploy --jar=" + jar3.getName() + "," + jar4.getName());
-    server1.invoke(() -> {
-      assertThatCannotLoad(jarName3, class3);
-      assertThatCannotLoad(jarName4, class4);
-    });
-    server2.invoke(() -> {
-      assertThatCannotLoad(jarName3, class3);
-      assertThatCannotLoad(jarName4, class4);
-    });
-  }
-
-  private void assertThatCanLoad(String jarName, String className) throws ClassNotFoundException {
-    assertThat(ClassPathLoader.getLatest().getJarDeployer().findDeployedJar(jarName)).isNotNull();
-    assertThat(ClassPathLoader.getLatest().forName(className)).isNotNull();
-  }
-
-  private void assertThatCannotLoad(String jarName, String className) {
-    assertThat(ClassPathLoader.getLatest().getJarDeployer().findDeployedJar(jarName)).isNull();
-    assertThatThrownBy(() -> ClassPathLoader.getLatest().forName(className))
-        .isExactlyInstanceOf(ClassNotFoundException.class);
-  }
-
-
-  @Test
-  public void testListDeployed() throws Exception {
-    // Deploy a couple of JAR files which can be listed
-    gfshConnector
-        .executeAndVerifyCommand("deploy --group=" + GROUP1 + " --jar=" + jar1.getCanonicalPath());
-    gfshConnector
-        .executeAndVerifyCommand("deploy --group=" + GROUP2 + " --jar=" + jar2.getCanonicalPath());
-
-    // List for all members
-    gfshConnector.executeAndVerifyCommand("list deployed");
-    String resultString = gfshConnector.getGfshOutput();
-    assertThat(resultString).contains(server1.getName());
-    assertThat(resultString).contains(server2.getName());
-    assertThat(resultString).contains(jarName1);
-    assertThat(resultString).contains(jarName2);
-
-    // List for members in Group1
-    gfshConnector.executeAndVerifyCommand("list deployed --group=" + GROUP1);
-    resultString = gfshConnector.getGfshOutput();
-    assertThat(resultString).contains(server1.getName());
-    assertThat(resultString).doesNotContain(server2.getName());
-
-    assertThat(resultString).contains(jarName1);
-    assertThat(resultString).doesNotContain(jarName2);
-
-    // List for members in Group2
-    gfshConnector.executeAndVerifyCommand("list deployed --group=" + GROUP2);
-    resultString = gfshConnector.getGfshOutput();
-    assertThat(resultString).doesNotContain(server1.getName());
-    assertThat(resultString).contains(server2.getName());
-
-    assertThat(resultString).doesNotContain(jarName1);
-    assertThat(resultString).contains(jarName2);
-  }
-}

http://git-wip-us.apache.org/repos/asf/geode/blob/c5dd26b7/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/DeployWithGroupsDUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/DeployWithGroupsDUnitTest.java b/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/DeployWithGroupsDUnitTest.java
new file mode 100644
index 0000000..8db7275
--- /dev/null
+++ b/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/DeployWithGroupsDUnitTest.java
@@ -0,0 +1,303 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.management.internal.cli.commands;
+
+import static org.apache.geode.distributed.ConfigurationProperties.GROUPS;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+import java.io.File;
+import java.io.Serializable;
+import java.util.Properties;
+
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import org.apache.geode.internal.ClassBuilder;
+import org.apache.geode.internal.ClassPathLoader;
+import org.apache.geode.test.dunit.rules.GfshShellConnectionRule;
+import org.apache.geode.test.dunit.rules.LocatorServerStartupRule;
+import org.apache.geode.test.dunit.rules.MemberVM;
+import org.apache.geode.test.junit.categories.DistributedTest;
+import org.apache.geode.test.junit.rules.serializable.SerializableTemporaryFolder;
+
+/**
+ * Distributed tests for the gfsh deploy, undeploy, and "list deployed" commands
+ * 
+ * @since GemFire 7.0
+ */
+@SuppressWarnings("serial")
+@Category(DistributedTest.class)
+public class DeployWithGroupsDUnitTest implements Serializable {
+  private static final String GROUP1 = "Group1";
+  private static final String GROUP2 = "Group2";
+
+  private final String class1 = "DeployCommandsDUnitA";
+  private final String class2 = "DeployCommandsDUnitB";
+  private final String class3 = "DeployCommandsDUnitC";
+  private final String class4 = "DeployCommandsDUnitD";
+
+  private final String jarName1 = "DeployCommandsDUnit1.jar";
+  private final String jarName2 = "DeployCommandsDUnit2.jar";
+  private final String jarName3 = "DeployCommandsDUnit3.jar";
+  private final String jarName4 = "DeployCommandsDUnit4.jar";
+
+  private File jar1;
+  private File jar2;
+  private File jar3;
+  private File jar4;
+  private File subdirWithJars3and4;
+
+  private MemberVM locator;
+  private MemberVM server1;
+  private MemberVM server2;
+
+  @Rule
+  public SerializableTemporaryFolder temporaryFolder = new SerializableTemporaryFolder();
+
+  @Rule
+  public LocatorServerStartupRule lsRule = new LocatorServerStartupRule();
+
+  @Rule
+  public transient GfshShellConnectionRule gfshConnector = new GfshShellConnectionRule();
+
+  @Before
+  public void setup() throws Exception {
+    ClassBuilder classBuilder = new ClassBuilder();
+    File jarsDir = temporaryFolder.newFolder();
+    jar1 = new File(jarsDir, jarName1);
+    jar2 = new File(jarsDir, jarName2);
+
+    subdirWithJars3and4 = new File(jarsDir, "subdir");
+    subdirWithJars3and4.mkdirs();
+    jar3 = new File(subdirWithJars3and4, jarName3);
+    jar4 = new File(subdirWithJars3and4, jarName4);
+
+    classBuilder.writeJarFromName(class1, jar1);
+    classBuilder.writeJarFromName(class2, jar2);
+    classBuilder.writeJarFromName(class3, jar3);
+    classBuilder.writeJarFromName(class4, jar4);
+
+    locator = lsRule.startLocatorVM(0);
+
+    Properties props = new Properties();
+    props.setProperty(GROUPS, GROUP1);
+    server1 = lsRule.startServerVM(1, props, locator.getPort());
+
+    props.setProperty(GROUPS, GROUP2);
+    server2 = lsRule.startServerVM(2, props, locator.getPort());
+
+    gfshConnector.connectAndVerify(locator);
+  }
+
+  @Test
+  public void deployJarToOneGroup() throws Exception {
+    // Deploy a jar to a single group
+    gfshConnector.executeAndVerifyCommand("deploy --jar=" + jar2 + " --group=" + GROUP1);
+    String resultString = gfshConnector.getGfshOutput();
+
+    assertThat(resultString).contains(server1.getName());
+    assertThat(resultString).doesNotContain(server2.getName());
+    assertThat(resultString).contains(jarName2);
+
+    server1.invoke(() -> assertThatCanLoad(jarName2, class2));
+    server2.invoke(() -> assertThatCannotLoad(jarName2, class2));
+  }
+
+  @Test
+  public void deployJarsInDirToOneGroup() throws Exception {
+    // Deploy of multiple JARs to a single group
+    gfshConnector.executeAndVerifyCommand(
+        "deploy --group=" + GROUP1 + " --dir=" + subdirWithJars3and4.getCanonicalPath());
+    String resultString = gfshConnector.getGfshOutput();
+
+    assertThat(resultString).describedAs(resultString).contains(server1.getName());
+    assertThat(resultString).doesNotContain(server2.getName());
+    assertThat(resultString).contains(jarName3);
+    assertThat(resultString).contains(jarName4);
+
+    server1.invoke(() -> {
+      assertThatCanLoad(jarName3, class3);
+      assertThatCanLoad(jarName4, class4);
+    });
+    server2.invoke(() -> {
+      assertThatCannotLoad(jarName3, class3);
+      assertThatCannotLoad(jarName4, class4);
+    });
+
+    // Undeploy of multiple jars by specifying group
+    gfshConnector.executeAndVerifyCommand("undeploy --group=" + GROUP1);
+    server1.invoke(() -> {
+      assertThatCannotLoad(jarName3, class3);
+      assertThatCannotLoad(jarName4, class4);
+    });
+    server2.invoke(() -> {
+      assertThatCannotLoad(jarName3, class3);
+      assertThatCannotLoad(jarName4, class4);
+    });
+  }
+
+  @Test
+  public void deployMultipleJarsToOneGroup() throws Exception {
+    // Deploy of multiple JARs to a single group
+    gfshConnector.executeAndVerifyCommand("deploy --group=" + GROUP1 + " --jars="
+        + jar3.getAbsolutePath() + "," + jar4.getAbsolutePath());
+    String resultString = gfshConnector.getGfshOutput();
+
+    assertThat(resultString).describedAs(resultString).contains(server1.getName());
+    assertThat(resultString).doesNotContain(server2.getName());
+    assertThat(resultString).contains(jarName3);
+    assertThat(resultString).contains(jarName4);
+
+    server1.invoke(() -> {
+      assertThatCanLoad(jarName3, class3);
+      assertThatCanLoad(jarName4, class4);
+    });
+    server2.invoke(() -> {
+      assertThatCannotLoad(jarName3, class3);
+      assertThatCannotLoad(jarName4, class4);
+    });
+
+    // Undeploy of multiple jars by specifying group
+    gfshConnector.executeAndVerifyCommand("undeploy --jars=" + jarName3 + "," + jarName4);
+    server1.invoke(() -> {
+      assertThatCannotLoad(jarName3, class3);
+      assertThatCannotLoad(jarName4, class4);
+    });
+    server2.invoke(() -> {
+      assertThatCannotLoad(jarName3, class3);
+      assertThatCannotLoad(jarName4, class4);
+    });
+  }
+
+
+  @Test
+  public void deployJarToAllServers() throws Exception {
+    // Deploy a jar to all servers
+    gfshConnector.executeAndVerifyCommand("deploy --jar=" + jar1);
+    String resultString = gfshConnector.getGfshOutput();
+
+    assertThat(resultString).contains(server1.getName());
+    assertThat(resultString).contains(server2.getName());
+    assertThat(resultString).contains(jarName1);
+
+    server1.invoke(() -> assertThatCanLoad(jarName1, class1));
+    server2.invoke(() -> assertThatCanLoad(jarName1, class1));
+
+    // Undeploy of jar by specifying group
+    gfshConnector.executeAndVerifyCommand("undeploy --group=" + GROUP1);
+    server1.invoke(() -> assertThatCannotLoad(jarName1, class1));
+    server2.invoke(() -> assertThatCanLoad(jarName1, class1));
+  }
+
+  @Test
+  public void deployMultipleJarsToAllServers() throws Exception {
+    gfshConnector.executeAndVerifyCommand("deploy --dir=" + subdirWithJars3and4.getCanonicalPath());
+
+    server1.invoke(() -> {
+      assertThatCanLoad(jarName3, class3);
+      assertThatCanLoad(jarName4, class4);
+    });
+    server2.invoke(() -> {
+      assertThatCanLoad(jarName3, class3);
+      assertThatCanLoad(jarName4, class4);
+    });
+
+    gfshConnector.executeAndVerifyCommand("undeploy");
+
+    server1.invoke(() -> {
+      assertThatCannotLoad(jarName3, class3);
+      assertThatCannotLoad(jarName4, class4);
+    });
+    server2.invoke(() -> {
+      assertThatCannotLoad(jarName3, class3);
+      assertThatCannotLoad(jarName4, class4);
+    });
+  }
+
+  @Test
+  public void undeployOfMultipleJars() throws Exception {
+    gfshConnector.executeAndVerifyCommand("deploy --dir=" + subdirWithJars3and4.getCanonicalPath());
+
+    server1.invoke(() -> {
+      assertThatCanLoad(jarName3, class3);
+      assertThatCanLoad(jarName4, class4);
+    });
+    server2.invoke(() -> {
+      assertThatCanLoad(jarName3, class3);
+      assertThatCanLoad(jarName4, class4);
+    });
+
+    gfshConnector
+        .executeAndVerifyCommand("undeploy --jar=" + jar3.getName() + "," + jar4.getName());
+    server1.invoke(() -> {
+      assertThatCannotLoad(jarName3, class3);
+      assertThatCannotLoad(jarName4, class4);
+    });
+    server2.invoke(() -> {
+      assertThatCannotLoad(jarName3, class3);
+      assertThatCannotLoad(jarName4, class4);
+    });
+  }
+
+  private void assertThatCanLoad(String jarName, String className) throws ClassNotFoundException {
+    assertThat(ClassPathLoader.getLatest().getJarDeployer().findDeployedJar(jarName)).isNotNull();
+    assertThat(ClassPathLoader.getLatest().forName(className)).isNotNull();
+  }
+
+  private void assertThatCannotLoad(String jarName, String className) {
+    assertThat(ClassPathLoader.getLatest().getJarDeployer().findDeployedJar(jarName)).isNull();
+    assertThatThrownBy(() -> ClassPathLoader.getLatest().forName(className))
+        .isExactlyInstanceOf(ClassNotFoundException.class);
+  }
+
+
+  @Test
+  public void testListDeployed() throws Exception {
+    // Deploy a couple of JAR files which can be listed
+    gfshConnector
+        .executeAndVerifyCommand("deploy --group=" + GROUP1 + " --jar=" + jar1.getCanonicalPath());
+    gfshConnector
+        .executeAndVerifyCommand("deploy --group=" + GROUP2 + " --jar=" + jar2.getCanonicalPath());
+
+    // List for all members
+    gfshConnector.executeAndVerifyCommand("list deployed");
+    String resultString = gfshConnector.getGfshOutput();
+    assertThat(resultString).contains(server1.getName());
+    assertThat(resultString).contains(server2.getName());
+    assertThat(resultString).contains(jarName1);
+    assertThat(resultString).contains(jarName2);
+
+    // List for members in Group1
+    gfshConnector.executeAndVerifyCommand("list deployed --group=" + GROUP1);
+    resultString = gfshConnector.getGfshOutput();
+    assertThat(resultString).contains(server1.getName());
+    assertThat(resultString).doesNotContain(server2.getName());
+
+    assertThat(resultString).contains(jarName1);
+    assertThat(resultString).doesNotContain(jarName2);
+
+    // List for members in Group2
+    gfshConnector.executeAndVerifyCommand("list deployed --group=" + GROUP2);
+    resultString = gfshConnector.getGfshOutput();
+    assertThat(resultString).doesNotContain(server1.getName());
+    assertThat(resultString).contains(server2.getName());
+
+    assertThat(resultString).doesNotContain(jarName1);
+    assertThat(resultString).contains(jarName2);
+  }
+}

http://git-wip-us.apache.org/repos/asf/geode/blob/c5dd26b7/geode-web/src/test/java/org/apache/geode/management/internal/cli/commands/CommandOverHttpDUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-web/src/test/java/org/apache/geode/management/internal/cli/commands/CommandOverHttpDUnitTest.java b/geode-web/src/test/java/org/apache/geode/management/internal/cli/commands/CommandOverHttpDUnitTest.java
index 7753aaf..e74830c 100644
--- a/geode-web/src/test/java/org/apache/geode/management/internal/cli/commands/CommandOverHttpDUnitTest.java
+++ b/geode-web/src/test/java/org/apache/geode/management/internal/cli/commands/CommandOverHttpDUnitTest.java
@@ -26,7 +26,7 @@ import org.apache.geode.test.junit.runner.SuiteRunner;
 
 @Category({DistributedTest.class, SecurityTest.class})
 @RunWith(SuiteRunner.class)
-@Suite.SuiteClasses({ConfigCommandsDUnitTest.class, DeployCommandsDUnitTest.class,
+@Suite.SuiteClasses({ConfigCommandsDUnitTest.class, DeployWithGroupsDUnitTest.class,
     DiskStoreCommandsDUnitTest.class, FunctionCommandsDUnitTest.class,
     GemfireDataCommandsDUnitTest.class,
     GetCommandOnRegionWithCacheLoaderDuringCacheMissDUnitTest.class,


[21/51] [abbrv] geode git commit: GEODE-3444: Release the locks held when beforeCompletion failed with CommitConflictException

Posted by kl...@apache.org.
GEODE-3444: Release the locks held when beforeCompletion failed with CommitConflictException


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/9b7dd54d
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/9b7dd54d
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/9b7dd54d

Branch: refs/heads/feature/GEODE-1279
Commit: 9b7dd54d6b6798553cffe417344041d1a27918e7
Parents: 6be38ca
Author: eshu <es...@pivotal.io>
Authored: Wed Aug 16 10:24:53 2017 -0700
Committer: eshu <es...@pivotal.io>
Committed: Wed Aug 16 10:24:53 2017 -0700

----------------------------------------------------------------------
 .../src/main/java/org/apache/geode/internal/cache/TXState.java      | 1 +
 1 file changed, 1 insertion(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/9b7dd54d/geode-core/src/main/java/org/apache/geode/internal/cache/TXState.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/TXState.java b/geode-core/src/main/java/org/apache/geode/internal/cache/TXState.java
index 55415e3..b01dacf 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/TXState.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/TXState.java
@@ -1034,6 +1034,7 @@ public class TXState implements TXStateInterface {
 
 
     } catch (CommitConflictException commitConflict) {
+      cleanup();
       this.proxy.getTxMgr().noteCommitFailure(opStart, this.jtaLifeTime, this);
       throw new SynchronizationCommitConflictException(
           LocalizedStrings.TXState_CONFLICT_DETECTED_IN_GEMFIRE_TRANSACTION_0
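
The one-line fix above calls `cleanup()` so that entry locks acquired during conflict detection are released before the exception propagates back to the JTA transaction manager. A minimal, self-contained sketch of that pattern — class and method names here are illustrative, not Geode's real internals:

```java
import java.util.concurrent.locks.ReentrantLock;

public class BeforeCompletionSketch {
    static class CommitConflictException extends RuntimeException {
        CommitConflictException(String msg) { super(msg); }
    }

    final ReentrantLock entryLock = new ReentrantLock();

    // Simulates TXState.beforeCompletion(): reserve entries, then detect conflicts.
    void beforeCompletion(boolean conflictDetected) {
        entryLock.lock(); // locks taken for conflict detection
        if (conflictDetected) {
            cleanup(); // the GEODE-3444 fix: release locks before the exception escapes
            throw new CommitConflictException("conflict detected in transaction");
        }
        // on success, locks stay held until the afterCompletion phase applies the changes
    }

    // Releases any locks still held by this transaction.
    void cleanup() {
        if (entryLock.isHeldByCurrentThread()) {
            entryLock.unlock();
        }
    }

    public static void main(String[] args) {
        BeforeCompletionSketch tx = new BeforeCompletionSketch();
        try {
            tx.beforeCompletion(true);
        } catch (CommitConflictException expected) {
            // without the cleanup() call, entryLock would still be held here
        }
        System.out.println("lock still held after conflict: " + tx.entryLock.isLocked());
        // prints "lock still held after conflict: false"
    }
}
```

Without the `cleanup()` call, a failed `beforeCompletion` would leave the entry locks held, blocking later transactions on the same entries.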


[42/51] [abbrv] geode git commit: GEODE-3395 Variable-ize product version and name in user guide - Developing

Posted by kl...@apache.org.
http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/JTA_transactions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/JTA_transactions.html.md.erb b/geode-docs/developing/transactions/JTA_transactions.html.md.erb
index ffb6082..0dcc4fe 100644
--- a/geode-docs/developing/transactions/JTA_transactions.html.md.erb
+++ b/geode-docs/developing/transactions/JTA_transactions.html.md.erb
@@ -1,6 +1,4 @@
----
-title: JTA Global Transactions with Geode
----
+<% set_title("JTA Global Transactions with", product_name) %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -20,42 +18,42 @@ limitations under the License.
 -->
 
 
-Use JTA global transactions to coordinate Geode cache transactions and JDBC transactions.
+Use JTA global transactions to coordinate <%=vars.product_name%> cache transactions and JDBC transactions.
 
-JTA is a standard Java interface you can use to coordinate Geode cache transactions and JDBC transactions globally under one umbrella. JTA provides direct coordination between the Geode cache and another transactional resource, such as a database. The parties involved in a JTA transaction include:
+JTA is a standard Java interface you can use to coordinate <%=vars.product_name%> cache transactions and JDBC transactions globally under one umbrella. JTA provides direct coordination between the <%=vars.product_name%> cache and another transactional resource, such as a database. The parties involved in a JTA transaction include:
 
 -   The Java application, responsible for starting the global transaction
 -   The JTA transaction manager, responsible for opening, committing, and rolling back transactions
--   The transaction resource managers, including the Geode cache transaction manager and the JDBC resource manager, responsible for managing operations in the cache and database respectively
+-   The transaction resource managers, including the <%=vars.product_name%> cache transaction manager and the JDBC resource manager, responsible for managing operations in the cache and database respectively
 
-Using JTA, your application controls all transactions in the same standard way, whether the transactions act on the Geode cache, a JDBC resource, or both together. When a JTA global transaction is done, the Geode transaction and the database transaction are both complete.
+Using JTA, your application controls all transactions in the same standard way, whether the transactions act on the <%=vars.product_name%> cache, a JDBC resource, or both together. When a JTA global transaction is done, the <%=vars.product_name%> transaction and the database transaction are both complete.
 
-When using JTA global transactions with Geode, you have three options:
+When using JTA global transactions with <%=vars.product_name%>, you have three options:
 
 1.  Coordinate with an external JTA transaction manager in a container (such as WebLogic or JBoss)
-2.  Set Geode as the “last resource” while using a container (such as WebLogic or JBoss) as the JTA transaction manager
-3.  Have Geode act as the JTA transaction manager
+2.  Set <%=vars.product_name%> as the “last resource” while using a container (such as WebLogic or JBoss) as the JTA transaction manager
+3.  Have <%=vars.product_name%> act as the JTA transaction manager
 
-An application creates a global transaction by using `javax.transaction.UserTransaction` bound to the JNDI context `java:/UserTransaction` to start and terminate transactions. During the transaction, cache operations are done through Geode as usual as described in [Geode Cache Transactions](cache_transactions.html#topic_e15_mr3_5k).
+An application creates a global transaction by using `javax.transaction.UserTransaction` bound to the JNDI context `java:/UserTransaction` to start and terminate transactions. During the transaction, cache operations are done through <%=vars.product_name%> as usual as described in [<%=vars.product_name%> Cache Transactions](cache_transactions.html#topic_e15_mr3_5k).
 
 **Note:**
 See the Sun documentation for more information on topics such as JTA, `javax.transaction`, committing and rolling back global transactions, and the related exceptions.
 
 -   **[Coordinating with External JTA Transactions Managers](#concept_cp1_zx1_wk)**
 
-    Geode can work with the JTA transaction managers of several containers like JBoss, WebLogic, GlassFish, and so on.
+    <%=vars.product_name%> can work with the JTA transaction managers of several containers like JBoss, WebLogic, GlassFish, and so on.
 
--   **[Using Geode as the "Last Resource" in a Container-Managed JTA Transaction](#concept_csy_vfb_wk)**
+-   **[Using <%=vars.product_name%> as the "Last Resource" in a Container-Managed JTA Transaction](#concept_csy_vfb_wk)**
 
-    The "last resource" feature in certain 3rd party containers such as WebLogic allow the use one non-XAResource (such as Geode) in a transaction with multiple XAResources while ensuring consistency.
+    The "last resource" feature in certain third-party containers such as WebLogic allows the use of one non-XAResource (such as <%=vars.product_name%>) in a transaction with multiple XAResources while ensuring consistency.
 
--   **[Using Geode as the JTA Transaction Manager](#concept_8567sdkbigige)**
+-   **[Using <%=vars.product_name%> as the JTA Transaction Manager](#concept_8567sdkbigige)**
 
-    You can also use Geode as the JTA transaction manager.
+    You can also use <%=vars.product_name%> as the JTA transaction manager.
 
--   **[Behavior of Geode Cache Writers and Loaders Under JTA](cache_plugins_with_jta.html)**
+-   **[Behavior of <%=vars.product_name%> Cache Writers and Loaders Under JTA](cache_plugins_with_jta.html)**
 
-    When Geode participates in a global transactions, you can still have Geode cache writers and cache loaders operating in the usual way.
+    When <%=vars.product_name%> participates in a global transaction, you can still have <%=vars.product_name%> cache writers and cache loaders operating in the usual way.
 
 -   **[Turning Off JTA Transactions](turning_off_jta.html)**
 
@@ -65,31 +63,31 @@ See the Sun documentation for more information on topics such as JTA, `javax.tra
 
 # Coordinating with External JTA Transactions Managers
 
-Geode can work with the JTA transaction managers of several containers like JBoss, WebLogic, GlassFish, and so on.
+<%=vars.product_name%> can work with the JTA transaction managers of several containers like JBoss, WebLogic, GlassFish, and so on.
 
-At startup Geode looks for a TransactionManager (`javax.transaction.TransactionManager`) that has been bound to its JNDI context. When Geode finds such an external transaction manager, all Geode region operations (such as get and put) will participate in global transactions hosted by this external JTA transaction manager.
+At startup <%=vars.product_name%> looks for a TransactionManager (`javax.transaction.TransactionManager`) that has been bound to its JNDI context. When <%=vars.product_name%> finds such an external transaction manager, all <%=vars.product_name%> region operations (such as get and put) will participate in global transactions hosted by this external JTA transaction manager.
 
-This figure shows the high-level operation of a JTA global transaction whose resources include a Geode cache and a database.
+This figure shows the high-level operation of a JTA global transaction whose resources include a <%=vars.product_name%> cache and a database.
 
 <img src="../../images/transactions_jta_app_server.png" id="concept_cp1_zx1_wk__image_C2935E48415349659FC39BF5C7E75579" class="image" />
 
 An externally coordinated JTA global transaction is run in the following manner:
 
-1.  Each region operation looks up for presence of a global transaction. If one is detected, then a Geode transaction is started automatically, and we register a `javax.transaction.Synchronization` callback with the external JTA transaction manager.
-2.  At transaction commit, Geode gets a `beforeCommit()` callback from the external JTA transaction manager. Geode does all locking and conflict detection at this time. If this fails, an exception is thrown back to JTA transaction manager, which then aborts the transaction.
+1.  Each region operation checks for the presence of a global transaction. If one is detected, a <%=vars.product_name%> transaction is started automatically, and a `javax.transaction.Synchronization` callback is registered with the external JTA transaction manager.
+2.  At transaction commit, <%=vars.product_name%> gets a `beforeCommit()` callback from the external JTA transaction manager. <%=vars.product_name%> does all locking and conflict detection at this time. If this fails, an exception is thrown back to the JTA transaction manager, which then aborts the transaction.
 3.  After a successful `beforeCommit()` callback, the JTA transaction manager asks the other data sources to commit their transactions.
-4.  Geode then gets a `afterCommit()` callback in which changes are applied to the cache and distributed to other members.
+4.  <%=vars.product_name%> then gets an `afterCommit()` callback, in which changes are applied to the cache and distributed to other members.
 
 You can disable JTA in any region that should not participate in JTA transactions. See [Turning Off JTA Transactions](turning_off_jta.html#concept_nw2_5gs_xk).
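+
+As a minimal `cache.xml` sketch of what disabling looks like (the region name is illustrative), set the `ignore-jta` region attribute to exclude a region from JTA transactions:
+
+``` pre
+<region name="exampleRegion">
+  <region-attributes ignore-jta="true"/>
+</region>
+```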
 
 ## <a id="task_j3g_3mn_1l" class="no-quick-link"></a>How to Run a JTA Transaction Coordinated by an External Transaction Manager
 
-Use the following procedure to run a Geode global JTA transaction coordinated by an external JTA transaction manager.
+Use the following procedure to run a <%=vars.product_name%> global JTA transaction coordinated by an external JTA transaction manager.
 
 1.  **Configure the external data sources in the external container.** Do not configure the data sources in `cache.xml`. They are not guaranteed to get bound to the JNDI tree.
 2.  
 
-    Configure Geode for any necessary transactional behavior in the `cache.xml` file. For example, enable `copy-on-read` and specify a transaction listener, as needed. See [Setting Global Copy on Read](working_with_transactions.html#concept_vx2_gs4_5k) and [Configuring Transaction Plug-In Event Handlers](working_with_transactions.html#concept_ocw_vf1_wk) for details. 
+    Configure <%=vars.product_name%> for any necessary transactional behavior in the `cache.xml` file. For example, enable `copy-on-read` and specify a transaction listener, as needed. See [Setting Global Copy on Read](working_with_transactions.html#concept_vx2_gs4_5k) and [Configuring Transaction Plug-In Event Handlers](working_with_transactions.html#concept_ocw_vf1_wk) for details. 
 3.  
 
     Make sure that JTA transactions are enabled for the regions that will participate in the transaction. See [Turning Off JTA Transactions](turning_off_jta.html#concept_nw2_5gs_xk) for details. 
@@ -98,7 +96,7 @@ Use the following procedure to run a Geode global JTA transaction coordinated by
      Start the transaction through the external container. 
 5.  
 
-    Initialize the Geode cache. Geode will automatically join the transaction. 
+    Initialize the <%=vars.product_name%> cache. <%=vars.product_name%> will automatically join the transaction. 
 6.  
 
      Execute operations in the cache and the database as usual. 
@@ -108,22 +106,22 @@ Use the following procedure to run a Geode global JTA transaction coordinated by
 
 <a id="concept_csy_vfb_wk"></a>
 
-# Using Geode as the "Last Resource" in a Container-Managed JTA Transaction
+# Using <%=vars.product_name%> as the "Last Resource" in a Container-Managed JTA Transaction
 
-The "last resource" feature in certain 3rd party containers such as WebLogic allow the use one non-XAResource (such as Geode) in a transaction with multiple XAResources while ensuring consistency.
+The "last resource" feature in certain third-party containers, such as WebLogic, allows the use of one non-XAResource (such as <%=vars.product_name%>) in a transaction with multiple XAResources while ensuring consistency.
 
-In the previous two JTA transaction use cases, if the Geode member fails after the other data sources commit but before Geode receives the `afterCommit` callback, Geode and the other data sources may become inconsistent. To prevent this from occurring, you can use the container's "last resource optimization" feature, with Geode set as the "last resource". Using Geode as the last resource ensures that in the event of failure, Geode remains consistent with the other XAResources involved in the transaction.
+In the previous two JTA transaction use cases, if the <%=vars.product_name%> member fails after the other data sources commit but before <%=vars.product_name%> receives the `afterCommit` callback, <%=vars.product_name%> and the other data sources may become inconsistent. To prevent this from occurring, you can use the container's "last resource optimization" feature, with <%=vars.product_name%> set as the "last resource". Using <%=vars.product_name%> as the last resource ensures that in the event of failure, <%=vars.product_name%> remains consistent with the other XAResources involved in the transaction.
 
-To accomplish this, the application server container must use a JCA Resource Adapter to accomodate Geode as the transaction's last resource. The transaction manager of the container first issues a "prepare" message to the participating XAResources. If the XAResources all accept the transaction, then the manager issues a "commit" instruction to the non-XAResource (in this case, Geode). The non-XAResource (in this case, Geode) participates as a local transaction resource. If the non-XAResource fails, then the transaction manager can rollback the XAResources.
+To accomplish this, the application server container must use a JCA Resource Adapter to accommodate <%=vars.product_name%> as the transaction's last resource. The transaction manager of the container first issues a "prepare" message to the participating XAResources. If the XAResources all accept the transaction, then the manager issues a "commit" instruction to the non-XAResource (in this case, <%=vars.product_name%>), which participates as a local transaction resource. If the non-XAResource fails, the transaction manager can roll back the XAResources.
 
 <img src="../../images/transactions_jca_adapter.png" id="concept_csy_vfb_wk__image_opb_sgb_wk" class="image" />
 
 <a id="task_sln_x3b_wk"></a>
 
-# How to Run JTA Transactions with Geode as a "Last Resource"
+# How to Run JTA Transactions with <%=vars.product_name%> as a "Last Resource"
 
 1.  Locate the version-specific `geode-jca` RAR file within 
-the `lib` directory of your Geode installation. 
+the `lib` directory of your <%=vars.product_name%> installation. 
 2.  Add your container-specific XML file to the `geode-jca` RAR file. 
 <ol>
 <li>Create a container-specific resource adapter XML file named &lt;container&gt;-ra.xml. For example, an XML file for a WebLogic resource adapter XML file might look something like this:
@@ -158,7 +156,7 @@ the CLASSPATH of the JTA transaction coordinator container.
 4.  Deploy the version-specific `geode-jca` RAR file on 
 the JTA transaction coordinator container.
 When deploying the file, you specify the JNDI name and so on. 
-5.  Configure Geode for any necessary transactional behavior. Enable `copy-on-read` and specify a transaction listener, if you need one. See [Setting Global Copy on Read](working_with_transactions.html#concept_vx2_gs4_5k) and [Configuring Transaction Plug-In Event Handlers](working_with_transactions.html#concept_ocw_vf1_wk) for details.
+5.  Configure <%=vars.product_name%> for any necessary transactional behavior. Enable `copy-on-read` and specify a transaction listener, if you need one. See [Setting Global Copy on Read](working_with_transactions.html#concept_vx2_gs4_5k) and [Configuring Transaction Plug-In Event Handlers](working_with_transactions.html#concept_ocw_vf1_wk) for details.
 6.  Get an initial context through `org.apache.geode.cache.GemFireCache.getJNDIContext`. For example:
 
     ``` pre
@@ -167,13 +165,13 @@ When deploying the file, you specify the JNDI name and so on.
 
     This returns `javax.naming.Context` and gives you the JNDI associated with the cache. The context contains the `TransactionManager`, `UserTransaction`, and any configured JDBC resource manager.
 
-7.  Start and commit the global transaction using the `UserTransaction` object rather than with Geode's `CacheTransactionManager`. 
+7.  Start and commit the global transaction using the `UserTransaction` object rather than with <%=vars.product_name%>'s `CacheTransactionManager`. 
 
     ``` pre
     UserTransaction txManager = (UserTransaction)ctx.lookup("java:/UserTransaction");
     ```
 
-8.  Obtain a Geode connection.
+8.  Obtain a <%=vars.product_name%> connection.
 
     ``` pre
     GFConnectionFactory cf = (GFConnectionFactory) ctx.lookup("gfe/jca");
@@ -187,40 +185,40 @@ When deploying the file, you specify the JNDI name and so on.
 
 See [JCA Resource Adapter Example](jca_adapter_example.html#concept_swv_z2p_wk) for an example of how to set up a transaction using the JCA Resource Adapter.
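+
+Assuming an initialized cache and the JNDI names shown above (`java:/UserTransaction` and `gfe/jca`), steps 6 through 8 reduce to a sketch along these lines; imports and error handling are omitted, and the cache and database operations are placeholders:
+
+``` pre
+// Get the JNDI context associated with the cache (step 6).
+Context ctx = cache.getJNDIContext();
+
+// Start the global transaction through the container (step 7).
+UserTransaction utx = (UserTransaction) ctx.lookup("java:/UserTransaction");
+utx.begin();
+
+// Obtain the connection that enlists the cache as the last resource (step 8).
+GFConnectionFactory cf = (GFConnectionFactory) ctx.lookup("gfe/jca");
+GFConnection gemfireConn = (GFConnection) cf.getConnection();
+
+// ... perform cache and database operations as usual ...
+
+gemfireConn.close();
+utx.commit();
+```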
 
-## <a id="concept_8567sdkbigige" class="no-quick-link"></a>Using Geode as the JTA Transaction Manager
+## <a id="concept_8567sdkbigige" class="no-quick-link"></a>Using <%=vars.product_name%> as the JTA Transaction Manager
 
-You can also use Geode as the JTA transaction manager.
-As of Geode 1.2, Geode's JTA transaction manager is deprecated.
+You can also use <%=vars.product_name%> as the JTA transaction manager.
+As of <%=vars.product_name%> 1.2, <%=vars.product_name%>'s JTA transaction manager is deprecated.
 
-Geode ships with its own implementation of a JTA transaction manager. However, note that this implementation is not XA-compliant; therefore, it does not persist any state, which could lead to an inconsistent state after recovering a crashed member.
+<%=vars.product_name%> ships with its own implementation of a JTA transaction manager. However, note that this implementation is not XA-compliant; therefore, it does not persist any state, which could lead to an inconsistent state after recovering a crashed member.
 
 <img src="../../images/transactions_jta.png" id="concept_8567sdkbigige__image_C8D94070E55F4BCC8B5FF3D5BEBA99ED" class="image" />
 
-The Geode JTA transaction manager is initialized when the Geode cache is initialized. Until then, JTA is not available for use. The application starts a JTA transaction by using the `UserTransaction.begin` method. The `UserTransaction` object is the application’s handle to instruct the JTA transaction manager on what to do.
+The <%=vars.product_name%> JTA transaction manager is initialized when the <%=vars.product_name%> cache is initialized. Until then, JTA is not available for use. The application starts a JTA transaction by using the `UserTransaction.begin` method. The `UserTransaction` object is the application’s handle to instruct the JTA transaction manager on what to do.
 
-The Geode JTA implementation also supports the J2EE Connector Architecture (JCA) `ManagedConnectionFactory`.
+The <%=vars.product_name%> JTA implementation also supports the J2EE Connector Architecture (JCA) `ManagedConnectionFactory`.
 
-The Geode implementation of JTA has the following limitations:
+The <%=vars.product_name%> implementation of JTA has the following limitations:
 
 -   Only one JDBC database instance per transaction is allowed, although you can have multiple connections to that database.
 -   Multiple threads cannot participate in a transaction.
 -   Transaction recovery after a crash is not supported.
 
-In addition, JTA transactions are subject to the limitations of Geode cache transactions such as not being supported on regions with global scope. When a global transaction needs to access the Geode cache, JTA silently starts a Geode cache transaction.
+In addition, JTA transactions are subject to the limitations of <%=vars.product_name%> cache transactions such as not being supported on regions with global scope. When a global transaction needs to access the <%=vars.product_name%> cache, JTA silently starts a <%=vars.product_name%> cache transaction.
 
 <a id="task_qjv_khb_wk"></a>
 
-# How to Run a JTA Global Transaction Using Geode as the JTA Transaction Manager
+# How to Run a JTA Global Transaction Using <%=vars.product_name%> as the JTA Transaction Manager
 
-This topic describes how to run a JTA global transaction in Geode .
+This topic describes how to run a JTA global transaction in <%=vars.product_name%>.
 
 To run a global transaction, perform the following steps:
 
 1. Configure the external data sources in the `cache.xml` file. See [Configuring Database Connections Using JNDI](configuring_db_connections_using_JNDI.html#topic_A5E3A67C808D48C08E1F0DC167C5C494) for examples. 
 2. Include the JAR file for any data sources in your CLASSPATH. 
-3.  Configure Geode for any necessary transactional behavior. Enable `copy-on-read` for your cache and specify a transaction listener, if you need one. See [Setting Global Copy on Read](working_with_transactions.html#concept_vx2_gs4_5k) and [Configuring Transaction Plug-In Event Handlers](working_with_transactions.html#concept_ocw_vf1_wk) for details. 
+3.  Configure <%=vars.product_name%> for any necessary transactional behavior. Enable `copy-on-read` for your cache and specify a transaction listener, if you need one. See [Setting Global Copy on Read](working_with_transactions.html#concept_vx2_gs4_5k) and [Configuring Transaction Plug-In Event Handlers](working_with_transactions.html#concept_ocw_vf1_wk) for details. 
 4.  Make sure that JTA transactions are not disabled in the `cache.xml` file or the application code. 
-5.  Initialize the Geode cache. 
+5.  Initialize the <%=vars.product_name%> cache. 
 6.  Get an initial context through `org.apache.geode.cache.GemFireCache.getJNDIContext`. For example: 
 
     ``` pre
@@ -236,11 +234,11 @@ To run a global transaction, perform the following steps:
     ```
 
     With `UserTransaction`, you can begin, commit, and rollback transactions.
-    If a global transaction exists when you use the cache, it automatically joins the transaction. Operations on a region automatically detect and become associated with the existing global transaction through JTA synchronization. If the global transaction has been marked for rollback, however, the Geode cache is not allowed to enlist with that transaction. Any cache operation that causes an attempt to enlist throws a `FailedSynchronizationException`.
+    If a global transaction exists when you use the cache, it automatically joins the transaction. Operations on a region automatically detect and become associated with the existing global transaction through JTA synchronization. If the global transaction has been marked for rollback, however, the <%=vars.product_name%> cache is not allowed to enlist with that transaction. Any cache operation that causes an attempt to enlist throws a `FailedSynchronizationException`.
 
-    The Geode cache transaction’s commit or rollback is triggered when the global transaction commits or rolls back. When the global transaction is committed using the `UserTransaction` interface, the transactions of any registered JTA resources are committed, including the Geode cache transaction. If the cache or database transaction fails to commit, the `UserTransaction` call throws a `TransactionRolledBackException`. If a commit or rollback is attempted directly on a Geode transaction that is registered with JTA, that action throws an `IllegalStateException`.
+    The <%=vars.product_name%> cache transaction’s commit or rollback is triggered when the global transaction commits or rolls back. When the global transaction is committed using the `UserTransaction` interface, the transactions of any registered JTA resources are committed, including the <%=vars.product_name%> cache transaction. If the cache or database transaction fails to commit, the `UserTransaction` call throws a `TransactionRolledBackException`. If a commit or rollback is attempted directly on a <%=vars.product_name%> transaction that is registered with JTA, that action throws an `IllegalStateException`.
 
-See [Geode JTA Transaction Example](transaction_jta_gemfire_example.html#concept_ffg_sj5_1l).
+See [<%=vars.product_name%> JTA Transaction Example](transaction_jta_gemfire_example.html#concept_ffg_sj5_1l).
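+
+As a condensed, hedged sketch of the procedure (the region name and values are illustrative; imports and exception handling are omitted):
+
+``` pre
+// Get the JNDI context from the initialized cache (step 6).
+Context ctx = cache.getJNDIContext();
+
+// Look up the UserTransaction and begin the global transaction.
+UserTransaction utx = (UserTransaction) ctx.lookup("java:/UserTransaction");
+utx.begin();
+
+// Region operations detect the global transaction and join it automatically.
+Region<String, String> region = cache.getRegion("exampleRegion");
+region.put("key1", "value1");
+
+// Committing the UserTransaction also commits the cache transaction;
+// a failure in any registered resource rolls the transaction back.
+utx.commit();
+```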
 
 -   **[Configuring Database Connections Using JNDI](configuring_db_connections_using_JNDI.html)**
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/about_transactions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/about_transactions.html.md.erb b/geode-docs/developing/transactions/about_transactions.html.md.erb
index c0e3261..bc9e371 100644
--- a/geode-docs/developing/transactions/about_transactions.html.md.erb
+++ b/geode-docs/developing/transactions/about_transactions.html.md.erb
@@ -22,26 +22,26 @@ limitations under the License.
 <a id="topic_jbt_2y4_wk"></a>
 
 
-This section covers the features of Geode transactions.
+This section covers the features of <%=vars.product_name%> transactions.
 
-Geode transactions provide the following features:
+<%=vars.product_name%> transactions provide the following features:
 
 -   Basic transaction properties: atomicity, consistency, isolation, and durability
--   Rollback and commit operations along with standard Geode cache operations
+-   Rollback and commit operations along with standard <%=vars.product_name%> cache operations
 -   Ability to suspend and resume transactions
 -   High concurrency and high performance
 -   Transaction statistics gathering and archiving
--   Compatibility with Java Transaction API (JTA) transactions, using either Geode JTA or a third-party implementation
--   Ability to use Geode as a “last resource” in JTA transactions with multiple data sources to guarantee transactional consistency
+-   Compatibility with Java Transaction API (JTA) transactions, using either <%=vars.product_name%> JTA or a third-party implementation
+-   Ability to use <%=vars.product_name%> as a “last resource” in JTA transactions with multiple data sources to guarantee transactional consistency
 
 ## Types of Transactions
 
-Geode supports two kinds of transactions: **Geode cache transactions** and **JTA global transactions**.
+<%=vars.product_name%> supports two kinds of transactions: **<%=vars.product_name%> cache transactions** and **JTA global transactions**.
 
-Geode cache transactions are used to group the execution of cache operations and to gain the control offered by transactional commit and rollback. Applications create cache transactions by using an instance of the Geode `CacheTransactionManager`. During a transaction, cache operations are performed and distributed through Geode as usual. See [Geode Cache Transactions](cache_transactions.html#topic_e15_mr3_5k) for details on Geode cache transactions and how these transactions work.
+<%=vars.product_name%> cache transactions are used to group the execution of cache operations and to gain the control offered by transactional commit and rollback. Applications create cache transactions by using an instance of the <%=vars.product_name%> `CacheTransactionManager`. During a transaction, cache operations are performed and distributed through <%=vars.product_name%> as usual. See [<%=vars.product_name%> Cache Transactions](cache_transactions.html#topic_e15_mr3_5k) for details on <%=vars.product_name%> cache transactions and how these transactions work.
 
-JTA global transactions allow you to use the standard JTA interface to coordinate Geode transactions with JDBC transactions. When performing JTA global transactions, you have the option of using Geode’s own implementation of JTA or a third party’s implementation (typically application servers such as WebLogic or JBoss) of JTA. In addition, some third party JTA implementations allow you to set Geode as a “last resource” to ensure transactional consistency across data sources in the event that Geode or another data source becomes unavailable. For global transactions, applications use `java:/UserTransaction` to start and terminate transactions while Geode cache operations are performed in the same manner as regular Geode cache transactions. See [JTA Global Transactions with Geode](JTA_transactions.html) for details on JTA Global transactions.
+JTA global transactions allow you to use the standard JTA interface to coordinate <%=vars.product_name%> transactions with JDBC transactions. When performing JTA global transactions, you have the option of using <%=vars.product_name%>’s own implementation of JTA or a third party’s implementation (typically application servers such as WebLogic or JBoss) of JTA. In addition, some third party JTA implementations allow you to set <%=vars.product_name%> as a “last resource” to ensure transactional consistency across data sources in the event that <%=vars.product_name%> or another data source becomes unavailable. For global transactions, applications use `java:/UserTransaction` to start and terminate transactions while <%=vars.product_name%> cache operations are performed in the same manner as regular <%=vars.product_name%> cache transactions. See [JTA Global Transactions with <%=vars.product_name%>](JTA_transactions.html) for details on JTA Global transactions.
 
-You can also coordinate a Geode cache transaction with an external database by specifying database operations within cache and transaction application plug-ins (CacheWriters/CacheListeners and TransactionWriters/TransactionListeners.) This is an alternative to using JTA transactions. See [How to Run a Geode Cache Transaction that Coordinates with an External Database](run_a_cache_transaction_with_external_db.html#task_sdn_2qk_2l).
+You can also coordinate a <%=vars.product_name%> cache transaction with an external database by specifying database operations within cache and transaction application plug-ins (CacheWriters/CacheListeners and TransactionWriters/TransactionListeners.) This is an alternative to using JTA transactions. See [How to Run a <%=vars.product_name%> Cache Transaction that Coordinates with an External Database](run_a_cache_transaction_with_external_db.html#task_sdn_2qk_2l).
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/cache_plugins_with_jta.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/cache_plugins_with_jta.html.md.erb b/geode-docs/developing/transactions/cache_plugins_with_jta.html.md.erb
index 7735bf0..7df64bc 100644
--- a/geode-docs/developing/transactions/cache_plugins_with_jta.html.md.erb
+++ b/geode-docs/developing/transactions/cache_plugins_with_jta.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Behavior of Geode Cache Writers and Loaders Under JTA
----
+<% set_title("Behavior of", product_name, "Cache Writers and Loaders Under JTA") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -19,10 +17,10 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-When Geode participates in a global transactions, you can still have Geode cache writers and cache loaders operating in the usual way.
+When <%=vars.product_name%> participates in a global transaction, you can still have <%=vars.product_name%> cache writers and cache loaders operating in the usual way.
 
 For example, in addition to the transactional connection to the database, the region could also have a cache writer and cache loader configured to exchange data with that same database. As long as the data source is transactional, which means that it can detect the transaction manager, the cache writer and cache loader participate in the transaction. If the JTA rolls back its transaction, the changes made by the cache loader and the cache writer are rolled back. For more on transactional data sources, see the discussion of XAPooledDataSource and ManagedDataSource in [Configuring Database Connections Using JNDI](configuring_db_connections_using_JNDI.html#topic_A5E3A67C808D48C08E1F0DC167C5C494).
 
-If you are using a Geode cache or transaction listener with global transactions, be aware that the EntryEvent returned by a transaction has the Geode transaction ID, not the JTA transaction ID.
+If you are using a <%=vars.product_name%> cache or transaction listener with global transactions, be aware that the EntryEvent returned by a transaction has the <%=vars.product_name%> transaction ID, not the JTA transaction ID.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/cache_transaction_performance.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/cache_transaction_performance.html.md.erb b/geode-docs/developing/transactions/cache_transaction_performance.html.md.erb
index 5f25453..97134bc 100644
--- a/geode-docs/developing/transactions/cache_transaction_performance.html.md.erb
+++ b/geode-docs/developing/transactions/cache_transaction_performance.html.md.erb
@@ -23,7 +23,7 @@ Cache transaction performance can vary depending on the type of regions you are
 
 The most common region configurations for use with transactions are distributed replicated and partitioned:
 
--   Replicated regions are better suited for running transactions on small to mid-size data sets. To ensure all or nothing behavior, at commit time, distributed transactions use the global reservation system of the Geode distributed lock service. This works well as long as the data set is reasonably small.
+-   Replicated regions are better suited for running transactions on small to mid-size data sets. To ensure all or nothing behavior, at commit time, distributed transactions use the global reservation system of the <%=vars.product_name%> distributed lock service. This works well as long as the data set is reasonably small.
 -   Partitioned regions are the right choice for highly-performant, scalable operations. Transactions on partitioned regions use only local locking, and only send messages to the redundant data stores at commit time. Because of this, these transactions perform much better than distributed transactions. There are no global locks, so partitioned transactions are extremely scalable as well.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/cache_transactions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/cache_transactions.html.md.erb b/geode-docs/developing/transactions/cache_transactions.html.md.erb
index 7e00e42..8b5d0f6 100644
--- a/geode-docs/developing/transactions/cache_transactions.html.md.erb
+++ b/geode-docs/developing/transactions/cache_transactions.html.md.erb
@@ -1,6 +1,4 @@
----
-title: Geode Cache Transactions
----
+<% set_title(product_name, "Cache Transactions") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -22,30 +20,30 @@ limitations under the License.
 <a id="topic_e15_mr3_5k"></a>
 
 
-Use Geode cache transactions to group the execution of cache operations and to gain the control offered by transactional commit and rollback. Geode cache transactions control operations within the Geode cache while the Geode distributed system handles data distribution in the usual way.
+Use <%=vars.product_name%> cache transactions to group the execution of cache operations and to gain the control offered by transactional commit and rollback. <%=vars.product_name%> cache transactions control operations within the <%=vars.product_name%> cache while the <%=vars.product_name%> distributed system handles data distribution in the usual way.
 
--   **[Cache Transaction Performance](../../developing/transactions/cache_transaction_performance.html)**
+-   **[Cache Transaction Performance](cache_transaction_performance.html)**
 
     Cache transaction performance can vary depending on the type of regions you are using.
 
--   **[Data Location for Cache Transactions](../../developing/transactions/data_location_cache_transactions.html)**
+-   **[Data Location for Cache Transactions](data_location_cache_transactions.html)**
 
     The location where you can run your transaction depends on where you are storing your data.
 
--   **[How to Run a Geode Cache Transaction](../../developing/transactions/run_a_cache_transaction.html)**
+-   **[How to Run a <%=vars.product_name%> Cache Transaction](run_a_cache_transaction.html)**
 
-    This topic describes how to run a Geode cache transaction.
+    This topic describes how to run a <%=vars.product_name%> cache transaction.
 
--   **[How to Run a Geode Cache Transaction that Coordinates with an External Database](../../developing/transactions/run_a_cache_transaction_with_external_db.html)**
+-   **[How to Run a <%=vars.product_name%> Cache Transaction that Coordinates with an External Database](run_a_cache_transaction_with_external_db.html)**
 
-    Coordinate a Geode cache transaction with an external database by using CacheWriter/CacheListener and TransactionWriter/TransactionListener plug-ins, **to provide an alternative to using JTA transactions**.
+    Coordinate a <%=vars.product_name%> cache transaction with an external database by using CacheWriter/CacheListener and TransactionWriter/TransactionListener plug-ins, **to provide an alternative to using JTA transactions**.
 
--   **[Working with Geode Cache Transactions](../../developing/transactions/working_with_transactions.html)**
+-   **[Working with <%=vars.product_name%> Cache Transactions](working_with_transactions.html)**
 
-    This section contains guidelines and additional information on working with Geode and its cache transactions.
+    This section contains guidelines and additional information on working with <%=vars.product_name%> and its cache transactions.
 
--   **[How Geode Cache Transactions Work](../../developing/transactions/how_cache_transactions_work.html#topic_fls_1j1_wk)**
+-   **[How <%=vars.product_name%> Cache Transactions Work](how_cache_transactions_work.html#topic_fls_1j1_wk)**
 
-    This section provides an explanation of how transactions work on Geode caches.
+    This section provides an explanation of how transactions work on <%=vars.product_name%> caches.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/cache_transactions_by_region_type.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/cache_transactions_by_region_type.html.md.erb b/geode-docs/developing/transactions/cache_transactions_by_region_type.html.md.erb
index 550d755..7811bcb 100644
--- a/geode-docs/developing/transactions/cache_transactions_by_region_type.html.md.erb
+++ b/geode-docs/developing/transactions/cache_transactions_by_region_type.html.md.erb
@@ -21,7 +21,7 @@ limitations under the License.
 <a id="topic_nlq_sk1_wk"></a>
 
 
-A transaction is managed on a per-cache basis, so multiple regions in the cache can participate in a single transaction. The data scope of a Geode cache transaction is the cache that hosts the transactional data. For partitioned regions, this may be a remote host to the one running the transaction application. Any transaction that includes one or more partitioned regions is run on the member storing the primary copy of the partitioned region data. Otherwise, the transaction host is the same one running the application.
+A transaction is managed on a per-cache basis, so multiple regions in the cache can participate in a single transaction. The data scope of a <%=vars.product_name%> cache transaction is the cache that hosts the transactional data. For partitioned regions, this may be a host other than the one running the transaction application. Any transaction that includes one or more partitioned regions is run on the member storing the primary copy of the partitioned region data. Otherwise, the transaction host is the same one running the application.
 
 -   The client executing the transaction code is called the transaction initiator.
 
@@ -83,7 +83,7 @@ The region’s scope affects how data is distributed during the commit phase. Tr
 Transactions on non-replicated regions (regions that use the old API with DataPolicy EMPTY, NORMAL, and PRELOADED) always run as transaction initiators, and the transaction data host is always a member with a replicated region. This is similar to the way transactions using the PARTITION\_PROXY shortcut are forwarded to the members hosting the primary bucket.
 
 **Note:**
-When you have transactions operating on EMPTY, NORMAL or PARTITION regions, make sure that the Geode property `conserve-sockets` is set to false to avoid distributed deadlocks. An empty region is a region created with the API `RegionShortcut.REPLICATE_PROXY` or a region with that uses the old API of `DataPolicy` set to `EMPTY`.
+When you have transactions operating on EMPTY, NORMAL or PARTITION regions, make sure that the <%=vars.product_name%> property `conserve-sockets` is set to false to avoid distributed deadlocks. An empty region is a region created with the API `RegionShortcut.REPLICATE_PROXY` or a region that uses the old API with `DataPolicy` set to `EMPTY`.
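For reference, the property named above can be set in `gemfire.properties` (a sketch; where the file lives depends on how you start your members):

``` pre
# gemfire.properties -- avoid distributed deadlocks for transactions
# operating on EMPTY, NORMAL, or PARTITION regions
conserve-sockets=false
```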
 
 ## Conflicting Transactions in Distributed-Ack Regions
 
@@ -97,7 +97,7 @@ In this series of figures, even after the commit operation is launched, the tran
 
 <img src="../../images_svg/transactions_replicate_2.svg" id="concept_nl5_pk1_wk__image_sbh_21k_54" class="image" />
 
-**Step 3:** Changes are in transit. T1 commits and its changes are merged into the local cache. The commit does not complete until Geode distributes the changes to the remote regions and acknowledgment is received.
+**Step 3:** Changes are in transit. T1 commits and its changes are merged into the local cache. The commit does not complete until <%=vars.product_name%> distributes the changes to the remote regions and acknowledgment is received.
 
 <img src="../../images_svg/transactions_replicate_3.svg" id="concept_nl5_pk1_wk__image_qgl_k1k_54" class="image" />
 
@@ -113,7 +113,7 @@ These figures show how using the no-ack scope can produce unexpected results. Th
 
 <img src="../../images_svg/transactions_replicate_1.svg" id="concept_nl5_pk1_wk__image_jn2_cbk_54" class="image" />
 
-**Step 2:** Changes are in transit. Transactions T1 and T2 commit and merge their changes into the local cache. Geode then distributes changes to the remote regions.
+**Step 2:** Changes are in transit. Transactions T1 and T2 commit and merge their changes into the local cache. <%=vars.product_name%> then distributes changes to the remote regions.
 
 <img src="../../images_svg/transactions_replicate_no_ack_1.svg" id="concept_nl5_pk1_wk__image_fk1_hbk_54" class="image" />
 
@@ -129,18 +129,18 @@ When encountering conflicts with local scope, the first transaction to start the
 ## Transactions and Persistent Regions
 <a id="concept_omy_341_wk"></a>
 
-By default, Geode does not allow transactions on persistent regions. You can enable the use of transactions on persistent regions by setting the property `gemfire.ALLOW_PERSISTENT_TRANSACTIONS` to true. This may also be accomplished at server startup using gfsh:
+By default, <%=vars.product_name%> does not allow transactions on persistent regions. You can enable the use of transactions on persistent regions by setting the property `gemfire.ALLOW_PERSISTENT_TRANSACTIONS` to true. This may also be accomplished at server startup using gfsh:
 
 ``` pre
 gfsh start server --name=server1 --dir=server1_dir \
 --J=-Dgemfire.ALLOW_PERSISTENT_TRANSACTIONS=true 
 ```
 
-Since Geode does not provide atomic disk persistence guarantees, the default behavior is to disallow disk-persistent regions from participating in transactions. However, when choosing to enable transactions on persistent regions, consider the following:
+Since <%=vars.product_name%> does not provide atomic disk persistence guarantees, the default behavior is to disallow disk-persistent regions from participating in transactions. However, when choosing to enable transactions on persistent regions, consider the following:
 
--   Geode does ensure atomicity for in-memory updates.
+-   <%=vars.product_name%> does ensure atomicity for in-memory updates.
 -   When a member fails and cannot complete the logic triggered by a transaction (including subsequent disk writes), that member is removed from the distributed system and, if restarted, must rebuild its state from surviving nodes that successfully completed the updates.
--   The chances of multiple nodes failing to complete the disk writes that result from a transaction commit due to nodes crashing for unrelated reasons are small. The real risk is that the file system buffers holding the persistent updates do not get written to disk in the case of operating system or hardware failure. If only the Geode process crashes, atomicity still exists. The overall risk of losing disk updates can also be mitigated by enabling synchronized disk file mode for the disk stores, but this incurs a high performance penalty.
+-   The chances of multiple nodes failing to complete the disk writes that result from a transaction commit due to nodes crashing for unrelated reasons are small. The real risk is that the file system buffers holding the persistent updates do not get written to disk in the case of operating system or hardware failure. If only the <%=vars.product_name%> process crashes, atomicity still exists. The overall risk of losing disk updates can also be mitigated by enabling synchronized disk file mode for the disk stores, but this incurs a high performance penalty.
 
 To mitigate the risk of data not getting fully written to disk on all copies of the participating persistent disk stores:
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/chapter_overview.html.md.erb b/geode-docs/developing/transactions/chapter_overview.html.md.erb
index defcf4b..0f2dc37 100644
--- a/geode-docs/developing/transactions/chapter_overview.html.md.erb
+++ b/geode-docs/developing/transactions/chapter_overview.html.md.erb
@@ -19,27 +19,27 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode provides a transactions API, with `begin`, `commit`, and `rollback` methods. These methods are much the same as the familiar relational database transactions methods.
+<%=vars.product_name%> provides a transactions API, with `begin`, `commit`, and `rollback` methods. These methods are much the same as the familiar relational database transaction methods.
 
--   **[About Transactions](../../developing/transactions/about_transactions.html)**
+-   **[About Transactions](about_transactions.html)**
 
-    This section covers the features of Geode transactions.
-It also details the two kinds of transaction that Geode supports:
-**Geode cache transactions** and **JTA global transactions**.
+    This section covers the features of <%=vars.product_name%> transactions.
+It also details the two kinds of transaction that <%=vars.product_name%> supports:
+**<%=vars.product_name%> cache transactions** and **JTA global transactions**.
 
--   **[Geode Cache Transactions](../../developing/transactions/cache_transactions.html)**
+-   **[<%=vars.product_name%> Cache Transactions](cache_transactions.html)**
 
-    Use Geode cache transactions to group the execution of cache operations and to gain the control offered by transactional commit and rollback. Geode cache transactions control operations within the Geode cache while the Geode distributed system handles data distribution in the usual way.
+    Use <%=vars.product_name%> cache transactions to group the execution of cache operations and to gain the control offered by transactional commit and rollback. <%=vars.product_name%> cache transactions control operations within the <%=vars.product_name%> cache while the <%=vars.product_name%> distributed system handles data distribution in the usual way.
 
--   **[JTA Global Transactions with Geode](../../developing/transactions/JTA_transactions.html)**
+-   **[JTA Global Transactions with <%=vars.product_name%>](JTA_transactions.html)**
 
-    Use JTA global transactions to coordinate Geode cache transactions and JDBC transactions.
+    Use JTA global transactions to coordinate <%=vars.product_name%> cache transactions and JDBC transactions.
 
--   **[Monitoring and Troubleshooting Transactions](../../developing/transactions/monitor_troubleshoot_transactions.html)**
+-   **[Monitoring and Troubleshooting Transactions](monitor_troubleshoot_transactions.html)**
 
-    This topic covers errors that may occur when running transactions in Geode.
+    This topic covers errors that may occur when running transactions in <%=vars.product_name%>.
 
--   **[Transaction Coding Examples](../../developing/transactions/transaction_coding_examples.html)**
+-   **[Transaction Coding Examples](transaction_coding_examples.html)**
 
     This section provides several code examples for writing and executing transactions.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/client_server_transactions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/client_server_transactions.html.md.erb b/geode-docs/developing/transactions/client_server_transactions.html.md.erb
index 727683a..4781bd7f 100644
--- a/geode-docs/developing/transactions/client_server_transactions.html.md.erb
+++ b/geode-docs/developing/transactions/client_server_transactions.html.md.erb
@@ -20,19 +20,19 @@ limitations under the License.
 -->
 
 
-The syntax for writing client transactions is the same on the Java client as with any other Geode member, but the underlying behavior in a client-run transaction is different from general transaction behavior.
+The syntax for writing client transactions is the same on the Java client as with any other <%=vars.product_name%> member, but the underlying behavior in a client-run transaction is different from general transaction behavior.
 
-For general information about running a transaction, refer to [How to Run a Geode Cache Transaction](run_a_cache_transaction.html#task_f15_mr3_5k).
+For general information about running a transaction, refer to [How to Run a <%=vars.product_name%> Cache Transaction](run_a_cache_transaction.html#task_f15_mr3_5k).
 
--   **[How Geode Runs Client Transactions](../../developing/transactions/client_server_transactions.html#how_gemfire_runs_clients)**
+-   **[How <%=vars.product_name%> Runs Client Transactions](client_server_transactions.html#how_gemfire_runs_clients)**
 
--   **[Client Cache Access During a Transaction](../../developing/transactions/client_server_transactions.html#client_cache_access)**
+-   **[Client Cache Access During a Transaction](client_server_transactions.html#client_cache_access)**
 
--   **[Client Transactions and Client Application Plug-Ins](../../developing/transactions/client_server_transactions.html#client_app_plugins)**
+-   **[Client Transactions and Client Application Plug-Ins](client_server_transactions.html#client_app_plugins)**
 
--   **[Client Transaction Failures](../../developing/transactions/client_server_transactions.html#client_transaction_failures)**
+-   **[Client Transaction Failures](client_server_transactions.html#client_transaction_failures)**
 
-## <a id="how_gemfire_runs_clients" class="no-quick-link"></a>How Geode Runs Client Transactions
+## <a id="how_gemfire_runs_clients" class="no-quick-link"></a>How <%=vars.product_name%> Runs Client Transactions
 
 When a client performs a transaction, the transaction is delegated to a server that acts as the transaction initiator in the server system. As with regular, non-client transactions, this server delegate may or may not be the transaction host.
 
@@ -42,7 +42,7 @@ In this figure, the application code on the client makes changes to data entries
 
 ## <a id="client_cache_access" class="no-quick-link"></a>Client Cache Access During a Transaction
 
-To maintain cache consistency, Geode blocks access to the local client cache during a transaction. The local client cache may reflect information inconsistent with the transaction in progress. When the transaction completes, the local cache is accessible again.
+To maintain cache consistency, <%=vars.product_name%> blocks access to the local client cache during a transaction. The local client cache may reflect information inconsistent with the transaction in progress. When the transaction completes, the local cache is accessible again.
 
 ## <a id="client_app_plugins" class="no-quick-link"></a>Client Transactions and Client Application Plug-Ins
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/configuring_db_connections_using_JNDI.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/configuring_db_connections_using_JNDI.html.md.erb b/geode-docs/developing/transactions/configuring_db_connections_using_JNDI.html.md.erb
index 4a08b6a..f58d04e 100644
--- a/geode-docs/developing/transactions/configuring_db_connections_using_JNDI.html.md.erb
+++ b/geode-docs/developing/transactions/configuring_db_connections_using_JNDI.html.md.erb
@@ -31,9 +31,9 @@ The following are a list of `DataSource` connection types used in JTA transactio
 -   **PooledDataSource**. Pooled SQL connections.
 -   **SimpleDataSource**. Single SQL connection. No pooling of SQL connections is done. Connections are generated on the fly and cannot be reused.
 
-The `jndi-name` attribute of the `jndi-binding` element is the key binding parameter. If the value of `jndi-name` is a DataSource, it is bound as `java:/`*myDatabase*, where *myDatabase* is the name you assign to your data source. If the data source cannot be bound to JNDI at runtime, Geode logs a warning. For information on the `DataSource` interface, see: [http://docs.oracle.com/javase/8/docs/api/javax/sql/DataSource.html](http://docs.oracle.com/javase/8/docs/api/javax/sql/DataSource.html)
+The `jndi-name` attribute of the `jndi-binding` element is the key binding parameter. If the value of `jndi-name` is a DataSource, it is bound as `java:/`*myDatabase*, where *myDatabase* is the name you assign to your data source. If the data source cannot be bound to JNDI at runtime, <%=vars.product_name%> logs a warning. For information on the `DataSource` interface, see: [http://docs.oracle.com/javase/8/docs/api/javax/sql/DataSource.html](http://docs.oracle.com/javase/8/docs/api/javax/sql/DataSource.html)
 
-Geode supports JDBC 2.0 and 3.0.
+<%=vars.product_name%> supports JDBC 2.0 and 3.0.
 
 **Note:**
 Include any data source JAR files in your CLASSPATH.
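A hypothetical `cache.xml` fragment that binds a pooled data source might look like the following sketch (the driver class, connection URL, and credentials are placeholders):

``` pre
<cache>
  <jndi-bindings>
    <jndi-binding jndi-name="myDatabase"
                  type="PooledDataSource"
                  jdbc-driver-class="org.postgresql.Driver"
                  user-name="dbuser"
                  password="dbpassword"
                  connection-url="jdbc:postgresql://localhost:5432/mydb">
    </jndi-binding>
  </jndi-bindings>
</cache>
```

With this binding in place, the data source is retrievable through JNDI as `java:/myDatabase`.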

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/how_cache_transactions_work.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/how_cache_transactions_work.html.md.erb b/geode-docs/developing/transactions/how_cache_transactions_work.html.md.erb
index 4cb0473..c7bca5b 100644
--- a/geode-docs/developing/transactions/how_cache_transactions_work.html.md.erb
+++ b/geode-docs/developing/transactions/how_cache_transactions_work.html.md.erb
@@ -1,6 +1,4 @@
----
-title: How Geode Cache Transactions Work
----
+<% set_title("How", product_name, "Cache Transactions Work") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -21,21 +19,21 @@ limitations under the License.
 <a id="topic_fls_1j1_wk"></a>
 
 
-This section provides an explanation of how transactions work on Geode caches.
+This section provides an explanation of how transactions work on <%=vars.product_name%> caches.
 
-All the regions in a Geode member cache can participate in a transaction. A Java application can operate on the cache using multiple transactions. A transaction is associated with only one thread, and a thread can operate on only one transaction at a time. Child threads do not inherit existing transactions.
+All the regions in a <%=vars.product_name%> member cache can participate in a transaction. A Java application can operate on the cache using multiple transactions. A transaction is associated with only one thread, and a thread can operate on only one transaction at a time. Child threads do not inherit existing transactions.
 
--   **[Transaction View](../../developing/transactions/how_cache_transactions_work.html#concept_hls_1j1_wk)**
+-   **[Transaction View](#concept_hls_1j1_wk)**
 
--   **[Committing Transactions](../../developing/transactions/how_cache_transactions_work.html#concept_sbj_lj1_wk)**
+-   **[Committing Transactions](#concept_sbj_lj1_wk)**
 
--   **[Transactions by Region Type](../../developing/transactions/cache_transactions_by_region_type.html#topic_nlq_sk1_wk)**
+-   **[Transactions by Region Type](cache_transactions_by_region_type.html#topic_nlq_sk1_wk)**
 
--   **[Client Transactions](../../developing/transactions/client_server_transactions.html)**
+-   **[Client Transactions](client_server_transactions.html)**
 
--   **[Comparing Transactional and Non-Transactional Operations](../../developing/transactions/transactional_and_nontransactional_ops.html#transactional_and_nontransactional_ops)**
+-   **[Comparing Transactional and Non-Transactional Operations](transactional_and_nontransactional_ops.html#transactional_and_nontransactional_ops)**
 
--   **[Geode Cache Transaction Semantics](../../developing/transactions/transaction_semantics.html)**
+-   **[<%=vars.product_name%> Cache Transaction Semantics](transaction_semantics.html)**
 
 ## Transaction View
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/jca_adapter_example.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/jca_adapter_example.html.md.erb b/geode-docs/developing/transactions/jca_adapter_example.html.md.erb
index 409b93e..1c7b420 100644
--- a/geode-docs/developing/transactions/jca_adapter_example.html.md.erb
+++ b/geode-docs/developing/transactions/jca_adapter_example.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-This example shows how to use the JCA Resource Adapter in Geode .
+This example shows how to use the JCA Resource Adapter in <%=vars.product_name%>.
 
 ``` pre
 Hashtable env = new Hashtable();

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/monitor_troubleshoot_transactions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/monitor_troubleshoot_transactions.html.md.erb b/geode-docs/developing/transactions/monitor_troubleshoot_transactions.html.md.erb
index 7956cac..b2ba4df 100644
--- a/geode-docs/developing/transactions/monitor_troubleshoot_transactions.html.md.erb
+++ b/geode-docs/developing/transactions/monitor_troubleshoot_transactions.html.md.erb
@@ -19,14 +19,14 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-This topic covers errors that may occur when running transactions in Geode.
+This topic covers errors that may occur when running transactions in <%=vars.product_name%>.
 
 <a id="monitor_troubleshoot_transactions__section_881D2FF6761B4D689DDB46C650E2A2E1"></a>
-Unlike database transactions, Geode does not write a transaction log to disk. To get the full details about committed operations, use a transaction listener to monitor the transaction events and their contained cache events for each of your transactions.
+Unlike database transactions, <%=vars.product_name%> does not write a transaction log to disk. To get the full details about committed operations, use a transaction listener to monitor the transaction events and their contained cache events for each of your transactions.
 
 ## <a id="monitor_troubleshoot_transactions__section_2B66338C851A4FF386B60CC5CF4DCF77" class="no-quick-link"></a>Statistics on Cache Transactions
 
-During the operation of Geode cache transactions, if statistics are enabled, transaction-related statistics are calculated and accessible from the CachePerfStats statistic resource. Because the transaction’s data scope is the cache, these statistics are collected on a per-cache basis.
+During the operation of <%=vars.product_name%> cache transactions, if statistics are enabled, transaction-related statistics are calculated and accessible from the CachePerfStats statistic resource. Because the transaction’s data scope is the cache, these statistics are collected on a per-cache basis.
 
 ## <a id="monitor_troubleshoot_transactions__section_EA9277E6CFD7423F95BA4D04955FDE2A" class="no-quick-link"></a>Commit
 
@@ -38,7 +38,7 @@ A transaction can create data beyond the capacity limit set in the region’s ev
 
 ## <a id="monitor_troubleshoot_transactions__section_C7588E4F143B4D7FAFAEDCF5AE4FF2C8" class="no-quick-link"></a>Interaction with the Resource Manager
 
-The Geode resource manager, which controls overall heap use, either allows all transactional operations or blocks the entire transaction. If a cache reaches the critical threshold in the middle of a commit, the commit is allowed to finish before the manager starts blocking operations.
+The <%=vars.product_name%> resource manager, which controls overall heap use, either allows all transactional operations or blocks the entire transaction. If a cache reaches the critical threshold in the middle of a commit, the commit is allowed to finish before the manager starts blocking operations.
 
 ## <a id="monitor_troubleshoot_transactions__section_8942ABA6F23C4ED58877C894B13F4F21" class="no-quick-link"></a>Transaction Exceptions
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/run_a_cache_transaction.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/run_a_cache_transaction.html.md.erb b/geode-docs/developing/transactions/run_a_cache_transaction.html.md.erb
index 7ec2be6..90b1183 100644
--- a/geode-docs/developing/transactions/run_a_cache_transaction.html.md.erb
+++ b/geode-docs/developing/transactions/run_a_cache_transaction.html.md.erb
@@ -1,6 +1,4 @@
----
-title: How to Run a Geode Cache Transaction
----
+<% set_title("How to Run a", product_name, "Cache Transaction") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -21,9 +19,9 @@ limitations under the License.
 <a id="task_f15_mr3_5k"></a>
 
 
-This topic describes how to run a Geode cache transaction.
+This topic describes how to run a <%=vars.product_name%> cache transaction.
 
-Applications manage transactions on a per-cache basis. A Geode cache transaction starts with a `CacheTransactionManager.begin` method and continues with a series of operations, which are typically region operations such as region create, update, clear and destroy. The begin, commit, and rollback are directly controlled by the application. A commit, failed commit, or voluntary rollback by the transaction manager ends the transaction.
+Applications manage transactions on a per-cache basis. A <%=vars.product_name%> cache transaction starts with a `CacheTransactionManager.begin` method and continues with a series of operations, which are typically region operations such as region create, update, clear, and destroy. The begin, commit, and rollback are directly controlled by the application. A commit, failed commit, or voluntary rollback by the transaction manager ends the transaction.
 
 You can run transactions on any type of cache region except regions with **global** scope. An operation attempted on a region with global scope throws an `UnsupportedOperationException` exception.
 
@@ -38,10 +36,10 @@ This discussion centers on transactions on replicated and partitioned regions. I
     |---------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
     | **replicated regions**                                                                      | Use `distributed-ack` scope. The region shortcuts specifying `REPLICATE` use `distributed-ack` scope. This is particularly important if you have more than one data producer. With one data producer, you can safely use `distributed-no-ack`.                                                                                                                                                                        |
     | **partitioned regions**                                                                     | Custom partition and colocate data among regions so all the data for any single transaction is hosted by a single member. If the transaction is run from a member other than the one hosting the data, the transaction will run by proxy in the member hosting the data. The partitioned region must be defined for the application that runs the transaction, but the data can be hosted in a remote member. |
-    | **persistent regions**                                                                      | Configure Geode to allow transactions on persistent regions. By default, the configuration does not allow transactions on persistent regions. Enable the use of transactions on persistent regions by setting the property `gemfire.ALLOW_PERSISTENT_TRANSACTIONS` to true.                                                                                              |
+    | **persistent regions**                                                                      | Configure <%=vars.product_name%> to allow transactions on persistent regions. By default, the configuration does not allow transactions on persistent regions. Enable the use of transactions on persistent regions by setting the property `gemfire.ALLOW_PERSISTENT_TRANSACTIONS` to true.                                                                                              |
     | **a mix of partitioned and replicated regions**                                             | Make sure any replicated region involved in the transaction is hosted on every member that hosts the partitioned region data. All data for a single transaction must reside within a single host.                                                                                                                                                                                                             |
+    | **delta propagation**                                                                       | Set the region attribute `cloning-enabled` to true. This lets <%=vars.product_name%> do conflict checks at commit time. Without this, the transaction will throw an `UnsupportedOperationInTransactionException` exception.                                                                                                                                                                      |
-    | **global JTA transactions with only Geode cache transactions** | Set the region attribute `ignore-jta` to true for all regions that you do *not* want to participate in JTA global transactions. It is false by default. For instructions on how to run a JTA global transaction, see [JTA Global Transactions with Geode](JTA_transactions.html).   |
+    | **delta propagation**                                                                       | Set the region attribute `cloning-enabled` to true. This lets <%=vars.product_name%> do conflict checks at commit time. Without this, the transaction will throw an `UnsupportedOperationInTransactionException ` exception.                                                                                                                                                                      |
+    | **global JTA transactions with only <%=vars.product_name%> cache transactions** | Set the region attribute `ignore-jta` to true for all regions that you do *not* want to participate in JTA global transactions. It is false by default. For instructions on how to run a JTA global transaction, see [JTA Global Transactions with <%=vars.product_name%>](JTA_transactions.html).   |
 
 3. **Update your cache event handler and transaction event handler implementations to handle your transactions.** 
     Cache event handlers may be used with transactions. Cache listeners are called after the commit, instead of after each cache operation, and the cache listeners receive conflated transaction events. Cache writers and loaders are called as usual, at the time of the operation.
@@ -85,6 +83,6 @@ This discussion centers on transactions on replicated and partitioned regions. I
 5. **Review all of your code for compatibility with transactions.** 
     When you commit a transaction, while the commit is in process, the changes are visible in the distributed cache. This provides better performance than locking everything involved with the transaction updates, but it means that another process accessing data used in the transaction might get some data in the pre-transaction state and some in the post-transaction state.
 
-    For example, suppose keys 1 and 2 are modified within a transaction, such that both values change from A to B. In another thread, it is possible to read key 1 with value B and key 2 with value A, after the commit begins, but before the commit completes. This is possible due to the nature of Geode reads. This choice sacrifices atomic visibility in favor of performance; reads do not block writes, and writes do not block reads.
+    For example, suppose keys 1 and 2 are modified within a transaction, such that both values change from A to B. In another thread, it is possible to read key 1 with value B and key 2 with value A, after the commit begins, but before the commit completes. This is possible due to the nature of <%=vars.product_name%> reads. This choice sacrifices atomic visibility in favor of performance; reads do not block writes, and writes do not block reads.
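    One practical consequence for application code is that a commit can fail with a conflict. A small retry loop is a common hedge (a sketch, assuming a `CacheTransactionManager` named `txManager` and a `Region<String, Integer>` named `region` are already in hand):

    ``` pre
    int retries = 3;
    while (retries-- > 0) {
      txManager.begin();
      try {
        Integer v = region.get("key1");
        region.put("key1", (v == null ? 0 : v) + 1);
        txManager.commit();  // throws CommitConflictException on conflict
        break;               // success
      } catch (CommitConflictException e) {
        // state was rolled back automatically; loop to retry
      }
    }
    ```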
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/run_a_cache_transaction_with_external_db.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/run_a_cache_transaction_with_external_db.html.md.erb b/geode-docs/developing/transactions/run_a_cache_transaction_with_external_db.html.md.erb
index 16a1397..40cf1a1 100644
--- a/geode-docs/developing/transactions/run_a_cache_transaction_with_external_db.html.md.erb
+++ b/geode-docs/developing/transactions/run_a_cache_transaction_with_external_db.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  How to Run a Geode Cache Transaction that Coordinates with an External Database
----
+<% set_title("How to Run a", product_name, "Cache Transaction that Coordinates with an External Database") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -19,9 +17,9 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Coordinate a Geode cache transaction with an external database by using CacheWriter/CacheListener and TransactionWriter/TransactionListener plug-ins, **to provide an alternative to using JTA transactions**.
+Coordinate a <%=vars.product_name%> cache transaction with an external database by using CacheWriter/CacheListener and TransactionWriter/TransactionListener plug-ins, **to provide an alternative to using JTA transactions**.
 
-There are a few things you should be careful about while working with Geode cache transactions and external databases:
+There are a few things you should be careful about while working with <%=vars.product_name%> cache transactions and external databases:
 
 -   When you set up the JDBC connection, make sure that auto-commit is disabled. For example, in Java:
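    The snippet referenced here boils down to disabling auto-commit on the JDBC `Connection`. A minimal sketch (the helper name is illustrative, not part of any Geode API):

    ```java
    import java.sql.Connection;
    import java.sql.SQLException;

    // Disable auto-commit so the external database transaction is demarcated
    // explicitly, alongside the Geode cache transaction, instead of committing
    // after every statement.
    public class JdbcTxSetup {
      public static void prepareForManualCommit(Connection conn) throws SQLException {
        conn.setAutoCommit(false); // commit/rollback will be issued explicitly
      }
    }
    ```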
 
@@ -40,15 +38,15 @@ There are a few things you should be careful about while working with Geode cach
     max_prepared_transactions = 1 # 1 or more enables, zero (default) disables this feature.
     ```
 
-Use the following procedure to write a Geode cache transaction that coordinates with an external database:
+Use the following procedure to write a <%=vars.product_name%> cache transaction that coordinates with an external database:
 
-1.  Configure Geode regions as necessary as described in [How to Run a Geode Cache Transaction](run_a_cache_transaction.html#task_f15_mr3_5k).
+1.  Configure <%=vars.product_name%> regions as necessary as described in [How to Run a <%=vars.product_name%> Cache Transaction](run_a_cache_transaction.html#task_f15_mr3_5k).
 2.  Begin the transaction.
 3.  If you have not already committed a previous transaction in this connection, start a database transaction by issuing a BEGIN statement.
-4.  Perform Geode cache operations; each cache operation invokes the CacheWriter. Implement the CacheWriter to perform the corresponding external database operations.
+4.  Perform <%=vars.product_name%> cache operations; each cache operation invokes the CacheWriter. Implement the CacheWriter to perform the corresponding external database operations.
 5.  Commit the transaction.
     At this point, the TransactionWriter is invoked. The TransactionWriter returns a TransactionEvent, which contains all the operations in the transaction. Call PREPARE TRANSACTION within your TransactionWriter code.
 
-6.  After a transaction is successfully committed in Geode, the TransactionListener is invoked. The TransactionListener calls COMMIT PREPARED to commit the database transaction.
+6.  After a transaction is successfully committed in <%=vars.product_name%>, the TransactionListener is invoked. The TransactionListener calls COMMIT PREPARED to commit the database transaction.
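
The ordering in steps 5 and 6 can be sketched as follows. This is a simplified model only: the interfaces and the `FakeDb` recorder are hypothetical stand-ins, not the real `org.apache.geode.cache` callback API, but the sequence (mirror the operations, `PREPARE TRANSACTION` in the writer, `COMMIT PREPARED` in the listener) follows the procedure above.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical simplifications of the Geode callback interfaces:
interface TransactionWriter { void beforeCommit(List<String> events) throws Exception; }
interface TransactionListener { void afterCommit(List<String> events); }

// Records the SQL issued to the "external database" instead of opening a real JDBC connection.
class FakeDb {
  final List<String> statements = new ArrayList<>();
  void execute(String sql) { statements.add(sql); }
}

public class TwoPhaseSketch {
  public static FakeDb run() throws Exception {
    FakeDb db = new FakeDb();
    String gid = "geode-tx-1"; // illustrative global transaction id shared with the database

    // Step 5: the TransactionWriter mirrors the cache operations to the database,
    // then prepares the database transaction before the cache commit completes.
    TransactionWriter writer = events -> {
      for (String op : events) db.execute(op);
      db.execute("PREPARE TRANSACTION '" + gid + "'");
    };

    // Step 6: after the cache commit succeeds, the listener commits the prepared
    // database transaction.
    TransactionListener listener = events -> db.execute("COMMIT PREPARED '" + gid + "'");

    List<String> txEvents = List.of("UPDATE accounts SET balance = 90 WHERE id = 1");
    writer.beforeCommit(txEvents);   // invoked at commit time
    listener.afterCommit(txEvents);  // invoked after a successful commit
    return db;
  }
}
```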
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/transaction_coding_examples.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/transaction_coding_examples.html.md.erb b/geode-docs/developing/transactions/transaction_coding_examples.html.md.erb
index 26aac45..bb13deb 100644
--- a/geode-docs/developing/transactions/transaction_coding_examples.html.md.erb
+++ b/geode-docs/developing/transactions/transaction_coding_examples.html.md.erb
@@ -21,24 +21,24 @@ limitations under the License.
 
 This section provides several code examples for writing and executing transactions.
 
--   **[Basic Transaction Example](../../developing/transactions/transactions_overview.html)**
+-   **[Basic Transaction Example](transactions_overview.html)**
 
     This example operates on two replicated regions. It begins a transaction, updates one entry in each region, and commits the result.
 
--   **[Basic Suspend and Resume Transaction Example](../../developing/transactions/transaction_suspend_resume_example.html)**
+-   **[Basic Suspend and Resume Transaction Example](transaction_suspend_resume_example.html)**
 
     This example suspends and resumes a transaction.
 
--   **[Transaction Embedded within a Function Example](../../developing/transactions/transactional_function_example.html)**
+-   **[Transaction Embedded within a Function Example](transactional_function_example.html)**
 
     This example demonstrates a function that does transactional updates to Customer and Order regions.
 
--   **[Geode JTA Transaction Example](../../developing/transactions/transaction_jta_gemfire_example.html)**
+-   **[<%=vars.product_name%> JTA Transaction Example](transaction_jta_gemfire_example.html)**
 
-    An example code fragment shows how to run a JTA global transaction using Geode as the JTA transaction manager.
+    An example code fragment shows how to run a JTA global transaction using <%=vars.product_name%> as the JTA transaction manager.
 
--   **[JCA Resource Adapter Example](../../developing/transactions/jca_adapter_example.html)**
+-   **[JCA Resource Adapter Example](jca_adapter_example.html)**
 
-    This example shows how to use the JCA Resource Adapter in Geode .
+    This example shows how to use the JCA Resource Adapter in <%=vars.product_name%>.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/transaction_event_management.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/transaction_event_management.html.md.erb b/geode-docs/developing/transactions/transaction_event_management.html.md.erb
index 9ec6b82..e9d84a9 100644
--- a/geode-docs/developing/transactions/transaction_event_management.html.md.erb
+++ b/geode-docs/developing/transactions/transaction_event_management.html.md.erb
@@ -46,11 +46,11 @@ results in these events stored in the CacheEvent list:
 
 # At commit and after commit
 
-When the transaction is committed, Geode passes the `TransactionEvent` to the transaction writer local to the transactional view, if one is available. After commit, Geode :
+When the transaction is committed, <%=vars.product_name%> passes the `TransactionEvent` to the transaction writer local to the transactional view, if one is available. After commit, <%=vars.product_name%>:
     -   Passes the `TransactionEvent` to each installed transaction listener.
     -   Walks the `CacheEvent` list, calling all locally installed listeners for each operation listed.
     -   Distributes the `TransactionEvent` to all interested caches.
         **Note:**
-        For Geode and global JTA transactions, the `EntryEvent` object contains the Geode transaction ID. JTA transaction events do not contain the JTA transaction ID.
+        For <%=vars.product_name%> and global JTA transactions, the `EntryEvent` object contains the <%=vars.product_name%> transaction ID. JTA transaction events do not contain the JTA transaction ID.
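
The conflation of the `CacheEvent` list walked above (listeners see one event per entry, carrying only the final value) can be modeled in a few lines. This sketch models the behavior only; it is not the Geode event API.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ConflationSketch {
  // ops: {key, value} pairs in operation order; a later op on a key replaces the earlier one.
  public static List<String> conflate(List<String[]> ops) {
    Map<String, String> latest = new LinkedHashMap<>(); // keeps first-seen key order
    for (String[] op : ops) {
      latest.put(op[0], op[1]); // conflation: only the final value per key survives
    }
    List<String> events = new ArrayList<>();
    for (Map.Entry<String, String> e : latest.entrySet()) {
      events.add(e.getKey() + "=" + e.getValue());
    }
    return events;
  }
}
```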
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/transaction_jta_gemfire_example.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/transaction_jta_gemfire_example.html.md.erb b/geode-docs/developing/transactions/transaction_jta_gemfire_example.html.md.erb
index 131d164..8f0b1ad 100644
--- a/geode-docs/developing/transactions/transaction_jta_gemfire_example.html.md.erb
+++ b/geode-docs/developing/transactions/transaction_jta_gemfire_example.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Geode JTA Transaction Example
----
+<% set_title(product_name, "JTA Transaction Example") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -19,7 +17,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-An example code fragment shows how to run a JTA global transaction using Geode as the JTA transaction manager.
+An example code fragment shows how to run a JTA global transaction using <%=vars.product_name%> as the JTA transaction manager.
 
 The external data sources used in this transaction are configured in the `cache.xml` file. See [Configuring Database Connections Using JNDI](configuring_db_connections_using_JNDI.html#topic_A5E3A67C808D48C08E1F0DC167C5C494) for a configuration example.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/transaction_semantics.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/transaction_semantics.html.md.erb b/geode-docs/developing/transactions/transaction_semantics.html.md.erb
index 9a2e21e..3df3f20 100644
--- a/geode-docs/developing/transactions/transaction_semantics.html.md.erb
+++ b/geode-docs/developing/transactions/transaction_semantics.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Geode Cache Transaction Semantics
----
+<% set_title(product_name, "Cache Transaction Semantics") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -19,13 +17,13 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode transaction semantics differ in some ways from the Atomicity-Consistency-Isolation-Durability (ACID) semantics of traditional relational databases. For performance reasons, Geode transactions do not adhere to ACID constraints by default, but can be configured for ACID support as described in this section.
+<%=vars.product_name%> transaction semantics differ in some ways from the Atomicity-Consistency-Isolation-Durability (ACID) semantics of traditional relational databases. For performance reasons, <%=vars.product_name%> transactions do not adhere to ACID constraints by default, but can be configured for ACID support as described in this section.
 
 ## <a id="transaction_semantics__section_8362ACD06C784B5BBB0B7E986F760169" class="no-quick-link"></a>Atomicity
 
 Atomicity is “all or nothing” behavior: a transaction completes successfully only when all of the operations it contains complete successfully. If problems occur during a transaction, perhaps due to other transactions with overlapping changes, the transaction cannot successfully complete until the problems are resolved.
 
-Geode transactions provide atomicity and realize speed by using a reservation system, instead of using the traditional relational database technique of a two-phase locking of rows. The reservation prevents other, intersecting transactions from completing, allowing the commit to check for conflicts and to reserve resources in an all-or-nothing fashion prior to making changes to the data. After all changes have been made, locally and remotely, the reservation is released. With the reservation system, an intersecting transaction is simply discarded. The serialization of obtaining locks is avoided. See [Committing Transactions](how_cache_transactions_work.html#concept_sbj_lj1_wk) for details on the two-phase commit protocol that implements the reservation system.
+<%=vars.product_name%> transactions provide atomicity and realize speed by using a reservation system, instead of using the traditional relational database technique of a two-phase locking of rows. The reservation prevents other, intersecting transactions from completing, allowing the commit to check for conflicts and to reserve resources in an all-or-nothing fashion prior to making changes to the data. After all changes have been made, locally and remotely, the reservation is released. With the reservation system, an intersecting transaction is simply discarded. The serialization of obtaining locks is avoided. See [Committing Transactions](how_cache_transactions_work.html#concept_sbj_lj1_wk) for details on the two-phase commit protocol that implements the reservation system.
 
 ## <a id="transaction_semantics__section_7C287DA4A5134780B3199CE074E3F890" class="no-quick-link"></a>Consistency
 
@@ -33,9 +31,9 @@ Consistency requires that data written within a transaction must observe the key
 
 ## <a id="transaction_semantics__section_126A24EC499D4CF39AE766A0B526A9A5" class="no-quick-link"></a>Isolation
 
-Isolation assures that operations will see either the pre-transaction state of the system or its post-transaction state, but not the transitional state that occurs while a transaction is in progress. Write operations in a transaction are always confirmed to ensure that stale values are not written. As a distributed cache-based system optimized for performance, Geode in its default configuration does not enforce read isolation. Geode transactions support repeatable read isolation, so once the committed value is read for a given key, it always returns that same value. If a transaction write, such as put or invalidate, deletes a value for a key that has already been read, subsequent reads return the transactional reference.
+Isolation assures that operations will see either the pre-transaction state of the system or its post-transaction state, but not the transitional state that occurs while a transaction is in progress. Write operations in a transaction are always confirmed to ensure that stale values are not written. As a distributed cache-based system optimized for performance, <%=vars.product_name%> in its default configuration does not enforce read isolation. <%=vars.product_name%> transactions support repeatable read isolation, so once the committed value is read for a given key, it always returns that same value. If a transaction write, such as put or invalidate, deletes a value for a key that has already been read, subsequent reads return the transactional reference.
 
-In the default configuration, Geode isolates transactions at the process thread level, so while a transaction is in progress, its changes are visible only inside the thread that is running the transaction. Threads inside the same process and in other processes cannot see changes until after the commit operation begins. At this point, the changes are visible in the cache, but other threads that access the changing data might see only partial results of the transaction leading to a dirty read.
+In the default configuration, <%=vars.product_name%> isolates transactions at the process thread level, so while a transaction is in progress, its changes are visible only inside the thread that is running the transaction. Threads inside the same process and in other processes cannot see changes until after the commit operation begins. At this point, the changes are visible in the cache, but other threads that access the changing data might see only partial results of the transaction, leading to a dirty read.
 
 If an application requires the slower conventional isolation model (such that dirty reads of transitional states are not allowed), read operations must be encapsulated within transactions and the `gemfire.detectReadConflicts` parameter must be set to ‘true’:
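
 A hedged sketch of enabling this stricter mode: the `gemfire.detectReadConflicts` parameter is normally supplied on the JVM command line (`-Dgemfire.detectReadConflicts=true`) before the member starts; setting it programmatically here is for illustration only.

 ```java
 public class ReadConflictConfig {
   // Enable read-conflict detection and report whether the property took effect.
   public static boolean enableReadConflictDetection() {
     System.setProperty("gemfire.detectReadConflicts", "true");
     return Boolean.getBoolean("gemfire.detectReadConflicts"); // reads the system property back
   }
 }
 ```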
 
@@ -45,7 +43,7 @@ This parameter causes read operations to succeed only when they read a consisten
 
 ## <a id="transaction_semantics__section_F092E368724945BCBF8E5DCB36B97EB4" class="no-quick-link"></a>Durability
 
-Relational databases provide durability by using disk storage for recovery and transaction logging. As a distributed cache-based system optimized for performance, Geode does not support on-disk or in-memory durability for transactions.
+Relational databases provide durability by using disk storage for recovery and transaction logging. As a distributed cache-based system optimized for performance, <%=vars.product_name%> does not support on-disk or in-memory durability for transactions.
 
 Applications can emulate the conventional disk-based durability model by setting the `gemfire.ALLOW_PERSISTENT_TRANSACTIONS` parameter to ‘true’.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/transactional_and_nontransactional_ops.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/transactional_and_nontransactional_ops.html.md.erb b/geode-docs/developing/transactions/transactional_and_nontransactional_ops.html.md.erb
index dc9f198..7cda91f 100644
--- a/geode-docs/developing/transactions/transactional_and_nontransactional_ops.html.md.erb
+++ b/geode-docs/developing/transactions/transactional_and_nontransactional_ops.html.md.erb
@@ -20,7 +20,7 @@ limitations under the License.
 -->
 
 
-Between the begin operation and the commit or rollback operation are a series of ordinary Geode operations. When they are launched from within a transaction, the Geode operations can be classified into two types:
+Between the begin operation and the commit or rollback operation are a series of ordinary <%=vars.product_name%> operations. When they are launched from within a transaction, the <%=vars.product_name%> operations can be classified into two types:
 
 -   Transactional operations affect the transactional view
 -   Non-transactional operations do not affect the transactional view


[09/51] [abbrv] geode git commit: GEODE-3423: Have Gradle set LOCAL_USER_ID

Posted by kl...@apache.org.
GEODE-3423: Have Gradle set LOCAL_USER_ID

- This is needed because Jenkins' Gradle job doesn't seem to provide the
  ability to pass environment variables in.


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/d295876d
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/d295876d
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/d295876d

Branch: refs/heads/feature/GEODE-1279
Commit: d295876d601300e52515193efcf5fd8549f10dbb
Parents: a600068
Author: Jens Deppe <jd...@pivotal.io>
Authored: Tue Aug 15 07:57:00 2017 -0700
Committer: Jens Deppe <jd...@pivotal.io>
Committed: Tue Aug 15 07:57:00 2017 -0700

----------------------------------------------------------------------
 gradle/docker.gradle | 38 ++++++++++++++++++++++++++++++++------
 1 file changed, 32 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/d295876d/gradle/docker.gradle
----------------------------------------------------------------------
diff --git a/gradle/docker.gradle b/gradle/docker.gradle
index d4828e4..b5a356f 100644
--- a/gradle/docker.gradle
+++ b/gradle/docker.gradle
@@ -38,6 +38,17 @@
  *                       The default is 'root'.
  */
 
+static def getWorkingDirArgIndex(args) {
+  def index = 0
+  for (arg in args) {
+    if (arg.equals('-w')) {
+      return index + 1
+    }
+    index++
+  }
+  return -1
+}
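
The helper added above scans the docker argument list for the `-w` (working directory) flag and returns the index of the value that follows it. A Java rendering of the same logic, for clarity:

```java
import java.util.List;

public class WorkingDirArg {
  // Returns the index of the argument following "-w", or -1 if the flag is absent.
  public static int workingDirArgIndex(List<String> args) {
    int index = 0;
    for (String arg : args) {
      if (arg.equals("-w")) {
        return index + 1; // the value follows the flag itself
      }
      index++;
    }
    return -1;
  }
}
```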
+
 def dockerConfig = {
   maxParallelForks = dunitParallelForks.toInteger()
 
@@ -76,17 +87,32 @@ def dockerConfig = {
       }
 
       // Remove JAVA_HOME and PATH env variables - they might not be the same as the container needs
-      args[javaHomeIdx] = 'JAVA_HOME_REMOVED='
-      args[pathIdx] = 'PATH_REMOVED='
+      if (javaHomeIdx > 0) {
+        args[javaHomeIdx] = 'JAVA_HOME_REMOVED='
+      }
+      if (pathIdx > 0) {
+        args[pathIdx] = 'PATH_REMOVED='
+      }
+
+      // Unfortunately this snippet of code is here and is required by dev-tools/docker/base/entrypoint.sh.
+      // This allows preserving the outer user inside the running container. Required for Jenkins
+      // and other environments. There doesn't seem to be a way to pass this environment variable
+      // in from a Jenkins Gradle job.
+      if (System.env['LOCAL_USER_ID'] == null) {
+        def username = System.getProperty("user.name")
+        def uid = ['id', '-u', username].execute().text.trim()
+        args.add(1, "-e" as String)
+        args.add(2, "LOCAL_USER_ID=${uid}" as String)
+      }
 
       // Infer the index of this invocation
       def matcher = (args[args.size - 1] =~ /.*Executor (\d*).*/)
 
-      args[3] = args[3] + matcher[0][1]
-      def workdir = new File(args[3])
-      // println "dockerize: making ${workdir}"
+      def pwdIndex = getWorkingDirArgIndex(args)
+      args[pwdIndex] = args[pwdIndex] + matcher[0][1]
+      def workdir = new File(args[pwdIndex])
       workdir.mkdirs()
-      // println args
+//      println args
 
       args
     }


[50/51] [abbrv] geode git commit: GEODE-1279: rename tests with old bug system numbers

Posted by kl...@apache.org.
http://git-wip-us.apache.org/repos/asf/geode/blob/23c4126a/geode-core/src/test/java/org/apache/geode/test/dunit/internal/JUnit4DistributedTestCase.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/test/dunit/internal/JUnit4DistributedTestCase.java b/geode-core/src/test/java/org/apache/geode/test/dunit/internal/JUnit4DistributedTestCase.java
index 3572e3f..174b385 100644
--- a/geode-core/src/test/java/org/apache/geode/test/dunit/internal/JUnit4DistributedTestCase.java
+++ b/geode-core/src/test/java/org/apache/geode/test/dunit/internal/JUnit4DistributedTestCase.java
@@ -18,13 +18,16 @@ import static org.apache.geode.distributed.ConfigurationProperties.LOCATORS;
 import static org.apache.geode.distributed.ConfigurationProperties.LOG_FILE;
 import static org.apache.geode.distributed.ConfigurationProperties.MCAST_PORT;
 import static org.apache.geode.distributed.ConfigurationProperties.STATISTIC_ARCHIVE_FILE;
+import static org.apache.geode.test.dunit.DistributedTestUtils.getAllDistributedSystemProperties;
+import static org.apache.geode.test.dunit.DistributedTestUtils.unregisterInstantiatorsInThisVM;
+import static org.apache.geode.test.dunit.Invoke.invokeInEveryVM;
+import static org.apache.geode.test.dunit.Invoke.invokeInLocator;
+import static org.apache.geode.test.dunit.LogWriterUtils.getLogWriter;
 import static org.junit.Assert.assertNotNull;
 
 import java.io.Serializable;
-import java.text.DecimalFormat;
-import java.util.Iterator;
 import java.util.LinkedHashSet;
-import java.util.Map;
+import java.util.Map.Entry;
 import java.util.Properties;
 import java.util.Set;
 
@@ -65,11 +68,8 @@ import org.apache.geode.internal.net.SocketCreator;
 import org.apache.geode.internal.net.SocketCreatorFactory;
 import org.apache.geode.management.internal.cli.LogWrapper;
 import org.apache.geode.test.dunit.DUnitBlackboard;
-import org.apache.geode.test.dunit.DistributedTestUtils;
 import org.apache.geode.test.dunit.Host;
 import org.apache.geode.test.dunit.IgnoredException;
-import org.apache.geode.test.dunit.Invoke;
-import org.apache.geode.test.dunit.LogWriterUtils;
 import org.apache.geode.test.dunit.standalone.DUnitLauncher;
 import org.apache.geode.test.junit.rules.serializable.SerializableTestName;
 
@@ -77,10 +77,9 @@ import org.apache.geode.test.junit.rules.serializable.SerializableTestName;
  * This class is the base class for all distributed tests using JUnit 4.
  */
 public abstract class JUnit4DistributedTestCase implements DistributedTestFixture, Serializable {
-
   private static final Logger logger = LogService.getLogger();
 
-  private static final Set<String> testHistory = new LinkedHashSet<String>();
+  private static final Set<String> testHistory = new LinkedHashSet<>();
 
   /** This VM's connection to the distributed system */
   private static InternalDistributedSystem system;
@@ -88,10 +87,7 @@ public abstract class JUnit4DistributedTestCase implements DistributedTestFixtur
   private static Properties lastSystemProperties;
   private static volatile String testMethodName;
 
-  /** For formatting timing info */
-  private static final DecimalFormat format = new DecimalFormat("###.###");
-
-  private static boolean reconnect = false;
+  private static DUnitBlackboard blackboard;
 
   private static final boolean logPerTest = Boolean.getBoolean("dunitLogPerTest");
 
@@ -118,17 +114,6 @@ public abstract class JUnit4DistributedTestCase implements DistributedTestFixtur
   @Rule
   public SerializableTestName testNameForDistributedTestCase = new SerializableTestName();
 
-  private static DUnitBlackboard blackboard;
-
-  /**
-   * Returns a DUnitBlackboard that can be used to pass data between VMs and synchronize actions.
-   * 
-   * @return the blackboard
-   */
-  public DUnitBlackboard getBlackboard() {
-    return blackboard;
-  }
-
   @BeforeClass
   public static final void initializeDistributedTestCase() {
     DUnitLauncher.launchIfNeeded();
@@ -149,19 +134,12 @@ public abstract class JUnit4DistributedTestCase implements DistributedTestFixtur
     return this.distributedTestFixture.getClass();
   }
 
-  // ---------------------------------------------------------------------------
-  // methods for tests
-  // ---------------------------------------------------------------------------
-
   /**
    * @deprecated Please override {@link #getDistributedSystemProperties()} instead.
    */
   @Deprecated
-  public final void setSystem(final Properties props, final DistributedSystem ds) { // TODO:
-                                                                                    // override
-                                                                                    // getDistributedSystemProperties
-                                                                                    // and then
-                                                                                    // delete
+  public final void setSystem(final Properties props, final DistributedSystem ds) {
+    // TODO: override getDistributedSystemProperties and then delete
     system = (InternalDistributedSystem) ds;
     lastSystemProperties = props;
     lastSystemCreatedInTest = getTestClass(); // used to be getDeclaringClass()
@@ -185,9 +163,10 @@ public abstract class JUnit4DistributedTestCase implements DistributedTestFixtur
     if (system == null) {
       system = InternalDistributedSystem.getAnyInstance();
     }
+
     if (system == null || !system.isConnected()) {
       // Figure out our distributed system properties
-      Properties p = DistributedTestUtils.getAllDistributedSystemProperties(props);
+      Properties p = getAllDistributedSystemProperties(props);
       lastSystemCreatedInTest = getTestClass(); // used to be getDeclaringClass()
       if (logPerTest) {
         String testMethod = getTestMethodName();
@@ -199,36 +178,37 @@ public abstract class JUnit4DistributedTestCase implements DistributedTestFixtur
       }
       system = (InternalDistributedSystem) DistributedSystem.connect(p);
       lastSystemProperties = p;
+
     } else {
       boolean needNewSystem = false;
       if (!getTestClass().equals(lastSystemCreatedInTest)) { // used to be getDeclaringClass()
-        Properties newProps = DistributedTestUtils.getAllDistributedSystemProperties(props);
+        Properties newProps = getAllDistributedSystemProperties(props);
         needNewSystem = !newProps.equals(lastSystemProperties);
         if (needNewSystem) {
-          LogWriterUtils.getLogWriter()
+          getLogWriter()
               .info("Test class has changed and the new DS properties are not an exact match. "
                   + "Forcing DS disconnect. Old props = " + lastSystemProperties + "new props="
                   + newProps);
         }
+
       } else {
         Properties activeProps = system.getProperties();
-        for (Map.Entry<Object, Object> objectObjectEntry : props.entrySet()) {
-          Map.Entry entry = objectObjectEntry;
+        for (Entry<Object, Object> entry : props.entrySet()) {
           String key = (String) entry.getKey();
           String value = (String) entry.getValue();
           if (!value.equals(activeProps.getProperty(key))) {
             needNewSystem = true;
-            LogWriterUtils.getLogWriter().info("Forcing DS disconnect. For property " + key
-                + " old value = " + activeProps.getProperty(key) + " new value = " + value);
+            getLogWriter().info("Forcing DS disconnect. For property " + key + " old value = "
+                + activeProps.getProperty(key) + " new value = " + value);
             break;
           }
         }
       }
+
       if (needNewSystem) {
         // the current system does not meet our needs to disconnect and
         // call recursively to get a new system.
-        LogWriterUtils.getLogWriter()
-            .info("Disconnecting from current DS in order to make a new one");
+        getLogWriter().info("Disconnecting from current DS in order to make a new one");
         disconnectFromDS();
         getSystem(props);
       }
@@ -307,14 +287,13 @@ public abstract class JUnit4DistributedTestCase implements DistributedTestFixtur
 
   public static final void disconnectAllFromDS() {
     disconnectFromDS();
-    Invoke.invokeInEveryVM("disconnectFromDS", () -> disconnectFromDS());
+    invokeInEveryVM("disconnectFromDS", () -> disconnectFromDS());
   }
 
   /**
    * Disconnects this VM from the distributed system
    */
   public static final void disconnectFromDS() {
-    // setTestMethodName(null);
     GemFireCacheImpl.testCacheXml = null;
     if (system != null) {
       system.disconnect();
@@ -328,20 +307,24 @@ public abstract class JUnit4DistributedTestCase implements DistributedTestFixtur
       }
       try {
         ds.disconnect();
-      } catch (Exception e) {
-        // ignore
+      } catch (Exception ignore) {
       }
     }
 
     AdminDistributedSystemImpl ads = AdminDistributedSystemImpl.getConnectedInstance();
-    if (ads != null) {// && ads.isConnected()) {
+    if (ads != null) {
       ads.disconnect();
     }
   }
 
-  // ---------------------------------------------------------------------------
-  // name methods
-  // ---------------------------------------------------------------------------
+  /**
+   * Returns a DUnitBlackboard that can be used to pass data between VMs and synchronize actions.
+   *
+   * @return the blackboard
+   */
+  public DUnitBlackboard getBlackboard() {
+    return blackboard;
+  }
 
   public static final String getTestMethodName() {
     return testMethodName;
@@ -360,10 +343,6 @@ public abstract class JUnit4DistributedTestCase implements DistributedTestFixtur
     return getTestClass().getSimpleName() + "_" + getName();
   }
 
-  // ---------------------------------------------------------------------------
-  // setup methods
-  // ---------------------------------------------------------------------------
-
   /**
    * Sets up the DistributedTestCase.
    *
@@ -372,7 +351,7 @@ public abstract class JUnit4DistributedTestCase implements DistributedTestFixtur
    * setUp() or override {@link #postSetUp()} with work that needs to occur after setUp().
    */
   @Before
-  public final void setUp() throws Exception {
+  public final void setUpJUnit4DistributedTestCase() throws Exception {
     preSetUp();
     setUpDistributedTestCase();
     postSetUp();
@@ -457,11 +436,10 @@ public abstract class JUnit4DistributedTestCase implements DistributedTestFixtur
         .set(new InternalDistributedSystem.CreationStackGenerator() {
           @Override
           public Throwable generateCreationStack(final DistributionConfig config) {
-            final StringBuilder sb = new StringBuilder();
-            final String[] validAttributeNames = config.getAttributeNames();
-            for (int i = 0; i < validAttributeNames.length; i++) {
-              final String attName = validAttributeNames[i];
-              final Object actualAtt = config.getAttributeObject(attName);
+            StringBuilder sb = new StringBuilder();
+            String[] validAttributeNames = config.getAttributeNames();
+            for (String attName : validAttributeNames) {
+              Object actualAtt = config.getAttributeObject(attName);
               String actualAttStr = actualAtt.toString();
               sb.append("  ");
               sb.append(attName);
@@ -489,10 +467,6 @@ public abstract class JUnit4DistributedTestCase implements DistributedTestFixtur
     System.out.println("Previously run tests: " + testHistory);
   }
 
-  // ---------------------------------------------------------------------------
-  // teardown methods
-  // ---------------------------------------------------------------------------
-
   /**
    * Tears down the DistributedTestCase.
    *
@@ -517,8 +491,7 @@ public abstract class JUnit4DistributedTestCase implements DistributedTestFixtur
   }
 
   private final void tearDownDistributedTestCase() throws Exception {
-    Invoke.invokeInEveryVM("tearDownCreationStackGenerator",
-        () -> tearDownCreationStackGenerator());
+    invokeInEveryVM("tearDownCreationStackGenerator", () -> tearDownCreationStackGenerator());
     if (logPerTest) {
       disconnectAllFromDS();
     }
@@ -526,7 +499,6 @@ public abstract class JUnit4DistributedTestCase implements DistributedTestFixtur
     if (!getDistributedSystemProperties().isEmpty()) {
       disconnectAllFromDS();
     }
-
   }
 
   /**
@@ -571,10 +543,10 @@ public abstract class JUnit4DistributedTestCase implements DistributedTestFixtur
 
   private static final void cleanupAllVms() {
     tearDownVM();
-    Invoke.invokeInEveryVM("tearDownVM", () -> tearDownVM());
-    Invoke.invokeInLocator(() -> {
+    invokeInEveryVM("tearDownVM", () -> tearDownVM());
+    invokeInLocator(() -> {
       DistributionMessageObserver.setInstance(null);
-      DistributedTestUtils.unregisterInstantiatorsInThisVM();
+      unregisterInstantiatorsInThisVM();
     });
     DUnitLauncher.closeAndCheckForSuspects();
   }
@@ -582,6 +554,7 @@ public abstract class JUnit4DistributedTestCase implements DistributedTestFixtur
   private static final void tearDownVM() {
     closeCache();
     disconnectFromDS();
+
     // keep alphabetized to detect duplicate lines
     CacheCreation.clearThreadLocals();
     CacheServerLauncher.clearStatics();
@@ -590,7 +563,7 @@ public abstract class JUnit4DistributedTestCase implements DistributedTestFixtur
     ClientServerTestCase.AUTO_LOAD_BALANCE = false;
     ClientStatsManager.cleanupForTests();
     DiskStoreObserver.setInstance(null);
-    DistributedTestUtils.unregisterInstantiatorsInThisVM();
+    unregisterInstantiatorsInThisVM();
     DistributionMessageObserver.setInstance(null);
     GlobalLockingDUnitTest.region_testBug32356 = null;
     InitialImageOperation.slowImageProcessing = 0;
@@ -617,7 +590,8 @@ public abstract class JUnit4DistributedTestCase implements DistributedTestFixtur
     SocketCreatorFactory.close();
   }
 
-  private static final void closeCache() { // TODO: this should move to CacheTestCase
+  // TODO: this should move to CacheTestCase
+  private static final void closeCache() {
     GemFireCacheImpl cache = GemFireCacheImpl.getInstance();
     if (cache != null && !cache.isClosed()) {
       destroyRegions(cache);
@@ -625,12 +599,11 @@ public abstract class JUnit4DistributedTestCase implements DistributedTestFixtur
     }
   }
 
-  protected static final void destroyRegions(final Cache cache) { // TODO: this should move to
-                                                                  // CacheTestCase
+  // TODO: this should move to CacheTestCase
+  protected static final void destroyRegions(final Cache cache) {
     if (cache != null && !cache.isClosed()) {
       // try to destroy the root regions first so that we clean up any persistent files.
-      for (Iterator itr = cache.rootRegions().iterator(); itr.hasNext();) {
-        Region root = (Region) itr.next();
+      for (Region<?, ?> root : cache.rootRegions()) {
         String regionFullPath = root == null ? null : root.getFullPath();
         // for colocated regions you can't locally destroy a partitioned region.
         if (root.isDestroyed() || root instanceof HARegion || root instanceof PartitionedRegion) {


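[Editorial illustration] The `getBlackboard()` accessor added in the diff above returns a DUnitBlackboard, which the Javadoc describes as a way to pass data between VMs and synchronize actions. As a rough, hypothetical single-VM sketch of that pattern (not Geode's actual implementation), a blackboard can pair named mailboxes for data with named one-shot gates for synchronization:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

// Hypothetical sketch, not Geode's DUnitBlackboard: named mailboxes carry
// data between participants, named gates provide one-shot synchronization.
public class MiniBlackboard {
  private final Map<String, Object> mailboxes = new ConcurrentHashMap<>();
  private final Map<String, CountDownLatch> gates = new ConcurrentHashMap<>();

  public void setMailbox(String name, Object value) {
    mailboxes.put(name, value);
  }

  @SuppressWarnings("unchecked")
  public <T> T getMailbox(String name) {
    return (T) mailboxes.get(name);
  }

  public void signalGate(String name) {
    gate(name).countDown(); // opens the gate for current and future waiters
  }

  public void waitForGate(String name) {
    try {
      gate(name).await(); // blocks until signalGate(name) has been called
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }

  private CountDownLatch gate(String name) {
    // Lazily create the latch so signal and wait can arrive in either order.
    return gates.computeIfAbsent(name, k -> new CountDownLatch(1));
  }
}
```

In the DUnit setting each VM would share one such board; a producer VM calls `setMailbox` then `signalGate`, and a consumer VM calls `waitForGate` before reading the mailbox.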
[19/51] [abbrv] geode git commit: GEODE-3395 Variable-ize product version and name in user guide - Tools & Modules

Posted by kl...@apache.org.
GEODE-3395 Variable-ize product version and name in user guide - Tools & Modules


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/bb988caa
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/bb988caa
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/bb988caa

Branch: refs/heads/feature/GEODE-1279
Commit: bb988caa5969d35948fd603f73bb1a32e879ef78
Parents: c5dd26b
Author: Dave Barnes <db...@pivotal.io>
Authored: Tue Aug 15 14:26:11 2017 -0700
Committer: Dave Barnes <db...@pivotal.io>
Committed: Tue Aug 15 16:27:12 2017 -0700

----------------------------------------------------------------------
 geode-docs/tools_modules/book_intro.html.md.erb | 18 +++----
 .../gemcached/about_gemcached.html.md.erb       |  6 +--
 .../gemcached/advantages.html.md.erb            | 14 +++---
 .../gemcached/chapter_overview.html.md.erb      |  4 +-
 .../gemcached/deploying_gemcached.html.md.erb   |  4 +-
 .../tools_modules/gfsh/about_gfsh.html.md.erb   | 12 ++---
 .../gfsh/cache_xml_2_gfsh.html.md.erb           |  2 +-
 .../gfsh/chapter_overview.html.md.erb           |  8 ++--
 .../gfsh/command-pages/alter.html.md.erb        |  4 +-
 .../gfsh/command-pages/change.html.md.erb       |  2 +-
 .../gfsh/command-pages/configure.html.md.erb    |  2 +-
 .../gfsh/command-pages/create.html.md.erb       | 26 +++++-----
 .../gfsh/command-pages/export.html.md.erb       |  4 +-
 .../gfsh/command-pages/list.html.md.erb         | 14 +++---
 .../gfsh/command-pages/query.html.md.erb        |  2 +-
 .../gfsh/command-pages/show.html.md.erb         |  4 +-
 .../gfsh/command-pages/start.html.md.erb        | 50 ++++++++++----------
 .../gfsh/command-pages/status.html.md.erb       |  8 ++--
 .../gfsh/command-pages/stop.html.md.erb         | 10 ++--
 .../gfsh/configuring_gfsh.html.md.erb           |  8 ++--
 .../gfsh/getting_started_gfsh.html.md.erb       |  2 +-
 .../gfsh/gfsh_command_index.html.md.erb         |  8 ++--
 .../gfsh/gfsh_quick_reference.html.md.erb       |  4 +-
 .../gfsh/quick_ref_commands_by_area.html.md.erb | 32 ++++++-------
 .../gfsh/starting_gfsh.html.md.erb              | 10 ++--
 .../tools_modules/gfsh/tour_of_gfsh.html.md.erb | 14 +++---
 .../useful_gfsh_shell_variables.html.md.erb     |  2 +-
 .../chapter_overview.html.md.erb                | 10 ++--
 .../common_gemfire_topologies.html.md.erb       |  2 +-
 .../http_why_use_gemfire.html.md.erb            | 16 +++----
 .../interactive_mode_ref.html.md.erb            | 38 +++++++--------
 .../http_session_mgmt/quick_start.html.md.erb   |  8 ++--
 .../session_mgmt_tcserver.html.md.erb           |  4 +-
 .../session_mgmt_tomcat.html.md.erb             |  6 +--
 .../session_mgmt_weblogic.html.md.erb           |  6 +--
 .../session_state_log_files.html.md.erb         | 20 ++++----
 .../tc_additional_info.html.md.erb              | 14 +++---
 .../tc_changing_gf_default_cfg.html.md.erb      | 16 +++----
 .../tc_installing_the_module.html.md.erb        |  2 +-
 .../tc_setting_up_the_module.html.md.erb        |  8 ++--
 .../tomcat_changing_gf_default_cfg.html.md.erb  | 38 +++++++--------
 .../tomcat_installing_the_module.html.md.erb    |  2 +-
 .../tomcat_setting_up_the_module.html.md.erb    | 18 +++----
 ...weblogic_changing_gf_default_cfg.html.md.erb | 48 +++++++++----------
 ...gic_common_configuration_changes.html.md.erb |  4 +-
 .../weblogic_setting_up_the_module.html.md.erb  | 20 ++++----
 .../lucene_integration.html.md.erb              |  4 +-
 .../tools_modules/pulse/pulse-auth.html.md.erb  |  4 +-
 .../pulse/pulse-embedded.html.md.erb            | 12 ++---
 .../pulse/pulse-hosted.html.md.erb              | 14 +++---
 .../pulse/pulse-overview.html.md.erb            | 16 +++----
 .../tools_modules/pulse/pulse-views.html.md.erb | 40 ++++++++--------
 .../tools_modules/redis_adapter.html.md.erb     | 34 ++++++-------
 53 files changed, 332 insertions(+), 346 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/book_intro.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/book_intro.html.md.erb b/geode-docs/tools_modules/book_intro.html.md.erb
index 2bf0930..c86f925 100644
--- a/geode-docs/tools_modules/book_intro.html.md.erb
+++ b/geode-docs/tools_modules/book_intro.html.md.erb
@@ -19,32 +19,32 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-*Tools and Modules* describes tools and modules associated with Apache Geode.
+*Tools and Modules* describes tools and modules associated with <%=vars.product_name_long%>.
 
 <a id="deploy_run__section_tool_intro"></a>
 
 -   **[gfsh](gfsh/chapter_overview.html)**
 
-    `gfsh` (pronounced "jee-fish") provides a single, powerful command-line interface from which you can launch, manage, and monitor Geode processes, data, and applications.
+    `gfsh` (pronounced "jee-fish") provides a single, powerful command-line interface from which you can launch, manage, and monitor <%=vars.product_name%> processes, data, and applications.
 
 -   **[Gemcached](gemcached/chapter_overview.html)**
 
-    Gemcached is a Geode adapter that allows Memcached clients to communicate with a Geode server cluster, as if the servers were memcached servers. Memcached is an open-source caching solution that uses a distributed, in-memory hash map to store key-value pairs of string or object data.
+    Gemcached is a <%=vars.product_name%> adapter that allows Memcached clients to communicate with a <%=vars.product_name%> server cluster, as if the servers were memcached servers. Memcached is an open-source caching solution that uses a distributed, in-memory hash map to store key-value pairs of string or object data.
 
 -   **[HTTP Session Management Modules](http_session_mgmt/chapter_overview.html)**
 
-    The Apache Geode HTTP Session Management modules provide fast, scalable, and reliable session replication for HTTP servers without requiring application changes.
+    The <%=vars.product_name_long%> HTTP Session Management modules provide fast, scalable, and reliable session replication for HTTP servers without requiring application changes.
 
--   **[Geode Pulse](pulse/pulse-overview.html)**
+-   **[<%=vars.product_name%> Pulse](pulse/pulse-overview.html)**
 
-    Geode Pulse is a Web Application that provides a graphical dashboard for monitoring vital, real-time health and performance of Geode clusters, members, and regions.
+    <%=vars.product_name%> Pulse is a Web Application that provides a graphical dashboard for monitoring vital, real-time health and performance of <%=vars.product_name%> clusters, members, and regions.
 
--   **[Geode Redis Adapter](redis_adapter.html)**
+-   **[<%=vars.product_name%> Redis Adapter](redis_adapter.html)**
 
-    The Geode Redis adapter allows Geode to function as a drop-in replacement for a Redis data store, letting Redis applications take advantage of Geode’s scaling capabilities without changing their client code. Redis clients connect to a Geode server in the same way they connect to a Redis server, using an IP address and a port number.
+    The <%=vars.product_name%> Redis adapter allows <%=vars.product_name%> to function as a drop-in replacement for a Redis data store, letting Redis applications take advantage of <%=vars.product_name%>’s scaling capabilities without changing their client code. Redis clients connect to a <%=vars.product_name%> server in the same way they connect to a Redis server, using an IP address and a port number.
 
 
 -   **[Apache Lucene&reg; Integration](lucene_integration.html)**
 
-    The Apache Lucene&reg; integration enables users to create Lucene indexes and execute Lucene searches on data stored in Geode.
+    The Apache Lucene&reg; integration enables users to create Lucene indexes and execute Lucene searches on data stored in <%=vars.product_name%>.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gemcached/about_gemcached.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gemcached/about_gemcached.html.md.erb b/geode-docs/tools_modules/gemcached/about_gemcached.html.md.erb
index 3bf9a9e..4083d0a 100644
--- a/geode-docs/tools_modules/gemcached/about_gemcached.html.md.erb
+++ b/geode-docs/tools_modules/gemcached/about_gemcached.html.md.erb
@@ -23,7 +23,7 @@ Applications use memcached clients to access data stored in embedded Gemcached s
 
 Applications can use memcached clients that are written in Python, C\#, Ruby, PHP, and other programming languages. Each memcached server in a cluster stores data as key/value pairs. A memcached client maintains a list of these servers, determines which server has the required data, and accesses the data directly on that server.
 
-To integrate memcached with Apache Geode, you embed a Gemcached server within a Geode cache server. These *Gemcached* servers take the place of memcached servers. The memcached client uses its normal wire protocol to communicate with the Gemcached servers, which appear to the client as memcached servers. No code changes in the clients are needed. Geode manages the distribution and access to data among the embedded Gemcached servers.
+To integrate memcached with <%=vars.product_name_long%>, you embed a Gemcached server within a <%=vars.product_name%> cache server. These *Gemcached* servers take the place of memcached servers. The memcached client uses its normal wire protocol to communicate with the Gemcached servers, which appear to the client as memcached servers. No code changes in the clients are needed. <%=vars.product_name%> manages the distribution and access to data among the embedded Gemcached servers.
 
 As shown in [Gemcached Architecture](about_gemcached.html#concept_4C654CA7F6B34E4CA1B0318BC9644536__fig_8BF351B5FAF1490F8B0D0E7F3098BC73), memcached clients, which ordinarily maintain a list of memcached servers, now maintain a list of embedded Gemcached servers. If more embedded Gemcached servers are added to the cluster, the new servers automatically become part of the cluster. The memcached clients can continue to communicate with the servers on the list, without having to update their list of servers.
 
@@ -32,11 +32,11 @@ As shown in [Gemcached Architecture](about_gemcached.html#concept_4C654CA7F6B34E
 
 <img src="../../images/Gemcached.png" id="concept_4C654CA7F6B34E4CA1B0318BC9644536__image_98B6222F29B940CD93381D03325C4455" class="image" />
 
-Memcached clients use the memcached API to read and write data that is stored in memcached servers; therefore, client-side Geode features are not available to these clients. Gemcached servers, however, can use Geode's server-side features and API. These features include the following. (For more detail, see [Advantages of Gemcached over Memcached](advantages.html#topic_849581E507544E63AF23793FBC47D778).)
+Memcached clients use the memcached API to read and write data that is stored in memcached servers; therefore, client-side <%=vars.product_name%> features are not available to these clients. Gemcached servers, however, can use <%=vars.product_name%>'s server-side features and API. These features include the following. (For more detail, see [Advantages of Gemcached over Memcached](advantages.html#topic_849581E507544E63AF23793FBC47D778).)
 
 -   Data consistency and scalability.
 -   High availability.
--   Read-through, write through, and write behind to a database, implemented from within the distributed Geode cache.
+-   Read-through, write-through, and write-behind to a database, implemented from within the distributed <%=vars.product_name%> cache.
 -   Storage keys and values of any type and size.
 -   For applications, a choice among partitioned and replicated region configurations.
 -   Automatic overflow of data to disk in low-memory scenarios.

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gemcached/advantages.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gemcached/advantages.html.md.erb b/geode-docs/tools_modules/gemcached/advantages.html.md.erb
index 6fbe8fb..96f9e30 100644
--- a/geode-docs/tools_modules/gemcached/advantages.html.md.erb
+++ b/geode-docs/tools_modules/gemcached/advantages.html.md.erb
@@ -19,18 +19,18 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-The standard memcached architecture has inherent architectural challenges that make memcached applications difficult to write, maintain, and scale. Using Gemcached with Geode addresses these challenges.
+The standard memcached architecture has inherent architectural challenges that make memcached applications difficult to write, maintain, and scale. Using Gemcached with <%=vars.product_name%> addresses these challenges.
 
-**Data consistency**. Memcached clients must maintain a list of servers where the distributed data is stored. Each client must maintain an identical list, with each list ordered in the same way. It is the responsibility of the application logic to maintain and propagate this list. If some clients do not have the correct list, the client can retrieve stale data. In Geode clusters, all members communicate with each other to maintain data consistency, which eliminates the need to code these behaviors in the memcached clients.
+**Data consistency**. Memcached clients must maintain a list of servers where the distributed data is stored. Each client must maintain an identical list, with each list ordered in the same way. It is the responsibility of the application logic to maintain and propagate this list. If some clients do not have the correct list, the client can retrieve stale data. In <%=vars.product_name%> clusters, all members communicate with each other to maintain data consistency, which eliminates the need to code these behaviors in the memcached clients.
 
-**High availability**. When a memcached server becomes unavailable, memcached clusters are subject to failures or degraded performance because clients must directly query the backend database. Memcached-based applications must be coded to handle these failures, while Geode clusters handle such failures natively.
+**High availability**. When a memcached server becomes unavailable, memcached clusters are subject to failures or degraded performance because clients must directly query the backend database. Memcached-based applications must be coded to handle these failures, while <%=vars.product_name%> clusters handle such failures natively.
 
-**Faster cluster startup time**. When a memcached cluster fails and a restart is required, the data must be reloaded and distributed to the cluster members while simultaneously processing requests for data. These startup activities can be time-consuming. When a Geode cluster restarts, data can be reloaded from other in-memory, redundant copies of the data or from disk, without having to query the back end database.
+**Faster cluster startup time**. When a memcached cluster fails and a restart is required, the data must be reloaded and distributed to the cluster members while simultaneously processing requests for data. These startup activities can be time-consuming. When a <%=vars.product_name%> cluster restarts, data can be reloaded from other in-memory, redundant copies of the data or from disk, without having to query the backend database.
 
-**Better handling of network segmentation**. Large deployments of memcached can use hundreds of servers to manage data. If, due to network segmentation, some clients cannot connect to all nodes of a partition, the clients will have to fetch the data from the backend database to avoid hosting stale data. Geode clusters handle network segmentation to ensure that client responses are consistent.
+**Better handling of network segmentation**. Large deployments of memcached can use hundreds of servers to manage data. If, due to network segmentation, some clients cannot connect to all nodes of a partition, the clients will have to fetch the data from the backend database to avoid hosting stale data. <%=vars.product_name%> clusters handle network segmentation to ensure that client responses are consistent.
 
-**Automatic scalability**. If you need to add capacity to a memcached cluster, you must propagate a new server list to all clients. As new clients come on line with the new list, older clients may not have a consistent view of the data in the cluster, which can result in inconsistent data in the servers. Because new Geode cache server members automatically discover each other, memcached clients do not need to maintain a complete server list. You can add capacity simply by adding servers.
+**Automatic scalability**. If you need to add capacity to a memcached cluster, you must propagate a new server list to all clients. As new clients come on line with the new list, older clients may not have a consistent view of the data in the cluster, which can result in inconsistent data in the servers. Because new <%=vars.product_name%> cache server members automatically discover each other, memcached clients do not need to maintain a complete server list. You can add capacity simply by adding servers.
 
-**Scalable client connections**. A memcached client may need to access multiple pieces of data stored on multiple servers, which can result in clients having a TCP connection open to every server. When a memcached client accesses a Gemcached server, only a single connection to a Gemcached server instance is required. The Gemcached server manages the distribution of data using Geode's standard features.
+**Scalable client connections**. A memcached client may need to access multiple pieces of data stored on multiple servers, which can result in clients having a TCP connection open to every server. When a memcached client accesses a Gemcached server, only a single connection to a Gemcached server instance is required. The Gemcached server manages the distribution of data using <%=vars.product_name%>'s standard features.
 
 
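[Editorial illustration] The "Data consistency" point in the diff above is mechanical: a memcached client picks a server by hashing each key against its local server list, so two clients whose lists differ in content or order can route the same key to different servers and read stale data. A toy sketch of that routing (hypothetical, not any real client's algorithm):

```java
import java.util.List;

// Toy key-to-server routing: the chosen server depends entirely on the
// client's local server list, both its size and its ordering.
public class ServerListRouting {
  public static String route(String key, List<String> servers) {
    // floorMod keeps the index non-negative even for negative hash codes.
    int idx = Math.floorMod(key.hashCode(), servers.size());
    return servers.get(idx);
  }
}
```

Because the mapping is deterministic only relative to one list, any drift between clients' lists silently splits reads and writes; a Geode cluster avoids this by having the members themselves coordinate data placement.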

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gemcached/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gemcached/chapter_overview.html.md.erb b/geode-docs/tools_modules/gemcached/chapter_overview.html.md.erb
index 5fef3e3..8e029ff 100644
--- a/geode-docs/tools_modules/gemcached/chapter_overview.html.md.erb
+++ b/geode-docs/tools_modules/gemcached/chapter_overview.html.md.erb
@@ -21,7 +21,7 @@ limitations under the License.
 <a id="topic_3751C8A924884B7F88F993CAD350D4FE"></a>
 
 
-Gemcached is a Geode adapter that allows Memcached clients to communicate with a Geode server cluster, as if the servers were memcached servers. Memcached is an open-source caching solution that uses a distributed, in-memory hash map to store key-value pairs of string or object data.
+Gemcached is a <%=vars.product_name%> adapter that allows Memcached clients to communicate with a <%=vars.product_name%> server cluster, as if the servers were memcached servers. Memcached is an open-source caching solution that uses a distributed, in-memory hash map to store key-value pairs of string or object data.
 
 For information about Memcached, see [http://www.memcached.org](http://www.memcached.org).
 
@@ -35,6 +35,6 @@ For information about Memcached, see [http://www.memcached.org](http://www.memca
 
 -   **[Advantages of Gemcached over Memcached](advantages.html)**
 
-    The standard memcached architecture has inherent architectural challenges that make memcached applications difficult to write, maintain, and scale. Using Gemcached with Geode addresses these challenges.
+    The standard memcached architecture has inherent architectural challenges that make memcached applications difficult to write, maintain, and scale. Using Gemcached with <%=vars.product_name%> addresses these challenges.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gemcached/deploying_gemcached.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gemcached/deploying_gemcached.html.md.erb b/geode-docs/tools_modules/gemcached/deploying_gemcached.html.md.erb
index 2c4724e..68801eb 100644
--- a/geode-docs/tools_modules/gemcached/deploying_gemcached.html.md.erb
+++ b/geode-docs/tools_modules/gemcached/deploying_gemcached.html.md.erb
@@ -23,9 +23,9 @@ You can configure and deploy Gemcached servers in a Java class or by using the g
 
 The following sections describe how to configure and deploy an embedded Gemcached server. You can configure and start a GemCached server either by invoking a Java class that calls the cache server's `start()` method, or by starting the cache server using the gfsh command line.
 
-## <a id="topic_7B158074B27A4FEF9D38E7C369905C72__section_17E7E4058D914334B9C5AC2E3DC1F7F2" class="no-quick-link"></a>Embedding a Gemcached server in a Geode Java Application
+## <a id="topic_7B158074B27A4FEF9D38E7C369905C72__section_17E7E4058D914334B9C5AC2E3DC1F7F2" class="no-quick-link"></a>Embedding a Gemcached server in a <%=vars.product_name%> Java Application
 
-The `org.apache.geode.memcached` package contains a single class, `GemFireMemcachedServer`. (See the [Geode Javadocs](http://static.springsource.org/spring-gemfire/docs/current/api/).) Use this class to configure and embed a Gemcached server in a Geode cache server. For example, the following statement creates and starts an embedded Gemcached server on port number 5555 using the binary protocol:
+The `org.apache.geode.memcached` package contains a single class, `GemFireMemcachedServer` (see the <%=vars.product_name%> Javadocs). Use this class to configure and embed a Gemcached server in a <%=vars.product_name%> cache server. For example, the following statement creates and starts an embedded Gemcached server on port number 5555 using the binary protocol:
 
 ``` pre
 GemFireMemcachedServer server = 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/about_gfsh.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/about_gfsh.html.md.erb b/geode-docs/tools_modules/gfsh/about_gfsh.html.md.erb
index b440bfa..1f54575 100644
--- a/geode-docs/tools_modules/gfsh/about_gfsh.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/about_gfsh.html.md.erb
@@ -19,24 +19,24 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-`gfsh` supports the administration, debugging, and deployment of Apache Geode processes and applications.
+`gfsh` supports the administration, debugging, and deployment of <%=vars.product_name_long%> processes and applications.
 
 With `gfsh`, you can:
 
--   Start and stop Apache Geode processes, such as locators and cache servers
+-   Start and stop <%=vars.product_name_long%> processes, such as locators and cache servers
 -   Start and stop gateway sender and gateway receiver processes
 -   Deploy applications
 -   Create and destroy regions
 -   Execute functions
 -   Manage disk stores
 -   Import and export data
--   Monitor Apache Geode processes
--   Launch Apache Geode monitoring tools
+-   Monitor <%=vars.product_name_long%> processes
+-   Launch <%=vars.product_name_long%> monitoring tools
 
-The `gfsh` command line interface lets developers spend less time configuring cache instance XML, properties, logs, and statistics. gfsh commands generate reports; capture cluster-wide statistics; and support the export of statistics, logs, and configurations. Like Spring Roo, gfsh features command completion (so you do not have to know the syntax), context-sensitive help, scripting, and the ability to invoke any commands from within an application by using a simple API. The gfsh interface uses JMX/RMI to communicate with Apache Geode processes.
+The `gfsh` command line interface lets developers spend less time configuring cache instance XML, properties, logs, and statistics. gfsh commands generate reports; capture cluster-wide statistics; and support the export of statistics, logs, and configurations. Like Spring Roo, gfsh features command completion (so you do not have to know the syntax), context-sensitive help, scripting, and the ability to invoke any commands from within an application by using a simple API. The gfsh interface uses JMX/RMI to communicate with <%=vars.product_name_long%> processes.
 
 You can connect gfsh to a remote distributed system using the HTTP protocol. See [Using gfsh to Manage a Remote Cluster Over HTTP or HTTPS](../../configuring/cluster_config/gfsh_remote.html).
 
-By default, the cluster configuration service saves the configuration of your Apache Geode cluster as you create Apache Geode objects using gfsh. You can export this configuration and import it into another Apache Geode cluster. See [Overview of the Cluster Configuration Service](../../configuring/cluster_config/gfsh_persist.html#concept_r22_hyw_bl).
+By default, the cluster configuration service saves the configuration of your <%=vars.product_name_long%> cluster as you create <%=vars.product_name_long%> objects using gfsh. You can export this configuration and import it into another <%=vars.product_name_long%> cluster. See [Overview of the Cluster Configuration Service](../../configuring/cluster_config/gfsh_persist.html#concept_r22_hyw_bl).
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/cache_xml_2_gfsh.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/cache_xml_2_gfsh.html.md.erb b/geode-docs/tools_modules/gfsh/cache_xml_2_gfsh.html.md.erb
index e4bab4e..c845823 100644
--- a/geode-docs/tools_modules/gfsh/cache_xml_2_gfsh.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/cache_xml_2_gfsh.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-You can configure a Geode cluster using either cache.xml files,
+You can configure a <%=vars.product_name%> cluster using cache.xml files,
 or you can use gfsh and the cluster configuration service
 to configure a cluster.
 This table maps `cache.xml` elements to the gfsh commands that

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/chapter_overview.html.md.erb b/geode-docs/tools_modules/gfsh/chapter_overview.html.md.erb
index 2cb90b5..318dc99 100644
--- a/geode-docs/tools_modules/gfsh/chapter_overview.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/chapter_overview.html.md.erb
@@ -19,11 +19,11 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-`gfsh` (pronounced "jee-fish") provides a single, powerful command-line interface from which you can launch, manage, and monitor Geode processes, data, and applications.
+`gfsh` (pronounced "jee-fish") provides a single, powerful command-line interface from which you can launch, manage, and monitor <%=vars.product_name%> processes, data, and applications.
 
 -   **[What You Can Do with gfsh](about_gfsh.html)**
 
-    `gfsh` supports the administration, debugging, and deployment of Apache Geode processes and applications.
+    `gfsh` supports the administration, debugging, and deployment of <%=vars.product_name_long%> processes and applications.
 
 -   **[Starting gfsh](starting_gfsh.html)**
 
@@ -31,7 +31,7 @@ limitations under the License.
 
 -   **[Configuring the gfsh Environment](configuring_gfsh.html)**
 
-    The `gfsh.bat` and `gfsh` bash script automatically append the required Apache Geode and JDK .jar libraries to your existing CLASSPATH. There are user-configurable properties you can set for security, environment variables, logging, and troubleshooting.
+    The `gfsh.bat` and `gfsh` bash script automatically append the required <%=vars.product_name_long%> and JDK .jar libraries to your existing CLASSPATH. There are user-configurable properties you can set for security, environment variables, logging, and troubleshooting.
 
 -   **[Useful gfsh Shell Variables](useful_gfsh_shell_variables.html)**
 
@@ -61,6 +61,6 @@ limitations under the License.
 
 -   **[Mapping of cache.xml Elements to gfsh Configuration Commands.](cache_xml_2_gfsh.html)**
 
-    You can configure a Geode cluster using either cache.xml files, or you can use `gfsh` and the cluster configuration service to configure a cluster. This section maps cache.xml elements to the `gfsh` commands that configure and manage a cluster.
+    You can configure a <%=vars.product_name%> cluster using either cache.xml files, or you can use `gfsh` and the cluster configuration service to configure a cluster. This section maps cache.xml elements to the `gfsh` commands that configure and manage a cluster.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/command-pages/alter.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/command-pages/alter.html.md.erb b/geode-docs/tools_modules/gfsh/command-pages/alter.html.md.erb
index f28c1ce..cbe44b8 100644
--- a/geode-docs/tools_modules/gfsh/command-pages/alter.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/command-pages/alter.html.md.erb
@@ -21,7 +21,7 @@ limitations under the License.
 <a id="topic_9323467A645D4F2B82EC236448030D14"></a>
 
 
-Modify an existing Geode resource.
+Modify an existing <%=vars.product_name%> resource.
 
 -   **[alter disk-store](../../../tools_modules/gfsh/command-pages/alter.html#topic_99BCAD98BDB5470189662D2F308B68EB)**
 
@@ -39,7 +39,7 @@ Modify an existing Geode resource.
 
 Modify or remove a region from an offline disk-store.
 
-When modifying a region's configuration, it is customary to take the region off-line and restart using the new configuration. You can use the `alter                         disk-store` command to change the configuration of the region stored in the disk-store to match the configuration you will use at restart.
+When modifying a region's configuration, it is customary to take the region off-line and restart using the new configuration. You can use the `alter disk-store` command to change the configuration of the region stored in the disk-store to match the configuration you will use at restart.
 
 **Availability:** Offline.
 

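As a hedged illustration of the `alter disk-store` usage described in this hunk (the disk store name, region path, and directory are hypothetical):

``` pre
gfsh>alter disk-store --name=myDiskStore --region=/myRegion --disk-dirs=/home/username/myDiskStoreDir --lru-action=overflow-to-disk
```
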
http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/command-pages/change.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/command-pages/change.html.md.erb b/geode-docs/tools_modules/gfsh/command-pages/change.html.md.erb
index 8ff5e9b..92a8696 100644
--- a/geode-docs/tools_modules/gfsh/command-pages/change.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/command-pages/change.html.md.erb
@@ -49,7 +49,7 @@ change loglevel --loglevel=value [--members=value(nullvalue)*] [--groups=value(n
 <tbody>
 <tr class="odd">
 <td><span class="keyword parmname">\-\-members</span></td>
-<td>Name or ID of one or more Geode distributed system member(s) whose logging level you want to change.</td>
+<td>Name or ID of one or more <%=vars.product_name%> distributed system member(s) whose logging level you want to change.</td>
 <td> </td>
 </tr>
 <tr class="even">

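A usage sketch of the `change loglevel` syntax shown in this hunk (the member name is hypothetical):

``` pre
gfsh>change loglevel --loglevel=fine --members=server1
```
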
http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/command-pages/configure.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/command-pages/configure.html.md.erb b/geode-docs/tools_modules/gfsh/command-pages/configure.html.md.erb
index c6ce59e..ad4ecae 100644
--- a/geode-docs/tools_modules/gfsh/command-pages/configure.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/command-pages/configure.html.md.erb
@@ -27,7 +27,7 @@ Configure Portable Data eXchange for all the cache(s) in the cluster.
 ## <a id="topic_jdkdiqbgphqh" class="no-quick-link"></a>configure pdx
 
 <a id="topic_jdkdiqbgphqh__section_C27BE964CE554180A65968DBEBF50B23"></a>
-Configures Geode's Portable Data eXchange for all the cache(s) in the cluster. This command does not effect on the running members in the system. This command persists the pdx configuration in the locator with cluster configuration service.
+Configures <%=vars.product_name%>'s Portable Data eXchange for all the cache(s) in the cluster. This command has no effect on members already running in the system. This command persists the PDX configuration in the locator with the cluster configuration service.
 
 **Note:**
 This command should be issued before starting any data members.

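Per the note that `configure pdx` must be issued before starting any data members, a hedged sequence might look like this (member names and the class pattern are illustrative):

``` pre
gfsh>start locator --name=locator1
gfsh>configure pdx --read-serialized=true --auto-serializable-classes=com.example.*
gfsh>start server --name=server1
```
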
http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/command-pages/create.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/command-pages/create.html.md.erb b/geode-docs/tools_modules/gfsh/command-pages/create.html.md.erb
index 01efcd8..5d20794 100644
--- a/geode-docs/tools_modules/gfsh/command-pages/create.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/command-pages/create.html.md.erb
@@ -121,13 +121,13 @@ create async-event-queue --id=value --listener=value [--group=value(nullvalue)*]
 </tr>
 <tr class="odd">
 <td><span class="keyword parmname">\-\-persistent</span></td>
-<td>Boolean value that determines whether Geode persists this queue.</td>
+<td>Boolean value that determines whether <%=vars.product_name%> persists this queue.</td>
 <td>false
 <p>If specified without a value, the default is true.</p></td>
 </tr>
 <tr class="even">
 <td><span class="keyword parmname">\-\-disk-store</span></td>
-<td>Named disk store to use for storing queue overflow, or for persisting the queue. If you specify a value, the named disk store must exist. If you specify a null value, Geode uses the default disk store for overflow and queue persistence.</td>
+<td>Named disk store to use for storing queue overflow, or for persisting the queue. If you specify a value, the named disk store must exist. If you specify a null value, <%=vars.product_name%> uses the default disk store for overflow and queue persistence.</td>
 <td> </td>
 </tr>
 <tr class="odd">
@@ -397,14 +397,14 @@ create gateway-receiver [--group=value(,value)*] [--member=value(,value)*]
 </tr>
 <tr class="even">
 <td><span class="keyword parmname">\-\-start-port</span></td>
-<td><p>Starting port number to use when specifying the range of possible port numbers this gateway receiver will use to connects to gateway senders in other sites. Geode chooses an unused port number in the specified port number range to start the receiver. If no port numbers in the range are available, an exception is thrown.</p>
-<p>The <code class="ph codeph">STARTPORT</code> value is inclusive while the <code class="ph codeph">ENDPORT</code> value is exclusive. For example, if you specify <code class="ph codeph">STARTPORT=&quot;50510&quot;</code> and <code class="ph codeph">ENDPOINT=&quot;50520&quot;</code>, Geode chooses a port value from 50510 to 50519.</p></td>
+<td><p>Starting port number to use when specifying the range of possible port numbers this gateway receiver will use to connect to gateway senders in other sites. <%=vars.product_name%> chooses an unused port number in the specified port number range to start the receiver. If no port numbers in the range are available, an exception is thrown.</p>
+<p>The <code class="ph codeph">STARTPORT</code> value is inclusive while the <code class="ph codeph">ENDPORT</code> value is exclusive. For example, if you specify <code class="ph codeph">STARTPORT=&quot;50510&quot;</code> and <code class="ph codeph">ENDPORT=&quot;50520&quot;</code>, <%=vars.product_name%> chooses a port value from 50510 to 50519.</p></td>
 <td>5000</td>
 </tr>
 <tr class="odd">
 <td><span class="keyword parmname">\-\-end-port</span></td>
-<td><p>Defines the upper bound port number to use when specifying the range of possible port numbers this gateway receiver will use to for connections from gateway senders in other sites. Geode chooses an unused port number in the specified port number range to start the receiver. If no port numbers in the range are available, an exception is thrown.</p>
-<p>The <code class="ph codeph">ENDPORT</code> value is exclusive while the <code class="ph codeph">STARTPORT</code> value is inclusive. For example, if you specify <code class="ph codeph">STARTPORT=&quot;50510&quot;</code> and <code class="ph codeph">ENDPOINT=&quot;50520&quot;</code>, Geode chooses a port value from 50510 to 50519.</p></td>
+<td><p>Defines the upper bound port number to use when specifying the range of possible port numbers this gateway receiver will use for connections from gateway senders in other sites. <%=vars.product_name%> chooses an unused port number in the specified port number range to start the receiver. If no port numbers in the range are available, an exception is thrown.</p>
+<p>The <code class="ph codeph">ENDPORT</code> value is exclusive while the <code class="ph codeph">STARTPORT</code> value is inclusive. For example, if you specify <code class="ph codeph">STARTPORT=&quot;50510&quot;</code> and <code class="ph codeph">ENDPORT=&quot;50520&quot;</code>, <%=vars.product_name%> chooses a port value from 50510 to 50519.</p></td>
 <td>5500</td>
 </tr>
 <tr class="even">
@@ -452,7 +452,7 @@ Creates a gateway sender on one or more members of a distributed system.
 See [Gateway Senders](../../../topologies_and_comm/topology_concepts/multisite_overview.html#topic_9AA37B43642D4DE19072CA3367C849BA).
 
 **Note:**
-The gateway sender configuration for a specific sender `id` must be identical on each Geode member that hosts the gateway sender.
+The gateway sender configuration for a specific sender `id` must be identical on each <%=vars.product_name%> member that hosts the gateway sender.
 
 **Availability:** Online. You must be connected in `gfsh` to a JMX Manager member to use this command.
 
@@ -512,7 +512,7 @@ create gateway-sender --id=value --remote-distributed-system-id=value
 </tr>
 <tr class="even">
 <td><span class="keyword parmname">\-\-enable-batch-conflation</span></td>
-<td>Boolean value that determines whether Geode should conflate messages.</td>
+<td>Boolean value that determines whether <%=vars.product_name%> should conflate messages.</td>
 <td>false</td>
 </tr>
 <tr class="odd">
@@ -542,12 +542,12 @@ create gateway-sender --id=value --remote-distributed-system-id=value
 </tr>
 <tr class="even">
 <td><span class="keyword parmname">\-\-enable-persistence</span></td>
-<td>Boolean value that determines whether Geode persists the gateway queue.</td>
+<td>Boolean value that determines whether <%=vars.product_name%> persists the gateway queue.</td>
 <td>false</td>
 </tr>
 <tr class="odd">
 <td><span class="keyword parmname">\-\-disk-store-name</span></td>
-<td>Named disk store to use for storing the queue overflow, or for persisting the queue. If you specify a value, the named disk store must exist. If you specify a null value, Geode uses the default disk store for overflow and queue persistence.</td>
+<td>Named disk store to use for storing the queue overflow, or for persisting the queue. If you specify a value, the named disk store must exist. If you specify a null value, <%=vars.product_name%> uses the default disk store for overflow and queue persistence.</td>
 <td> </td>
 </tr>
 <tr class="even">
@@ -562,7 +562,7 @@ create gateway-sender --id=value --remote-distributed-system-id=value
 </tr>
 <tr class="even">
 <td><span class="keyword parmname">\-\-alert-threshold</span></td>
-<td>Maximum number of milliseconds that a region event can remain in the gateway sender queue before Geode logs an alert.</td>
+<td>Maximum number of milliseconds that a region event can remain in the gateway sender queue before <%=vars.product_name%> logs an alert.</td>
 <td>0</td>
 </tr>
 <tr class="odd">
@@ -578,7 +578,7 @@ create gateway-sender --id=value --remote-distributed-system-id=value
 <dt><b>thread</b></dt>
 <dd>When distributing region events from the local queue, multiple dispatcher threads preserve the order in which a given thread added region events to the queue.</dd>
 <dt><b>partition</b></dt>
-<dd>When distributing region events from the local queue, multiple dispatcher threads preserve the order in which region events were added to the local queue. For a partitioned region, this means that all region events delivered to a specific partition are delivered in the same order to the remote Geode site. For a distributed region, this means that all key updates delivered to the local gateway sender queue are distributed to the remote site in the same order.</dd>
+<dd>When distributing region events from the local queue, multiple dispatcher threads preserve the order in which region events were added to the local queue. For a partitioned region, this means that all region events delivered to a specific partition are delivered in the same order to the remote <%=vars.product_name%> site. For a distributed region, this means that all key updates delivered to the local gateway sender queue are distributed to the remote site in the same order.</dd>
 
 <p>You cannot configure the <code class="ph codeph">order-policy</code> for a parallel event queue, because parallel queues cannot preserve event ordering for regions. Only the ordering of events for a given partition (or in a given queue of a distributed region) can be preserved.</p></td>
 <td>key</td>
@@ -942,7 +942,7 @@ See [Region Data Storage and Distribution](../../../developing/region_options/ch
 </tr>
 <tr class="even">
 <td><span class="keyword parmname">\-\-compressor</span></td>
-<td>Java class name that implements compression for the region. You can write a custom compressor that implements <code class="ph codeph">org.apache.geode.compression.Compressor</code> or you can specify the Snappy compressor (<code class="ph codeph">org.apache.geode.compression.SnappyCompressor</code>), which is bundled with Geode. See <a href="../../../managing/region_compression/region_compression.html#topic_r43_wgc_gl">Region Compression</a>.</td>
+<td>Java class name that implements compression for the region. You can write a custom compressor that implements <code class="ph codeph">org.apache.geode.compression.Compressor</code> or you can specify the Snappy compressor (<code class="ph codeph">org.apache.geode.compression.SnappyCompressor</code>), which is bundled with <%=vars.product_name%>. See <a href="../../../managing/region_compression/region_compression.html#topic_r43_wgc_gl">Region Compression</a>.</td>
 <td>no compression</td>
 </tr>
 <tr class="odd">

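The inclusive/exclusive port-range behavior documented above can be illustrated with the table's own example values (the member name is hypothetical):

``` pre
gfsh>create gateway-receiver --member=server1 --start-port=50510 --end-port=50520
```

With this range, <%=vars.product_name%> selects an unused port from 50510 through 50519; 50520 itself is excluded.
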
http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/command-pages/export.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/command-pages/export.html.md.erb b/geode-docs/tools_modules/gfsh/command-pages/export.html.md.erb
index e9c79d3..4fe2140 100644
--- a/geode-docs/tools_modules/gfsh/command-pages/export.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/command-pages/export.html.md.erb
@@ -25,7 +25,7 @@ Export configurations, data, logs and stack-traces.
 
 -   **[export cluster-configuration](#topic_mdv_jgz_ck)**
 
-    Exports a cluster configuration zip file that contains the `cache.xml` files, `gemfire.properties` files, and application jar files needed to configure and operate a Geode distributed system.
+    Exports a cluster configuration zip file that contains the `cache.xml` files, `gemfire.properties` files, and application jar files needed to configure and operate a <%=vars.product_name%> distributed system.
 
 -   **[export config](#topic_C7C69306F93743459E65D46537F4A1EE)**
 
@@ -49,7 +49,7 @@ Export configurations, data, logs and stack-traces.
 
 ## <a id="topic_mdv_jgz_ck" class="no-quick-link"></a>export cluster-configuration
 
-Exports a cluster configuration zip file that contains the `cache.xml` files, `gemfire.properties` files, and application jar files needed to configure and operate a Geode distributed system.
+Exports a cluster configuration zip file that contains the `cache.xml` files, `gemfire.properties` files, and application jar files needed to configure and operate a <%=vars.product_name%> distributed system.
 
 **Availability:** Online. You must be connected in `gfsh` to a JMX Manager member to use this command.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/command-pages/list.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/command-pages/list.html.md.erb b/geode-docs/tools_modules/gfsh/command-pages/list.html.md.erb
index f7601b2..2933973 100644
--- a/geode-docs/tools_modules/gfsh/command-pages/list.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/command-pages/list.html.md.erb
@@ -21,7 +21,7 @@ limitations under the License.
 
 
 
-List existing Geode resources such as deployed applications, disk-stores, functions, members, servers, and regions.
+List existing <%=vars.product_name%> resources such as deployed applications, disk-stores, functions, members, servers, and regions.
 
 -   **[list async-event-queues](#topic_j22_kzk_2l)**
 
@@ -37,7 +37,7 @@ List existing Geode resources such as deployed applications, disk-stores, functi
 
 -   **[list disk-stores](#topic_BC14AD57EA304FB3845766898D01BD04)**
 
-    List all available disk stores across the Geode cluster
+    List all available disk stores across the <%=vars.product_name%> cluster.
 
 -   **[list durable-cqs](#topic_66016A698C334F4EBA19B99F51B0204B)**
 
@@ -65,7 +65,7 @@ List existing Geode resources such as deployed applications, disk-stores, functi
 
 -   **[list regions](#topic_F0ECEFF26086474498598035DD83C588)**
 
-    Display regions of a member or members. If no parameter is specified, all regions in the Geode distributed system are listed.
+    Display regions of a member or members. If no parameter is specified, all regions in the <%=vars.product_name%> distributed system are listed.
 
 ## <a id="topic_j22_kzk_2l" class="no-quick-link"></a>list async-event-queues
 
@@ -163,7 +163,7 @@ No JAR Files Found
 
 ## <a id="topic_BC14AD57EA304FB3845766898D01BD04" class="no-quick-link"></a>list disk-stores
 
-List all available disk stores across the Geode cluster
+List all available disk stores across the <%=vars.product_name%> cluster.
 
 The command also lists the configured disk directories and any Regions, Cache Servers, Gateways, PDX Serialization and Async Event Queues using Disk Stores to overflow and/or persist information to disk. Use the `describe disk-store` command to see the details for a particular Disk Store.
 
@@ -307,7 +307,7 @@ gfsh> list functions --matches=reconcile.*
   excalibur | reconcileDailyExpenses
 
 
-Example of 'list functions' when no functions are found in Geode :
+Example of 'list functions' when no functions are found in <%=vars.product_name%>:
 
 gfsh> list functions
 No Functions Found.
@@ -389,7 +389,7 @@ ps...        | 192...    | /producers  | pidIdx   | RANGE | id                 |
 
 **Error Messages:**
 
-Example of output when no indexes are found in Geode:
+Example of output when no indexes are found in <%=vars.product_name%>:
 
 ``` pre
 gfsh> list indexes
@@ -479,7 +479,7 @@ locator1 | 192.0.2.0(locator1:216:locator):33368
 
 ## <a id="topic_F0ECEFF26086474498598035DD83C588" class="no-quick-link"></a>list regions
 
-Display regions of a member or members. If no parameter is specified, all regions in the Geode distributed system are listed.
+Display regions of a member or members. If no parameter is specified, all regions in the <%=vars.product_name%> distributed system are listed.
 
 **Syntax:**
 

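A minimal usage sketch of `list regions` as summarized above (run with no parameters, it lists all regions in the distributed system):

``` pre
gfsh>list regions
```
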
http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/command-pages/query.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/command-pages/query.html.md.erb b/geode-docs/tools_modules/gfsh/command-pages/query.html.md.erb
index 8df594e..9708e7c 100644
--- a/geode-docs/tools_modules/gfsh/command-pages/query.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/command-pages/query.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Run queries against Geode regions.
+Run queries against <%=vars.product_name%> regions.
 
 If a limit restricting the result size is not set in the query,
 then a default limit of the gfsh environment variable `APP_FETCH_SIZE`,

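A hedged example of the `query` command described above (the region name and predicate are illustrative):

``` pre
gfsh>query --query="select * from /myRegion where id > 100"
```
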
http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/command-pages/show.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/command-pages/show.html.md.erb b/geode-docs/tools_modules/gfsh/command-pages/show.html.md.erb
index 881c1c6..1d2a0da 100644
--- a/geode-docs/tools_modules/gfsh/command-pages/show.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/command-pages/show.html.md.erb
@@ -25,7 +25,7 @@ Display deadlocks, logs, metrics and missing disk-stores.
 
 -   **[show dead-locks](#topic_1125347FAD6541DF995C9999650165B1)**
 
-    Display any deadlocks in the Geode distributed system.
+    Display any deadlocks in the <%=vars.product_name%> distributed system.
 
 -   **[show log](#topic_45AAEDAC3AFF46EC9BB68B24FC9A32B3)**
 
@@ -45,7 +45,7 @@ Display deadlocks, logs, metrics and missing disk-stores.
 
 ## <a id="topic_1125347FAD6541DF995C9999650165B1" class="no-quick-link"></a>show dead-locks
 
-Display any deadlocks in the Geode distributed system.
+Display any deadlocks in the <%=vars.product_name%> distributed system.
 
 **Availability:** Online. You must be connected in `gfsh` to a JMX Manager member to use this command.
 

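A usage sketch for `show dead-locks` as summarized above (the output file name is illustrative; the command writes its findings to the specified file):

``` pre
gfsh>show dead-locks --file=deadlocks.txt
```
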
http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/command-pages/start.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/command-pages/start.html.md.erb b/geode-docs/tools_modules/gfsh/command-pages/start.html.md.erb
index 17fefa5..e6a63dc 100644
--- a/geode-docs/tools_modules/gfsh/command-pages/start.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/command-pages/start.html.md.erb
@@ -45,11 +45,11 @@ Start servers, locators, gateway senders and gateway receivers, and monitoring t
 
 -   **[start pulse](#topic_E906BA7D9E7F4C5890FEFA7ECD40DD77)**
 
-    Launch the Geode Pulse monitoring dashboard tool in the user's default system browser and navigates the user to the landing page (login page).
+    Launch the <%=vars.product_name%> Pulse monitoring dashboard tool in the user's default system browser and navigate the user to the landing page (login page).
 
 -   **[start server](#topic_3764EE2DB18B4AE4A625E0354471738A)**
 
-    Start a Geode cache server process.
+    Start a <%=vars.product_name%> cache server process.
 
 ## <a id="topic_67738A5B68E84DEE95D1C92DAB2E26E5" class="no-quick-link"></a>start gateway-receiver
 
@@ -160,7 +160,7 @@ JConsole automatically connects to a running JMX Manager node if one is availabl
 
 Note that you must have a JDK installed (not just a JRE) and the correct PATH and JAVA\_HOME environment variables set.
 
-See [Browsing Geode MBeans through JConsole](../../../managing/management/mbeans_jconsole.html) for an example of using JConsole with the Geode management and monitoring system.
+See [Browsing <%=vars.product_name%> MBeans through JConsole](../../../managing/management/mbeans_jconsole.html) for an example of using JConsole with the <%=vars.product_name%> management and monitoring system.
 
 **Availability:** Online or offline.
 
@@ -209,7 +209,7 @@ The JConsole application appears and auto-connects to a JMX Manager node if one
 ``` pre
 An error occurred while launching JConsole = %1$s
 
-Connecting by the Geode member's name or ID is not currently supported.
+Connecting by the <%=vars.product_name%> member's name or ID is not currently supported.
 Please specify the member as '<hostname|IP>[PORT].
 
 An IO error occurred while launching JConsole.
@@ -256,7 +256,7 @@ The command creates a subdirectory and log file named after the locator. If the
 
 In addition, if gfsh is not already connected to a JMX Manager, the gfsh console will automatically connect to the new embedded JMX Manager started by the new locator.
 
-**Note:** When both `--max-heap` and `--initial-heap` are specified during locator startup, additional GC parameters are specified internally by Geode's Resource Manager. If you do not want the additional default GC properties set by the Resource Manager, then use the`-Xms` and `-Xmx` JVM options. See [Controlling Heap Use with the Resource Manager](../../../managing/heap_use/heap_management.html#configuring_resource_manager) for more information.
+**Note:** When both `--max-heap` and `--initial-heap` are specified during locator startup, additional GC parameters are specified internally by <%=vars.product_name%>'s Resource Manager. If you do not want the additional default GC properties set by the Resource Manager, then use the `-Xms` and `-Xmx` JVM options. See [Controlling Heap Use with the Resource Manager](../../../managing/heap_use/heap_management.html#configuring_resource_manager) for more information.
 
 **Availability:** Online or offline.
 
@@ -288,7 +288,7 @@ start locator --name=value [--bind-address=value] [--force(=value)] [--group=val
 <tbody>
 <tr class="odd">
 <td><span class="keyword parmname">\-\-name</span></td>
-<td>Name to be used for this Geode locator service. If not specified, gfsh generates a random name.</td>
+<td>Name to be used for this <%=vars.product_name%> locator service. If not specified, gfsh generates a random name.</td>
 <td> </td>
 </tr>
 <tr class="even">
@@ -313,7 +313,7 @@ start locator --name=value [--bind-address=value] [--force(=value)] [--group=val
 </tr>
 <tr class="even">
 <td><span class="keyword parmname">\-\-locators</span></td>
-<td>List of locators used by this locator to join the appropriate Geode cluster.</td>
+<td>List of locators used by this locator to join the appropriate <%=vars.product_name%> cluster.</td>
 <td> </td>
 </tr>
 <tr class="odd">
@@ -323,12 +323,12 @@ start locator --name=value [--bind-address=value] [--force(=value)] [--group=val
 </tr>
 <tr class="even">
 <td><span class="keyword parmname">\-\-mcast-address </span></td>
-<td>IP address or hostname used to bind the UPD socket for multi-cast networking so the locator can locate other members in the Geode cluster. If mcast-port is zero, then mcast-address is ignored.</td>
+<td>IP address or hostname used to bind the UDP socket for multi-cast networking so the locator can locate other members in the <%=vars.product_name%> cluster. If mcast-port is zero, then mcast-address is ignored.</td>
 <td> </td>
 </tr>
 <tr class="odd">
 <td><span class="keyword parmname">\-\-mcast-port</span></td>
-<td>Port used for multi-cast networking so the locator can locate other members of the Geode cluster. A zero value disables mcast.</td>
+<td>Port used for multi-cast networking so the locator can locate other members of the <%=vars.product_name%> cluster. A zero value disables mcast.</td>
 <td> </td>
 </tr>
 <tr class="even">
@@ -355,7 +355,7 @@ start locator --name=value [--bind-address=value] [--force(=value)] [--group=val
 <td><span class="keyword parmname">\-\-initial-heap</span></td>
 <td>Size has the same format as the <code class="ph codeph">-Xmx</code>/<code class="ph codeph">-Xms</code> JVM options.
 <div class="note note">
-<b>Note:</b> If you use the <code class="ph codeph">-J-Xms</code> and <code class="ph codeph">-J-Xmx</code> JVM properties instead of <code class="ph codeph">-initial-heap</code> and <code class="ph codeph">-max-heap</code>, then Geode does not use default JVM resource management properties. If you use the JVM properties, you must then specify all properties manually for eviction, garbage collection, heap percentage, and so forth.
+<b>Note:</b> If you use the <code class="ph codeph">-J-Xms</code> and <code class="ph codeph">-J-Xmx</code> JVM properties instead of <code class="ph codeph">-initial-heap</code> and <code class="ph codeph">-max-heap</code>, then <%=vars.product_name%> does not use default JVM resource management properties. If you use the JVM properties, you must then specify all properties manually for eviction, garbage collection, heap percentage, and so forth.
 </div></td>
 <td> </td>
 </tr>
@@ -363,7 +363,7 @@ start locator --name=value [--bind-address=value] [--force(=value)] [--group=val
 <td><span class="keyword parmname">\-\-max-heap</span></td>
 <td>Size has the same format as the <code class="ph codeph">-Xmx</code>/<code class="ph codeph">-Xms</code> JVM options
 <div class="note note">
-<b>Note:</b> If you use the <code class="ph codeph">-J-Xms</code> and <code class="ph codeph">-J-Xmx</code> JVM properties instead of <code class="ph codeph">-initial-heap</code> and <code class="ph codeph">-max-heap</code>, then Geode does not use default JVM resource management properties. If you use the JVM properties, you must then specify all properties manually for eviction, garbage collection, heap percentage, and so forth.
+<b>Note:</b> If you use the <code class="ph codeph">-J-Xms</code> and <code class="ph codeph">-J-Xmx</code> JVM properties instead of <code class="ph codeph">-initial-heap</code> and <code class="ph codeph">-max-heap</code>, then <%=vars.product_name%> does not use default JVM resource management properties. If you use the JVM properties, you must then specify all properties manually for eviction, garbage collection, heap percentage, and so forth.
 </div></td>
 <td> </td>
 </tr>
@@ -423,9 +423,9 @@ start locator --name=locator1
 
 ## <a id="topic_E906BA7D9E7F4C5890FEFA7ECD40DD77" class="no-quick-link"></a>start pulse
 
-Launch the Geode Pulse monitoring dashboard tool in the user's default system browser and navigates the user to the landing page (login page).
+Launch the <%=vars.product_name%> Pulse monitoring dashboard tool in the user's default system browser and navigate the user to the landing page (login page).
 
-For more information on Geode Pulse, see [Geode Pulse](../../pulse/pulse-overview.html).
+For more information on <%=vars.product_name%> Pulse, see [<%=vars.product_name%> Pulse](../../pulse/pulse-overview.html).
 
 **Availability:** Online or offline.
 
@@ -448,13 +448,13 @@ start pulse
 start pulse --url=http://gemfire.example.com:7070/pulse
 ```
 
-**Sample Output:** See [Geode Pulse](../../pulse/pulse-overview.html) for examples of Pulse.
+**Sample Output:** See [<%=vars.product_name%> Pulse](../../pulse/pulse-overview.html) for examples of Pulse.
 
 ## <a id="topic_3764EE2DB18B4AE4A625E0354471738A" class="no-quick-link"></a>start server
 
-Start a Geode cache server process.
+Start a <%=vars.product_name%> cache server process.
 
-**Note:** When both <span class="keyword parmname">\\-\\-max-heap</span> and <span class="keyword parmname">\\-\\-initial-heap</span> are specified during locator startup, additional GC parameters are specified internally by Geode's Resource Manager. If you do not want the additional default GC properties set by the Resource Manager, then use the `-Xms` and `-Xmx` JVM options. See [Controlling Heap Use with the Resource Manager](../../../managing/heap_use/heap_management.html#configuring_resource_manager) for more information.
+**Note:** When both <span class="keyword parmname">\\-\\-max-heap</span> and <span class="keyword parmname">\\-\\-initial-heap</span> are specified during server startup, additional GC parameters are specified internally by <%=vars.product_name%>'s Resource Manager. If you do not want the additional default GC properties set by the Resource Manager, then use the `-Xms` and `-Xmx` JVM options. See [Controlling Heap Use with the Resource Manager](../../../managing/heap_use/heap_management.html#configuring_resource_manager) for more information.
 
 **Availability:** Online or offline.
 
@@ -500,7 +500,7 @@ start server --name=value [--assign-buckets(=value)] [--bind-address=value]
 <tbody>
 <tr class="odd">
 <td><span class="keyword parmname">\-\-name</span></td>
-<td>Member name for this Geode Cache Server service. If not specified, gfsh generates a random name.</td>
+<td>Member name for this <%=vars.product_name%> Cache Server service. If not specified, gfsh generates a random name.</td>
 <td> </td>
 </tr>
 <tr class="even">
@@ -540,7 +540,7 @@ start server --name=value [--assign-buckets(=value)] [--bind-address=value]
 </tr>
 <tr class="odd">
 <td><span class="keyword parmname">\-\-enable-time-statistics</span></td>
-<td>Causes additional time-based statistics to be gathered for Geode operations.</td>
+<td>Causes additional time-based statistics to be gathered for <%=vars.product_name%> operations.</td>
 <td>true</td>
 </tr>
 <tr class="even">
@@ -565,7 +565,7 @@ start server --name=value [--assign-buckets(=value)] [--bind-address=value]
 </tr>
 <tr class="even">
 <td><span class="keyword parmname">\-\-locators </span></td>
-<td>Sets the list of locators used by the Cache Server to join the appropriate Geode cluster.</td>
+<td>Sets the list of locators used by the Cache Server to join the appropriate <%=vars.product_name%> cluster.</td>
 <td> </td>
 </tr>
 <tr class="odd">
@@ -580,12 +580,12 @@ start server --name=value [--assign-buckets(=value)] [--bind-address=value]
 </tr>
 <tr class="odd">
 <td><span class="keyword parmname">\-\-mcast-address</span></td>
-<td>The IP address or hostname used to bind the UDP socket for multi-cast networking so the Cache Server can locate other members in the Geode cluster. If mcast-port is zero, then mcast-address is ignored.</td>
+<td>The IP address or hostname used to bind the UDP socket for multi-cast networking so the Cache Server can locate other members in the <%=vars.product_name%> cluster. If mcast-port is zero, then mcast-address is ignored.</td>
 <td> </td>
 </tr>
 <tr class="even">
 <td><span class="keyword parmname">\-\-mcast-port</span></td>
-<td>Sets the port used for multi-cast networking so the Cache Server can locate other members of the Geode cluster. A zero value disables mcast.</td>
+<td>Sets the port used for multi-cast networking so the Cache Server can locate other members of the <%=vars.product_name%> cluster. A zero value disables mcast.</td>
 <td> </td>
 </tr>
 <tr class="odd">
@@ -610,13 +610,13 @@ start server --name=value [--assign-buckets(=value)] [--bind-address=value]
 </tr>
 <tr class="odd">
 <td><span class="keyword parmname">\-\-spring-xml-location</span></td>
-<td>Specifies the location of a Spring XML configuration file(s) for bootstrapping and configuring a Geode Server. This configuration file can exist on the CLASSPATH (default) or any location supported by Spring's Resource(Loader) location specifiers (for example, classpath:, file:, etc). ResourceLoader is described in the 
+<td>Specifies the location of one or more Spring XML configuration files for bootstrapping and configuring a <%=vars.product_name%> Server. These files can exist on the CLASSPATH (default) or in any location supported by Spring's Resource(Loader) location specifiers (for example, classpath: or file:). ResourceLoader is described in the 
 <a href="http://docs.spring.io/spring/docs/4.0.9.RELEASE/spring-framework-reference/htmlsingle/#resources-resourceloader">Spring documentation</a>.</td>
 <td> </td>
 </tr>
 <tr class="even">
 <td><span class="keyword parmname">\-\-rebalance</span></td>
-<td>Whether to initiate rebalancing across the Geode cluster.</td>
+<td>Whether to initiate rebalancing across the <%=vars.product_name%> cluster.</td>
 <td>false</td>
 </tr>
 <tr class="odd">
@@ -633,7 +633,7 @@ start server --name=value [--assign-buckets(=value)] [--bind-address=value]
 <td><span class="keyword parmname">\-\-initial-heap</span></td>
 <td>Initial size of the heap in the same format as the JVM -Xms parameter.
 <div class="note note">
-<b>Note:</b> If you use the <code class="ph codeph">--J=-Xms</code> and <code class="ph codeph">--J=-Xmx</code> JVM properties instead of <code class="ph codeph">--initial-heap</code> and <code class="ph codeph">--max-heap</code>, then Geode does not use default JVM resource management properties. If you use the JVM properties, you must then specify all properties manually for eviction, garbage collection, heap percentage, and so forth.
+<b>Note:</b> If you use the <code class="ph codeph">--J=-Xms</code> and <code class="ph codeph">--J=-Xmx</code> JVM properties instead of <code class="ph codeph">--initial-heap</code> and <code class="ph codeph">--max-heap</code>, then <%=vars.product_name%> does not use default JVM resource management properties. If you use the JVM properties, you must then specify all properties manually for eviction, garbage collection, heap percentage, and so forth.
 </div></td>
 <td> </td>
 </tr>
@@ -641,7 +641,7 @@ start server --name=value [--assign-buckets(=value)] [--bind-address=value]
 <td><span class="keyword parmname">\-\-max-heap</span></td>
 <td>Maximum size of the heap in the same format as the JVM -Xmx parameter.
 <div class="note note">
-<b>Note:</b> If you use the <code class="ph codeph">--J=-Xms</code> and <code class="ph codeph">--J=-Xmx</code> JVM properties instead of <code class="ph codeph">--initial-heap</code> and <code class="ph codeph">--max-heap</code>, then Geode does not use default JVM resource management properties. If you use the JVM properties, you must then specify all properties manually for eviction, garbage collection, heap percentage, and so forth.
+<b>Note:</b> If you use the <code class="ph codeph">--J=-Xms</code> and <code class="ph codeph">--J=-Xmx</code> JVM properties instead of <code class="ph codeph">--initial-heap</code> and <code class="ph codeph">--max-heap</code>, then <%=vars.product_name%> does not use default JVM resource management properties. If you use the JVM properties, you must then specify all properties manually for eviction, garbage collection, heap percentage, and so forth.
 </div></td>
 <td> </td>
 </tr>

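As the heap notes in the diff above indicate, `--initial-heap`/`--max-heap` and the pass-through `--J=-Xms`/`--J=-Xmx` options are mutually exclusive ways to size the server heap. A hypothetical invocation of each style (member names and sizes are illustrative only):

```
gfsh>start server --name=server1 --initial-heap=1G --max-heap=1G

gfsh>start server --name=server2 --J=-Xms1g --J=-Xmx1g
```

With the first form, Geode's Resource Manager also applies its default eviction and garbage-collection tuning; with the second, all such settings must be supplied explicitly as JVM options.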
http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/command-pages/status.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/command-pages/status.html.md.erb b/geode-docs/tools_modules/gfsh/command-pages/status.html.md.erb
index 01e4874..6129069 100644
--- a/geode-docs/tools_modules/gfsh/command-pages/status.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/command-pages/status.html.md.erb
@@ -21,7 +21,7 @@ limitations under the License.
 <a id="topic_7BCB054803CF48FE8688394C5C39000A"></a>
 
 
-Check the status of the cluster configuration service and Geode member processes, including locators, gateway receivers, gateway senders, and servers.
+Check the status of the cluster configuration service and <%=vars.product_name%> member processes, including locators, gateway receivers, gateway senders, and servers.
 
 -   **[status cluster-config-service](#topic_ts1_qb1_dk2)**
 
@@ -41,7 +41,7 @@ Check the status of the cluster configuration service and Geode member processes
 
 -   **[status server](#topic_E5DB49044978404D9D6B1971BF5D400D)**
 
-    Display the status of the specified Geode cache server.
+    Display the status of the specified <%=vars.product_name%> cache server.
 
 ## <a id="topic_ts1_qb1_dk2" class="no-quick-link"></a>status cluster-config-service
 
@@ -235,7 +235,7 @@ status locator --name=locator1
 
 ## <a id="topic_E5DB49044978404D9D6B1971BF5D400D" class="no-quick-link"></a>status server
 
-Display the status of the specified Geode cache server.
+Display the status of the specified <%=vars.product_name%> cache server.
 
 **Availability:** Online or offline. If you want to obtain the status of a server while you are offline, use the `--dir` option.
 
@@ -250,7 +250,7 @@ status server [--name=value] [--dir=value]
 | Name                                         | Description                                                                                                                                                                                                                                                                                                                                                                                                                           | Default Value     |
 |----------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------|
 | <span class="keyword parmname">&#8209;&#8209;name</span> | Name or ID of the Cache Server for which to display status. You must be connected to the JMX Manager to use this option. Can be used to obtain status of remote servers. See [Using gfsh to Manage a Remote Cluster Over HTTP or HTTPS](../../../configuring/cluster_config/gfsh_remote.html). |                   |
-| <span class="keyword parmname">\\-\\-dir </span> | Directory in which the Geode Cache Server was started.                                                                                                                                                                                                                                                                                                                                                   | current directory |
+| <span class="keyword parmname">\\-\\-dir </span> | Directory in which the <%=vars.product_name%> Cache Server was started.                                                                                                                                                                                                                                                                                                                                                   | current directory |
 
 <span class="tablecap">Table 4. Status Server Parameters</span>
 

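For instance, the two `status server` parameters documented above might be used as follows (the member name `server1` and the directory path are placeholders):

```
gfsh>status server --name=server1

gfsh>status server --dir=/home/user/server1
```

The `--name` form requires a connection to the JMX Manager and can query remote servers; the `--dir` form works offline against the server's working directory.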
http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/command-pages/stop.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/command-pages/stop.html.md.erb b/geode-docs/tools_modules/gfsh/command-pages/stop.html.md.erb
index e9d5820..f923dfc 100644
--- a/geode-docs/tools_modules/gfsh/command-pages/stop.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/command-pages/stop.html.md.erb
@@ -37,7 +37,7 @@ Stop gateway receivers, gateway senders, locators and servers.
 
 -   **[stop server](#topic_723EE395A63A40D6819618AFC2902115)**
 
-    Stop a Geode cache server.
+    Stop a <%=vars.product_name%> cache server.
 
 ## <a id="topic_CD1D526FD6F84A7B80B25C741129ED30" class="no-quick-link"></a>stop gateway-receiver
 
@@ -158,7 +158,7 @@ stop locator [--name=value] [--host=value] [--port=value] [--dir=value]
 
 | Name                                         | Description                                                                                                                                                                                                                                                                                                                                                                                                                                     | Default Value     |
 |----------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------|
-| <span class="keyword parmname">&#8209;&#8209;name</span> | The Geode member name or id of the Locator to stop. You must be connected to the JMX Manager to use this option. Can be used to stop remote locators. See [Using gfsh to Manage a Remote Cluster Over HTTP or HTTPS](../../../configuring/cluster_config/gfsh_remote.html). |                   |
+| <span class="keyword parmname">&#8209;&#8209;name</span> | The <%=vars.product_name%> member name or id of the Locator to stop. You must be connected to the JMX Manager to use this option. Can be used to stop remote locators. See [Using gfsh to Manage a Remote Cluster Over HTTP or HTTPS](../../../configuring/cluster_config/gfsh_remote.html). |                   |
 | <span class="keyword parmname">\\-\\-dir</span>  | Directory in which the Locator was started.                                                                                                                                                                                                                                                                                                                                                                                                     | current directory |
 
 <span class="tablecap">Table 3. Stop Locator Parameters</span>
@@ -171,7 +171,7 @@ stop locator [--name=value] [--host=value] [--port=value] [--dir=value]
 
 ## <a id="topic_723EE395A63A40D6819618AFC2902115" class="no-quick-link"></a>stop server
 
-Stop a Geode cache server.
+Stop a <%=vars.product_name%> cache server.
 
 **Availability:** Online or offline. If you want to stop a cache server while you are offline, use the `--dir` option.
 
@@ -185,8 +185,8 @@ stop server [--name=value] [--dir=value]
 
 | Name                                          | Description                                                                                                                                                                                                                                                                                                                                                                                                                           | Default Value     |
 |-----------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------|
-| <span class="keyword parmname">&#8209;&#8209;name </span> | Name/Id of the Geode Cache Server to stop. You must be connected to the JMX Manager to use this option. Can be used to stop remote servers. See [Using gfsh to Manage a Remote Cluster Over HTTP or HTTPS](../../../configuring/cluster_config/gfsh_remote.html). |                   |
-| <span class="keyword parmname">\\-\\-dir </span>  | Directory in which the Geode Cache Server was started.                                                                                                                                                                                                                                                                                                                                                   | current directory |
+| <span class="keyword parmname">&#8209;&#8209;name </span> | Name/Id of the <%=vars.product_name%> Cache Server to stop. You must be connected to the JMX Manager to use this option. Can be used to stop remote servers. See [Using gfsh to Manage a Remote Cluster Over HTTP or HTTPS](../../../configuring/cluster_config/gfsh_remote.html). |                   |
+| <span class="keyword parmname">\\-\\-dir </span>  | Directory in which the <%=vars.product_name%> Cache Server was started.                                                                                                                                                                                                                                                                                                                                                   | current directory |
 
 <span class="tablecap">Table 4. Stop Server Parameters</span>
 

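A hypothetical use of the two `stop server` parameter styles described above (name and directory are placeholders):

```
gfsh>stop server --name=server1

gfsh>stop server --dir=/home/user/server1
```

As with `status server`, `--name` requires a connection to the JMX Manager, while `--dir` can be used while offline.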
http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/configuring_gfsh.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/configuring_gfsh.html.md.erb b/geode-docs/tools_modules/gfsh/configuring_gfsh.html.md.erb
index 6c31756..a8cd350 100644
--- a/geode-docs/tools_modules/gfsh/configuring_gfsh.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/configuring_gfsh.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-The `gfsh.bat` and `gfsh` bash script automatically append the required Apache Geode and JDK .jar libraries to your existing CLASSPATH. There are user-configurable properties you can set for security, environment variables, logging, and troubleshooting.
+The `gfsh.bat` and `gfsh` bash script automatically append the required <%=vars.product_name_long%> and JDK .jar libraries to your existing CLASSPATH. There are user-configurable properties you can set for security, environment variables, logging, and troubleshooting.
 
 
 ## <a id="concept_3B9C6CE2F64841E98C33D9F6441DF487__section_0D2EEA7A9ED54DFDB2E1EE955E47921E" class="no-quick-link"></a>JAR Libraries in CLASSPATH
@@ -34,7 +34,7 @@ On some operating systems, you may need to ensure that the hostname of your mach
 
 ## <a id="concept_3B9C6CE2F64841E98C33D9F6441DF487__section_3FA4CD2B451B4A30A12D30DDE8DF8619" class="no-quick-link"></a>Configuring gfsh Security
 
-Since `gfsh` must connect to a JMX Manager member to run certain commands (namely those commands that manage and monitor other members), JMX Manager configuration properties can affect `gfsh` security. In `gemfire.properties`, the following Geode properties can affect `gfsh` connection settings to the JMX Manager:
+Since `gfsh` must connect to a JMX Manager member to run certain commands (namely those commands that manage and monitor other members), JMX Manager configuration properties can affect `gfsh` security. In `gemfire.properties`, the following <%=vars.product_name%> properties can affect `gfsh` connection settings to the JMX Manager:
 
 -   `jmx-manager-ssl`
 -   `jmx-manager-port`
@@ -110,9 +110,9 @@ A history of commands that have been executed successfully is logged in `.gfsh.h
 
 ## <a id="concept_3B9C6CE2F64841E98C33D9F6441DF487__section_C84414FF16AB4279A43A41C6C8B61A7E" class="no-quick-link"></a>JMX Manager Update Rate and System Monitoring
 
-When you perform data operations (such as put) and then monitor the state of the system (such as using the gfsh `show metrics` command or Apache Geode Pulse), the monitored system may not immediately reflect the most recent operations. For example, if you perform a put operation and then immediately execute the `show metrics` gfsh command, you may not see the correct number of entries in the region. The management layer updates every 2 seconds. Wait a few seconds after performing operational activity to see the most accurate results.
+When you perform data operations (such as put) and then monitor the state of the system (such as using the gfsh `show metrics` command or <%=vars.product_name%> Pulse), the monitored system may not immediately reflect the most recent operations. For example, if you perform a put operation and then immediately execute the `show metrics` gfsh command, you may not see the correct number of entries in the region. The management layer updates every 2 seconds. Wait a few seconds after performing operational activity to see the most accurate results.
 
-You can modify the `jmx-manager-update-rate` property in `gemfire.properties` to increase or decrease the rate (specified in milliseconds) at which updates are pushed to the JMX Manager. This property setting should be greater than or equal to the `statistic-sample-rate`. You may want to increase this rate if you are experiencing performance issues; however, setting this value too high will cause stale values to be seen in `gfsh` and Apache Geode Pulse.
+You can modify the `jmx-manager-update-rate` property in `gemfire.properties` to increase or decrease the rate (specified in milliseconds) at which updates are pushed to the JMX Manager. This property setting should be greater than or equal to the `statistic-sample-rate`. You may want to increase this rate if you are experiencing performance issues; however, setting this value too high will cause stale values to be seen in `gfsh` and <%=vars.product_name%> Pulse.
 
 ## Formatting of Results
 

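The JMX Manager properties discussed in the configuring_gfsh diff above all live in `gemfire.properties`. A sketch of such a file (all values are illustrative; the update rate is kept at or above the sample rate, as the text recommends):

```
# gemfire.properties (illustrative values)
jmx-manager=true
jmx-manager-port=1099
jmx-manager-ssl=false
# Push updates to the JMX Manager every 2000 ms (the documented default
# interval); keep this >= statistic-sample-rate to avoid stale metrics
# in gfsh and Pulse.
jmx-manager-update-rate=2000
statistic-sample-rate=1000
```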
http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/getting_started_gfsh.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/getting_started_gfsh.html.md.erb b/geode-docs/tools_modules/gfsh/getting_started_gfsh.html.md.erb
index 3a2b8f9..c09add0 100644
--- a/geode-docs/tools_modules/gfsh/getting_started_gfsh.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/getting_started_gfsh.html.md.erb
@@ -25,7 +25,7 @@ The `gfsh` utility provides useful features for a shell environment, including c
 
 **To view a list of available gfsh commands, press Tab at an empty prompt.**
 
-The list of commands you see depends on whether you are connected to a Geode distributed system. If you are not connected, you see a list of local commands that are available.
+The list of commands you see depends on whether you are connected to a <%=vars.product_name%> distributed system. If you are not connected, you see a list of local commands that are available.
 
 **Use the hint command to get information on a particular topic.**
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/gfsh_command_index.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/gfsh_command_index.html.md.erb b/geode-docs/tools_modules/gfsh/gfsh_command_index.html.md.erb
index e921fdc..2798096 100644
--- a/geode-docs/tools_modules/gfsh/gfsh_command_index.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/gfsh_command_index.html.md.erb
@@ -23,7 +23,7 @@ This section provides help and usage information on all `gfsh` commands, listed
 
 -   **[alter](../../tools_modules/gfsh/command-pages/alter.html)**
 
-    Modify an existing Geode resource.
+    Modify an existing <%=vars.product_name%> resource.
 
 -   **[backup disk-store](../../tools_modules/gfsh/command-pages/backup.html)**
 
@@ -127,7 +127,7 @@ This section provides help and usage information on all `gfsh` commands, listed
 
 -   **[list](../../tools_modules/gfsh/command-pages/list.html)**
 
-    List existing Geode resources such as deployed applications, disk-stores, functions, members, servers, and regions.
+    List existing <%=vars.product_name%> resources such as deployed applications, disk-stores, functions, members, servers, and regions.
 
 -   **[load-balance gateway-sender](../../tools_modules/gfsh/command-pages/load-balance.html)**
 
@@ -155,7 +155,7 @@ This section provides help and usage information on all `gfsh` commands, listed
 
 -   **[query](../../tools_modules/gfsh/command-pages/query.html)**
 
-    Run queries against Geode regions.
+    Run queries against <%=vars.product_name%> regions.
 
 -   **[rebalance](../../tools_modules/gfsh/command-pages/rebalance.html)**
 
@@ -203,7 +203,7 @@ This section provides help and usage information on all `gfsh` commands, listed
 
 -   **[status](../../tools_modules/gfsh/command-pages/status.html)**
 
-    Check the status of the cluster configuration service and Geode member processes, including locators, gateway receivers, gateway senders, and servers.
+    Check the status of the cluster configuration service and <%=vars.product_name%> member processes, including locators, gateway receivers, gateway senders, and servers.
 
 -   **[stop](../../tools_modules/gfsh/command-pages/stop.html)**
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/gfsh/gfsh_quick_reference.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/gfsh/gfsh_quick_reference.html.md.erb b/geode-docs/tools_modules/gfsh/gfsh_quick_reference.html.md.erb
index 4c78194..03a08ee 100644
--- a/geode-docs/tools_modules/gfsh/gfsh_quick_reference.html.md.erb
+++ b/geode-docs/tools_modules/gfsh/gfsh_quick_reference.html.md.erb
@@ -25,7 +25,7 @@ This quick reference sorts all commands into functional areas.
 
 Click a command to see additional information, including syntax, a list of options, and examples.
 
--   **[Basic Geode gfsh Commands](quick_ref_commands_by_area.html#topic_77DA6E3929404EB4AC24230CC7C21493)**
+-   **[Basic <%=vars.product_name%> gfsh Commands](quick_ref_commands_by_area.html#topic_77DA6E3929404EB4AC24230CC7C21493)**
 
 -   **[Configuration Commands](quick_ref_commands_by_area.html#topic_EB854534301A477BB01058B3B142AE1D)**
 
@@ -41,7 +41,7 @@ Click a command to see additional information, including syntax, a list of optio
 
 -   **[Gateway (WAN) Commands](quick_ref_commands_by_area.html#topic_F0AE5CE40D6D49BF92247F5EF4F871D2)**
 
--   **[Geode Monitoring Commands](quick_ref_commands_by_area.html#topic_B742E9E862BA457082E2346581C97D03)**
+-   **[<%=vars.product_name%> Monitoring Commands](quick_ref_commands_by_area.html#topic_B742E9E862BA457082E2346581C97D03)**
 
 -   **[Index Commands](quick_ref_commands_by_area.html#topic_688C66526B4649AFA51C0F72F34FA45E)**
 


[47/51] [abbrv] geode git commit: Geode-3466 User Guide: Add WAN caveat to Delta Propagation section

Posted by kl...@apache.org.
Geode-3466 User Guide: Add WAN caveat to Delta Propagation section


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/36daa9a1
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/36daa9a1
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/36daa9a1

Branch: refs/heads/feature/GEODE-1279
Commit: 36daa9a14a4aca9d6418f7173a3293e2e4c687fd
Parents: ed9a8fd
Author: Dave Barnes <db...@pivotal.io>
Authored: Fri Aug 18 15:25:32 2017 -0700
Committer: Dave Barnes <db...@pivotal.io>
Committed: Fri Aug 18 15:25:32 2017 -0700

----------------------------------------------------------------------
 .../delta_propagation/how_delta_propagation_works.html.md.erb      | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/36daa9a1/geode-docs/developing/delta_propagation/how_delta_propagation_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/delta_propagation/how_delta_propagation_works.html.md.erb b/geode-docs/developing/delta_propagation/how_delta_propagation_works.html.md.erb
index 3609734..fa13a1c 100644
--- a/geode-docs/developing/delta_propagation/how_delta_propagation_works.html.md.erb
+++ b/geode-docs/developing/delta_propagation/how_delta_propagation_works.html.md.erb
@@ -66,4 +66,4 @@ The following topologies support delta propagation (with some limitations):
     -   When the client's `gemfire.properties` setting `conflate-events` is set to true, the servers send full values for all regions.
     -   When the server region attribute `enable-subscription-conflation` is set to true and the client `gemfire.properties` setting `conflate-events` is set to `server`, the servers send full values for the region.
     -   When the client region is configured with the `PROXY` client region shortcut setting (empty client region), servers send full values.
-
+-   **Multi-site (WAN)**. Gateway senders do not send Deltas. The full value is always sent.


[51/51] [abbrv] geode git commit: GEODE-1279: rename tests with old bug system numbers

Posted by kl...@apache.org.
GEODE-1279: rename tests with old bug system numbers

* Bug34387DUnitTest -> CreateAndLocalDestroyInTXRegressionTest
* Bug35214DUnitTest -> EntriesDoNotExpireDuringGIIRegressionTest
* Bug38013DUnitTest -> RemotePRValuesAreNotDeserializedRegressionTest
* Bug34948DUnitTest -> ValuesAreLazilyDeserializedRegressionTest


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/23c4126a
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/23c4126a
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/23c4126a

Branch: refs/heads/feature/GEODE-1279
Commit: 23c4126a653fa6009a3c243fda6424d9427ae92c
Parents: d809076
Author: Kirk Lund <kl...@apache.org>
Authored: Wed May 24 13:09:11 2017 -0700
Committer: Kirk Lund <kl...@apache.org>
Committed: Fri Aug 18 17:09:30 2017 -0700

----------------------------------------------------------------------
 .../apache/geode/cache30/Bug34387DUnitTest.java | 188 ----------------
 .../apache/geode/cache30/Bug34948DUnitTest.java | 157 -------------
 .../apache/geode/cache30/Bug35214DUnitTest.java | 220 -------------------
 .../apache/geode/cache30/Bug38013DUnitTest.java | 150 -------------
 ...CreateAndLocalDestroyInTXRegressionTest.java | 166 ++++++++++++++
 ...triesDoNotExpireDuringGIIRegressionTest.java | 207 +++++++++++++++++
 ...RValuesAreNotDeserializedRegressionTest.java | 161 ++++++++++++++
 ...luesAreLazilyDeserializedRegressionTest.java | 166 ++++++++++++++
 .../cache/ConnectDisconnectDUnitTest.java       | 148 +++++--------
 .../dunit/internal/DistributedTestFixture.java  |  16 +-
 .../internal/JUnit3DistributedTestCase.java     |  62 ++----
 .../internal/JUnit4DistributedTestCase.java     | 123 ++++-------
 12 files changed, 827 insertions(+), 937 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/23c4126a/geode-core/src/test/java/org/apache/geode/cache30/Bug34387DUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/cache30/Bug34387DUnitTest.java b/geode-core/src/test/java/org/apache/geode/cache30/Bug34387DUnitTest.java
deleted file mode 100644
index d43be83..0000000
--- a/geode-core/src/test/java/org/apache/geode/cache30/Bug34387DUnitTest.java
+++ /dev/null
@@ -1,188 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
- * agreements. See the NOTICE file distributed with this work for additional information regarding
- * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License. You may obtain a
- * copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed under the License
- * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
- * or implied. See the License for the specific language governing permissions and limitations under
- * the License.
- */
-package org.apache.geode.cache30;
-
-import org.junit.experimental.categories.Category;
-import org.junit.Test;
-
-import static org.junit.Assert.*;
-
-import org.apache.geode.test.dunit.cache.internal.JUnit4CacheTestCase;
-import org.apache.geode.test.dunit.internal.JUnit4DistributedTestCase;
-import org.apache.geode.test.junit.categories.DistributedTest;
-
-import org.apache.geode.cache.AttributesFactory;
-import org.apache.geode.cache.CacheException;
-import org.apache.geode.cache.CacheListener;
-import org.apache.geode.cache.CacheTransactionManager;
-import org.apache.geode.cache.DataPolicy;
-import org.apache.geode.cache.EntryEvent;
-import org.apache.geode.cache.Region;
-import org.apache.geode.cache.Scope;
-import org.apache.geode.cache.UnsupportedOperationInTransactionException;
-import org.apache.geode.cache.util.CacheListenerAdapter;
-import org.apache.geode.distributed.DistributedMember;
-import org.apache.geode.distributed.internal.InternalDistributedSystem;
-import org.apache.geode.internal.i18n.LocalizedStrings;
-import org.apache.geode.test.dunit.Host;
-import org.apache.geode.test.dunit.VM;
-
-/**
- * Test create + localDestroy for bug 34387
- *
- * @since GemFire 5.0
- */
-@Category(DistributedTest.class)
-public class Bug34387DUnitTest extends JUnit4CacheTestCase {
-
-  // private transient Region r;
-  // private transient DistributedMember otherId;
-  protected transient int invokeCount;
-
-  static volatile boolean callbackFailure;
-
-  public Bug34387DUnitTest() {
-    super();
-  }
-
-  protected static void callbackAssertEquals(String message, Object expected, Object actual) {
-    if (expected == null && actual == null)
-      return;
-    if (expected != null && expected.equals(actual))
-      return;
-    callbackFailure = true;
-    // Throws an error that is ignored, but...
-    assertEquals(message, expected, actual);
-  }
-
-
-  private VM getOtherVm() {
-    Host host = Host.getHost(0);
-    return host.getVM(0);
-  }
-
-  private void initOtherId() {
-    VM vm = getOtherVm();
-    vm.invoke(new CacheSerializableRunnable("Connect") {
-      public void run2() throws CacheException {
-        getCache();
-      }
-    });
-    vm.invoke(() -> Bug34387DUnitTest.getVMDistributedMember());
-  }
-
-  private void doCommitOtherVm(final boolean doDestroy) {
-    VM vm = getOtherVm();
-    vm.invoke(new CacheSerializableRunnable("create root") {
-      public void run2() throws CacheException {
-        AttributesFactory af = new AttributesFactory();
-        af.setScope(Scope.DISTRIBUTED_ACK);
-        af.setConcurrencyChecksEnabled(true);
-        Region r1 = createRootRegion("r1", af.create());
-        CacheTransactionManager ctm = getCache().getCacheTransactionManager();
-        ctm.begin();
-        r1.create("createKey", "createValue");
-        if (doDestroy) {
-          try {
-            r1.localDestroy("createKey");
-            fail("expected exception not thrown");
-          } catch (UnsupportedOperationInTransactionException e) {
-            assertEquals(e.getMessage(),
-                LocalizedStrings.TXStateStub_LOCAL_DESTROY_NOT_ALLOWED_IN_TRANSACTION
-                    .toLocalizedString());
-          }
-        } else {
-          try {
-            r1.localInvalidate("createKey");
-            fail("expected exception not thrown");
-          } catch (UnsupportedOperationInTransactionException e) {
-            assertEquals(e.getMessage(),
-                LocalizedStrings.TXStateStub_LOCAL_INVALIDATE_NOT_ALLOWED_IN_TRANSACTION
-                    .toLocalizedString());
-          }
-        }
-        ctm.commit();
-      }
-    });
-  }
-
-  public static DistributedMember getVMDistributedMember() {
-    return InternalDistributedSystem.getAnyInstance().getDistributedMember();
-  }
-
-  ////////////////////// Test Methods //////////////////////
-
-  /**
-   * test create followed by localDestroy
-   */
-  @Test
-  public void testCreateAndLD() throws CacheException {
-    initOtherId();
-    AttributesFactory af = new AttributesFactory();
-    af.setDataPolicy(DataPolicy.REPLICATE);
-    af.setScope(Scope.DISTRIBUTED_ACK);
-    af.setConcurrencyChecksEnabled(true);
-    callbackFailure = false;
-
-    CacheListener cl1 = new CacheListenerAdapter() {
-      public void afterCreate(EntryEvent e) {
-        callbackAssertEquals("Keys not equal", "createKey", e.getKey());
-        callbackAssertEquals("Values not equal", "createValue", e.getNewValue());
-        Bug34387DUnitTest.this.invokeCount++;
-      }
-    };
-    af.addCacheListener(cl1);
-    Region r1 = createRootRegion("r1", af.create());
-
-    this.invokeCount = 0;
-    assertNull(r1.getEntry("createKey"));
-    doCommitOtherVm(true);
-    assertNotNull(r1.getEntry("createKey"));
-    assertEquals("createValue", r1.getEntry("createKey").getValue());
-    assertEquals(1, this.invokeCount);
-    assertFalse("Errors in callbacks; check logs for details", callbackFailure);
-  }
-
-  /**
-   * test create followed by localInvalidate
-   */
-  @Test
-  public void testCreateAndLI() throws CacheException {
-    initOtherId();
-    AttributesFactory af = new AttributesFactory();
-    af.setDataPolicy(DataPolicy.REPLICATE);
-    af.setScope(Scope.DISTRIBUTED_ACK);
-    af.setConcurrencyChecksEnabled(true);
-    callbackFailure = false;
-
-    CacheListener cl1 = new CacheListenerAdapter() {
-      public void afterCreate(EntryEvent e) {
-        callbackAssertEquals("key not equal", "createKey", e.getKey());
-        callbackAssertEquals("value not equal", "createValue", e.getNewValue());
-        Bug34387DUnitTest.this.invokeCount++;
-      }
-    };
-    af.addCacheListener(cl1);
-    Region r1 = createRootRegion("r1", af.create());
-
-    this.invokeCount = 0;
-    assertNull(r1.getEntry("createKey"));
-    doCommitOtherVm(false);
-    assertNotNull(r1.getEntry("createKey"));
-    assertEquals("createValue", r1.getEntry("createKey").getValue());
-    assertEquals(1, this.invokeCount);
-    assertFalse("Errors in callbacks; check logs for details", callbackFailure);
-  }
-}
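The deleted test above funnels assertion failures raised on cache-listener callback threads through a static `callbackFailure` flag, because an `AssertionError` thrown on a callback thread would never reach the test thread; the renamed replacement switches to a JUnit `ErrorCollector` rule for the same purpose. A minimal plain-Java sketch of the pattern, with no Geode or JUnit dependencies (the `CallbackFailures` and `CallbackFailureDemo` names are illustrative, not from the commit):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Records failures observed on callback threads so the test thread can
// fail the test afterwards -- the role callbackAssertEquals plays above.
class CallbackFailures {
  private final List<String> failures = new CopyOnWriteArrayList<>();

  void assertEquals(String message, Object expected, Object actual) {
    if (expected == null ? actual == null : expected.equals(actual)) {
      return; // values match, nothing to record
    }
    failures.add(message + ": expected <" + expected + "> but was <" + actual + ">");
  }

  // Called on the test thread once callbacks are done.
  void rethrow() {
    if (!failures.isEmpty()) {
      throw new AssertionError(String.join("; ", failures));
    }
  }
}

public class CallbackFailureDemo {
  public static void main(String[] args) throws InterruptedException {
    CallbackFailures collector = new CallbackFailures();
    // Simulate a listener callback running on another thread.
    Thread callback =
        new Thread(() -> collector.assertEquals("Keys not equal", "createKey", "wrongKey"));
    callback.start();
    callback.join();
    boolean surfaced = false;
    try {
      collector.rethrow();
    } catch (AssertionError expected) {
      surfaced = true; // the callback failure reaches the test thread here
    }
    System.out.println(surfaced);
  }
}
```

JUnit's `ErrorCollector` generalizes this: `checkThat` records a failure without stopping the callback, and the rule rethrows collected failures when the test completes.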

http://git-wip-us.apache.org/repos/asf/geode/blob/23c4126a/geode-core/src/test/java/org/apache/geode/cache30/Bug34948DUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/cache30/Bug34948DUnitTest.java b/geode-core/src/test/java/org/apache/geode/cache30/Bug34948DUnitTest.java
deleted file mode 100644
index 8b98cd3..0000000
--- a/geode-core/src/test/java/org/apache/geode/cache30/Bug34948DUnitTest.java
+++ /dev/null
@@ -1,157 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
- * agreements. See the NOTICE file distributed with this work for additional information regarding
- * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License. You may obtain a
- * copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed under the License
- * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
- * or implied. See the License for the specific language governing permissions and limitations under
- * the License.
- */
-package org.apache.geode.cache30;
-
-import org.junit.experimental.categories.Category;
-import org.junit.Test;
-
-import static org.junit.Assert.*;
-
-import org.apache.geode.test.dunit.cache.internal.JUnit4CacheTestCase;
-import org.apache.geode.test.dunit.internal.JUnit4DistributedTestCase;
-import org.apache.geode.test.junit.categories.DistributedTest;
-
-import java.io.DataInput;
-import java.io.DataOutput;
-import java.io.IOException;
-
-import org.apache.geode.DataSerializable;
-import org.apache.geode.DataSerializer;
-import org.apache.geode.cache.AttributesFactory;
-import org.apache.geode.cache.CacheException;
-import org.apache.geode.cache.CacheListener;
-import org.apache.geode.cache.DataPolicy;
-import org.apache.geode.cache.EntryEvent;
-import org.apache.geode.cache.Region;
-import org.apache.geode.cache.Scope;
-import org.apache.geode.cache.util.CacheListenerAdapter;
-import org.apache.geode.distributed.DistributedMember;
-import org.apache.geode.distributed.DistributedSystem;
-import org.apache.geode.distributed.internal.InternalDistributedSystem;
-import org.apache.geode.test.dunit.Host;
-import org.apache.geode.test.dunit.VM;
-
-/**
- * Test to make sure cache values are lazily deserialized
- *
- * @since GemFire 5.0
- */
-@Category(DistributedTest.class)
-public class Bug34948DUnitTest extends JUnit4CacheTestCase {
-
-  public Bug34948DUnitTest() {
-    super();
-  }
-
-  ////////////////////// Test Methods //////////////////////
-
-  private VM getOtherVm() {
-    Host host = Host.getHost(0);
-    return host.getVM(0);
-  }
-
-  static protected Object lastCallback = null;
-
-  private void doCreateOtherVm() {
-    VM vm = getOtherVm();
-    vm.invoke(new CacheSerializableRunnable("create root") {
-      public void run2() throws CacheException {
-        getSystem();
-        AttributesFactory af = new AttributesFactory();
-        af.setScope(Scope.DISTRIBUTED_ACK);
-        af.setDataPolicy(DataPolicy.PRELOADED);
-        CacheListener cl = new CacheListenerAdapter() {
-          public void afterCreate(EntryEvent event) {
-            // getLogWriter().info("afterCreate " + event.getKey());
-            if (event.getCallbackArgument() != null) {
-              lastCallback = event.getCallbackArgument();
-            }
-          }
-
-          public void afterUpdate(EntryEvent event) {
-            // getLogWriter().info("afterUpdate " + event.getKey());
-            if (event.getCallbackArgument() != null) {
-              lastCallback = event.getCallbackArgument();
-            }
-          }
-
-          public void afterInvalidate(EntryEvent event) {
-            if (event.getCallbackArgument() != null) {
-              lastCallback = event.getCallbackArgument();
-            }
-          }
-
-          public void afterDestroy(EntryEvent event) {
-            if (event.getCallbackArgument() != null) {
-              lastCallback = event.getCallbackArgument();
-            }
-          }
-        };
-        af.setCacheListener(cl);
-        createRootRegion("bug34948", af.create());
-      }
-    });
-  }
-
-  /**
-   * Make sure that value is only deserialized in cache whose application asks for the value.
-   */
-  @Test
-  public void testBug34948() throws CacheException {
-    final AttributesFactory factory = new AttributesFactory();
-    factory.setScope(Scope.DISTRIBUTED_ACK);
-    factory.setDataPolicy(DataPolicy.PRELOADED);
-    final Region r = createRootRegion("bug34948", factory.create());
-
-    // before gii
-    r.put("key1", new HomeBoy());
-
-    doCreateOtherVm();
-
-    // after gii
-    r.put("key2", new HomeBoy());
-
-    r.localDestroy("key1");
-    r.localDestroy("key2");
-
-    Object o = r.get("key1");
-    assertTrue(r.get("key1") instanceof HomeBoy);
-    assertTrue(r.get("key2") == null); // preload will not distribute
-
-    // @todo darrel: add putAll test once it does not deserialize
-  }
-
-  public static class HomeBoy implements DataSerializable {
-    public HomeBoy() {}
-
-    public void toData(DataOutput out) throws IOException {
-      DistributedMember me = InternalDistributedSystem.getAnyInstance().getDistributedMember();
-      DataSerializer.writeObject(me, out);
-    }
-
-    public void fromData(DataInput in) throws IOException, ClassNotFoundException {
-      DistributedSystem ds = InternalDistributedSystem.getAnyInstance();
-      DistributedMember me = ds.getDistributedMember();
-      DistributedMember hb = (DistributedMember) DataSerializer.readObject(in);
-      if (me.equals(hb)) {
-        ds.getLogWriter().info("HomeBoy was deserialized on his home");
-      } else {
-        String msg = "HomeBoy was deserialized on " + me + " instead of his home " + hb;
-        ds.getLogWriter().error(msg);
-        throw new IllegalStateException(msg);
-      }
-    }
-  }
-}
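The deleted `Bug34948DUnitTest` above verifies that a cached value stays in serialized form until the application in some member actually reads it (`HomeBoy.fromData` throws if deserialization happens anywhere but the reading member). A toy illustration of that storage strategy, assuming plain JDK serialization rather than Geode's `DataSerializable` (the `LazyCache` name and counter are illustrative only):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;
import java.util.HashMap;
import java.util.Map;

// A toy cache that keeps values as bytes and deserializes only on get(),
// so storing or replicating an entry never pays the deserialization cost.
public class LazyCache {
  private final Map<String, byte[]> store = new HashMap<>();
  static int deserializations = 0; // observable for the demo

  public void put(String key, Serializable value) {
    try {
      ByteArrayOutputStream bos = new ByteArrayOutputStream();
      try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
        oos.writeObject(value);
      }
      store.put(key, bos.toByteArray()); // stored as bytes, not as an object
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  public Object get(String key) {
    byte[] bytes = store.get(key);
    if (bytes == null) {
      return null;
    }
    deserializations++; // deserialization happens only here
    try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
      return ois.readObject();
    } catch (IOException | ClassNotFoundException e) {
      throw new IllegalStateException(e);
    }
  }
}
```

In the real test, `HomeBoy.fromData` enforces the same invariant at the cluster level: only the member whose application calls `get` ever runs the deserialization code.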

http://git-wip-us.apache.org/repos/asf/geode/blob/23c4126a/geode-core/src/test/java/org/apache/geode/cache30/Bug35214DUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/cache30/Bug35214DUnitTest.java b/geode-core/src/test/java/org/apache/geode/cache30/Bug35214DUnitTest.java
deleted file mode 100644
index ed25b26..0000000
--- a/geode-core/src/test/java/org/apache/geode/cache30/Bug35214DUnitTest.java
+++ /dev/null
@@ -1,220 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
- * agreements. See the NOTICE file distributed with this work for additional information regarding
- * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License. You may obtain a
- * copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed under the License
- * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
- * or implied. See the License for the specific language governing permissions and limitations under
- * the License.
- */
-package org.apache.geode.cache30;
-
-import org.junit.experimental.categories.Category;
-import org.junit.Test;
-
-import static org.junit.Assert.*;
-
-import org.apache.geode.test.dunit.cache.internal.JUnit4CacheTestCase;
-import org.apache.geode.test.dunit.internal.JUnit4DistributedTestCase;
-import org.apache.geode.test.junit.categories.DistributedTest;
-
-import org.apache.geode.SystemFailure;
-import org.apache.geode.cache.AttributesFactory;
-import org.apache.geode.cache.CacheException;
-import org.apache.geode.cache.CacheListener;
-import org.apache.geode.cache.DataPolicy;
-import org.apache.geode.cache.EntryEvent;
-import org.apache.geode.cache.ExpirationAction;
-import org.apache.geode.cache.ExpirationAttributes;
-import org.apache.geode.cache.Region;
-import org.apache.geode.cache.RegionEvent;
-import org.apache.geode.cache.Scope;
-import org.apache.geode.cache.util.CacheListenerAdapter;
-import org.apache.geode.internal.cache.LocalRegion;
-import org.apache.geode.test.dunit.Assert;
-import org.apache.geode.test.dunit.AsyncInvocation;
-import org.apache.geode.test.dunit.Host;
-import org.apache.geode.test.dunit.LogWriterUtils;
-import org.apache.geode.test.dunit.ThreadUtils;
-import org.apache.geode.test.dunit.VM;
-import org.apache.geode.test.dunit.Wait;
-import org.apache.geode.test.dunit.WaitCriterion;
-
-/**
- * Make sure entry expiration does not happen during gii for bug 35214
- *
- * @since GemFire 5.0
- */
-@Category(DistributedTest.class)
-public class Bug35214DUnitTest extends JUnit4CacheTestCase {
-
-  protected volatile int expirationCount = 0;
-
-  private final static int ENTRY_COUNT = 100;
-
-  protected static volatile boolean callbackFailure;
-
-  public Bug35214DUnitTest() {
-    super();
-  }
-
-  private VM getOtherVm() {
-    Host host = Host.getHost(0);
-    return host.getVM(0);
-  }
-
-  private void initOtherVm() {
-    VM vm = getOtherVm();
-    vm.invoke(new CacheSerializableRunnable("init") {
-      public void run2() throws CacheException {
-        getCache();
-        AttributesFactory af = new AttributesFactory();
-        af.setScope(Scope.DISTRIBUTED_ACK);
-        Region r1 = createRootRegion("r1", af.create());
-        for (int i = 1; i <= ENTRY_COUNT; i++) {
-          r1.put("key" + i, "value" + i);
-        }
-      }
-    });
-  }
-
-  private AsyncInvocation updateOtherVm() throws Throwable {
-    VM vm = getOtherVm();
-    AsyncInvocation otherUpdater = vm.invokeAsync(new CacheSerializableRunnable("update") {
-      public void run2() throws CacheException {
-        Region r1 = getRootRegion("r1");
-        // let the main guys gii get started; we want to do updates
-        // during his gii
-        {
-          // wait for profile of getInitialImage cache to show up
-          org.apache.geode.internal.cache.CacheDistributionAdvisor adv =
-              ((org.apache.geode.internal.cache.DistributedRegion) r1)
-                  .getCacheDistributionAdvisor();
-          int numProfiles;
-          int expectedProfiles = 1;
-          for (;;) {
-            numProfiles = adv.adviseInitialImage(null).getReplicates().size();
-            if (numProfiles < expectedProfiles) {
-              // getLogWriter().info("PROFILE CHECK: Found " + numProfiles +
-              // " getInitialImage Profiles (waiting for " + expectedProfiles + ")");
-              // pause(5);
-            } else {
-              LogWriterUtils.getLogWriter()
-                  .info("PROFILE CHECK: Found " + numProfiles + " getInitialImage Profiles (OK)");
-              break;
-            }
-          }
-        }
-        // start doing updates of the keys to see if we can get deadlocked
-        int updateCount = 1;
-        do {
-          for (int i = 1; i <= ENTRY_COUNT; i++) {
-            String key = "key" + i;
-            if (r1.containsKey(key)) {
-              r1.destroy(key);
-            } else {
-              r1.put(key, "value" + i + "uc" + updateCount);
-            }
-          }
-        } while (updateCount++ < 20);
-        // do one more loop with no destroys
-        for (int i = 1; i <= ENTRY_COUNT; i++) {
-          String key = "key" + i;
-          if (!r1.containsKey(key)) {
-            r1.put(key, "value" + i + "uc" + updateCount);
-          }
-        }
-      }
-    });
-
-    // FIXME this thread does not terminate
-    // DistributedTestCase.join(otherUpdater, 5 * 60 * 1000, getLogWriter());
-    // if(otherUpdater.exceptionOccurred()){
-    // fail("otherUpdater failed", otherUpdater.getException());
-    // }
-
-    return otherUpdater;
-  }
-
-  ////////////////////// Test Methods //////////////////////
-
-  protected boolean afterRegionCreateSeen = false;
-
-  protected static void callbackAssertTrue(String msg, boolean cond) {
-    if (cond)
-      return;
-    callbackFailure = true;
-    // Throws ignored error, but...
-    assertTrue(msg, cond);
-  }
-
-
-  /**
-   * make sure entries do not expire during a GII
-   */
-  @Test
-  public void testNoEntryExpireDuringGII() throws Exception {
-    initOtherVm();
-    AsyncInvocation updater = null;
-    try {
-      updater = updateOtherVm();
-    } catch (VirtualMachineError e) {
-      SystemFailure.initiateFailure(e);
-      throw e;
-    } catch (Throwable e1) {
-      Assert.fail("failed due to " + e1, e1);
-    }
-    System.setProperty(LocalRegion.EXPIRY_MS_PROPERTY, "true");
-    org.apache.geode.internal.cache.InitialImageOperation.slowImageProcessing = 30;
-    callbackFailure = false;
-
-    try {
-      AttributesFactory af = new AttributesFactory();
-      af.setDataPolicy(DataPolicy.REPLICATE);
-      af.setScope(Scope.DISTRIBUTED_ACK);
-      af.setStatisticsEnabled(true);
-      af.setEntryIdleTimeout(new ExpirationAttributes(1, ExpirationAction.INVALIDATE));
-      CacheListener cl1 = new CacheListenerAdapter() {
-        public void afterRegionCreate(RegionEvent re) {
-          afterRegionCreateSeen = true;
-        }
-
-        public void afterInvalidate(EntryEvent e) {
-          callbackAssertTrue("afterregionCreate not seen", afterRegionCreateSeen);
-          // make sure region is initialized
-          callbackAssertTrue("not initialized", ((LocalRegion) e.getRegion()).isInitialized());
-          expirationCount++;
-          org.apache.geode.internal.cache.InitialImageOperation.slowImageProcessing = 0;
-        }
-      };
-      af.addCacheListener(cl1);
-      final Region r1 = createRootRegion("r1", af.create());
-      ThreadUtils.join(updater, 60 * 1000);
-      WaitCriterion ev = new WaitCriterion() {
-        public boolean done() {
-          return r1.values().size() == 0;
-        }
-
-        public String description() {
-          return "region never became empty";
-        }
-      };
-      Wait.waitForCriterion(ev, 2 * 1000, 200, true);
-      {
-        assertEquals(0, r1.values().size());
-        assertEquals(ENTRY_COUNT, r1.keySet().size());
-      }
-
-    } finally {
-      org.apache.geode.internal.cache.InitialImageOperation.slowImageProcessing = 0;
-      System.getProperties().remove(LocalRegion.EXPIRY_MS_PROPERTY);
-      assertEquals(null, System.getProperty(LocalRegion.EXPIRY_MS_PROPERTY));
-    }
-    assertFalse("Errors in callbacks; check logs for details", callbackFailure);
-  }
-}
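The test above waits for the region to empty via the dunit `WaitCriterion`/`Wait.waitForCriterion` pair, which polls a condition at an interval until a timeout. A self-contained equivalent, with no dunit dependency (the `WaitUntil` helper is a sketch, not the actual dunit API):

```java
import java.util.function.BooleanSupplier;

// Polls a condition until it holds or the timeout elapses, like the
// WaitCriterion/Wait.waitForCriterion pair used in the test above.
public class WaitUntil {
  public static boolean await(BooleanSupplier done, long timeoutMs, long intervalMs) {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (done.getAsBoolean()) {
        return true; // criterion satisfied before the deadline
      }
      try {
        Thread.sleep(intervalMs);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break; // stop polling if interrupted
      }
    }
    return done.getAsBoolean(); // one last check at/after the deadline
  }
}
```

Usage mirrors the test: `WaitUntil.await(() -> r1.values().isEmpty(), 2000, 200)` would poll the region every 200 ms for up to 2 seconds.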

http://git-wip-us.apache.org/repos/asf/geode/blob/23c4126a/geode-core/src/test/java/org/apache/geode/cache30/Bug38013DUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/cache30/Bug38013DUnitTest.java b/geode-core/src/test/java/org/apache/geode/cache30/Bug38013DUnitTest.java
deleted file mode 100644
index a0e8021..0000000
--- a/geode-core/src/test/java/org/apache/geode/cache30/Bug38013DUnitTest.java
+++ /dev/null
@@ -1,150 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
- * agreements. See the NOTICE file distributed with this work for additional information regarding
- * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License. You may obtain a
- * copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed under the License
- * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
- * or implied. See the License for the specific language governing permissions and limitations under
- * the License.
- */
-package org.apache.geode.cache30;
-
-import org.junit.experimental.categories.Category;
-import org.junit.Test;
-
-import static org.junit.Assert.*;
-
-import org.apache.geode.test.dunit.cache.internal.JUnit4CacheTestCase;
-import org.apache.geode.test.dunit.internal.JUnit4DistributedTestCase;
-import org.apache.geode.test.junit.categories.DistributedTest;
-
-import java.io.DataInput;
-import java.io.DataOutput;
-import java.io.IOException;
-
-import org.apache.geode.DataSerializable;
-import org.apache.geode.DataSerializer;
-import org.apache.geode.cache.AttributesFactory;
-import org.apache.geode.cache.CacheException;
-import org.apache.geode.cache.CacheListener;
-import org.apache.geode.cache.EntryEvent;
-import org.apache.geode.cache.PartitionAttributesFactory;
-import org.apache.geode.cache.Region;
-import org.apache.geode.cache.util.CacheListenerAdapter;
-import org.apache.geode.distributed.DistributedMember;
-import org.apache.geode.distributed.DistributedSystem;
-import org.apache.geode.distributed.internal.InternalDistributedSystem;
-import org.apache.geode.test.dunit.Host;
-import org.apache.geode.test.dunit.VM;
-
-/**
- * Test to make sure PR cache values are lazily deserialized
- *
- * @since GemFire 5.0
- */
-@Category(DistributedTest.class)
-public class Bug38013DUnitTest extends JUnit4CacheTestCase {
-
-  public Bug38013DUnitTest() {
-    super();
-  }
-
-  ////////////////////// Test Methods //////////////////////
-
-  private VM getOtherVm() {
-    Host host = Host.getHost(0);
-    return host.getVM(0);
-  }
-
-  static protected Object lastCallback = null;
-
-  private void doCreateOtherVm() {
-    VM vm = getOtherVm();
-    vm.invoke(new CacheSerializableRunnable("create root") {
-      public void run2() throws CacheException {
-        getSystem();
-        AttributesFactory af = new AttributesFactory();
-        CacheListener cl = new CacheListenerAdapter() {
-          public void afterCreate(EntryEvent event) {
-            // getLogWriter().info("afterCreate " + event.getKey());
-            if (event.getCallbackArgument() != null) {
-              lastCallback = event.getCallbackArgument();
-            }
-          }
-
-          public void afterUpdate(EntryEvent event) {
-            // getLogWriter().info("afterUpdate " + event.getKey());
-            if (event.getCallbackArgument() != null) {
-              lastCallback = event.getCallbackArgument();
-            }
-          }
-
-          public void afterInvalidate(EntryEvent event) {
-            if (event.getCallbackArgument() != null) {
-              lastCallback = event.getCallbackArgument();
-            }
-          }
-
-          public void afterDestroy(EntryEvent event) {
-            if (event.getCallbackArgument() != null) {
-              lastCallback = event.getCallbackArgument();
-            }
-          }
-        };
-        af.setCacheListener(cl);
-        // create a pr with a data store
-        PartitionAttributesFactory paf = new PartitionAttributesFactory();
-        paf.setRedundantCopies(0);
-        // use defaults so this is a data store
-        af.setPartitionAttributes(paf.create());
-        createRootRegion("bug38013", af.create());
-      }
-    });
-  }
-
-  /**
-   * Make sure that value is only deserialized in cache whose application asks for the value.
-   */
-  @Test
-  public void testBug38013() throws CacheException {
-    final AttributesFactory factory = new AttributesFactory();
-    PartitionAttributesFactory paf = new PartitionAttributesFactory();
-    paf.setRedundantCopies(0);
-    paf.setLocalMaxMemory(0); // make it an accessor
-    factory.setPartitionAttributes(paf.create());
-    final Region r = createRootRegion("bug38013", factory.create());
-
-    doCreateOtherVm();
-
-    r.put("key1", new HomeBoy());
-
-    assertTrue(r.get("key1") instanceof HomeBoy);
-  }
-
-  public static class HomeBoy implements DataSerializable {
-    public HomeBoy() {}
-
-    public void toData(DataOutput out) throws IOException {
-      DistributedMember me = InternalDistributedSystem.getAnyInstance().getDistributedMember();
-      DataSerializer.writeObject(me, out);
-    }
-
-    public void fromData(DataInput in) throws IOException, ClassNotFoundException {
-      DistributedSystem ds = InternalDistributedSystem.getAnyInstance();
-      DistributedMember me = ds.getDistributedMember();
-      DistributedMember hb = (DistributedMember) DataSerializer.readObject(in);
-      if (me.equals(hb)) {
-        ds.getLogWriter().info("HomeBoy was deserialized on his home");
-      } else {
-        String msg = "HomeBoy was deserialized on " + me + " instead of his home " + hb;
-        ds.getLogWriter().error(msg);
-        throw new IllegalStateException(msg);
-      }
-    }
-  }
-}

http://git-wip-us.apache.org/repos/asf/geode/blob/23c4126a/geode-core/src/test/java/org/apache/geode/cache30/CreateAndLocalDestroyInTXRegressionTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/cache30/CreateAndLocalDestroyInTXRegressionTest.java b/geode-core/src/test/java/org/apache/geode/cache30/CreateAndLocalDestroyInTXRegressionTest.java
new file mode 100644
index 0000000..a58a1a6
--- /dev/null
+++ b/geode-core/src/test/java/org/apache/geode/cache30/CreateAndLocalDestroyInTXRegressionTest.java
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.cache30;
+
+import static org.apache.geode.internal.i18n.LocalizedStrings.TXStateStub_LOCAL_DESTROY_NOT_ALLOWED_IN_TRANSACTION;
+import static org.apache.geode.internal.i18n.LocalizedStrings.TXStateStub_LOCAL_INVALIDATE_NOT_ALLOWED_IN_TRANSACTION;
+import static org.hamcrest.core.IsEqual.equalTo;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import org.apache.geode.cache.AttributesFactory;
+import org.apache.geode.cache.CacheException;
+import org.apache.geode.cache.CacheListener;
+import org.apache.geode.cache.CacheTransactionManager;
+import org.apache.geode.cache.DataPolicy;
+import org.apache.geode.cache.EntryEvent;
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.Scope;
+import org.apache.geode.cache.UnsupportedOperationInTransactionException;
+import org.apache.geode.cache.util.CacheListenerAdapter;
+import org.apache.geode.test.dunit.Host;
+import org.apache.geode.test.dunit.VM;
+import org.apache.geode.test.dunit.cache.internal.JUnit4CacheTestCase;
+import org.apache.geode.test.junit.categories.DistributedTest;
+import org.apache.geode.test.junit.rules.serializable.SerializableErrorCollector;
+
+/**
+ * Test create + localDestroy for bug 34387
+ *
+ * #34387: TX in Proxy Regions with create followed by localDestroy on same key results in remote
+ * VMs receiving create events with null getNewValue().
+ *
+ * Create and LocalDestroy/LocalInvalidate should create event with NewValue
+ *
+ * @since GemFire 5.0
+ */
+@Category(DistributedTest.class)
+public class CreateAndLocalDestroyInTXRegressionTest extends JUnit4CacheTestCase {
+
+  private static final String REGION_NAME = "r1";
+
+  private int invokeCount;
+  private VM otherVM;
+  private transient Region region;
+
+  @Rule
+  public SerializableErrorCollector errorCollector = new SerializableErrorCollector();
+
+  @Before
+  public void setUp() throws Exception {
+    this.invokeCount = 0;
+    this.otherVM = Host.getHost(0).getVM(0);
+
+    initOtherVM(this.otherVM);
+    AttributesFactory af = new AttributesFactory();
+    af.setDataPolicy(DataPolicy.REPLICATE);
+    af.setScope(Scope.DISTRIBUTED_ACK);
+    af.setConcurrencyChecksEnabled(true);
+
+    CacheListener cl1 = new CacheListenerAdapter() {
+      @Override
+      public void afterCreate(EntryEvent e) {
+        errorCollector.checkThat("Keys not equal", "createKey", equalTo(e.getKey()));
+        errorCollector.checkThat("Values not equal", "createValue", equalTo(e.getNewValue()));
+        CreateAndLocalDestroyInTXRegressionTest.this.invokeCount++;
+      }
+    };
+
+    af.addCacheListener(cl1);
+    this.region = createRootRegion(REGION_NAME, af.create());
+
+    assertNull(this.region.getEntry("createKey"));
+  }
+
+  /**
+   * test create followed by localDestroy
+   */
+  @Test
+  public void createAndLocalDestroyShouldCreateEventWithNewValue() throws CacheException {
+    doCommitInOtherVm(otherVM, true);
+
+    assertNotNull(this.region.getEntry("createKey"));
+    assertEquals("createValue", this.region.getEntry("createKey").getValue());
+    assertEquals(1, this.invokeCount);
+  }
+
+  /**
+   * test create followed by localInvalidate
+   */
+  @Test
+  public void createAndLocalInvalidateShouldCreateEventWithNewValue() throws CacheException {
+    doCommitInOtherVm(this.otherVM, false);
+
+    assertNotNull(this.region.getEntry("createKey"));
+    assertEquals("createValue", this.region.getEntry("createKey").getValue());
+    assertEquals(1, this.invokeCount);
+  }
+
+  private void initOtherVM(VM otherVM) {
+    otherVM.invoke(new CacheSerializableRunnable("Connect") {
+      @Override
+      public void run2() throws CacheException {
+        getCache();
+      }
+    });
+  }
+
+  private void doCommitInOtherVm(VM otherVM, boolean doDestroy) {
+    otherVM.invoke(new CacheSerializableRunnable("create root") {
+      @Override
+      public void run2() throws CacheException {
+        AttributesFactory factory = new AttributesFactory();
+        factory.setScope(Scope.DISTRIBUTED_ACK);
+        factory.setConcurrencyChecksEnabled(true);
+
+        Region region = createRootRegion(REGION_NAME, factory.create());
+
+        CacheTransactionManager transactionManager = getCache().getCacheTransactionManager();
+        transactionManager.begin();
+
+        region.create("createKey", "createValue");
+
+        if (doDestroy) {
+          try {
+            region.localDestroy("createKey");
+            fail("expected exception not thrown");
+          } catch (UnsupportedOperationInTransactionException e) {
+            assertEquals(TXStateStub_LOCAL_DESTROY_NOT_ALLOWED_IN_TRANSACTION.toLocalizedString(),
+                e.getMessage());
+          }
+        } else {
+          try {
+            region.localInvalidate("createKey");
+            fail("expected exception not thrown");
+          } catch (UnsupportedOperationInTransactionException e) {
+            assertEquals(
+                TXStateStub_LOCAL_INVALIDATE_NOT_ALLOWED_IN_TRANSACTION.toLocalizedString(),
+                e.getMessage());
+          }
+        }
+
+        transactionManager.commit();
+      }
+    });
+  }
+
+}
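
The behavior this test pins down, localDestroy and localInvalidate being rejected while a transaction is active, can be sketched with a hypothetical stand-in (class name, method shapes, and message text are illustrative only, not Geode's actual TXStateStub or Region API):

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative stand-in: a region view that rejects local ops while a transaction is active. */
class TxRegionSketch {
  private final Map<String, Object> data = new HashMap<>();
  private boolean inTransaction;

  void begin() { inTransaction = true; }
  void commit() { inTransaction = false; }

  void create(String key, Object value) { data.put(key, value); }

  /** Mirrors the tested contract: localDestroy is not allowed in a transaction. */
  void localDestroy(String key) {
    if (inTransaction) {
      throw new UnsupportedOperationException("localDestroy() is not allowed in a transaction");
    }
    data.remove(key);
  }

  Object get(String key) { return data.get(key); }
}
```

Because the local operation is rejected rather than applied, the committed create still carries its value, which is what the test's listener asserts via `getNewValue()`.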

http://git-wip-us.apache.org/repos/asf/geode/blob/23c4126a/geode-core/src/test/java/org/apache/geode/cache30/EntriesDoNotExpireDuringGIIRegressionTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/cache30/EntriesDoNotExpireDuringGIIRegressionTest.java b/geode-core/src/test/java/org/apache/geode/cache30/EntriesDoNotExpireDuringGIIRegressionTest.java
new file mode 100644
index 0000000..48ad8ed
--- /dev/null
+++ b/geode-core/src/test/java/org/apache/geode/cache30/EntriesDoNotExpireDuringGIIRegressionTest.java
@@ -0,0 +1,207 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.cache30;
+
+import static java.util.concurrent.TimeUnit.MINUTES;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.hamcrest.core.Is.is;
+
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.awaitility.Awaitility;
+import org.awaitility.core.ConditionFactory;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import org.apache.geode.cache.AttributesFactory;
+import org.apache.geode.cache.CacheException;
+import org.apache.geode.cache.CacheListener;
+import org.apache.geode.cache.DataPolicy;
+import org.apache.geode.cache.EntryEvent;
+import org.apache.geode.cache.ExpirationAction;
+import org.apache.geode.cache.ExpirationAttributes;
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.RegionEvent;
+import org.apache.geode.cache.Scope;
+import org.apache.geode.cache.util.CacheListenerAdapter;
+import org.apache.geode.internal.cache.CacheDistributionAdvisor;
+import org.apache.geode.internal.cache.DistributedRegion;
+import org.apache.geode.internal.cache.InitialImageOperation;
+import org.apache.geode.internal.cache.LocalRegion;
+import org.apache.geode.test.dunit.AsyncInvocation;
+import org.apache.geode.test.dunit.Host;
+import org.apache.geode.test.dunit.VM;
+import org.apache.geode.test.dunit.cache.internal.JUnit4CacheTestCase;
+import org.apache.geode.test.dunit.rules.DistributedRestoreSystemProperties;
+import org.apache.geode.test.junit.categories.DistributedTest;
+import org.apache.geode.test.junit.rules.serializable.SerializableErrorCollector;
+
+/**
+ * Make sure entry expiration does not happen during GII, for bug 35214
+ *
+ * #35214: hang during getInitialImage due to entry expiration
+ *
+ * Entries should not expire during GII
+ *
+ * @since GemFire 5.0
+ */
+@Category(DistributedTest.class)
+public class EntriesDoNotExpireDuringGIIRegressionTest extends JUnit4CacheTestCase {
+
+  private static final int ENTRY_COUNT = 100;
+  private static final String REGION_NAME = "r1";
+
+  // TODO: value of expirationCount is not validated
+  private AtomicInteger expirationCount;
+  private AtomicBoolean afterRegionCreateInvoked;
+  private VM otherVM;
+
+  @Rule
+  public DistributedRestoreSystemProperties restoreSystemProperties =
+      new DistributedRestoreSystemProperties();
+
+  @Rule
+  public SerializableErrorCollector errorCollector = new SerializableErrorCollector();
+
+  @Before
+  public void before() throws Exception {
+    this.expirationCount = new AtomicInteger(0);
+    this.afterRegionCreateInvoked = new AtomicBoolean(false);
+    this.otherVM = Host.getHost(0).getVM(0);
+    initOtherVm(this.otherVM);
+
+    System.setProperty(LocalRegion.EXPIRY_MS_PROPERTY, "true");
+    InitialImageOperation.slowImageProcessing = 30;
+  }
+
+  @After
+  public void after() throws Exception {
+    InitialImageOperation.slowImageProcessing = 0;
+  }
+
+  /**
+   * make sure entries do not expire during a GII
+   */
+  @Test
+  public void entriesShouldNotExpireDuringGII() throws Exception {
+    AsyncInvocation updater = updateOtherVm(this.otherVM);
+
+    AttributesFactory factory = new AttributesFactory();
+    factory.setDataPolicy(DataPolicy.REPLICATE);
+    factory.setScope(Scope.DISTRIBUTED_ACK);
+    factory.setStatisticsEnabled(true);
+    factory.setEntryIdleTimeout(new ExpirationAttributes(1, ExpirationAction.INVALIDATE));
+    factory.addCacheListener(createCacheListener());
+
+    Region region = createRootRegion(REGION_NAME, factory.create());
+
+    updater.await();
+
+    await().until(() -> region.values().size() == 0);
+
+    assertThat(region.values().size()).isEqualTo(0);
+    assertThat(region.keySet().size()).isEqualTo(ENTRY_COUNT);
+  }
+
+  private void initOtherVm(VM otherVM) {
+    otherVM.invoke(new CacheSerializableRunnable("init") {
+      @Override
+      public void run2() throws CacheException {
+        getCache();
+
+        AttributesFactory factory = new AttributesFactory();
+        factory.setScope(Scope.DISTRIBUTED_ACK);
+
+        Region region = createRootRegion(REGION_NAME, factory.create());
+
+        for (int i = 1; i <= ENTRY_COUNT; i++) {
+          region.put("key" + i, "value" + i);
+        }
+      }
+    });
+  }
+
+  private AsyncInvocation updateOtherVm(VM otherVM) {
+    return otherVM.invokeAsync(new CacheSerializableRunnable("update") {
+      @Override
+      public void run2() throws CacheException {
+        Region region = getRootRegion(REGION_NAME);
+        // let the main VM's GII get started; we want to do updates during its GII
+
+        // wait for profile of getInitialImage cache to show up
+        CacheDistributionAdvisor advisor =
+            ((DistributedRegion) region).getCacheDistributionAdvisor();
+        int expectedProfiles = 1;
+        await().until(
+            () -> assertThat(numberProfiles(advisor)).isGreaterThanOrEqualTo(expectedProfiles));
+
+        // start doing updates of the keys to see if we can get deadlocked
+        int updateCount = 1;
+        do {
+          for (int i = 1; i <= ENTRY_COUNT; i++) {
+            String key = "key" + i;
+            if (region.containsKey(key)) {
+              region.destroy(key);
+            } else {
+              region.put(key, "value" + i + "uc" + updateCount);
+            }
+          }
+        } while (updateCount++ < 20);
+
+        // do one more loop with no destroys
+        for (int i = 1; i <= ENTRY_COUNT; i++) {
+          String key = "key" + i;
+          if (!region.containsKey(key)) {
+            region.put(key, "value" + i + "uc" + updateCount);
+          }
+        }
+      }
+    });
+  }
+
+  private int numberProfiles(CacheDistributionAdvisor advisor) {
+    return advisor.adviseInitialImage(null).getReplicates().size();
+  }
+
+  private CacheListener createCacheListener() {
+    return new CacheListenerAdapter() {
+      @Override
+      public void afterRegionCreate(RegionEvent event) {
+        afterRegionCreateInvoked.set(true);
+      }
+
+      @Override
+      public void afterInvalidate(EntryEvent event) {
+        errorCollector.checkThat("afterRegionCreate should have been seen",
+            afterRegionCreateInvoked.get(), is(true));
+        errorCollector.checkThat("Region should have been initialized",
+            ((LocalRegion) event.getRegion()).isInitialized(), is(true));
+
+        expirationCount.incrementAndGet();
+
+        InitialImageOperation.slowImageProcessing = 0;
+      }
+    };
+  }
+
+  private ConditionFactory await() {
+    return Awaitility.await().atMost(2, MINUTES);
+  }
+
+}
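
The test's `await()` helper delegates to Awaitility with a two-minute cap. The underlying poll-until-true pattern can be sketched in plain Java (hypothetical helper, not the Awaitility API):

```java
import java.util.function.BooleanSupplier;

/** Minimal sketch of a polling await: re-check a condition until true or a deadline passes. */
class Poll {
  /**
   * Evaluates condition every intervalMillis until it is true or timeoutMillis elapses.
   * Returns the final evaluation, so a condition that becomes true at the deadline still passes.
   */
  static boolean until(BooleanSupplier condition, long timeoutMillis, long intervalMillis) {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (System.currentTimeMillis() < deadline) {
      if (condition.getAsBoolean()) {
        return true;
      }
      try {
        Thread.sleep(intervalMillis);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // preserve interrupt status and stop waiting
        return false;
      }
    }
    return condition.getAsBoolean(); // one final check at the deadline
  }
}
```

Awaitility adds failure messages, pluggable poll intervals, and assertion-based conditions on top of this loop; the test uses it so the invalidate-driven emptying of `region.values()` can complete asynchronously.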

http://git-wip-us.apache.org/repos/asf/geode/blob/23c4126a/geode-core/src/test/java/org/apache/geode/cache30/RemotePRValuesAreNotDeserializedRegressionTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/cache30/RemotePRValuesAreNotDeserializedRegressionTest.java b/geode-core/src/test/java/org/apache/geode/cache30/RemotePRValuesAreNotDeserializedRegressionTest.java
new file mode 100644
index 0000000..a490e17
--- /dev/null
+++ b/geode-core/src/test/java/org/apache/geode/cache30/RemotePRValuesAreNotDeserializedRegressionTest.java
@@ -0,0 +1,161 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.cache30;
+
+import static org.junit.Assert.assertTrue;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import org.apache.geode.DataSerializable;
+import org.apache.geode.DataSerializer;
+import org.apache.geode.cache.AttributesFactory;
+import org.apache.geode.cache.CacheException;
+import org.apache.geode.cache.CacheListener;
+import org.apache.geode.cache.EntryEvent;
+import org.apache.geode.cache.PartitionAttributesFactory;
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.util.CacheListenerAdapter;
+import org.apache.geode.distributed.DistributedMember;
+import org.apache.geode.distributed.DistributedSystem;
+import org.apache.geode.distributed.internal.InternalDistributedSystem;
+import org.apache.geode.test.dunit.Host;
+import org.apache.geode.test.dunit.VM;
+import org.apache.geode.test.dunit.cache.internal.JUnit4CacheTestCase;
+import org.apache.geode.test.junit.categories.DistributedTest;
+
+/**
+ * Test to make sure PR cache values are lazily deserialized
+ *
+ * #38013: PR regions do deserialization on remote bucket during get causing NoClassDefFoundError
+ *
+ * Remote PartitionedRegion values should not be deserialized
+ *
+ * @since GemFire 5.0
+ */
+@Category(DistributedTest.class)
+public class RemotePRValuesAreNotDeserializedRegressionTest extends JUnit4CacheTestCase {
+
+  private static final String REGION_NAME = "bug38013";
+
+  // TODO: value of lastCallback is not validated
+  private static Object lastCallback = null;
+
+  private VM otherVM;
+
+  @Before
+  public void before() throws Exception {
+    this.otherVM = Host.getHost(0).getVM(0);
+  }
+
+  /**
+   * Make sure that value is only deserialized in cache whose application asks for the value.
+   */
+  @Test
+  public void remotePRValuesShouldNotBeDeserialized() throws Exception {
+    PartitionAttributesFactory partitionAttributesFactory = new PartitionAttributesFactory();
+    partitionAttributesFactory.setRedundantCopies(0);
+    partitionAttributesFactory.setLocalMaxMemory(0); // make it an accessor
+
+    AttributesFactory factory = new AttributesFactory();
+    factory.setPartitionAttributes(partitionAttributesFactory.create());
+
+    Region<String, HomeBoy> region = createRootRegion(REGION_NAME, factory.create());
+
+    doCreateOtherVm(this.otherVM);
+
+    region.put("key1", new HomeBoy());
+
+    assertTrue(region.get("key1") instanceof HomeBoy);
+  }
+
+  private void doCreateOtherVm(VM otherVM) {
+    otherVM.invoke(new CacheSerializableRunnable("create root") {
+      @Override
+      public void run2() throws CacheException {
+        getSystem();
+
+        CacheListener listener = new CacheListenerAdapter() {
+          @Override
+          public void afterCreate(EntryEvent event) {
+            if (event.getCallbackArgument() != null) {
+              lastCallback = event.getCallbackArgument();
+            }
+          }
+
+          @Override
+          public void afterUpdate(EntryEvent event) {
+            if (event.getCallbackArgument() != null) {
+              lastCallback = event.getCallbackArgument();
+            }
+          }
+
+          @Override
+          public void afterInvalidate(EntryEvent event) {
+            if (event.getCallbackArgument() != null) {
+              lastCallback = event.getCallbackArgument();
+            }
+          }
+
+          @Override
+          public void afterDestroy(EntryEvent event) {
+            if (event.getCallbackArgument() != null) {
+              lastCallback = event.getCallbackArgument();
+            }
+          }
+        };
+
+        AttributesFactory factory = new AttributesFactory();
+        factory.setCacheListener(listener);
+
+        // create a pr with a data store
+        PartitionAttributesFactory partitionAttributesFactory = new PartitionAttributesFactory();
+        partitionAttributesFactory.setRedundantCopies(0);
+
+        // use defaults so this is a data store
+        factory.setPartitionAttributes(partitionAttributesFactory.create());
+        createRootRegion(REGION_NAME, factory.create());
+      }
+    });
+  }
+
+  private static class HomeBoy implements DataSerializable {
+    public HomeBoy() {}
+
+    @Override
+    public void toData(DataOutput out) throws IOException {
+      DistributedMember me = InternalDistributedSystem.getAnyInstance().getDistributedMember();
+      DataSerializer.writeObject(me, out);
+    }
+
+    @Override
+    public void fromData(DataInput in) throws IOException, ClassNotFoundException {
+      DistributedSystem ds = InternalDistributedSystem.getAnyInstance();
+      DistributedMember me = ds.getDistributedMember();
+      DistributedMember hb = DataSerializer.readObject(in);
+      if (me.equals(hb)) {
+        ds.getLogWriter().info("HomeBoy was deserialized on his home");
+      } else {
+        String msg = "HomeBoy was deserialized on " + me + " instead of his home " + hb;
+        ds.getLogWriter().error(msg);
+        throw new IllegalStateException(msg);
+      }
+    }
+  }
+}

http://git-wip-us.apache.org/repos/asf/geode/blob/23c4126a/geode-core/src/test/java/org/apache/geode/cache30/ValuesAreLazilyDeserializedRegressionTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/cache30/ValuesAreLazilyDeserializedRegressionTest.java b/geode-core/src/test/java/org/apache/geode/cache30/ValuesAreLazilyDeserializedRegressionTest.java
new file mode 100644
index 0000000..fcffa04
--- /dev/null
+++ b/geode-core/src/test/java/org/apache/geode/cache30/ValuesAreLazilyDeserializedRegressionTest.java
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.cache30;
+
+import static org.junit.Assert.assertTrue;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import org.apache.geode.DataSerializable;
+import org.apache.geode.DataSerializer;
+import org.apache.geode.cache.AttributesFactory;
+import org.apache.geode.cache.CacheException;
+import org.apache.geode.cache.CacheListener;
+import org.apache.geode.cache.DataPolicy;
+import org.apache.geode.cache.EntryEvent;
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.Scope;
+import org.apache.geode.cache.util.CacheListenerAdapter;
+import org.apache.geode.distributed.DistributedMember;
+import org.apache.geode.distributed.DistributedSystem;
+import org.apache.geode.distributed.internal.InternalDistributedSystem;
+import org.apache.geode.test.dunit.Host;
+import org.apache.geode.test.dunit.VM;
+import org.apache.geode.test.dunit.cache.internal.JUnit4CacheTestCase;
+import org.apache.geode.test.junit.categories.DistributedTest;
+
+/**
+ * Test to make sure cache values are lazily deserialized
+ *
+ * #34948: distributed cache values are always getting deserialized
+ *
+ * @since GemFire 5.0
+ */
+@Category(DistributedTest.class)
+public class ValuesAreLazilyDeserializedRegressionTest extends JUnit4CacheTestCase {
+
+  private static final String REGION_NAME = "bug34948";
+
+  // TODO: value of lastCallback is not validated
+  private static Object lastCallback = null;
+
+  private VM otherVM;
+
+  @Before
+  public void before() throws Exception {
+    this.otherVM = Host.getHost(0).getVM(0);
+  }
+
+  /**
+   * Make sure that value is only deserialized in cache whose application asks for the value.
+   */
+  @Test
+  public void valueShouldBeLazilyDeserialized() throws CacheException {
+    AttributesFactory factory = new AttributesFactory();
+    factory.setScope(Scope.DISTRIBUTED_ACK);
+    factory.setDataPolicy(DataPolicy.PRELOADED);
+
+    Region<String, HomeBoy> region = createRootRegion(REGION_NAME, factory.create());
+
+    // before gii
+    region.put("key1", new HomeBoy());
+
+    doCreateOtherVm(this.otherVM);
+
+    // after gii
+    region.put("key2", new HomeBoy());
+
+    region.localDestroy("key1");
+    region.localDestroy("key2");
+
+    Object value = region.get("key1");
+    assertTrue(value instanceof HomeBoy);
+    assertTrue(region.get("key2") == null); // preload will not distribute
+
+    // TODO: add putAll test once it does not deserialize
+  }
+
+  private void doCreateOtherVm(VM otherVM) {
+    otherVM.invoke(new CacheSerializableRunnable("create root") {
+
+      @Override
+      public void run2() throws CacheException {
+        getSystem();
+
+        CacheListener<String, HomeBoy> listener = new CacheListenerAdapter<String, HomeBoy>() {
+          @Override
+          public void afterCreate(EntryEvent event) {
+            if (event.getCallbackArgument() != null) {
+              lastCallback = event.getCallbackArgument();
+            }
+          }
+
+          @Override
+          public void afterUpdate(EntryEvent event) {
+            if (event.getCallbackArgument() != null) {
+              lastCallback = event.getCallbackArgument();
+            }
+          }
+
+          @Override
+          public void afterInvalidate(EntryEvent event) {
+            if (event.getCallbackArgument() != null) {
+              lastCallback = event.getCallbackArgument();
+            }
+          }
+
+          @Override
+          public void afterDestroy(EntryEvent event) {
+            if (event.getCallbackArgument() != null) {
+              lastCallback = event.getCallbackArgument();
+            }
+          }
+        };
+
+        AttributesFactory<String, HomeBoy> factory = new AttributesFactory<>();
+        factory.setScope(Scope.DISTRIBUTED_ACK);
+        factory.setDataPolicy(DataPolicy.PRELOADED);
+        factory.setCacheListener(listener);
+
+        createRootRegion(REGION_NAME, factory.create());
+      }
+    });
+  }
+
+  private static class HomeBoy implements DataSerializable {
+    public HomeBoy() {}
+
+    @Override
+    public void toData(DataOutput out) throws IOException {
+      DistributedMember me = InternalDistributedSystem.getAnyInstance().getDistributedMember();
+      DataSerializer.writeObject(me, out);
+    }
+
+    @Override
+    public void fromData(DataInput in) throws IOException, ClassNotFoundException {
+      DistributedSystem ds = InternalDistributedSystem.getAnyInstance();
+      DistributedMember me = ds.getDistributedMember();
+      DistributedMember hb = DataSerializer.readObject(in);
+      if (me.equals(hb)) {
+        ds.getLogWriter().info("HomeBoy was deserialized on his home");
+      } else {
+        String msg = "HomeBoy was deserialized on " + me + " instead of his home " + hb;
+        ds.getLogWriter().error(msg);
+        throw new IllegalStateException(msg);
+      }
+    }
+  }
+}
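
The lazy-deserialization behavior both `HomeBoy` tests assert can be sketched independently of Geode: keep the serialized bytes and deserialize only when a caller actually asks for the value. This is an illustrative stand-in using plain JDK serialization, not Geode's internal value-wrapper classes:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

/** Illustrative sketch: a holder that stores the serialized form and deserializes on demand. */
class LazyValue {
  private final byte[] serializedForm;
  private Object deserialized; // cached after first access

  LazyValue(Serializable value) {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
      oos.writeObject(value);
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
    this.serializedForm = bos.toByteArray();
  }

  boolean isDeserialized() {
    return deserialized != null;
  }

  /** A member that never calls get() pays no deserialization (or classloading) cost. */
  Object get() {
    if (deserialized == null) {
      try (ObjectInputStream ois =
          new ObjectInputStream(new ByteArrayInputStream(serializedForm))) {
        deserialized = ois.readObject();
      } catch (IOException | ClassNotFoundException e) {
        throw new IllegalStateException(e);
      }
    }
    return deserialized;
  }
}
```

`HomeBoy.fromData()` turns any premature deserialization into a hard failure: if the bytes are ever read back on a member other than the one that asked for the value, the test fails, which is exactly how the original NoClassDefFoundError-on-remote-bucket bug would surface.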

http://git-wip-us.apache.org/repos/asf/geode/blob/23c4126a/geode-core/src/test/java/org/apache/geode/internal/cache/ConnectDisconnectDUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/cache/ConnectDisconnectDUnitTest.java b/geode-core/src/test/java/org/apache/geode/internal/cache/ConnectDisconnectDUnitTest.java
index de63433..b52fe4d 100755
--- a/geode-core/src/test/java/org/apache/geode/internal/cache/ConnectDisconnectDUnitTest.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/cache/ConnectDisconnectDUnitTest.java
@@ -14,105 +14,87 @@
  */
 package org.apache.geode.internal.cache;
 
-import static org.apache.geode.distributed.ConfigurationProperties.*;
-
-import java.util.Properties;
-
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
+import static org.apache.geode.distributed.ConfigurationProperties.CONSERVE_SOCKETS;
+import static org.apache.geode.distributed.ConfigurationProperties.LOG_LEVEL;
+import static org.assertj.core.api.Assertions.assertThat;
 
+import org.apache.geode.internal.logging.LogService;
 import org.apache.geode.test.dunit.AsyncInvocation;
-import org.apache.geode.test.dunit.DistributedTestUtils;
 import org.apache.geode.test.dunit.Host;
-import org.apache.geode.test.dunit.IgnoredException;
-import org.apache.geode.test.dunit.LogWriterUtils;
 import org.apache.geode.test.dunit.SerializableRunnable;
 import org.apache.geode.test.dunit.VM;
 import org.apache.geode.test.dunit.cache.internal.JUnit4CacheTestCase;
+import org.apache.geode.test.junit.Repeat;
 import org.apache.geode.test.junit.categories.DistributedTest;
+import org.apache.geode.test.junit.rules.RepeatRule;
+import org.apache.logging.log4j.Logger;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import java.util.Properties;
 
-/** A test of 46438 - missing response to an update attributes message */
+/**
+ * A test of 46438 - missing response to an update attributes message
+ *
+ * see bugs #50785 and #46438
+ */
 @Category(DistributedTest.class)
 public class ConnectDisconnectDUnitTest extends JUnit4CacheTestCase {
+  private static final Logger logger = LogService.getLogger();
 
-  private IgnoredException ex;
+  private static int count;
 
-  // see bugs #50785 and #46438
-  @Test
-  public void testManyConnectsAndDisconnects() throws Throwable {
-    // invokeInEveryVM(new SerializableRunnable() {
-    //
-    // @Override
-    // public void run() {
-    // Log.setLogWriterLevel("info");
-    // }
-    // });
-
-    // uncomment these lines to use stand-alone locators
-    // int[] ports = AvailablePortHelper.getRandomAvailableTCPPorts(4);
-    // setLocatorPorts(ports);
-
-    for (int i = 0; i < 20; i++) {
-      LogWriterUtils.getLogWriter().info("Test run: " + i);
-      runOnce();
-      tearDown();
-      setUp();
-    }
+  @Rule
+  public RepeatRule repeat = new RepeatRule();
+
+  @BeforeClass
+  public static void beforeClass() {
+    count = 0;
   }
 
+  @Before
+  public void before() {
+    count++;
+  }
 
-  static int LOCATOR_PORT;
-  static String LOCATORS_STRING;
+  @After
+  public void after() {
+    disconnectAllFromDS();
 
-  static int[] locatorPorts;
+  }
 
-  public void setLocatorPorts(int[] ports) {
-    DistributedTestUtils.deleteLocatorStateFile(ports);
-    String locators = "";
-    for (int i = 0; i < ports.length; i++) {
-      if (i > 0) {
-        locators += ",";
-      }
-      locators += "localhost[" + ports[i] + "]";
-    }
-    final String locators_string = locators;
-    for (int i = 0; i < ports.length; i++) {
-      final int port = ports[i];
-      Host.getHost(0).getVM(i).invoke(new SerializableRunnable("set locator port") {
-        public void run() {
-          LOCATOR_PORT = port;
-          LOCATORS_STRING = locators_string;
-        }
-      });
-    }
-    locatorPorts = ports;
+  @AfterClass
+  public static void afterClass() {
+    assertThat(count).isEqualTo(20);
   }
 
   @Override
-  public final void postTearDownCacheTestCase() throws Exception {
-    if (locatorPorts != null) {
-      DistributedTestUtils.deleteLocatorStateFile(locatorPorts);
-    }
+  public Properties getDistributedSystemProperties() {
+    Properties props = super.getDistributedSystemProperties();
+    props.setProperty(LOG_LEVEL, "info");
+    props.setProperty(CONSERVE_SOCKETS, "false");
+    return props;
   }
 
   /**
    * This test creates 4 vms and starts a cache in each VM. If that doesn't hang, it destroys the DS
    * in all vms and recreates the cache.
-   * 
-   * @throws Throwable
    */
-  public void runOnce() throws Throwable {
+  @Test
+  @Repeat(20)
+  public void testManyConnectsAndDisconnects() throws Exception {
+    logger.info("Test run: {}", count);
 
     int numVMs = 4;
-
     VM[] vms = new VM[numVMs];
 
     for (int i = 0; i < numVMs; i++) {
-      // if(i == 0) {
-      // vms[i] = Host.getHost(0).getVM(4);
-      // } else {
       vms[i] = Host.getHost(0).getVM(i);
-      // }
     }
 
     AsyncInvocation[] asyncs = new AsyncInvocation[numVMs];
@@ -120,44 +102,14 @@ public class ConnectDisconnectDUnitTest extends JUnit4CacheTestCase {
       asyncs[i] = vms[i].invokeAsync(new SerializableRunnable("Create a cache") {
         @Override
         public void run() {
-          // try {
-          // JGroupMembershipManager.setDebugJGroups(true);
           getCache();
-          // } finally {
-          // JGroupMembershipManager.setDebugJGroups(false);
-          // }
         }
       });
     }
 
-
     for (int i = 0; i < numVMs; i++) {
-      asyncs[i].getResult();
-      // try {
-      // asyncs[i].getResult(30 * 1000);
-      // } catch(TimeoutException e) {
-      // getLogWriter().severe("DAN DEBUG - we have a hang");
-      // dumpAllStacks();
-      // fail("DAN - WE HIT THE ISSUE",e);
-      // throw e;
-      // }
-    }
-
-    disconnectAllFromDS();
-  }
-
-
-  @Override
-  public Properties getDistributedSystemProperties() {
-    Properties props = super.getDistributedSystemProperties();
-    props.setProperty(LOG_LEVEL, "info");
-    props.setProperty(CONSERVE_SOCKETS, "false");
-    if (LOCATOR_PORT > 0) {
-      props.setProperty(START_LOCATOR, "localhost[" + LOCATOR_PORT + "]");
-      props.setProperty(LOCATORS, LOCATORS_STRING);
+      asyncs[i].await();
     }
-    return props;
   }
 
-
 }
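
The refactor above replaces the hand-rolled twenty-iteration loop (with its manual `tearDown()`/`setUp()` calls) with `@Repeat(20)` and a `RepeatRule`. The repeat semantics can be sketched in plain Java (illustrative only, not the actual RepeatRule implementation):

```java
/** Sketch of repeat semantics: run a body N times, aborting on the first failure. */
class RepeatSketch {
  /** Runs body the given number of times; returns how many iterations completed. */
  static int repeat(int times, Runnable body) {
    int completed = 0;
    for (int i = 0; i < times; i++) {
      body.run(); // an unchecked exception aborts remaining iterations, like a failing test
      completed++;
    }
    return completed;
  }
}
```

With the rule, JUnit's own lifecycle runs `@Before`/`@After` around every iteration, which is why the rewritten test can count invocations in `before()` and verify all twenty runs happened in `afterClass()`.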

http://git-wip-us.apache.org/repos/asf/geode/blob/23c4126a/geode-core/src/test/java/org/apache/geode/test/dunit/internal/DistributedTestFixture.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/test/dunit/internal/DistributedTestFixture.java b/geode-core/src/test/java/org/apache/geode/test/dunit/internal/DistributedTestFixture.java
index 4175e81..b372696 100755
--- a/geode-core/src/test/java/org/apache/geode/test/dunit/internal/DistributedTestFixture.java
+++ b/geode-core/src/test/java/org/apache/geode/test/dunit/internal/DistributedTestFixture.java
@@ -28,7 +28,7 @@ public interface DistributedTestFixture extends Serializable {
    * <p>
    * Override this as needed. Default implementation is empty.
    */
-  public void preSetUp() throws Exception;
+  void preSetUp() throws Exception;
 
   /**
    * {@code postSetUp()} is invoked after {@code DistributedTestCase#setUp()}.
@@ -36,7 +36,7 @@ public interface DistributedTestFixture extends Serializable {
    * <p>
    * Override this as needed. Default implementation is empty.
    */
-  public void postSetUp() throws Exception;
+  void postSetUp() throws Exception;
 
   /**
    * {@code preTearDown()} is invoked before {@code DistributedTestCase#tearDown()}.
@@ -44,7 +44,7 @@ public interface DistributedTestFixture extends Serializable {
    * <p>
    * Override this as needed. Default implementation is empty.
    */
-  public void preTearDown() throws Exception;
+  void preTearDown() throws Exception;
 
   /**
    * {@code postTearDown()} is invoked after {@code DistributedTestCase#tearDown()}.
@@ -52,7 +52,7 @@ public interface DistributedTestFixture extends Serializable {
    * <p>
    * Override this as needed. Default implementation is empty.
    */
-  public void postTearDown() throws Exception;
+  void postTearDown() throws Exception;
 
   /**
    * {@code preTearDownAssertions()} is invoked before any tear down methods have been invoked. If
@@ -61,7 +61,7 @@ public interface DistributedTestFixture extends Serializable {
    * <p>
    * Override this as needed. Default implementation is empty.
    */
-  public void preTearDownAssertions() throws Exception;
+  void preTearDownAssertions() throws Exception;
 
   /**
    * {@code postTearDownAssertions()} is invoked after all tear down methods have completed. This
@@ -70,7 +70,7 @@ public interface DistributedTestFixture extends Serializable {
    * <p>
    * Override this as needed. Default implementation is empty.
    */
-  public void postTearDownAssertions() throws Exception;
+  void postTearDownAssertions() throws Exception;
 
   /**
    * Returns the {@code Properties} used to define the {@code DistributedSystem}.
@@ -79,11 +79,11 @@ public interface DistributedTestFixture extends Serializable {
    * Override this as needed. This method is called by various {@code getSystem} methods in
    * {@code DistributedTestCase}.
    */
-  public Properties getDistributedSystemProperties();
+  Properties getDistributedSystemProperties();
 
   /**
    * Returns the {@code name} of the test method being executed.
    */
-  public String getName();
+  String getName();
 
 }
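The modifier cleanup above works because interface methods are implicitly `public abstract`, so the explicit `public` keyword is redundant. A small standalone sketch (the `Fixture` interface here is a hypothetical stand-in, not the real `DistributedTestFixture`) confirming this via reflection:

```java
// Interface methods are implicitly public, so the "public" modifier is redundant.
interface Fixture {
    void preSetUp() throws Exception; // equivalent to: public abstract void preSetUp()
}

public class ImplicitPublicDemo implements Fixture {
    @Override
    public void preSetUp() {
        // implementing classes must still declare the method public explicitly
    }

    public static void main(String[] args) throws Exception {
        java.lang.reflect.Method m = Fixture.class.getMethod("preSetUp");
        // prints true: the interface method is public even without the keyword
        System.out.println(java.lang.reflect.Modifier.isPublic(m.getModifiers()));
    }
}
```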

http://git-wip-us.apache.org/repos/asf/geode/blob/23c4126a/geode-core/src/test/java/org/apache/geode/test/dunit/internal/JUnit3DistributedTestCase.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/test/dunit/internal/JUnit3DistributedTestCase.java b/geode-core/src/test/java/org/apache/geode/test/dunit/internal/JUnit3DistributedTestCase.java
index abdac89..fc0f2f6 100755
--- a/geode-core/src/test/java/org/apache/geode/test/dunit/internal/JUnit3DistributedTestCase.java
+++ b/geode-core/src/test/java/org/apache/geode/test/dunit/internal/JUnit3DistributedTestCase.java
@@ -18,13 +18,10 @@ import java.io.Serializable;
 import java.util.Properties;
 
 import junit.framework.TestCase;
-import org.apache.logging.log4j.Logger;
 import org.junit.experimental.categories.Category;
 
-import org.apache.geode.cache.Cache;
 import org.apache.geode.distributed.DistributedSystem;
 import org.apache.geode.distributed.internal.InternalDistributedSystem;
-import org.apache.geode.internal.logging.LogService;
 import org.apache.geode.test.junit.categories.DistributedTest;
 
 /**
@@ -34,8 +31,6 @@ import org.apache.geode.test.junit.categories.DistributedTest;
 public abstract class JUnit3DistributedTestCase extends TestCase
     implements DistributedTestFixture, Serializable {
 
-  private static final Logger logger = LogService.getLogger();
-
   private final JUnit4DistributedTestCase delegate = new JUnit4DistributedTestCase(this) {};
 
   /**
@@ -47,19 +42,12 @@ public abstract class JUnit3DistributedTestCase extends TestCase
     JUnit4DistributedTestCase.initializeDistributedTestCase();
   }
 
-  // ---------------------------------------------------------------------------
-  // methods for tests
-  // ---------------------------------------------------------------------------
-
   /**
    * @deprecated Please override {@link #getDistributedSystemProperties()} instead.
    */
   @Deprecated
-  public final void setSystem(final Properties props, final DistributedSystem ds) { // TODO:
-                                                                                    // override
-                                                                                    // getDistributedSystemProperties
-                                                                                    // and then
-                                                                                    // delete
+  public final void setSystem(final Properties props, final DistributedSystem ds) {
+    // TODO: override getDistributedSystemProperties and then delete
     delegate.setSystem(props, ds);
   }
 
@@ -100,10 +88,6 @@ public abstract class JUnit3DistributedTestCase extends TestCase
     return delegate.basicGetSystem();
   }
 
-  public final void nullSystem() { // TODO: delete
-    delegate.nullSystem();
-  }
-
   public static final InternalDistributedSystem getSystemStatic() {
     return JUnit4DistributedTestCase.getSystemStatic();
   }
@@ -146,10 +130,6 @@ public abstract class JUnit3DistributedTestCase extends TestCase
     JUnit4DistributedTestCase.disconnectFromDS();
   }
 
-  // ---------------------------------------------------------------------------
-  // name methods
-  // ---------------------------------------------------------------------------
-
   public static final String getTestMethodName() {
     return JUnit4DistributedTestCase.getTestMethodName();
   }
@@ -162,10 +142,6 @@ public abstract class JUnit3DistributedTestCase extends TestCase
     return delegate.getUniqueName();
   }
 
-  // ---------------------------------------------------------------------------
-  // setup methods
-  // ---------------------------------------------------------------------------
-
   /**
    * Sets up the DistributedTestCase.
    * <p>
@@ -174,7 +150,7 @@ public abstract class JUnit3DistributedTestCase extends TestCase
    */
   @Override
   public final void setUp() throws Exception {
-    delegate.setUp();
+    delegate.setUpJUnit4DistributedTestCase();
   }
 
   /**
@@ -184,7 +160,9 @@ public abstract class JUnit3DistributedTestCase extends TestCase
    * <p>
    * Override this as needed. Default implementation is empty.
    */
-  public void preSetUp() throws Exception {}
+  public void preSetUp() throws Exception {
+    // nothing by default
+  }
 
   /**
    * {@code postSetUp()} is invoked after
@@ -193,11 +171,9 @@ public abstract class JUnit3DistributedTestCase extends TestCase
    * <p>
    * Override this as needed. Default implementation is empty.
    */
-  public void postSetUp() throws Exception {}
-
-  // ---------------------------------------------------------------------------
-  // teardown methods
-  // ---------------------------------------------------------------------------
+  public void postSetUp() throws Exception {
+    // nothing by default
+  }
 
   /**
    * Tears down the DistributedTestCase.
@@ -219,7 +195,9 @@ public abstract class JUnit3DistributedTestCase extends TestCase
    * <p>
    * Override this as needed. Default implementation is empty.
    */
-  public void preTearDown() throws Exception {}
+  public void preTearDown() throws Exception {
+    // nothing by default
+  }
 
   /**
    * {@code postTearDown()} is invoked after
@@ -228,7 +206,9 @@ public abstract class JUnit3DistributedTestCase extends TestCase
    * <p>
    * Override this as needed. Default implementation is empty.
    */
-  public void postTearDown() throws Exception {}
+  public void postTearDown() throws Exception {
+    // nothing by default
+  }
 
   /**
    * {@code preTearDownAssertions()} is invoked before any tear down methods have been invoked. If
@@ -237,7 +217,9 @@ public abstract class JUnit3DistributedTestCase extends TestCase
    * <p>
    * Override this as needed. Default implementation is empty.
    */
-  public void preTearDownAssertions() throws Exception {}
+  public void preTearDownAssertions() throws Exception {
+    // nothing by default
+  }
 
   /**
    * {@code postTearDownAssertions()} is invoked after all tear down methods have completed. This
@@ -246,10 +228,8 @@ public abstract class JUnit3DistributedTestCase extends TestCase
    * <p>
    * Override this as needed. Default implementation is empty.
    */
-  public void postTearDownAssertions() throws Exception {}
-
-  protected static final void destroyRegions(final Cache cache) { // TODO: this should move to
-                                                                  // CacheTestCase
-    JUnit4DistributedTestCase.destroyRegions(cache);
+  public void postTearDownAssertions() throws Exception {
+    // nothing by default
   }
+
 }
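The empty-body hooks in the class above follow the template-method pattern: a final `setUp()` brackets the framework's own work with overridable `preSetUp()`/`postSetUp()` callbacks that do nothing by default. A minimal sketch of that shape (the class names here are illustrative, not the actual dunit classes):

```java
// Template-method lifecycle: setUp() calls preSetUp(), does the framework
// work, then calls postSetUp(); subclasses override only the hooks they need.
public class LifecycleDemo {
    static final StringBuilder log = new StringBuilder();

    static class Base {
        final void setUp() {
            preSetUp();
            log.append("frameworkSetUp;");
            postSetUp();
        }
        void preSetUp() { /* nothing by default */ }
        void postSetUp() { /* nothing by default */ }
    }

    static class MyTest extends Base {
        @Override
        void postSetUp() {
            log.append("createRegions;");
        }
    }

    public static void main(String[] args) {
        new MyTest().setUp();
        System.out.println(log); // prints "frameworkSetUp;createRegions;"
    }
}
```

Making `setUp()` final keeps the ordering guarantee intact: tests customize behavior only through the hooks, never by replacing the framework sequence itself.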


[04/51] [abbrv] geode git commit: GEODE-3395 Variable-ize product version and name in user guide - Reference section

Posted by kl...@apache.org.
http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/reference/statistics/statistics_list.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/reference/statistics/statistics_list.html.md.erb b/geode-docs/reference/statistics/statistics_list.html.md.erb
deleted file mode 100644
index 49e416e..0000000
--- a/geode-docs/reference/statistics/statistics_list.html.md.erb
+++ /dev/null
@@ -1,1310 +0,0 @@
----
-title: Geode Statistics List
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-<a id="statistics_list"></a>
-
-
-This section describes the primary statistics gathered by Geode when statistics are enabled.
-
-All statistics gathering requires `statistic-sampling-enabled` in the `gemfire.properties` file to be true. Statistics that use time require `enable-time-statistics` in `gemfire.properties` to be true.
-
-Performance statistics are collected for each Java application or cache server that connects to a distributed system.
-
--   **[Cache Performance (CachePerfStats)](#section_DEF8D3644D3246AB8F06FE09A37DC5C8)**
-
--   **[Cache Server (CacheServerStats)](#section_EF5C2C59BFC74FFB8607F9571AB9A471)**
-
--   **[Client-Side Notifications (CacheClientUpdaterStats)](#section_B08C0783BBF9489E8BB48B4AEC597C62)**
-
--   **[Client-to-Server Messaging Performance (ClientStats)](#section_04B7D7387E584712B7710B5ED1E876BB)**
-
--   **[Client Connection Pool (PoolStats)](#section_6C247F61DB834C079A16BE92789D4692)**
-
--   **[Continuous Querying (CQStatistics)](#section_66C0E7748501480B85209D57D24256D5)**
-
--   **[Delta Propagation (DeltaPropagationStatistics)](#section_D4ABED3FF94245C0BEE0F6FC9481E867)**
-
--   **[Disk Space Usage (DiskDirStatistics)](#section_6C2BECC63A83456190B029DEDB8F4BE3)**
-
--   **[Disk Usage and Performance (DiskRegionStatistics)](#section_983BFC6D53C74829A04A91C39E06315F)**
-
--   **[Distributed System Messaging (DistributionStats)](#section_ACB4161F10D64BC0B15871D003FF6FDF)**
-
--   **[Distributed Lock Services (DLockStats)](#section_78D346A580724E1EA645E31626EECE40)**
-
--   **[Function Execution (FunctionServiceStatistics)](#section_5E211DDB0E8640689AD0A4659511E17A)**
-
--   **[Gateway Queue (GatewayStatistics)](#section_C4199A541B1F4B82B6178C416C0FAE4B)**
-
--   **[Indexes (IndexStats)](#section_86A61860024B480592DAC67FFB882538)**
-
--   **[JVM Performance](#section_607C3867602E410CAE5FAB26A7FF1CB9)**
-
--   **[Locator (LocatorStatistics)](#section_C48B654F973E4B44AD825D459C23A6CD)**
-
--   **[Lucene Indexes (LuceneIndexStats)](#LuceneStats)**
-
--   **[Off-Heap (OffHeapMemoryStats)](#topic_ohc_tjk_w5)**
-
--   **[Operating System Statistics - Linux](#section_923B28F01BC3416786D3AFBD87F22A5E)**
-
--   **[Partitioned Regions (PartitionedRegion&lt;partitioned\_region\_name&gt;Statistics)](#section_35AC170770C944C3A336D9AEC2D2F7C5)**
-
--   **[Region Entry Eviction – Count-Based (LRUStatistics)](#section_374FBD92A3B74F6FA08AA23047929B4F)**
-
--   **[Region Entry Eviction – Size-based (LRUStatistics)](#section_3D2AA2BCE5B6485699A7B6ADD1C49FF7)**
-
--   **[Server Notifications for All Clients (CacheClientNotifierStatistics)](#section_5362EF9AECBC48D69475697109ABEDFA)**
-
--   **[Server Notifications for Single Client (CacheClientProxyStatistics)](#section_E03865F509E543D9B8F9462B3DA6255E)**
-
--   **[Server-to-Client Messaging Performance (ClientSubscriptionStats)](#section_3AB1C0AA55014163A2BBF68E13D25E3A)**
-
--   **[Statistics Collection (StatSampler)](#section_55F3AF6413474317902845EE4996CC21)**
-
-## <a id="section_DEF8D3644D3246AB8F06FE09A37DC5C8" class="no-quick-link"></a>Cache Performance (CachePerfStats)
-
-Statistics for the Geode cache. These can be used to determine the type and number of cache operations being performed and how much time they consume.
-
-Regarding Geode cache transactions, transaction-related statistics are compiled and stored as properties in the CachePerfStats statistic resource. Because the transaction’s data scope is the cache, these statistics are collected on a per-cache basis.
-
-The primary statistics are:
-
-| Statistic                        | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
-|----------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `cacheListenerCallsCompleted`    | Total number of times a cache listener call has completed.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
-| `cacheListenerCallsInProgress`   | Current number of threads doing a cache listener call.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
-| `cacheListenerCallTime`          | Total time spent doing cache listener calls.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
-| `cacheWriterCallsCompleted`      | Total number of times a cache writer call has completed.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
-| `cacheWriterCallsInProgress`     | Current number of threads doing a cache writer call.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
-| `cacheWriterCallTime`            | Total time spent doing cache writer calls.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
-| `compressions`                   | Total number of compression operations.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
-| `compressTime`                   | Total time, in nanoseconds, spent compressing data.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
-| `conflatedEvents`                | The number of events that were conflated, and not delivered to event listeners or gateway senders on this member. Events are typically conflated because a later event was already applied to the cache, or because a concurrent event was ignored to ensure cache consistency. Note that some members may discard an update while other members apply the update, depending on the order in which each member receives the update. For this reason, the `conflatedEvents` statistic will differ for each Geode member. See [Consistency for Region Updates](../../developing/distributed_regions/region_entry_versions.html#topic_CF2798D3E12647F182C2CEC4A46E2045). |
-| `creates`                        | The total number of times an entry is added to this cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
-| `decompressions`                 | Total number of decompression operations.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
-| `decompressTime`                 | Total time, in nanoseconds, spent decompressing data.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
-| `destroys`                       | The total number of times a cache object entry has been destroyed in this cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
-| `eventQueueSize`                 | The number of cache events waiting to be processed.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
-| `eventQueueThrottleCount`        | The total number of times a thread was delayed in adding an event to the event queue.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
-| `eventQueueThrottleTime`         | The total amount of time, in nanoseconds, spent delayed by the event queue throttle.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
-| `eventThreads`                   | The number of threads currently processing events.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
-| `getInitialImageKeysReceived`    | Total number of keys received while doing getInitialImage operations.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
-| `getInitialImagesCompleted`      | Total number of times getInitialImages initiated by this cache have completed.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
-| `getInitialImagesInProgress`     | Current number of getInitialImage operations in progress.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
-| `getInitialImageTime`            | Total time spent doing getInitialImages for region creation.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
-| `gets`                           | The total number of times a successful get has been done on this cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
-| `getTime`                        | Total time spent doing get operations from this cache (including netsearch and netload).                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
-| `invalidates`                    | The total number of times an existing cache object entry value in this cache has been invalidated.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
-| `loadsCompleted`                 | Total number of times a load on this cache has completed as a result of either a local get() or a remote netload.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
-| `loadsInProgress`                | Current number of threads in this cache doing a cache load.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
-| `loadTime`                       | Total time spent invoking loaders on this cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
-| `misses`                         | Total number of times a get on the cache did not find a value already in local memory. The number of hits (that is, gets that did not miss) can be calculated by subtracting misses from gets.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
-| `netloadsCompleted`              | Total number of times a network load initiated on this cache has completed.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
-| `netloadsInProgress`             | Current number of threads doing a network load initiated by a get() in this cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
-| `netloadTime`                    | Total time spent doing network loads on this cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
-| `netsearchesCompleted`           | Total number of times network searches initiated by this cache have completed.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
-| `netsearchesInProgress`          | Current number of threads doing a network search initiated by a get() in this cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
-| `netsearchTime`                  | Total time spent doing network searches for cache values.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
-| `nonReplicatedTombstonesSize`    | The approximate number of bytes that are currently consumed by tombstones in non-replicated regions. See [Consistency for Region Updates](../../developing/distributed_regions/region_entry_versions.html#topic_CF2798D3E12647F182C2CEC4A46E2045).                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
-| `partitionedRegions`             | The current number of partitioned regions in the cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
-| `postCompressedBytes`            | Total number of bytes after compressing.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
-| `preCompressedBytes`             | Total number of bytes before compressing.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
-| `putAlls`                        | The total number of times a map is added or replaced in this cache as a result of a local operation. Note that this only counts putAlls done explicitly on this cache; it does not count updates pushed from other caches.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
-| `putallTime`                     | Total time spent replacing a map in this cache as a result of a local operation. This includes synchronizing on the map, invoking cache callbacks, sending messages to other caches and waiting for responses (if required).                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
-| `puts`                           | The total number of times an entry is added or replaced in this cache as a result of a local operation (a put(), a create(), or a get() that results in a load, netsearch, or netload of a value). Note that this only counts puts done explicitly on this cache; it does not count updates pushed from other caches.                                                                                                                                                                                                                                                                                                                                                                                                     |
-| `putTime`                        | Total time spent adding or replacing an entry in this cache as a result of a local operation. This includes synchronizing on the map, invoking cache callbacks, sending messages to other caches, and waiting for responses (if required).                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
-| `queryExecutions`                | Total number of times some query has been executed.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
-| `queryExecutionTime`             | Total time spent executing queries.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
-| `regions`                        | The current number of regions in the cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
-| `replicatedTombstonesSize`       | The approximate number of bytes that are currently consumed by tombstones in replicated or partitioned regions. See [Consistency for Region Updates](../../developing/distributed_regions/region_entry_versions.html#topic_CF2798D3E12647F182C2CEC4A46E2045).                                                                                                                                                                                                                                                                                                                                                                                                                                      |
-| `tombstoneCount`                 | The total number of tombstone entries created for performing concurrency checks. See [Consistency for Region Updates](../../developing/distributed_regions/region_entry_versions.html#topic_CF2798D3E12647F182C2CEC4A46E2045).                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
-| `tombstoneGCCount`               | The total number of tombstone garbage collection cycles that a member has performed. See [Consistency for Region Updates](../../developing/distributed_regions/region_entry_versions.html#topic_CF2798D3E12647F182C2CEC4A46E2045).                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
-| `txCommitChanges`                | Total number of changes made by committed transactions.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
-| `txCommits`                      | Total number of times a transaction commit has succeeded.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
-| `txCommitTime`                   | The total amount of time, in nanoseconds, spent doing successful transaction commits.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
-| `txConflictCheckTime`            | The total amount of time, in nanoseconds, spent doing conflict checks during transaction commit.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
-| `txFailedLifeTime`               | The total amount of time, in nanoseconds, spent in a transaction before a failed commit. The time measured starts at transaction begin and ends when commit is called.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
-| `txFailureChanges`               | Total number of changes lost by failed transactions.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
-| `txFailures`                     | Total number of times a transaction commit has failed.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
-| `txFailureTime`                  | The total amount of time, in nanoseconds, spent doing failed transaction commits.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
-| `txRollbackChanges`              | Total number of changes lost by explicit transaction rollbacks.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
-| `txRollbackLifeTime`             | The total amount of time, in nanoseconds, spent in a transaction before an explicit rollback. The time measured starts at transaction begin and ends when rollback is called.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |
-| `txRollbacks`                    | Total number of times a transaction has been explicitly rolled back.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
-| `txRollbackTime`                 | The total amount of time, in nanoseconds, spent doing explicit transaction rollbacks.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
-| `txSuccessLifeTime`              | The total amount of time, in nanoseconds, spent in a transaction before a successful commit. The time measured starts at transaction begin and ends when commit is called.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
-| `updates`                        | The total number of updates originating remotely that have been applied to this cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
-| `updateTime`                     | Total time spent performing an update.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
-
-## <a id="section_EF5C2C59BFC74FFB8607F9571AB9A471" class="no-quick-link"></a>Cache Server (CacheServerStats)
-
-Statistics used for cache servers and for gateway receivers are recorded in CacheServerStats in a cache server. The primary statistics are:
-
-| Statistic                                 | Description                                                                                                                                    |
-|-------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|
-| `abandonedReadRequests`                   | Number of read operations (requests) abandoned by clients.                                                                                     |
-| `abandonedWriteRequests`                  | Number of write operations (requests) abandoned by clients.                                                                                    |
-| `acceptsInProgress`                       | Current number of server accepts that are attempting to do the initial handshake with the client.                                              |
-| `acceptThreadStarts`                      | Total number of threads created (starts) to deal with an accepted socket. Note that this is not the current number of threads.                 |
-| `batchSize`                               | The size (in bytes) of the batches received.                                                                                                   |
-| `clearRegionRequests`                     | Number of cache client operations clearRegion requests.                                                                                        |
-| `clearRegionResponses`                    | Number of clearRegion responses written to the cache client.                                                                                   |
-| `clientNotificationRequests`              | Number of cache client operations notification requests.                                                                                       |
-| `clientReadyRequests`                     | Number of cache client ready requests.                                                                                                         |
-| `clientReadyResponses`                    | Number of client ready responses written to the cache client.                                                                                  |
-| `closeConnectionRequests`                 | Number of cache client close connection operations requests.                                                                                   |
-| `connectionLoad`                          | The load from client to server connections as reported by the load probe installed in this server.                                             |
-| `connectionsTimedOut`                     | Total number of connections that have been timed out by the server because of client inactivity.                                               |
-| `connectionThreads`                       | Current number of threads dealing with a client connection.                                                                                    |
-| `connectionThreadStarts`                  | Total number of threads created (starts) to deal with a client connection. Note that this is not the current number of threads.                |
-| `containsKeyRequests`                     | Number of cache client operations containsKey requests.                                                                                        |
-| `containsKeyResponses`                    | Number of containsKey responses written to the cache client.                                                                                   |
-| `currentClientConnections`                | Number of sockets accepted.                                                                                                                    |
-| `currentClients`                          | Number of client virtual machines (clients) connected.                                                                                         |
-| `destroyRegionRequests`                   | Number of cache client operations destroyRegion requests.                                                                                      |
-| `destroyRegionResponses`                  | Number of destroyRegion responses written to the cache client.                                                                                 |
-| `destroyRequests`                         | Number of cache client operations destroy requests.                                                                                            |
-| `destroyResponses`                        | Number of destroy responses written to the cache client.                                                                                       |
-| `failedConnectionAttempts`                | Number of failed connection attempts.                                                                                                          |
-| `getRequests`                             | Number of cache client operations get requests.                                                                                                |
-| `getResponses`                            | Number of getResponses written to the cache client.                                                                                            |
-| `loadPerConnection`                       | The estimate of how much load is added for each new connection as reported by the load probe installed in this server.                         |
-| `loadPerQueue`                            | The estimate of how much load would be added for each new subscription connection as reported by the load probe installed in this server.      |
-| `messageBytesBeingReceived`               | Current number of bytes consumed by messages being received or processed.                                                                      |
-| `messagesBeingReceived`                   | Current number of messages being received off the network or being processed after reception.                                                  |
-| `outOfOrderGatewayBatchIds`               | Number of Out of Order batch IDs (batches).                                                                                                    |
-| `processBatchRequests`                    | Number of cache client operations processBatch requests.                                                                                       |
-| `processBatchResponses`                   | Number of processBatch responses written to the cache client.                                                                                  |
-| `processBatchTime`                        | Total time, in nanoseconds, spent in processing a cache client processBatch request.                                                           |
-| `processClearRegionTime`                  | Total time, in nanoseconds, spent in processing a cache client clearRegion request, including the time to clear the region from the cache.     |
-| `processClientNotificationTime`           | Total time, in nanoseconds, spent in processing a cache client notification request.                                                           |
-| `processClientReadyTime`                  | Total time, in nanoseconds, spent in processing a cache client ready request, including the time to destroy an object from the cache.          |
-| `processCloseConnectionTime`              | Total time, in nanoseconds, spent in processing a cache client close connection request.                                                       |
-| `processContainsKeyTime`                  | Total time spent, in nanoseconds, processing a containsKey request.                                                                            |
-| `processDestroyRegionTime`                | Total time, in nanoseconds, spent in processing a cache client destroyRegion request, including the time to destroy the region from the cache. |
-| `processDestroyTime`                      | Total time, in nanoseconds, spent in processing a cache client destroy request, including the time to destroy an object from the cache.        |
-| `processGetTime`                          | Total time, in nanoseconds, spent in processing a cache client get request, including the time to get an object from the cache.                |
-| `processPutAllTime`                       | Total time, in nanoseconds, spent in processing a cache client putAll request, including the time to put all objects into the cache.           |
-| `processPutTime`                          | Total time, in nanoseconds, spent in processing a cache client put request, including the time to put an object into the cache.                |
-| `processQueryTime`                        | Total time, in nanoseconds, spent in processing a cache client query request, including the time to destroy an object from the cache.          |
-| `processUpdateClientNotificationTime`     | Total time, in nanoseconds, spent in processing a client notification update request.                                                          |
-| `putAllRequests`                          | Number of cache client operations putAll requests.                                                                                             |
-| `putAllResponses`                         | Number of putAllResponses written to the cache client.                                                                                         |
-| `putRequests`                             | Number of cache client operations put requests.                                                                                                |
-| `putResponses`                            | Number of putResponses written to the cache client.                                                                                            |
-| `queryRequests`                           | Number of cache client operations query requests.                                                                                              |
-| `queryResponses`                          | Number of query responses written to the cache client.                                                                                         |
-| `queueLoad`                               | The load from subscription queues as reported by the load probe installed in this server.                                                      |
-| `readClearRegionRequestTime`              | Total time, in nanoseconds, spent in reading clearRegion requests.                                                                             |
-| `readClientNotificationRequestTime`       | Total time, in nanoseconds, spent in reading client notification requests.                                                                     |
-| `readClientReadyRequestTime`              | Total time, in nanoseconds, spent in reading cache client ready requests.                                                                      |
-| `readCloseConnectionRequestTime`          | Total time, in nanoseconds, spent in reading close connection requests.                                                                        |
-| `readContainsKeyRequestTime`              | Total time, in nanoseconds, spent reading containsKey requests.                                                                                |
-| `readDestroyRegionRequestTime`            | Total time, in nanoseconds, spent in reading destroyRegion requests.                                                                           |
-| `readDestroyRequestTime`                  | Total time, in nanoseconds, spent in reading destroy requests.                                                                                 |
-| `readGetRequestTime`                      | Total time, in nanoseconds, spent in reading get requests.                                                                                     |
-| `readProcessBatchRequestTime`             | Total time, in nanoseconds, spent in reading processBatch requests.                                                                            |
-| `readPutAllRequestTime`                   | Total time, in nanoseconds, spent in reading putAll requests.                                                                                  |
-| `readPutRequestTime`                      | Total time, in nanoseconds, spent in reading put requests.                                                                                     |
-| `readQueryRequestTime`                    | Total time, in nanoseconds, spent in reading query requests.                                                                                   |
-| `readUpdateClientNotificationRequestTime` | Total time, in nanoseconds, spent in reading client notification update requests.                                                              |
-| `receivedBytes`                           | Total number of bytes received from clients.                                                                                                   |
-| `sentBytes`                               | Total number of bytes sent to clients.                                                                                                         |
-| `threadQueueSize`                         | Current number of connections waiting for a thread to start processing their message.                                                          |
-| `updateClientNotificationRequests`        | Number of cache client notification update requests.                                                                                           |
-| `writeClearRegionResponseTime`            | Total time, in nanoseconds, spent in writing clearRegion responses.                                                                            |
-| `writeClientReadyResponseTime`            | Total time, in nanoseconds, spent in writing client ready responses.                                                                           |
-| `writeContainsKeyResponseTime`            | Total time, in nanoseconds, spent writing containsKey responses.                                                                               |
-| `writeDestroyRegionResponseTime`          | Total time, in nanoseconds, spent in writing destroyRegion responses.                                                                          |
-| `writeDestroyResponseTime`                | Total time, in nanoseconds, spent in writing destroy responses.                                                                                |
-| `writeGetResponseTime`                    | Total time, in nanoseconds, spent in writing get responses.                                                                                    |
-| `writeProcessBatchResponseTime`           | Total time, in nanoseconds, spent in writing processBatch responses.                                                                           |
-| `writePutAllResponseTime`                 | Total time, in nanoseconds, spent in writing putAll responses.                                                                                 |
-| `writePutResponseTime`                    | Total time, in nanoseconds, spent in writing put responses.                                                                                    |
-| `writeQueryResponseTime`                  | Total time, in nanoseconds, spent in writing query responses.                                                                                  |
-
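As an aside for readers of this reference: the CacheServerStats values in the table above can also be read at runtime through Geode's Statistics API. The following is a minimal, illustrative sketch only — it assumes geode-core is on the classpath and that a cache server is running in the member, and the stat names are taken directly from the table above:

```java
import org.apache.geode.Statistics;
import org.apache.geode.StatisticsType;
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.distributed.DistributedSystem;

// Illustrative only: dump a few CacheServerStats counters from the member
// this code runs in. findType() returns null if no cache server has been
// started in this member, so the sketch handles that case.
public class CacheServerStatsDump {
  public static void main(String[] args) {
    Cache cache = new CacheFactory()
        .set("statistic-sampling-enabled", "true") // ensure stats are collected
        .create();
    DistributedSystem ds = cache.getDistributedSystem();
    StatisticsType type = ds.findType("CacheServerStats");
    if (type == null) {
      System.out.println("No cache server running in this member.");
    } else {
      for (Statistics stats : ds.findStatisticsByType(type)) {
        // Stat names come straight from the table above.
        System.out.println("currentClients           = " + stats.get("currentClients"));
        System.out.println("currentClientConnections = " + stats.get("currentClientConnections"));
        System.out.println("receivedBytes            = " + stats.get("receivedBytes"));
        System.out.println("sentBytes                = " + stats.get("sentBytes"));
      }
    }
    cache.close();
  }
}
```

The same pattern applies to the other statistics types documented in this section (for example, `CacheClientUpdaterStats` on the client side) by passing the type name shown in each heading to `findType`.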
-## <a id="section_B08C0783BBF9489E8BB48B4AEC597C62" class="no-quick-link"></a>Client-Side Notifications (CacheClientUpdaterStats)
-
-Statistics in a client that pertain to data the server pushes to the client over a subscription queue (these are the client side of the server's `CacheClientNotifierStatistics`):
-
-| Statistic                   | Description                                                                                  |
-|-----------------------------|----------------------------------------------------------------------------------------------|
-| `receivedBytes`             | Total number of bytes received from the server.                                              |
-| `messagesBeingReceived`     | Current number of messages being received off the network or being processed after reception. |
-| `messageBytesBeingReceived` | Current number of bytes consumed by messages being received or processed.                    |
-
-## <a id="section_04B7D7387E584712B7710B5ED1E876BB" class="no-quick-link"></a>Client-to-Server Messaging Performance (ClientStats)
-
-These client-side statistics describe all the messages sent from the client to a specific server. The primary statistics are:
-
-| Statistic                              | Description                                                                                   |
-|----------------------------------------|-----------------------------------------------------------------------------------------------|
-| `clearFailures`                        | Total number of clear attempts that have failed.                                              |
-| `clears`                               | Total number of clears completed successfully.                                                |
-| `clearSendFailures`                    | Total number of clearSends that have failed.                                                  |
-| `clearSends`                           | Total number of clearSends that have completed successfully.                                  |
-| `clearSendsInProgress`                 | Current number of clearSends being executed.                                                  |
-| `clearSendTime`                        | Total amount of time, in nanoseconds, spent doing clearSends.                                 |
-| `clearsInProgress`                     | Current number of clears being executed.                                                      |
-| `clearTime`                            | Total amount of time, in nanoseconds, spent doing clears.                                     |
-| `clearTimeouts`                        | Total number of clear attempts that have timed out.                                           |
-| `closeConFailures`                     | Total number of closeCon attempts that have failed.                                           |
-| `closeCons`                            | Total number of closeCons that have completed successfully.                                   |
-| `closeConSendFailures`                 | Total number of closeConSends that have failed.                                               |
-| `closeConSends`                        | Total number of closeConSends that have completed successfully.                               |
-| `closeConSendsInProgress`              | Current number of closeConSends being executed.                                               |
-| `closeConSendTime`                     | Total amount of time, in nanoseconds, spent doing closeConSends.                              |
-| `closeConsInProgress`                  | Current number of closeCons being executed.                                                   |
-| `closeConTime`                         | Total amount of time, in nanoseconds, spent doing closeCons.                                  |
-| `closeConTimeouts`                     | Total number of closeCon attempts that have timed out.                                        |
-| `connections`                          | Current number of connections.                                                                |
-| `connects`                             | Total number of times a connection has been created.                                          |
-| `containsKeyFailures`                  | Total number of containsKey attempts that have failed.                                        |
-| `containsKeys`                         | Total number of containsKeys that completed successfully.                                     |
-| `containsKeySendFailures`              | Total number of containsKeySends that have failed.                                            |
-| `containsKeySends`                     | Total number of containsKeySends that have completed successfully.                            |
-| `containsKeySendsInProgress`           | Current number of containsKeySends being executed.                                            |
-| `containsKeySendTime`                  | Total amount of time, in nanoseconds, spent doing containsKeySends.                           |
-| `containsKeysInProgress`               | Current number of containsKeys being executed.                                                |
-| `containsKeyTime`                      | Total amount of time, in nanoseconds, spent doing containsKeys.                               |
-| `containsKeyTimeouts`                  | Total number of containsKey attempts that have timed out.                                     |
-| `destroyFailures`                      | Total number of destroy attempts that have failed.                                            |
-| `destroyRegionFailures`                | Total number of destroyRegion attempts that have failed.                                      |
-| `destroyRegions`                       | Total number of destroyRegions that have completed successfully.                              |
-| `destroyRegionSendFailures`            | Total number of destroyRegionSends that have failed.                                          |
-| `destroyRegionSends`                   | Total number of destroyRegionSends that have completed successfully.                          |
-| `destroyRegionSendsInProgress`         | Current number of destroyRegionSends being executed.                                          |
-| `destroyRegionSendTime`                | Total amount of time, in nanoseconds, spent doing destroyRegionSends.                         |
-| `destroyRegionsInProgress`             | Current number of destroyRegions being executed.                                              |
-| `destroyRegionTime`                    | Total amount of time, in nanoseconds, spent doing destroyRegions.                             |
-| `destroyRegionTimeouts`                | Total number of destroyRegion attempts that have timed out.                                   |
-| `destroys`                             | Total number of destroys that have completed successfully.                                    |
-| `destroySendFailures`                  | Total number of destroySends that have failed.                                                |
-| `destroySends`                         | Total number of destroySends that have completed successfully.                                |
-| `destroySendsInProgress`               | Current number of destroySends being executed.                                                |
-| `destroySendTime`                      | Total amount of time, in nanoseconds, spent doing destroySends.                               |
-| `destroysInProgress`                   | Current number of destroys being executed.                                                    |
-| `destroyTime`                          | Total amount of time, in nanoseconds, spent doing destroys.                                   |
-| `destroyTimeouts`                      | Total number of destroy attempts that have timed out.                                         |
-| `disconnects`                          | Total number of times a connection has been destroyed.                                        |
-| `gatewayBatchFailures`                 | Total number of gatewayBatch attempts that have failed.                                       |
-| `gatewayBatchs`                        | Total number of gatewayBatchs completed successfully.                                         |
-| `gatewayBatchSendFailures`             | Total number of gatewayBatchSends that have failed.                                           |
-| `gatewayBatchSends`                    | Total number of gatewayBatchSends that have completed successfully.                           |
-| `gatewayBatchSendsInProgress`          | Current number of gatewayBatchSends being executed.                                           |
-| `gatewayBatchSendTime`                 | Total amount of time, in nanoseconds, spent doing gatewayBatchSends.                          |
-| `gatewayBatchsInProgress`              | Current number of gatewayBatchs being executed.                                               |
-| `gatewayBatchTime`                     | Total amount of time, in nanoseconds, spent doing gatewayBatchs. 

<TRUNCATED>

[10/51] [abbrv] geode git commit: GEODE-3386 - Make KeyedErrorResponse & ErrorResponse siblings

Posted by kl...@apache.org.
GEODE-3386 - Make KeyedErrorResponse & ErrorResponse siblings
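Before this change, `KeyedErrorResponse` embedded a full `ErrorResponse`; making them siblings means both now wrap a shared `Error` message. The resulting proto definitions (as introduced in the basicTypes.proto hunk of this commit) look like:

```proto
message Error {
    int32 errorCode = 1;
    string message = 2;
}

message ErrorResponse {
    Error error = 1;
}

message KeyedErrorResponse {
    EncodedValue key = 1;
    Error error = 2;
}
```

One consequence visible in the test hunks: callers that previously read `getErrorMessage().getErrorCode()` now go through the nested message, `getErrorMessage().getError().getErrorCode()`.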


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/bfbe3e56
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/bfbe3e56
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/bfbe3e56

Branch: refs/heads/feature/GEODE-1279
Commit: bfbe3e5649158f45f797efcc389f77de88ddaf5a
Parents: d295876
Author: Alexander Murmann <am...@pivotal.io>
Authored: Wed Aug 9 11:52:17 2017 -0700
Committer: Udo Kohlmeyer <uk...@pivotal.io>
Committed: Tue Aug 15 09:03:31 2017 -0700

----------------------------------------------------------------------
 .../GetAllRequestOperationHandler.java          | 18 +++++++--------
 .../GetAvailableServersOperationHandler.java    |  6 ++---
 .../GetRegionRequestOperationHandler.java       |  8 +++----
 .../operations/GetRequestOperationHandler.java  | 17 +++++++-------
 .../PutAllRequestOperationHandler.java          |  7 +++---
 .../operations/PutRequestOperationHandler.java  | 24 ++++++++------------
 .../RemoveRequestOperationHandler.java          |  5 ++--
 .../utilities/ProtobufResponseUtilities.java    | 10 ++++++--
 geode-protobuf/src/main/proto/basicTypes.proto  |  8 +++++--
 ...tRegionRequestOperationHandlerJUnitTest.java |  2 +-
 .../GetRequestOperationHandlerJUnitTest.java    |  4 ++--
 .../PutRequestOperationHandlerJUnitTest.java    |  6 ++---
 .../RemoveRequestOperationHandlerJUnitTest.java |  4 ++--
 13 files changed, 62 insertions(+), 57 deletions(-)
----------------------------------------------------------------------
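The handler changes all follow one pattern: the repeated inline `BasicTypes.ErrorResponse.newBuilder()...build()` chains are replaced with a single `ProtobufResponseUtilities.makeErrorResponse(int, String)` helper that builds the nested `Error` message. A minimal sketch of that call shape, using hypothetical plain Java classes in place of the generated protobuf builders (the real generated API differs):

```java
// Hypothetical stand-ins for the generated protobuf messages; names and
// shapes are illustrative only, not the actual generated classes.
final class ProtoError {
    final int errorCode;
    final String message;

    ProtoError(int errorCode, String message) {
        this.errorCode = errorCode;
        this.message = message;
    }
}

final class ErrorResponse {
    // After GEODE-3386, an ErrorResponse wraps a shared Error message
    final ProtoError error;

    ErrorResponse(ProtoError error) {
        this.error = error;
    }
}

public class ErrorResponseSketch {
    // Mirrors the new ProtobufResponseUtilities.makeErrorResponse(int, String)
    // helper: one place that constructs the nested error payload.
    static ErrorResponse makeErrorResponse(int errorCode, String message) {
        return new ErrorResponse(new ProtoError(errorCode, message));
    }

    public static void main(String[] args) {
        ErrorResponse r = makeErrorResponse(1, "Region not found");
        // Tests in this commit now read the code through the nested message,
        // i.e. response.getErrorMessage().getError().getErrorCode();
        // in this sketch that corresponds to r.error.errorCode.
        assert r.error.errorCode == 1;
        assert "Region not found".equals(r.error.message);
    }
}
```

Centralizing construction in one helper is what lets the later `PutRequestOperationHandler` hunk collapse two identical catch blocks into a single multi-catch.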


http://git-wip-us.apache.org/repos/asf/geode/blob/bfbe3e56/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAllRequestOperationHandler.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAllRequestOperationHandler.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAllRequestOperationHandler.java
index 7f2ffe4..607d1d2 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAllRequestOperationHandler.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAllRequestOperationHandler.java
@@ -28,6 +28,7 @@ import org.apache.geode.protocol.protobuf.ProtocolErrorCode;
 import org.apache.geode.protocol.protobuf.RegionAPI;
 import org.apache.geode.protocol.protobuf.Result;
 import org.apache.geode.protocol.protobuf.Success;
+import org.apache.geode.protocol.protobuf.utilities.ProtobufResponseUtilities;
 import org.apache.geode.protocol.protobuf.utilities.ProtobufUtilities;
 import org.apache.geode.serialization.SerializationService;
 import org.apache.geode.serialization.exception.UnsupportedEncodingTypeException;
@@ -43,9 +44,8 @@ public class GetAllRequestOperationHandler
     String regionName = request.getRegionName();
     Region region = cache.getRegion(regionName);
     if (region == null) {
-      return Failure.of(BasicTypes.ErrorResponse.newBuilder()
-          .setErrorCode(ProtocolErrorCode.REGION_NOT_FOUND.codeValue).setMessage("Region not found")
-          .build());
+      return Failure.of(ProtobufResponseUtilities
+          .makeErrorResponse(ProtocolErrorCode.REGION_NOT_FOUND.codeValue, "Region not found"));
     }
 
     try {
@@ -61,13 +61,13 @@ public class GetAllRequestOperationHandler
       }
       return Success.of(RegionAPI.GetAllResponse.newBuilder().addAllEntries(entries).build());
     } catch (UnsupportedEncodingTypeException ex) {
-      return Failure.of(BasicTypes.ErrorResponse.newBuilder()
-          .setErrorCode(ProtocolErrorCode.VALUE_ENCODING_ERROR.codeValue)
-          .setMessage("Encoding not supported.").build());
+      return Failure.of(ProtobufResponseUtilities.makeErrorResponse(
+          ProtocolErrorCode.VALUE_ENCODING_ERROR.codeValue, "Encoding not supported."));
     } catch (CodecNotRegisteredForTypeException ex) {
-      return Failure.of(BasicTypes.ErrorResponse.newBuilder()
-          .setErrorCode(ProtocolErrorCode.VALUE_ENCODING_ERROR.codeValue)
-          .setMessage("Codec error in protobuf deserialization.").build());
+      return Failure.of(ProtobufResponseUtilities.makeErrorResponse(
+          ProtocolErrorCode.VALUE_ENCODING_ERROR.codeValue,
+          "Codec error in protobuf deserialization."));
     }
   }
+
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/bfbe3e56/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAvailableServersOperationHandler.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAvailableServersOperationHandler.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAvailableServersOperationHandler.java
index e58c8cd..239d9f7 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAvailableServersOperationHandler.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetAvailableServersOperationHandler.java
@@ -40,6 +40,7 @@ import org.apache.geode.protocol.protobuf.ProtocolErrorCode;
 import org.apache.geode.protocol.protobuf.Result;
 import org.apache.geode.protocol.protobuf.ServerAPI;
 import org.apache.geode.protocol.protobuf.Success;
+import org.apache.geode.protocol.protobuf.utilities.ProtobufResponseUtilities;
 import org.apache.geode.serialization.SerializationService;
 
 @Experimental
@@ -73,9 +74,8 @@ public class GetAvailableServersOperationHandler implements
         // try the next locator
       }
     }
-    return Failure.of(BasicTypes.ErrorResponse.newBuilder()
-        .setErrorCode(ProtocolErrorCode.DATA_UNREACHABLE.codeValue)
-        .setMessage("Unable to find a locator").build());
+    return Failure.of(ProtobufResponseUtilities.makeErrorResponse(
+        ProtocolErrorCode.DATA_UNREACHABLE.codeValue, "Unable to find a locator"));
   }
 
   private Result<ServerAPI.GetAvailableServersResponse> getGetAvailableServersFromLocator(

http://git-wip-us.apache.org/repos/asf/geode/blob/bfbe3e56/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRegionRequestOperationHandler.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRegionRequestOperationHandler.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRegionRequestOperationHandler.java
index 3814bf6..b563a5d 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRegionRequestOperationHandler.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRegionRequestOperationHandler.java
@@ -24,6 +24,7 @@ import org.apache.geode.protocol.protobuf.ProtocolErrorCode;
 import org.apache.geode.protocol.protobuf.RegionAPI;
 import org.apache.geode.protocol.protobuf.Result;
 import org.apache.geode.protocol.protobuf.Success;
+import org.apache.geode.protocol.protobuf.utilities.ProtobufResponseUtilities;
 import org.apache.geode.protocol.protobuf.utilities.ProtobufUtilities;
 import org.apache.geode.serialization.SerializationService;
 
@@ -34,14 +35,13 @@ public class GetRegionRequestOperationHandler
   @Override
   public Result<RegionAPI.GetRegionResponse> process(SerializationService serializationService,
       RegionAPI.GetRegionRequest request, Cache cache) {
-
     String regionName = request.getRegionName();
 
     Region region = cache.getRegion(regionName);
     if (region == null) {
-      return Failure.of(BasicTypes.ErrorResponse.newBuilder()
-          .setErrorCode(ProtocolErrorCode.REGION_NOT_FOUND.codeValue)
-          .setMessage("No region exists for name: " + regionName).build());
+      return Failure.of(
+          ProtobufResponseUtilities.makeErrorResponse(ProtocolErrorCode.REGION_NOT_FOUND.codeValue,
+              "No region exists for name: " + regionName));
     }
 
     BasicTypes.Region protoRegion = ProtobufUtilities.createRegionMessageFromRegion(region);

http://git-wip-us.apache.org/repos/asf/geode/blob/bfbe3e56/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRequestOperationHandler.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRequestOperationHandler.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRequestOperationHandler.java
index 1086bca..96c0282 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRequestOperationHandler.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/GetRequestOperationHandler.java
@@ -24,6 +24,7 @@ import org.apache.geode.protocol.protobuf.ProtocolErrorCode;
 import org.apache.geode.protocol.protobuf.RegionAPI;
 import org.apache.geode.protocol.protobuf.Result;
 import org.apache.geode.protocol.protobuf.Success;
+import org.apache.geode.protocol.protobuf.utilities.ProtobufResponseUtilities;
 import org.apache.geode.protocol.protobuf.utilities.ProtobufUtilities;
 import org.apache.geode.serialization.SerializationService;
 import org.apache.geode.serialization.exception.UnsupportedEncodingTypeException;
@@ -39,9 +40,8 @@ public class GetRequestOperationHandler
     String regionName = request.getRegionName();
     Region region = cache.getRegion(regionName);
     if (region == null) {
-      return Failure.of(BasicTypes.ErrorResponse.newBuilder()
-          .setErrorCode(ProtocolErrorCode.REGION_NOT_FOUND.codeValue).setMessage("Region not found")
-          .build());
+      return Failure.of(ProtobufResponseUtilities
+          .makeErrorResponse(ProtocolErrorCode.REGION_NOT_FOUND.codeValue, "Region not found"));
     }
 
     try {
@@ -56,13 +56,12 @@ public class GetRequestOperationHandler
           ProtobufUtilities.createEncodedValue(serializationService, resultValue);
       return Success.of(RegionAPI.GetResponse.newBuilder().setResult(encodedValue).build());
     } catch (UnsupportedEncodingTypeException ex) {
-      return Failure.of(BasicTypes.ErrorResponse.newBuilder()
-          .setErrorCode(ProtocolErrorCode.VALUE_ENCODING_ERROR.codeValue)
-          .setMessage("Encoding not supported.").build());
+      return Failure.of(ProtobufResponseUtilities.makeErrorResponse(
+          ProtocolErrorCode.VALUE_ENCODING_ERROR.codeValue, "Encoding not supported."));
     } catch (CodecNotRegisteredForTypeException ex) {
-      return Failure.of(BasicTypes.ErrorResponse.newBuilder()
-          .setErrorCode(ProtocolErrorCode.VALUE_ENCODING_ERROR.codeValue)
-          .setMessage("Codec error in protobuf deserialization.").build());
+      return Failure.of(ProtobufResponseUtilities.makeErrorResponse(
+          ProtocolErrorCode.VALUE_ENCODING_ERROR.codeValue,
+          "Codec error in protobuf deserialization."));
     }
   }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/bfbe3e56/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutAllRequestOperationHandler.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutAllRequestOperationHandler.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutAllRequestOperationHandler.java
index 33e3ade..253a95d 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutAllRequestOperationHandler.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutAllRequestOperationHandler.java
@@ -82,9 +82,10 @@ public class PutAllRequestOperationHandler
   private BasicTypes.KeyedErrorResponse buildAndLogKeyedError(BasicTypes.Entry entry,
       ProtocolErrorCode errorCode, String message, Exception ex) {
     logger.error(message, ex);
-    BasicTypes.ErrorResponse errorResponse = BasicTypes.ErrorResponse.newBuilder()
-        .setErrorCode(errorCode.codeValue).setMessage(message).build();
-    return BasicTypes.KeyedErrorResponse.newBuilder().setKey(entry.getKey()).setError(errorResponse)
+
+    return BasicTypes.KeyedErrorResponse.newBuilder().setKey(entry.getKey())
+        .setError(
+            BasicTypes.Error.newBuilder().setErrorCode(errorCode.codeValue).setMessage(message))
         .build();
   }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/bfbe3e56/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutRequestOperationHandler.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutRequestOperationHandler.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutRequestOperationHandler.java
index 637d8f1..c24fb29 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutRequestOperationHandler.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/PutRequestOperationHandler.java
@@ -24,6 +24,7 @@ import org.apache.geode.protocol.protobuf.ProtocolErrorCode;
 import org.apache.geode.protocol.protobuf.RegionAPI;
 import org.apache.geode.protocol.protobuf.Result;
 import org.apache.geode.protocol.protobuf.Success;
+import org.apache.geode.protocol.protobuf.utilities.ProtobufResponseUtilities;
 import org.apache.geode.protocol.protobuf.utilities.ProtobufUtilities;
 import org.apache.geode.serialization.SerializationService;
 import org.apache.geode.serialization.exception.UnsupportedEncodingTypeException;
@@ -39,9 +40,9 @@ public class PutRequestOperationHandler
     String regionName = request.getRegionName();
     Region region = cache.getRegion(regionName);
     if (region == null) {
-      return Failure.of(BasicTypes.ErrorResponse.newBuilder()
-          .setErrorCode(ProtocolErrorCode.REGION_NOT_FOUND.codeValue)
-          .setMessage("Region passed by client did not exist: " + regionName).build());
+      return Failure.of(
+          ProtobufResponseUtilities.makeErrorResponse(ProtocolErrorCode.REGION_NOT_FOUND.codeValue,
+              "Region passed by client did not exist: " + regionName));
     }
 
     try {
@@ -53,18 +54,13 @@ public class PutRequestOperationHandler
         region.put(decodedKey, decodedValue);
         return Success.of(RegionAPI.PutResponse.newBuilder().build());
       } catch (ClassCastException ex) {
-        return Failure.of(BasicTypes.ErrorResponse.newBuilder()
-            .setErrorCode(ProtocolErrorCode.CONSTRAINT_VIOLATION.codeValue)
-            .setMessage("invalid key or value type for region " + regionName).build());
+        return Failure.of(ProtobufResponseUtilities.makeErrorResponse(
+            ProtocolErrorCode.CONSTRAINT_VIOLATION.codeValue,
+            "invalid key or value type for region " + regionName));
       }
-    } catch (UnsupportedEncodingTypeException ex) {
-      return Failure.of(BasicTypes.ErrorResponse.newBuilder()
-          .setErrorCode(ProtocolErrorCode.VALUE_ENCODING_ERROR.codeValue)
-          .setMessage(ex.getMessage()).build());
-    } catch (CodecNotRegisteredForTypeException ex) {
-      return Failure.of(BasicTypes.ErrorResponse.newBuilder()
-          .setErrorCode(ProtocolErrorCode.VALUE_ENCODING_ERROR.codeValue)
-          .setMessage(ex.getMessage()).build());
+    } catch (UnsupportedEncodingTypeException | CodecNotRegisteredForTypeException ex) {
+      return Failure.of(ProtobufResponseUtilities
+          .makeErrorResponse(ProtocolErrorCode.VALUE_ENCODING_ERROR.codeValue, ex.getMessage()));
     }
   }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/bfbe3e56/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/RemoveRequestOperationHandler.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/RemoveRequestOperationHandler.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/RemoveRequestOperationHandler.java
index dbc58bf..59236be 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/RemoveRequestOperationHandler.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/operations/RemoveRequestOperationHandler.java
@@ -45,9 +45,8 @@ public class RemoveRequestOperationHandler
     String regionName = request.getRegionName();
     Region region = cache.getRegion(regionName);
     if (region == null) {
-      return Failure.of(BasicTypes.ErrorResponse.newBuilder()
-          .setErrorCode(ProtocolErrorCode.REGION_NOT_FOUND.codeValue).setMessage("Region not found")
-          .build());
+      return Failure.of(ProtobufResponseUtilities
+          .makeErrorResponse(ProtocolErrorCode.REGION_NOT_FOUND.codeValue, "Region not found"));
     }
 
     try {

http://git-wip-us.apache.org/repos/asf/geode/blob/bfbe3e56/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufResponseUtilities.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufResponseUtilities.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufResponseUtilities.java
index 7bc766e..cedb11a 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufResponseUtilities.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/utilities/ProtobufResponseUtilities.java
@@ -24,6 +24,7 @@ import org.apache.geode.protocol.protobuf.BasicTypes;
 import org.apache.geode.protocol.protobuf.ProtocolErrorCode;
 import org.apache.geode.protocol.protobuf.RegionAPI;
 
+
 /**
  * This class contains helper functions for generating ClientProtocol.Response objects.
  * <p>
@@ -49,8 +50,7 @@ public abstract class ProtobufResponseUtilities {
     } else {
       logger.error(errorMessage);
     }
-    return BasicTypes.ErrorResponse.newBuilder().setErrorCode(errorCode.codeValue)
-        .setMessage(errorMessage).build();
+    return makeErrorResponse(errorCode.codeValue, errorMessage);
   }
 
   /**
@@ -68,4 +68,10 @@ public abstract class ProtobufResponseUtilities {
     }
     return builder.build();
   }
+
+  public static BasicTypes.ErrorResponse makeErrorResponse(int errorCode, String message) {
+    return BasicTypes.ErrorResponse.newBuilder()
+        .setError(BasicTypes.Error.newBuilder().setErrorCode(errorCode).setMessage(message))
+        .build();
+  }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/bfbe3e56/geode-protobuf/src/main/proto/basicTypes.proto
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/proto/basicTypes.proto b/geode-protobuf/src/main/proto/basicTypes.proto
index 684e4c8..5f63f6c 100644
--- a/geode-protobuf/src/main/proto/basicTypes.proto
+++ b/geode-protobuf/src/main/proto/basicTypes.proto
@@ -77,12 +77,16 @@ message Server {
     int32 port = 2;
 }
 
-message ErrorResponse {
+message Error {
     int32 errorCode = 1;
     string message = 2;
 }
 
+message ErrorResponse {
+    Error error = 1;
+}
+
 message KeyedErrorResponse {
     EncodedValue key = 1;
-    ErrorResponse error = 2;
+    Error error = 2;
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/bfbe3e56/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/GetRegionRequestOperationHandlerJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/GetRegionRequestOperationHandlerJUnitTest.java b/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/GetRegionRequestOperationHandlerJUnitTest.java
index a1f67df..5cfa6b3 100644
--- a/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/GetRegionRequestOperationHandlerJUnitTest.java
+++ b/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/GetRegionRequestOperationHandlerJUnitTest.java
@@ -99,6 +99,6 @@ public class GetRegionRequestOperationHandlerJUnitTest extends OperationHandlerJ
         MessageUtil.makeGetRegionRequest(unknownRegionName), emptyCache);
     Assert.assertTrue(result instanceof Failure);
     Assert.assertEquals(ProtocolErrorCode.REGION_NOT_FOUND.codeValue,
-        result.getErrorMessage().getErrorCode());
+        result.getErrorMessage().getError().getErrorCode());
   }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/bfbe3e56/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/GetRequestOperationHandlerJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/GetRequestOperationHandlerJUnitTest.java b/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/GetRequestOperationHandlerJUnitTest.java
index a632532..0213bf7 100644
--- a/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/GetRequestOperationHandlerJUnitTest.java
+++ b/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/GetRequestOperationHandlerJUnitTest.java
@@ -92,7 +92,7 @@ public class GetRequestOperationHandlerJUnitTest extends OperationHandlerJUnitTe
 
     Assert.assertTrue(response instanceof Failure);
     Assert.assertEquals(ProtocolErrorCode.REGION_NOT_FOUND.codeValue,
-        response.getErrorMessage().getErrorCode());
+        response.getErrorMessage().getError().getErrorCode());
   }
 
   @Test
@@ -137,7 +137,7 @@ public class GetRequestOperationHandlerJUnitTest extends OperationHandlerJUnitTe
 
     Assert.assertTrue(response instanceof Failure);
     Assert.assertEquals(ProtocolErrorCode.VALUE_ENCODING_ERROR.codeValue,
-        response.getErrorMessage().getErrorCode());
+        response.getErrorMessage().getError().getErrorCode());
   }
 
   private RegionAPI.GetRequest generateTestRequest(boolean missingRegion, boolean missingKey,

http://git-wip-us.apache.org/repos/asf/geode/blob/bfbe3e56/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/PutRequestOperationHandlerJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/PutRequestOperationHandlerJUnitTest.java b/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/PutRequestOperationHandlerJUnitTest.java
index 9fdadd8..fc697e4 100644
--- a/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/PutRequestOperationHandlerJUnitTest.java
+++ b/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/PutRequestOperationHandlerJUnitTest.java
@@ -100,7 +100,7 @@ public class PutRequestOperationHandlerJUnitTest extends OperationHandlerJUnitTe
 
     assertTrue(result instanceof Failure);
     assertEquals(ProtocolErrorCode.VALUE_ENCODING_ERROR.codeValue,
-        result.getErrorMessage().getErrorCode());
+        result.getErrorMessage().getError().getErrorCode());
   }
 
   @Test
@@ -113,7 +113,7 @@ public class PutRequestOperationHandlerJUnitTest extends OperationHandlerJUnitTe
 
     assertTrue(result instanceof Failure);
     assertEquals(ProtocolErrorCode.REGION_NOT_FOUND.codeValue,
-        result.getErrorMessage().getErrorCode());
+        result.getErrorMessage().getError().getErrorCode());
   }
 
   @Test
@@ -127,7 +127,7 @@ public class PutRequestOperationHandlerJUnitTest extends OperationHandlerJUnitTe
 
     assertTrue(result instanceof Failure);
     assertEquals(ProtocolErrorCode.CONSTRAINT_VIOLATION.codeValue,
-        result.getErrorMessage().getErrorCode());
+        result.getErrorMessage().getError().getErrorCode());
   }
 
   private RegionAPI.PutRequest generateTestRequest()

http://git-wip-us.apache.org/repos/asf/geode/blob/bfbe3e56/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/RemoveRequestOperationHandlerJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/RemoveRequestOperationHandlerJUnitTest.java b/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/RemoveRequestOperationHandlerJUnitTest.java
index 797538c..3b917b7 100644
--- a/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/RemoveRequestOperationHandlerJUnitTest.java
+++ b/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/operations/RemoveRequestOperationHandlerJUnitTest.java
@@ -90,7 +90,7 @@ public class RemoveRequestOperationHandlerJUnitTest extends OperationHandlerJUni
 
     assertTrue(result instanceof Failure);
     assertEquals(ProtocolErrorCode.REGION_NOT_FOUND.codeValue,
-        result.getErrorMessage().getErrorCode());
+        result.getErrorMessage().getError().getErrorCode());
   }
 
   @Test
@@ -125,7 +125,7 @@ public class RemoveRequestOperationHandlerJUnitTest extends OperationHandlerJUni
 
     assertTrue(result instanceof Failure);
     assertEquals(ProtocolErrorCode.VALUE_ENCODING_ERROR.codeValue,
-        result.getErrorMessage().getErrorCode());
+        result.getErrorMessage().getError().getErrorCode());
   }
 
   private ClientProtocol.Request generateTestRequest(boolean missingRegion, boolean missingKey)


[33/51] [abbrv] geode git commit: GEODE-3329: Empty commit to close PR

Posted by kl...@apache.org.
GEODE-3329: Empty commit to close PR

This closes #666


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/1c2418bd
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/1c2418bd
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/1c2418bd

Branch: refs/heads/feature/GEODE-1279
Commit: 1c2418bdcae8dbf5befd8b0ad1c3fe3a03156dd2
Parents: 04c446a
Author: Dan Smith <up...@apache.org>
Authored: Thu Aug 17 14:13:06 2017 -0700
Committer: Dan Smith <up...@apache.org>
Committed: Thu Aug 17 14:13:06 2017 -0700

----------------------------------------------------------------------

----------------------------------------------------------------------



[34/51] [abbrv] geode git commit: GEODE-3437: Fix list and describe region tests

Posted by kl...@apache.org.
GEODE-3437: Fix list and describe region tests


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/82fad645
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/82fad645
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/82fad645

Branch: refs/heads/feature/GEODE-1279
Commit: 82fad6453a3ac1978ab838ce5bb146b1a0329564
Parents: 1c2418b
Author: Jared Stewart <js...@pivotal.io>
Authored: Mon Aug 14 09:55:59 2017 -0700
Committer: Jared Stewart <js...@pivotal.io>
Committed: Thu Aug 17 14:47:18 2017 -0700

----------------------------------------------------------------------
 .../ListAndDescribeRegionDUnitTest.java         | 460 +++++++++----------
 .../dunit/rules/LocatorServerStartupRule.java   |  23 +-
 2 files changed, 230 insertions(+), 253 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/82fad645/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/ListAndDescribeRegionDUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/ListAndDescribeRegionDUnitTest.java b/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/ListAndDescribeRegionDUnitTest.java
index ab8c69b..ed4353d 100644
--- a/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/ListAndDescribeRegionDUnitTest.java
+++ b/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/ListAndDescribeRegionDUnitTest.java
@@ -14,34 +14,54 @@
  */
 package org.apache.geode.management.internal.cli.commands;
 
-import static org.apache.geode.distributed.ConfigurationProperties.*;
+import static org.apache.geode.distributed.ConfigurationProperties.ENABLE_TIME_STATISTICS;
+import static org.apache.geode.distributed.ConfigurationProperties.GROUPS;
+import static org.apache.geode.distributed.ConfigurationProperties.LOCATORS;
+import static org.apache.geode.distributed.ConfigurationProperties.LOG_LEVEL;
+import static org.apache.geode.distributed.ConfigurationProperties.MCAST_PORT;
+import static org.apache.geode.distributed.ConfigurationProperties.NAME;
+import static org.apache.geode.distributed.ConfigurationProperties.STATISTIC_SAMPLING_ENABLED;
+import static org.apache.geode.management.internal.cli.commands.CliCommandTestBase.commandResultToString;
+import static org.apache.geode.management.internal.cli.i18n.CliStrings.DESCRIBE_REGION;
+import static org.apache.geode.management.internal.cli.i18n.CliStrings.DESCRIBE_REGION__NAME;
+import static org.apache.geode.management.internal.cli.i18n.CliStrings.GROUP;
+import static org.apache.geode.management.internal.cli.i18n.CliStrings.LIST_REGION;
+import static org.apache.geode.management.internal.cli.i18n.CliStrings.MEMBER;
+import static org.assertj.core.api.Assertions.assertThat;
+
+import java.io.Serializable;
+import java.util.Properties;
 
-import org.apache.geode.cache.*;
-import org.apache.geode.cache.util.CacheListenerAdapter;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.cache.EvictionAction;
+import org.apache.geode.cache.EvictionAttributes;
+import org.apache.geode.cache.FixedPartitionAttributes;
+import org.apache.geode.cache.PartitionAttributes;
+import org.apache.geode.cache.PartitionAttributesFactory;
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.RegionFactory;
+import org.apache.geode.cache.RegionShortcut;
 import org.apache.geode.compression.SnappyCompressor;
 import org.apache.geode.internal.cache.RegionEntryContext;
-import org.apache.geode.management.cli.Result.Status;
-import org.apache.geode.management.internal.cli.i18n.CliStrings;
 import org.apache.geode.management.internal.cli.result.CommandResult;
 import org.apache.geode.management.internal.cli.util.CommandStringBuilder;
 import org.apache.geode.management.internal.cli.util.RegionAttributesNames;
 import org.apache.geode.test.dunit.Host;
-import org.apache.geode.test.dunit.SerializableRunnable;
 import org.apache.geode.test.dunit.VM;
+import org.apache.geode.test.dunit.rules.GfshShellConnectionRule;
+import org.apache.geode.test.dunit.rules.LocatorServerStartupRule;
+import org.apache.geode.test.dunit.rules.MemberVM;
 import org.apache.geode.test.junit.categories.DistributedTest;
 import org.apache.geode.test.junit.categories.FlakyTest;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-
-import java.util.Properties;
-
-import static org.apache.geode.distributed.ConfigurationProperties.MCAST_PORT;
-import static org.apache.geode.test.dunit.Assert.*;
-import static org.apache.geode.test.dunit.LogWriterUtils.getLogWriter;
 
 @Category(DistributedTest.class)
-public class ListAndDescribeRegionDUnitTest extends CliCommandTestBase {
-
+public class ListAndDescribeRegionDUnitTest implements Serializable {
   private static final String REGION1 = "region1";
   private static final String REGION2 = "region2";
   private static final String REGION3 = "region3";
@@ -51,111 +71,193 @@ public class ListAndDescribeRegionDUnitTest extends CliCommandTestBase {
   private static final String PR1 = "PR1";
   private static final String LOCALREGIONONMANAGER = "LocalRegionOnManager";
 
-  static class CacheListener2 extends CacheListenerAdapter {
+  @ClassRule
+  public static LocatorServerStartupRule lsRule = new LocatorServerStartupRule();
+
+  @ClassRule
+  public static GfshShellConnectionRule gfshShellConnectionRule = new GfshShellConnectionRule();
+
+  @BeforeClass
+  public static void setupSystem() throws Exception {
+    final Properties locatorProps = createProperties("Locator", "G3");
+    MemberVM locator = lsRule.startLocatorVM(0, locatorProps);
+
+    final Properties managerProps = createProperties("Manager", "G1");
+    managerProps.setProperty(LOCATORS, "localhost[" + locator.getPort() + "]");
+    MemberVM manager = lsRule.startServerVM(1, managerProps, locator.getPort());
+
+    final Properties serverProps = createProperties("Server", "G2");
+    MemberVM server = lsRule.startServerVM(2, serverProps, locator.getPort());
+
+    manager.invoke(() -> {
+      final Cache cache = CacheFactory.getAnyInstance();
+      RegionFactory<String, Integer> dataRegionFactory =
+          cache.createRegionFactory(RegionShortcut.PARTITION);
+      dataRegionFactory.setConcurrencyLevel(4);
+      EvictionAttributes ea =
+          EvictionAttributes.createLIFOEntryAttributes(100, EvictionAction.LOCAL_DESTROY);
+      dataRegionFactory.setEvictionAttributes(ea);
+      dataRegionFactory.setEnableAsyncConflation(true);
+
+      FixedPartitionAttributes fpa = FixedPartitionAttributes.createFixedPartition("Par1", true);
+      PartitionAttributes pa = new PartitionAttributesFactory().setLocalMaxMemory(100)
+          .setRecoveryDelay(2).setTotalMaxMemory(200).setRedundantCopies(1)
+          .addFixedPartitionAttributes(fpa).create();
+      dataRegionFactory.setPartitionAttributes(pa);
+
+      dataRegionFactory.create(PR1);
+      createLocalRegion(LOCALREGIONONMANAGER);
+    });
+
+    server.invoke(() -> {
+      final Cache cache = CacheFactory.getAnyInstance();
+      RegionFactory<String, Integer> dataRegionFactory =
+          cache.createRegionFactory(RegionShortcut.PARTITION);
+      dataRegionFactory.setConcurrencyLevel(4);
+      EvictionAttributes ea =
+          EvictionAttributes.createLIFOEntryAttributes(100, EvictionAction.LOCAL_DESTROY);
+      dataRegionFactory.setEvictionAttributes(ea);
+      dataRegionFactory.setEnableAsyncConflation(true);
+
+      FixedPartitionAttributes fpa = FixedPartitionAttributes.createFixedPartition("Par2", 4);
+      PartitionAttributes pa = new PartitionAttributesFactory().setLocalMaxMemory(150)
+          .setRecoveryDelay(4).setTotalMaxMemory(200).setRedundantCopies(1)
+          .addFixedPartitionAttributes(fpa).create();
+      dataRegionFactory.setPartitionAttributes(pa);
+
+      dataRegionFactory.create(PR1);
+      createRegionsWithSubRegions();
+    });
+
+    gfshShellConnectionRule.connectAndVerify(locator);
   }
 
-  static class CacheListener1 extends CacheListenerAdapter {
+  @Test
+  public void listAllRegions() throws Exception {
+    CommandStringBuilder csb = new CommandStringBuilder(LIST_REGION);
+    CommandResult commandResult = gfshShellConnectionRule.executeAndVerifyCommand(csb.toString());
+    String commandResultString = commandResultToString(commandResult);
+    assertThat(commandResultString).contains(PR1);
+    assertThat(commandResultString).contains(LOCALREGIONONMANAGER);
+    assertThat(commandResultString).contains(REGION1);
+    assertThat(commandResultString).contains(REGION2);
+    assertThat(commandResultString).contains(REGION3);
   }
 
-  private Properties createProperties(String name, String groups) {
-    Properties props = new Properties();
-    props.setProperty(MCAST_PORT, "0");
-    props.setProperty(LOG_LEVEL, "info");
-    props.setProperty(STATISTIC_SAMPLING_ENABLED, "true");
-    props.setProperty(ENABLE_TIME_STATISTICS, "true");
-    props.setProperty(NAME, name);
-    props.setProperty(GROUPS, groups);
-    return props;
+  @Test
+  public void listRegionsOnManager() throws Exception {
+    CommandStringBuilder csb = new CommandStringBuilder(LIST_REGION);
+    csb.addOption(MEMBER, "Manager");
+    CommandResult commandResult = gfshShellConnectionRule.executeAndVerifyCommand(csb.toString());
+    String commandResultString = commandResultToString(commandResult);
+    assertThat(commandResultString).contains(PR1);
+    assertThat(commandResultString).contains(LOCALREGIONONMANAGER);
   }
 
-  private void createPartitionedRegion1() {
-    final Cache cache = getCache();
-    // Create the data region
-    RegionFactory<String, Integer> dataRegionFactory =
-        cache.createRegionFactory(RegionShortcut.PARTITION);
-    dataRegionFactory.create(PR1);
+  @Test
+  public void listRegionsOnServer() throws Exception {
+    CommandStringBuilder csb = new CommandStringBuilder(LIST_REGION);
+    csb.addOption(MEMBER, "Server");
+    CommandResult commandResult = gfshShellConnectionRule.executeAndVerifyCommand(csb.toString());
+    String commandResultString = commandResultToString(commandResult);
+    assertThat(commandResultString).contains(PR1);
+    assertThat(commandResultString).contains(REGION1);
+    assertThat(commandResultString).contains(REGION2);
+    assertThat(commandResultString).contains(REGION3);
+    assertThat(commandResultString).contains(SUBREGION1A);
   }
 
-  private void setupSystem() {
-    final Properties managerProps = createProperties("Manager", "G1");
-    setUpJmxManagerOnVm0ThenConnect(managerProps);
-
-    final Properties server1Props = createProperties("Server1", "G2");
-    final Host host = Host.getHost(0);
-    final VM[] servers = {host.getVM(0), host.getVM(1)};
-
-    // The mananger VM
-    servers[0].invoke(new SerializableRunnable() {
-      public void run() {
-        final Cache cache = getCache();
-        RegionFactory<String, Integer> dataRegionFactory =
-            cache.createRegionFactory(RegionShortcut.PARTITION);
-        dataRegionFactory.setConcurrencyLevel(4);
-        EvictionAttributes ea =
-            EvictionAttributes.createLIFOEntryAttributes(100, EvictionAction.LOCAL_DESTROY);
-        dataRegionFactory.setEvictionAttributes(ea);
-        dataRegionFactory.setEnableAsyncConflation(true);
-
-        FixedPartitionAttributes fpa = FixedPartitionAttributes.createFixedPartition("Par1", true);
-        PartitionAttributes pa = new PartitionAttributesFactory().setLocalMaxMemory(100)
-            .setRecoveryDelay(2).setTotalMaxMemory(200).setRedundantCopies(1)
-            .addFixedPartitionAttributes(fpa).create();
-        dataRegionFactory.setPartitionAttributes(pa);
-
-        dataRegionFactory.create(PR1);
-        createLocalRegion(LOCALREGIONONMANAGER);
-      }
-    });
+  @Test
+  public void listRegionsInGroup1() throws Exception {
+    CommandStringBuilder csb = new CommandStringBuilder(LIST_REGION);
+    csb.addOption(GROUP, "G1");
+    CommandResult commandResult = gfshShellConnectionRule.executeAndVerifyCommand(csb.toString());
+    String commandResultString = commandResultToString(commandResult);
+    assertThat(commandResultString).contains(PR1);
+    assertThat(commandResultString).contains(LOCALREGIONONMANAGER);
+  }
 
-    servers[1].invoke(new SerializableRunnable() {
-      public void run() {
-        getSystem(server1Props);
-        final Cache cache = getCache();
-        RegionFactory<String, Integer> dataRegionFactory =
-            cache.createRegionFactory(RegionShortcut.PARTITION);
-        dataRegionFactory.setConcurrencyLevel(4);
-        EvictionAttributes ea =
-            EvictionAttributes.createLIFOEntryAttributes(100, EvictionAction.LOCAL_DESTROY);
-        dataRegionFactory.setEvictionAttributes(ea);
-        dataRegionFactory.setEnableAsyncConflation(true);
-
-        FixedPartitionAttributes fpa = FixedPartitionAttributes.createFixedPartition("Par2", 4);
-        PartitionAttributes pa = new PartitionAttributesFactory().setLocalMaxMemory(150)
-            .setRecoveryDelay(4).setTotalMaxMemory(200).setRedundantCopies(1)
-            .addFixedPartitionAttributes(fpa).create();
-        dataRegionFactory.setPartitionAttributes(pa);
-
-        dataRegionFactory.create(PR1);
-        createRegionsWithSubRegions();
-      }
-    });
+  @Test
+  public void listRegionsInGroup2() throws Exception {
+    CommandStringBuilder csb = new CommandStringBuilder(LIST_REGION);
+    csb.addOption(GROUP, "G2");
+    CommandResult commandResult = gfshShellConnectionRule.executeAndVerifyCommand(csb.toString());
+    String commandResultString = commandResultToString(commandResult);
+    assertThat(commandResultString).contains(PR1);
+    assertThat(commandResultString).contains(REGION1);
+    assertThat(commandResultString).contains(REGION2);
+    assertThat(commandResultString).contains(REGION3);
+    assertThat(commandResultString).contains(SUBREGION1A);
   }
 
-  private void createPartitionedRegion(String regionName) {
+  @Test
+  public void describeRegionsOnManager() throws Exception {
+    CommandStringBuilder csb = new CommandStringBuilder(DESCRIBE_REGION);
+    csb.addOption(DESCRIBE_REGION__NAME, PR1);
+    CommandResult commandResult = gfshShellConnectionRule.executeAndVerifyCommand(csb.toString());
+
+    String commandResultString = commandResultToString(commandResult);
+    assertThat(commandResultString).contains(PR1);
+    assertThat(commandResultString).contains("Server");
+  }
 
-    final Cache cache = getCache();
-    // Create the data region
-    RegionFactory<String, Integer> dataRegionFactory =
-        cache.createRegionFactory(RegionShortcut.PARTITION);
-    dataRegionFactory.setConcurrencyLevel(4);
-    EvictionAttributes ea =
-        EvictionAttributes.createLIFOEntryAttributes(100, EvictionAction.LOCAL_DESTROY);
-    dataRegionFactory.setEvictionAttributes(ea);
-    dataRegionFactory.setEnableAsyncConflation(true);
-
-    FixedPartitionAttributes fpa = FixedPartitionAttributes.createFixedPartition("Par1", true);
-    PartitionAttributes pa =
-        new PartitionAttributesFactory().setLocalMaxMemory(100).setRecoveryDelay(2)
-            .setTotalMaxMemory(200).setRedundantCopies(1).addFixedPartitionAttributes(fpa).create();
-    dataRegionFactory.setPartitionAttributes(pa);
-    dataRegionFactory.addCacheListener(new CacheListener1());
-    dataRegionFactory.addCacheListener(new CacheListener2());
-    dataRegionFactory.create(regionName);
+  @Test
+  public void describeRegionsOnServer() throws Exception {
+    CommandStringBuilder csb = new CommandStringBuilder(DESCRIBE_REGION);
+    csb.addOption(DESCRIBE_REGION__NAME, LOCALREGIONONMANAGER);
+    CommandResult commandResult = gfshShellConnectionRule.executeAndVerifyCommand(csb.toString());
+
+    String commandResultString = commandResultToString(commandResult);
+    assertThat(commandResultString).contains(LOCALREGIONONMANAGER);
+    assertThat(commandResultString).contains("Manager");
   }
 
+  /**
+   * Asserts that a describe region command issued on a region with compression returns the correct
+   * non default region attribute for compression and the correct codec value.
+   */
+  @Category(FlakyTest.class) // GEODE-1033: HeadlesssGFSH, random port, Snappy dependency
+  @Test
+  public void describeRegionWithCompressionCodec() throws Exception {
+    final String regionName = "compressedRegion";
+    VM vm = Host.getHost(0).getVM(1);
 
-  private void createLocalRegion(final String regionName) {
-    final Cache cache = getCache();
+    // Create compressed region
+    vm.invoke(() -> {
+      createCompressedRegion(regionName);
+    });
+
+    // Test the describe command; look for compression
+    CommandStringBuilder csb = new CommandStringBuilder(DESCRIBE_REGION);
+    csb.addOption(DESCRIBE_REGION__NAME, regionName);
+    String commandString = csb.toString();
+    CommandResult commandResult = gfshShellConnectionRule.executeAndVerifyCommand(commandString);
+    String commandResultString = commandResultToString(commandResult);
+    assertThat(commandResultString).contains(regionName);
+    assertThat(commandResultString).contains(RegionAttributesNames.COMPRESSOR);
+    assertThat(commandResultString).contains(RegionEntryContext.DEFAULT_COMPRESSION_PROVIDER);
+
+    // Destroy compressed region
+    vm.invoke(() -> {
+      final Region region = CacheFactory.getAnyInstance().getRegion(regionName);
+      assertThat(region).isNotNull();
+      region.destroyRegion();
+    });
+  }
+
+  private static Properties createProperties(String name, String groups) {
+    Properties props = new Properties();
+    props.setProperty(MCAST_PORT, "0");
+    props.setProperty(LOG_LEVEL, "info");
+    props.setProperty(STATISTIC_SAMPLING_ENABLED, "true");
+    props.setProperty(ENABLE_TIME_STATISTICS, "true");
+    props.setProperty(NAME, name);
+    props.setProperty(GROUPS, groups);
+    return props;
+  }
+
+  private static void createLocalRegion(final String regionName) {
+    final Cache cache = CacheFactory.getAnyInstance();
     // Create the data region
     RegionFactory<String, Integer> dataRegionFactory =
         cache.createRegionFactory(RegionShortcut.LOCAL);
@@ -164,11 +266,11 @@ public class ListAndDescribeRegionDUnitTest extends CliCommandTestBase {
 
   /**
    * Creates a region that uses compression on region entry values.
-   *
+   * 
    * @param regionName a unique region name.
    */
-  private void createCompressedRegion(final String regionName) {
-    final Cache cache = getCache();
+  private static void createCompressedRegion(final String regionName) {
+    final Cache cache = CacheFactory.getAnyInstance();
 
     RegionFactory<String, Integer> dataRegionFactory =
         cache.createRegionFactory(RegionShortcut.REPLICATE);
@@ -177,8 +279,8 @@ public class ListAndDescribeRegionDUnitTest extends CliCommandTestBase {
   }
 
   @SuppressWarnings("deprecation")
-  private void createRegionsWithSubRegions() {
-    final Cache cache = getCache();
+  private static void createRegionsWithSubRegions() {
+    final Cache cache = CacheFactory.getAnyInstance();
 
     RegionFactory<String, Integer> dataRegionFactory =
         cache.createRegionFactory(RegionShortcut.REPLICATE);
@@ -192,140 +294,4 @@ public class ListAndDescribeRegionDUnitTest extends CliCommandTestBase {
     dataRegionFactory.create(REGION2);
     dataRegionFactory.create(REGION3);
   }
-
-  @Test
-  public void testListRegion() {
-    setupSystem();
-    CommandStringBuilder csb = new CommandStringBuilder(CliStrings.LIST_REGION);
-    String commandString = csb.toString();
-    CommandResult commandResult = executeCommand(commandString);
-    String commandResultAsString = commandResultToString(commandResult);
-    getLogWriter().info("Command String : " + commandString);
-    getLogWriter().info("Output : \n" + commandResultAsString);
-    assertEquals(Status.OK, commandResult.getStatus());
-    assertTrue(commandResultAsString.contains(PR1));
-    assertTrue(commandResultAsString.contains(LOCALREGIONONMANAGER));
-    assertTrue(commandResultAsString.contains(REGION1));
-    assertTrue(commandResultAsString.contains(REGION2));
-    assertTrue(commandResultAsString.contains(REGION3));
-
-
-    csb = new CommandStringBuilder(CliStrings.LIST_REGION);
-    csb.addOption(CliStrings.MEMBER, "Manager");
-    commandString = csb.toString();
-    commandResult = executeCommand(commandString);
-    commandResultAsString = commandResultToString(commandResult);
-    getLogWriter().info("Command String : " + commandString);
-    getLogWriter().info("Output : \n" + commandResultAsString);
-    assertEquals(Status.OK, commandResult.getStatus());
-    assertTrue(commandResultAsString.contains(PR1));
-    assertTrue(commandResultAsString.contains(LOCALREGIONONMANAGER));
-
-    csb = new CommandStringBuilder(CliStrings.LIST_REGION);
-    csb.addOption(CliStrings.MEMBER, "Server1");
-    commandString = csb.toString();
-    commandResult = executeCommand(commandString);
-    commandResultAsString = commandResultToString(commandResult);
-    getLogWriter().info("Command String : " + commandString);
-    getLogWriter().info("Output : \n" + commandResultAsString);
-    assertEquals(Status.OK, commandResult.getStatus());
-    assertTrue(commandResultAsString.contains(PR1));
-    assertTrue(commandResultAsString.contains(REGION1));
-    assertTrue(commandResultAsString.contains(REGION2));
-    assertTrue(commandResultAsString.contains(REGION3));
-    assertTrue(commandResultAsString.contains(SUBREGION1A));
-
-    csb = new CommandStringBuilder(CliStrings.LIST_REGION);
-    csb.addOption(CliStrings.GROUP, "G1");
-    commandString = csb.toString();
-    commandResult = executeCommand(commandString);
-    commandResultAsString = commandResultToString(commandResult);
-    getLogWriter().info("Command String : " + commandString);
-    getLogWriter().info("Output : \n" + commandResultAsString);
-    assertEquals(Status.OK, commandResult.getStatus());
-    assertTrue(commandResultAsString.contains(PR1));
-    assertTrue(commandResultAsString.contains(LOCALREGIONONMANAGER));
-
-    csb = new CommandStringBuilder(CliStrings.LIST_REGION);
-    csb.addOption(CliStrings.GROUP, "G2");
-    commandString = csb.toString();
-    commandResult = executeCommand(commandString);
-    commandResultAsString = commandResultToString(commandResult);
-    getLogWriter().info("Command String : " + commandString);
-    getLogWriter().info("Output : \n" + commandResultAsString);
-    assertEquals(Status.OK, commandResult.getStatus());
-    assertTrue(commandResultAsString.contains(PR1));
-    assertTrue(commandResultAsString.contains(REGION1));
-    assertTrue(commandResultAsString.contains(REGION2));
-    assertTrue(commandResultAsString.contains(REGION3));
-    assertTrue(commandResultAsString.contains(SUBREGION1A));
-  }
-
-  @Test
-  public void testDescribeRegion() {
-    setupSystem();
-    CommandStringBuilder csb = new CommandStringBuilder(CliStrings.DESCRIBE_REGION);
-    csb.addOption(CliStrings.DESCRIBE_REGION__NAME, PR1);
-    String commandString = csb.toString();
-    CommandResult commandResult = executeCommand(commandString);
-    String commandResultAsString = commandResultToString(commandResult);
-    getLogWriter().info("Command String : " + commandString);
-    getLogWriter().info("Output : \n" + commandResultAsString);
-    assertEquals(Status.OK, commandResult.getStatus());
-    assertTrue(commandResultAsString.contains(PR1));
-    assertTrue(commandResultAsString.contains("Server1"));
-
-    csb = new CommandStringBuilder(CliStrings.DESCRIBE_REGION);
-    csb.addOption(CliStrings.DESCRIBE_REGION__NAME, LOCALREGIONONMANAGER);
-    commandString = csb.toString();
-    commandResult = executeCommand(commandString);
-    commandResultAsString = commandResultToString(commandResult);
-    getLogWriter().info("Command String : " + commandString);
-    getLogWriter().info("Output : \n" + commandResultAsString);
-    assertEquals(Status.OK, commandResult.getStatus());
-    assertTrue(commandResultAsString.contains(LOCALREGIONONMANAGER));
-    assertTrue(commandResultAsString.contains("Manager"));
-  }
-
-  /**
-   * Asserts that a describe region command issued on a region with compression returns the correct
-   * non default region attribute for compression and the correct codec value.
-   */
-  @Category(FlakyTest.class) // GEODE-1033: HeadlesssGFSH, random port, Snappy dependency
-  @Test
-  public void testDescribeRegionWithCompressionCodec() {
-    final String regionName = "compressedRegion";
-    VM vm = Host.getHost(0).getVM(1);
-
-    setupSystem();
-
-    // Create compressed region
-    vm.invoke(new SerializableRunnable() {
-      @Override
-      public void run() {
-        createCompressedRegion(regionName);
-      }
-    });
-
-    // Test the describe command; look for compression
-    CommandStringBuilder csb = new CommandStringBuilder(CliStrings.DESCRIBE_REGION);
-    csb.addOption(CliStrings.DESCRIBE_REGION__NAME, regionName);
-    String commandString = csb.toString();
-    CommandResult commandResult = executeCommand(commandString);
-    String commandResultAsString = commandResultToString(commandResult);
-    assertEquals(Status.OK, commandResult.getStatus());
-    assertTrue(commandResultAsString.contains(regionName));
-    assertTrue(commandResultAsString.contains(RegionAttributesNames.COMPRESSOR));
-    assertTrue(commandResultAsString.contains(RegionEntryContext.DEFAULT_COMPRESSION_PROVIDER));
-
-    // Destroy compressed region
-    vm.invoke(new SerializableRunnable() {
-      @Override
-      public void run() {
-        Region region = getCache().getRegion(regionName);
-        assertNotNull(region);
-        region.destroyRegion();
-      }
-    });
-  }
 }

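Aside: the test migration above replaces JUnit-style `assertTrue(result.contains(x))` with AssertJ's fluent `assertThat(result).contains(x)`, whose main benefit is a failure message that reports both the actual and expected values. A minimal hand-rolled sketch (not AssertJ itself; the class and method names here are illustrative only) of why the fluent form fails more helpfully:

```java
// Minimal sketch of a fluent string assertion. Unlike a bare
// assertTrue(s.contains(x)), a failure reports both values involved.
public class FluentAssertSketch {
  static final class StringAssert {
    private final String actual;

    StringAssert(String actual) {
      this.actual = actual;
    }

    StringAssert contains(String expected) {
      if (!actual.contains(expected)) {
        throw new AssertionError(
            "Expecting <" + actual + "> to contain <" + expected + ">");
      }
      return this; // returning this allows chaining further checks
    }
  }

  static StringAssert assertThat(String actual) {
    return new StringAssert(actual);
  }

  public static void main(String[] args) {
    // Chained checks, as in the migrated tests:
    assertThat("region1 region2").contains("region1").contains("region2");

    try {
      assertThat("region1").contains("PR1");
    } catch (AssertionError e) {
      // The message names both the actual string and the missing value.
      System.out.println(e.getMessage());
    }
  }
}
```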
http://git-wip-us.apache.org/repos/asf/geode/blob/82fad645/geode-core/src/test/java/org/apache/geode/test/dunit/rules/LocatorServerStartupRule.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/test/dunit/rules/LocatorServerStartupRule.java b/geode-core/src/test/java/org/apache/geode/test/dunit/rules/LocatorServerStartupRule.java
index fc7966f..f0385e2 100644
--- a/geode-core/src/test/java/org/apache/geode/test/dunit/rules/LocatorServerStartupRule.java
+++ b/geode-core/src/test/java/org/apache/geode/test/dunit/rules/LocatorServerStartupRule.java
@@ -121,9 +121,15 @@ public class LocatorServerStartupRule extends ExternalResource implements Serial
    * 
    * @return VM locator vm
    */
-  public MemberVM<Locator> startLocatorVM(int index, Properties properties) throws Exception {
-    String name = "locator-" + index;
-    properties.setProperty(NAME, name);
+  public MemberVM<Locator> startLocatorVM(int index, Properties specifiedProperties)
+      throws Exception {
+    Properties properties = new Properties();
+    properties.putAll(specifiedProperties);
+
+    String defaultName = "locator-" + index;
+    properties.putIfAbsent(NAME, defaultName);
+    String name = properties.getProperty(NAME);
+
     VM locatorVM = getHost(0).getVM(index);
     Locator locator = locatorVM.invoke(() -> {
       locatorStarter = new LocatorStarterRule();
@@ -157,10 +163,15 @@ public class LocatorServerStartupRule extends ExternalResource implements Serial
   /**
    * Starts a cache server with given properties
    */
-  public MemberVM startServerVM(int index, Properties properties, int locatorPort)
+  public MemberVM startServerVM(int index, Properties specifiedProperties, int locatorPort)
       throws IOException {
-    String name = "server-" + index;
-    properties.setProperty(NAME, name);
+    Properties properties = new Properties();
+    properties.putAll(specifiedProperties);
+
+    String defaultName = "server-" + index;
+    properties.putIfAbsent(NAME, defaultName);
+    String name = properties.getProperty(NAME);
+
     VM serverVM = getHost(0).getVM(index);
     Server server = serverVM.invoke(() -> {
       serverStarter = new ServerStarterRule();

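The `LocatorServerStartupRule` change above has two parts: the rule now takes a defensive copy of the caller's `Properties` (so the rule never mutates the caller's object), and it uses `putIfAbsent` so a caller-supplied NAME wins over the generated `locator-<index>` / `server-<index>` default. A standalone sketch of that pattern (the helper and property key below are illustrative, not Geode API):

```java
import java.util.Properties;

public class DefaultNameExample {
  // Hypothetical helper mirroring the rule change: copy the caller's
  // properties so the original is never mutated, and only supply a
  // default name when the caller did not set one.
  static Properties withDefaultName(Properties specified, String defaultName) {
    Properties properties = new Properties();
    properties.putAll(specified); // defensive copy
    properties.putIfAbsent("name", defaultName);
    return properties;
  }

  public static void main(String[] args) {
    Properties caller = new Properties();

    // No name supplied: the index-based default is used.
    Properties defaulted = withDefaultName(caller, "locator-0");
    System.out.println(defaulted.getProperty("name"));

    // Caller-supplied name: kept, and the caller's object stays untouched.
    caller.setProperty("name", "Locator");
    Properties kept = withDefaultName(caller, "locator-0");
    System.out.println(kept.getProperty("name"));
  }
}
```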

[24/51] [abbrv] geode git commit: GEODE-3055: change the error message's log level from info to debug.

Posted by kl...@apache.org.
GEODE-3055: change the error message's log level from info to debug.


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/2f61dd60
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/2f61dd60
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/2f61dd60

Branch: refs/heads/feature/GEODE-1279
Commit: 2f61dd600eec36f2588ec82a7cfdf1b6e27057f9
Parents: 91430e1
Author: zhouxh <gz...@pivotal.io>
Authored: Wed Aug 16 16:18:36 2017 -0700
Committer: zhouxh <gz...@pivotal.io>
Committed: Wed Aug 16 16:19:38 2017 -0700

----------------------------------------------------------------------
 .../geode/internal/cache/PartitionedRegionDataStore.java  | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/2f61dd60/geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegionDataStore.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegionDataStore.java b/geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegionDataStore.java
index 95e6598..893ca6b 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegionDataStore.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegionDataStore.java
@@ -500,10 +500,12 @@ public class PartitionedRegionDataStore implements HasCachePerfStats {
       boolean isLeader = leader.equals(this.partitionedRegion);
       if (!isLeader) {
         leader.getDataStore().removeBucket(possiblyFreeBucketId, true);
-        logger.info(
-            "For bucket " + possiblyFreeBucketId + ", failed to create cololcated child bucket for "
-                + this.partitionedRegion.getFullPath() + ", removed leader region "
-                + leader.getFullPath() + " bucket.");
+        if (isDebugEnabled) {
+          logger.debug("For bucket " + possiblyFreeBucketId
+              + ", failed to create cololcated child bucket for "
+              + this.partitionedRegion.getFullPath() + ", removed leader region "
+              + leader.getFullPath() + " bucket.");
+        }
       }
       throw validationException;
     } finally {

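Besides lowering the level, the GEODE-3055 change wraps the call in an `isDebugEnabled` check so the string concatenation is skipped entirely when debug logging is off. A self-contained sketch of that guard (the `Logger` interface and counter here are stand-ins for illustration, not Geode or Log4j types):

```java
// Minimal sketch: guard expensive log-message construction behind a
// level check so the concatenation only happens when it will be used.
public class GuardedLoggingExample {
  interface Logger {
    boolean isDebugEnabled();

    void debug(String msg);
  }

  // Counts how many times the message string was actually built.
  static int buildCount = 0;

  static String expensiveMessage(int bucketId) {
    buildCount++;
    return "For bucket " + bucketId + ", failed to create colocated child bucket";
  }

  static void logFailure(Logger logger, int bucketId) {
    if (logger.isDebugEnabled()) {
      logger.debug(expensiveMessage(bucketId));
    }
  }

  public static void main(String[] args) {
    Logger debugOff = new Logger() {
      public boolean isDebugEnabled() {
        return false;
      }

      public void debug(String msg) {}
    };

    logFailure(debugOff, 7);
    // With debug disabled, the message string was never constructed.
    System.out.println(buildCount);
  }
}
```

Parameterized logging (e.g. Log4j's `logger.debug("For bucket {}", id)`) achieves the same effect without the explicit guard, but the guard keeps the change minimal when the message is built by concatenation.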

[11/51] [abbrv] geode git commit: GEODE-3386: This now closes #700

Posted by kl...@apache.org.
GEODE-3386: This now closes #700


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/a3c0ebaf
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/a3c0ebaf
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/a3c0ebaf

Branch: refs/heads/feature/GEODE-1279
Commit: a3c0ebaf0d0246b45900bee257b7893cf3be5dba
Parents: bfbe3e5
Author: Udo Kohlmeyer <uk...@pivotal.io>
Authored: Tue Aug 15 09:04:49 2017 -0700
Committer: Udo Kohlmeyer <uk...@pivotal.io>
Committed: Tue Aug 15 09:04:49 2017 -0700

----------------------------------------------------------------------

----------------------------------------------------------------------



[40/51] [abbrv] geode git commit: GEODE-3169: Decoupling of DiskStore and backups This closes #715 * move backup logic away from DiskStore and into BackupManager * refactor code into smaller methods * improve test code clarity

Posted by kl...@apache.org.
GEODE-3169: Decoupling of DiskStore and backups
This closes #715
  * move backup logic away from DiskStore and into BackupManager
  * refactor code into smaller methods
  * improve test code clarity


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/3bb6a221
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/3bb6a221
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/3bb6a221

Branch: refs/heads/feature/GEODE-1279
Commit: 3bb6a2214d02fcb339ecba0d0645457d3926ab12
Parents: f38dff9
Author: Nick Reich <nr...@pivotal.io>
Authored: Tue Aug 8 11:30:17 2017 -0700
Committer: Anil <ag...@pivotal.io>
Committed: Fri Aug 18 09:52:24 2017 -0700

----------------------------------------------------------------------
 .../admin/internal/FinishBackupRequest.java     |   2 +-
 .../admin/internal/PrepareBackupRequest.java    |   4 +-
 .../geode/internal/cache/BackupManager.java     | 603 +++++++++++++++++++
 .../geode/internal/cache/DiskStoreBackup.java   |   9 +-
 .../internal/cache/DiskStoreFactoryImpl.java    |   1 -
 .../geode/internal/cache/DiskStoreImpl.java     | 224 +------
 .../geode/internal/cache/GemFireCacheImpl.java  |   5 +-
 .../geode/internal/cache/InternalCache.java     |   1 -
 .../org/apache/geode/internal/cache/Oplog.java  |   1 +
 .../cache/PartitionedRegionDataStore.java       |   1 -
 .../cache/persistence/BackupManager.java        | 389 ------------
 .../internal/cache/xmlcache/CacheCreation.java  |   2 +-
 .../internal/beans/MemberMBeanBridge.java       |   6 +-
 .../geode/internal/cache/BackupDUnitTest.java   | 176 +++---
 .../geode/internal/cache/BackupJUnitTest.java   | 145 +++--
 .../cache/IncrementalBackupDUnitTest.java       |   3 +-
 .../BackupPrepareAndFinishMsgDUnitTest.java     | 548 ++++-------------
 ...ionedBackupPrepareAndFinishMsgDUnitTest.java |  28 +
 ...icateBackupPrepareAndFinishMsgDUnitTest.java |  28 +
 .../beans/DistributedSystemBridgeJUnitTest.java |   8 +-
 20 files changed, 935 insertions(+), 1249 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/main/java/org/apache/geode/admin/internal/FinishBackupRequest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/admin/internal/FinishBackupRequest.java b/geode-core/src/main/java/org/apache/geode/admin/internal/FinishBackupRequest.java
index f01666d..88f67bd 100644
--- a/geode-core/src/main/java/org/apache/geode/admin/internal/FinishBackupRequest.java
+++ b/geode-core/src/main/java/org/apache/geode/admin/internal/FinishBackupRequest.java
@@ -99,7 +99,7 @@ public class FinishBackupRequest extends CliLegacyMessage {
       persistentIds = new HashSet<PersistentID>();
     } else {
       try {
-        persistentIds = cache.getBackupManager().finishBackup(targetDir, baselineDir, abort);
+        persistentIds = cache.getBackupManager().doBackup(targetDir, baselineDir, abort);
       } catch (IOException e) {
         logger.error(
             LocalizedMessage.create(LocalizedStrings.CliLegacyMessage_ERROR, this.getClass()), e);

http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/main/java/org/apache/geode/admin/internal/PrepareBackupRequest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/admin/internal/PrepareBackupRequest.java b/geode-core/src/main/java/org/apache/geode/admin/internal/PrepareBackupRequest.java
index 0c096f9..ede70c1 100644
--- a/geode-core/src/main/java/org/apache/geode/admin/internal/PrepareBackupRequest.java
+++ b/geode-core/src/main/java/org/apache/geode/admin/internal/PrepareBackupRequest.java
@@ -37,7 +37,7 @@ import org.apache.geode.internal.admin.remote.AdminResponse;
 import org.apache.geode.internal.admin.remote.CliLegacyMessage;
 import org.apache.geode.internal.cache.GemFireCacheImpl;
 import org.apache.geode.internal.cache.InternalCache;
-import org.apache.geode.internal.cache.persistence.BackupManager;
+import org.apache.geode.internal.cache.BackupManager;
 import org.apache.geode.internal.i18n.LocalizedStrings;
 import org.apache.geode.internal.logging.LogService;
 import org.apache.geode.internal.logging.log4j.LocalizedMessage;
@@ -87,7 +87,7 @@ public class PrepareBackupRequest extends CliLegacyMessage {
     } else {
       try {
         BackupManager manager = cache.startBackup(getSender());
-        persistentIds = manager.prepareBackup();
+        persistentIds = manager.prepareForBackup();
       } catch (IOException e) {
         logger.error(
             LocalizedMessage.create(LocalizedStrings.CliLegacyMessage_ERROR, this.getClass()), e);

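The two admin messages above drive a two-phase backup: PrepareBackupRequest locks every disk store and collects persistent IDs, and FinishBackupRequest later triggers the actual copy via doBackup. A minimal sketch of that prepare/do handshake is below; the Store class and method bodies are illustrative stand-ins, not Geode's actual API.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.CountDownLatch;

// Hypothetical sketch of the prepare/do backup handshake used by the
// admin messages above. Names mirror the diff but this is NOT Geode code.
public class TwoPhaseBackup {
  public static final class Store {
    final String id;
    final boolean persistent;
    boolean locked;

    public Store(String id, boolean persistent) {
      this.id = id;
      this.persistent = persistent;
    }
  }

  private final List<Store> stores;
  // Oplog deletions are held back until the copy phase begins.
  private final CountDownLatch allowDestroys = new CountDownLatch(1);

  public TwoPhaseBackup(List<Store> stores) {
    this.stores = stores;
  }

  // Phase 1: lock every store, report which ones hold persistent data.
  public Set<String> prepareForBackup() {
    Set<String> ids = new HashSet<>();
    for (Store s : stores) {
      s.locked = true;                 // stand-in for lockStoreBeforeBackup()
      if (s.persistent) {
        ids.add(s.id);
      }
    }
    return ids;
  }

  // Phase 2: snapshot, release locks, then allow destroys to proceed.
  public Set<String> doBackup() {
    Set<String> backedUp = new HashSet<>();
    for (Store s : stores) {
      if (s.persistent) {
        backedUp.add(s.id);            // stand-in for copying the oplogs
      }
      s.locked = false;                // stand-in for releaseBackupLock()
    }
    allowDestroys.countDown();
    return backedUp;
  }
}
```

The key design point visible in the real BackupManager is the same: destroys are blocked between the two phases so no oplog disappears after the prepare step has reported it.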
http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/main/java/org/apache/geode/internal/cache/BackupManager.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/BackupManager.java b/geode-core/src/main/java/org/apache/geode/internal/cache/BackupManager.java
new file mode 100644
index 0000000..b7e0e47
--- /dev/null
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/BackupManager.java
@@ -0,0 +1,603 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.cache;
+
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.net.URL;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.CountDownLatch;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.logging.log4j.Logger;
+
+import org.apache.geode.InternalGemFireError;
+import org.apache.geode.cache.DiskStore;
+import org.apache.geode.cache.persistence.PersistentID;
+import org.apache.geode.distributed.DistributedSystem;
+import org.apache.geode.distributed.internal.DM;
+import org.apache.geode.distributed.internal.DistributionConfig;
+import org.apache.geode.distributed.internal.MembershipListener;
+import org.apache.geode.distributed.internal.membership.InternalDistributedMember;
+import org.apache.geode.internal.ClassPathLoader;
+import org.apache.geode.internal.DeployedJar;
+import org.apache.geode.internal.JarDeployer;
+import org.apache.geode.internal.cache.persistence.BackupInspector;
+import org.apache.geode.internal.cache.persistence.RestoreScript;
+import org.apache.geode.internal.i18n.LocalizedStrings;
+import org.apache.geode.internal.logging.LogService;
+
+/**
+ * This class manages the state and logic needed to back up a single cache.
+ */
+public class BackupManager implements MembershipListener {
+  private static final Logger logger = LogService.getLogger(BackupManager.class);
+
+  static final String INCOMPLETE_BACKUP_FILE = "INCOMPLETE_BACKUP_FILE";
+
+  private static final String BACKUP_DIR_PREFIX = "dir";
+  private static final String README_FILE = "README_FILE.txt";
+  private static final String DATA_STORES_DIRECTORY = "diskstores";
+  private static final String USER_FILES = "user";
+  private static final String CONFIG_DIRECTORY = "config";
+
+  private final Map<DiskStoreImpl, DiskStoreBackup> backupByDiskStore = new HashMap<>();
+  private final RestoreScript restoreScript = new RestoreScript();
+  private final InternalDistributedMember sender;
+  private final InternalCache cache;
+  private final CountDownLatch allowDestroys = new CountDownLatch(1);
+  private volatile boolean isCancelled = false;
+
+  public BackupManager(InternalDistributedMember sender, InternalCache gemFireCache) {
+    this.sender = sender;
+    this.cache = gemFireCache;
+  }
+
+  public void validateRequestingAdmin() {
+    // We need to watch for pure admin members that depart. This allMembershipListener
+    // set should receive those events.
+    Set allIds = getDistributionManager().addAllMembershipListenerAndGetAllIds(this);
+    if (!allIds.contains(sender)) {
+      cleanup();
+      throw new IllegalStateException("The admin member requesting a backup has already departed");
+    }
+  }
+
+  public HashSet<PersistentID> prepareForBackup() {
+    HashSet<PersistentID> persistentIds = new HashSet<>();
+    for (DiskStore store : cache.listDiskStoresIncludingRegionOwned()) {
+      DiskStoreImpl storeImpl = (DiskStoreImpl) store;
+      storeImpl.lockStoreBeforeBackup();
+      if (storeImpl.hasPersistedData()) {
+        persistentIds.add(storeImpl.getPersistentID());
+        storeImpl.getStats().startBackup();
+      }
+    }
+    return persistentIds;
+  }
+
+  public HashSet<PersistentID> doBackup(File targetDir, File baselineDir, boolean abort)
+      throws IOException {
+    try {
+      if (abort) {
+        return new HashSet<>();
+      }
+      HashSet<PersistentID> persistentIds = new HashSet<>();
+      File backupDir = getBackupDir(targetDir);
+
+      // Make sure our baseline is okay for this member
+      baselineDir = checkBaseline(baselineDir);
+
+      // Create an inspector for the baseline backup
+      BackupInspector inspector =
+          (baselineDir == null ? null : BackupInspector.createInspector(baselineDir));
+
+      File storesDir = new File(backupDir, DATA_STORES_DIRECTORY);
+      Collection<DiskStore> diskStores = cache.listDiskStoresIncludingRegionOwned();
+      Map<DiskStoreImpl, DiskStoreBackup> backupByDiskStore = new HashMap<>();
+
+      boolean foundPersistentData = false;
+      for (DiskStore store : diskStores) {
+        DiskStoreImpl diskStore = (DiskStoreImpl) store;
+        if (diskStore.hasPersistedData()) {
+          if (!foundPersistentData) {
+            createBackupDir(backupDir);
+            foundPersistentData = true;
+          }
+          File diskStoreDir = new File(storesDir, getBackupDirName(diskStore));
+          diskStoreDir.mkdir();
+          DiskStoreBackup backup = startDiskStoreBackup(diskStore, diskStoreDir, inspector);
+          backupByDiskStore.put(diskStore, backup);
+        }
+        diskStore.releaseBackupLock();
+      }
+
+      allowDestroys.countDown();
+
+      for (Map.Entry<DiskStoreImpl, DiskStoreBackup> entry : backupByDiskStore.entrySet()) {
+        DiskStoreImpl diskStore = entry.getKey();
+        completeBackup(diskStore, entry.getValue());
+        diskStore.getStats().endBackup();
+        persistentIds.add(diskStore.getPersistentID());
+      }
+
+      if (!backupByDiskStore.isEmpty()) {
+        completeRestoreScript(backupDir);
+      }
+
+      return persistentIds;
+
+    } finally {
+      cleanup();
+    }
+  }
+
+  public void abort() {
+    cleanup();
+  }
+
+  private DM getDistributionManager() {
+    return cache.getInternalDistributedSystem().getDistributionManager();
+  }
+
+  private void cleanup() {
+    isCancelled = true;
+    allowDestroys.countDown();
+    releaseBackupLocks();
+    getDistributionManager().removeAllMembershipListener(this);
+    cache.clearBackupManager();
+  }
+
+  private void releaseBackupLocks() {
+    for (DiskStore store : cache.listDiskStoresIncludingRegionOwned()) {
+      ((DiskStoreImpl) store).releaseBackupLock();
+    }
+  }
+
+  /**
+   * Returns the memberId directory for this member in the baseline. The memberId may have changed
+   * if this member has been restarted since the last backup.
+   * 
+   * @param baselineParentDir parent directory of last backup.
+   * @return null if the baseline for this member could not be located.
+   */
+  private File findBaselineForThisMember(File baselineParentDir) {
+    File baselineDir = null;
+
+    /*
+     * Find the first matching DiskStoreId directory for this member.
+     */
+    for (DiskStore diskStore : cache.listDiskStoresIncludingRegionOwned()) {
+      File[] matchingFiles = baselineParentDir
+          .listFiles((file, name) -> name.endsWith(getBackupDirName((DiskStoreImpl) diskStore)));
+      // We found it? Good. Set this member's baseline to the backed up disk store's member dir (two
+      // levels up).
+      if (null != matchingFiles && matchingFiles.length > 0)
+        baselineDir = matchingFiles[0].getParentFile().getParentFile();
+    }
+    return baselineDir;
+  }
+
+  /**
+   * Performs a sanity check on the baseline directory for incremental backups. If a baseline
+   * directory exists for the member and contains no INCOMPLETE_BACKUP_FILE, returns the
+   * data stores directory for this member.
+   * 
+   * @param baselineParentDir a previous backup directory. This is used with the incremental backup
+   *        option. May be null if the user specified a full backup.
+   * @return null if the backup is to be a full backup otherwise return the data store directory in
+   *         the previous backup for this member (if incremental).
+   */
+  private File checkBaseline(File baselineParentDir) throws IOException {
+    File baselineDir = null;
+
+    if (null != baselineParentDir) {
+      // Start by looking for this memberId
+      baselineDir = getBackupDir(baselineParentDir);
+
+      if (!baselineDir.exists()) {
+        // hmmm, did this member have a restart?
+        // Determine which member dir might be a match for us
+        baselineDir = findBaselineForThisMember(baselineParentDir);
+      }
+
+      if (null != baselineDir) {
+        // check for existence of INCOMPLETE_BACKUP_FILE file
+        File incompleteBackup = new File(baselineDir, INCOMPLETE_BACKUP_FILE);
+        if (incompleteBackup.exists()) {
+          baselineDir = null;
+        }
+      }
+    }
+
+    return baselineDir;
+  }
+
+  private void completeRestoreScript(File backupDir) throws IOException {
+    backupConfigFiles(restoreScript, backupDir);
+    backupUserFiles(restoreScript, backupDir);
+    backupDeployedJars(restoreScript, backupDir);
+    restoreScript.generate(backupDir);
+    File incompleteFile = new File(backupDir, INCOMPLETE_BACKUP_FILE);
+    if (!incompleteFile.delete()) {
+      throw new IOException("Could not delete file " + INCOMPLETE_BACKUP_FILE);
+    }
+  }
+
+  /**
+   * Copy the oplogs to the backup directory. This is the final step of the backup process. The
+   * oplogs we copy are defined in the startDiskStoreBackup method.
+   */
+  private void completeBackup(DiskStoreImpl diskStore, DiskStoreBackup backup) throws IOException {
+    if (backup == null) {
+      return;
+    }
+    try {
+      // Wait for oplogs to be unpreblown before backing them up.
+      diskStore.waitForDelayedWrites();
+
+      // Backup all of the oplogs
+      for (Oplog oplog : backup.getPendingBackup()) {
+        if (isCancelled()) {
+          break;
+        }
+        // Copy the oplog to the destination directory
+        int index = oplog.getDirectoryHolder().getArrayIndex();
+        File backupDir = getBackupDir(backup.getTargetDir(), index);
+        // TODO prpersist - We could probably optimize this to *move* the files
+        // that we know are supposed to be deleted.
+        oplog.copyTo(backupDir);
+
+        // Allow the oplog to be deleted, and process any pending delete
+        backup.backupFinished(oplog);
+      }
+    } finally {
+      backup.cleanup();
+    }
+  }
+
+  /**
+   * Returns the directory name under which this DiskStore's directories are backed up. The
+   * name is a concatenation of the disk store name and id.
+   */
+  private String getBackupDirName(DiskStoreImpl diskStore) {
+    String name = diskStore.getName();
+
+    if (name == null) {
+      name = GemFireCacheImpl.getDefaultDiskStoreName();
+    }
+
+    return (name + "_" + diskStore.getDiskStoreID().toString());
+  }
+
+  /**
+   * Starts the backup process. This is the second step of the backup. In this method, we
+   * define the data we're backing up by copying the init file and rolling to the next file. After
+   * this method returns, operations can proceed as normal, except that we don't remove oplogs.
+   */
+  private DiskStoreBackup startDiskStoreBackup(DiskStoreImpl diskStore, File targetDir,
+      BackupInspector baselineInspector) throws IOException {
+    diskStore.getBackupLock().setBackupThread();
+    DiskStoreBackup backup = null;
+    boolean done = false;
+    try {
+      for (;;) {
+        Oplog childOplog = diskStore.getPersistentOplogSet().getChild();
+        if (childOplog == null) {
+          backup = new DiskStoreBackup(new Oplog[0], targetDir);
+          backupByDiskStore.put(diskStore, backup);
+          break;
+        }
+
+        // Get an appropriate lock object for each set of oplogs.
+        Object childLock = childOplog.lock;
+
+        // TODO - We really should move this lock into the disk store, but
+        // until then we need to do this magic to make sure we're actually
+        // locking the latest child for both types of oplogs
+
+        // This ensures that all writing to disk is blocked while we are
+        // creating the snapshot
+        synchronized (childLock) {
+          if (diskStore.getPersistentOplogSet().getChild() != childOplog) {
+            continue;
+          }
+
+          if (logger.isDebugEnabled()) {
+            logger.debug("snapshotting oplogs for disk store {}", diskStore.getName());
+          }
+
+          createDiskStoreBackupDirs(diskStore, targetDir);
+
+          restoreScript.addExistenceTest(diskStore.getDiskInitFile().getIFFile());
+
+          // Contains all oplogs that will be backed up
+          Oplog[] allOplogs = null;
+
+          // Incremental backup so filter out oplogs that have already been
+          // backed up
+          if (null != baselineInspector) {
+            Map<File, File> baselineCopyMap = new HashMap<>();
+            allOplogs = filterBaselineOplogs(diskStore, baselineInspector, baselineCopyMap);
+            restoreScript.addBaselineFiles(baselineCopyMap);
+          } else {
+            allOplogs = diskStore.getAllOplogsForBackup();
+          }
+
+          // mark all oplogs as being backed up. This will
+          // prevent the oplogs from being deleted
+          backup = new DiskStoreBackup(allOplogs, targetDir);
+          backupByDiskStore.put(diskStore, backup);
+
+          // copy the init file
+          File firstDir = getBackupDir(targetDir, diskStore.getInforFileDirIndex());
+          diskStore.getDiskInitFile().copyTo(firstDir);
+          diskStore.getPersistentOplogSet().forceRoll(null);
+
+          if (logger.isDebugEnabled()) {
+            logger.debug("done snapshotting for disk store {}", diskStore.getName());
+          }
+          break;
+        }
+      }
+      done = true;
+    } finally {
+      if (!done) {
+        if (backup != null) {
+          backupByDiskStore.remove(diskStore);
+          backup.cleanup();
+        }
+      }
+    }
+    return backup;
+  }
+
+  private void createDiskStoreBackupDirs(DiskStoreImpl diskStore, File targetDir)
+      throws IOException {
+    // Create the directories for this disk store
+    DirectoryHolder[] directories = diskStore.getDirectoryHolders();
+    for (int i = 0; i < directories.length; i++) {
+      File dir = getBackupDir(targetDir, i);
+      if (!dir.mkdirs()) {
+        throw new IOException("Could not create directory " + dir);
+      }
+      restoreScript.addFile(directories[i].getDir(), dir);
+    }
+  }
+
+  /**
+   * Filters and returns the current set of oplogs that aren't already in the baseline for
+   * incremental backup.
+   *
+   * @param baselineInspector the inspector for the previous backup.
+   * @param baselineCopyMap this will be populated with baseline oplogs Files that will be used in
+   *        the restore script.
+   * @return an array of Oplogs to be copied for an incremental backup.
+   */
+  private Oplog[] filterBaselineOplogs(DiskStoreImpl diskStore, BackupInspector baselineInspector,
+      Map<File, File> baselineCopyMap) throws IOException {
+    File baselineDir =
+        new File(baselineInspector.getBackupDir(), BackupManager.DATA_STORES_DIRECTORY);
+    baselineDir = new File(baselineDir, getBackupDirName(diskStore));
+
+    // Find all of the member's diskstore oplogs in the member's baseline
+    // diskstore directory structure (*.crf,*.krf,*.drf)
+    Collection<File> baselineOplogFiles =
+        FileUtils.listFiles(baselineDir, new String[] {"krf", "drf", "crf"}, true);
+    // Our list of oplogs to copy (those not already in the baseline)
+    List<Oplog> oplogList = new LinkedList<>();
+
+    // Total list of member oplogs
+    Oplog[] allOplogs = diskStore.getAllOplogsForBackup();
+
+    /*
+     * Loop through operation logs and see if they are already part of the baseline backup.
+     */
+    for (Oplog log : allOplogs) {
+      // See if they are backed up in the current baseline
+      Map<File, File> oplogMap = log.mapBaseline(baselineOplogFiles);
+
+      // No? Then see if they were backed up in previous baselines
+      if (oplogMap.isEmpty() && baselineInspector.isIncremental()) {
+        Set<String> matchingOplogs =
+            log.gatherMatchingOplogFiles(baselineInspector.getIncrementalOplogFileNames());
+        if (!matchingOplogs.isEmpty()) {
+          for (String matchingOplog : matchingOplogs) {
+            oplogMap.put(new File(baselineInspector.getCopyFromForOplogFile(matchingOplog)),
+                new File(baselineInspector.getCopyToForOplogFile(matchingOplog)));
+          }
+        }
+      }
+
+      if (oplogMap.isEmpty()) {
+        /*
+         * These are fresh operation log files, so let's back them up.
+         */
+        oplogList.add(log);
+      } else {
+        /*
+         * These have been backed up before, so let's just add their entries from the previous
+         * backup or restore script into the current one.
+         */
+        baselineCopyMap.putAll(oplogMap);
+      }
+    }
+
+    // Convert the filtered oplog list to an array
+    return oplogList.toArray(new Oplog[oplogList.size()]);
+  }
+
+  private File getBackupDir(File targetDir, int index) {
+    return new File(targetDir, BACKUP_DIR_PREFIX + index);
+  }
+
+  private void backupConfigFiles(RestoreScript restoreScript, File backupDir) throws IOException {
+    File configBackupDir = new File(backupDir, CONFIG_DIRECTORY);
+    configBackupDir.mkdirs();
+    URL url = cache.getCacheXmlURL();
+    if (url != null) {
+      File cacheXMLBackup =
+          new File(configBackupDir, DistributionConfig.DEFAULT_CACHE_XML_FILE.getName());
+      FileUtils.copyFile(new File(cache.getCacheXmlURL().getFile()), cacheXMLBackup);
+    }
+
+    URL propertyURL = DistributedSystem.getPropertiesFileURL();
+    if (propertyURL != null) {
+      File propertyBackup =
+          new File(configBackupDir, DistributionConfig.GEMFIRE_PREFIX + "properties");
+      FileUtils.copyFile(new File(DistributedSystem.getPropertiesFile()), propertyBackup);
+    }
+
+    // TODO: should the gfsecurity.properties file be backed up?
+  }
+
+  private void backupUserFiles(RestoreScript restoreScript, File backupDir) throws IOException {
+    List<File> backupFiles = cache.getBackupFiles();
+    File userBackupDir = new File(backupDir, USER_FILES);
+    if (!userBackupDir.exists()) {
+      userBackupDir.mkdir();
+    }
+    for (File original : backupFiles) {
+      if (original.exists()) {
+        original = original.getAbsoluteFile();
+        File dest = new File(userBackupDir, original.getName());
+        if (original.isDirectory()) {
+          FileUtils.copyDirectory(original, dest);
+        } else {
+          FileUtils.copyFile(original, dest);
+        }
+        restoreScript.addExistenceTest(original);
+        restoreScript.addFile(original, dest);
+      }
+    }
+  }
+
+  /**
+   * Copies user deployed jars to the backup directory.
+   * 
+   * @param restoreScript Used to restore from this backup.
+   * @param backupDir The backup directory for this member.
+   * @throws IOException if one or more of the jars could not be copied.
+   */
+  private void backupDeployedJars(RestoreScript restoreScript, File backupDir) throws IOException {
+    JarDeployer deployer = null;
+
+    try {
+      /*
+       * Suspend any user deployed jar file updates during this backup.
+       */
+      deployer = ClassPathLoader.getLatest().getJarDeployer();
+      deployer.suspendAll();
+
+      List<DeployedJar> jarList = deployer.findDeployedJars();
+      if (!jarList.isEmpty()) {
+        File userBackupDir = new File(backupDir, USER_FILES);
+        if (!userBackupDir.exists()) {
+          userBackupDir.mkdir();
+        }
+
+        for (DeployedJar loader : jarList) {
+          File source = new File(loader.getFileCanonicalPath());
+          File dest = new File(userBackupDir, source.getName());
+          if (source.isDirectory()) {
+            FileUtils.copyDirectory(source, dest);
+          } else {
+            FileUtils.copyFile(source, dest);
+          }
+          restoreScript.addFile(source, dest);
+        }
+      }
+    } finally {
+      /*
+       * Re-enable user deployed jar file updates.
+       */
+      if (null != deployer) {
+        deployer.resumeAll();
+      }
+    }
+  }
+
+  private File getBackupDir(File targetDir) throws IOException {
+    InternalDistributedMember memberId =
+        cache.getInternalDistributedSystem().getDistributedMember();
+    String vmId = memberId.toString();
+    vmId = cleanSpecialCharacters(vmId);
+    return new File(targetDir, vmId);
+  }
+
+  private void createBackupDir(File backupDir) throws IOException {
+    if (backupDir.exists()) {
+      throw new IOException("Backup directory " + backupDir.getAbsolutePath() + " already exists.");
+    }
+
+    if (!backupDir.mkdirs()) {
+      throw new IOException("Could not create directory: " + backupDir);
+    }
+
+    File incompleteFile = new File(backupDir, INCOMPLETE_BACKUP_FILE);
+    if (!incompleteFile.createNewFile()) {
+      throw new IOException("Could not create file: " + incompleteFile);
+    }
+
+    File readme = new File(backupDir, README_FILE);
+    FileOutputStream fos = new FileOutputStream(readme);
+
+    try {
+      String text = LocalizedStrings.BackupManager_README.toLocalizedString();
+      fos.write(text.getBytes());
+    } finally {
+      fos.close();
+    }
+  }
+
+  private String cleanSpecialCharacters(String string) {
+    return string.replaceAll("[^\\w]+", "_");
+  }
+
+  public void memberDeparted(InternalDistributedMember id, boolean crashed) {
+    cleanup();
+  }
+
+  public void memberJoined(InternalDistributedMember id) {}
+
+  public void quorumLost(Set<InternalDistributedMember> failures,
+      List<InternalDistributedMember> remaining) {}
+
+  public void memberSuspect(InternalDistributedMember id, InternalDistributedMember whoSuspected,
+      String reason) {}
+
+  public void waitForBackup() {
+    try {
+      allowDestroys.await();
+    } catch (InterruptedException e) {
+      throw new InternalGemFireError(e);
+    }
+  }
+
+  public boolean isCancelled() {
+    return isCancelled;
+  }
+
+  public DiskStoreBackup getBackupForDiskStore(DiskStoreImpl diskStore) {
+    return backupByDiskStore.get(diskStore);
+  }
+}

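filterBaselineOplogs in the new BackupManager above is the heart of incremental backup: an oplog already present in the baseline is referenced from the restore script instead of being copied again. A hedged sketch of that split over plain file names follows; the class and method names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: partition the current oplog files into "copy" (new)
// and "reference" (already in the baseline) buckets, as filterBaselineOplogs
// does with real Oplog objects and a BackupInspector.
public class BaselineFilter {
  public static Map<String, List<String>> split(List<String> current, Set<String> baseline) {
    List<String> toCopy = new ArrayList<>();
    List<String> toReference = new ArrayList<>();
    for (String oplog : current) {
      if (baseline.contains(oplog)) {
        toReference.add(oplog);  // restore script points at the baseline copy
      } else {
        toCopy.add(oplog);       // fresh oplog, copied in this backup
      }
    }
    Map<String, List<String>> result = new HashMap<>();
    result.put("copy", toCopy);
    result.put("reference", toReference);
    return result;
  }
}
```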
http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/main/java/org/apache/geode/internal/cache/DiskStoreBackup.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/DiskStoreBackup.java b/geode-core/src/main/java/org/apache/geode/internal/cache/DiskStoreBackup.java
index 309dea3..53c5ca1 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/DiskStoreBackup.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/DiskStoreBackup.java
@@ -25,17 +25,16 @@ import org.apache.geode.internal.cache.persistence.BackupInspector;
  * This class manages the state of the backup of an individual disk store. It holds the list of
  * oplogs that still need to be backed up, along with the lists of oplog files that should be
  * deleted when the oplog is backed up. See
- * {@link DiskStoreImpl#startBackup(File, BackupInspector, org.apache.geode.internal.cache.persistence.RestoreScript)}
  */
 public class DiskStoreBackup {
 
   private final Set<Oplog> pendingBackup;
-  private final Set<Oplog> deferredCrfDeletes = new HashSet<Oplog>();
-  private final Set<Oplog> deferredDrfDeletes = new HashSet<Oplog>();
+  private final Set<Oplog> deferredCrfDeletes = new HashSet<>();
+  private final Set<Oplog> deferredDrfDeletes = new HashSet<>();
   private final File targetDir;
 
   public DiskStoreBackup(Oplog[] allOplogs, File targetDir) {
-    this.pendingBackup = new HashSet<Oplog>(Arrays.asList(allOplogs));
+    this.pendingBackup = new HashSet<>(Arrays.asList(allOplogs));
     this.targetDir = targetDir;
   }
 
@@ -70,7 +69,7 @@ public class DiskStoreBackup {
   }
 
   public synchronized Set<Oplog> getPendingBackup() {
-    return new HashSet<Oplog>(pendingBackup);
+    return new HashSet<>(pendingBackup);
   }
 
   public synchronized void backupFinished(Oplog oplog) {

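Elsewhere in this commit, BackupManager.getBackupDir derives the per-member backup directory name by collapsing runs of non-word characters in the member ID into underscores. A quick sketch of that sanitization, assuming the same `[^\w]+` regex shown in the diff (the class name here is hypothetical):

```java
// Sketch of the member-ID sanitization used for backup directory names,
// using the regex from BackupManager.cleanSpecialCharacters in the diff.
public class MemberDirName {
  public static String clean(String memberId) {
    // Each run of characters outside [A-Za-z0-9_] becomes a single "_".
    return memberId.replaceAll("[^\\w]+", "_");
  }
}
```

This keeps the directory name filesystem-safe while remaining recognizably derived from the member ID.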
http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/main/java/org/apache/geode/internal/cache/DiskStoreFactoryImpl.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/DiskStoreFactoryImpl.java b/geode-core/src/main/java/org/apache/geode/internal/cache/DiskStoreFactoryImpl.java
index 0288ef1..d6d55d6 100755
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/DiskStoreFactoryImpl.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/DiskStoreFactoryImpl.java
@@ -21,7 +21,6 @@ import org.apache.geode.GemFireIOException;
 import org.apache.geode.cache.DiskStoreFactory;
 import org.apache.geode.cache.DiskStore;
 import org.apache.geode.distributed.internal.ResourceEvent;
-import org.apache.geode.internal.cache.persistence.BackupManager;
 import org.apache.geode.internal.cache.xmlcache.CacheCreation;
 import org.apache.geode.internal.cache.xmlcache.CacheXml;
 import org.apache.geode.internal.cache.xmlcache.DiskStoreAttributesCreation;

http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/main/java/org/apache/geode/internal/cache/DiskStoreImpl.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/DiskStoreImpl.java b/geode-core/src/main/java/org/apache/geode/internal/cache/DiskStoreImpl.java
index 94d1253..a8a8a53 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/DiskStoreImpl.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/DiskStoreImpl.java
@@ -33,8 +33,6 @@ import java.util.Comparator;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Iterator;
-import java.util.LinkedList;
-import java.util.List;
 import java.util.Map;
 import java.util.Properties;
 import java.util.Set;
@@ -60,7 +58,6 @@ import java.util.regex.Pattern;
 
 import it.unimi.dsi.fastutil.ints.IntOpenHashSet;
 import it.unimi.dsi.fastutil.longs.LongOpenHashSet;
-import org.apache.commons.io.FileUtils;
 import org.apache.logging.log4j.Logger;
 
 import org.apache.geode.CancelCriterion;
@@ -86,8 +83,6 @@ import org.apache.geode.internal.cache.DiskEntry.RecoveredEntry;
 import org.apache.geode.internal.cache.ExportDiskRegion.ExportWriter;
 import org.apache.geode.internal.cache.lru.LRUAlgorithm;
 import org.apache.geode.internal.cache.lru.LRUStatistics;
-import org.apache.geode.internal.cache.persistence.BackupInspector;
-import org.apache.geode.internal.cache.persistence.BackupManager;
 import org.apache.geode.internal.cache.persistence.BytesAndBits;
 import org.apache.geode.internal.cache.persistence.DiskRecoveryStore;
 import org.apache.geode.internal.cache.persistence.DiskRegionView;
@@ -97,7 +92,6 @@ import org.apache.geode.internal.cache.persistence.OplogType;
 import org.apache.geode.internal.cache.persistence.PRPersistentConfig;
 import org.apache.geode.internal.cache.persistence.PersistentMemberID;
 import org.apache.geode.internal.cache.persistence.PersistentMemberPattern;
-import org.apache.geode.internal.cache.persistence.RestoreScript;
 import org.apache.geode.internal.cache.snapshot.GFSnapshot;
 import org.apache.geode.internal.cache.snapshot.GFSnapshot.SnapshotWriter;
 import org.apache.geode.internal.cache.snapshot.SnapshotPacket.SnapshotRecord;
@@ -126,8 +120,6 @@ import org.apache.geode.pdx.internal.PeerTypeRegistration;
 public class DiskStoreImpl implements DiskStore {
   private static final Logger logger = LogService.getLogger();
 
-  private static final String BACKUP_DIR_PREFIX = "dir";
-
   public static final boolean KRF_DEBUG = Boolean.getBoolean("disk.KRF_DEBUG");
 
   public static final int MAX_OPEN_INACTIVE_OPLOGS =
@@ -302,8 +294,6 @@ public class DiskStoreImpl implements DiskStore {
 
   private DiskInitFile initFile = null;
 
-  private volatile DiskStoreBackup diskStoreBackup = null;
-
   private final ReentrantReadWriteLock compactorLock = new ReentrantReadWriteLock();
 
   private final WriteLock compactorWriteLock = compactorLock.writeLock();
@@ -672,6 +662,10 @@ public class DiskStoreImpl implements DiskStore {
     }
   }
 
+  public PersistentOplogSet getPersistentOplogSet() {
+    return persistentOplogs;
+  }
+
   PersistentOplogSet getPersistentOplogSet(DiskRegionView drv) {
     assert drv.isBackup();
     return persistentOplogs;
@@ -2031,6 +2025,10 @@ public class DiskStoreImpl implements DiskStore {
     return this.directories[this.infoFileDirIndex];
   }
 
+  int getInforFileDirIndex() {
+    return this.infoFileDirIndex;
+  }
+
   /**
    * returns the size of the biggest directory available to the region
    */
@@ -2692,84 +2690,9 @@ public class DiskStoreImpl implements DiskStore {
   }
 
   /**
-   * Returns the dir name used to back up this DiskStore's directories under. The name is a
-   * concatenation of the disk store name and id.
-   */
-  public String getBackupDirName() {
-    String name = getName();
-
-    if (name == null) {
-      name = GemFireCacheImpl.getDefaultDiskStoreName();
-    }
-
-    return (name + "_" + getDiskStoreID().toString());
-  }
-
-  /**
-   * Filters and returns the current set of oplogs that aren't already in the baseline for
-   * incremental backup
-   * 
-   * @param baselineInspector the inspector for the previous backup.
-   * @param baselineCopyMap this will be populated with baseline oplogs Files that will be used in
-   *        the restore script.
-   * @return an array of Oplogs to be copied for an incremental backup.
-   */
-  private Oplog[] filterBaselineOplogs(BackupInspector baselineInspector,
-      Map<File, File> baselineCopyMap) throws IOException {
-    File baselineDir = new File(baselineInspector.getBackupDir(), BackupManager.DATA_STORES);
-    baselineDir = new File(baselineDir, getBackupDirName());
-
-    // Find all of the member's diskstore oplogs in the member's baseline
-    // diskstore directory structure (*.crf,*.krf,*.drf)
-    Collection<File> baselineOplogFiles =
-        FileUtils.listFiles(baselineDir, new String[] {"krf", "drf", "crf"}, true);
-    // Our list of oplogs to copy (those not already in the baseline)
-    List<Oplog> oplogList = new LinkedList<Oplog>();
-
-    // Total list of member oplogs
-    Oplog[] allOplogs = getAllOplogsForBackup();
-
-    /*
-     * Loop through operation logs and see if they are already part of the baseline backup.
-     */
-    for (Oplog log : allOplogs) {
-      // See if they are backed up in the current baseline
-      Map<File, File> oplogMap = log.mapBaseline(baselineOplogFiles);
-
-      // No? Then see if they were backed up in previous baselines
-      if (oplogMap.isEmpty() && baselineInspector.isIncremental()) {
-        Set<String> matchingOplogs =
-            log.gatherMatchingOplogFiles(baselineInspector.getIncrementalOplogFileNames());
-        if (!matchingOplogs.isEmpty()) {
-          for (String matchingOplog : matchingOplogs) {
-            oplogMap.put(new File(baselineInspector.getCopyFromForOplogFile(matchingOplog)),
-                new File(baselineInspector.getCopyToForOplogFile(matchingOplog)));
-          }
-        }
-      }
-
-      if (oplogMap.isEmpty()) {
-        /*
-         * These are fresh operation log files so lets back them up.
-         */
-        oplogList.add(log);
-      } else {
-        /*
-         * These have been backed up before so lets just add their entries from the previous backup
-         * or restore script into the current one.
-         */
-        baselineCopyMap.putAll(oplogMap);
-      }
-    }
-
-    // Convert the filtered oplog list to an array
-    return oplogList.toArray(new Oplog[oplogList.size()]);
-  }
-
-  /**
    * Get all of the oplogs
    */
-  private Oplog[] getAllOplogsForBackup() {
+  Oplog[] getAllOplogsForBackup() {
     return persistentOplogs.getAllOplogs();
   }
 
@@ -4066,124 +3989,6 @@ public class DiskStoreImpl implements DiskStore {
     getBackupLock().unlockForBackup();
   }
 
-  /**
-   * Start the backup process. This is the second step of the backup process: in this method, we
-   * define the data we're backing up by copying the init file and rolling to the next file. After
-   * this method returns, operations can proceed as normal, except that we don't remove oplogs.
-   */
-  public void startBackup(File targetDir, BackupInspector baselineInspector,
-      RestoreScript restoreScript) throws IOException {
-    getBackupLock().setBackupThread();
-    boolean done = false;
-    try {
-      for (;;) {
-        Oplog childOplog = persistentOplogs.getChild();
-        if (childOplog == null) {
-          this.diskStoreBackup = new DiskStoreBackup(new Oplog[0], targetDir);
-          break;
-        }
-
-        // Get an appropriate lock object for each set of oplogs.
-        Object childLock = childOplog.lock;
-
-        // TODO - We really should move this lock into the disk store, but
-        // until then we need to do this magic to make sure we're actually
-        // locking the latest child for both types of oplogs
-
-        // This ensures that all writing to disk is blocked while we are
-        // creating the snapshot
-        synchronized (childLock) {
-          if (persistentOplogs.getChild() != childOplog) {
-            continue;
-          }
-
-          if (logger.isDebugEnabled()) {
-            logger.debug("snapshotting oplogs for disk store {}", getName());
-          }
-
-          // Create the directories for this disk store
-          for (int i = 0; i < directories.length; i++) {
-            File dir = getBackupDir(targetDir, i);
-            if (!dir.mkdirs()) {
-              throw new IOException("Could not create directory " + dir);
-            }
-            restoreScript.addFile(directories[i].getDir(), dir);
-          }
-
-          restoreScript.addExistenceTest(this.initFile.getIFFile());
-
-          // Contains all oplogs that will be backed up
-          Oplog[] allOplogs = null;
-
-          // Incremental backup so filter out oplogs that have already been
-          // backed up
-          if (null != baselineInspector) {
-            Map<File, File> baselineCopyMap = new HashMap<File, File>();
-            allOplogs = filterBaselineOplogs(baselineInspector, baselineCopyMap);
-            restoreScript.addBaselineFiles(baselineCopyMap);
-          } else {
-            allOplogs = getAllOplogsForBackup();
-          }
-
-          // mark all oplogs as being backed up. This will
-          // prevent the oplogs from being deleted
-          this.diskStoreBackup = new DiskStoreBackup(allOplogs, targetDir);
-
-          // copy the init file
-          File firstDir = getBackupDir(targetDir, infoFileDirIndex);
-          initFile.copyTo(firstDir);
-          persistentOplogs.forceRoll(null);
-
-          if (logger.isDebugEnabled()) {
-            logger.debug("done snapshotting for disk store {}", getName());
-          }
-          break;
-        }
-      }
-      done = true;
-    } finally {
-      if (!done) {
-        clearBackup();
-      }
-    }
-  }
-
-  private File getBackupDir(File targetDir, int index) {
-    return new File(targetDir, BACKUP_DIR_PREFIX + index);
-  }
-
-  /**
-   * Copy the oplogs to the backup directory. This is the final step of the backup process. The
-   * oplogs we copy are defined in the startBackup method.
-   */
-  public void finishBackup(BackupManager backupManager) throws IOException {
-    if (diskStoreBackup == null) {
-      return;
-    }
-    try {
-      // Wait for delayed oplog writes to complete before backing them up.
-      waitForDelayedWrites();
-
-      // Backup all of the oplogs
-      for (Oplog oplog : this.diskStoreBackup.getPendingBackup()) {
-        if (backupManager.isCancelled()) {
-          break;
-        }
-        // Copy the oplog to the destination directory
-        int index = oplog.getDirectoryHolder().getArrayIndex();
-        File backupDir = getBackupDir(this.diskStoreBackup.getTargetDir(), index);
-        // TODO prpersist - We could probably optimize this to *move* the files
-        // that we know are supposed to be deleted.
-        oplog.copyTo(backupDir);
-
-        // Allow the oplog to be deleted, and process any pending delete
-        this.diskStoreBackup.backupFinished(oplog);
-      }
-    } finally {
-      clearBackup();
-    }
-  }
-
   private int getArrayIndexOfDirectory(File searchDir) {
     for (DirectoryHolder holder : directories) {
       if (holder.getDir().equals(searchDir)) {
@@ -4197,16 +4002,9 @@ public class DiskStoreImpl implements DiskStore {
     return this.directories;
   }
 
-  private void clearBackup() {
-    DiskStoreBackup backup = this.diskStoreBackup;
-    if (backup != null) {
-      this.diskStoreBackup = null;
-      backup.cleanup();
-    }
-  }
-
   public DiskStoreBackup getInProgressBackup() {
-    return diskStoreBackup;
+    BackupManager backupManager = cache.getBackupManager();
+    return backupManager == null ? null : backupManager.getBackupForDiskStore(this);
   }
 
   public Collection<DiskRegionView> getKnown() {

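The DiskStoreImpl hunks above remove the per-store `diskStoreBackup` field; the disk store now asks the cache-wide BackupManager for its in-progress backup. A minimal sketch of that delegation pattern, using invented stand-in classes rather than the actual Geode types:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy stand-ins; names are illustrative only, not Geode's real classes.
class DiskStoreBackup {}

// The cache-wide manager now owns the per-disk-store backup state.
class BackupManager {
    private final Map<DiskStore, DiskStoreBackup> backups = new ConcurrentHashMap<>();

    void startBackup(DiskStore store) {
        backups.put(store, new DiskStoreBackup());
    }

    DiskStoreBackup getBackupForDiskStore(DiskStore store) {
        return backups.get(store);
    }
}

class Cache {
    private volatile BackupManager backupManager;
    BackupManager getBackupManager() { return backupManager; }
    void setBackupManager(BackupManager manager) { backupManager = manager; }
}

// The disk store holds no backup field of its own; it delegates to the cache.
class DiskStore {
    private final Cache cache;
    DiskStore(Cache cache) { this.cache = cache; }

    DiskStoreBackup getInProgressBackup() {
        BackupManager manager = cache.getBackupManager();
        return manager == null ? null : manager.getBackupForDiskStore(this);
    }
}

public class Demo {
    public static void main(String[] args) {
        Cache cache = new Cache();
        DiskStore store = new DiskStore(cache);
        // No manager installed yet: no backup in progress.
        if (store.getInProgressBackup() != null)
            throw new AssertionError("no backup should be in progress yet");

        BackupManager manager = new BackupManager();
        cache.setBackupManager(manager);
        manager.startBackup(store);
        if (store.getInProgressBackup() == null)
            throw new AssertionError("backup should be visible through the cache");
    }
}
```

The null check matters because a disk store can be asked for its backup when no backup is running at all.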
http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/main/java/org/apache/geode/internal/cache/GemFireCacheImpl.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/GemFireCacheImpl.java b/geode-core/src/main/java/org/apache/geode/internal/cache/GemFireCacheImpl.java
index 67c8add..6d250d9 100755
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/GemFireCacheImpl.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/GemFireCacheImpl.java
@@ -79,6 +79,8 @@ import com.sun.jna.Platform;
 import org.apache.commons.lang.StringUtils;
 import org.apache.logging.log4j.Logger;
 
+import org.apache.geode.internal.cache.event.EventTrackerExpiryTask;
+import org.apache.geode.internal.security.SecurityServiceFactory;
 import org.apache.geode.CancelCriterion;
 import org.apache.geode.CancelException;
 import org.apache.geode.ForcedDisconnectException;
@@ -184,7 +186,6 @@ import org.apache.geode.internal.cache.locks.TXLockService;
 import org.apache.geode.internal.cache.lru.HeapEvictor;
 import org.apache.geode.internal.cache.lru.OffHeapEvictor;
 import org.apache.geode.internal.cache.partitioned.RedundancyAlreadyMetException;
-import org.apache.geode.internal.cache.persistence.BackupManager;
 import org.apache.geode.internal.cache.persistence.PersistentMemberID;
 import org.apache.geode.internal.cache.persistence.PersistentMemberManager;
 import org.apache.geode.internal.cache.snapshot.CacheSnapshotServiceImpl;
@@ -4351,7 +4352,7 @@ public class GemFireCacheImpl implements InternalCache, InternalClientCache, Has
     if (!this.backupManager.compareAndSet(null, manager)) {
       throw new IOException("Backup already in progress");
     }
-    manager.start();
+    manager.validateRequestingAdmin();
     return manager;
   }
 

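The GemFireCacheImpl hunk keeps the existing compare-and-set guard that admits only one backup at a time; only the post-registration step changes (validate the requesting admin instead of calling start()). The guard itself can be sketched like this, with illustrative names rather than Geode's:

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative one-at-a-time guard built on AtomicReference.compareAndSet.
class BackupRegistry {
    private final AtomicReference<Object> current = new AtomicReference<>();

    Object begin(Object manager) throws IOException {
        // Atomically install the manager; fail if another backup is registered.
        if (!current.compareAndSet(null, manager)) {
            throw new IOException("Backup already in progress");
        }
        return manager;
    }

    void finish(Object manager) {
        // Clear only if we still own the slot.
        current.compareAndSet(manager, null);
    }
}

public class GuardDemo {
    public static void main(String[] args) throws IOException {
        BackupRegistry registry = new BackupRegistry();
        Object first = registry.begin(new Object());

        boolean rejected = false;
        try {
            registry.begin(new Object());
        } catch (IOException expected) {
            rejected = true;
        }
        if (!rejected) throw new AssertionError("second backup should be rejected");

        registry.finish(first);
        // After finishing, a new backup may begin.
        registry.begin(new Object());
    }
}
```

compareAndSet makes the check-and-install step atomic, so two concurrent backup requests cannot both succeed.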
http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/main/java/org/apache/geode/internal/cache/InternalCache.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/InternalCache.java b/geode-core/src/main/java/org/apache/geode/internal/cache/InternalCache.java
index d162010..84aa66e 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/InternalCache.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/InternalCache.java
@@ -55,7 +55,6 @@ import org.apache.geode.internal.cache.control.InternalResourceManager;
 import org.apache.geode.internal.cache.control.ResourceAdvisor;
 import org.apache.geode.internal.cache.event.EventTrackerExpiryTask;
 import org.apache.geode.internal.cache.extension.Extensible;
-import org.apache.geode.internal.cache.persistence.BackupManager;
 import org.apache.geode.internal.cache.persistence.PersistentMemberManager;
 import org.apache.geode.internal.cache.tier.sockets.CacheClientNotifier;
 import org.apache.geode.internal.cache.tier.sockets.ClientProxyMembershipID;

http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/main/java/org/apache/geode/internal/cache/Oplog.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/Oplog.java b/geode-core/src/main/java/org/apache/geode/internal/cache/Oplog.java
index 80f19b5..860db98 100755
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/Oplog.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/Oplog.java
@@ -5702,6 +5702,7 @@ public class Oplog implements CompactableOplog, Flushable {
 
   public void deleteCRF() {
     oplogSet.crfDelete(this.oplogId);
+    BackupManager backupManager = getInternalCache().getBackupManager();
     DiskStoreBackup inProgressBackup = getParent().getInProgressBackup();
     if (inProgressBackup == null || !inProgressBackup.deferCrfDelete(this)) {
       deleteCRFFileOnly();

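The deleteCRF hunk above preserves the defer-delete handshake: if an in-progress backup claims the file, deletion is postponed until the backup releases it; otherwise the file is deleted immediately. A stripped-down sketch of that control flow, with invented stand-in names:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative stand-in for a backup that can claim files it still needs.
class InProgressBackup {
    private final Set<String> deferredDeletes = ConcurrentHashMap.newKeySet();

    // Returns true if the backup takes ownership of deleting the file later.
    boolean deferDelete(String fileName) {
        return deferredDeletes.add(fileName);
    }

    Set<String> pendingDeletes() {
        return deferredDeletes;
    }
}

class OplogStore {
    private volatile InProgressBackup inProgressBackup;
    final Set<String> deletedNow = new HashSet<>();

    void setInProgressBackup(InProgressBackup backup) {
        this.inProgressBackup = backup;
    }

    // Mirrors the control flow of deleteCRF in the diff above.
    void deleteCrf(String fileName) {
        InProgressBackup backup = inProgressBackup;
        if (backup == null || !backup.deferDelete(fileName)) {
            deletedNow.add(fileName); // no backup holding it: delete immediately
        }
    }
}

public class DeferDemo {
    public static void main(String[] args) {
        OplogStore store = new OplogStore();
        store.deleteCrf("1.crf");
        if (!store.deletedNow.contains("1.crf"))
            throw new AssertionError("with no backup, the crf is deleted immediately");

        InProgressBackup backup = new InProgressBackup();
        store.setInProgressBackup(backup);
        store.deleteCrf("2.crf");
        if (store.deletedNow.contains("2.crf"))
            throw new AssertionError("an in-progress backup should defer the delete");
        if (!backup.pendingDeletes().contains("2.crf"))
            throw new AssertionError("the backup should own the deferred delete");
    }
}
```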
http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegionDataStore.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegionDataStore.java b/geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegionDataStore.java
index 893ca6b..3d9ac18 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegionDataStore.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegionDataStore.java
@@ -42,7 +42,6 @@ import org.apache.geode.internal.cache.execute.PartitionedRegionFunctionResultSe
 import org.apache.geode.internal.cache.execute.RegionFunctionContextImpl;
 import org.apache.geode.internal.cache.partitioned.*;
 import org.apache.geode.internal.cache.partitioned.RemoveBucketMessage.RemoveBucketResponse;
-import org.apache.geode.internal.cache.persistence.BackupManager;
 import org.apache.geode.internal.cache.tier.sockets.ClientProxyMembershipID;
 import org.apache.geode.internal.cache.tier.sockets.ServerConnection;
 import org.apache.geode.internal.cache.wan.AbstractGatewaySender;

http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/main/java/org/apache/geode/internal/cache/persistence/BackupManager.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/persistence/BackupManager.java b/geode-core/src/main/java/org/apache/geode/internal/cache/persistence/BackupManager.java
deleted file mode 100644
index f464e0d..0000000
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/persistence/BackupManager.java
+++ /dev/null
@@ -1,389 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
- * agreements. See the NOTICE file distributed with this work for additional information regarding
- * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License. You may obtain a
- * copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed under the License
- * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
- * or implied. See the License for the specific language governing permissions and limitations under
- * the License.
- */
-package org.apache.geode.internal.cache.persistence;
-
-import java.io.File;
-import java.io.FileOutputStream;
-import java.io.FilenameFilter;
-import java.io.IOException;
-import java.net.URL;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.HashSet;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Set;
-import java.util.concurrent.CountDownLatch;
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-
-import org.apache.commons.io.FileUtils;
-
-import org.apache.geode.InternalGemFireError;
-import org.apache.geode.cache.DiskStore;
-import org.apache.geode.cache.persistence.PersistentID;
-import org.apache.geode.distributed.DistributedSystem;
-import org.apache.geode.distributed.internal.DM;
-import org.apache.geode.distributed.internal.DistributionConfig;
-import org.apache.geode.distributed.internal.MembershipListener;
-import org.apache.geode.distributed.internal.membership.InternalDistributedMember;
-import org.apache.geode.internal.ClassPathLoader;
-import org.apache.geode.internal.DeployedJar;
-import org.apache.geode.internal.JarDeployer;
-import org.apache.geode.internal.cache.DiskStoreImpl;
-import org.apache.geode.internal.cache.InternalCache;
-import org.apache.geode.internal.i18n.LocalizedStrings;
-
-/**
- * This class manages the state and logic needed to back up a single cache.
- */
-public class BackupManager implements MembershipListener {
-
-  // TODO prpersist internationalize this.
-  public static final String INCOMPLETE_BACKUP = "INCOMPLETE_BACKUP";
-  public static final String README = "README.txt";
-  public static final String DATA_STORES = "diskstores";
-  public static final String USER_FILES = "user";
-  public static final String CONFIG = "config";
-  private InternalDistributedMember sender;
-  private InternalCache cache;
-  private CountDownLatch allowDestroys = new CountDownLatch(1);
-  private volatile boolean isCancelled = false;
-
-  public BackupManager(InternalDistributedMember sender, InternalCache gemFireCache) {
-    this.sender = sender;
-    this.cache = gemFireCache;
-  }
-
-  public void start() {
-    final DM distributionManager = cache.getInternalDistributedSystem().getDistributionManager();
-    // We need to watch for pure admin guys that depart. this allMembershipListener set
-    // looks like it should receive those events.
-    Set allIds = distributionManager.addAllMembershipListenerAndGetAllIds(this);
-    if (!allIds.contains(sender)) {
-      cleanup();
-      throw new IllegalStateException("The admin member requesting a backup has already departed");
-    }
-  }
-
-  private void cleanup() {
-    isCancelled = true;
-    allowDestroys.countDown();
-    Collection<DiskStore> diskStores = cache.listDiskStoresIncludingRegionOwned();
-    for (DiskStore store : diskStores) {
-      ((DiskStoreImpl) store).releaseBackupLock();
-    }
-    final DM distributionManager = cache.getInternalDistributedSystem().getDistributionManager();
-    distributionManager.removeAllMembershipListener(this);
-    cache.clearBackupManager();
-  }
-
-  public HashSet<PersistentID> prepareBackup() {
-    HashSet<PersistentID> persistentIds = new HashSet<PersistentID>();
-    Collection<DiskStore> diskStores = cache.listDiskStoresIncludingRegionOwned();
-    for (DiskStore store : diskStores) {
-      DiskStoreImpl storeImpl = (DiskStoreImpl) store;
-      storeImpl.lockStoreBeforeBackup();
-      if (storeImpl.hasPersistedData()) {
-        persistentIds.add(storeImpl.getPersistentID());
-        storeImpl.getStats().startBackup();
-      }
-    }
-    return persistentIds;
-  }
-
-  /**
-   * Returns the memberId directory for this member in the baseline. The memberId may have changed
-   * if this member has been restarted since the last backup.
-   * 
-   * @param baselineParentDir parent directory of last backup.
-   * @return null if the baseline for this member could not be located.
-   */
-  private File findBaselineForThisMember(File baselineParentDir) {
-    File baselineDir = null;
-
-    /*
-     * Find the first matching DiskStoreId directory for this member.
-     */
-    for (DiskStore diskStore : cache.listDiskStoresIncludingRegionOwned()) {
-      File[] matchingFiles = baselineParentDir.listFiles(new FilenameFilter() {
-        Pattern pattern =
-            Pattern.compile(".*" + ((DiskStoreImpl) diskStore).getBackupDirName() + "$");
-
-        public boolean accept(File dir, String name) {
-          Matcher m = pattern.matcher(name);
-          return m.find();
-        }
-      });
-      // We found it? Good. Set this member's baseline to the backed up disk store's member dir (two
-      // levels up).
-      if (null != matchingFiles && matchingFiles.length > 0)
-        baselineDir = matchingFiles[0].getParentFile().getParentFile();
-    }
-    return baselineDir;
-  }
-
-  /**
-   * Performs a sanity check on the baseline directory for incremental backups. If a baseline
-   * directory exists for the member and there is no INCOMPLETE_BACKUP file then return the data
-   * stores directory for this member.
-   * 
-   * @param baselineParentDir a previous backup directory. This is used with the incremental backup
-   *        option. May be null if the user specified a full backup.
-   * @return null if the backup is to be a full backup otherwise return the data store directory in
-   *         the previous backup for this member (if incremental).
-   */
-  private File checkBaseline(File baselineParentDir) throws IOException {
-    File baselineDir = null;
-
-    if (null != baselineParentDir) {
-      // Start by looking for this memberId
-      baselineDir = getBackupDir(baselineParentDir);
-
-      if (!baselineDir.exists()) {
-        // hmmm, did this member have a restart?
-        // Determine which member dir might be a match for us
-        baselineDir = findBaselineForThisMember(baselineParentDir);
-      }
-
-      if (null != baselineDir) {
-        // check for existence of INCOMPLETE_BACKUP file
-        File incompleteBackup = new File(baselineDir, INCOMPLETE_BACKUP);
-        if (incompleteBackup.exists()) {
-          baselineDir = null;
-        }
-      }
-    }
-
-    return baselineDir;
-  }
-
-  public HashSet<PersistentID> finishBackup(File targetDir, File baselineDir, boolean abort)
-      throws IOException {
-    try {
-      if (abort) {
-        return new HashSet<PersistentID>();
-      }
-
-      File backupDir = getBackupDir(targetDir);
-
-      // Make sure our baseline is okay for this member
-      baselineDir = checkBaseline(baselineDir);
-
-      // Create an inspector for the baseline backup
-      BackupInspector inspector =
-          (baselineDir == null ? null : BackupInspector.createInspector(baselineDir));
-
-      File storesDir = new File(backupDir, DATA_STORES);
-      RestoreScript restoreScript = new RestoreScript();
-      HashSet<PersistentID> persistentIds = new HashSet<PersistentID>();
-      Collection<DiskStore> diskStores =
-          new ArrayList<DiskStore>(cache.listDiskStoresIncludingRegionOwned());
-
-      boolean foundPersistentData = false;
-      for (Iterator<DiskStore> itr = diskStores.iterator(); itr.hasNext();) {
-        DiskStoreImpl store = (DiskStoreImpl) itr.next();
-        if (store.hasPersistedData()) {
-          if (!foundPersistentData) {
-            createBackupDir(backupDir);
-            foundPersistentData = true;
-          }
-          File diskStoreDir = new File(storesDir, store.getBackupDirName());
-          diskStoreDir.mkdir();
-          store.startBackup(diskStoreDir, inspector, restoreScript);
-        } else {
-          itr.remove();
-        }
-        store.releaseBackupLock();
-      }
-
-      allowDestroys.countDown();
-
-      for (DiskStore store : diskStores) {
-        DiskStoreImpl storeImpl = (DiskStoreImpl) store;
-        storeImpl.finishBackup(this);
-        storeImpl.getStats().endBackup();
-        persistentIds.add(storeImpl.getPersistentID());
-      }
-
-      if (foundPersistentData) {
-        backupConfigFiles(restoreScript, backupDir);
-        backupUserFiles(restoreScript, backupDir);
-        backupDeployedJars(restoreScript, backupDir);
-        restoreScript.generate(backupDir);
-        File incompleteFile = new File(backupDir, INCOMPLETE_BACKUP);
-        if (!incompleteFile.delete()) {
-          throw new IOException("Could not delete file " + INCOMPLETE_BACKUP);
-        }
-      }
-
-      return persistentIds;
-
-    } finally {
-      cleanup();
-    }
-  }
-
-  public void abort() {
-    cleanup();
-  }
-
-  private void backupConfigFiles(RestoreScript restoreScript, File backupDir) throws IOException {
-    File configBackupDir = new File(backupDir, CONFIG);
-    configBackupDir.mkdirs();
-    URL url = cache.getCacheXmlURL();
-    if (url != null) {
-      File cacheXMLBackup =
-          new File(configBackupDir, DistributionConfig.DEFAULT_CACHE_XML_FILE.getName());
-      FileUtils.copyFile(new File(cache.getCacheXmlURL().getFile()), cacheXMLBackup);
-    }
-
-    URL propertyURL = DistributedSystem.getPropertiesFileURL();
-    if (propertyURL != null) {
-      File propertyBackup =
-          new File(configBackupDir, DistributionConfig.GEMFIRE_PREFIX + "properties");
-      FileUtils.copyFile(new File(DistributedSystem.getPropertiesFile()), propertyBackup);
-    }
-
-    // TODO: should the gfsecurity.properties file be backed up?
-  }
-
-  private void backupUserFiles(RestoreScript restoreScript, File backupDir) throws IOException {
-    List<File> backupFiles = cache.getBackupFiles();
-    File userBackupDir = new File(backupDir, USER_FILES);
-    if (!userBackupDir.exists()) {
-      userBackupDir.mkdir();
-    }
-    for (File original : backupFiles) {
-      if (original.exists()) {
-        original = original.getAbsoluteFile();
-        File dest = new File(userBackupDir, original.getName());
-        if (original.isDirectory()) {
-          FileUtils.copyDirectory(original, dest);
-        } else {
-          FileUtils.copyFile(original, dest);
-        }
-        restoreScript.addExistenceTest(original);
-        restoreScript.addFile(original, dest);
-      }
-    }
-  }
-
-  /**
-   * Copies user deployed jars to the backup directory.
-   * 
-   * @param restoreScript Used to restore from this backup.
-   * @param backupDir The backup directory for this member.
-   * @throws IOException one or more of the jars did not successfully copy.
-   */
-  private void backupDeployedJars(RestoreScript restoreScript, File backupDir) throws IOException {
-    JarDeployer deployer = null;
-
-    try {
-      /*
-       * Suspend any user deployed jar file updates during this backup.
-       */
-      deployer = ClassPathLoader.getLatest().getJarDeployer();
-      deployer.suspendAll();
-
-      List<DeployedJar> jarList = deployer.findDeployedJars();
-      if (!jarList.isEmpty()) {
-        File userBackupDir = new File(backupDir, USER_FILES);
-        if (!userBackupDir.exists()) {
-          userBackupDir.mkdir();
-        }
-
-        for (DeployedJar loader : jarList) {
-          File source = new File(loader.getFileCanonicalPath());
-          File dest = new File(userBackupDir, source.getName());
-          if (source.isDirectory()) {
-            FileUtils.copyDirectory(source, dest);
-          } else {
-            FileUtils.copyFile(source, dest);
-          }
-          restoreScript.addFile(source, dest);
-        }
-      }
-    } finally {
-      /*
-       * Re-enable user deployed jar file updates.
-       */
-      if (null != deployer) {
-        deployer.resumeAll();
-      }
-    }
-  }
-
-  private File getBackupDir(File targetDir) throws IOException {
-    InternalDistributedMember memberId =
-        cache.getInternalDistributedSystem().getDistributedMember();
-    String vmId = memberId.toString();
-    vmId = cleanSpecialCharacters(vmId);
-    return new File(targetDir, vmId);
-  }
-
-  private void createBackupDir(File backupDir) throws IOException {
-    if (backupDir.exists()) {
-      throw new IOException("Backup directory " + backupDir.getAbsolutePath() + " already exists.");
-    }
-
-    if (!backupDir.mkdirs()) {
-      throw new IOException("Could not create directory: " + backupDir);
-    }
-
-    File incompleteFile = new File(backupDir, INCOMPLETE_BACKUP);
-    if (!incompleteFile.createNewFile()) {
-      throw new IOException("Could not create file: " + incompleteFile);
-    }
-
-    File readme = new File(backupDir, README);
-    FileOutputStream fos = new FileOutputStream(readme);
-
-    try {
-      String text = LocalizedStrings.BackupManager_README.toLocalizedString();
-      fos.write(text.getBytes());
-    } finally {
-      fos.close();
-    }
-  }
-
-  private String cleanSpecialCharacters(String string) {
-    return string.replaceAll("[^\\w]+", "_");
-  }
-
-  public void memberDeparted(InternalDistributedMember id, boolean crashed) {
-    cleanup();
-  }
-
-  public void memberJoined(InternalDistributedMember id) {}
-
-  public void quorumLost(Set<InternalDistributedMember> failures,
-      List<InternalDistributedMember> remaining) {}
-
-  public void memberSuspect(InternalDistributedMember id, InternalDistributedMember whoSuspected,
-      String reason) {}
-
-  public void waitForBackup() {
-    try {
-      allowDestroys.await();
-    } catch (InterruptedException e) {
-      throw new InternalGemFireError(e);
-    }
-  }
-
-  public boolean isCancelled() {
-    return isCancelled;
-  }
-}

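The deleted BackupManager (this commit moves it to org.apache.geode.internal.cache) gates destructive operations on a CountDownLatch: waitForBackup() blocks until allowDestroys is counted down, and every failure path releases the latch so waiters are never stuck. That latch pattern can be sketched on its own, with illustrative names only:

```java
import java.util.concurrent.CountDownLatch;

// Illustrative sketch: hold off destroys while a backup snapshot is defined.
class DestroyGate {
    private final CountDownLatch allowDestroys = new CountDownLatch(1);
    private volatile boolean cancelled = false;

    // Called once the backup has captured the set of files it needs.
    void releaseDestroys() {
        allowDestroys.countDown();
    }

    // Called on any failure path so blocked threads are released.
    void cancel() {
        cancelled = true;
        allowDestroys.countDown();
    }

    // Destructive operations call this before proceeding.
    void waitForBackup() throws InterruptedException {
        allowDestroys.await();
    }

    boolean isCancelled() {
        return cancelled;
    }
}

public class GateDemo {
    public static void main(String[] args) throws InterruptedException {
        DestroyGate gate = new DestroyGate();
        Thread destroyer = new Thread(() -> {
            try {
                gate.waitForBackup(); // blocks until the gate opens
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        destroyer.start();
        gate.releaseDestroys();
        destroyer.join(5000);
        if (destroyer.isAlive())
            throw new AssertionError("destroyer should have been released");
        if (gate.isCancelled())
            throw new AssertionError("gate was never cancelled");
    }
}
```

A one-shot CountDownLatch fits here because the gate only ever transitions once, from closed to open.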
http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/main/java/org/apache/geode/internal/cache/xmlcache/CacheCreation.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/xmlcache/CacheCreation.java b/geode-core/src/main/java/org/apache/geode/internal/cache/xmlcache/CacheCreation.java
index a7f2a11..e5e372d 100755
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/xmlcache/CacheCreation.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/xmlcache/CacheCreation.java
@@ -108,7 +108,7 @@ import org.apache.geode.internal.cache.extension.Extensible;
 import org.apache.geode.internal.cache.extension.ExtensionPoint;
 import org.apache.geode.internal.cache.extension.SimpleExtensionPoint;
 import org.apache.geode.internal.cache.ha.HARegionQueue;
-import org.apache.geode.internal.cache.persistence.BackupManager;
+import org.apache.geode.internal.cache.BackupManager;
 import org.apache.geode.internal.cache.persistence.PersistentMemberManager;
 import org.apache.geode.internal.cache.tier.sockets.CacheClientNotifier;
 import org.apache.geode.internal.cache.tier.sockets.ClientProxyMembershipID;

http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/main/java/org/apache/geode/management/internal/beans/MemberMBeanBridge.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/management/internal/beans/MemberMBeanBridge.java b/geode-core/src/main/java/org/apache/geode/management/internal/beans/MemberMBeanBridge.java
index dd905eb..5105c3d 100644
--- a/geode-core/src/main/java/org/apache/geode/management/internal/beans/MemberMBeanBridge.java
+++ b/geode-core/src/main/java/org/apache/geode/management/internal/beans/MemberMBeanBridge.java
@@ -77,7 +77,7 @@ import org.apache.geode.internal.cache.PartitionedRegionStats;
 import org.apache.geode.internal.cache.control.ResourceManagerStats;
 import org.apache.geode.internal.cache.execute.FunctionServiceStats;
 import org.apache.geode.internal.cache.lru.LRUStatistics;
-import org.apache.geode.internal.cache.persistence.BackupManager;
+import org.apache.geode.internal.cache.BackupManager;
 import org.apache.geode.internal.i18n.LocalizedStrings;
 import org.apache.geode.internal.logging.LogService;
 import org.apache.geode.internal.logging.log4j.LocalizedMessage;
@@ -1037,10 +1037,10 @@ public class MemberMBeanBridge {
         Set<PersistentID> existingDataStores;
         Set<PersistentID> successfulDataStores;
         try {
-          existingDataStores = manager.prepareBackup();
+          existingDataStores = manager.prepareForBackup();
           abort = false;
         } finally {
-          successfulDataStores = manager.finishBackup(targetDir, null/* TODO rishi */, abort);
+          successfulDataStores = manager.doBackup(targetDir, null/* TODO rishi */, abort);
         }
         diskBackUpResult = new DiskBackupResult[existingDataStores.size()];
         int j = 0;

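The MemberMBeanBridge hunk renames the two-phase calls (prepareBackup to prepareForBackup, finishBackup to doBackup) but keeps the shape of the protocol: prepare, clear the abort flag only after prepare succeeds, and always run the second phase in a finally block. A hedged sketch of that control flow; the interface here is invented for illustration:

```java
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

// Invented interface standing in for the manager's two-phase API.
interface TwoPhaseBackup {
    Set<String> prepareForBackup() throws IOException;
    Set<String> doBackup(boolean abort) throws IOException;
}

public class TwoPhaseDemo {
    // Mirrors the try/finally shape in the hunk above: if prepare throws,
    // the second phase still runs, but with abort=true.
    static Set<String> runBackup(TwoPhaseBackup manager) throws IOException {
        boolean abort = true;
        Set<String> successful = null;
        try {
            manager.prepareForBackup();
            abort = false; // commit only if prepare succeeded
        } finally {
            successful = manager.doBackup(abort);
        }
        return successful;
    }

    public static void main(String[] args) throws IOException {
        final boolean[] sawAbort = new boolean[1];

        TwoPhaseBackup ok = new TwoPhaseBackup() {
            public Set<String> prepareForBackup() { return new HashSet<>(); }
            public Set<String> doBackup(boolean abort) {
                sawAbort[0] = abort;
                return new HashSet<>();
            }
        };
        runBackup(ok);
        if (sawAbort[0])
            throw new AssertionError("successful prepare should not abort");

        TwoPhaseBackup failing = new TwoPhaseBackup() {
            public Set<String> prepareForBackup() throws IOException {
                throw new IOException("prepare failed");
            }
            public Set<String> doBackup(boolean abort) {
                sawAbort[0] = abort;
                return new HashSet<>();
            }
        };
        boolean threw = false;
        try {
            runBackup(failing);
        } catch (IOException expected) {
            threw = true;
        }
        if (!threw) throw new AssertionError("prepare failure should propagate");
        if (!sawAbort[0])
            throw new AssertionError("second phase should abort after failed prepare");
    }
}
```

Running the second phase unconditionally is what guarantees the per-store backup locks taken during prepare are always released.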
http://git-wip-us.apache.org/repos/asf/geode/blob/3bb6a221/geode-core/src/test/java/org/apache/geode/internal/cache/BackupDUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/cache/BackupDUnitTest.java b/geode-core/src/test/java/org/apache/geode/internal/cache/BackupDUnitTest.java
index f2cee71..338c712 100755
--- a/geode-core/src/test/java/org/apache/geode/internal/cache/BackupDUnitTest.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/cache/BackupDUnitTest.java
@@ -46,11 +46,13 @@ import org.apache.geode.test.dunit.DUnitEnv;
 import org.apache.geode.test.dunit.Host;
 import org.apache.geode.test.dunit.IgnoredException;
 import org.apache.geode.test.dunit.Invoke;
-import org.apache.geode.test.dunit.LogWriterUtils;
 import org.apache.geode.test.dunit.SerializableCallable;
 import org.apache.geode.test.dunit.SerializableRunnable;
 import org.apache.geode.test.dunit.VM;
 import org.apache.geode.test.junit.categories.DistributedTest;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
@@ -67,34 +69,38 @@ import java.util.Collection;
 import java.util.Collections;
 import java.util.Set;
 import java.util.TreeSet;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicInteger;
-import java.util.concurrent.atomic.AtomicReference;
 import java.util.regex.Pattern;
 
 @Category(DistributedTest.class)
 public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
+  Logger logger = LogManager.getLogger(BackupDUnitTest.class);
 
-  private static final long MAX_WAIT = 30 * 1000;
+  private static final long MAX_WAIT_SECONDS = 30;
+  private VM vm0;
+  private VM vm1;
 
   @Override
   public final void preTearDownCacheTestCase() throws Exception {
     StringBuilder failures = new StringBuilder();
     delete(getBackupDir(), failures);
     if (failures.length() > 0) {
-      LogWriterUtils.getLogWriter().error(failures.toString());
+      logger.error(failures.toString());
     }
   }
 
   @Test
   public void testBackupPR() throws Throwable {
     Host host = Host.getHost(0);
-    VM vm0 = host.getVM(0);
-    VM vm1 = host.getVM(1);
+    vm0 = host.getVM(0);
+    vm1 = host.getVM(1);
     VM vm2 = host.getVM(2);
 
-    LogWriterUtils.getLogWriter().info("Creating region in VM0");
+    logger.info("Creating region in VM0");
     createPersistentRegion(vm0);
-    LogWriterUtils.getLogWriter().info("Creating region in VM1");
+    logger.info("Creating region in VM1");
     createPersistentRegion(vm1);
 
     long lm0 = setBackupFiles(vm0);
@@ -107,7 +113,6 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
     assertEquals(2, status.getBackedUpDiskStores().size());
     assertEquals(Collections.emptySet(), status.getOfflineDiskStores());
 
-    Pattern pattern = Pattern.compile(".*my.txt.*");
     Collection<File> files = FileUtils.listFiles(getBackupDir(), new String[] {"txt"}, true);
     assertEquals(4, files.size());
     deleteOldUserUserFile(vm0);
@@ -136,13 +141,7 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
 
     restoreBackup(2);
 
-    LogWriterUtils.getLogWriter().info("Creating region in VM0");
-    AsyncInvocation async0 = createPersistentRegionAsync(vm0);
-    LogWriterUtils.getLogWriter().info("Creating region in VM1");
-    AsyncInvocation async1 = createPersistentRegionAsync(vm1);
-
-    async0.getResult(MAX_WAIT);
-    async1.getResult(MAX_WAIT);
+    createPersistentRegionsAsync();
 
     checkData(vm0, 0, 5, "A", "region1");
     checkData(vm0, 0, 5, "B", "region2");
@@ -156,12 +155,12 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
   @Test
   public void testBackupFromMemberWithDiskStore() throws Throwable {
     Host host = Host.getHost(0);
-    VM vm0 = host.getVM(0);
-    VM vm1 = host.getVM(1);
+    vm0 = host.getVM(0);
+    vm1 = host.getVM(1);
 
-    LogWriterUtils.getLogWriter().info("Creating region in VM0");
+    logger.info("Creating region in VM0");
     createPersistentRegion(vm0);
-    LogWriterUtils.getLogWriter().info("Creating region in VM1");
+    logger.info("Creating region in VM1");
     createPersistentRegion(vm1);
 
     createData(vm0, 0, 5, "A", "region1");
@@ -192,25 +191,21 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
 
     restoreBackup(2);
 
-    LogWriterUtils.getLogWriter().info("Creating region in VM0");
-    AsyncInvocation async0 = createPersistentRegionAsync(vm0);
-    LogWriterUtils.getLogWriter().info("Creating region in VM1");
-    AsyncInvocation async1 = createPersistentRegionAsync(vm1);
-
-    async0.getResult(MAX_WAIT);
-    async1.getResult(MAX_WAIT);
+    createPersistentRegionsAsync();
 
     checkData(vm0, 0, 5, "A", "region1");
     checkData(vm0, 0, 5, "B", "region2");
   }
 
-  // public void testLoop() throws Throwable {
-  // for(int i =0 ;i < 100; i++) {
-  // testBackupWhileBucketIsCreated();
-  // setUp();
-  // tearDown();
-  // }
-  // }
+  private void createPersistentRegionsAsync() throws java.util.concurrent.ExecutionException,
+      InterruptedException, java.util.concurrent.TimeoutException {
+    logger.info("Creating region in VM0");
+    AsyncInvocation async0 = createPersistentRegionAsync(vm0);
+    logger.info("Creating region in VM1");
+    AsyncInvocation async1 = createPersistentRegionAsync(vm1);
+    async0.get(MAX_WAIT_SECONDS, TimeUnit.SECONDS);
+    async1.get(MAX_WAIT_SECONDS, TimeUnit.SECONDS);
+  }
 
   /**
    * Test for bug 42419
@@ -218,40 +213,27 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
   @Test
   public void testBackupWhileBucketIsCreated() throws Throwable {
     Host host = Host.getHost(0);
-    final VM vm0 = host.getVM(0);
-    VM vm1 = host.getVM(1);
+    vm0 = host.getVM(0);
+    vm1 = host.getVM(1);
     final VM vm2 = host.getVM(2);
 
-    LogWriterUtils.getLogWriter().info("Creating region in VM0");
+    logger.info("Creating region in VM0");
     createPersistentRegion(vm0);
 
     // create a bucket on vm0
     createData(vm0, 0, 1, "A", "region1");
 
     // create the pr on vm1, which won't have any buckets
-    LogWriterUtils.getLogWriter().info("Creating region in VM1");
+    logger.info("Creating region in VM1");
     createPersistentRegion(vm1);
 
-    final AtomicReference<BackupStatus> statusRef = new AtomicReference<BackupStatus>();
-    Thread thread1 = new Thread() {
-      public void run() {
+    CompletableFuture<BackupStatus> backupStatusFuture =
+        CompletableFuture.supplyAsync(() -> backup(vm2));
+    CompletableFuture<Void> createDataFuture =
+        CompletableFuture.runAsync(() -> createData(vm0, 1, 5, "A", "region1"));
+    CompletableFuture.allOf(backupStatusFuture, createDataFuture).join();
 
-        BackupStatus status = backup(vm2);
-        statusRef.set(status);
-
-      }
-    };
-    thread1.start();
-    Thread thread2 = new Thread() {
-      public void run() {
-        createData(vm0, 1, 5, "A", "region1");
-      }
-    };
-    thread2.start();
-    thread1.join();
-    thread2.join();
-
-    BackupStatus status = statusRef.get();
+    BackupStatus status = backupStatusFuture.get();
     assertEquals(2, status.getBackedUpDiskStores().size());
     assertEquals(Collections.emptySet(), status.getOfflineDiskStores());
 
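The rewritten test drives the backup and the concurrent data creation through `CompletableFuture` instead of hand-rolled threads. One API detail worth noting: `CompletableFuture.allOf(...)` only constructs the combined future; blocking for completion requires calling `join()` or `get()` on the result. A self-contained sketch of the run-two-tasks-then-await pattern (the task bodies are placeholders for `backup(vm2)` and `createData(...)`):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

// Two independent tasks run concurrently; allOf(...).join() is what actually
// blocks until both finish. The counter stands in for createData side effects.
public class ConcurrentTasksSketch {
  public static int runBoth() {
    AtomicInteger created = new AtomicInteger();
    CompletableFuture<Integer> statusFuture =
        CompletableFuture.supplyAsync(() -> 2); // stands in for backup(vm2)
    CompletableFuture<Void> createFuture =
        CompletableFuture.runAsync(() -> created.addAndGet(5)); // stands in for createData(...)
    CompletableFuture.allOf(statusFuture, createFuture).join(); // wait for both
    return statusFuture.join() + created.get(); // both results now safe to read
  }
}
```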
@@ -278,13 +260,7 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
 
     restoreBackup(2);
 
-    LogWriterUtils.getLogWriter().info("Creating region in VM0");
-    AsyncInvocation async0 = createPersistentRegionAsync(vm0);
-    LogWriterUtils.getLogWriter().info("Creating region in VM1");
-    AsyncInvocation async1 = createPersistentRegionAsync(vm1);
-
-    async0.getResult(MAX_WAIT);
-    async1.getResult(MAX_WAIT);
+    createPersistentRegionsAsync();
 
     checkData(vm0, 0, 1, "A", "region1");
   }
@@ -296,8 +272,6 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
 
     DistributionMessageObserver observer = new SerializableDistributionMessageObserver() {
       private volatile boolean done;
-      private AtomicInteger count = new AtomicInteger();
-      private volatile int replyId = -0xBAD;
 
       @Override
       public void beforeSendMessage(DistributionManager dm, DistributionMessage msg) {
@@ -316,8 +290,8 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
   @Test
   public void testBackupWhileBucketIsMovedBackupAfterSendDestroy() throws Throwable {
     Host host = Host.getHost(0);
-    final VM vm0 = host.getVM(0);
-    VM vm1 = host.getVM(1);
+    vm0 = host.getVM(0);
+    vm1 = host.getVM(1);
     final VM vm2 = host.getVM(2);
 
     DistributionMessageObserver observer = new SerializableDistributionMessageObserver() {
@@ -407,12 +381,11 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
    * 
    * @param observer - a message observer that triggers at the backup at the correct time.
    */
-  public void backupWhileBucketIsMoved(final DistributionMessageObserver observer)
+  private void backupWhileBucketIsMoved(final DistributionMessageObserver observer)
       throws Throwable {
     Host host = Host.getHost(0);
-    final VM vm0 = host.getVM(0);
-    VM vm1 = host.getVM(1);
-    final VM vm2 = host.getVM(2);
+    vm0 = host.getVM(0);
+    vm1 = host.getVM(1);
 
     vm0.invoke(new SerializableRunnable("Add listener to invoke backup") {
 
@@ -428,14 +401,14 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
     });
     try {
 
-      LogWriterUtils.getLogWriter().info("Creating region in VM0");
+      logger.info("Creating region in VM0");
       createPersistentRegion(vm0);
 
       // create twos bucket on vm0
       createData(vm0, 0, 2, "A", "region1");
 
       // create the pr on vm1, which won't have any buckets
-      LogWriterUtils.getLogWriter().info("Creating region in VM1");
+      logger.info("Creating region in VM1");
 
       createPersistentRegion(vm1);
 
@@ -476,13 +449,7 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
 
       restoreBackup(2);
 
-      LogWriterUtils.getLogWriter().info("Creating region in VM0");
-      AsyncInvocation async0 = createPersistentRegionAsync(vm0);
-      LogWriterUtils.getLogWriter().info("Creating region in VM1");
-      AsyncInvocation async1 = createPersistentRegionAsync(vm1);
-
-      async0.getResult(MAX_WAIT);
-      async1.getResult(MAX_WAIT);
+      createPersistentRegionsAsync();
 
       checkData(vm0, 0, 2, "A", "region1");
     } finally {
@@ -502,13 +469,13 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
   @Test
   public void testBackupOverflow() throws Throwable {
     Host host = Host.getHost(0);
-    VM vm0 = host.getVM(0);
-    VM vm1 = host.getVM(1);
+    vm0 = host.getVM(0);
+    vm1 = host.getVM(1);
     VM vm2 = host.getVM(2);
 
-    LogWriterUtils.getLogWriter().info("Creating region in VM0");
+    logger.info("Creating region in VM0");
     createPersistentRegion(vm0);
-    LogWriterUtils.getLogWriter().info("Creating region in VM1");
+    logger.info("Creating region in VM1");
     createOverflowRegion(vm1);
 
     createData(vm0, 0, 5, "A", "region1");
@@ -526,16 +493,16 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
   @Test
   public void testBackupPRWithOfflineMembers() throws Throwable {
     Host host = Host.getHost(0);
-    VM vm0 = host.getVM(0);
-    VM vm1 = host.getVM(1);
+    vm0 = host.getVM(0);
+    vm1 = host.getVM(1);
     VM vm2 = host.getVM(2);
     VM vm3 = host.getVM(3);
 
-    LogWriterUtils.getLogWriter().info("Creating region in VM0");
+    logger.info("Creating region in VM0");
     createPersistentRegion(vm0);
-    LogWriterUtils.getLogWriter().info("Creating region in VM1");
+    logger.info("Creating region in VM1");
     createPersistentRegion(vm1);
-    LogWriterUtils.getLogWriter().info("Creating region in VM2");
+    logger.info("Creating region in VM2");
     createPersistentRegion(vm2);
 
     createData(vm0, 0, 5, "A", "region1");
@@ -562,11 +529,11 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
     assertTrue(files.length == 0);
   }
 
-  protected void createPersistentRegion(VM vm) throws Throwable {
+  private void createPersistentRegion(VM vm) throws Throwable {
     AsyncInvocation future = createPersistentRegionAsync(vm);
-    future.join(MAX_WAIT);
+    future.get(MAX_WAIT_SECONDS, TimeUnit.SECONDS);
     if (future.isAlive()) {
-      fail("Region not created within" + MAX_WAIT);
+      fail("Region not created within " + MAX_WAIT_SECONDS);
     }
     if (future.exceptionOccurred()) {
       throw new RuntimeException(future.getException());
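With `AsyncInvocation` now awaited via `get(MAX_WAIT_SECONDS, TimeUnit.SECONDS)`, the wait either returns normally, surfaces the remote failure, or throws `TimeoutException` when the region never comes up. A small sketch of that await-with-timeout pattern using a plain executor (the `AwaitWithTimeoutSketch` name and task are illustrative, not part of the test framework):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// get(timeout, unit) either returns the task's result, rethrows the task's
// exception wrapped in ExecutionException, or throws TimeoutException.
public class AwaitWithTimeoutSketch {
  public static String await(Callable<String> task, long seconds) throws Exception {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    try {
      Future<String> future = pool.submit(task);
      return future.get(seconds, TimeUnit.SECONDS);
    } finally {
      pool.shutdownNow(); // don't leak the worker thread on timeout
    }
  }
}
```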
@@ -576,9 +543,8 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
   private void deleteOldUserUserFile(final VM vm) {
     SerializableRunnable validateUserFileBackup = new SerializableRunnable("set user backups") {
       public void run() {
-        final int pid = vm.getPid();
         try {
-          FileUtils.deleteDirectory(new File("userbackup_" + pid));
+          FileUtils.deleteDirectory(new File("userbackup_" + vm.getPid()));
         } catch (IOException e) {
           fail(e.getMessage());
         }
@@ -587,7 +553,7 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
     vm.invoke(validateUserFileBackup);
   }
 
-  protected long setBackupFiles(final VM vm) {
+  private long setBackupFiles(final VM vm) {
     SerializableCallable setUserBackups = new SerializableCallable("set user backups") {
       public Object call() {
         final int pid = DUnitEnv.get().getPid();
@@ -595,7 +561,7 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
         File test1 = new File(vmdir, "test1");
         File test2 = new File(test1, "test2");
         File mytext = new File(test2, "my.txt");
-        final ArrayList<File> backuplist = new ArrayList<File>();
+        final ArrayList<File> backuplist = new ArrayList<>();
         test2.mkdirs();
         PrintStream ps = null;
         try {
@@ -619,7 +585,7 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
     return (long) vm.invoke(setUserBackups);
   }
 
-  protected void verifyUserFileRestored(VM vm, final long lm) {
+  private void verifyUserFileRestored(VM vm, final long lm) {
     vm.invoke(new SerializableRunnable() {
       public void run() {
         final int pid = DUnitEnv.get().getPid();
@@ -640,8 +606,6 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
           BufferedReader bin = new BufferedReader(fr);
           String content = bin.readLine();
           assertTrue(content.equals("" + pid));
-        } catch (FileNotFoundException e) {
-          fail(e.getMessage());
         } catch (IOException e) {
           fail(e.getMessage());
         }
@@ -649,7 +613,7 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
     });
   }
 
-  protected AsyncInvocation createPersistentRegionAsync(final VM vm) {
+  private AsyncInvocation createPersistentRegionAsync(final VM vm) {
     SerializableRunnable createRegion = new SerializableRunnable("Create persistent region") {
       public void run() {
         Cache cache = getCache();
@@ -670,7 +634,7 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
         dsf = cache.createDiskStoreFactory();
         dsf.setDiskDirs(getDiskDirs(getUniqueName() + 2));
         dsf.setMaxOplogSize(1);
-        ds = dsf.create(getUniqueName() + 2);
+        dsf.create(getUniqueName() + 2);
         rf.setDiskStoreName(getUniqueName() + 2);
         rf.create("region2");
       }
@@ -678,7 +642,7 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
     return vm.invokeAsync(createRegion);
   }
 
-  protected void createOverflowRegion(final VM vm) {
+  private void createOverflowRegion(final VM vm) {
     SerializableRunnable createRegion = new SerializableRunnable("Create persistent region") {
       public void run() {
         Cache cache = getCache();
@@ -760,14 +724,14 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
       public Object call() throws Exception {
         Cache cache = getCache();
         PartitionedRegion region = (PartitionedRegion) cache.getRegion(regionName);
-        return new TreeSet<Integer>(region.getDataStore().getAllLocalBucketIds());
+        return new TreeSet<>(region.getDataStore().getAllLocalBucketIds());
       }
     };
 
     return (Set<Integer>) vm0.invoke(getBuckets);
   }
 
-  public File[] getDiskDirs(String dsName) {
+  private File[] getDiskDirs(String dsName) {
     File[] dirs = getDiskDirs();
     File[] diskStoreDirs = new File[1];
     diskStoreDirs[0] = new File(dirs[0], dsName);
@@ -775,7 +739,7 @@ public class BackupDUnitTest extends PersistentPartitionedRegionTestBase {
     return diskStoreDirs;
   }
 
-  protected DataPolicy getDataPolicy() {
+  private DataPolicy getDataPolicy() {
     return DataPolicy.PERSISTENT_PARTITION;
   }
 


[13/51] [abbrv] geode git commit: GEODE-3412: Add simple authentication flow to protobuf protocol. This now closes #707

Posted by kl...@apache.org.
GEODE-3412: Add simple authentication flow to protobuf protocol. This now closes #707

This change adds a simple username/password validation to the protobuf protocol.
It also adds a new configuration parameter to specify the type of authentication required.

Signed-off-by: Galen O'Sullivan <go...@pivotal.io>


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/a7a197d6
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/a7a197d6
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/a7a197d6

Branch: refs/heads/feature/GEODE-1279
Commit: a7a197d633a20ee3a2161d47389581858745c1cc
Parents: 190cfed
Author: Brian Rowe <br...@pivotal.io>
Authored: Thu Aug 10 11:16:25 2017 -0700
Committer: Udo Kohlmeyer <uk...@pivotal.io>
Committed: Tue Aug 15 10:49:33 2017 -0700

----------------------------------------------------------------------
 .../geode/internal/cache/CacheServerImpl.java   |  10 +-
 .../cache/tier/sockets/AcceptorImpl.java        |  39 ++---
 .../GenericProtocolServerConnection.java        |  13 +-
 .../tier/sockets/ServerConnectionFactory.java   |  86 +++++++----
 .../geode/security/NoOpStreamAuthenticator.java |  45 ++++++
 .../geode/security/StreamAuthenticator.java     |  52 +++++++
 ...rg.apache.geode.security.StreamAuthenticator |   1 +
 .../tier/sockets/AcceptorImplJUnitTest.java     |  25 ++--
 .../GenericProtocolServerConnectionTest.java    |   2 +-
 .../sockets/ServerConnectionFactoryTest.java    |  53 ++++---
 .../tier/sockets/ServerConnectionTest.java      |   4 +-
 .../protobuf/ProtobufSimpleAuthenticator.java   |  63 ++++++++
 .../src/main/proto/authentication_API.proto     |  26 ++++
 .../src/main/proto/clientProtocol.proto         |   1 -
 ...rg.apache.geode.security.StreamAuthenticator |   1 +
 .../protocol/AuthenticationIntegrationTest.java | 142 +++++++++++++++++++
 .../ProtobufSimpleAuthenticatorJUnitTest.java   | 111 +++++++++++++++
 17 files changed, 584 insertions(+), 90 deletions(-)
----------------------------------------------------------------------
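The two service registration files in the diffstat hook `StreamAuthenticator` implementations into `java.util.ServiceLoader` discovery, and `ServerConnectionFactory` (shown below) keys each discovered authenticator by its `implementationID()`. A sketch of that keyed-registry pattern; since a single-file example cannot ship a `META-INF/services` resource, the discovered providers are simulated with a plain list:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Sketch of a ServiceLoader-keyed registry: each discovered implementation
// reports an ID, and connections look up an authenticator by the ID the
// client's configuration requests.
public class AuthenticatorRegistrySketch {
  interface Authenticator {
    String implementationID();
  }

  private final Map<String, Authenticator> registry = new HashMap<>();

  AuthenticatorRegistrySketch(Iterable<Authenticator> discovered) {
    // In production this iterable would be ServiceLoader.load(Authenticator.class).
    for (Authenticator a : discovered) {
      registry.put(a.implementationID(), a);
    }
  }

  public Authenticator find(String id) {
    Authenticator a = registry.get(id);
    if (a == null) {
      throw new IllegalArgumentException("No authenticator registered for: " + id);
    }
    return a;
  }

  public static String lookupDemo() {
    Authenticator noop = () -> "NOOP";
    Authenticator simple = () -> "SIMPLE";
    AuthenticatorRegistrySketch factory =
        new AuthenticatorRegistrySketch(Arrays.asList(noop, simple));
    return factory.find("SIMPLE").implementationID();
  }
}
```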


http://git-wip-us.apache.org/repos/asf/geode/blob/a7a197d6/geode-core/src/main/java/org/apache/geode/internal/cache/CacheServerImpl.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/CacheServerImpl.java b/geode-core/src/main/java/org/apache/geode/internal/cache/CacheServerImpl.java
index 7d4b6d4..bcd8b32 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/CacheServerImpl.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/CacheServerImpl.java
@@ -27,6 +27,7 @@ import java.util.List;
 import java.util.Set;
 import java.util.concurrent.atomic.AtomicInteger;
 
+import org.apache.geode.internal.cache.tier.sockets.ServerConnectionFactory;
 import org.apache.logging.log4j.Logger;
 
 import org.apache.geode.CancelCriterion;
@@ -91,6 +92,13 @@ public class CacheServerImpl extends AbstractCacheServer implements Distribution
 
   private final SecurityService securityService;
 
+  /**
+   * The server connection factory, that provides either a
+   * {@link org.apache.geode.internal.cache.tier.sockets.LegacyServerConnection} or a new
+   * {@link org.apache.geode.internal.cache.tier.sockets.GenericProtocolServerConnection}
+   */
+  private final ServerConnectionFactory serverConnectionFactory = new ServerConnectionFactory();
+
   /** The acceptor that does the actual serving */
   private volatile AcceptorImpl acceptor;
 
@@ -343,7 +351,7 @@ public class CacheServerImpl extends AbstractCacheServer implements Distribution
         getSocketBufferSize(), getMaximumTimeBetweenPings(), this.cache, getMaxConnections(),
         getMaxThreads(), getMaximumMessageCount(), getMessageTimeToLive(), this.loadMonitor,
         overflowAttributesList, this.isGatewayReceiver, this.gatewayTransportFilters,
-        this.tcpNoDelay);
+        this.tcpNoDelay, serverConnectionFactory);
 
     this.acceptor.start();
     this.advisor.handshake();

http://git-wip-us.apache.org/repos/asf/geode/blob/a7a197d6/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/AcceptorImpl.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/AcceptorImpl.java b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/AcceptorImpl.java
index d18fa6a..2e33af8 100755
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/AcceptorImpl.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/AcceptorImpl.java
@@ -303,6 +303,8 @@ public class AcceptorImpl extends Acceptor implements Runnable, CommBufferPool {
 
   private final SecurityService securityService;
 
+  private final ServerConnectionFactory serverConnectionFactory;
+
   /**
    * Initializes this acceptor thread to listen for connections on the given port.
    *
@@ -324,13 +326,15 @@ public class AcceptorImpl extends Acceptor implements Runnable, CommBufferPool {
       int socketBufferSize, int maximumTimeBetweenPings, InternalCache internalCache,
       int maxConnections, int maxThreads, int maximumMessageCount, int messageTimeToLive,
       ConnectionListener listener, List overflowAttributesList, boolean isGatewayReceiver,
-      List<GatewayTransportFilter> transportFilter, boolean tcpNoDelay) throws IOException {
+      List<GatewayTransportFilter> transportFilter, boolean tcpNoDelay,
+      ServerConnectionFactory serverConnectionFactory) throws IOException {
     this.securityService = internalCache.getSecurityService();
     this.bindHostName = calcBindHostName(internalCache, bindHostName);
     this.connectionListener = listener == null ? new ConnectionListenerAdapter() : listener;
     this.notifyBySubscription = notifyBySubscription;
     this.isGatewayReceiver = isGatewayReceiver;
     this.gatewayTransportFilters = transportFilter;
+    this.serverConnectionFactory = serverConnectionFactory;
     {
       int tmp_maxConnections = maxConnections;
       if (tmp_maxConnections < MINIMUM_MAX_CONNECTIONS) {
@@ -1243,13 +1247,13 @@ public class AcceptorImpl extends Acceptor implements Runnable, CommBufferPool {
 
       crHelper.checkCancelInProgress(null); // throws
 
-      Socket s = null;
+      Socket socket = null;
       try {
-        s = serverSock.accept();
+        socket = serverSock.accept();
         crHelper.checkCancelInProgress(null); // throws
 
         // Optionally enable SO_KEEPALIVE in the OS network protocol.
-        s.setKeepAlive(SocketCreator.ENABLE_TCP_KEEP_ALIVE);
+        socket.setKeepAlive(SocketCreator.ENABLE_TCP_KEEP_ALIVE);
 
         // The synchronization below was added to prevent close from being
         // called
@@ -1265,22 +1269,22 @@ public class AcceptorImpl extends Acceptor implements Runnable, CommBufferPool {
 
         synchronized (this.syncLock) {
           if (!isRunning()) {
-            closeSocket(s);
+            closeSocket(socket);
             break;
           }
         }
         this.loggedAcceptError = false;
 
-        handOffNewClientConnection(s);
+        handOffNewClientConnection(socket, serverConnectionFactory);
       } catch (InterruptedIOException e) { // Solaris only
-        closeSocket(s);
+        closeSocket(socket);
         if (isRunning()) {
           if (logger.isDebugEnabled()) {
             logger.debug("Aborted due to interrupt: {}", e);
           }
         }
       } catch (IOException e) {
-        closeSocket(s);
+        closeSocket(socket);
         if (isRunning()) {
           if (!this.loggedAcceptError) {
             this.loggedAcceptError = true;
@@ -1291,10 +1295,10 @@ public class AcceptorImpl extends Acceptor implements Runnable, CommBufferPool {
           // try {Thread.sleep(3000);} catch (InterruptedException ie) {}
         }
       } catch (CancelException e) {
-        closeSocket(s);
+        closeSocket(socket);
         throw e;
       } catch (Exception e) {
-        closeSocket(s);
+        closeSocket(socket);
         if (isRunning()) {
           logger.fatal(LocalizedMessage
               .create(LocalizedStrings.AcceptorImpl_CACHE_SERVER_UNEXPECTED_EXCEPTION, e));
@@ -1303,20 +1307,20 @@ public class AcceptorImpl extends Acceptor implements Runnable, CommBufferPool {
     }
   }
 
-
   /**
    * Hand off a new client connection to the thread pool that processes handshakes. If all the
    * threads in this pool are busy then the hand off will block until a thread is available. This
    * blocking is good because it will throttle the rate at which we create new connections.
    */
-  private void handOffNewClientConnection(final Socket s) {
+  private void handOffNewClientConnection(final Socket socket,
+      final ServerConnectionFactory serverConnectionFactory) {
     try {
       this.stats.incAcceptsInProgress();
       this.hsPool.execute(new Runnable() {
         public void run() {
           boolean finished = false;
           try {
-            handleNewClientConnection(s);
+            handleNewClientConnection(socket, serverConnectionFactory);
             finished = true;
           } catch (RegionDestroyedException rde) {
             // aborted due to disconnect - bug 42273
@@ -1343,7 +1347,7 @@ public class AcceptorImpl extends Acceptor implements Runnable, CommBufferPool {
             }
           } finally {
             if (!finished) {
-              closeSocket(s);
+              closeSocket(socket);
             }
             if (isRunning()) {
               AcceptorImpl.this.stats.decAcceptsInProgress();
@@ -1352,7 +1356,7 @@ public class AcceptorImpl extends Acceptor implements Runnable, CommBufferPool {
         }
       });
     } catch (RejectedExecutionException rejected) {
-      closeSocket(s);
+      closeSocket(socket);
       if (isRunning()) {
         this.stats.decAcceptsInProgress();
         logger.warn(LocalizedMessage.create(LocalizedStrings.AcceptorImpl_UNEXPECTED, rejected));
@@ -1389,7 +1393,8 @@ public class AcceptorImpl extends Acceptor implements Runnable, CommBufferPool {
     return this.clientServerCnxCount.get();
   }
 
-  protected void handleNewClientConnection(final Socket socket) throws IOException {
+  protected void handleNewClientConnection(final Socket socket,
+      final ServerConnectionFactory serverConnectionFactory) throws IOException {
     // Read the first byte. If this socket is being used for 'client to server'
     // communication, create a ServerConnection. If this socket is being used
     // for 'server to client' communication, send it to the CacheClientNotifier
@@ -1468,7 +1473,7 @@ public class AcceptorImpl extends Acceptor implements Runnable, CommBufferPool {
       }
     }
 
-    ServerConnection serverConn = ServerConnectionFactory.makeServerConnection(socket, this.cache,
+    ServerConnection serverConn = serverConnectionFactory.makeServerConnection(socket, this.cache,
         this.crHelper, this.stats, AcceptorImpl.handShakeTimeout, this.socketBufferSize,
         communicationModeStr, communicationMode, this, this.securityService);
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a7a197d6/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnection.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnection.java b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnection.java
index 76b3b7e..7c8fb5c 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnection.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnection.java
@@ -19,6 +19,7 @@ import org.apache.geode.internal.cache.InternalCache;
 import org.apache.geode.internal.cache.tier.Acceptor;
 import org.apache.geode.internal.cache.tier.CachedRegionHelper;
 import org.apache.geode.internal.security.SecurityService;
+import org.apache.geode.security.SecurityManager;
 
 import java.io.IOException;
 import java.io.InputStream;
@@ -31,6 +32,8 @@ import java.net.Socket;
 public class GenericProtocolServerConnection extends ServerConnection {
   // The new protocol lives in a separate module and gets loaded when this class is instantiated.
   private final ClientProtocolMessageHandler messageHandler;
+  private final SecurityManager securityManager;
+  private final StreamAuthenticator authenticator;
 
   /**
    * Creates a new <code>GenericProtocolServerConnection</code> that processes messages received
@@ -39,10 +42,12 @@ public class GenericProtocolServerConnection extends ServerConnection {
   public GenericProtocolServerConnection(Socket s, InternalCache c, CachedRegionHelper helper,
       CacheServerStats stats, int hsTimeout, int socketBufferSize, String communicationModeStr,
       byte communicationMode, Acceptor acceptor, ClientProtocolMessageHandler newClientProtocol,
-      SecurityService securityService) {
+      SecurityService securityService, StreamAuthenticator authenticator) {
     super(s, c, helper, stats, hsTimeout, socketBufferSize, communicationModeStr, communicationMode,
         acceptor, securityService);
+    securityManager = securityService.getSecurityManager();
     this.messageHandler = newClientProtocol;
+    this.authenticator = authenticator;
   }
 
   @Override
@@ -52,7 +57,11 @@ public class GenericProtocolServerConnection extends ServerConnection {
       InputStream inputStream = socket.getInputStream();
       OutputStream outputStream = socket.getOutputStream();
 
-      messageHandler.receiveMessage(inputStream, outputStream, this.getCache());
+      if (!authenticator.isAuthenticated()) {
+        authenticator.receiveMessage(inputStream, outputStream, securityManager);
+      } else {
+        messageHandler.receiveMessage(inputStream, outputStream, this.getCache());
+      }
     } catch (IOException e) {
       logger.warn(e);
       this.setFlagProcessMessagesAsFalse(); // TODO: better shutdown.

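`GenericProtocolServerConnection` above gates every incoming message: until the `StreamAuthenticator` reports success, bytes on the socket go to the authenticator; afterwards they flow to the protocol's `ClientProtocolMessageHandler`. A stripped-down sketch of that dispatch, where the nested interfaces and the `user:secret` credential string are hypothetical stand-ins for the real stream-based API:

```java
// Minimal sketch of the authenticate-then-dispatch loop, with string messages
// in place of the input/output streams the real connection reads and writes.
public class AuthGatedDispatchSketch {
  interface Authenticator {
    boolean isAuthenticated();
    String receiveMessage(String msg); // consumes credentials
  }

  interface Handler {
    String receiveMessage(String msg); // handles real protocol traffic
  }

  static class SimpleAuthenticator implements Authenticator {
    private boolean authenticated;

    public boolean isAuthenticated() {
      return authenticated;
    }

    public String receiveMessage(String msg) {
      authenticated = "user:secret".equals(msg); // hypothetical credential check
      return authenticated ? "AUTH_OK" : "AUTH_FAIL";
    }
  }

  public static String process(Authenticator auth, Handler handler, String msg) {
    // Every message is gated: credentials first, protocol traffic only after.
    return auth.isAuthenticated() ? handler.receiveMessage(msg) : auth.receiveMessage(msg);
  }
}
```

Because the check happens per message, a client that never authenticates can keep sending credentials, but no protocol message ever reaches the cache handler.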
http://git-wip-us.apache.org/repos/asf/geode/blob/a7a197d6/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionFactory.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionFactory.java b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionFactory.java
index ad13b78..1d53297 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionFactory.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionFactory.java
@@ -22,59 +22,89 @@ import org.apache.geode.internal.security.SecurityService;
 
 import java.io.IOException;
 import java.net.Socket;
+import java.util.HashMap;
 import java.util.Iterator;
+import java.util.Map;
 import java.util.ServiceLoader;
-import javax.management.ServiceNotFoundException;
 
 /**
  * Creates instances of ServerConnection based on the connection mode provided.
  */
 public class ServerConnectionFactory {
-  private static ClientProtocolMessageHandler protobufProtocolHandler;
-  private static final Object protocolLoadLock = new Object();
+  private ClientProtocolMessageHandler protobufProtocolHandler;
+  private Map<String, Class<? extends StreamAuthenticator>> authenticators = null;
 
-  private static ClientProtocolMessageHandler findClientProtocolMessageHandler() {
+  public ServerConnectionFactory() {}
+
+  private synchronized void initializeAuthenticatorsMap() {
+    if (authenticators != null) {
+      return;
+    }
+    authenticators = new HashMap<>();
+    ServiceLoader<StreamAuthenticator> loader = ServiceLoader.load(StreamAuthenticator.class);
+    for (StreamAuthenticator streamAuthenticator : loader) {
+      authenticators.put(streamAuthenticator.implementationID(), streamAuthenticator.getClass());
+    }
+  }
+
+  private synchronized ClientProtocolMessageHandler initializeMessageHandler() {
     if (protobufProtocolHandler != null) {
       return protobufProtocolHandler;
     }
+    ServiceLoader<ClientProtocolMessageHandler> loader =
+        ServiceLoader.load(ClientProtocolMessageHandler.class);
+    Iterator<ClientProtocolMessageHandler> iterator = loader.iterator();
 
-    synchronized (protocolLoadLock) {
-      if (protobufProtocolHandler != null) {
-        return protobufProtocolHandler;
-      }
-
-      ServiceLoader<ClientProtocolMessageHandler> loader =
-          ServiceLoader.load(ClientProtocolMessageHandler.class);
-      Iterator<ClientProtocolMessageHandler> iterator = loader.iterator();
-
-      if (!iterator.hasNext()) {
-        throw new ServiceLoadingFailureException(
-            "ClientProtocolMessageHandler implementation not found in JVM");
-      }
+    if (!iterator.hasNext()) {
+      throw new ServiceLoadingFailureException(
+          "There is no ClientProtocolMessageHandler implementation found in JVM");
+    }
 
-      ClientProtocolMessageHandler returnValue = iterator.next();
+    protobufProtocolHandler = iterator.next();
+    return protobufProtocolHandler;
+  }
 
-      if (iterator.hasNext()) {
+  private StreamAuthenticator findStreamAuthenticator(String implementationID) {
+    if (authenticators == null) {
+      initializeAuthenticatorsMap();
+    }
+    Class<? extends StreamAuthenticator> streamAuthenticatorClass =
+        authenticators.get(implementationID);
+    if (streamAuthenticatorClass == null) {
+      throw new ServiceLoadingFailureException(
+          "Could not find implementation for StreamAuthenticator with implementation ID "
+              + implementationID);
+    } else {
+      try {
+        return streamAuthenticatorClass.newInstance();
+      } catch (InstantiationException | IllegalAccessException e) {
         throw new ServiceLoadingFailureException(
-            "Multiple service implementations found for ClientProtocolMessageHandler");
+            "Unable to instantiate authenticator for ID " + implementationID, e);
       }
+    }
+  }
 
-      return returnValue;
+  private ClientProtocolMessageHandler getClientProtocolMessageHandler() {
+    if (protobufProtocolHandler == null) {
+      initializeMessageHandler();
     }
+    return protobufProtocolHandler;
   }
 
-  public static ServerConnection makeServerConnection(Socket s, InternalCache c,
-      CachedRegionHelper helper, CacheServerStats stats, int hsTimeout, int socketBufferSize,
-      String communicationModeStr, byte communicationMode, Acceptor acceptor,
-      SecurityService securityService) throws IOException {
+  public ServerConnection makeServerConnection(Socket s, InternalCache c, CachedRegionHelper helper,
+      CacheServerStats stats, int hsTimeout, int socketBufferSize, String communicationModeStr,
+      byte communicationMode, Acceptor acceptor, SecurityService securityService)
+      throws IOException {
     if (communicationMode == Acceptor.PROTOBUF_CLIENT_SERVER_PROTOCOL) {
       if (!Boolean.getBoolean("geode.feature-protobuf-protocol")) {
         throw new IOException("Acceptor received unknown communication mode: " + communicationMode);
       } else {
-        protobufProtocolHandler = findClientProtocolMessageHandler();
+        String authenticationMode =
+            System.getProperty("geode.protocol-authentication-mode", "NOOP");
+
         return new GenericProtocolServerConnection(s, c, helper, stats, hsTimeout, socketBufferSize,
-            communicationModeStr, communicationMode, acceptor, protobufProtocolHandler,
-            securityService);
+            communicationModeStr, communicationMode, acceptor, getClientProtocolMessageHandler(),
+            securityService, findStreamAuthenticator(authenticationMode));
       }
     } else {
       return new LegacyServerConnection(s, c, helper, stats, hsTimeout, socketBufferSize,

http://git-wip-us.apache.org/repos/asf/geode/blob/a7a197d6/geode-core/src/main/java/org/apache/geode/security/NoOpStreamAuthenticator.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/security/NoOpStreamAuthenticator.java b/geode-core/src/main/java/org/apache/geode/security/NoOpStreamAuthenticator.java
new file mode 100644
index 0000000..bca1ec2
--- /dev/null
+++ b/geode-core/src/main/java/org/apache/geode/security/NoOpStreamAuthenticator.java
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.cache.tier.sockets;
+
+import org.apache.geode.security.SecurityManager;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+/**
+ * An implementation of {@link StreamAuthenticator} that performs no message exchange and always
+ * reports the connection as authenticated.
+ */
+public class NoOpStreamAuthenticator implements StreamAuthenticator {
+
+
+  @Override
+  public void receiveMessage(InputStream inputStream, OutputStream outputStream,
+      SecurityManager securityManager) throws IOException {
+    // this method needs to do nothing as it is a pass-through implementation
+  }
+
+  @Override
+  public boolean isAuthenticated() {
+    return true;
+  }
+
+  @Override
+  public String implementationID() {
+    return "NOOP";
+  }
+}
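ServerConnectionFactory (earlier in this commit) discovers implementations like the one above through ServiceLoader and keys them by implementationID(). That registry pattern can be sketched with a plain map standing in for ServiceLoader; the names here are hypothetical, not Geode's:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch of the ID-keyed registry ServerConnectionFactory builds:
// each discovered authenticator self-reports a key ("NOOP", "SIMPLE", ...),
// lookups produce a fresh instance, and an unknown key fails loudly.
public class AuthRegistryDemo {
  interface StreamAuth {
    String implementationID();
  }

  private final Map<String, Supplier<StreamAuth>> registry = new HashMap<>();

  // Stands in for the ServiceLoader pass that maps ID -> implementation.
  void register(Supplier<StreamAuth> factory) {
    registry.put(factory.get().implementationID(), factory);
  }

  // Stands in for findStreamAuthenticator: a fresh instance per connection,
  // analogous to streamAuthenticatorClass.newInstance().
  StreamAuth find(String id) {
    Supplier<StreamAuth> factory = registry.get(id);
    if (factory == null) {
      throw new IllegalArgumentException("No StreamAuthenticator for ID " + id);
    }
    return factory.get();
  }

  public static void main(String[] args) {
    AuthRegistryDemo demo = new AuthRegistryDemo();
    demo.register(() -> () -> "NOOP"); // StreamAuth is a functional interface
    System.out.println(demo.find("NOOP").implementationID()); // prints "NOOP"
  }
}
```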

http://git-wip-us.apache.org/repos/asf/geode/blob/a7a197d6/geode-core/src/main/java/org/apache/geode/security/StreamAuthenticator.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/security/StreamAuthenticator.java b/geode-core/src/main/java/org/apache/geode/security/StreamAuthenticator.java
new file mode 100644
index 0000000..51cbf2e
--- /dev/null
+++ b/geode-core/src/main/java/org/apache/geode/security/StreamAuthenticator.java
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.internal.cache.tier.sockets;
+
+import org.apache.geode.security.SecurityManager;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+/**
+ * Implementers of this interface do some message passing over a socket to authenticate a client,
+ * then hand off the connection to the protocol that will talk on the socket.
+ *
+ * If authentication fails, an implementor may continue to wait for another valid authentication
+ * exchange.
+ */
+public interface StreamAuthenticator {
+  /**
+   *
+   * @param inputStream to read auth messages from.
+   * @param outputStream to send messages to.
+   * @param securityManager used to validate the received credentials.
+   * @throws IOException if EOF or if invalid input is received.
+   */
+  void receiveMessage(InputStream inputStream, OutputStream outputStream,
+      SecurityManager securityManager) throws IOException;
+
+  /**
+   * Until authentication is complete, isAuthenticated() must return false, and the socket will
+   * always be passed to the StreamAuthenticator. Once authentication succeeds, calls to this
+   * function must always return true.
+   */
+  boolean isAuthenticated();
+
+  /**
+   * @return a unique identifier for this particular implementation (NOOP, PASSTHROUGH, etc.)
+   */
+  String implementationID();
+}
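The contract in the javadoc above (stay unauthenticated across failed exchanges, keep accepting attempts, latch true on success) can be illustrated with a small stateful sketch. The token value and class name are hypothetical, not part of Geode:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Hypothetical illustration of the StreamAuthenticator contract: failed
// exchanges leave isAuthenticated() false and the implementation keeps
// waiting for another attempt; a success latches it true permanently.
public class RetryingAuthDemo {
  private boolean authenticated;

  void receiveMessage(InputStream in, OutputStream out) throws IOException {
    int token = in.read();
    if (token == -1) {
      throw new EOFException();
    }
    if (token == 42) {      // pretend 42 is the only valid credential
      authenticated = true; // latch: must never flip back to false
      out.write(1);
    } else {
      out.write(0);         // reject, but remain ready for another attempt
    }
  }

  boolean isAuthenticated() {
    return authenticated;
  }

  public static void main(String[] args) throws IOException {
    RetryingAuthDemo auth = new RetryingAuthDemo();
    InputStream in = new ByteArrayInputStream(new byte[] {7, 42});
    OutputStream replies = new ByteArrayOutputStream();
    auth.receiveMessage(in, replies);           // invalid credential
    System.out.println(auth.isAuthenticated()); // prints "false"
    auth.receiveMessage(in, replies);           // valid credential
    System.out.println(auth.isAuthenticated()); // prints "true"
  }
}
```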

http://git-wip-us.apache.org/repos/asf/geode/blob/a7a197d6/geode-core/src/main/resources/META-INF/services/org.apache.geode.security.StreamAuthenticator
----------------------------------------------------------------------
diff --git a/geode-core/src/main/resources/META-INF/services/org.apache.geode.security.StreamAuthenticator b/geode-core/src/main/resources/META-INF/services/org.apache.geode.security.StreamAuthenticator
new file mode 100644
index 0000000..3b93815
--- /dev/null
+++ b/geode-core/src/main/resources/META-INF/services/org.apache.geode.security.StreamAuthenticator
@@ -0,0 +1 @@
+org.apache.geode.security.NoOpStreamAuthenticator
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/geode/blob/a7a197d6/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/AcceptorImplJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/AcceptorImplJUnitTest.java b/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/AcceptorImplJUnitTest.java
index 1fe5980..6c46eff 100644
--- a/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/AcceptorImplJUnitTest.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/AcceptorImplJUnitTest.java
@@ -14,11 +14,6 @@
  */
 package org.apache.geode.internal.cache.tier.sockets;
 
-import static org.apache.geode.distributed.ConfigurationProperties.MCAST_PORT;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
-
 import org.apache.geode.cache.CacheException;
 import org.apache.geode.cache.CacheFactory;
 import org.apache.geode.cache.server.CacheServer;
@@ -40,6 +35,11 @@ import java.net.BindException;
 import java.util.Collections;
 import java.util.Properties;
 
+import static org.apache.geode.distributed.ConfigurationProperties.MCAST_PORT;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
 @Category({IntegrationTest.class, ClientServerTest.class})
 public class AcceptorImplJUnitTest {
 
@@ -74,12 +74,14 @@ public class AcceptorImplJUnitTest {
       int port2 = freeTCPPorts[1];
 
 
+      ServerConnectionFactory serverConnectionFactory = new ServerConnectionFactory();
       try {
         new AcceptorImpl(port1, null, false, CacheServer.DEFAULT_SOCKET_BUFFER_SIZE,
             CacheServer.DEFAULT_MAXIMUM_TIME_BETWEEN_PINGS, this.cache,
             AcceptorImpl.MINIMUM_MAX_CONNECTIONS - 1, CacheServer.DEFAULT_MAX_THREADS,
             CacheServer.DEFAULT_MAXIMUM_MESSAGE_COUNT, CacheServer.DEFAULT_MESSAGE_TIME_TO_LIVE,
-            null, null, false, Collections.EMPTY_LIST, CacheServer.DEFAULT_TCP_NO_DELAY);
+            null, null, false, Collections.EMPTY_LIST, CacheServer.DEFAULT_TCP_NO_DELAY,
+            serverConnectionFactory);
         fail("Expected an IllegalArgumentExcption due to max conns < min pool size");
       } catch (IllegalArgumentException expected) {
       }
@@ -89,7 +91,7 @@ public class AcceptorImplJUnitTest {
             CacheServer.DEFAULT_MAXIMUM_TIME_BETWEEN_PINGS, this.cache, 0,
             CacheServer.DEFAULT_MAX_THREADS, CacheServer.DEFAULT_MAXIMUM_MESSAGE_COUNT,
             CacheServer.DEFAULT_MESSAGE_TIME_TO_LIVE, null, null, false, Collections.EMPTY_LIST,
-            CacheServer.DEFAULT_TCP_NO_DELAY);
+            CacheServer.DEFAULT_TCP_NO_DELAY, serverConnectionFactory);
         fail("Expected an IllegalArgumentExcption due to max conns of zero");
       } catch (IllegalArgumentException expected) {
       }
@@ -99,12 +101,14 @@ public class AcceptorImplJUnitTest {
             CacheServer.DEFAULT_MAXIMUM_TIME_BETWEEN_PINGS, this.cache,
             AcceptorImpl.MINIMUM_MAX_CONNECTIONS, CacheServer.DEFAULT_MAX_THREADS,
             CacheServer.DEFAULT_MAXIMUM_MESSAGE_COUNT, CacheServer.DEFAULT_MESSAGE_TIME_TO_LIVE,
-            null, null, false, Collections.EMPTY_LIST, CacheServer.DEFAULT_TCP_NO_DELAY);
+            null, null, false, Collections.EMPTY_LIST, CacheServer.DEFAULT_TCP_NO_DELAY,
+            serverConnectionFactory);
         a2 = new AcceptorImpl(port1, null, false, CacheServer.DEFAULT_SOCKET_BUFFER_SIZE,
             CacheServer.DEFAULT_MAXIMUM_TIME_BETWEEN_PINGS, this.cache,
             AcceptorImpl.MINIMUM_MAX_CONNECTIONS, CacheServer.DEFAULT_MAX_THREADS,
             CacheServer.DEFAULT_MAXIMUM_MESSAGE_COUNT, CacheServer.DEFAULT_MESSAGE_TIME_TO_LIVE,
-            null, null, false, Collections.EMPTY_LIST, CacheServer.DEFAULT_TCP_NO_DELAY);
+            null, null, false, Collections.EMPTY_LIST, CacheServer.DEFAULT_TCP_NO_DELAY,
+            serverConnectionFactory);
         fail("Expecetd a BindException while attaching to the same port");
       } catch (BindException expected) {
       }
@@ -113,7 +117,8 @@ public class AcceptorImplJUnitTest {
           CacheServer.DEFAULT_MAXIMUM_TIME_BETWEEN_PINGS, this.cache,
           AcceptorImpl.MINIMUM_MAX_CONNECTIONS, CacheServer.DEFAULT_MAX_THREADS,
           CacheServer.DEFAULT_MAXIMUM_MESSAGE_COUNT, CacheServer.DEFAULT_MESSAGE_TIME_TO_LIVE, null,
-          null, false, Collections.EMPTY_LIST, CacheServer.DEFAULT_TCP_NO_DELAY);
+          null, false, Collections.EMPTY_LIST, CacheServer.DEFAULT_TCP_NO_DELAY,
+          serverConnectionFactory);
       assertEquals(port2, a3.getPort());
       InternalDistributedSystem isystem =
           (InternalDistributedSystem) this.cache.getDistributedSystem();

http://git-wip-us.apache.org/repos/asf/geode/blob/a7a197d6/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnectionTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnectionTest.java b/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnectionTest.java
index 3bfcd8b..3dcf343 100644
--- a/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnectionTest.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnectionTest.java
@@ -57,6 +57,6 @@ public class GenericProtocolServerConnectionTest {
     return new GenericProtocolServerConnection(socketMock, mock(InternalCache.class),
         mock(CachedRegionHelper.class), mock(CacheServerStats.class), 0, 0, "",
         Acceptor.PROTOBUF_CLIENT_SERVER_PROTOCOL, mock(AcceptorImpl.class), clientProtocolMock,
-        mock(SecurityService.class));
+        mock(SecurityService.class), new NoOpStreamAuthenticator());
   }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/a7a197d6/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionFactoryTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionFactoryTest.java b/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionFactoryTest.java
index b3c3e32..cffa05f 100644
--- a/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionFactoryTest.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionFactoryTest.java
@@ -15,13 +15,14 @@
 
 package org.apache.geode.internal.cache.tier.sockets;
 
-import org.apache.geode.internal.Assert;
 import org.apache.geode.internal.cache.InternalCache;
 import org.apache.geode.internal.cache.tier.Acceptor;
 import org.apache.geode.internal.cache.tier.CachedRegionHelper;
 import org.apache.geode.internal.security.SecurityService;
 import org.apache.geode.test.junit.categories.UnitTest;
+import org.junit.Rule;
 import org.junit.Test;
+import org.junit.contrib.java.lang.system.RestoreSystemProperties;
 import org.junit.experimental.categories.Category;
 
 import java.io.IOException;
@@ -36,18 +37,22 @@ import static org.mockito.Mockito.when;
  * We don't test the path where the service providing protobufProtocolHandler is actually present,
  * because it lives outside this module, and all the integration tests from that module will test
  * the new client protocol happy path.
- *
+ * <p>
  * What we are concerned with is making sure that everything stays the same when the feature flag
  * isn't set, and that we at least try to load the service when the feature flag is true.
  */
 @Category(UnitTest.class)
 public class ServerConnectionFactoryTest {
+
+  @Rule
+  public RestoreSystemProperties restoreSystemProperties = new RestoreSystemProperties();
+
   /**
    * Safeguard that we won't create the new client protocol object unless the feature flag is
    * enabled.
    */
   @Test(expected = IOException.class)
-  public void newClientProtocolFailsWithoutSystemPropertySet() throws Exception {
+  public void newClientProtocolFailsWithoutSystemPropertySet() throws IOException {
     ServerConnection serverConnection =
         serverConnectionMockedExceptForCommunicationMode(Acceptor.PROTOBUF_CLIENT_SERVER_PROTOCOL);
 
@@ -58,14 +63,10 @@ public class ServerConnectionFactoryTest {
    *         module, and when this unit test is run, that module won't be present.
    */
   @Test(expected = ServiceLoadingFailureException.class)
-  public void newClientProtocolFailsWithSystemPropertySet() throws Exception {
-    try {
-      System.setProperty("geode.feature-protobuf-protocol", "true");
-      ServerConnection serverConnection = serverConnectionMockedExceptForCommunicationMode(
-          Acceptor.PROTOBUF_CLIENT_SERVER_PROTOCOL);
-    } finally {
-      System.clearProperty("geode.feature-protobuf-protocol");
-    }
+  public void newClientProtocolFailsWithSystemPropertySet() throws IOException {
+    System.setProperty("geode.feature-protobuf-protocol", "true");
+    ServerConnection serverConnection =
+        serverConnectionMockedExceptForCommunicationMode(Acceptor.PROTOBUF_CLIENT_SERVER_PROTOCOL);
   }
 
   @Test
@@ -86,29 +87,25 @@ public class ServerConnectionFactoryTest {
   @Test
   public void makeServerConnectionForOldProtocolWithFeatureFlagEnabled() throws IOException {
     System.setProperty("geode.feature-protobuf-protocol", "true");
-    try {
-      byte[] communicationModes =
-          new byte[] {Acceptor.CLIENT_TO_SERVER, Acceptor.PRIMARY_SERVER_TO_CLIENT,
-              Acceptor.SECONDARY_SERVER_TO_CLIENT, Acceptor.GATEWAY_TO_GATEWAY,
-              Acceptor.MONITOR_TO_SERVER, Acceptor.SUCCESSFUL_SERVER_TO_CLIENT,
-              Acceptor.UNSUCCESSFUL_SERVER_TO_CLIENT, Acceptor.CLIENT_TO_SERVER_FOR_QUEUE,};
-
-      for (byte communicationMode : communicationModes) {
-        ServerConnection serverConnection =
-            serverConnectionMockedExceptForCommunicationMode(communicationMode);
-        assertTrue(serverConnection instanceof LegacyServerConnection);
-      }
-    } finally {
-      System.clearProperty("geode.feature-protobuf-protocol");
+    byte[] communicationModes =
+        new byte[] {Acceptor.CLIENT_TO_SERVER, Acceptor.PRIMARY_SERVER_TO_CLIENT,
+            Acceptor.SECONDARY_SERVER_TO_CLIENT, Acceptor.GATEWAY_TO_GATEWAY,
+            Acceptor.MONITOR_TO_SERVER, Acceptor.SUCCESSFUL_SERVER_TO_CLIENT,
+            Acceptor.UNSUCCESSFUL_SERVER_TO_CLIENT, Acceptor.CLIENT_TO_SERVER_FOR_QUEUE,};
+
+    for (byte communicationMode : communicationModes) {
+      ServerConnection serverConnection =
+          serverConnectionMockedExceptForCommunicationMode(communicationMode);
+      assertTrue(serverConnection instanceof LegacyServerConnection);
     }
   }
 
-  private static ServerConnection serverConnectionMockedExceptForCommunicationMode(
-      byte communicationMode) throws IOException {
+  private ServerConnection serverConnectionMockedExceptForCommunicationMode(byte communicationMode)
+      throws IOException {
     Socket socketMock = mock(Socket.class);
     when(socketMock.getInetAddress()).thenReturn(InetAddress.getByName("localhost"));
 
-    return ServerConnectionFactory.makeServerConnection(socketMock, mock(InternalCache.class),
+    return new ServerConnectionFactory().makeServerConnection(socketMock, mock(InternalCache.class),
         mock(CachedRegionHelper.class), mock(CacheServerStats.class), 0, 0, "", communicationMode,
         mock(AcceptorImpl.class), mock(SecurityService.class));
   }

http://git-wip-us.apache.org/repos/asf/geode/blob/a7a197d6/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionTest.java b/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionTest.java
index 7399a72..2aa8995 100644
--- a/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionTest.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionTest.java
@@ -83,8 +83,8 @@ public class ServerConnectionTest {
     InternalCache cache = mock(InternalCache.class);
     SecurityService securityService = mock(SecurityService.class);
 
-    serverConnection = ServerConnectionFactory.makeServerConnection(socket, cache, null, null, 0, 0,
-        null, Acceptor.PRIMARY_SERVER_TO_CLIENT, acceptor, securityService);
+    serverConnection = new ServerConnectionFactory().makeServerConnection(socket, cache, null, null,
+        0, 0, null, Acceptor.PRIMARY_SERVER_TO_CLIENT, acceptor, securityService);
     MockitoAnnotations.initMocks(this);
   }
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a7a197d6/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufSimpleAuthenticator.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufSimpleAuthenticator.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufSimpleAuthenticator.java
new file mode 100644
index 0000000..59c61e2
--- /dev/null
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufSimpleAuthenticator.java
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.protocol.protobuf;
+
+import org.apache.geode.internal.cache.tier.sockets.StreamAuthenticator;
+import org.apache.geode.security.AuthenticationFailedException;
+import org.apache.geode.security.SecurityManager;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.Properties;
+
+public class ProtobufSimpleAuthenticator implements StreamAuthenticator {
+  private boolean authenticated;
+
+  @Override
+  public void receiveMessage(InputStream inputStream, OutputStream outputStream,
+      SecurityManager securityManager) throws IOException {
+    AuthenticationAPI.SimpleAuthenticationRequest authenticationRequest =
+        AuthenticationAPI.SimpleAuthenticationRequest.parseDelimitedFrom(inputStream);
+    if (authenticationRequest == null) {
+      throw new EOFException();
+    }
+
+    Properties properties = new Properties();
+    properties.setProperty("username", authenticationRequest.getUsername());
+    properties.setProperty("password", authenticationRequest.getPassword());
+
+    try {
+      Object principal = securityManager.authenticate(properties);
+      authenticated = principal != null;
+    } catch (AuthenticationFailedException e) {
+      authenticated = false;
+    }
+
+    AuthenticationAPI.SimpleAuthenticationResponse.newBuilder().setAuthenticated(authenticated)
+        .build().writeDelimitedTo(outputStream);
+  }
+
+  @Override
+  public boolean isAuthenticated() {
+    return authenticated;
+  }
+
+  @Override
+  public String implementationID() {
+    return "SIMPLE";
+  }
+}
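The core decision in the class above is how credentials map to an authentication result: the request's fields become a Properties object, and only a non-null principal from the security check (no exception, no null) counts as authenticated. A stdlib-only sketch of that flow, with a hypothetical Checker interface standing in for Geode's SecurityManager:

```java
import java.util.Properties;

// Hypothetical sketch of ProtobufSimpleAuthenticator's credential flow.
public class SimpleAuthFlowDemo {
  interface Checker {
    Object authenticate(Properties credentials); // stands in for SecurityManager
  }

  static boolean authenticate(String user, String pass, Checker checker) {
    Properties props = new Properties();
    props.setProperty("username", user);
    props.setProperty("password", pass);
    try {
      // Only a non-null principal means success.
      return checker.authenticate(props) != null;
    } catch (RuntimeException failed) { // stands in for AuthenticationFailedException
      return false;
    }
  }

  // A demo checker accepting exactly one username/password pair.
  static Checker demoChecker() {
    return creds -> "bob".equals(creds.getProperty("username"))
        && "bobspassword".equals(creds.getProperty("password")) ? new Object() : null;
  }

  public static void main(String[] args) {
    System.out.println(authenticate("bob", "bobspassword", demoChecker())); // prints "true"
    System.out.println(authenticate("bob", "wrong", demoChecker()));        // prints "false"
  }
}
```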

http://git-wip-us.apache.org/repos/asf/geode/blob/a7a197d6/geode-protobuf/src/main/proto/authentication_API.proto
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/proto/authentication_API.proto b/geode-protobuf/src/main/proto/authentication_API.proto
new file mode 100644
index 0000000..0e651bd
--- /dev/null
+++ b/geode-protobuf/src/main/proto/authentication_API.proto
@@ -0,0 +1,26 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+syntax = "proto3";
+package org.apache.geode.protocol.protobuf;
+
+message SimpleAuthenticationRequest {
+    string username = 1;
+    string password = 2;
+}
+
+message SimpleAuthenticationResponse {
+    bool authenticated = 1;
+}
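These request/response messages are exchanged over the raw socket with protobuf's length-delimited framing (the writeDelimitedTo/parseDelimitedFrom calls in ProtobufSimpleAuthenticator): each message is prefixed with its byte length so consecutive messages can be read back unambiguously. A stdlib-only sketch of that framing, simplified to a single length byte where protobuf uses a varint:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Hypothetical sketch of length-delimited framing on a stream.
public class DelimitedFramingDemo {
  static void writeDelimited(OutputStream out, byte[] msg) throws IOException {
    out.write(msg.length); // assumes msg.length < 128 so one byte suffices
    out.write(msg);
  }

  static byte[] readDelimited(InputStream in) throws IOException {
    int len = in.read();
    if (len == -1) {
      throw new EOFException();
    }
    byte[] msg = new byte[len];
    // Single read is enough for an in-memory stream; a real socket
    // would need to loop until len bytes arrive.
    if (in.read(msg) != len) {
      throw new EOFException();
    }
    return msg;
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream wire = new ByteArrayOutputStream();
    writeDelimited(wire, "auth-request".getBytes());
    writeDelimited(wire, "auth-response".getBytes());
    InputStream in = new ByteArrayInputStream(wire.toByteArray());
    System.out.println(new String(readDelimited(in))); // prints "auth-request"
    System.out.println(new String(readDelimited(in))); // prints "auth-response"
  }
}
```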

http://git-wip-us.apache.org/repos/asf/geode/blob/a7a197d6/geode-protobuf/src/main/proto/clientProtocol.proto
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/proto/clientProtocol.proto b/geode-protobuf/src/main/proto/clientProtocol.proto
index 8203c43..91783b2 100644
--- a/geode-protobuf/src/main/proto/clientProtocol.proto
+++ b/geode-protobuf/src/main/proto/clientProtocol.proto
@@ -56,7 +56,6 @@ message Request {
         GetAvailableServersRequest getAvailableServersRequest = 42;
         GetRegionNamesRequest getRegionNamesRequest = 43;
         GetRegionRequest getRegionRequest = 44;
-
     }
 }
 

http://git-wip-us.apache.org/repos/asf/geode/blob/a7a197d6/geode-protobuf/src/main/resources/META-INF/services/org.apache.geode.security.StreamAuthenticator
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/resources/META-INF/services/org.apache.geode.security.StreamAuthenticator b/geode-protobuf/src/main/resources/META-INF/services/org.apache.geode.security.StreamAuthenticator
new file mode 100644
index 0000000..45e4eea
--- /dev/null
+++ b/geode-protobuf/src/main/resources/META-INF/services/org.apache.geode.security.StreamAuthenticator
@@ -0,0 +1 @@
+org.apache.geode.protocol.protobuf.ProtobufSimpleAuthenticator
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/geode/blob/a7a197d6/geode-protobuf/src/test/java/org/apache/geode/protocol/AuthenticationIntegrationTest.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/test/java/org/apache/geode/protocol/AuthenticationIntegrationTest.java b/geode-protobuf/src/test/java/org/apache/geode/protocol/AuthenticationIntegrationTest.java
new file mode 100644
index 0000000..794375e
--- /dev/null
+++ b/geode-protobuf/src/test/java/org/apache/geode/protocol/AuthenticationIntegrationTest.java
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.protocol;
+
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.cache.server.CacheServer;
+import org.apache.geode.distributed.internal.DistributionConfig;
+import org.apache.geode.internal.AvailablePortHelper;
+import org.apache.geode.protocol.protobuf.AuthenticationAPI;
+import org.apache.geode.protocol.protobuf.ClientProtocol;
+import org.apache.geode.protocol.protobuf.ProtobufSerializationService;
+import org.apache.geode.protocol.protobuf.RegionAPI;
+import org.apache.geode.protocol.protobuf.serializer.ProtobufProtocolSerializer;
+import org.apache.geode.security.SecurityManager;
+import org.apache.geode.serialization.registry.exception.CodecAlreadyRegisteredForTypeException;
+import org.apache.geode.test.junit.categories.IntegrationTest;
+import org.awaitility.Awaitility;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.contrib.java.lang.system.RestoreSystemProperties;
+import org.junit.experimental.categories.Category;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.Socket;
+import java.util.Properties;
+import java.util.concurrent.TimeUnit;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.ArgumentMatchers.same;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+@Category(IntegrationTest.class)
+public class AuthenticationIntegrationTest {
+
+  private static final String TEST_USERNAME = "bob";
+  private static final String TEST_PASSWORD = "bobspassword";
+
+  @Rule
+  public final RestoreSystemProperties restoreSystemProperties = new RestoreSystemProperties();
+
+  private Cache cache;
+  private int cacheServerPort;
+  private CacheServer cacheServer;
+  private Socket socket;
+  private OutputStream outputStream;
+  private ProtobufSerializationService serializationService;
+  private InputStream inputStream;
+  private ProtobufProtocolSerializer protobufProtocolSerializer;
+  private Object securityPrincipal;
+  private SecurityManager mockSecurityManager;
+
+  public void setUp(String authenticationMode)
+      throws IOException, CodecAlreadyRegisteredForTypeException {
+    Properties expectedAuthProperties = new Properties();
+    expectedAuthProperties.setProperty("username", TEST_USERNAME);
+    expectedAuthProperties.setProperty("password", TEST_PASSWORD);
+
+    securityPrincipal = new Object();
+    mockSecurityManager = mock(SecurityManager.class);
+    when(mockSecurityManager.authenticate(expectedAuthProperties)).thenReturn(securityPrincipal);
+    when(mockSecurityManager.authorize(same(securityPrincipal), any())).thenReturn(true);
+
+    Properties properties = new Properties();
+    CacheFactory cacheFactory = new CacheFactory(properties);
+    cacheFactory.set("mcast-port", "0"); // ensure no multicast; other tests may leave a different default.
+
+    cacheFactory.setSecurityManager(mockSecurityManager);
+    cache = cacheFactory.create();
+
+    cacheServer = cache.addCacheServer();
+    cacheServerPort = AvailablePortHelper.getRandomAvailableTCPPort();
+    cacheServer.setPort(cacheServerPort);
+    cacheServer.start();
+
+
+    System.setProperty("geode.feature-protobuf-protocol", "true");
+    System.setProperty("geode.protocol-authentication-mode", authenticationMode);
+    socket = new Socket("localhost", cacheServerPort);
+
+    Awaitility.await().atMost(5, TimeUnit.SECONDS).until(socket::isConnected);
+    outputStream = socket.getOutputStream();
+    inputStream = socket.getInputStream();
+    outputStream.write(110);
+
+    serializationService = new ProtobufSerializationService();
+    protobufProtocolSerializer = new ProtobufProtocolSerializer();
+  }
+
+  @Test
+  public void noopAuthenticationSucceeds() throws Exception {
+    setUp("NOOP");
+    ClientProtocol.Message getRegionsMessage =
+        ClientProtocol.Message.newBuilder().setRequest(ClientProtocol.Request.newBuilder()
+            .setGetRegionNamesRequest(RegionAPI.GetRegionNamesRequest.newBuilder())).build();
+    protobufProtocolSerializer.serialize(getRegionsMessage, outputStream);
+
+    ClientProtocol.Message regionsResponse = protobufProtocolSerializer.deserialize(inputStream);
+    assertEquals(ClientProtocol.Response.ResponseAPICase.GETREGIONNAMESRESPONSE,
+        regionsResponse.getResponse().getResponseAPICase());
+  }
+
+  @Test
+  public void simpleAuthenticationSucceeds() throws Exception {
+    setUp("SIMPLE");
+    AuthenticationAPI.SimpleAuthenticationRequest authenticationRequest =
+        AuthenticationAPI.SimpleAuthenticationRequest.newBuilder().setUsername(TEST_USERNAME)
+            .setPassword(TEST_PASSWORD).build();
+    authenticationRequest.writeDelimitedTo(outputStream);
+
+    AuthenticationAPI.SimpleAuthenticationResponse authenticationResponse =
+        AuthenticationAPI.SimpleAuthenticationResponse.parseDelimitedFrom(inputStream);
+    assertTrue(authenticationResponse.getAuthenticated());
+
+    ClientProtocol.Message getRegionsMessage =
+        ClientProtocol.Message.newBuilder().setRequest(ClientProtocol.Request.newBuilder()
+            .setGetRegionNamesRequest(RegionAPI.GetRegionNamesRequest.newBuilder())).build();
+    protobufProtocolSerializer.serialize(getRegionsMessage, outputStream);
+
+    ClientProtocol.Message regionsResponse = protobufProtocolSerializer.deserialize(inputStream);
+    assertEquals(ClientProtocol.Response.ResponseAPICase.GETREGIONNAMESRESPONSE,
+        regionsResponse.getResponse().getResponseAPICase());
+
+  }
+}

http://git-wip-us.apache.org/repos/asf/geode/blob/a7a197d6/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/ProtobufSimpleAuthenticatorJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/ProtobufSimpleAuthenticatorJUnitTest.java b/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/ProtobufSimpleAuthenticatorJUnitTest.java
new file mode 100644
index 0000000..3d16f5e
--- /dev/null
+++ b/geode-protobuf/src/test/java/org/apache/geode/protocol/protobuf/ProtobufSimpleAuthenticatorJUnitTest.java
@@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.geode.protocol.protobuf;
+
+import org.apache.geode.security.AuthenticationFailedException;
+import org.apache.geode.security.ResourcePermission;
+import org.apache.geode.security.SecurityManager;
+import org.apache.geode.test.junit.categories.UnitTest;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.util.Properties;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.ArgumentMatchers.same;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+@Category(UnitTest.class)
+public class ProtobufSimpleAuthenticatorJUnitTest {
+  private static final String TEST_USERNAME = "user1";
+  private static final String TEST_PASSWORD = "hunter2";
+  private ByteArrayInputStream byteArrayInputStream; // initialized with an incoming request in
+                                                     // setUp.
+  private ByteArrayOutputStream byteArrayOutputStream;
+  private ProtobufSimpleAuthenticator protobufSimpleAuthenticator;
+  private SecurityManager mockSecurityManager;
+  private Object securityPrincipal;
+
+  @Before
+  public void setUp() throws IOException {
+    AuthenticationAPI.SimpleAuthenticationRequest basicAuthenticationRequest =
+        AuthenticationAPI.SimpleAuthenticationRequest.newBuilder().setUsername(TEST_USERNAME)
+            .setPassword(TEST_PASSWORD).build();
+
+    Properties expectedAuthProperties = new Properties();
+    expectedAuthProperties.setProperty("username", TEST_USERNAME);
+    expectedAuthProperties.setProperty("password", TEST_PASSWORD);
+
+    ByteArrayOutputStream messageStream = new ByteArrayOutputStream();
+    basicAuthenticationRequest.writeDelimitedTo(messageStream);
+    byteArrayInputStream = new ByteArrayInputStream(messageStream.toByteArray());
+    byteArrayOutputStream = new ByteArrayOutputStream();
+
+    securityPrincipal = new Object();
+    mockSecurityManager = mock(SecurityManager.class);
+    when(mockSecurityManager.authenticate(expectedAuthProperties)).thenReturn(securityPrincipal);
+    when(mockSecurityManager.authorize(same(securityPrincipal), any())).thenReturn(true);
+
+    protobufSimpleAuthenticator = new ProtobufSimpleAuthenticator();
+  }
+
+  @Test
+  public void successfulAuthentication() throws IOException {
+    assertFalse(protobufSimpleAuthenticator.isAuthenticated());
+
+    protobufSimpleAuthenticator.receiveMessage(byteArrayInputStream, byteArrayOutputStream,
+        mockSecurityManager);
+
+    AuthenticationAPI.SimpleAuthenticationResponse simpleAuthenticationResponse =
+        getSimpleAuthenticationResponse(byteArrayOutputStream);
+
+    assertTrue(simpleAuthenticationResponse.getAuthenticated());
+    assertTrue(protobufSimpleAuthenticator.isAuthenticated());
+  }
+
+  @Test
+  public void authenticationFails() throws IOException {
+    assertFalse(protobufSimpleAuthenticator.isAuthenticated());
+
+    Properties expectedAuthProperties = new Properties();
+    expectedAuthProperties.setProperty("username", TEST_USERNAME);
+    expectedAuthProperties.setProperty("password", TEST_PASSWORD);
+    when(mockSecurityManager.authenticate(expectedAuthProperties))
+        .thenThrow(new AuthenticationFailedException("BOOM!"));
+
+    protobufSimpleAuthenticator.receiveMessage(byteArrayInputStream, byteArrayOutputStream,
+        mockSecurityManager);
+
+    AuthenticationAPI.SimpleAuthenticationResponse simpleAuthenticationResponse =
+        getSimpleAuthenticationResponse(byteArrayOutputStream);
+
+    assertFalse(simpleAuthenticationResponse.getAuthenticated());
+    assertFalse(protobufSimpleAuthenticator.isAuthenticated());
+  }
+
+  private AuthenticationAPI.SimpleAuthenticationResponse getSimpleAuthenticationResponse(
+      ByteArrayOutputStream outputStream) throws IOException {
+    ByteArrayInputStream responseStream = new ByteArrayInputStream(outputStream.toByteArray());
+    return AuthenticationAPI.SimpleAuthenticationResponse.parseDelimitedFrom(responseStream);
+  }
+}


[38/51] [abbrv] geode git commit: GEODE-3434: Allow the modules to be interoperable with current and older versions of tomcat 7

Posted by kl...@apache.org.
GEODE-3434: Allow the modules to be interoperable with current and older versions of tomcat 7

  * Modified DeltaSessions to use reflection to handle attribute fields in case an earlier tomcat 7 is used
  * Modified DeltaSession7 and DeltaSession8 to extend from DeltaSession
  * Added session backward compatibility tests
  * Modified assembly build to download old product installs
  * Minor refactor of VersionManager to reuse property file load code
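
The reflection approach mentioned in the first bullet can be sketched as follows. This is a minimal illustration, not the actual DeltaSession code: different Tomcat 7 releases declare the session `attributes` field with different concrete types, so instead of compiling against one declared type, the field is looked up by name at runtime. `LegacySession` below is a hypothetical stand-in for `StandardSession`.

```java
import java.lang.reflect.Field;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for a Tomcat base session class whose
// `attributes` field type varies across releases.
class LegacySession {
  protected Map<String, Object> attributes = new ConcurrentHashMap<>();
}

public class ReflectiveAttributes {

  // Walk the class hierarchy for a field named "attributes" and return
  // its value, whatever concrete Map type the base class declared.
  @SuppressWarnings("unchecked")
  static Map<String, Object> attributesOf(Object session) {
    for (Class<?> c = session.getClass(); c != null; c = c.getSuperclass()) {
      try {
        Field f = c.getDeclaredField("attributes");
        f.setAccessible(true);
        return (Map<String, Object>) f.get(session);
      } catch (NoSuchFieldException ignored) {
        // field not declared at this level; keep walking up the hierarchy
      } catch (IllegalAccessException e) {
        throw new IllegalStateException(e);
      }
    }
    throw new IllegalStateException("no attributes field found");
  }

  public static void main(String[] args) {
    LegacySession session = new LegacySession();
    attributesOf(session).put("user", "bob");
    System.out.println(attributesOf(session).get("user")); // prints bob
  }
}
```

Because the lookup happens at runtime, one compiled module binary works against either declaration of the field, at the cost of a reflective access on each lookup.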


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/f38dff9d
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/f38dff9d
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/f38dff9d

Branch: refs/heads/feature/GEODE-1279
Commit: f38dff9d217a8808117b6fbb2e5f4021ef9c84ce
Parents: 0486700
Author: Jason Huynh <hu...@gmail.com>
Authored: Mon Aug 14 09:02:11 2017 -0700
Committer: Jason Huynh <hu...@gmail.com>
Committed: Thu Aug 17 17:00:14 2017 -0700

----------------------------------------------------------------------
 .../modules/session/catalina/DeltaSession7.java | 572 +------------------
 .../modules/session/catalina/DeltaSession8.java | 570 +-----------------
 .../modules/session/catalina/DeltaSession.java  |  50 +-
 geode-assembly/build.gradle                     |   3 +
 .../geode/session/tests/ContainerInstall.java   |  96 ++--
 .../geode/session/tests/TomcatInstall.java      |  68 +--
 ...TomcatSessionBackwardsCompatibilityTest.java | 244 ++++++++
 .../test/dunit/standalone/VersionManager.java   |  72 ++-
 .../standalone/VersionManagerJUnitTest.java     |   6 +-
 geode-old-versions/build.gradle                 |  64 ++-
 10 files changed, 495 insertions(+), 1250 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/f38dff9d/extensions/geode-modules-tomcat7/src/main/java/org/apache/geode/modules/session/catalina/DeltaSession7.java
----------------------------------------------------------------------
diff --git a/extensions/geode-modules-tomcat7/src/main/java/org/apache/geode/modules/session/catalina/DeltaSession7.java b/extensions/geode-modules-tomcat7/src/main/java/org/apache/geode/modules/session/catalina/DeltaSession7.java
index d7f30bd..f5dfbdc 100644
--- a/extensions/geode-modules-tomcat7/src/main/java/org/apache/geode/modules/session/catalina/DeltaSession7.java
+++ b/extensions/geode-modules-tomcat7/src/main/java/org/apache/geode/modules/session/catalina/DeltaSession7.java
@@ -14,88 +14,17 @@
  */
 package org.apache.geode.modules.session.catalina;
 
-import java.io.DataInput;
-import java.io.DataOutput;
-import java.io.IOException;
-import java.io.InputStream;
-import java.security.AccessController;
-import java.security.Principal;
-import java.security.PrivilegedAction;
-import java.util.ArrayList;
-import java.util.Enumeration;
-import java.util.Hashtable;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.ConcurrentMap;
-
-import javax.servlet.http.HttpSession;
-
 import org.apache.catalina.Manager;
-import org.apache.catalina.realm.GenericPrincipal;
-import org.apache.catalina.security.SecurityUtil;
-import org.apache.catalina.session.StandardSession;
-import org.apache.juli.logging.Log;
-import org.apache.juli.logging.LogFactory;
-
-import org.apache.geode.DataSerializable;
-import org.apache.geode.DataSerializer;
-import org.apache.geode.Delta;
-import org.apache.geode.InvalidDeltaException;
-import org.apache.geode.cache.Region;
-import org.apache.geode.internal.cache.lru.Sizeable;
-import org.apache.geode.internal.util.BlobHelper;
-import org.apache.geode.modules.gatewaydelta.GatewayDelta;
-import org.apache.geode.modules.gatewaydelta.GatewayDeltaEvent;
-import org.apache.geode.modules.session.catalina.internal.DeltaSessionAttributeEvent;
-import org.apache.geode.modules.session.catalina.internal.DeltaSessionAttributeEventBatch;
-import org.apache.geode.modules.session.catalina.internal.DeltaSessionDestroyAttributeEvent;
-import org.apache.geode.modules.session.catalina.internal.DeltaSessionUpdateAttributeEvent;
 
 @SuppressWarnings("serial")
-public class DeltaSession7 extends StandardSession
-    implements DataSerializable, Delta, GatewayDelta, Sizeable, DeltaSessionInterface {
-
-  private transient Region<String, HttpSession> operatingRegion;
-
-  private String sessionRegionName;
-
-  private String contextName;
-
-  private boolean hasDelta;
-
-  private boolean applyRemotely;
-
-  private boolean enableGatewayDeltaReplication;
-
-  private transient final Object changeLock = new Object();
-
-  private final List<DeltaSessionAttributeEvent> eventQueue =
-      new ArrayList<DeltaSessionAttributeEvent>();
-
-  private transient GatewayDeltaEvent currentGatewayDeltaEvent;
-
-  private transient boolean expired = false;
-
-  private transient boolean preferDeserializedForm = true;
-
-  private byte[] serializedPrincipal;
-
-  private final Log LOG = LogFactory.getLog(DeltaSession7.class.getName());
-
-  /**
-   * The string manager for this package.
-   */
-  // protected static StringManager STRING_MANAGER =
-  // StringManager.getManager("org.apache.geode.modules.session.catalina");
+public class DeltaSession7 extends DeltaSession {
 
   /**
    * Construct a new <code>Session</code> associated with no <code>Manager</code>. The
    * <code>Manager</code> will be assigned later using {@link #setOwner(Object)}.
    */
   public DeltaSession7() {
-    super(null);
+    super();
   }
 
   /**
@@ -105,503 +34,6 @@ public class DeltaSession7 extends StandardSession
    */
   public DeltaSession7(Manager manager) {
     super(manager);
-    setOwner(manager);
-  }
-
-  /**
-   * Return the <code>HttpSession</code> for which this object is the facade.
-   */
-  @SuppressWarnings("unchecked")
-  public HttpSession getSession() {
-    if (facade == null) {
-      if (SecurityUtil.isPackageProtectionEnabled()) {
-        final DeltaSession7 fsession = this;
-        facade = (DeltaSessionFacade) AccessController.doPrivileged(new PrivilegedAction() {
-          public Object run() {
-            return new DeltaSessionFacade(fsession);
-          }
-        });
-      } else {
-        facade = new DeltaSessionFacade(this);
-      }
-    }
-    return (facade);
-  }
-
-  public Principal getPrincipal() {
-    if (this.principal == null && this.serializedPrincipal != null) {
-      Principal sp = null;
-      try {
-        sp = (Principal) BlobHelper.deserializeBlob(this.serializedPrincipal);
-      } catch (Exception e) {
-        StringBuilder builder = new StringBuilder();
-        builder.append(this).append(
-            ": Serialized principal contains a byte[] that cannot be deserialized due to the following exception");
-        ((DeltaSessionManager) getManager()).getLogger().warn(builder.toString(), e);
-        return null;
-      }
-      this.principal = sp;
-      if (getManager() != null) {
-        DeltaSessionManager mgr = (DeltaSessionManager) getManager();
-        if (mgr.getLogger().isDebugEnabled()) {
-          mgr.getLogger().debug(this + ": Deserialized principal: " + this.principal);
-        }
-      }
-    }
-    return this.principal;
   }
 
-  public void setPrincipal(Principal principal) {
-    super.setPrincipal(principal);
-
-    // Put the session into the region to serialize the principal
-    if (getManager() != null) {
-      // TODO convert this to a delta
-      getManager().add(this);
-      DeltaSessionManager mgr = (DeltaSessionManager) getManager();
-      if (mgr.getLogger().isDebugEnabled()) {
-        mgr.getLogger().debug(this + ": Cached principal: " + principal);
-      }
-    }
-  }
-
-  private byte[] getSerializedPrincipal() {
-    if (this.serializedPrincipal == null) {
-      if (this.principal != null && this.principal instanceof GenericPrincipal) {
-        GenericPrincipal gp = (GenericPrincipal) this.principal;
-        this.serializedPrincipal = serialize(gp);
-        if (manager != null) {
-          DeltaSessionManager mgr = (DeltaSessionManager) getManager();
-          if (mgr.getLogger().isDebugEnabled()) {
-            mgr.getLogger().debug(this + ": Serialized principal: " + gp);
-          }
-        }
-      }
-    }
-    return this.serializedPrincipal;
-  }
-
-  protected Region<String, HttpSession> getOperatingRegion() {
-    // This region shouldn't be null when it is needed.
-    // It should have been set by the setOwner method.
-    return this.operatingRegion;
-  }
-
-  public boolean isCommitEnabled() {
-    DeltaSessionManager mgr = (DeltaSessionManager) getManager();
-    return mgr.isCommitValveEnabled();
-  }
-
-  public GatewayDeltaEvent getCurrentGatewayDeltaEvent() {
-    return this.currentGatewayDeltaEvent;
-  }
-
-  public void setCurrentGatewayDeltaEvent(GatewayDeltaEvent currentGatewayDeltaEvent) {
-    this.currentGatewayDeltaEvent = currentGatewayDeltaEvent;
-  }
-
-  @SuppressWarnings("unchecked")
-  public void setOwner(Object manager) {
-    if (manager instanceof DeltaSessionManager) {
-      DeltaSessionManager sessionManager = (DeltaSessionManager) manager;
-      this.manager = sessionManager;
-      initializeRegion(sessionManager);
-      this.hasDelta = false;
-      this.applyRemotely = false;
-      this.enableGatewayDeltaReplication = sessionManager.getEnableGatewayDeltaReplication();
-      this.preferDeserializedForm = sessionManager.getPreferDeserializedForm();
-
-      // Initialize transient variables
-      if (this.listeners == null) {
-        this.listeners = new ArrayList();
-      }
-
-      if (this.notes == null) {
-        this.notes = new Hashtable();
-      }
-
-      contextName = ((DeltaSessionManager) manager).getContextName();
-    } else {
-      throw new IllegalArgumentException(this + ": The Manager must be an AbstractManager");
-    }
-  }
-
-  private void checkBackingCacheAvailable() {
-    if (!((SessionManager) getManager()).isBackingCacheAvailable()) {
-      throw new IllegalStateException("No backing cache server is available.");
-    }
-  }
-
-  public void setAttribute(String name, Object value, boolean notify) {
-    checkBackingCacheAvailable();
-    synchronized (this.changeLock) {
-      // Serialize the value
-      byte[] serializedValue = serialize(value);
-
-      // Store the attribute locally
-      if (this.preferDeserializedForm) {
-        super.setAttribute(name, value, true);
-      } else {
-        super.setAttribute(name, serializedValue, true);
-      }
-
-      if (serializedValue == null) {
-        return;
-      }
-
-      // Create the update attribute message
-      DeltaSessionAttributeEvent event =
-          new DeltaSessionUpdateAttributeEvent(name, serializedValue);
-      queueAttributeEvent(event, true);
-
-      // Distribute the update
-      if (!isCommitEnabled()) {
-        putInRegion(getOperatingRegion(), true, null);
-      }
-    }
-  }
-
-  public void removeAttribute(String name, boolean notify) {
-    checkBackingCacheAvailable();
-    if (expired) {
-      return;
-    }
-    synchronized (this.changeLock) {
-      // Remove the attribute locally
-      super.removeAttribute(name, true);
-
-      // Create the destroy attribute message
-      DeltaSessionAttributeEvent event = new DeltaSessionDestroyAttributeEvent(name);
-      queueAttributeEvent(event, true);
-
-      // Distribute the update
-      if (!isCommitEnabled()) {
-        putInRegion(getOperatingRegion(), true, null);
-      }
-    }
-  }
-
-  public Object getAttribute(String name) {
-    checkBackingCacheAvailable();
-    Object value = super.getAttribute(name);
-
-    // If the attribute is a byte[] (meaning it came from the server),
-    // deserialize it and add it to attributes map before returning it.
-    if (value instanceof byte[]) {
-      try {
-        value = BlobHelper.deserializeBlob((byte[]) value);
-      } catch (Exception e) {
-        StringBuilder builder = new StringBuilder();
-        builder.append(this).append(": Attribute named ").append(name).append(
-            " contains a byte[] that cannot be deserialized due to the following exception");
-        ((DeltaSessionManager) getManager()).getLogger().warn(builder.toString(), e);
-      }
-      if (this.preferDeserializedForm) {
-        localUpdateAttribute(name, value);
-      }
-    }
-
-    // Touch the session region if necessary. This is an asynchronous operation
-    // that prevents the session region from prematurely expiring a session that
-    // is only getting attributes.
-    ((DeltaSessionManager) getManager()).addSessionToTouch(getId());
-
-    return value;
-  }
-
-  public void invalidate() {
-    super.invalidate();
-    // getOperatingRegion().destroy(this.id, true); // already done in super (remove)
-    ((DeltaSessionManager) getManager()).getStatistics().incSessionsInvalidated();
-  }
-
-  public void processExpired() {
-    DeltaSessionManager manager = (DeltaSessionManager) getManager();
-    if (manager != null && manager.getLogger() != null && manager.getLogger().isDebugEnabled()) {
-      ((DeltaSessionManager) getManager()).getLogger().debug(this + ": Expired");
-    }
-
-    // Set expired (so region.destroy is not called again)
-    setExpired(true);
-
-    // Do expire processing
-    super.expire(true);
-
-    // Update statistics
-    if (manager != null) {
-      manager.getStatistics().incSessionsExpired();
-    }
-  }
-
-  @Override
-  public void expire(boolean notify) {
-    if (notify) {
-      getOperatingRegion().destroy(this.getId(), this);
-    } else {
-      super.expire(false);
-    }
-  }
-
-  public void setMaxInactiveInterval(int interval) {
-    super.setMaxInactiveInterval(interval);
-  }
-
-  public void localUpdateAttribute(String name, Object value) {
-    super.setAttribute(name, value, false); // don't do notification since this is a replication
-  }
-
-  public void localDestroyAttribute(String name) {
-    super.removeAttribute(name, false); // don't do notification since this is a replication
-  }
-
-  public void applyAttributeEvents(Region<String, DeltaSessionInterface> region,
-      List<DeltaSessionAttributeEvent> events) {
-    for (DeltaSessionAttributeEvent event : events) {
-      event.apply(this);
-      queueAttributeEvent(event, false);
-    }
-
-    putInRegion(region, false, true);
-  }
-
-  private void initializeRegion(DeltaSessionManager sessionManager) {
-    // Get the session region name
-    this.sessionRegionName = sessionManager.getRegionName();
-
-    // Get the operating region.
-    // If a P2P manager is used, then this will be a local region fronting the
-    // session region if local cache is enabled; otherwise, it will be the
-    // session region itself.
-    // If a CS manager is used, it will be the session proxy region.
-    this.operatingRegion = sessionManager.getSessionCache().getOperatingRegion();
-    if (sessionManager.getLogger().isDebugEnabled()) {
-      sessionManager.getLogger().debug(this + ": Set operating region: " + this.operatingRegion);
-    }
-  }
-
-  private void queueAttributeEvent(DeltaSessionAttributeEvent event,
-      boolean checkAddToCurrentGatewayDelta) {
-    // Add to current gateway delta if necessary
-    if (checkAddToCurrentGatewayDelta) {
-      // If the manager has enabled gateway delta replication and is a P2P
-      // manager, the GatewayDeltaForwardCacheListener will be invoked in this
-      // VM. Add the event to the currentDelta.
-      DeltaSessionManager mgr = (DeltaSessionManager) this.manager;
-      if (this.enableGatewayDeltaReplication && mgr.isPeerToPeer()) {
-        // If commit is not enabled, add the event to the current batch; else,
-        // the current batch will be initialized to the events in the queue will
-        // be added at commit time.
-        if (!isCommitEnabled()) {
-          List<DeltaSessionAttributeEvent> events = new ArrayList<DeltaSessionAttributeEvent>();
-          events.add(event);
-          this.currentGatewayDeltaEvent =
-              new DeltaSessionAttributeEventBatch(this.sessionRegionName, this.id, events);
-        }
-      }
-    }
-    this.eventQueue.add(event);
-  }
-
-  @SuppressWarnings("unchecked")
-  private void putInRegion(Region region, boolean applyRemotely, Object callbackArgument) {
-    this.hasDelta = true;
-    this.applyRemotely = applyRemotely;
-    region.put(this.id, this, callbackArgument);
-    this.eventQueue.clear();
-  }
-
-  public void commit() {
-    if (!isValidInternal())
-      throw new IllegalStateException("commit: Session " + getId() + " already invalidated");
-
-    synchronized (this.changeLock) {
-      // Jens - there used to be a check to only perform this if the queue is
-      // empty, but we want this to always run so that the lastAccessedTime
-      // will be updated even when no attributes have been changed.
-      DeltaSessionManager mgr = (DeltaSessionManager) this.manager;
-      if (this.enableGatewayDeltaReplication && mgr.isPeerToPeer()) {
-        setCurrentGatewayDeltaEvent(
-            new DeltaSessionAttributeEventBatch(this.sessionRegionName, this.id, this.eventQueue));
-      }
-      this.hasDelta = true;
-      this.applyRemotely = true;
-      putInRegion(getOperatingRegion(), true, null);
-      this.eventQueue.clear();
-    }
-  }
-
-  public void abort() {
-    synchronized (this.changeLock) {
-      this.eventQueue.clear();
-    }
-  }
-
-  private void setExpired(boolean expired) {
-    this.expired = expired;
-  }
-
-  public boolean getExpired() {
-    return this.expired;
-  }
-
-  public String getContextName() {
-    return contextName;
-  }
-
-  public boolean hasDelta() {
-    return this.hasDelta;
-  }
-
-  public void toDelta(DataOutput out) throws IOException {
-    // Write whether to apply the changes to another DS if necessary
-    out.writeBoolean(this.applyRemotely);
-
-    // Write the events
-    DataSerializer.writeArrayList((ArrayList) this.eventQueue, out);
-
-    out.writeLong(this.lastAccessedTime);
-    out.writeInt(this.maxInactiveInterval);
-  }
-
-  public void fromDelta(DataInput in) throws IOException, InvalidDeltaException {
-    // Read whether to apply the changes to another DS if necessary
-    this.applyRemotely = in.readBoolean();
-
-    // Read the events
-    List<DeltaSessionAttributeEvent> events = null;
-    try {
-      events = DataSerializer.readArrayList(in);
-    } catch (ClassNotFoundException e) {
-      throw new InvalidDeltaException(e);
-    }
-
-    // This allows for backwards compatibility with 2.1 clients
-    if (((InputStream) in).available() > 0) {
-      this.lastAccessedTime = in.readLong();
-      this.maxInactiveInterval = in.readInt();
-    }
-
-    // Iterate and apply the events
-    for (DeltaSessionAttributeEvent event : events) {
-      event.apply(this);
-    }
-
-    // Add the events to the gateway delta region if necessary
-    if (this.enableGatewayDeltaReplication && this.applyRemotely) {
-      setCurrentGatewayDeltaEvent(
-          new DeltaSessionAttributeEventBatch(this.sessionRegionName, this.id, events));
-    }
-
-    // Access it to set the last accessed time. End access it to set not new.
-    access();
-    endAccess();
-  }
-
-  @Override
-  public void toData(DataOutput out) throws IOException {
-    // Write the StandardSession state
-    DataSerializer.writeString(this.id, out);
-    out.writeLong(this.creationTime);
-    out.writeLong(this.lastAccessedTime);
-    out.writeLong(this.thisAccessedTime);
-    out.writeInt(this.maxInactiveInterval);
-    out.writeBoolean(this.isNew);
-    out.writeBoolean(this.isValid);
-    DataSerializer.writeObject(getSerializedAttributes(), out);
-    DataSerializer.writeByteArray(getSerializedPrincipal(), out);
-
-    // Write the DeltaSession state
-    out.writeBoolean(this.enableGatewayDeltaReplication);
-    DataSerializer.writeString(this.sessionRegionName, out);
-
-    DataSerializer.writeString(this.contextName, out);
-  }
-
-  @Override
-  public void fromData(DataInput in) throws IOException, ClassNotFoundException {
-    // Read the StandardSession state
-    this.id = DataSerializer.readString(in);
-    this.creationTime = in.readLong();
-    this.lastAccessedTime = in.readLong();
-    this.thisAccessedTime = in.readLong();
-    this.maxInactiveInterval = in.readInt();
-    this.isNew = in.readBoolean();
-    this.isValid = in.readBoolean();
-    this.attributes = readInAttributes(in);
-    this.serializedPrincipal = DataSerializer.readByteArray(in);
-
-    // Read the DeltaSession state
-    this.enableGatewayDeltaReplication = in.readBoolean();
-    this.sessionRegionName = DataSerializer.readString(in);
-
-    // This allows for backwards compatibility with 2.1 clients
-    if (((InputStream) in).available() > 0) {
-      this.contextName = DataSerializer.readString(in);
-    }
-
-    // Initialize the transients if necessary
-    if (this.listeners == null) {
-      this.listeners = new ArrayList<>();
-    }
-
-    if (this.notes == null) {
-      this.notes = new Hashtable<>();
-    }
-  }
-
-  protected ConcurrentMap<String, Object> readInAttributes(final DataInput in)
-      throws IOException, ClassNotFoundException {
-    return DataSerializer.readObject(in);
-  }
-
-  @Override
-  public int getSizeInBytes() {
-    int size = 0;
-    for (Enumeration<String> e = getAttributeNames(); e.hasMoreElements();) {
-      // Don't use this.getAttribute() because we don't want to deserialize
-      // the value.
-      Object value = super.getAttribute(e.nextElement());
-      if (value instanceof byte[]) {
-        size += ((byte[]) value).length;
-      }
-    }
-
-    return size;
-  }
-
-  @SuppressWarnings({"unchecked", "rawtypes"})
-  protected ConcurrentMap<String, byte[]> getSerializedAttributes() {
-    // Iterate the values and serialize them if necessary before sending them to the server. This
-    // makes the application classes unnecessary on the server.
-    ConcurrentMap<String, byte[]> serializedAttributes = new ConcurrentHashMap<String, byte[]>();
-    for (Iterator i = this.attributes.entrySet().iterator(); i.hasNext();) {
-      Map.Entry<String, Object> entry = (Map.Entry<String, Object>) i.next();
-      Object value = entry.getValue();
-      byte[] serializedValue = value instanceof byte[] ? (byte[]) value : serialize(value);
-      serializedAttributes.put(entry.getKey(), serializedValue);
-    }
-    return serializedAttributes;
-  }
-
-  protected byte[] serialize(Object obj) {
-    byte[] serializedValue = null;
-    try {
-      serializedValue = BlobHelper.serializeToBlob(obj);
-    } catch (IOException e) {
-      StringBuilder builder = new StringBuilder();
-      builder.append(this).append(": Object ").append(obj)
-          .append(" cannot be serialized due to the following exception");
-      ((DeltaSessionManager) getManager()).getLogger().warn(builder.toString(), e);
-    }
-    return serializedValue;
-  }
-
-  @Override
-  public String toString() {
-    return new StringBuilder().append("DeltaSession[").append("id=").append(getId())
-        .append("; context=").append(this.contextName).append("; sessionRegionName=")
-        .append(this.sessionRegionName).append("; operatingRegionName=")
-        .append(getOperatingRegion() == null ? "unset" : getOperatingRegion().getFullPath())
-        .append("]").toString();
-  }
 }
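The `toData()`/`fromData()` pair removed above relies on a wire-format trick for backwards compatibility: fields appended by newer versions (here `contextName`) are read only if bytes remain in the stream, via the `((InputStream) in).available() > 0` probe. A minimal, self-contained sketch of that trick using plain JDK streams (class and field names here are illustrative, not Geode's `DataSerializable` API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

public class OptionalFieldDemo {
  static byte[] write(String id, String contextName) {
    try {
      ByteArrayOutputStream bytes = new ByteArrayOutputStream();
      DataOutputStream out = new DataOutputStream(bytes);
      out.writeUTF(id);                  // field every version writes
      if (contextName != null) {
        out.writeUTF(contextName);       // field only newer writers append
      }
      out.flush();
      return bytes.toByteArray();
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  static String readOptionalContext(byte[] payload) {
    try {
      ByteArrayInputStream bytes = new ByteArrayInputStream(payload);
      DataInputStream in = new DataInputStream(bytes);
      in.readUTF();                      // consume the mandatory id
      // Probe for leftover bytes before reading the optional trailing field,
      // mirroring the ((InputStream) in).available() > 0 check in the diff.
      return bytes.available() > 0 ? in.readUTF() : null;
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  public static void main(String[] args) {
    // An "old" payload without the trailing field reads cleanly as null.
    System.out.println(readOptionalContext(write("session-1", null)));
    // A "new" payload yields the appended context name.
    System.out.println(readOptionalContext(write("session-2", "/app")));
  }
}
```

Note this probe only works because the serializer hands the reader a stream scoped to one session's bytes; `available()` on an arbitrary shared stream would not delimit the record.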

http://git-wip-us.apache.org/repos/asf/geode/blob/f38dff9d/extensions/geode-modules-tomcat8/src/main/java/org/apache/geode/modules/session/catalina/DeltaSession8.java
----------------------------------------------------------------------
diff --git a/extensions/geode-modules-tomcat8/src/main/java/org/apache/geode/modules/session/catalina/DeltaSession8.java b/extensions/geode-modules-tomcat8/src/main/java/org/apache/geode/modules/session/catalina/DeltaSession8.java
index f69382a..f2e797e 100644
--- a/extensions/geode-modules-tomcat8/src/main/java/org/apache/geode/modules/session/catalina/DeltaSession8.java
+++ b/extensions/geode-modules-tomcat8/src/main/java/org/apache/geode/modules/session/catalina/DeltaSession8.java
@@ -14,83 +14,17 @@
  */
 package org.apache.geode.modules.session.catalina;
 
-import java.io.DataInput;
-import java.io.DataOutput;
-import java.io.IOException;
-import java.io.InputStream;
-import java.security.AccessController;
-import java.security.Principal;
-import java.security.PrivilegedAction;
-import java.util.ArrayList;
-import java.util.Enumeration;
-import java.util.Hashtable;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.ConcurrentMap;
-
-import javax.servlet.http.HttpSession;
-
 import org.apache.catalina.Manager;
-import org.apache.catalina.realm.GenericPrincipal;
-import org.apache.catalina.security.SecurityUtil;
-import org.apache.catalina.session.StandardSession;
-import org.apache.juli.logging.Log;
-import org.apache.juli.logging.LogFactory;
-
-import org.apache.geode.DataSerializable;
-import org.apache.geode.DataSerializer;
-import org.apache.geode.Delta;
-import org.apache.geode.InvalidDeltaException;
-import org.apache.geode.cache.Region;
-import org.apache.geode.internal.cache.lru.Sizeable;
-import org.apache.geode.internal.util.BlobHelper;
-import org.apache.geode.modules.gatewaydelta.GatewayDelta;
-import org.apache.geode.modules.gatewaydelta.GatewayDeltaEvent;
-import org.apache.geode.modules.session.catalina.internal.DeltaSessionAttributeEvent;
-import org.apache.geode.modules.session.catalina.internal.DeltaSessionAttributeEventBatch;
-import org.apache.geode.modules.session.catalina.internal.DeltaSessionDestroyAttributeEvent;
-import org.apache.geode.modules.session.catalina.internal.DeltaSessionUpdateAttributeEvent;
 
 
 @SuppressWarnings("serial")
-public class DeltaSession8 extends StandardSession
-    implements DataSerializable, Delta, GatewayDelta, Sizeable, DeltaSessionInterface {
-
-  private transient Region<String, HttpSession> operatingRegion;
-
-  private String sessionRegionName;
-
-  private String contextName;
-
-  private boolean hasDelta;
-
-  private boolean applyRemotely;
-
-  private boolean enableGatewayDeltaReplication;
-
-  private transient final Object changeLock = new Object();
-
-  private final List<DeltaSessionAttributeEvent> eventQueue =
-      new ArrayList<DeltaSessionAttributeEvent>();
-
-  private transient GatewayDeltaEvent currentGatewayDeltaEvent;
-
-  private transient boolean expired = false;
-
-  private transient boolean preferDeserializedForm = true;
-
-  private byte[] serializedPrincipal;
-
-  private final Log LOG = LogFactory.getLog(DeltaSession.class.getName());
-
+public class DeltaSession8 extends DeltaSession {
   /**
    * Construct a new <code>Session</code> associated with no <code>Manager</code>. The
    * <code>Manager</code> will be assigned later using {@link #setOwner(Object)}.
    */
   public DeltaSession8() {
-    super(null);
+    super();
   }
 
   /**
@@ -100,506 +34,6 @@ public class DeltaSession8 extends StandardSession
    */
   public DeltaSession8(Manager manager) {
     super(manager);
-    setOwner(manager);
-  }
-
-  /**
-   * Return the <code>HttpSession</code> for which this object is the facade.
-   */
-  @SuppressWarnings("unchecked")
-  public HttpSession getSession() {
-    if (facade == null) {
-      if (SecurityUtil.isPackageProtectionEnabled()) {
-        final DeltaSession8 fsession = this;
-        facade = (DeltaSessionFacade) AccessController.doPrivileged(new PrivilegedAction() {
-          public Object run() {
-            return new DeltaSessionFacade(fsession);
-          }
-        });
-      } else {
-        facade = new DeltaSessionFacade(this);
-      }
-    }
-    return (facade);
-  }
-
-  public Principal getPrincipal() {
-    if (this.principal == null && this.serializedPrincipal != null) {
-
-      Principal sp = null;
-      try {
-        sp = (Principal) BlobHelper.deserializeBlob(this.serializedPrincipal);
-      } catch (Exception e) {
-        StringBuilder builder = new StringBuilder();
-        builder.append(this).append(
-            ": Serialized principal contains a byte[] that cannot be deserialized due to the following exception");
-        ((DeltaSessionManager) getManager()).getLogger().warn(builder.toString(), e);
-        return null;
-      }
-      this.principal = sp;
-      if (getManager() != null) {
-        DeltaSessionManager mgr = (DeltaSessionManager) getManager();
-        if (mgr.getLogger().isDebugEnabled()) {
-          mgr.getLogger().debug(this + ": Deserialized principal: " + this.principal);
-        }
-      }
-    }
-    return this.principal;
-  }
-
-  public void setPrincipal(Principal principal) {
-    super.setPrincipal(principal);
-
-    // Put the session into the region to serialize the principal
-    if (getManager() != null) {
-      getManager().add(this);
-      DeltaSessionManager mgr = (DeltaSessionManager) getManager();
-      if (mgr.getLogger().isDebugEnabled()) {
-        mgr.getLogger().debug(this + ": Cached principal: " + principal);
-      }
-    }
-  }
-
-  private byte[] getSerializedPrincipal() {
-    if (this.serializedPrincipal == null) {
-      if (this.principal != null && this.principal instanceof GenericPrincipal) {
-        GenericPrincipal gp = (GenericPrincipal) this.principal;
-        this.serializedPrincipal = serialize(gp);
-        if (manager != null) {
-          DeltaSessionManager mgr = (DeltaSessionManager) getManager();
-          if (mgr.getLogger().isDebugEnabled()) {
-            mgr.getLogger().debug(this + ": Serialized principal: " + gp);
-          }
-        }
-      }
-    }
-    return this.serializedPrincipal;
-  }
-
-  protected Region<String, HttpSession> getOperatingRegion() {
-    // This region shouldn't be null when it is needed.
-    // It should have been set by the setOwner method.
-    return this.operatingRegion;
-  }
-
-  public boolean isCommitEnabled() {
-    DeltaSessionManager mgr = (DeltaSessionManager) getManager();
-    return mgr.isCommitValveEnabled();
-  }
-
-  public GatewayDeltaEvent getCurrentGatewayDeltaEvent() {
-    return this.currentGatewayDeltaEvent;
-  }
-
-  public void setCurrentGatewayDeltaEvent(GatewayDeltaEvent currentGatewayDeltaEvent) {
-    this.currentGatewayDeltaEvent = currentGatewayDeltaEvent;
-  }
-
-  @SuppressWarnings("unchecked")
-  public void setOwner(Object manager) {
-    if (manager instanceof DeltaSessionManager) {
-      DeltaSessionManager sessionManager = (DeltaSessionManager) manager;
-      this.manager = sessionManager;
-      initializeRegion(sessionManager);
-      this.hasDelta = false;
-      this.applyRemotely = false;
-      this.enableGatewayDeltaReplication = sessionManager.getEnableGatewayDeltaReplication();
-      this.preferDeserializedForm = sessionManager.getPreferDeserializedForm();
-
-      // Initialize transient variables
-      if (this.listeners == null) {
-        this.listeners = new ArrayList();
-      }
-
-      if (this.notes == null) {
-        this.notes = new Hashtable();
-      }
-
-      contextName = ((DeltaSessionManager) manager).getContextName();
-    } else {
-      throw new IllegalArgumentException(this + ": The Manager must be an AbstractManager");
-    }
-  }
-
-  private void checkBackingCacheAvailable() {
-    if (!((SessionManager) getManager()).isBackingCacheAvailable()) {
-      throw new IllegalStateException("No backing cache server is available.");
-    }
-  }
-
-  public void setAttribute(String name, Object value, boolean notify) {
-    checkBackingCacheAvailable();
-    synchronized (this.changeLock) {
-      // Serialize the value
-      byte[] serializedValue = serialize(value);
-
-      // Store the attribute locally
-      if (this.preferDeserializedForm) {
-        super.setAttribute(name, value, true);
-      } else {
-        super.setAttribute(name, serializedValue, true);
-      }
-
-      if (serializedValue == null) {
-        return;
-      }
-
-      // Create the update attribute message
-      DeltaSessionAttributeEvent event =
-          new DeltaSessionUpdateAttributeEvent(name, serializedValue);
-      queueAttributeEvent(event, true);
-
-      // Distribute the update
-      if (!isCommitEnabled()) {
-        putInRegion(getOperatingRegion(), true, null);
-      }
-    }
-  }
-
-  public void removeAttribute(String name, boolean notify) {
-    checkBackingCacheAvailable();
-    if (expired) {
-      return;
-    }
-    synchronized (this.changeLock) {
-      // Remove the attribute locally
-      super.removeAttribute(name, true);
-
-      // Create the destroy attribute message
-      DeltaSessionAttributeEvent event = new DeltaSessionDestroyAttributeEvent(name);
-      queueAttributeEvent(event, true);
-
-      // Distribute the update
-      if (!isCommitEnabled()) {
-        putInRegion(getOperatingRegion(), true, null);
-      }
-    }
-  }
-
-  public Object getAttribute(String name) {
-    checkBackingCacheAvailable();
-    Object value = super.getAttribute(name);
-
-    // If the attribute is a byte[] (meaning it came from the server),
-    // deserialize it and add it to attributes map before returning it.
-    if (value instanceof byte[]) {
-      try {
-        value = BlobHelper.deserializeBlob((byte[]) value);
-      } catch (Exception e) {
-        StringBuilder builder = new StringBuilder();
-        builder.append(this).append(": Attribute named ").append(name).append(
-            " contains a byte[] that cannot be deserialized due to the following exception");
-        ((DeltaSessionManager) getManager()).getLogger().warn(builder.toString(), e);
-      }
-      if (this.preferDeserializedForm) {
-        localUpdateAttribute(name, value);
-      }
-    }
-
-    // Touch the session region if necessary. This is an asynchronous operation
-    // that prevents the session region from prematurely expiring a session that
-    // is only getting attributes.
-    ((DeltaSessionManager) getManager()).addSessionToTouch(getId());
-
-    return value;
-  }
-
-  public void invalidate() {
-    super.invalidate();
-    // getOperatingRegion().destroy(this.id, true); // already done in super (remove)
-    ((DeltaSessionManager) getManager()).getStatistics().incSessionsInvalidated();
-  }
-
-  public void processExpired() {
-    DeltaSessionManager manager = (DeltaSessionManager) getManager();
-    if (manager != null && manager.getLogger() != null && manager.getLogger().isDebugEnabled()) {
-      ((DeltaSessionManager) getManager()).getLogger().debug(this + ": Expired");
-    }
-
-    // Set expired (so region.destroy is not called again)
-    setExpired(true);
-
-    // Do expire processing
-    super.expire(true);
-
-    // Update statistics
-    if (manager != null) {
-      manager.getStatistics().incSessionsExpired();
-    }
-  }
-
-  @Override
-  public void expire(boolean notify) {
-    if (notify) {
-      getOperatingRegion().destroy(this.getId(), this);
-    } else {
-      super.expire(false);
-    }
-  }
-
-  public void setMaxInactiveInterval(int interval) {
-    super.setMaxInactiveInterval(interval);
-  }
-
-  public void localUpdateAttribute(String name, Object value) {
-    super.setAttribute(name, value, false); // don't do notification since this is a replication
-  }
-
-  public void localDestroyAttribute(String name) {
-    super.removeAttribute(name, false); // don't do notification since this is a replication
-  }
-
-  public void applyAttributeEvents(Region<String, DeltaSessionInterface> region,
-      List<DeltaSessionAttributeEvent> events) {
-    for (DeltaSessionAttributeEvent event : events) {
-      event.apply(this);
-      queueAttributeEvent(event, false);
-    }
-
-    putInRegion(region, false, true);
-  }
-
-  private void initializeRegion(DeltaSessionManager sessionManager) {
-    // Get the session region name
-    this.sessionRegionName = sessionManager.getRegionName();
-
-    // Get the operating region.
-    // If a P2P manager is used, then this will be a local region fronting the
-    // session region if local cache is enabled; otherwise, it will be the
-    // session region itself.
-    // If a CS manager is used, it will be the session proxy region.
-    this.operatingRegion = sessionManager.getSessionCache().getOperatingRegion();
-    if (sessionManager.getLogger().isDebugEnabled()) {
-      sessionManager.getLogger().debug(this + ": Set operating region: " + this.operatingRegion);
-    }
-  }
-
-  private void queueAttributeEvent(DeltaSessionAttributeEvent event,
-      boolean checkAddToCurrentGatewayDelta) {
-    // Add to current gateway delta if necessary
-    if (checkAddToCurrentGatewayDelta) {
-      // If the manager has enabled gateway delta replication and is a P2P
-      // manager, the GatewayDeltaForwardCacheListener will be invoked in this
-      // VM. Add the event to the currentDelta.
-      DeltaSessionManager mgr = (DeltaSessionManager) this.manager;
-      if (this.enableGatewayDeltaReplication && mgr.isPeerToPeer()) {
-        // If commit is not enabled, add the event to the current batch; else,
-        // the current batch will be initialized to the events in the queue will
-        // be added at commit time.
-        if (!isCommitEnabled()) {
-          List<DeltaSessionAttributeEvent> events = new ArrayList<DeltaSessionAttributeEvent>();
-          events.add(event);
-          this.currentGatewayDeltaEvent =
-              new DeltaSessionAttributeEventBatch(this.sessionRegionName, this.id, events);
-        }
-      }
-    }
-    this.eventQueue.add(event);
-  }
-
-  @SuppressWarnings("unchecked")
-  private void putInRegion(Region region, boolean applyRemotely, Object callbackArgument) {
-    this.hasDelta = true;
-    this.applyRemotely = applyRemotely;
-    region.put(this.id, this, callbackArgument);
-    this.eventQueue.clear();
-  }
-
-  public void commit() {
-    if (!isValidInternal())
-      throw new IllegalStateException("commit: Session " + getId() + " already invalidated");
-    // (STRING_MANAGER.getString("deltaSession.commit.ise", getId()));
-
-    synchronized (this.changeLock) {
-      // Jens - there used to be a check to only perform this if the queue is
-      // empty, but we want this to always run so that the lastAccessedTime
-      // will be updated even when no attributes have been changed.
-      DeltaSessionManager mgr = (DeltaSessionManager) this.manager;
-      if (this.enableGatewayDeltaReplication && mgr.isPeerToPeer()) {
-        setCurrentGatewayDeltaEvent(
-            new DeltaSessionAttributeEventBatch(this.sessionRegionName, this.id, this.eventQueue));
-      }
-      this.hasDelta = true;
-      this.applyRemotely = true;
-      putInRegion(getOperatingRegion(), true, null);
-      this.eventQueue.clear();
-    }
-  }
-
-  public void abort() {
-    synchronized (this.changeLock) {
-      this.eventQueue.clear();
-    }
-  }
-
-  private void setExpired(boolean expired) {
-    this.expired = expired;
-  }
-
-  public boolean getExpired() {
-    return this.expired;
-  }
-
-  public String getContextName() {
-    return contextName;
-  }
-
-  public boolean hasDelta() {
-    return this.hasDelta;
-  }
-
-  public void toDelta(DataOutput out) throws IOException {
-    // Write whether to apply the changes to another DS if necessary
-    out.writeBoolean(this.applyRemotely);
-
-    // Write the events
-    DataSerializer.writeArrayList((ArrayList) this.eventQueue, out);
-
-    out.writeLong(this.lastAccessedTime);
-    out.writeInt(this.maxInactiveInterval);
-  }
-
-  public void fromDelta(DataInput in) throws IOException, InvalidDeltaException {
-    // Read whether to apply the changes to another DS if necessary
-    this.applyRemotely = in.readBoolean();
-
-    // Read the events
-    List<DeltaSessionAttributeEvent> events = null;
-    try {
-      events = DataSerializer.readArrayList(in);
-    } catch (ClassNotFoundException e) {
-      throw new InvalidDeltaException(e);
-    }
-
-    // This allows for backwards compatibility with 2.1 clients
-    if (((InputStream) in).available() > 0) {
-      this.lastAccessedTime = in.readLong();
-      this.maxInactiveInterval = in.readInt();
-    }
-
-    // Iterate and apply the events
-    for (DeltaSessionAttributeEvent event : events) {
-      event.apply(this);
-    }
-
-    // Add the events to the gateway delta region if necessary
-    if (this.enableGatewayDeltaReplication && this.applyRemotely) {
-      setCurrentGatewayDeltaEvent(
-          new DeltaSessionAttributeEventBatch(this.sessionRegionName, this.id, events));
-    }
-
-    // Access it to set the last accessed time. End access it to set not new.
-    access();
-    endAccess();
-  }
-
-  @Override
-  public void toData(DataOutput out) throws IOException {
-    // Write the StandardSession state
-    DataSerializer.writeString(this.id, out);
-    out.writeLong(this.creationTime);
-    out.writeLong(this.lastAccessedTime);
-    out.writeLong(this.thisAccessedTime);
-    out.writeInt(this.maxInactiveInterval);
-    out.writeBoolean(this.isNew);
-    out.writeBoolean(this.isValid);
-    DataSerializer.writeObject(getSerializedAttributes(), out);
-    DataSerializer.writeByteArray(getSerializedPrincipal(), out);
-
-    // Write the DeltaSession state
-    out.writeBoolean(this.enableGatewayDeltaReplication);
-    DataSerializer.writeString(this.sessionRegionName, out);
-
-    DataSerializer.writeString(this.contextName, out);
-  }
-
-  @Override
-  @SuppressWarnings("unchecked")
-  public void fromData(DataInput in) throws IOException, ClassNotFoundException {
-    // Read the StandardSession state
-    this.id = DataSerializer.readString(in);
-    this.creationTime = in.readLong();
-    this.lastAccessedTime = in.readLong();
-    this.thisAccessedTime = in.readLong();
-    this.maxInactiveInterval = in.readInt();
-    this.isNew = in.readBoolean();
-    this.isValid = in.readBoolean();
-    this.attributes = readInAttributes(in);
-    this.serializedPrincipal = DataSerializer.readByteArray(in);
-
-    // Read the DeltaSession state
-    this.enableGatewayDeltaReplication = in.readBoolean();
-    this.sessionRegionName = DataSerializer.readString(in);
-
-    // This allows for backwards compatibility with 2.1 clients
-    if (((InputStream) in).available() > 0) {
-      this.contextName = DataSerializer.readString(in);
-    }
-
-    // Initialize the transients if necessary
-    if (this.listeners == null) {
-      this.listeners = new ArrayList();
-    }
-
-    if (this.notes == null) {
-      this.notes = new Hashtable();
-    }
-  }
-
-  @Override
-  public int getSizeInBytes() {
-    int size = 0;
-    for (Enumeration<String> e = getAttributeNames(); e.hasMoreElements();) {
-      // Don't use this.getAttribute() because we don't want to deserialize
-      // the value.
-      Object value = super.getAttribute(e.nextElement());
-      if (value instanceof byte[]) {
-        size += ((byte[]) value).length;
-      }
-    }
-
-    return size;
-  }
-
-  protected byte[] serialize(Object obj) {
-    byte[] serializedValue = null;
-    try {
-      serializedValue = BlobHelper.serializeToBlob(obj);
-    } catch (IOException e) {
-      StringBuilder builder = new StringBuilder();
-      builder.append(this).append(": Object ").append(obj)
-          .append(" cannot be serialized due to the following exception");
-      ((DeltaSessionManager) getManager()).getLogger().warn(builder.toString(), e);
-    }
-    return serializedValue;
-  }
-
-  @Override
-  public String toString() {
-    return new StringBuilder().append("DeltaSession[").append("id=").append(getId())
-        .append("; context=").append(this.contextName).append("; sessionRegionName=")
-        .append(this.sessionRegionName).append("; operatingRegionName=")
-        .append(getOperatingRegion() == null ? "unset" : getOperatingRegion().getFullPath())
-        .append("]").toString();
-  }
-
-  @SuppressWarnings({"unchecked", "rawtypes"})
-  protected ConcurrentMap<String, byte[]> getSerializedAttributes() {
-    // Iterate the values and serialize them if necessary before sending them to the server. This
-    // makes the application classes unnecessary on the server.
-    ConcurrentMap<String, byte[]> serializedAttributes = new ConcurrentHashMap<String, byte[]>();
-    for (Iterator i = this.attributes.entrySet().iterator(); i.hasNext();) {
-      Map.Entry<String, Object> entry = (Map.Entry<String, Object>) i.next();
-      Object value = entry.getValue();
-      byte[] serializedValue = value instanceof byte[] ? (byte[]) value : serialize(value);
-      serializedAttributes.put(entry.getKey(), serializedValue);
-    }
-    return serializedAttributes;
-  }
-
-  protected ConcurrentMap readInAttributes(final DataInput in)
-      throws IOException, ClassNotFoundException {
-    return DataSerializer.readObject(in);
   }
 
 }
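The several hundred lines deleted from DeltaSession8 above were duplicates of DeltaSession; what they implemented is the delta-propagation pattern of queuing attribute events locally, shipping only the queue in `toDelta()`, and replaying it in `fromDelta()` on the receiving member. A hedged sketch of that pattern using plain JDK streams rather than Geode's `Delta`/`DataSerializer` API (all names hypothetical):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DeltaQueueDemo {
  private final Map<String, String> attributes = new HashMap<>();
  private final List<String[]> eventQueue = new ArrayList<>();

  public void setAttribute(String name, String value) {
    attributes.put(name, value);
    eventQueue.add(new String[] {name, value}); // queue the change, not the whole state
  }

  public byte[] toDelta() {
    try {
      ByteArrayOutputStream bytes = new ByteArrayOutputStream();
      DataOutputStream out = new DataOutputStream(bytes);
      out.writeInt(eventQueue.size());
      for (String[] event : eventQueue) {
        out.writeUTF(event[0]);
        out.writeUTF(event[1]);
      }
      eventQueue.clear(); // cleared after shipping, as commit() does above
      return bytes.toByteArray();
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  public void fromDelta(byte[] delta) {
    try {
      DataInputStream in = new DataInputStream(new ByteArrayInputStream(delta));
      int count = in.readInt();
      for (int i = 0; i < count; i++) {
        attributes.put(in.readUTF(), in.readUTF()); // replay each queued event
      }
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  public String get(String name) {
    return attributes.get(name);
  }

  public static void main(String[] args) {
    DeltaQueueDemo sender = new DeltaQueueDemo();
    DeltaQueueDemo receiver = new DeltaQueueDemo();
    sender.setAttribute("user", "alice");
    receiver.fromDelta(sender.toDelta()); // only the one event crosses the wire
    System.out.println(receiver.get("user"));
  }
}
```

The payoff is the same as in the real sessions: a session with many attributes distributes only the handful that changed since the last put.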

http://git-wip-us.apache.org/repos/asf/geode/blob/f38dff9d/extensions/geode-modules/src/main/java/org/apache/geode/modules/session/catalina/DeltaSession.java
----------------------------------------------------------------------
diff --git a/extensions/geode-modules/src/main/java/org/apache/geode/modules/session/catalina/DeltaSession.java b/extensions/geode-modules/src/main/java/org/apache/geode/modules/session/catalina/DeltaSession.java
index ac612da..27e5bce 100644
--- a/extensions/geode-modules/src/main/java/org/apache/geode/modules/session/catalina/DeltaSession.java
+++ b/extensions/geode-modules/src/main/java/org/apache/geode/modules/session/catalina/DeltaSession.java
@@ -40,6 +40,7 @@ import java.io.DataInput;
 import java.io.DataOutput;
 import java.io.IOException;
 import java.io.InputStream;
+import java.lang.reflect.Field;
 import java.security.AccessController;
 import java.security.Principal;
 import java.security.PrivilegedAction;
@@ -80,8 +81,18 @@ public class DeltaSession extends StandardSession
 
   private byte[] serializedPrincipal;
 
+  private static Field cachedField = null;
+
   private final Log LOG = LogFactory.getLog(DeltaSession.class.getName());
 
+  static {
+    try {
+      cachedField = StandardSession.class.getDeclaredField("attributes");
+      cachedField.setAccessible(true);
+    } catch (NoSuchFieldException e) {
+      throw new IllegalStateException(e);
+    }
+  }
   /**
    * The string manager for this package.
    */
@@ -531,7 +542,7 @@ public class DeltaSession extends StandardSession
     this.maxInactiveInterval = in.readInt();
     this.isNew = in.readBoolean();
     this.isValid = in.readBoolean();
-    this.attributes = readInAttributes(in);
+    readInAttributes(in);
     this.serializedPrincipal = DataSerializer.readByteArray(in);
 
     // Read the DeltaSession state
@@ -553,8 +564,26 @@ public class DeltaSession extends StandardSession
     }
   }
 
-  protected Map readInAttributes(final DataInput in) throws IOException, ClassNotFoundException {
-    return DataSerializer.readObject(in);
+  private void readInAttributes(DataInput in) throws IOException, ClassNotFoundException {
+    ConcurrentHashMap map = (ConcurrentHashMap) DataSerializer.readObject(in);
+    try {
+      Field field = getAttributesFieldObject();
+      field.set(this, map);
+    } catch (IllegalAccessException e) {
+      logError(e);
+      throw new IllegalStateException(e);
+    }
+  }
+
+  protected Field getAttributesFieldObject() {
+    return cachedField;
+  }
+
+  protected void logError(Exception e) {
+    if (getManager() != null) {
+      DeltaSessionManager mgr = (DeltaSessionManager) getManager();
+      mgr.getLogger().error(e);
+    }
   }
 
   @Override
@@ -576,8 +605,8 @@ public class DeltaSession extends StandardSession
   protected Map<String, byte[]> getSerializedAttributes() {
     // Iterate the values and serialize them if necessary before sending them to the server. This
     // makes the application classes unnecessary on the server.
-    Map<String, byte[]> serializedAttributes = new ConcurrentHashMap<String, byte[]>();
-    for (Iterator i = this.attributes.entrySet().iterator(); i.hasNext();) {
+    Map<String, byte[]> serializedAttributes = new ConcurrentHashMap<>();
+    for (Iterator i = getAttributes().entrySet().iterator(); i.hasNext();) {
       Map.Entry<String, Object> entry = (Map.Entry<String, Object>) i.next();
       Object value = entry.getValue();
       byte[] serializedValue = value instanceof byte[] ? (byte[]) value : serialize(value);
@@ -586,6 +615,17 @@ public class DeltaSession extends StandardSession
     return serializedAttributes;
   }
 
+  protected Map getAttributes() {
+    try {
+      Field field = getAttributesFieldObject();
+      Map map = (Map) field.get(this);
+      return map;
+    } catch (IllegalAccessException e) {
+      logError(e);
+    }
+    throw new IllegalStateException("Unable to access attributes field");
+  }
+
   protected byte[] serialize(Object obj) {
     byte[] serializedValue = null;
     try {

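The DeltaSession change above replaces direct assignment of StandardSession's `attributes` field with a `Field` resolved once in a static initializer, so the module no longer depends on that field being assignable across Tomcat versions. A minimal sketch of the cached-reflective-Field technique, assuming a hypothetical `BaseSession` stand-in for StandardSession:

```java
import java.lang.reflect.Field;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for StandardSession and its private attributes map.
class BaseSession {
  private Map<String, Object> attributes = new ConcurrentHashMap<>();
}

public class ReflectiveAttributes extends BaseSession {
  // Resolve and cache the Field once per class load, as the diff's static
  // initializer does; failing fast surfaces an incompatible base class early.
  private static final Field ATTRIBUTES_FIELD;

  static {
    try {
      ATTRIBUTES_FIELD = BaseSession.class.getDeclaredField("attributes");
      ATTRIBUTES_FIELD.setAccessible(true);
    } catch (NoSuchFieldException e) {
      throw new IllegalStateException(e);
    }
  }

  @SuppressWarnings("unchecked")
  public Map<String, Object> getAttributes() {
    try {
      return (Map<String, Object>) ATTRIBUTES_FIELD.get(this);
    } catch (IllegalAccessException e) {
      throw new IllegalStateException("Unable to access attributes field", e);
    }
  }

  // Mirrors readInAttributes(): swap in a freshly deserialized map.
  public void setAttributes(Map<String, Object> map) {
    try {
      ATTRIBUTES_FIELD.set(this, map);
    } catch (IllegalAccessException e) {
      throw new IllegalStateException(e);
    }
  }

  public static void main(String[] args) {
    ReflectiveAttributes session = new ReflectiveAttributes();
    Map<String, Object> replacement = new ConcurrentHashMap<>();
    replacement.put("user", "alice");
    session.setAttributes(replacement);
    System.out.println(session.getAttributes().get("user"));
  }
}
```

Caching the `Field` matters because `getDeclaredField` plus `setAccessible` on every deserialization would add avoidable per-session overhead; the lookup cost is paid once.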
http://git-wip-us.apache.org/repos/asf/geode/blob/f38dff9d/geode-assembly/build.gradle
----------------------------------------------------------------------
diff --git a/geode-assembly/build.gradle b/geode-assembly/build.gradle
index a76afdc..e135675 100755
--- a/geode-assembly/build.gradle
+++ b/geode-assembly/build.gradle
@@ -97,6 +97,8 @@ dependencies {
     exclude module: 'spring-core'
     exclude module: 'commons-logging'
   }
+
+  testCompile project(':geode-old-versions')
 }
 
 sourceSets {
@@ -430,6 +432,7 @@ build.dependsOn installDist
 
 installDist.dependsOn ':extensions/geode-modules-assembly:dist'
 distributedTest.dependsOn ':extensions/session-testing-war:war'
+distributedTest.dependsOn ':geode-old-versions:build'
 
 /**Print the names of all jar files in a fileTree */
 def printJars(tree) {

http://git-wip-us.apache.org/repos/asf/geode/blob/f38dff9d/geode-assembly/src/test/java/org/apache/geode/session/tests/ContainerInstall.java
----------------------------------------------------------------------
diff --git a/geode-assembly/src/test/java/org/apache/geode/session/tests/ContainerInstall.java b/geode-assembly/src/test/java/org/apache/geode/session/tests/ContainerInstall.java
index 9d03417..f9bce0a 100644
--- a/geode-assembly/src/test/java/org/apache/geode/session/tests/ContainerInstall.java
+++ b/geode-assembly/src/test/java/org/apache/geode/session/tests/ContainerInstall.java
@@ -65,7 +65,7 @@ public abstract class ContainerInstall {
 
   public static final String GEODE_BUILD_HOME = System.getenv("GEODE_HOME");
   public static final String DEFAULT_INSTALL_DIR = "/tmp/cargo_containers/";
-  private static final String DEFAULT_MODULE_LOCATION = GEODE_BUILD_HOME + "/tools/Modules/";
+  protected static final String DEFAULT_MODULE_LOCATION = GEODE_BUILD_HOME + "/tools/Modules/";
   public static final String DEFAULT_MODULE_EXTRACTION_DIR = "/tmp/cargo_modules/";
 
   /**
@@ -95,6 +95,11 @@ public abstract class ContainerInstall {
     }
   }
 
+  public ContainerInstall(String installDir, String downloadURL, ConnectionType connType,
+      String moduleName) throws IOException {
+    this(installDir, downloadURL, connType, moduleName, DEFAULT_MODULE_LOCATION);
+  }
+
   /**
    * Base class for handling downloading and configuring J2EE installations
    *
@@ -103,18 +108,17 @@ public abstract class ContainerInstall {
    * installations.
    *
    * Subclasses provide installation of specific containers.
-   *
+   * 
    * @param connType Enum representing the connection type of this installation (either client
    *        server or peer to peer)
    * @param moduleName The module name of the installation being setup (i.e. tomcat, appserver,
    *        etc.)
    */
   public ContainerInstall(String installDir, String downloadURL, ConnectionType connType,
-      String moduleName) throws IOException {
+      String moduleName, String geodeModuleLocation) throws IOException {
     this.connType = connType;
 
-    // Removes previous run stuff (modules, installs, etc.)
-    clearPreviousRuns();
+    clearPreviousInstall(installDir);
 
     logger.info("Installing container from URL " + downloadURL);
 
@@ -125,7 +129,7 @@ public abstract class ContainerInstall {
     // Set install home
     INSTALL_PATH = installer.getHome();
     // Find and extract the module path
-    MODULE_PATH = findAndExtractModule(moduleName);
+    MODULE_PATH = findAndExtractModule(geodeModuleLocation, moduleName);
     logger.info("Extracted module " + moduleName + " to " + MODULE_PATH);
     // Find the session testing war path
     WAR_FILE_PATH = findSessionTestingWar();
@@ -148,17 +152,12 @@ public abstract class ContainerInstall {
   /**
    * Cleans up the installation by deleting the extracted module and downloaded installation folders
    */
-  public void clearPreviousRuns() throws IOException {
-    File modulesFolder = new File(DEFAULT_MODULE_EXTRACTION_DIR);
-    File installsFolder = new File(DEFAULT_INSTALL_DIR);
-
-    // Remove default modules extraction from previous runs
-    if (modulesFolder.exists()) {
-      FileUtils.deleteDirectory(modulesFolder);
-    }
-    // Remove default installs from previous runs
-    if (installsFolder.exists()) {
-      FileUtils.deleteDirectory(installsFolder);
+  public void clearPreviousInstall(String installDir) throws IOException {
+    File installFolder = new File(installDir);
+    // Remove installs from previous runs in the same folder
+    if (installFolder.exists()) {
+      logger.info("Deleting previous install folder " + installFolder.getAbsolutePath());
+      FileUtils.deleteDirectory(installFolder);
     }
   }
 
@@ -256,7 +255,7 @@ public abstract class ContainerInstall {
 
   /**
    * Generates a {@link ServerContainer} from the given {@link ContainerInstall}
-   *
+   * 
    * @param containerDescriptors Additional descriptors used to identify a container
    */
   public abstract ServerContainer generateContainer(File containerConfigHome,
@@ -298,15 +297,15 @@ public abstract class ContainerInstall {
 
   /**
    * Finds and extracts the geode module associated with the specified module.
-   *
+   * 
    * @param moduleName The module name (i.e. tomcat, appserver, etc.) of the module that should be
    *        extract. Used as a search parameter to find the module archive.
    * @return The path to the non-archive (extracted) version of the module files
-   * @throws IOException
    */
-  protected static String findAndExtractModule(String moduleName) throws IOException {
+  protected static String findAndExtractModule(String geodeModuleLocation, String moduleName)
+      throws IOException {
     File modulePath = null;
-    File modulesDir = new File(DEFAULT_MODULE_LOCATION);
+    File modulesDir = new File(geodeModuleLocation);
 
     boolean archive = false;
     logger.info("Trying to access build dir " + modulesDir);
@@ -318,21 +317,28 @@ public abstract class ContainerInstall {
         modulePath = file;
 
         archive = !file.isDirectory();
-        if (!archive)
+        if (!archive) {
           break;
+        }
       }
     }
 
+    String extractedModulePath =
+        modulePath.getName().substring(0, modulePath.getName().length() - 4);
+    // Get the name of the new module folder within the extraction directory
+    File newModuleFolder = new File(DEFAULT_MODULE_EXTRACTION_DIR + extractedModulePath);
+    // Remove any previous module folders extracted here
+    if (newModuleFolder.exists()) {
+      logger.info("Deleting previous modules directory " + newModuleFolder.getAbsolutePath());
+      FileUtils.deleteDirectory(newModuleFolder);
+    }
+
     // Unzip if it is a zip file
     if (archive) {
       if (!FilenameUtils.getExtension(modulePath.getAbsolutePath()).equals("zip")) {
         throw new IOException("Bad module archive " + modulePath);
       }
 
-      // Get the name of the new module folder within the extraction directory
-      File newModuleFolder = new File(DEFAULT_MODULE_EXTRACTION_DIR
-          + modulePath.getName().substring(0, modulePath.getName().length() - 4));
-
       // Extract folder to location if not already there
       if (!newModuleFolder.exists()) {
         ZipUtils.unzip(modulePath.getAbsolutePath(), newModuleFolder.getAbsolutePath());
@@ -342,14 +348,15 @@ public abstract class ContainerInstall {
     }
 
     // No module found within directory throw IOException
-    if (modulePath == null)
+    if (modulePath == null) {
       throw new IOException("No module found in " + modulesDir);
+    }
     return modulePath.getAbsolutePath();
   }
 
   /**
    * Edits the specified property within the given property file
-   *
+   * 
    * @param filePath path to the property file
    * @param propertyName property name to edit
    * @param propertyValue new property value
@@ -364,10 +371,11 @@ public abstract class ContainerInstall {
     properties.load(input);
 
     String val;
-    if (append)
+    if (append) {
       val = properties.getProperty(propertyName) + propertyValue;
-    else
+    } else {
       val = propertyValue;
+    }
 
     properties.setProperty(propertyName, val);
     properties.store(new FileOutputStream(filePath), null);
@@ -397,7 +405,7 @@ public abstract class ContainerInstall {
    * {@link #rewriteNodeAttributes(Node, HashMap)},
    * {@link #nodeHasExactAttributes(Node, HashMap, boolean)} to edit the required parts of the XML
    * file.
-   *
+   * 
    * @param XMLPath The path to the xml file to edit
    * @param tagId The id of tag to edit. If null, then this method will add a new xml element,
    *        unless writeOnSimilarAttributeNames is set to true.
@@ -441,11 +449,13 @@ public abstract class ContainerInstall {
       } else {
         Element e = doc.createElement(tagName);
         // Set id attribute
-        if (tagId != null)
+        if (tagId != null) {
           e.setAttribute("id", tagId);
+        }
         // Set other attributes
-        for (String key : attributes.keySet())
+        for (String key : attributes.keySet()) {
           e.setAttribute(key, attributes.get(key));
+        }
 
         // Add it as a child of the tag for the file
         doc.getElementsByTagName(parentTagName).item(0).appendChild(e);
@@ -466,7 +476,7 @@ public abstract class ContainerInstall {
 
   /**
    * Finds the node in the given document with the given name and attribute
-   *
+   * 
    * @param doc XML document to search for the node
    * @param nodeName The name of the node to search for
    * @param name The name of the attribute that the node should contain
@@ -476,15 +486,17 @@ public abstract class ContainerInstall {
   private static Node findNodeWithAttribute(Document doc, String nodeName, String name,
       String value) {
     NodeList nodes = doc.getElementsByTagName(nodeName);
-    if (nodes == null)
+    if (nodes == null) {
       return null;
+    }
 
     for (int i = 0; i < nodes.getLength(); i++) {
       Node node = nodes.item(i);
       Node nodeAttr = node.getAttributes().getNamedItem(name);
 
-      if (nodeAttr != null && nodeAttr.getTextContent().equals(value))
+      if (nodeAttr != null && nodeAttr.getTextContent().equals(value)) {
         return node;
+      }
     }
 
     return null;
@@ -492,7 +504,7 @@ public abstract class ContainerInstall {
 
   /**
    * Replaces the node's attributes with the attributes in the given hashmap
-   *
+   * 
    * @param node XML node that should be edited
    * @param attributes HashMap of strings representing the attributes of a node (key = value)
    * @return The given node with ONLY the given attributes
@@ -501,12 +513,14 @@ public abstract class ContainerInstall {
     NamedNodeMap nodeAttrs = node.getAttributes();
 
     // Remove all previous attributes
-    while (nodeAttrs.getLength() > 0)
+    while (nodeAttrs.getLength() > 0) {
       nodeAttrs.removeNamedItem(nodeAttrs.item(0).getNodeName());
+    }
 
     // Set to new attributes
-    for (String key : attributes.keySet())
+    for (String key : attributes.keySet()) {
       ((Element) node).setAttribute(key, attributes.get(key));
+    }
 
     return node;
   }
@@ -514,7 +528,7 @@ public abstract class ContainerInstall {
   /**
    * Checks to see whether the given XML node has the exact attributes given in the attributes
    * hashmap
-   *
+   * 
    * @param checkSimilarValues If true, will also check to make sure that the given node's
    *        attributes also have the exact same values as the ones given in the attributes HashMap.
    * @return True if the node has only the attributes the are given by the HashMap (no more and no

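The append-or-replace behavior of the `editPropertyFile` hunk above can be sketched in isolation. This is a minimal, hypothetical stand-in (class and method names are not from the patch): when `append` is true the new value is concatenated onto the existing property value, otherwise it replaces it outright.

```java
import java.util.Properties;

// Hypothetical sketch of the value-resolution logic inside
// ContainerInstall.editPropertyFile: append concatenates, otherwise replace.
public class PropertyEditSketch {

  // Note: if the property is absent and append is true, Properties.getProperty
  // returns null and the result would start with "null", mirroring the patch.
  static String resolveValue(Properties props, String name, String value, boolean append) {
    if (append) {
      return props.getProperty(name) + value;
    }
    return value;
  }

  public static void main(String[] args) {
    Properties props = new Properties();
    props.setProperty("jarsToSkip", "a.jar,");
    System.out.println(resolveValue(props, "jarsToSkip", "b.jar", true)); // a.jar,b.jar
  }
}
```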
http://git-wip-us.apache.org/repos/asf/geode/blob/f38dff9d/geode-assembly/src/test/java/org/apache/geode/session/tests/TomcatInstall.java
----------------------------------------------------------------------
diff --git a/geode-assembly/src/test/java/org/apache/geode/session/tests/TomcatInstall.java b/geode-assembly/src/test/java/org/apache/geode/session/tests/TomcatInstall.java
index ba5f6bc..57dc519 100644
--- a/geode-assembly/src/test/java/org/apache/geode/session/tests/TomcatInstall.java
+++ b/geode-assembly/src/test/java/org/apache/geode/session/tests/TomcatInstall.java
@@ -32,6 +32,8 @@ import java.util.regex.Pattern;
  */
 public class TomcatInstall extends ContainerInstall {
 
+  public static final String GEODE_BUILD_HOME_LIB = GEODE_BUILD_HOME + "/lib/";
+
   /**
    * Version of tomcat that this class will install
    *
@@ -43,6 +45,10 @@ public class TomcatInstall extends ContainerInstall {
         "http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.37/bin/apache-tomcat-6.0.37.zip"),
     TOMCAT7(7,
         "http://archive.apache.org/dist/tomcat/tomcat-7/v7.0.73/bin/apache-tomcat-7.0.73.zip"),
+    TOMCAT755(7,
+        "http://archive.apache.org/dist/tomcat/tomcat-7/v7.0.55/bin/apache-tomcat-7.0.55.zip"),
+    TOMCAT779(7,
+        "http://archive.apache.org/dist/tomcat/tomcat-7/v7.0.79/bin/apache-tomcat-7.0.79.zip"),
     TOMCAT8(8,
         "http://archive.apache.org/dist/tomcat/tomcat-8/v8.5.15/bin/apache-tomcat-8.5.15.zip"),
     TOMCAT9(9,
@@ -72,7 +78,7 @@ public class TomcatInstall extends ContainerInstall {
         case TOMCAT9:
           return 8;
         default:
-          throw new IllegalArgumentException("Illegal tomcat version option");
+          return getVersion();
       }
     }
 
@@ -93,6 +99,8 @@ public class TomcatInstall extends ContainerInstall {
         case TOMCAT6:
           return null;
         case TOMCAT7:
+        case TOMCAT755:
+        case TOMCAT779:
           return "tomcat.util.scan.DefaultJarScanner.jarsToSkip";
         case TOMCAT8:
         case TOMCAT9:
@@ -104,42 +112,40 @@ public class TomcatInstall extends ContainerInstall {
   }
 
   private static final String[] tomcatRequiredJars =
-      {"antlr", "commons-lang", "fastutil", "geode-core", "geode-modules", "geode-modules-tomcat7",
-          "geode-modules-tomcat8", "javax.transaction-api", "jgroups", "log4j-api", "log4j-core",
-          "log4j-jul", "shiro-core", "slf4j-api", "slf4j-jdk14", "commons-validator"};
+      {"antlr", "commons-lang", "fastutil", "geode-core", "javax.transaction-api", "jgroups",
+          "log4j-api", "log4j-core", "log4j-jul", "shiro-core", "commons-validator"};
 
   private final TomcatVersion version;
 
-  public TomcatInstall(TomcatVersion version) throws Exception {
-    this(version, ConnectionType.PEER_TO_PEER, DEFAULT_INSTALL_DIR);
-  }
-
   public TomcatInstall(TomcatVersion version, String installDir) throws Exception {
-    this(version, ConnectionType.PEER_TO_PEER, installDir);
+    this(version, ConnectionType.PEER_TO_PEER, installDir, DEFAULT_MODULE_LOCATION,
+        GEODE_BUILD_HOME_LIB);
   }
 
-  public TomcatInstall(TomcatVersion version, ConnectionType connType) throws Exception {
-    this(version, connType, DEFAULT_INSTALL_DIR);
+  public TomcatInstall(TomcatVersion version, ConnectionType connType, String installDir)
+      throws Exception {
+    this(version, connType, installDir, DEFAULT_MODULE_LOCATION, GEODE_BUILD_HOME_LIB);
   }
 
   /**
    * Download and setup an installation tomcat using the {@link ContainerInstall} constructor and
    * some extra functions this class provides
    *
-   * Specifically, this function uses {@link #copyTomcatGeodeReqFiles(String)} to install geode
-   * session into Tomcat, {@link #setupDefaultSettings()} to modify the context and server XML files
-   * within the installation's 'conf' folder, and {@link #updateProperties()} to set the jar
+   * Specifically, this function uses {@link #copyTomcatGeodeReqFiles(String, String)} to install
+   * geode session into Tomcat, {@link #setupDefaultSettings()} to modify the context and server XML
+   * files within the installation's 'conf' folder, and {@link #updateProperties()} to set the jar
    * skipping properties needed to speedup container startup.
    */
-  public TomcatInstall(TomcatVersion version, ConnectionType connType, String installDir)
-      throws Exception {
+  public TomcatInstall(TomcatVersion version, ConnectionType connType, String installDir,
+      String modulesJarLocation, String extraJarsPath) throws Exception {
     // Does download and install from URL
-    super(installDir, version.getDownloadURL(), connType, "tomcat");
+    super(installDir, version.getDownloadURL(), connType, "tomcat", modulesJarLocation);
 
     this.version = version;
+    modulesJarLocation = getModulePath() + "/lib/";
 
     // Install geode sessions into tomcat install
-    copyTomcatGeodeReqFiles(GEODE_BUILD_HOME + "/lib/");
+    copyTomcatGeodeReqFiles(modulesJarLocation, extraJarsPath);
     // Set some default XML attributes in server and cache XMLs
     setupDefaultSettings();
 
@@ -255,39 +261,26 @@ public class TomcatInstall extends ContainerInstall {
    * @throws IOException if the {@link #getModulePath()}, installation lib directory, or extra
    *         directory passed in contain no files.
    */
-  private void copyTomcatGeodeReqFiles(String extraJarsPath) throws IOException {
+  private void copyTomcatGeodeReqFiles(String moduleJarDir, String extraJarsPath)
+      throws IOException {
     ArrayList<File> requiredFiles = new ArrayList<>();
     // The library path for the current tomcat installation
     String tomcatLibPath = getHome() + "/lib/";
 
     // List of required jars and form version regexps from them
-    String versionRegex = "-[0-9]+.*\\.jar";
+    String versionRegex = "-?[0-9]*.*\\.jar";
     ArrayList<Pattern> patterns = new ArrayList<>(tomcatRequiredJars.length);
     for (String jar : tomcatRequiredJars)
       patterns.add(Pattern.compile(jar + versionRegex));
 
     // Don't need to copy any jars already in the tomcat install
     File tomcatLib = new File(tomcatLibPath);
-    if (tomcatLib.exists()) {
-      try {
-        for (File file : tomcatLib.listFiles())
-          patterns.removeIf(pattern -> pattern.matcher(file.getName()).find());
-      } catch (NullPointerException e) {
-        throw new IOException("No files found in tomcat lib directory " + tomcatLibPath);
-      }
-    } else {
-      tomcatLib.mkdir();
-    }
 
     // Find all the required jars in the tomcatModulePath
     try {
-      for (File file : (new File(getModulePath() + "/lib/")).listFiles()) {
-        for (Pattern pattern : patterns) {
-          if (pattern.matcher(file.getName()).find()) {
-            requiredFiles.add(file);
-            patterns.remove(pattern);
-            break;
-          }
+      for (File file : (new File(moduleJarDir)).listFiles()) {
+        if (file.isFile() && file.getName().endsWith(".jar")) {
+          requiredFiles.add(file);
         }
       }
     } catch (NullPointerException e) {
@@ -301,7 +294,6 @@ public class TomcatInstall extends ContainerInstall {
         for (Pattern pattern : patterns) {
           if (pattern.matcher(file.getName()).find()) {
             requiredFiles.add(file);
-            patterns.remove(pattern);
             break;
           }
         }

http://git-wip-us.apache.org/repos/asf/geode/blob/f38dff9d/geode-assembly/src/test/java/org/apache/geode/session/tests/TomcatSessionBackwardsCompatibilityTest.java
----------------------------------------------------------------------
diff --git a/geode-assembly/src/test/java/org/apache/geode/session/tests/TomcatSessionBackwardsCompatibilityTest.java b/geode-assembly/src/test/java/org/apache/geode/session/tests/TomcatSessionBackwardsCompatibilityTest.java
new file mode 100644
index 0000000..7b23380
--- /dev/null
+++ b/geode-assembly/src/test/java/org/apache/geode/session/tests/TomcatSessionBackwardsCompatibilityTest.java
@@ -0,0 +1,244 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.session.tests;
+
+import static org.junit.Assert.assertEquals;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URISyntaxException;
+import java.util.Collection;
+import java.util.List;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TestName;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import org.apache.geode.internal.AvailablePortHelper;
+import org.apache.geode.management.internal.cli.i18n.CliStrings;
+import org.apache.geode.management.internal.cli.util.CommandStringBuilder;
+import org.apache.geode.test.dunit.rules.GfshShellConnectionRule;
+import org.apache.geode.test.dunit.rules.LocatorServerStartupRule;
+import org.apache.geode.test.dunit.standalone.VersionManager;
+import org.apache.geode.test.junit.categories.BackwardCompatibilityTest;
+import org.apache.geode.test.junit.categories.DistributedTest;
+import org.apache.geode.test.junit.runners.CategoryWithParameterizedRunnerFactory;
+
+/**
+ * This test iterates through the versions of Geode and executes session client compatibility with
+ * the current version of Geode.
+ */
+@Category({DistributedTest.class, BackwardCompatibilityTest.class})
+@RunWith(Parameterized.class)
+@Parameterized.UseParametersRunnerFactory(CategoryWithParameterizedRunnerFactory.class)
+public class TomcatSessionBackwardsCompatibilityTest {
+
+  @Parameterized.Parameters
+  public static Collection<String> data() {
+    List<String> result = VersionManager.getInstance().getVersionsWithoutCurrent();
+    result.removeIf(s -> Integer.parseInt(s) < 120);
+    if (result.size() < 1) {
+      throw new RuntimeException("No older versions of Geode were found to test against");
+    }
+    return result;
+  }
+
+  @Rule
+  public transient GfshShellConnectionRule gfsh = new GfshShellConnectionRule();
+
+  @Rule
+  public transient LocatorServerStartupRule locatorStartup = new LocatorServerStartupRule();
+
+  @Rule
+  public transient TestName testName = new TestName();
+
+  public transient Client client;
+  public transient ContainerManager manager;
+
+  File oldBuild;
+  File oldModules;
+
+  TomcatInstall tomcat7079AndOldModules;
+  TomcatInstall tomcat7079AndCurrentModules;
+  TomcatInstall tomcat8AndOldModules;
+  TomcatInstall tomcat8AndCurrentModules;
+
+  int locatorPort;
+  String classPathTomcat7079;
+  String classPathTomcat8;
+
+  public TomcatSessionBackwardsCompatibilityTest(String version) {
+    VersionManager versionManager = VersionManager.getInstance();
+    String installLocation = versionManager.getInstall(version);
+    oldBuild = new File(installLocation);
+    oldModules = new File(installLocation + "/tools/Modules/");
+  }
+
+  protected void startServer(String name, String classPath, int locatorPort) throws Exception {
+    CommandStringBuilder command = new CommandStringBuilder(CliStrings.START_SERVER);
+    command.addOption(CliStrings.START_SERVER__NAME, name);
+    command.addOption(CliStrings.START_SERVER__SERVER_PORT, "0");
+    command.addOption(CliStrings.START_SERVER__CLASSPATH, classPath);
+    command.addOption(CliStrings.START_SERVER__LOCATORS, "localhost[" + locatorPort + "]");
+    gfsh.executeAndVerifyCommand(command.toString());
+  }
+
+  protected void startLocator(String name, String classPath, int port) throws Exception {
+    CommandStringBuilder locStarter = new CommandStringBuilder(CliStrings.START_LOCATOR);
+    locStarter.addOption(CliStrings.START_LOCATOR__MEMBER_NAME, name);
+    locStarter.addOption(CliStrings.START_LOCATOR__CLASSPATH, classPath);
+    locStarter.addOption(CliStrings.START_LOCATOR__PORT, Integer.toString(port));
+    gfsh.executeAndVerifyCommand(locStarter.toString());
+
+  }
+
+  @Before
+  public void setup() throws Exception {
+    tomcat7079AndOldModules = new TomcatInstall(TomcatInstall.TomcatVersion.TOMCAT779,
+        ContainerInstall.ConnectionType.CLIENT_SERVER,
+        ContainerInstall.DEFAULT_INSTALL_DIR + "Tomcat7079AndOldModules",
+        oldModules.getAbsolutePath(), oldBuild.getAbsolutePath() + "/lib");
+
+    tomcat7079AndCurrentModules = new TomcatInstall(TomcatInstall.TomcatVersion.TOMCAT779,
+        ContainerInstall.ConnectionType.CLIENT_SERVER,
+        ContainerInstall.DEFAULT_INSTALL_DIR + "Tomcat7079AndCurrentModules");
+
+    tomcat8AndOldModules = new TomcatInstall(TomcatInstall.TomcatVersion.TOMCAT8,
+        ContainerInstall.ConnectionType.CLIENT_SERVER,
+        ContainerInstall.DEFAULT_INSTALL_DIR + "Tomcat8AndOldModules", oldModules.getAbsolutePath(),
+        oldBuild.getAbsolutePath() + "/lib");
+
+    tomcat8AndCurrentModules = new TomcatInstall(TomcatInstall.TomcatVersion.TOMCAT8,
+        ContainerInstall.ConnectionType.CLIENT_SERVER,
+        ContainerInstall.DEFAULT_INSTALL_DIR + "Tomcat8AndCurrentModules");
+
+    classPathTomcat7079 = tomcat7079AndCurrentModules.getHome() + "/lib/*" + File.pathSeparator
+        + tomcat7079AndCurrentModules.getHome() + "/bin/*";
+    classPathTomcat8 = tomcat8AndCurrentModules.getHome() + "/lib/*" + File.pathSeparator
+        + tomcat8AndCurrentModules.getHome() + "/bin/*";
+
+    // Get available port for the locator
+    locatorPort = AvailablePortHelper.getRandomAvailableTCPPort();
+
+    tomcat7079AndOldModules.setDefaultLocator("localhost", locatorPort);
+    tomcat7079AndCurrentModules.setDefaultLocator("localhost", locatorPort);
+
+    tomcat8AndOldModules.setDefaultLocator("localhost", locatorPort);
+    tomcat8AndCurrentModules.setDefaultLocator("localhost", locatorPort);
+
+    client = new Client();
+    manager = new ContainerManager();
+    // The parameterized test name contains "[" and "]", which would produce a
+    // malformed URI, so strip those characters before using the name
+    manager.setTestName(testName.getMethodName().replace("[", "").replace("]", ""));
+  }
+
+  private void startClusterWithTomcat(String tomcatClassPath) throws Exception {
+    startLocator("loc", tomcatClassPath, locatorPort);
+    startServer("server", tomcatClassPath, locatorPort);
+  }
+
+  /**
+   * Stops all containers that were previously started and cleans up their configurations
+   */
+  @After
+  public void stop() throws Exception {
+    manager.stopAllActiveContainers();
+    manager.cleanUp();
+
+    CommandStringBuilder locStop = new CommandStringBuilder(CliStrings.STOP_LOCATOR);
+    locStop.addOption(CliStrings.STOP_LOCATOR__DIR, "loc");
+    gfsh.executeAndVerifyCommand(locStop.toString());
+
+    CommandStringBuilder command = new CommandStringBuilder(CliStrings.STOP_SERVER);
+    command.addOption(CliStrings.STOP_SERVER__DIR, "server");
+    gfsh.executeAndVerifyCommand(command.toString());
+  }
+
+  private void doPutAndGetSessionOnAllClients() throws IOException, URISyntaxException {
+    // This has to happen at the start of every test
+    manager.startAllInactiveContainers();
+
+    String key = "value_testSessionPersists";
+    String value = "Foo";
+
+    client.setPort(Integer.parseInt(manager.getContainerPort(0)));
+    Client.Response resp = client.set(key, value);
+    String cookie = resp.getSessionCookie();
+
+    for (int i = 0; i < manager.numContainers(); i++) {
+      System.out.println("Checking get for container:" + i);
+      client.setPort(Integer.parseInt(manager.getContainerPort(i)));
+      resp = client.get(key);
+
+      assertEquals("Sessions are not replicating properly", cookie, resp.getSessionCookie());
+      assertEquals("Session data is not replicating properly", value, resp.getResponse());
+    }
+  }
+
+  @Test
+  public void tomcat7079WithOldModuleCanDoPuts() throws Exception {
+    startClusterWithTomcat(classPathTomcat7079);
+    manager.addContainer(tomcat7079AndOldModules);
+    manager.addContainer(tomcat7079AndOldModules);
+    doPutAndGetSessionOnAllClients();
+  }
+
+  @Test
+  public void tomcat7079WithOldModulesMixedWithCurrentCanDoPutFromOldModule() throws Exception {
+    startClusterWithTomcat(classPathTomcat7079);
+    manager.addContainer(tomcat7079AndOldModules);
+    manager.addContainer(tomcat7079AndCurrentModules);
+    doPutAndGetSessionOnAllClients();
+  }
+
+  @Test
+  public void tomcat7079WithOldModulesMixedWithCurrentCanDoPutFromCurrentModule() throws Exception {
+    startClusterWithTomcat(classPathTomcat7079);
+    manager.addContainer(tomcat7079AndCurrentModules);
+    manager.addContainer(tomcat7079AndOldModules);
+    doPutAndGetSessionOnAllClients();
+  }
+
+  @Test
+  public void tomcat8WithOldModuleCanDoPuts() throws Exception {
+    startClusterWithTomcat(classPathTomcat8);
+    manager.addContainer(tomcat8AndOldModules);
+    manager.addContainer(tomcat8AndOldModules);
+    doPutAndGetSessionOnAllClients();
+  }
+
+  @Test
+  public void tomcat8WithOldModulesMixedWithCurrentCanDoPutFromOldModule() throws Exception {
+    startClusterWithTomcat(classPathTomcat8);
+    manager.addContainer(tomcat8AndOldModules);
+    manager.addContainer(tomcat8AndCurrentModules);
+    doPutAndGetSessionOnAllClients();
+  }
+
+  @Test
+  public void tomcat8WithOldModulesMixedWithCurrentCanDoPutFromCurrentModule() throws Exception {
+    startClusterWithTomcat(classPathTomcat8);
+    manager.addContainer(tomcat8AndCurrentModules);
+    manager.addContainer(tomcat8AndOldModules);
+    doPutAndGetSessionOnAllClients();
+  }
+
+}
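The version filter in `data()` above can be illustrated on its own. In this sketch (hypothetical class name, not part of the patch), old-version ids are encoded as integers — e.g. "120" for release 1.2.0 — and the session modules the test needs only ship from 1.2.0 onward, so anything below 120 is dropped.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the @Parameterized.Parameters filter in
// TomcatSessionBackwardsCompatibilityTest: keep only versions >= 1.2.0.
public class OldVersionFilter {

  static List<String> sessionCompatibleVersions(List<String> versions) {
    List<String> result = new ArrayList<>(versions);
    // Version ids are numeric strings such as "100", "111", "120"
    result.removeIf(s -> Integer.parseInt(s) < 120);
    return result;
  }
}
```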

http://git-wip-us.apache.org/repos/asf/geode/blob/f38dff9d/geode-core/src/test/java/org/apache/geode/test/dunit/standalone/VersionManager.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/test/dunit/standalone/VersionManager.java b/geode-core/src/test/java/org/apache/geode/test/dunit/standalone/VersionManager.java
index 8eefa01..9f4c357 100755
--- a/geode-core/src/test/java/org/apache/geode/test/dunit/standalone/VersionManager.java
+++ b/geode-core/src/test/java/org/apache/geode/test/dunit/standalone/VersionManager.java
@@ -24,7 +24,9 @@ import java.util.Collections;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
+import java.util.Optional;
 import java.util.Properties;
+import java.util.function.BiConsumer;
 
 /**
  * VersionManager loads the class-paths for all of the releases of Geode configured for
@@ -44,7 +46,11 @@ public class VersionManager {
   protected static void init() {
     instance = new VersionManager();
     final String fileName = "geodeOldVersionClasspaths.txt";
+    final String installLocations = "geodeOldVersionInstalls.txt";
     instance.findVersions(fileName);
+    instance.findInstalls(installLocations);
+    System.out
+        .println("VersionManager has loaded the following classpaths:\n" + instance.classPaths);
   }
 
   public static VersionManager getInstance() {
@@ -58,7 +64,7 @@ public class VersionManager {
    * for unit testing, this creates a VersionManager with paths loaded from the given file, which
    * may or may not exist. The instance is not retained
    */
-  protected static VersionManager getInstance(String classpathsFileName) {
+  protected static VersionManager getInstance(String classpathsFileName, String installFileName) {
     VersionManager result = new VersionManager();
     result.findVersions(classpathsFileName);
     return result;
@@ -71,6 +77,8 @@ public class VersionManager {
 
   private List<String> testVersions = new ArrayList<String>(10);
 
+  private Map<String, String> installs = new HashMap();
+
   /**
    * Test to see if a version string is known to VersionManager. Versions are either CURRENT_VERSION
    * or one of the versions returned by VersionManager#getVersions()
@@ -94,6 +102,11 @@ public class VersionManager {
     return classPaths.get(version);
   }
 
+
+  public String getInstall(String version) {
+    return installs.get(version);
+  }
+
   /**
    * Returns a list of older versions available for testing
    */
@@ -118,30 +131,57 @@ public class VersionManager {
 
   private void findVersions(String fileName) {
     // this file is created by the gradle task createClasspathsPropertiesFile
+    readVersionsFile(fileName, (version, path) -> {
+      Optional<String> parsedVersion = parseVersion(version);
+      if (parsedVersion.isPresent()) {
+        classPaths.put(parsedVersion.get(), path);
+        testVersions.add(parsedVersion.get());
+      }
+    });
+  }
+
+  private void findInstalls(String fileName) {
+    readVersionsFile(fileName, (version, install) -> {
+      Optional<String> parsedVersion = parseVersion(version);
+      if (parsedVersion.isPresent()) {
+        installs.put(parsedVersion.get(), install);
+      }
+    });
+  }
+
+  private Optional<String> parseVersion(String version) {
+    String parsedVersion = null;
+    if (version.startsWith("test") && version.length() >= "test".length()) {
+      if (version.equals("test")) {
+        parsedVersion = CURRENT_VERSION;
+      } else {
+        parsedVersion = version.substring("test".length());
+      }
+    }
+    return Optional.ofNullable(parsedVersion);
+  }
+
+  private void readVersionsFile(String fileName, BiConsumer<String, String> consumer) {
+    Properties props = readPropertiesFile(fileName);
+    props.forEach((k, v) -> {
+      consumer.accept(k.toString(), v.toString());
+    });
+  }
+
+  public Properties readPropertiesFile(String fileName) {
+    // this file is created by the gradle task createClasspathsPropertiesFile
     Properties props = new Properties();
     URL url = VersionManager.class.getResource("/" + fileName);
     if (url == null) {
       loadFailure = "VersionManager: unable to locate " + fileName + " in class-path";
-      return;
+      return props;
     }
     try (InputStream in = VersionManager.class.getResource("/" + fileName).openStream()) {
       props.load(in);
     } catch (IOException e) {
       loadFailure = "VersionManager: unable to read resource " + fileName;
-      return;
-    }
-
-    for (Map.Entry<Object, Object> entry : props.entrySet()) {
-      String version = (String) entry.getKey();
-      if (version.startsWith("test") && version.length() >= "test".length()) {
-        if (version.equals("test")) {
-          version = CURRENT_VERSION;
-        } else {
-          version = version.substring("test".length());
-        }
-        classPaths.put(version, (String) entry.getValue());
-        testVersions.add(version);
-      }
+      return props;
     }
+    return props;
   }
 }
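The key-parsing convention factored out into `parseVersion` above works like this: keys in the generated properties files are either `test` (the current build) or `test<version>` (for example `test120`); any other key is ignored. A self-contained sketch, with `CURRENT` standing in for `VersionManager.CURRENT_VERSION` (the exact constant value is an assumption here):

```java
import java.util.Optional;

// Hypothetical sketch of VersionManager.parseVersion: strip the "test" prefix
// from a properties key, mapping the bare "test" key to the current version.
public class VersionKeyParser {

  // Placeholder for VersionManager.CURRENT_VERSION
  static final String CURRENT = "000";

  static Optional<String> parseVersion(String key) {
    if (!key.startsWith("test")) {
      return Optional.empty(); // non-version keys are ignored
    }
    return Optional.of(key.equals("test") ? CURRENT : key.substring("test".length()));
  }
}
```

Returning `Optional` lets `findVersions` and `findInstalls` share one parser while each decides what to do with a hit, which is why the patch replaces the old inline loop.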

http://git-wip-us.apache.org/repos/asf/geode/blob/f38dff9d/geode-core/src/test/java/org/apache/geode/test/dunit/standalone/VersionManagerJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/test/dunit/standalone/VersionManagerJUnitTest.java b/geode-core/src/test/java/org/apache/geode/test/dunit/standalone/VersionManagerJUnitTest.java
index af1fa58..7e89dfc 100755
--- a/geode-core/src/test/java/org/apache/geode/test/dunit/standalone/VersionManagerJUnitTest.java
+++ b/geode-core/src/test/java/org/apache/geode/test/dunit/standalone/VersionManagerJUnitTest.java
@@ -27,13 +27,15 @@ public class VersionManagerJUnitTest {
 
   @Test
   public void exceptionIsNotThrownInInitialization() throws Exception {
-    VersionManager instance = VersionManager.getInstance("--nonexistant-file?--");
+    VersionManager instance =
+        VersionManager.getInstance("--nonexistant-file?--", "--nonexistant-install-file--");
     Assert.assertNotEquals("", instance.loadFailure);
   }
 
   @Test
   public void exceptionIsThrownOnUse() throws Exception {
-    VersionManager instance = VersionManager.getInstance("--nonexistant-file?--");
+    VersionManager instance =
+        VersionManager.getInstance("--nonexistant-file?--", "--nonexistant-install-file--");
     Assert.assertNotEquals("", instance.loadFailure);
     assertThatThrownBy(() -> instance.getVersionsWithoutCurrent()).hasMessage(instance.loadFailure);
     assertThatThrownBy(() -> instance.getVersions()).hasMessage(instance.loadFailure);

http://git-wip-us.apache.org/repos/asf/geode/blob/f38dff9d/geode-old-versions/build.gradle
----------------------------------------------------------------------
diff --git a/geode-old-versions/build.gradle b/geode-old-versions/build.gradle
index 1a39ea0..2e9257c 100644
--- a/geode-old-versions/build.gradle
+++ b/geode-old-versions/build.gradle
@@ -15,11 +15,16 @@
  * limitations under the License.
  */
 
+plugins {
+  id "de.undercouch.download" version "3.2.0"
+}
 
+import de.undercouch.gradle.tasks.download.Download
 disableMavenPublishing()
 
-def addTestSource(def source, def geodeVersion) {
-//  def sourceSet =
+project.ext.installs = new Properties();
+
+def addOldVersion(def source, def geodeVersion, def downloadInstall) {
   sourceSets.create(source, {
     compileClasspath += configurations.provided
     runtimeClasspath += configurations.provided
@@ -33,14 +38,36 @@ def addTestSource(def source, def geodeVersion) {
   dependencies.add "${source}Compile", "org.apache.geode:geode-cq:$geodeVersion"
   dependencies.add "${source}Compile", "org.apache.geode:geode-rebalancer:$geodeVersion"
 
-}
+  project.ext.installs.setProperty(source, "$buildDir/apache-geode-${geodeVersion}")
 
-// Add sourceSets for backwards compatibility, rolling upgrade, and
-// pdx testing.
-addTestSource('test100', '1.0.0-incubating')
-addTestSource('test110', '1.1.0')
-addTestSource('test111', '1.1.1')
-addTestSource('test120', '1.2.0')
+  task "downloadZipFile${source}" (type: Download) {
+    src "https://www.apache.org/dyn/closer.cgi?action=download&filename=geode/$geodeVersion/apache-geode-${geodeVersion}.tar.gz"
+    dest new File(buildDir, "apache-geode-${geodeVersion}.tar.gz")
+  }
+
+  task "downloadSHA${source}" (type: Download) {
+    src "https://www.apache.org/dist/geode/${geodeVersion}/apache-geode-${geodeVersion}.tar.gz.sha256"
+    dest new File(buildDir, "apache-geode-${geodeVersion}.tar.gz.sha256")
+  }
+
+
+  task "verifyGeode${source}" (type: de.undercouch.gradle.tasks.download.Verify, dependsOn: [tasks["downloadSHA${source}"], tasks["downloadZipFile${source}"]]) {
+    src tasks["downloadZipFile${source}"].dest
+    algorithm "SHA-256"
+    doFirst {
+      checksum new File(buildDir, "apache-geode-${geodeVersion}.tar.gz.sha256").text.split(' ')[0]
+    }
+  }
+
+  task "downloadAndUnzipFile${source}" (dependsOn: "verifyGeode${source}", type: Copy) {
+    from tarTree(tasks["downloadZipFile${source}"].dest)
+    into buildDir
+  }
+
+  if (downloadInstall) {
+    createGeodeClasspathsFile.dependsOn tasks["downloadAndUnzipFile${source}"]
+  }
+}
 
 def generatedResources = "$buildDir/generated-resources/main"
 
@@ -52,7 +79,9 @@ sourceSets {
 
 task createGeodeClasspathsFile  {
   File classpathsFile = file("$generatedResources/geodeOldVersionClasspaths.txt")
-  outputs.file(classpathsFile);
+  File installsFile = file("$generatedResources/geodeOldVersionInstalls.txt")
+  outputs.file(classpathsFile)
+  outputs.file(installsFile)
 
   doLast {
     Properties versions = new Properties();
@@ -65,6 +94,21 @@ task createGeodeClasspathsFile  {
     new FileOutputStream(classpathsFile).withStream { fos ->
       versions.store(fos, '')
     }
+
+    installsFile.getParentFile().mkdirs();
+
+    new FileOutputStream(installsFile).withStream { fos ->
+      project.ext.installs.store(fos, '')
+    }
   }
+
+  // Add sourceSets for backwards compatibility, rolling upgrade, and
+  // pdx testing.
+  addOldVersion('test100', '1.0.0-incubating', false)
+  addOldVersion('test110', '1.1.0', false)
+  addOldVersion('test111', '1.1.1', false)
+  addOldVersion('test120', '1.2.0', true)
+
 }
 
+

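The build change above stores the old-version classpaths and install locations with `Properties.store()`, which means a consumer can read them back with `Properties.load()`. A small round-trip sketch of that file format (file name and key mirror the build script; the reader side is illustrative, not the actual test-framework code):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

public class OldVersionProps {
  public static void main(String[] args) throws IOException {
    // Write a properties file the way the gradle task does (Properties.store).
    File f = File.createTempFile("geodeOldVersionClasspaths", ".txt");
    Properties out = new Properties();
    out.setProperty("test120", "/path/to/1.2.0/classes");
    try (FileOutputStream fos = new FileOutputStream(f)) {
      out.store(fos, "");
    }

    // Read it back the way a consumer would (Properties.load).
    Properties in = new Properties();
    try (FileInputStream fis = new FileInputStream(f)) {
      in.load(fis);
    }
    System.out.println(in.getProperty("test120"));
    f.delete();
  }
}
```

Because `store`/`load` escape special characters consistently, Windows-style paths with backslashes survive the round trip as well.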

[49/51] [abbrv] geode git commit: GEODE-3055: Should use a conservative fix to only catch the PartitionOfflineEx to remove the leader region bucket.

Posted by kl...@apache.org.
GEODE-3055: Should use a conservative fix to only catch the PartitionOfflineEx
to remove the leader region bucket.

Previous fix to catch all RuntimeException is too aggressive.

This closes #723


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/d809076d
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/d809076d
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/d809076d

Branch: refs/heads/feature/GEODE-1279
Commit: d809076d01c28b9b819ab32d6af172004b3f8740
Parents: 1c04aab
Author: zhouxh <gz...@pivotal.io>
Authored: Fri Aug 18 14:51:31 2017 -0700
Committer: zhouxh <gz...@pivotal.io>
Committed: Fri Aug 18 16:11:53 2017 -0700

----------------------------------------------------------------------
 .../apache/geode/internal/cache/PartitionedRegionDataStore.java   | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/d809076d/geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegionDataStore.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegionDataStore.java b/geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegionDataStore.java
index 3d9ac18..6b0c0a8 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegionDataStore.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegionDataStore.java
@@ -21,6 +21,7 @@ import org.apache.geode.cache.Region.Entry;
 import org.apache.geode.cache.execute.Function;
 import org.apache.geode.cache.execute.FunctionException;
 import org.apache.geode.cache.execute.ResultSender;
+import org.apache.geode.cache.persistence.PartitionOfflineException;
 import org.apache.geode.cache.query.QueryInvalidException;
 import org.apache.geode.cache.query.internal.QCompiler;
 import org.apache.geode.cache.query.internal.index.IndexCreationData;
@@ -493,7 +494,7 @@ public class PartitionedRegionDataStore implements HasCachePerfStats {
 
       return result;
 
-    } catch (RuntimeException validationException) {
+    } catch (PartitionOfflineException validationException) {
       // GEODE-3055
       PartitionedRegion leader = ColocationHelper.getLeaderRegion(this.partitionedRegion);
       boolean isLeader = leader.equals(this.partitionedRegion);

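The one-line fix above narrows the catch from `RuntimeException` to `PartitionOfflineException`, so only the known failure mode triggers the bucket cleanup. A generic sketch of why a narrow catch is safer (toy exception types standing in for Geode's, not the actual classes):

```java
// Toy hierarchy: Broad plays the role of RuntimeException,
// Narrow the role of PartitionOfflineException.
class Broad extends RuntimeException {
  Broad(String m) { super(m); }
}

class Narrow extends Broad {
  Narrow(String m) { super(m); }
}

public class NarrowCatchDemo {
  static String handle(RuntimeException e) {
    try {
      throw e;
    } catch (Narrow expected) {
      return "cleanup";    // targeted recovery for the known failure mode
    } catch (Broad unexpected) {
      return "propagate";  // anything else is left to the caller
    }
  }

  public static void main(String[] args) {
    System.out.println(handle(new Narrow("offline")));   // cleanup
    System.out.println(handle(new Broad("other bug")));  // propagate
  }
}
```

Catching the broad type would have run the cleanup path for unrelated bugs too, masking their real cause — which is exactly what the commit message calls "too aggressive."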

[44/51] [abbrv] geode git commit: GEODE-3395 Variable-ize product version and name in user guide - Developing

Posted by kl...@apache.org.
http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/implementing_write_behind_event_handler.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/implementing_write_behind_event_handler.html.md.erb b/geode-docs/developing/events/implementing_write_behind_event_handler.html.md.erb
index 54cf174..76b1248 100644
--- a/geode-docs/developing/events/implementing_write_behind_event_handler.html.md.erb
+++ b/geode-docs/developing/events/implementing_write_behind_event_handler.html.md.erb
@@ -25,13 +25,13 @@ An `AsyncEventListener` asynchronously processes batches of events after they ha
 
 An `AsyncEventListener` instance is serviced by its own dedicated thread in which a callback method is invoked. Events that update a region are placed in an internal `AsyncEventQueue`, and one or more threads dispatch batches of events at a time to the listener implementation.
 
-You can configure an `AsyncEventQueue` to be either serial or parallel. A serial queue is deployed to one Geode member, and it delivers all of a region's events, in order of occurrence, to a configured `AsyncEventListener` implementation. A parallel queue is deployed to multiple Geode members, and each instance of the queue delivers region events, possibly simultaneously, to a local `AsyncEventListener` implementation.
+You can configure an `AsyncEventQueue` to be either serial or parallel. A serial queue is deployed to one <%=vars.product_name%> member, and it delivers all of a region's events, in order of occurrence, to a configured `AsyncEventListener` implementation. A parallel queue is deployed to multiple <%=vars.product_name%> members, and each instance of the queue delivers region events, possibly simultaneously, to a local `AsyncEventListener` implementation.
 
-While a parallel queue provides the best throughput for writing events, it provides less control for ordering those events. With a parallel queue, you cannot preserve event ordering for a region as a whole because multiple Geode servers queue and deliver the region's events at the same time. However, the ordering of events for a given partition (or for a given queue of a distributed region) can be preserved.
+While a parallel queue provides the best throughput for writing events, it provides less control for ordering those events. With a parallel queue, you cannot preserve event ordering for a region as a whole because multiple <%=vars.product_name%> servers queue and deliver the region's events at the same time. However, the ordering of events for a given partition (or for a given queue of a distributed region) can be preserved.
 
 For both serial and parallel queues, you can control the maximum amount of memory that each queue uses, as well as the batch size and frequency for processing batches in the queue. You can also configure queues to persist to disk (instead of simply overflowing to disk) so that write-behind caching can pick up where it left off when a member shuts down and is later restarted.
 
-Optionally, a queue can use multiple threads to dispatch queued events. When you configure multiple threads for a serial queue, the logical queue that is hosted on a Geode member is divided into multiple physical queues, each with a dedicated dispatcher thread. You can then configure whether the threads dispatch queued events by key, by thread, or in the same order in which events were added to the queue. When you configure multiple threads for a parallel queue, each queue hosted on a Geode member is processed by dispatcher threads; the total number of queues created depends on the number of members that host the region.
+Optionally, a queue can use multiple threads to dispatch queued events. When you configure multiple threads for a serial queue, the logical queue that is hosted on a <%=vars.product_name%> member is divided into multiple physical queues, each with a dedicated dispatcher thread. You can then configure whether the threads dispatch queued events by key, by thread, or in the same order in which events were added to the queue. When you configure multiple threads for a parallel queue, each queue hosted on a <%=vars.product_name%> member is processed by dispatcher threads; the total number of queues created depends on the number of members that host the region.
 
 A `GatewayEventFilter` can be placed on the `AsyncEventQueue` to control whether a particular event is sent to a selected `AsyncEventListener`. For example, events associated with sensitive data could be detected and not queued. For more detail, see the Javadocs for `GatewayEventFilter`.
 
@@ -61,11 +61,11 @@ Review the following guidelines before using an AsyncEventListener:
 
 -   If you use an `AsyncEventListener` to implement a write-behind cache listener, your code should check for the possibility that an existing database connection may have been closed due to an earlier exception. For example, check for `Connection.isClosed()` in a catch block and re-create the connection as needed before performing further operations.
 -   Use a serial `AsyncEventQueue` if you need to preserve the order of region events within a thread when delivering events to your listener implementation. Use parallel queues when the order of events within a thread is not important, and when you require maximum throughput for processing events. In both cases, serial and parallel, the order of operations on a given key is preserved within the scope of the thread.
--   You must install the `AsyncEventListener` implementation on a Geode member that hosts the region whose events you want to process.
--   If you configure a parallel `AsyncEventQueue`, deploy the queue on each Geode member that hosts the region.
+-   You must install the `AsyncEventListener` implementation on a <%=vars.product_name%> member that hosts the region whose events you want to process.
+-   If you configure a parallel `AsyncEventQueue`, deploy the queue on each <%=vars.product_name%> member that hosts the region.
 -   You can install a listener on more than one member to provide high availability and guarantee delivery for events, in the event that a member with the active `AsyncEventListener` shuts down. At any given time only one member has an active listener for dispatching events. The listeners on other members remain on standby for redundancy. For best performance and most efficient use of memory, install only one standby listener (redundancy of at most one).
 -   Install no more than one standby listener (redundancy of at most one) for performance and memory reasons.
--   To preserve pending events through member shutdowns, configure Geode to persist the internal queue of the `AsyncEventListener` to an available disk store. By default, any pending events that reside in the internal queue of an `AsyncEventListener` are lost if the active listener's member shuts down.
+-   To preserve pending events through member shutdowns, configure <%=vars.product_name%> to persist the internal queue of the `AsyncEventListener` to an available disk store. By default, any pending events that reside in the internal queue of an `AsyncEventListener` are lost if the active listener's member shuts down.
 -   To ensure high availability and reliable delivery of events, configure the event queue to be both persistent and redundant.
 
 ## <a id="implementing_write_behind_cache_event_handling__section_FB3EB382E37945D9895E09B47A64D6B9" class="no-quick-link"></a>Implementing an AsyncEventListener
@@ -94,7 +94,7 @@ class MyAsyncEventListener implements AsyncEventListener {
 
 ## <a id="implementing_write_behind_cache_event_handling__section_AB80262CFB6D4867B52A5D6D880A5294" class="no-quick-link"></a>Processing AsyncEvents
 
-Use the [AsyncEventListener.processEvents](/releases/latest/javadoc/org/apache/geode/cache/asyncqueue/AsyncEventListener.html) method to process AsyncEvents. This method is called asynchronously when events are queued to be processed. The size of the list reflects the number of batch events where batch size is defined in the AsyncEventQueueFactory. The `processEvents` method returns a boolean; true if the AsyncEvents are processed correctly, and false if any events fail processing. As long as `processEvents` returns false, Geode continues to re-try processing the events.
+Use the [AsyncEventListener.processEvents](/releases/latest/javadoc/org/apache/geode/cache/asyncqueue/AsyncEventListener.html) method to process AsyncEvents. This method is called asynchronously when events are queued to be processed. The size of the list reflects the number of batch events where batch size is defined in the AsyncEventQueueFactory. The `processEvents` method returns a boolean; true if the AsyncEvents are processed correctly, and false if any events fail processing. As long as `processEvents` returns false, <%=vars.product_name%> continues to re-try processing the events.
 
 You can use the `getDeserializedValue` method to obtain cache values for entries that have been updated or created. Since the `getDeserializedValue` method will return a null value for destroyed entries, you should use the `getKey` method to obtain references to cache objects that have been destroyed. Here's an example of processing AsyncEvents:
 
@@ -188,11 +188,11 @@ To configure a write-behind cache listener, you first configure an asynchronous
     AsyncEventQueue asyncQueue = factory.create("sampleQueue", listener);
     ```
 
-2.  If you are using a parallel `AsyncEventQueue`, the gfsh example above requires no alteration, as gfsh applies to all members. If using cache.xml or the Java API to configure your `AsyncEventQueue`, repeat the above configuration in each Geode member that will host the region. Use the same ID and configuration settings for each queue configuration.
+2.  If you are using a parallel `AsyncEventQueue`, the gfsh example above requires no alteration, as gfsh applies to all members. If using cache.xml or the Java API to configure your `AsyncEventQueue`, repeat the above configuration in each <%=vars.product_name%> member that will host the region. Use the same ID and configuration settings for each queue configuration.
     **Note:**
     You can ensure other members use the sample configuration by using the cluster configuration service available in gfsh. See [Overview of the Cluster Configuration Service](../../configuring/cluster_config/gfsh_persist.html).
 
-3.  On each Geode member that hosts the `AsyncEventQueue`, assign the queue to each region that you want to use with the `AsyncEventListener` implementation.
+3.  On each <%=vars.product_name%> member that hosts the `AsyncEventQueue`, assign the queue to each region that you want to use with the `AsyncEventListener` implementation.
 
     **gfsh Configuration**
 
@@ -234,7 +234,7 @@ To configure a write-behind cache listener, you first configure an asynchronous
     mutator.addAsyncEventQueueId("sampleQueue");        
     ```
 
-    See the [Geode API documentation](/releases/latest/javadoc/org/apache/geode/cache/AttributesMutator.html) for more information.
+    See the [<%=vars.product_name%> API documentation](/releases/latest/javadoc/org/apache/geode/cache/AttributesMutator.html) for more information.
 
 4.  Optionally configure persistence and conflation for the queue.
     **Note:**

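The documentation above notes that `processEvents` returning `false` causes the batch to be re-delivered until it succeeds. A standalone sketch of that retry-until-true contract, using a plain-Java stand-in interface rather than the real Geode `AsyncEventListener` (so it runs without Geode on the classpath):

```java
import java.util.List;

// Hypothetical stand-in for AsyncEventListener: true = batch handled,
// false = re-deliver the same batch.
interface BatchListener<E> {
  boolean processEvents(List<E> events);
}

public class RetryDispatchDemo {
  // Re-deliver the same batch until the listener reports success,
  // up to a bounded number of attempts for the sake of the demo.
  static <E> int dispatch(BatchListener<E> listener, List<E> batch, int maxAttempts) {
    int attempts = 0;
    while (attempts < maxAttempts) {
      attempts++;
      if (listener.processEvents(batch)) {
        return attempts;
      }
    }
    return attempts;
  }

  public static void main(String[] args) {
    int[] failures = {2}; // fail the first two deliveries, then succeed
    BatchListener<String> flaky = batch -> failures[0]-- <= 0;
    System.out.println(dispatch(flaky, List.of("e1", "e2"), 10)); // 3
  }
}
```

The takeaway for a real write-behind listener is the same: return `false` only for transient failures (a dropped database connection, say), because a permanent `false` keeps the batch at the head of the queue indefinitely.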
http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/list_of_event_handlers_and_events.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/list_of_event_handlers_and_events.html.md.erb b/geode-docs/developing/events/list_of_event_handlers_and_events.html.md.erb
index cda18ee..a1e763a 100644
--- a/geode-docs/developing/events/list_of_event_handlers_and_events.html.md.erb
+++ b/geode-docs/developing/events/list_of_event_handlers_and_events.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode provides many types of events and event handlers to help you manage your different data and application needs.
+<%=vars.product_name%> provides many types of events and event handlers to help you manage your different data and application needs.
 
 ## <a id="event_handlers_and_events__section_E7B7502F673B43E794884D0F6BF537CF" class="no-quick-link"></a>Event Handlers
 
@@ -79,7 +79,7 @@ Use either cache handlers or membership handlers in any single application. Do n
 <td><code class="ph codeph">MembershipListener</code>
 <p>(org.apache.geode.management .membership.MembershipListener)</p></td>
 <td><code class="ph codeph">MembershipEvent</code></td>
-<td>Use this interface to receive membership events only about peers. This listener's callback methods are invoked when peer members join or leave the Geode distributed system. Callback methods include <code class="ph codeph">memberCrashed</code>, <code class="ph codeph">memberJoined</code>, and <code class="ph codeph">memberLeft</code> (graceful exit).</td>
+<td>Use this interface to receive membership events only about peers. This listener's callback methods are invoked when peer members join or leave the <%=vars.product_name%> distributed system. Callback methods include <code class="ph codeph">memberCrashed</code>, <code class="ph codeph">memberJoined</code>, and <code class="ph codeph">memberLeft</code> (graceful exit).</td>
 </tr>
 <tr>
 <td><code class="ph codeph">RegionMembershipListener</code></td>
@@ -151,7 +151,7 @@ The events in this table are cache events unless otherwise noted.
 <tr>
 <td><code class="ph codeph">EntryEvent</code></td>
 <td><code class="ph codeph">CacheListener</code>, <code class="ph codeph">CacheWriter</code>, <code class="ph codeph">TransactionListener</code> (inside the <code class="ph codeph">TransactionEvent</code>)</td>
-<td>Extends <code class="ph codeph">CacheEvent</code> for entry events. Contains information about an event affecting a data entry in the cache. The information includes the key, the value before this event, and the value after this event. <code class="ph codeph">EntryEvent.getNewValue</code> returns the current value of the data entry. <code class="ph codeph">EntryEvent.getOldValue</code> returns the value before this event if it is available. For a partitioned region, returns the old value if the local cache holds the primary copy of the entry. <code class="ph codeph">EntryEvent</code> provides the Geode transaction ID if available.
+<td>Extends <code class="ph codeph">CacheEvent</code> for entry events. Contains information about an event affecting a data entry in the cache. The information includes the key, the value before this event, and the value after this event. <code class="ph codeph">EntryEvent.getNewValue</code> returns the current value of the data entry. <code class="ph codeph">EntryEvent.getOldValue</code> returns the value before this event if it is available. For a partitioned region, returns the old value if the local cache holds the primary copy of the entry. <code class="ph codeph">EntryEvent</code> provides the <%=vars.product_name%> transaction ID if available.
 <p>You can retrieve serialized values from <code class="ph codeph">EntryEvent</code> using the <code class="ph codeph">getSerialized</code>* methods. This is useful if you get values from one region’s events just to put them into a separate cache region. There is no counterpart <code class="ph codeph">put</code> function as the put recognizes that the value is serialized and bypasses the serialization step.</p></td>
 </tr>
 <tr>

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/tune_client_server_event_messaging.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/tune_client_server_event_messaging.html.md.erb b/geode-docs/developing/events/tune_client_server_event_messaging.html.md.erb
index db43ed5..06e14b1 100644
--- a/geode-docs/developing/events/tune_client_server_event_messaging.html.md.erb
+++ b/geode-docs/developing/events/tune_client_server_event_messaging.html.md.erb
@@ -28,10 +28,10 @@ A single client thread receives and processes messages from the server, tracking
 
 The client’s message tracking list holds the highest sequence ID of any message received for each originating thread. The list can become quite large in systems where there are many different threads coming and going and doing work on the cache. After a thread dies, its tracking entry is not needed. To avoid maintaining tracking information for threads that have died, the client expires entries that have had no activity for more than the `subscription-message-tracking-timeout`.
 
--   **[Conflate the Server Subscription Queue](../../developing/events/conflate_server_subscription_queue.html)**
+-   **[Conflate the Server Subscription Queue](conflate_server_subscription_queue.html)**
 
--   **[Limit the Server's Subscription Queue Memory Use](../../developing/events/limit_server_subscription_queue_size.html)**
+-   **[Limit the Server's Subscription Queue Memory Use](limit_server_subscription_queue_size.html)**
 
--   **[Tune the Client's Subscription Message Tracking Timeout](../../developing/events/tune_client_message_tracking_timeout.html)**
+-   **[Tune the Client's Subscription Message Tracking Timeout](tune_client_message_tracking_timeout.html)**
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/writing_callbacks_that_modify_the_cache.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/writing_callbacks_that_modify_the_cache.html.md.erb b/geode-docs/developing/events/writing_callbacks_that_modify_the_cache.html.md.erb
index 7b201bc..56a3b12 100644
--- a/geode-docs/developing/events/writing_callbacks_that_modify_the_cache.html.md.erb
+++ b/geode-docs/developing/events/writing_callbacks_that_modify_the_cache.html.md.erb
@@ -23,20 +23,20 @@ Event handlers are synchronous. If you need to change the cache or perform any o
 
 ## <a id="writing_callbacks_that_modify_the_cache__section_98E49363C91945DEB0A3B2FD9A209969" class="no-quick-link"></a>Operations to Avoid in Event Handlers
 
-Do not perform distributed operations of any kind directly from your event handler. Geode is a highly distributed system and many operations that may seem local invoke distributed operations.
+Do not perform distributed operations of any kind directly from your event handler. <%=vars.product_name%> is a highly distributed system and many operations that may seem local invoke distributed operations.
 
 These are common distributed operations that can get you into trouble:
 
 -   Calling `Region` methods, on the event's region or any other region.
--   Using the Geode `DistributedLockService`.
+-   Using the <%=vars.product_name%> `DistributedLockService`.
 -   Modifying region attributes.
--   Executing a function through the Geode `FunctionService`.
+-   Executing a function through the <%=vars.product_name%> `FunctionService`.
 
-To be on the safe side, do not make any calls to the Geode API directly from your event handler. Make all Geode API calls from within a separate thread or executor.
+To be on the safe side, do not make any calls to the <%=vars.product_name%> API directly from your event handler. Make all <%=vars.product_name%> API calls from within a separate thread or executor.
 
 ## <a id="writing_callbacks_that_modify_the_cache__section_78648D4177E14EA695F0B059E336137C" class="no-quick-link"></a>How to Perform Distributed Operations Based on Events
 
-If you need to use the Geode API from your handlers, make your work asynchronous to the event handler. You can spawn a separate thread or use a solution like the `java.util.concurrent.Executor` interface.
+If you need to use the <%=vars.product_name%> API from your handlers, make your work asynchronous to the event handler. You can spawn a separate thread or use a solution like the `java.util.concurrent.Executor` interface.
 
 This example shows a serial executor where the callback creates a `Runnable` that can be pulled off a queue and run by another object. This preserves the ordering of events.
 

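The page above ends by describing a serial executor in which the callback only creates a `Runnable`, preserving event order while keeping distributed calls off the callback thread. A minimal sketch of that pattern with `java.util.concurrent` (the `onEvent` hook is a placeholder for a real cache-listener callback):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the serial-executor pattern: the event callback only enqueues a
// Runnable; a single worker thread drains the queue in order, so any
// cache/API work happens off the callback thread but still sequentially.
public class SerialExecutorDemo {
  final ExecutorService serial = Executors.newSingleThreadExecutor();
  final List<String> applied = new CopyOnWriteArrayList<>();

  // Imagine this body living inside a CacheListener callback.
  void onEvent(String key) {
    serial.execute(() -> applied.add(key)); // deferred, ordered work
  }

  public static void main(String[] args) throws InterruptedException {
    SerialExecutorDemo demo = new SerialExecutorDemo();
    for (int i = 0; i < 5; i++) {
      demo.onEvent("k" + i);
    }
    demo.serial.shutdown();
    demo.serial.awaitTermination(5, TimeUnit.SECONDS);
    System.out.println(demo.applied); // events applied in submission order
  }
}
```

A single-threaded executor is the simplest way to get the ordering guarantee; swapping in a thread pool would regain throughput at the cost of per-key ordering, which is the same trade-off the serial-versus-parallel queue discussion makes.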
http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/eviction/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/eviction/chapter_overview.html.md.erb b/geode-docs/developing/eviction/chapter_overview.html.md.erb
index c5d1417..1cd9814 100644
--- a/geode-docs/developing/eviction/chapter_overview.html.md.erb
+++ b/geode-docs/developing/eviction/chapter_overview.html.md.erb
@@ -23,11 +23,11 @@ Use eviction to control data region size.
 
 <a id="eviction__section_C3409270DD794822B15E819E2276B21A"></a>
 
--   **[How Eviction Works](../../developing/eviction/how_eviction_works.html)**
+-   **[How Eviction Works](how_eviction_works.html)**
 
-    Eviction settings cause Apache Geode to work to keep a region's resource use under a specified level by removing least recently used (LRU) entries to make way for new entries.
+    Eviction settings cause <%=vars.product_name_long%> to work to keep a region's resource use under a specified level by removing least recently used (LRU) entries to make way for new entries.
 
--   **[Configure Data Eviction](../../developing/eviction/configuring_data_eviction.html)**
+-   **[Configure Data Eviction](configuring_data_eviction.html)**
 
     Use eviction controllers to configure the eviction-attributes region attribute settings to keep your region within a specified limit.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/eviction/configuring_data_eviction.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/eviction/configuring_data_eviction.html.md.erb b/geode-docs/developing/eviction/configuring_data_eviction.html.md.erb
index 6c22284..530c22f 100644
--- a/geode-docs/developing/eviction/configuring_data_eviction.html.md.erb
+++ b/geode-docs/developing/eviction/configuring_data_eviction.html.md.erb
@@ -22,21 +22,21 @@ limitations under the License.
 Use eviction controllers to configure the eviction-attributes region attribute settings to keep your region within a specified limit.
 
 <a id="configuring_data_eviction__section_8515EC9635C342C0916EE9E6120E2AC9"></a>
-Eviction controllers monitor region and memory use and, when the limit is reached, remove older entries to make way for new data. For heap percentage, the controller used is the Geode resource manager, configured in conjunction with the JVM's garbage collector for optimum performance.
+Eviction controllers monitor region and memory use and, when the limit is reached, remove older entries to make way for new data. For heap percentage, the controller used is the <%=vars.product_name%> resource manager, configured in conjunction with the JVM's garbage collector for optimum performance.
 
 Configure data eviction as follows. You do not need to perform these steps in the sequence shown.
 
 1.  Decide whether to evict based on:
     -   Entry count (useful if your entry sizes are relatively uniform).
     -   Total bytes used. In partitioned regions, this is set using `local-max-memory`. In non-partitioned, it is set in `eviction-attributes`.
-    -   Percentage of application heap used. This uses the Geode resource manager. When the manager determines that eviction is required, the manager orders the eviction controller to start evicting from all regions where the eviction algorithm is set to `lru-heap-percentage`. Eviction continues until the manager calls a halt. Geode evicts the least recently used entry hosted by the member for the region. See [Managing Heap and Off-heap Memory](../../managing/heap_use/heap_management.html#resource_manager).
+    -   Percentage of application heap used. This uses the <%=vars.product_name%> resource manager. When the manager determines that eviction is required, the manager orders the eviction controller to start evicting from all regions where the eviction algorithm is set to `lru-heap-percentage`. Eviction continues until the manager calls a halt. <%=vars.product_name%> evicts the least recently used entry hosted by the member for the region. See [Managing Heap and Off-heap Memory](../../managing/heap_use/heap_management.html#resource_manager).
 
 2.  Decide what action to take when the limit is reached:
     -   Locally destroy the entry.
     -   Overflow the entry data to disk. See [Persistence and Overflow](../storing_data_on_disk/chapter_overview.html).
 
 3.  Decide the maximum amount of data to allow in the member for the eviction measurement indicated. This is the maximum for all storage for the region in the member. For partitioned regions, this is the total for all buckets stored in the member for the region - including any secondary buckets used for redundancy.
-4.  Decide whether to program a custom sizer for your region. If you are able to provide such a class, it might be faster than the standard sizing done by Geode. Your custom class must follow the guidelines for defining custom classes and, additionally, must implement `org.apache.geode.cache.util.ObjectSizer`. See [Requirements for Using Custom Classes in Data Caching](../../basic_config/data_entries_custom_classes/using_custom_classes.html).
+4.  Decide whether to program a custom sizer for your region. If you are able to provide such a class, it might be faster than the standard sizing done by <%=vars.product_name%>. Your custom class must follow the guidelines for defining custom classes and, additionally, must implement `org.apache.geode.cache.util.ObjectSizer`. See [Requirements for Using Custom Classes in Data Caching](../../basic_config/data_entries_custom_classes/using_custom_classes.html).
 
 **Note:**
 You can also configure Regions using the gfsh command-line interface, however, you cannot configure `eviction-attributes` using gfsh. See [Region Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_EF03119A40EE492984F3B6248596E1DD) and [Disk Store Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_1ACC91B493EE446E89EC7DBFBBAE00EA).
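    The eviction decisions above can be declared in `cache.xml`. The snippet below is an illustrative sketch only (the region name and the `com.example.MyObjectSizer` class are hypothetical; the sizer class would need to implement `org.apache.geode.cache.util.ObjectSizer`):

    ``` pre
    <region name="exampleRegion">
      <region-attributes>
        <!-- Cap region memory at 512 MB in this member; overflow LRU entries to disk.
             The optional class-name plugs in a custom sizer. -->
        <eviction-attributes>
          <lru-memory-size maximum="512" action="overflow-to-disk">
            <class-name>com.example.MyObjectSizer</class-name>
          </lru-memory-size>
        </eviction-attributes>
      </region-attributes>
    </region>
    ```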

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/eviction/how_eviction_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/eviction/how_eviction_works.html.md.erb b/geode-docs/developing/eviction/how_eviction_works.html.md.erb
index a714253..0c11f0b 100644
--- a/geode-docs/developing/eviction/how_eviction_works.html.md.erb
+++ b/geode-docs/developing/eviction/how_eviction_works.html.md.erb
@@ -19,16 +19,16 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Eviction settings cause Apache Geode to work to keep a region's resource use under a specified level by removing least recently used (LRU) entries to make way for new entries.
+Eviction settings cause <%=vars.product_name_long%> to work to keep a region's resource use under a specified level by removing least recently used (LRU) entries to make way for new entries.
 
 <a id="how_eviction_works__section_C3409270DD794822B15E819E2276B21A"></a>
 You configure for eviction based on entry count, percentage of available heap, and absolute memory usage. You also configure what to do when you need to evict: destroy entries or overflow them to disk. See [Persistence and Overflow](../storing_data_on_disk/chapter_overview.html).
 
-When Geode determines that adding or updating an entry would take the region over the specified level, it overflows or removes enough older entries to make room. For entry count eviction, this means a one-to-one trade of an older entry for the newer one. For the memory settings, the number of older entries that need to be removed to make space depends entirely on the relative sizes of the older and newer entries.
+When <%=vars.product_name%> determines that adding or updating an entry would take the region over the specified level, it overflows or removes enough older entries to make room. For entry count eviction, this means a one-to-one trade of an older entry for the newer one. For the memory settings, the number of older entries that need to be removed to make space depends entirely on the relative sizes of the older and newer entries.
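+Conceptually, the one-to-one trade for entry-count eviction behaves like an access-ordered LRU map. The standalone Java sketch below is illustrative only — the class name is invented and it does not use the <%=vars.product_name%> API:
+
+``` pre
+import java.util.LinkedHashMap;
+import java.util.Map;
+
+// Illustrative sketch of entry-count LRU eviction; not Geode internals.
+public class LruCacheSketch<K, V> extends LinkedHashMap<K, V> {
+    private final int maxEntries;
+
+    public LruCacheSketch(int maxEntries) {
+        super(16, 0.75f, true); // access-order: reads refresh recency
+        this.maxEntries = maxEntries;
+    }
+
+    @Override
+    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
+        // One-to-one trade: the put that exceeds the limit evicts
+        // exactly one least-recently-used entry.
+        return size() > maxEntries;
+    }
+
+    public static void main(String[] args) {
+        LruCacheSketch<String, Integer> cache = new LruCacheSketch<>(2);
+        cache.put("a", 1);
+        cache.put("b", 2);
+        cache.get("a");      // "a" is now most recently used
+        cache.put("c", 3);   // evicts "b", the LRU entry
+        System.out.println(cache.keySet()); // prints [a, c]
+    }
+}
+```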
 
 ## <a id="how_eviction_works__section_69E2AA453EDE4E088D1C3332C071AFE1" class="no-quick-link"></a>Eviction in Partitioned Regions
 
-In partitioned regions, Geode removes the oldest entry it can find *in the bucket where the new entry operation is being performed*. Geode maintains LRU entry information on a bucket-by-bucket basis, as the cost of maintaining information across the partitioned region would be too great a performance hit.
+In partitioned regions, <%=vars.product_name%> removes the oldest entry it can find *in the bucket where the new entry operation is being performed*. <%=vars.product_name%> maintains LRU entry information on a bucket-by-bucket basis, as the cost of maintaining information across the partitioned region would be too great a performance hit.
 
 -   For memory and entry count eviction, LRU eviction is done in the bucket where the new entry operation is being performed until the overall size of the combined buckets in the member has dropped enough to perform the operation without going over the limit.
 -   For heap eviction, each partitioned region bucket is treated as if it were a separate region, with each eviction action only considering the LRU for the bucket, and not the partitioned region as a whole.
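 As a sketch of how heap eviction ties into the resource manager (the percentages and region name are illustrative, not prescriptive):
 
 ``` pre
 <cache>
   <!-- Begin evicting at 80% heap use; LRU is applied per bucket -->
   <resource-manager critical-heap-percentage="90" eviction-heap-percentage="80"/>
   <region name="exampleRegion">
     <region-attributes>
       <eviction-attributes>
         <lru-heap-percentage action="overflow-to-disk"/>
       </eviction-attributes>
     </region-attributes>
   </region>
 </cache>
 ```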

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/expiration/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/expiration/chapter_overview.html.md.erb b/geode-docs/developing/expiration/chapter_overview.html.md.erb
index 546af32..3764b6f 100644
--- a/geode-docs/developing/expiration/chapter_overview.html.md.erb
+++ b/geode-docs/developing/expiration/chapter_overview.html.md.erb
@@ -21,11 +21,11 @@ limitations under the License.
 
 Use expiration to keep data current by removing stale entries. You can also use it to remove entries you are not using so your region uses less space. Expired entries are reloaded the next time they are requested.
 
--   **[How Expiration Works](../../developing/expiration/how_expiration_works.html)**
+-   **[How Expiration Works](how_expiration_works.html)**
 
     Expiration removes old entries and entries that you are not using. You can destroy or invalidate entries.
 
--   **[Configure Data Expiration](../../developing/expiration/configuring_data_expiration.html)**
+-   **[Configure Data Expiration](configuring_data_expiration.html)**
 
     Configure the type of expiration and the expiration action to use.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/expiration/how_expiration_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/expiration/how_expiration_works.html.md.erb b/geode-docs/developing/expiration/how_expiration_works.html.md.erb
index 4ec5015..e005581 100644
--- a/geode-docs/developing/expiration/how_expiration_works.html.md.erb
+++ b/geode-docs/developing/expiration/how_expiration_works.html.md.erb
@@ -30,14 +30,14 @@ This figure shows two basic expiration settings for a producer/consumer system.
 
 ## <a id="how_expiration_works__section_B6C55A610F4243ED8F1986E8A98858CF" class="no-quick-link"></a>Expiration Types
 
-Apache Geode uses the following expiration types:
+<%=vars.product_name_long%> uses the following expiration types:
 
 -   **Time to live (TTL)**. The amount of time, in seconds, the object may remain in the cache after the last creation or update. For entries, the counter is set to zero for create and put operations. Region counters are reset when the region is created and when an entry has its counter reset. The TTL expiration attributes are `region-time-to-live` and `entry-time-to-live`.
 -   **Idle timeout**. The amount of time, in seconds, the object may remain in the cache after the last access. The idle timeout counter for an object is reset any time its TTL counter is reset. In addition, an entry’s idle timeout counter is reset any time the entry is accessed through a `get` operation or a `netSearch`. The idle timeout counter for a region is reset whenever the idle timeout is reset for one of its entries. Idle timeout expiration attributes are: `region-idle-time` and `entry-idle-time`.
 
 ## <a id="how_expiration_works__section_BA995343EF584104B9853CFE4CAD88AD" class="no-quick-link"></a>Expiration Actions
 
-Apache Geode uses the following expiration actions:
+<%=vars.product_name_long%> uses the following expiration actions:
 
 -   destroy
 -   local destroy
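 Putting the two expiration types together, a `cache.xml` sketch might look like the following (timeout values and region name are illustrative; `statistics-enabled` must be `true` for expiration to operate):
 
 ``` pre
 <region name="exampleRegion">
   <region-attributes statistics-enabled="true">
     <!-- TTL: destroy an entry 300 seconds after the last create/update -->
     <entry-time-to-live>
       <expiration-attributes timeout="300" action="destroy"/>
     </entry-time-to-live>
     <!-- Idle timeout: invalidate an entry 120 seconds after the last access -->
     <entry-idle-time>
       <expiration-attributes timeout="120" action="invalidate"/>
     </entry-idle-time>
   </region-attributes>
 </region>
 ```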

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/function_exec/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/function_exec/chapter_overview.html.md.erb b/geode-docs/developing/function_exec/chapter_overview.html.md.erb
index c85e9c8..46d39f8 100644
--- a/geode-docs/developing/function_exec/chapter_overview.html.md.erb
+++ b/geode-docs/developing/function_exec/chapter_overview.html.md.erb
@@ -31,6 +31,6 @@ A function is a body of code that resides on a server and that an application ca
 
 -   **[How Function Execution Works](how_function_execution_works.html)**
 
--   **[Executing a Function in Apache Geode](function_execution.html)**
+-   **[Executing a Function in <%=vars.product_name_long%>](function_execution.html)**
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/function_exec/function_execution.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/function_exec/function_execution.html.md.erb b/geode-docs/developing/function_exec/function_execution.html.md.erb
index 221098b..a7ce138 100644
--- a/geode-docs/developing/function_exec/function_execution.html.md.erb
+++ b/geode-docs/developing/function_exec/function_execution.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Executing a Function in Apache Geode
----
+<% set_title("Executing a Function in", product_name_long) %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -37,7 +35,7 @@ Code the methods you need for the function. These steps do not have to be done i
 
 1.  Code `getId` to return a unique name for your function. You can use this name to access the function through the `FunctionService` API.
 2.  For high availability:
-    1.  Code `isHa` to return true to indicate to Geode that it can re-execute your function after one or more members fails
+    1.  Code `isHA` to return true to indicate to <%=vars.product_name%> that it can re-execute your function after one or more members fail
     2.  Code your function to return a result
     3.  Code `hasResult` to return true
 
@@ -57,7 +55,7 @@ Code the methods you need for the function. These steps do not have to be done i
             **Note:**
             When you use `PartitionRegionHelper.getLocalDataForContext`, `putIfAbsent` may not return expected results if you are working on the local data set instead of the region.
 
-    4.  To propagate an error condition or exception back to the caller of the function, throw a FunctionException from the `execute` method. Geode transmits the exception back to the caller as if it had been thrown on the calling side. See the Java API documentation for [FunctionException](/releases/latest/javadoc/org/apache/geode/cache/execute/FunctionException.html) for more information.
+    4.  To propagate an error condition or exception back to the caller of the function, throw a FunctionException from the `execute` method. <%=vars.product_name%> transmits the exception back to the caller as if it had been thrown on the calling side. See the Java API documentation for [FunctionException](/releases/latest/javadoc/org/apache/geode/cache/execute/FunctionException.html) for more information.
 
 Example function code:
 
@@ -114,7 +112,7 @@ When you deploy a JAR file that contains a Function (in other words, contains a
 To register a function by using `gfsh`:
 
 1.  Package your class files into a JAR file.
-2.  Start a `gfsh` prompt. If necessary, start a Locator and connect to the Geode distributed system where you want to run the function.
+2.  Start a `gfsh` prompt. If necessary, start a Locator and connect to the <%=vars.product_name%> distributed system where you want to run the function.
 3.  At the gfsh prompt, type the following command:
 
     ``` pre
@@ -125,7 +123,7 @@ To register a function by using `gfsh`:
 
 If another JAR file is deployed (either with the same JAR filename or another filename) with the same Function, the new implementation of the Function will be registered, overwriting the old one. If a JAR file is undeployed, any Functions that were auto-registered at the time of deployment will be unregistered. Since deploying a JAR file that has the same name multiple times results in the JAR being un-deployed and re-deployed, Functions in the JAR will be unregistered and re-registered each time this occurs. If a Function with the same ID is registered from multiple differently named JAR files, the Function will be unregistered if either of those JAR files is re-deployed or un-deployed.
 
-See [Deploying Application JARs to Apache Geode Members](../../configuring/cluster_config/deploying_application_jars.html#concept_4436C021FB934EC4A330D27BD026602C) for more details on deploying JAR files.
+See [Deploying Application JARs to <%=vars.product_name_long%> Members](../../configuring/cluster_config/deploying_application_jars.html#concept_4436C021FB934EC4A330D27BD026602C) for more details on deploying JAR files.
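+A sketch of the deploy/undeploy cycle described above, at the `gfsh` prompt (the JAR name is hypothetical):
+
+``` pre
+gfsh> deploy --jar=myFunctions.jar
+gfsh> list functions
+gfsh> undeploy --jar=myFunctions.jar
+```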
 
 ## <a id="function_execution__section_1D1056F843044F368FB76F47061FCD50" class="no-quick-link"></a>Register the Function Programmatically
 
@@ -169,7 +167,7 @@ In every member where you want to explicitly execute the function and process th
 **Running the Function Using gfsh**
 
 1.  Start a gfsh prompt.
-2.  If necessary, start a Locator and connect to the Geode distributed system where you want to run the function.
+2.  If necessary, start a Locator and connect to the <%=vars.product_name%> distributed system where you want to run the function.
 3.  At the gfsh prompt, type the following command:
 
     ``` pre
@@ -228,12 +226,12 @@ ResultCollector rc = execution.execute(function);
 List result = (List)rc.getResult();
 ```
 
-Geode’s default `ResultCollector` collects all results into an `ArrayList`. Its `getResult` methods block until all results are received. Then they return the full result set.
+<%=vars.product_name%>’s default `ResultCollector` collects all results into an `ArrayList`. Its `getResult` methods block until all results are received. Then they return the full result set.
 
 To customize results collecting:
 
 1.  Write a class that extends `ResultCollector` and code the methods to store and retrieve the results as you need. Note that the methods are of two types:
-    1.  `addResult` and `endResults` are called by Geode when results arrive from the `Function` instance `SendResults` methods
+    1.  `addResult` and `endResults` are called by <%=vars.product_name%> when results arrive from the `Function` instance `SendResults` methods
     2.  `getResult` is available to your executing application (the one that calls `Execution.execute`) to retrieve the results
 
 2.  Use high availability for `onRegion` functions that have been coded for it:

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/function_exec/how_function_execution_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/function_exec/how_function_execution_works.html.md.erb b/geode-docs/developing/function_exec/how_function_execution_works.html.md.erb
index a72045f..ae80b01 100644
--- a/geode-docs/developing/function_exec/how_function_execution_works.html.md.erb
+++ b/geode-docs/developing/function_exec/how_function_execution_works.html.md.erb
@@ -21,7 +21,7 @@ limitations under the License.
 
 ## <a id="how_function_execution_works__section_881D2FF6761B4D689DDB46C650E2A2E1" class="no-quick-link"></a>Where Functions Are Executed
 
-You can execute data-independent functions or data-dependent functions in Geode in the following places:
+You can execute data-independent functions or data-dependent functions in <%=vars.product_name%> in the following places:
 
 **For Data-independent Functions**
 
@@ -39,13 +39,13 @@ See the `org.apache.geode.cache.execute.FunctionService` Java API documentation
 
 The following things occur when executing a function:
 
-1.  When you call the `execute` method on the `Execution` object, Geode invokes the function on all members where it needs to run. The locations are determined by the `FunctionService` `on*` method calls, region configuration, and any filters.
+1.  When you call the `execute` method on the `Execution` object, <%=vars.product_name%> invokes the function on all members where it needs to run. The locations are determined by the `FunctionService` `on*` method calls, region configuration, and any filters.
 2.  If the function has results, they are returned to the `addResult` method call in a `ResultCollector` object.
 3.  The originating member collects results using `ResultCollector.getResult`.
 
 ## <a id="how_function_execution_works__section_14FF9932C7134C5584A14246BB4D4FF6" class="no-quick-link"></a>Highly Available Functions
 
-Generally, function execution errors are returned to the calling application. You can code for high availability for `onRegion` functions that return a result, so Geode automatically retries a function if it does not execute successfully. You must code and configure the function to be highly available, and the calling application must invoke the function using the results collector `getResult` method.
+Generally, function execution errors are returned to the calling application. You can code for high availability for `onRegion` functions that return a result, so <%=vars.product_name%> automatically retries a function if it does not execute successfully. You must code and configure the function to be highly available, and the calling application must invoke the function using the results collector `getResult` method.
 
 When a failure (such as an execution error or member crash while executing) occurs, the system responds by:
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/outside_data_sources/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/outside_data_sources/chapter_overview.html.md.erb b/geode-docs/developing/outside_data_sources/chapter_overview.html.md.erb
index a008ede..3d8c30c 100644
--- a/geode-docs/developing/outside_data_sources/chapter_overview.html.md.erb
+++ b/geode-docs/developing/outside_data_sources/chapter_overview.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Apache Geode has application plug-ins to read data into the cache and write it out.
+<%=vars.product_name_long%> has application plug-ins to read data into the cache and write it out.
 
 <a id="outside_data_sources__section_100B707BB812430E8D9CFDE3BE4698D1"></a>
 The application plug-ins:

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/outside_data_sources/how_data_loaders_work.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/outside_data_sources/how_data_loaders_work.html.md.erb b/geode-docs/developing/outside_data_sources/how_data_loaders_work.html.md.erb
index 4f309a0..b342e41 100644
--- a/geode-docs/developing/outside_data_sources/how_data_loaders_work.html.md.erb
+++ b/geode-docs/developing/outside_data_sources/how_data_loaders_work.html.md.erb
@@ -24,7 +24,7 @@ By default, a region has no data loader defined. Plug an application-defined loa
 <a id="how_data_loaders_work__section_1E600469D223498DB49446434CE9B0B4"></a>
 The loader is called on cache misses during get operations, and it populates the cache with the new entry value in addition to returning the value to the calling thread.
 
-A loader can be configured to load data into the Geode cache from an outside data store. To do the reverse operation, writing data from the Geode cache to an outside data store, use a cache writer event handler. See [Implementing Cache Event Handlers](../events/implementing_cache_event_handlers.html).
+A loader can be configured to load data into the <%=vars.product_name%> cache from an outside data store. To do the reverse operation, writing data from the <%=vars.product_name%> cache to an outside data store, use a cache writer event handler. See [Implementing Cache Event Handlers](../events/implementing_cache_event_handlers.html).
 
 How to install your cache loader depends on the type of region.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/outside_data_sources/sync_outside_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/outside_data_sources/sync_outside_data.html.md.erb b/geode-docs/developing/outside_data_sources/sync_outside_data.html.md.erb
index 728b664..767e507 100644
--- a/geode-docs/developing/outside_data_sources/sync_outside_data.html.md.erb
+++ b/geode-docs/developing/outside_data_sources/sync_outside_data.html.md.erb
@@ -21,15 +21,15 @@ limitations under the License.
 
 Keep your distributed cache in sync with an outside data source by programming and installing application plug-ins for your region.
 
--   **[Overview of Outside Data Sources](../../developing/outside_data_sources/chapter_overview.html)**
+-   **[Overview of Outside Data Sources](chapter_overview.html)**
 
-    Apache Geode has application plug-ins to read data into the cache and write it out.
+    <%=vars.product_name_long%> has application plug-ins to read data into the cache and write it out.
 
--   **[How Data Loaders Work](../../developing/outside_data_sources/how_data_loaders_work.html)**
+-   **[How Data Loaders Work](how_data_loaders_work.html)**
 
     By default, a region has no data loader defined. Plug an application-defined loader into any region by setting the region attribute cache-loader on the members that host data for the region.
 
--   **[Implement a Data Loader](../../developing/outside_data_sources/implementing_data_loaders.html)**
+-   **[Implement a Data Loader](implementing_data_loaders.html)**
 
     Program a data loader and configure your region to use it.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/chapter_overview.html.md.erb b/geode-docs/developing/partitioned_regions/chapter_overview.html.md.erb
index e450ee5..0d41532 100644
--- a/geode-docs/developing/partitioned_regions/chapter_overview.html.md.erb
+++ b/geode-docs/developing/partitioned_regions/chapter_overview.html.md.erb
@@ -21,44 +21,44 @@ limitations under the License.
 
 In addition to basic region management, partitioned regions include options for high availability, data location control, and data balancing across the distributed system.
 
--   **[Understanding Partitioning](../../developing/partitioned_regions/how_partitioning_works.html)**
+-   **[Understanding Partitioning](how_partitioning_works.html)**
 
     To use partitioned regions, you should understand how they work and your options for managing them.
 
--   **[Configuring Partitioned Regions](../../developing/partitioned_regions/managing_partitioned_regions.html)**
+-   **[Configuring Partitioned Regions](managing_partitioned_regions.html)**
 
     Plan the configuration and ongoing management of your partitioned region for host and accessor members and configure the regions for startup.
 
--   **[Configuring the Number of Buckets for a Partitioned Region](../../developing/partitioned_regions/configuring_bucket_for_pr.html)**
+-   **[Configuring the Number of Buckets for a Partitioned Region](configuring_bucket_for_pr.html)**
 
     Decide how many buckets to assign to your partitioned region and set the configuration accordingly.
 
--   **[Custom-Partitioning and Colocating Data](../../developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html)**
+-   **[Custom-Partitioning and Colocating Data](overview_custom_partitioning_and_data_colocation.html)**
 
-    You can customize how Apache Geode groups your partitioned region data with custom partitioning and data colocation.
+    You can customize how <%=vars.product_name_long%> groups your partitioned region data with custom partitioning and data colocation.
 
--   **[Configuring High Availability for Partitioned Regions](../../developing/partitioned_regions/overview_how_pr_ha_works.html)**
+-   **[Configuring High Availability for Partitioned Regions](overview_how_pr_ha_works.html)**
 
-    By default, Apache Geode stores only a single copy of your partitioned region data among the region's data stores. You can configure Geode to maintain redundant copies of your partitioned region data for high availability.
+    By default, <%=vars.product_name_long%> stores only a single copy of your partitioned region data among the region's data stores. You can configure <%=vars.product_name%> to maintain redundant copies of your partitioned region data for high availability.
 
--   **[Configuring Single-Hop Client Access to Server-Partitioned Regions](../../developing/partitioned_regions/overview_how_pr_single_hop_works.html)**
+-   **[Configuring Single-Hop Client Access to Server-Partitioned Regions](overview_how_pr_single_hop_works.html)**
 
     Single-hop data access enables the client pool to track where a partitioned region’s data is hosted in the servers. To access a single entry, the client directly contacts the server that hosts the key, in a single hop.
 
--   **[Rebalancing Partitioned Region Data](../../developing/partitioned_regions/rebalancing_pr_data.html)**
+-   **[Rebalancing Partitioned Region Data](rebalancing_pr_data.html)**
 
     In a distributed system with minimal contention to the concurrent threads reading or updating from the members, you can use rebalancing to dynamically increase or decrease your data and processing capacity.
 
-- **[Automated Rebalancing of Partitioned Region Data](../../developing/partitioned_regions/automated_rebalance.html)**
+- **[Automated Rebalancing of Partitioned Region Data](automated_rebalance.html)**
 
     The automated rebalance feature triggers a rebalance operation based on a time schedule.
 
--   **[Checking Redundancy in Partitioned Regions](../../developing/partitioned_regions/checking_region_redundancy.html)**
+-   **[Checking Redundancy in Partitioned Regions](checking_region_redundancy.html)**
 
     Under some circumstances, it can be important to verify that your partitioned region data is redundant and that upon member restart, redundancy has been recovered properly across partitioned region members.
 
--   **[Moving Partitioned Region Data to Another Member](../../developing/partitioned_regions/moving_partitioned_data.html)**
+-   **[Moving Partitioned Region Data to Another Member](moving_partitioned_data.html)**
 
     You can use the `PartitionRegionHelper` `moveBucketByKey` and `moveData` methods to explicitly move partitioned region data from one member to another.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/colocating_partitioned_region_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/colocating_partitioned_region_data.html.md.erb b/geode-docs/developing/partitioned_regions/colocating_partitioned_region_data.html.md.erb
index c20e30e..962c21e 100644
--- a/geode-docs/developing/partitioned_regions/colocating_partitioned_region_data.html.md.erb
+++ b/geode-docs/developing/partitioned_regions/colocating_partitioned_region_data.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-By default, Geode allocates the data locations for a partitioned region independent of the data locations for any other partitioned region. You can change this policy for any group of partitioned regions, so that cross-region, related data is all hosted by the same member. This colocation speeds queries and other operations that access data from the regions.
+By default, <%=vars.product_name%> allocates the data locations for a partitioned region independent of the data locations for any other partitioned region. You can change this policy for any group of partitioned regions, so that cross-region, related data is all hosted by the same member. This colocation speeds queries and other operations that access data from the regions.
 
 <a id="colocating_partitioned_region_data__section_131EC040055E48A6B35E981B5C845A65"></a>
 **Note:**
@@ -39,7 +39,7 @@ Data colocation between partitioned regions generally improves the performance o
 **Procedure**
 
 1.  Identify one region as the central region, with which data in the other regions is explicitly colocated. If you use persistence for any of the regions, you must persist the central region.
-    1.  Create the central region before you create the others, either in the cache.xml or your code. Regions in the XML are created before regions in the code, so if you create any of your colocated regions in the XML, you must create the central region in the XML before the others. Geode will verify its existence when the others are created and return `IllegalStateException` if the central region is not there. Do not add any colocation specifications to this central region.
+    1.  Create the central region before you create the others, either in the cache.xml or your code. Regions in the XML are created before regions in the code, so if you create any of your colocated regions in the XML, you must create the central region in the XML before the others. <%=vars.product_name%> will verify its existence when the others are created and return `IllegalStateException` if the central region is not there. Do not add any colocation specifications to this central region.
     2.  For all other regions, in the region partition attributes, provide the central region's name in the `colocated-with` attribute. Use one of these methods:
         -   XML:
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/configuring_bucket_for_pr.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/configuring_bucket_for_pr.html.md.erb b/geode-docs/developing/partitioned_regions/configuring_bucket_for_pr.html.md.erb
index ccb7e71..f8dc971 100644
--- a/geode-docs/developing/partitioned_regions/configuring_bucket_for_pr.html.md.erb
+++ b/geode-docs/developing/partitioned_regions/configuring_bucket_for_pr.html.md.erb
@@ -22,7 +22,7 @@ limitations under the License.
 Decide how many buckets to assign to your partitioned region and set the configuration accordingly.
 
 <a id="configuring_total_buckets__section_DF52B2BF467F4DB4B8B3D16A79EFCA39"></a>
-The total number of buckets for the partitioned region determines the granularity of data storage and thus how evenly the data can be distributed. Geode distributes the buckets as evenly as possible across the data stores. The number of buckets is fixed after region creation.
+The total number of buckets for the partitioned region determines the granularity of data storage and thus how evenly the data can be distributed. <%=vars.product_name%> distributes the buckets as evenly as possible across the data stores. The number of buckets is fixed after region creation.
 
 The partition attribute `total-num-buckets` sets the number for the entire partitioned region across all participating members. Set it using one of the following:
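 For example, in `cache.xml` (the value 113 is illustrative; a prime number comfortably larger than the number of data stores is a common choice):
 
 ``` pre
 <region name="exampleRegion">
   <region-attributes>
     <partition-attributes total-num-buckets="113"/>
   </region-attributes>
 </region>
 ```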
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/configuring_ha_for_pr.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/configuring_ha_for_pr.html.md.erb b/geode-docs/developing/partitioned_regions/configuring_ha_for_pr.html.md.erb
index c084f4a..d77006c 100644
--- a/geode-docs/developing/partitioned_regions/configuring_ha_for_pr.html.md.erb
+++ b/geode-docs/developing/partitioned_regions/configuring_ha_for_pr.html.md.erb
@@ -25,33 +25,33 @@ Here are the main steps for configuring high availability for a partitioned regi
 
 1.  Set the number of redundant copies the system should maintain of the region data. See [Set the Number of Redundant Copies](set_pr_redundancy.html#set_pr_redundancy). 
 2.  (Optional) If you want to group your data store members into redundancy zones, configure them accordingly. See [Configure Redundancy Zones for Members](set_redundancy_zones.html#set_redundancy_zones). 
-3.  (Optional) If you want Geode to only place redundant copies on different physical machines, configure for that. See [Set Enforce Unique Host](set_enforce_unique_host.html#set_pr_redundancy). 
-4.  Decide how to manage redundancy recovery and change Geode's default behavior as needed. 
+3.  (Optional) If you want <%=vars.product_name%> to only place redundant copies on different physical machines, configure for that. See [Set Enforce Unique Host](set_enforce_unique_host.html#set_pr_redundancy). 
+4.  Decide how to manage redundancy recovery and change <%=vars.product_name%>'s default behavior as needed. 
     - **After a member crashes**. If you want automatic redundancy recovery, change the configuration for that. See [Configure Member Crash Redundancy Recovery for a Partitioned Region](set_crash_redundancy_recovery.html#set_crash_redundancy_recovery). 
     - **After a member joins**. If you do *not* want immediate, automatic redundancy recovery, change the configuration for that. See [Configure Member Join Redundancy Recovery for a Partitioned Region](set_join_redundancy_recovery.html#set_join_redundancy_recovery). 
 
-5.  Decide how many buckets Geode should attempt to recover in parallel when performing redundancy recovery. By default, the system recovers up to 8 buckets in parallel. Use the `gemfire.MAX_PARALLEL_BUCKET_RECOVERIES` system property to increase or decrease the maximum number of buckets to recover in parallel any time redundancy recovery is performed.
+5.  Decide how many buckets <%=vars.product_name%> should attempt to recover in parallel when performing redundancy recovery. By default, the system recovers up to 8 buckets in parallel. Use the `gemfire.MAX_PARALLEL_BUCKET_RECOVERIES` system property to increase or decrease the maximum number of buckets to recover in parallel any time redundancy recovery is performed.
 6.  For all but fixed partitioned regions, review the points at which you kick off rebalancing. Redundancy recovery is done automatically at the start of any rebalancing. This is most important if you run with no automated recovery after member crashes or joins. See [Rebalancing Partitioned Region Data](rebalancing_pr_data.html#rebalancing_pr_data). 
 
 During runtime, you can add capacity by adding new members for the region. For regions that do not use fixed partitioning, you can also kick off a rebalancing operation to spread the region buckets among all members.
 
--   **[Set the Number of Redundant Copies](../../developing/partitioned_regions/set_pr_redundancy.html)**
+-   **[Set the Number of Redundant Copies](set_pr_redundancy.html)**
 
     Configure in-memory high availability for your partitioned region by specifying the number of secondary copies you want to maintain in the region's data stores.
 
--   **[Configure Redundancy Zones for Members](../../developing/partitioned_regions/set_redundancy_zones.html)**
+-   **[Configure Redundancy Zones for Members](set_redundancy_zones.html)**
 
-    Group members into redundancy zones so Geode will separate redundant data copies into different zones.
+    Group members into redundancy zones so <%=vars.product_name%> will separate redundant data copies into different zones.
 
--   **[Set Enforce Unique Host](../../developing/partitioned_regions/set_enforce_unique_host.html)**
+-   **[Set Enforce Unique Host](set_enforce_unique_host.html)**
 
-    Configure Geode to use only unique physical machines for redundant copies of partitioned region data.
+    Configure <%=vars.product_name%> to use only unique physical machines for redundant copies of partitioned region data.
 
--   **[Configure Member Crash Redundancy Recovery for a Partitioned Region](../../developing/partitioned_regions/set_crash_redundancy_recovery.html)**
+-   **[Configure Member Crash Redundancy Recovery for a Partitioned Region](set_crash_redundancy_recovery.html)**
 
     Configure whether and how redundancy is recovered in a partition region after a member crashes.
 
--   **[Configure Member Join Redundancy Recovery for a Partitioned Region](../../developing/partitioned_regions/set_join_redundancy_recovery.html)**
+-   **[Configure Member Join Redundancy Recovery for a Partitioned Region](set_join_redundancy_recovery.html)**
 
     Configure whether and how redundancy is recovered in a partition region after a member joins.
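The redundancy settings walked through in the steps above are typically declared in `cache.xml`. A minimal sketch follows; the region name and attribute values are illustrative, not prescribed:

```xml
<region name="myPartitionedRegion">
  <region-attributes>
    <!-- one redundant copy; no automatic recovery after a crash (-1);
         immediate recovery when a member joins (0) -->
    <partition-attributes redundant-copies="1"
                          recovery-delay="-1"
                          startup-recovery-delay="0" />
  </region-attributes>
</region>
```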
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/custom_partitioning_and_data_colocation.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/custom_partitioning_and_data_colocation.html.md.erb b/geode-docs/developing/partitioned_regions/custom_partitioning_and_data_colocation.html.md.erb
index 0876613..62e5cab 100644
--- a/geode-docs/developing/partitioned_regions/custom_partitioning_and_data_colocation.html.md.erb
+++ b/geode-docs/developing/partitioned_regions/custom_partitioning_and_data_colocation.html.md.erb
@@ -23,7 +23,7 @@ Custom partitioning and data colocation can be used separately or in conjunction
 
 ## <a id="custom_partitioning_and_data_colocation__section_ABFEE9CB17AF44F1AE252AC10FB5E999" class="no-quick-link"></a>Custom Partitioning
 
-Use custom partitioning to group like entries into region buckets within a region. By default, Geode assigns new entries to buckets based on the entry key contents. With custom partitioning, you can assign your entries to buckets in whatever way you want.
+Use custom partitioning to group like entries into region buckets within a region. By default, <%=vars.product_name%> assigns new entries to buckets based on the entry key contents. With custom partitioning, you can assign your entries to buckets in whatever way you want.
 
 You can generally get better performance if you use custom partitioning to group similar data within a region. For example, a query run on all accounts created in January runs faster if all January account data is hosted by a single member. Grouping all data for a single customer can improve performance of data operations that work on customer data. Data aware function execution takes advantage of custom partitioning.
 
@@ -40,19 +40,19 @@ All keys must be strings, specified with a syntax that includes
 a '|' character that delimits the string.
 The substring that precedes the '|' delimiter within the key
 partitions the entry.  
--   **Standard custom partitioning**. With standard partitioning, you group entries into buckets, but you do not specify where the buckets reside. Geode always keeps the entries in the buckets you have specified, but may move the buckets around for load balancing.
+-   **Standard custom partitioning**. With standard partitioning, you group entries into buckets, but you do not specify where the buckets reside. <%=vars.product_name%> always keeps the entries in the buckets you have specified, but may move the buckets around for load balancing.
 -   **Fixed custom partitioning**. With fixed partitioning, you provide standard partitioning plus you specify the exact member where each data entry resides. You do this by assigning the data entry to a bucket and to a partition and by naming specific members as primary and secondary hosts of each partition.
 
     This gives you complete control over the locations of your primary and any secondary buckets for the region. This can be useful when you want to store specific data on specific physical machines or when you need to keep data close to certain hardware elements.
 
     Fixed partitioning has these requirements and caveats:
 
-    -   Geode cannot rebalance fixed partition region data because it cannot move the buckets around among the host members. You must carefully consider your expected data loads for the partitions you create.
+    -   <%=vars.product_name%> cannot rebalance fixed partition region data because it cannot move the buckets around among the host members. You must carefully consider your expected data loads for the partitions you create.
     -   With fixed partitioning, the region configuration is different between host members. Each member identifies the named partitions it hosts, and whether it is hosting the primary copy or a secondary copy. You then program the fixed partition resolver to return the partition id, so the entry is placed on the right members. Only one member can be primary for a particular partition name and that member cannot be the partition's secondary.
 
 ## <a id="custom_partitioning_and_data_colocation__section_D2C66951FE38426F9C05050D2B9028D8" class="no-quick-link"></a>Data Colocation Between Regions
 
-With data colocation, Geode stores entries that are related across multiple data regions in a single member. Geode does this by storing all of the regions' buckets with the same ID together in the same member. During rebalancing operations, Geode moves these bucket groups together or not at all.
+With data colocation, <%=vars.product_name%> stores entries that are related across multiple data regions in a single member. <%=vars.product_name%> does this by storing all of the regions' buckets with the same ID together in the same member. During rebalancing operations, <%=vars.product_name%> moves these bucket groups together or not at all.
 
 So, for example, if you have one region with customer contact information and another region with customer orders, you can use colocation to keep all contact information and all orders for a single customer in a single member. This way, any operation done for a single customer uses the cache of only a single member.
 
@@ -60,6 +60,6 @@ This figure shows two regions with data colocation where the data is partitioned
 
 <img src="../../images_svg/colocated_partitioned_regions.svg" id="custom_partitioning_and_data_colocation__image_525AC474950F473ABCDE8E372583C5DF" class="image" />
 
-Data colocation requires the same data partitioning mechanism for all of the colocated regions. You can use the default partitioning provided by Geode or any of the custom partitioning strategies.
+Data colocation requires the same data partitioning mechanism for all of the colocated regions. You can use the default partitioning provided by <%=vars.product_name%> or any of the custom partitioning strategies.
 
 You must use the same high availability settings across your colocated regions.
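The string-key convention described above (the substring before the `|` delimiter partitions the entry) and its colocation consequence can be sketched with a small stdlib-only model. The names `routingPart` and `bucketFor` are hypothetical helpers, not Geode API:

```java
// Sketch of '|'-delimited key partitioning: all keys sharing a prefix
// route to the same bucket, so colocated regions that apply the same
// rule keep related entries together on one member. Illustrative only.
public class PrefixRouting {
    // Extract the routing portion of a "<group>|<rest>" key.
    static String routingPart(String key) {
        int bar = key.indexOf('|');
        if (bar < 0) {
            throw new IllegalArgumentException("key must contain '|': " + key);
        }
        return key.substring(0, bar);
    }

    // Same-prefix keys hash to the same bucket id.
    static int bucketFor(String key, int totalNumBuckets) {
        return Math.abs(routingPart(key).hashCode() % totalNumBuckets);
    }

    public static void main(String[] args) {
        int contactsBucket = bucketFor("customer42|contact", 113);
        int ordersBucket = bucketFor("customer42|order-17", 113);
        // Both entries route on "customer42", so they share a bucket id.
        System.out.println(contactsBucket == ordersBucket); // true
    }
}
```

Under this rule, a contact entry and an order entry for the same customer always map to the same bucket id, which is exactly the property colocated regions rely on.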

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/how_partitioning_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/how_partitioning_works.html.md.erb b/geode-docs/developing/partitioned_regions/how_partitioning_works.html.md.erb
index c846995..42ea7f8 100644
--- a/geode-docs/developing/partitioned_regions/how_partitioning_works.html.md.erb
+++ b/geode-docs/developing/partitioned_regions/how_partitioning_works.html.md.erb
@@ -32,13 +32,13 @@ A distributed system can have multiple partitioned regions, and it can mix parti
 
 ## <a id="how_partitioning_works__section_260C2455FC8C40A094B39BF585D06B7D" class="no-quick-link"></a>Data Partitioning
 
-Geode automatically determines the physical location of data in the members that host a partitioned region's data. Geode breaks partitioned region data into units of storage known as buckets and stores each bucket in a region host member. Buckets are distributed in accordance to the member’s region attribute settings.
+<%=vars.product_name%> automatically determines the physical location of data in the members that host a partitioned region's data. <%=vars.product_name%> breaks partitioned region data into units of storage known as buckets and stores each bucket in a region host member. Buckets are distributed in accordance with the member’s region attribute settings.
 
 When an entry is created, it is assigned to a bucket. Keys are grouped together in a bucket and always remain there. If the configuration allows, the buckets may be moved between members to balance the load.
 
 You must run the data stores needed to accommodate storage for the partitioned region’s buckets. You can start new data stores on the fly. When a new data store creates the region, it takes responsibility for as many buckets as allowed by the partitioned region and member configuration.
 
-You can customize how Geode groups your partitioned region data with custom partitioning and data colocation.
+You can customize how <%=vars.product_name%> groups your partitioned region data with custom partitioning and data colocation.
 
 ## <a id="how_partitioning_works__section_155F9D4AB539473F848FD05E413B21B3" class="no-quick-link"></a>Partitioned Region Operation
 
@@ -52,7 +52,7 @@ Keep the following in mind about partitioned regions:
 
 -   Partitioned regions never run asynchronously. Operations in partitioned regions always wait for acknowledgement from the caches containing the original data entry and any redundant copies.
 -   A partitioned region needs a cache loader in every region data store (`local-max-memory` &gt; 0).
--   Geode distributes the data buckets as evenly as possible across all members storing the partitioned region data, within the limits of any custom partitioning or data colocation that you use. The number of buckets allotted for the partitioned region determines the granularity of data storage and thus how evenly the data can be distributed. The number of buckets is a total for the entire region across the distributed system.
--   In rebalancing data for the region, Geode moves buckets, but does not move data around inside the buckets.
+-   <%=vars.product_name%> distributes the data buckets as evenly as possible across all members storing the partitioned region data, within the limits of any custom partitioning or data colocation that you use. The number of buckets allotted for the partitioned region determines the granularity of data storage and thus how evenly the data can be distributed. The number of buckets is a total for the entire region across the distributed system.
+-   In rebalancing data for the region, <%=vars.product_name%> moves buckets, but does not move data around inside the buckets.
 -   You can query partitioned regions, but there are certain limitations. See [Querying Partitioned Regions](../querying_basics/querying_partitioned_regions.html#querying_partitioned_regions) for more information.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/how_pr_ha_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/how_pr_ha_works.html.md.erb b/geode-docs/developing/partitioned_regions/how_pr_ha_works.html.md.erb
index ba83732..baa5e56 100644
--- a/geode-docs/developing/partitioned_regions/how_pr_ha_works.html.md.erb
+++ b/geode-docs/developing/partitioned_regions/how_pr_ha_works.html.md.erb
@@ -25,7 +25,7 @@ With high availability, each member that hosts data for the partitioned region g
 
 With redundancy, if one member fails, operations continue on the partitioned region with no interruption of service:
 
--   If the member hosting the primary copy is lost, Geode makes a secondary copy the primary. This might cause a temporary loss of redundancy, but not a loss of data.
+-   If the member hosting the primary copy is lost, <%=vars.product_name%> makes a secondary copy the primary. This might cause a temporary loss of redundancy, but not a loss of data.
 -   Whenever there are not enough secondary copies to satisfy redundancy, the system works to recover redundancy by assigning another member as secondary and copying the data to it.
 
 **Note:**
@@ -37,20 +37,20 @@ Without redundancy, the loss of any of the region's data stores causes the loss
 
 ## <a id="how_pr_ha_works__section_7045530D601F4C65A062B5FDD0DD9206" class="no-quick-link"></a>Controlling Where Your Primaries and Secondaries Reside
 
-By default, Geode places your primary and secondary data copies for you, avoiding placement of two copies on the same physical machine. If there are not enough machines to keep different copies separate, Geode places copies on the same physical machine. You can change this behavior, so Geode only places copies on separate machines.
+By default, <%=vars.product_name%> places your primary and secondary data copies for you, avoiding placement of two copies on the same physical machine. If there are not enough machines to keep different copies separate, <%=vars.product_name%> places copies on the same physical machine. You can change this behavior, so <%=vars.product_name%> only places copies on separate machines.
 
-You can also control which members store your primary and secondary data copies. Geode provides two options:
+You can also control which members store your primary and secondary data copies. <%=vars.product_name%> provides two options:
 
--   **Fixed custom partitioning**. This option is set for the region. Fixed partitioning gives you absolute control over where your region data is hosted. With fixed partitioning, you provide Geode with the code that specifies the bucket and data store for each data entry in the region. When you use this option with redundancy, you specify the primary and secondary data stores. Fixed partitioning does not participate in rebalancing because all bucket locations are fixed by you.
--   **Redundancy zones**. This option is set at the member level. Redundancy zones let you separate primary and secondary copies by member groups, or zones. You assign each data host to a zone. Then Geode places redundant copies in different redundancy zones, the same as it places redundant copies on different physical machines. You can use this to split data copies across different machine racks or networks, This option allows you to add members on the fly and use rebalancing to redistribute the data load, with redundant data maintained in separate zones. When you use redundancy zones, Geode will not place two copies of the data in the same zone, so make sure you have enough zones.
+-   **Fixed custom partitioning**. This option is set for the region. Fixed partitioning gives you absolute control over where your region data is hosted. With fixed partitioning, you provide <%=vars.product_name%> with the code that specifies the bucket and data store for each data entry in the region. When you use this option with redundancy, you specify the primary and secondary data stores. Fixed partitioning does not participate in rebalancing because all bucket locations are fixed by you.
+-   **Redundancy zones**. This option is set at the member level. Redundancy zones let you separate primary and secondary copies by member groups, or zones. You assign each data host to a zone. Then <%=vars.product_name%> places redundant copies in different redundancy zones, the same as it places redundant copies on different physical machines. You can use this to split data copies across different machine racks or networks. This option allows you to add members on the fly and use rebalancing to redistribute the data load, with redundant data maintained in separate zones. When you use redundancy zones, <%=vars.product_name%> will not place two copies of the data in the same zone, so make sure you have enough zones.
 
 ## <a id="how_pr_ha_works__section_87A2429B6277497184926E08E64B81C6" class="no-quick-link"></a>Running Processes in Virtual Machines
 
-By default, Geode stores redundant copies on different machines. When you run your processes in virtual machines, the normal view of the machine becomes the VM and not the physical machine. If you run multiple VMs on the same physical machine, you could end up storing partitioned region primary buckets in separate VMs, but on the same physical machine as your secondaries. If the physical machine fails, you can lose data. When you run in VMs, you can configure Geode to identify the physical machine and store redundant copies on different physical machines.
+By default, <%=vars.product_name%> stores redundant copies on different machines. When you run your processes in virtual machines, the normal view of the machine becomes the VM and not the physical machine. If you run multiple VMs on the same physical machine, you could end up storing partitioned region primary buckets in separate VMs, but on the same physical machine as your secondaries. If the physical machine fails, you can lose data. When you run in VMs, you can configure <%=vars.product_name%> to identify the physical machine and store redundant copies on different physical machines.
 
 ## <a id="how_pr_ha_works__section_CAB9440BABD6484D99525766E937CB55" class="no-quick-link"></a>Reads and Writes in Highly-Available Partitioned Regions
 
-Geode treats reads and writes differently in highly-available partitioned regions than in other regions because the data is available in multiple members:
+<%=vars.product_name%> treats reads and writes differently in highly-available partitioned regions than in other regions because the data is available in multiple members:
 
 -   Write operations (like `put` and `create`) go to the primary for the data keys and then are distributed synchronously to the redundant copies. Events are sent to the members configured with `subscription-attributes` `interest-policy` set to `all`.
 -   Read operations go to any member holding a copy of the data, with the local cache favored, so a read intensive system can scale much better and handle higher loads.
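The read/write split in the two bullets above can be modeled with a short stdlib-only sketch: writes hit the primary and are propagated synchronously to every redundant copy, while reads may be served by any copy. This is a toy illustration of the behavior, not Geode code:

```java
// Toy model of a highly-available partitioned region entry set:
// put() updates the primary and all redundant copies synchronously;
// get() may be answered by any copy. Illustrative only -- not Geode API.
import java.util.*;

public class HaRegionModel {
    private final Map<String, String> primary = new HashMap<>();
    private final List<Map<String, String>> secondaries = new ArrayList<>();

    HaRegionModel(int redundantCopies) {
        for (int i = 0; i < redundantCopies; i++) {
            secondaries.add(new HashMap<>());
        }
    }

    // Write: update the primary, then distribute to every redundant copy
    // before returning (modeling the synchronous distribution).
    void put(String key, String value) {
        primary.put(key, value);
        for (Map<String, String> copy : secondaries) {
            copy.put(key, value);
        }
    }

    // Read: any copy can answer; serve from a secondary when one exists
    // to show that redundant copies hold the same data.
    String get(String key) {
        if (!secondaries.isEmpty()) {
            return secondaries.get(0).get(key);
        }
        return primary.get(key);
    }

    public static void main(String[] args) {
        HaRegionModel region = new HaRegionModel(2);
        region.put("customer1", "contact-info");
        System.out.println(region.get("customer1")); // prints "contact-info"
    }
}
```

Because every copy sees each write before the operation completes, a read served by any copy returns current data, which is why read-heavy workloads scale well with redundancy.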

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/join_query_partitioned_regions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/join_query_partitioned_regions.html.md.erb b/geode-docs/developing/partitioned_regions/join_query_partitioned_regions.html.md.erb
index 358b1a1..c48e328 100644
--- a/geode-docs/developing/partitioned_regions/join_query_partitioned_regions.html.md.erb
+++ b/geode-docs/developing/partitioned_regions/join_query_partitioned_regions.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-In order to perform equi-join operations on partitioned regions or partitioned regions and replicated regions, you need to use the `query.execute` method and supply it with a function execution context. You need to use Geode's FunctionService executor because join operations are not yet directly supported for partitioned regions without providing a function execution context.
+In order to perform equi-join operations on partitioned regions or partitioned regions and replicated regions, you need to use the `query.execute` method and supply it with a function execution context. You need to use <%=vars.product_name%>'s FunctionService executor because join operations are not yet directly supported for partitioned regions without providing a function execution context.
 
 See [Partitioned Region Query Restrictions](../query_additional/partitioned_region_query_restrictions.html#concept_5353476380D44CC1A7F586E5AE1CE7E8) for more information on partitioned region query limitations.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html.md.erb b/geode-docs/developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html.md.erb
index 1221873..b2ebc08 100644
--- a/geode-docs/developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html.md.erb
+++ b/geode-docs/developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html.md.erb
@@ -19,18 +19,18 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-You can customize how Apache Geode groups your partitioned region data with custom partitioning and data colocation.
+You can customize how <%=vars.product_name_long%> groups your partitioned region data with custom partitioning and data colocation.
 
--   **[Understanding Custom Partitioning and Data Colocation](../../developing/partitioned_regions/custom_partitioning_and_data_colocation.html)**
+-   **[Understanding Custom Partitioning and Data Colocation](custom_partitioning_and_data_colocation.html)**
 
     Custom partitioning and data colocation can be used separately or in conjunction with one another.
 
--   **[Custom-Partition Your Region Data](../../developing/partitioned_regions/using_custom_partition_resolvers.html)**
+-   **[Custom-Partition Your Region Data](using_custom_partition_resolvers.html)**
 
-    By default, Geode partitions each data entry into a bucket using a hashing policy on the key. Additionally, the physical location of the key-value pair is abstracted away from the application. You can change these policies for a partitioned region. You can provide your own data partitioning resolver and you can additionally specify which members host which data buckets.
+    By default, <%=vars.product_name%> partitions each data entry into a bucket using a hashing policy on the key. Additionally, the physical location of the key-value pair is abstracted away from the application. You can change these policies for a partitioned region. You can provide your own data partitioning resolver and you can additionally specify which members host which data buckets.
 
--   **[Colocate Data from Different Partitioned Regions](../../developing/partitioned_regions/colocating_partitioned_region_data.html)**
+-   **[Colocate Data from Different Partitioned Regions](colocating_partitioned_region_data.html)**
 
-    By default, Geode allocates the data locations for a partitioned region independent of the data locations for any other partitioned region. You can change this policy for any group of partitioned regions, so that cross-region, related data is all hosted by the same member. This colocation speeds queries and other operations that access data from the regions.
+    By default, <%=vars.product_name%> allocates the data locations for a partitioned region independent of the data locations for any other partitioned region. You can change this policy for any group of partitioned regions, so that cross-region, related data is all hosted by the same member. This colocation speeds queries and other operations that access data from the regions.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/overview_how_pr_ha_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/overview_how_pr_ha_works.html.md.erb b/geode-docs/developing/partitioned_regions/overview_how_pr_ha_works.html.md.erb
index 889c56c..e12ddc5 100644
--- a/geode-docs/developing/partitioned_regions/overview_how_pr_ha_works.html.md.erb
+++ b/geode-docs/developing/partitioned_regions/overview_how_pr_ha_works.html.md.erb
@@ -19,13 +19,13 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-By default, Apache Geode stores only a single copy of your partitioned region data among the region's data stores. You can configure Geode to maintain redundant copies of your partitioned region data for high availability.
+By default, <%=vars.product_name_long%> stores only a single copy of your partitioned region data among the region's data stores. You can configure <%=vars.product_name%> to maintain redundant copies of your partitioned region data for high availability.
 
--   **[Understanding High Availability for Partitioned Regions](../../developing/partitioned_regions/how_pr_ha_works.html)**
+-   **[Understanding High Availability for Partitioned Regions](how_pr_ha_works.html)**
 
     With high availability, each member that hosts data for the partitioned region gets some primary copies and some redundant (secondary) copies.
 
--   **[Configure High Availability for a Partitioned Region](../../developing/partitioned_regions/configuring_ha_for_pr.html)**
+-   **[Configure High Availability for a Partitioned Region](configuring_ha_for_pr.html)**
 
     Configure in-memory high availability for your partitioned region. Set other high-availability options, like redundancy zones and redundancy recovery strategies.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/overview_how_pr_single_hop_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/overview_how_pr_single_hop_works.html.md.erb b/geode-docs/developing/partitioned_regions/overview_how_pr_single_hop_works.html.md.erb
index 8be43f6..13d7498 100644
--- a/geode-docs/developing/partitioned_regions/overview_how_pr_single_hop_works.html.md.erb
+++ b/geode-docs/developing/partitioned_regions/overview_how_pr_single_hop_works.html.md.erb
@@ -21,11 +21,11 @@ limitations under the License.
 
 Single-hop data access enables the client pool to track where a partitioned region’s data is hosted in the servers. To access a single entry, the client directly contacts the server that hosts the key, in a single hop.
 
--   **[Understanding Client Single-Hop Access to Server-Partitioned Regions](../../developing/partitioned_regions/how_pr_single_hop_works.html)**
+-   **[Understanding Client Single-Hop Access to Server-Partitioned Regions](how_pr_single_hop_works.html)**
 
     With single-hop access the client connects to every server, so more connections are generally used. This works fine for smaller installations, but is a barrier to scaling.
 
--   **[Configure Client Single-Hop Access to Server-Partitioned Regions](../../developing/partitioned_regions/configure_pr_single_hop.html)**
+-   **[Configure Client Single-Hop Access to Server-Partitioned Regions](configure_pr_single_hop.html)**
 
     Configure your client/server system for direct, single-hop access to partitioned region data in the servers.
 


[31/51] [abbrv] geode git commit: GEODE-3249: internal messages should require credentials

Posted by kl...@apache.org.
GEODE-3249: internal messages should require credentials

Removed unnecessary statement in one of the tests added for this ticket.


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/7cbbf67f
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/7cbbf67f
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/7cbbf67f

Branch: refs/heads/feature/GEODE-1279
Commit: 7cbbf67f3ae313920421fe24f15a72ce27ea2308
Parents: 83c1916
Author: Bruce Schuchardt <bs...@pivotal.io>
Authored: Thu Aug 17 10:12:18 2017 -0700
Committer: Bruce Schuchardt <bs...@pivotal.io>
Committed: Thu Aug 17 10:13:38 2017 -0700

----------------------------------------------------------------------
 .../apache/geode/security/ClientAuthenticationPart2DUnitTest.java   | 1 -
 1 file changed, 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/7cbbf67f/geode-core/src/test/java/org/apache/geode/security/ClientAuthenticationPart2DUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/security/ClientAuthenticationPart2DUnitTest.java b/geode-core/src/test/java/org/apache/geode/security/ClientAuthenticationPart2DUnitTest.java
index 5a78535..f8ebe05 100644
--- a/geode-core/src/test/java/org/apache/geode/security/ClientAuthenticationPart2DUnitTest.java
+++ b/geode-core/src/test/java/org/apache/geode/security/ClientAuthenticationPart2DUnitTest.java
@@ -59,7 +59,6 @@ public class ClientAuthenticationPart2DUnitTest extends ClientAuthenticationTest
       Message message = mock(Message.class);
       when(message.getMessageType()).thenReturn(oldInternalMessages[i]);
 
-      serverConnection.setRequestMsg(message);
       Assert.assertFalse(serverConnection.isInternalMessage(message, false));
       Assert.assertTrue(serverConnection.isInternalMessage(message, true));
     }


[27/51] [abbrv] geode git commit: GEODE-2886: Logged the IllegalStateException inside WaitUntilFlushedFunction and returned the result as false.

Posted by kl...@apache.org.
GEODE-2886: Logged the IllegalStateException inside
WaitUntilFlushedFunction and returned the result as false.


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/a1c3fc76
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/a1c3fc76
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/a1c3fc76

Branch: refs/heads/feature/GEODE-1279
Commit: a1c3fc7660af757e0efbb2fe4f9911e4c81dbffc
Parents: 40185e8
Author: Amey Barve <ab...@apache.org>
Authored: Wed Aug 2 18:29:47 2017 +0530
Committer: Amey Barve <ab...@apache.org>
Committed: Thu Aug 17 15:47:30 2017 +0530

----------------------------------------------------------------------
 .../cache/lucene/internal/LuceneServiceImpl.java    | 16 ++++------------
 .../distributed/WaitUntilFlushedFunction.java       |  6 ++++--
 2 files changed, 8 insertions(+), 14 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/a1c3fc76/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java
index 7280d66..1b125ed 100644
--- a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java
+++ b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java
@@ -479,20 +479,12 @@ public class LuceneServiceImpl implements InternalLuceneService {
         new WaitUntilFlushedFunctionContext(indexName, timeout, unit);
     Execution execution = FunctionService.onRegion(dataRegion);
     ResultCollector rs = execution.setArguments(context).execute(WaitUntilFlushedFunction.ID);
-    List<Object> results = (List<Object>) rs.getResult();
-    if (results != null) {
-      if (results.get(0) instanceof IllegalStateException) {
+    List<Boolean> results = (List<Boolean>) rs.getResult();
+    for (Boolean oneResult : results) {
+      if (oneResult == false) {
         return false;
-      } else {
-        for (Object oneResult : results) {
-          if ((boolean) oneResult == false) {
-            return false;
-          }
-        }
-        return true;
       }
-    } else {
-      return false;
     }
+    return true;
   }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/a1c3fc76/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java
index 0fecc41..ca77873 100644
--- a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java
+++ b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java
@@ -61,8 +61,10 @@ public class WaitUntilFlushedFunction implements Function, InternalEntity {
       }
 
     } else {
-      resultSender.sendException(new IllegalStateException(
-          "The AEQ does not exist for the index " + indexName + " region " + region.getFullPath()));
+      IllegalStateException illegalStateException = new IllegalStateException(
+          "The AEQ does not exist for the index " + indexName + " region " + region.getFullPath());
+      logger.error(illegalStateException.getMessage());
+      resultSender.lastResult(result);
     }
     resultSender.lastResult(result);
   }
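
The new `LuceneServiceImpl` logic in the diff above reduces to a simple all-true fold over the collected function results: any member reporting `false` (including the missing-AEQ case, now logged server-side and returned as `false` rather than thrown) fails the whole wait. A self-contained sketch of that aggregation (the method name is illustrative):

```java
import java.util.List;

public class FlushResultAggregator {
    // Returns true only if every member reported a successful flush.
    // A single false result fails the whole wait, matching the
    // simplified loop in the LuceneServiceImpl change above.
    static boolean allFlushed(List<Boolean> results) {
        for (Boolean oneResult : results) {
            if (!oneResult) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(allFlushed(List.of(true, true)));  // true
        System.out.println(allFlushed(List.of(true, false))); // false
    }
}
```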


[23/51] [abbrv] geode git commit: GEODE-3427 CI failure in GMSJoinLeaveJUnitTest.testCoordinatorFindRequestSuccess

Posted by kl...@apache.org.
GEODE-3427 CI failure in GMSJoinLeaveJUnitTest.testCoordinatorFindRequestSuccess

Removed use of background threads and modified the test to focus on
the method findCoordinator. This is what the name of the test implies
it is validating.


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/91430e12
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/91430e12
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/91430e12

Branch: refs/heads/feature/GEODE-1279
Commit: 91430e122bdd9a6add219914a5175fddadf4ecf1
Parents: d291a45
Author: Bruce Schuchardt <bs...@pivotal.io>
Authored: Wed Aug 16 15:05:06 2017 -0700
Committer: Bruce Schuchardt <bs...@pivotal.io>
Committed: Wed Aug 16 15:06:45 2017 -0700

----------------------------------------------------------------------
 .../membership/gms/membership/GMSJoinLeave.java |  2 +-
 .../gms/membership/GMSJoinLeaveJUnitTest.java   | 25 +++++---------------
 2 files changed, 7 insertions(+), 20 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/91430e12/geode-core/src/main/java/org/apache/geode/distributed/internal/membership/gms/membership/GMSJoinLeave.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/distributed/internal/membership/gms/membership/GMSJoinLeave.java b/geode-core/src/main/java/org/apache/geode/distributed/internal/membership/gms/membership/GMSJoinLeave.java
index c63c30f..cba95f9 100644
--- a/geode-core/src/main/java/org/apache/geode/distributed/internal/membership/gms/membership/GMSJoinLeave.java
+++ b/geode-core/src/main/java/org/apache/geode/distributed/internal/membership/gms/membership/GMSJoinLeave.java
@@ -1069,7 +1069,7 @@ public class GMSJoinLeave implements JoinLeave, MessageHandler {
    * This contacts the locators to find out who the current coordinator is. All locators are
    * contacted. If they don't agree then we choose the oldest coordinator and return it.
    */
-  private boolean findCoordinator() {
+  boolean findCoordinator() {
     SearchState state = searchState;
 
     assert this.localAddress != null;

http://git-wip-us.apache.org/repos/asf/geode/blob/91430e12/geode-core/src/test/java/org/apache/geode/distributed/internal/membership/gms/membership/GMSJoinLeaveJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/distributed/internal/membership/gms/membership/GMSJoinLeaveJUnitTest.java b/geode-core/src/test/java/org/apache/geode/distributed/internal/membership/gms/membership/GMSJoinLeaveJUnitTest.java
index 1acb989..f4bdbbf 100644
--- a/geode-core/src/test/java/org/apache/geode/distributed/internal/membership/gms/membership/GMSJoinLeaveJUnitTest.java
+++ b/geode-core/src/test/java/org/apache/geode/distributed/internal/membership/gms/membership/GMSJoinLeaveJUnitTest.java
@@ -36,7 +36,6 @@ import org.apache.geode.distributed.internal.DistributionConfig;
 import org.apache.geode.distributed.internal.membership.InternalDistributedMember;
 import org.apache.geode.distributed.internal.membership.NetView;
 import org.apache.geode.distributed.internal.membership.gms.GMSMember;
-import org.apache.geode.distributed.internal.membership.gms.GMSUtil;
 import org.apache.geode.distributed.internal.membership.gms.ServiceConfig;
 import org.apache.geode.distributed.internal.membership.gms.Services;
 import org.apache.geode.distributed.internal.membership.gms.Services.Stopper;
@@ -68,8 +67,6 @@ import org.junit.Assert;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 import org.mockito.internal.verification.Times;
-import org.mockito.invocation.InvocationOnMock;
-import org.mockito.stubbing.Answer;
 import org.mockito.verification.Timeout;
 
 import java.io.IOException;
@@ -1222,7 +1219,6 @@ public class GMSJoinLeaveJUnitTest {
   }
 
   @Test
-  @Category(FlakyTest.class) // GEODE-3427 - intermittent CI failures
   public void testCoordinatorFindRequestSuccess() throws Exception {
     initMocks(false);
     HashSet<InternalDistributedMember> registrants = new HashSet<>();
@@ -1230,25 +1226,16 @@ public class GMSJoinLeaveJUnitTest {
     FindCoordinatorResponse fcr = new FindCoordinatorResponse(mockMembers[0], mockMembers[0], false,
         null, registrants, false, true, null);
     NetView view = createView();
-    JoinResponseMessage jrm = new JoinResponseMessage(mockMembers[0], view, 0);
 
     TcpClientWrapper tcpClientWrapper = mock(TcpClientWrapper.class);
     gmsJoinLeave.setTcpClientWrapper(tcpClientWrapper);
-    FindCoordinatorRequest fcreq =
-        new FindCoordinatorRequest(gmsJoinLeaveMemberId, new HashSet<>(), -1, null, 0, "");
-    int connectTimeout = (int) services.getConfig().getMemberTimeout() * 2;
-    when(tcpClientWrapper.sendCoordinatorFindRequest(new InetSocketAddress("localhost", 12345),
-        fcreq, connectTimeout)).thenReturn(fcr);
-    callAsnyc(() -> {
-      gmsJoinLeave.installView(view);
-    });
-    Awaitility.await().atMost(10, TimeUnit.SECONDS)
-        .until(() -> assertTrue("Should be able to join ", gmsJoinLeave.join()));
-  }
 
-  private void callAsnyc(Runnable run) {
-    Thread th = new Thread(run);
-    th.start();
+    when(tcpClientWrapper.sendCoordinatorFindRequest(isA(InetSocketAddress.class),
+        isA(FindCoordinatorRequest.class), isA(Integer.class))).thenReturn(fcr);
+
+    boolean foundCoordinator = gmsJoinLeave.findCoordinator();
+    assertTrue(gmsJoinLeave.searchState.toString(), foundCoordinator);
+    assertEquals(gmsJoinLeave.searchState.possibleCoordinator, mockMembers[0]);
   }
 
   @Test
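
The refactor above trades a background thread plus Awaitility polling for a direct, synchronous call to `findCoordinator`, which was made package-private for exactly this purpose. The deflaking idea in miniature (all names here are illustrative stand-ins, not the GMS classes):

```java
public class DirectCallOverPolling {
    static class Joiner {
        // Package-private seam, analogous to findCoordinator() losing
        // its private modifier so the test can call it directly.
        boolean findCoordinator() {
            return true;
        }
    }

    public static void main(String[] args) throws Exception {
        // Flaky style: do the work on a background thread, then wait
        // and poll for a side effect (timing-sensitive in CI).
        Thread th = new Thread(() -> new Joiner().findCoordinator());
        th.start();
        th.join();

        // Deterministic style: call the method under test directly
        // and assert on its return value.
        boolean foundCoordinator = new Joiner().findCoordinator();
        System.out.println(foundCoordinator);
    }
}
```

Removing the timing dependency is what lets the commit also drop the `FlakyTest` category from the test.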


[15/51] [abbrv] geode git commit: GEODE-3328: fix testAddGemFirePropertyFileToCommandLine on Windows

Posted by kl...@apache.org.
GEODE-3328: fix testAddGemFirePropertyFileToCommandLine on Windows

Modification of ca4b81 committed by Jinmei Liao


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/e07b5c1c
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/e07b5c1c
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/e07b5c1c

Branch: refs/heads/feature/GEODE-1279
Commit: e07b5c1c6531b7ad57c5e455fa0a31ce61f3b25f
Parents: bc655eb
Author: Kirk Lund <kl...@apache.org>
Authored: Tue Aug 15 11:47:58 2017 -0700
Committer: Kirk Lund <kl...@apache.org>
Committed: Tue Aug 15 14:57:24 2017 -0700

----------------------------------------------------------------------
 .../internal/cli/commands/GfshCommandJUnitTest.java         | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/e07b5c1c/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/GfshCommandJUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/GfshCommandJUnitTest.java b/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/GfshCommandJUnitTest.java
index da60c7a..f6c7cae 100644
--- a/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/GfshCommandJUnitTest.java
+++ b/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/GfshCommandJUnitTest.java
@@ -409,13 +409,16 @@ public class GfshCommandJUnitTest {
 
   @Test
   public void testAddGemFirePropertyFileToCommandLine() {
-    final List<String> commandLine = new ArrayList<>();
+    List<String> commandLine = new ArrayList<>();
     assertTrue(commandLine.isEmpty());
+
     StartMemberUtils.addGemFirePropertyFile(commandLine, null);
     assertTrue(commandLine.isEmpty());
-    StartMemberUtils.addGemFirePropertyFile(commandLine, new File("/path/to/gemfire.properties"));
+
+    File file = new File("/path/to/gemfire.properties");
+    StartMemberUtils.addGemFirePropertyFile(commandLine, file);
     assertFalse(commandLine.isEmpty());
-    assertTrue(commandLine.contains("-DgemfirePropertyFile=/path/to/gemfire.properties"));
+    assertTrue(commandLine.contains("-DgemfirePropertyFile=" + file.getAbsolutePath()));
   }
 
   @Test
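
The Windows fix in the diff above works because `File.getAbsolutePath()` resolves a leading `/` against the current drive on Windows (yielding something like `C:\path\to\gemfire.properties`), so asserting against a hard-coded Unix-style string fails there. A small sketch of the platform-neutral assertion style the commit adopts (the helper method is illustrative):

```java
import java.io.File;

public class PropertyFileArg {
    // Builds the JVM argument the way the fixed test does: from the
    // File's absolute path rather than a hard-coded Unix-style string.
    static String gemfirePropertyFileArg(File file) {
        return "-DgemfirePropertyFile=" + file.getAbsolutePath();
    }

    public static void main(String[] args) {
        File file = new File("/path/to/gemfire.properties");
        // Platform-dependent value; assert on prefix/suffix, not the
        // full string, so the check passes on both Unix and Windows.
        System.out.println(gemfirePropertyFileArg(file));
    }
}
```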


[05/51] [abbrv] geode git commit: GEODE-3395 Variable-ize product version and name in user guide - Reference section

Posted by kl...@apache.org.
GEODE-3395 Variable-ize product version and name in user guide - Reference section


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/13ad4b6e
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/13ad4b6e
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/13ad4b6e

Branch: refs/heads/feature/GEODE-1279
Commit: 13ad4b6e07d80cd9961f6fbd634213c462315073
Parents: c1129c7
Author: Dave Barnes <db...@pivotal.io>
Authored: Mon Aug 14 15:22:16 2017 -0700
Committer: Dave Barnes <db...@pivotal.io>
Committed: Mon Aug 14 15:22:58 2017 -0700

----------------------------------------------------------------------
 geode-book/Gemfile.lock                         |    2 +-
 .../source/subnavs/geode-subnav.erb             |   54 +-
 .../how_region_versioning_works.html.md.erb     |    4 +-
 .../disk_free_space_monitoring.html.md.erb      |    2 +-
 .../heap_use/off_heap_management.html.md.erb    |    2 +-
 .../region_compression.html.md.erb              |    2 +-
 geode-docs/reference/book_intro.html.md.erb     |   20 +-
 .../statistics/statistics_list.html.md.erb      | 1310 ------------------
 .../reference/statistics_list.html.md.erb       | 1310 ++++++++++++++++++
 .../topics/cache-elements-list.html.md.erb      |    4 +-
 .../reference/topics/cache_xml.html.md.erb      |   50 +-
 .../chapter_overview_cache_xml.html.md.erb      |    8 +-
 ...chapter_overview_regionshortcuts.html.md.erb |   54 +-
 .../client-cache-elements-list.html.md.erb      |    2 +-
 .../reference/topics/client-cache.html.md.erb   |   42 +-
 .../topics/gemfire_properties.html.md.erb       |   46 +-
 .../reference/topics/gfe_cache_xml.html.md.erb  |   78 +-
 ...handling_exceptions_and_failures.html.md.erb |   10 +-
 ...mory_requirements_for_cache_data.html.md.erb |   30 +-
 ...on-ascii_strings_in_config_files.html.md.erb |    6 +-
 .../region_shortcuts_reference.html.md.erb      |    2 +-
 21 files changed, 1516 insertions(+), 1522 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-book/Gemfile.lock
----------------------------------------------------------------------
diff --git a/geode-book/Gemfile.lock b/geode-book/Gemfile.lock
index 5f6b59a..232c3b3 100644
--- a/geode-book/Gemfile.lock
+++ b/geode-book/Gemfile.lock
@@ -198,7 +198,7 @@ PLATFORMS
   ruby
 
 DEPENDENCIES
-  bookbindery
+  bookbindery (= 10.1.7)
   libv8 (= 3.16.14.7)
 
 BUNDLED WITH

http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-book/master_middleman/source/subnavs/geode-subnav.erb
----------------------------------------------------------------------
diff --git a/geode-book/master_middleman/source/subnavs/geode-subnav.erb b/geode-book/master_middleman/source/subnavs/geode-subnav.erb
index 52e31a7..838b265 100644
--- a/geode-book/master_middleman/source/subnavs/geode-subnav.erb
+++ b/geode-book/master_middleman/source/subnavs/geode-subnav.erb
@@ -2997,86 +2997,86 @@ gfsh</a>
                         <a href="/docs/guide/12/reference/topics/memory_requirements_for_cache_data.html">Memory Requirements for Cached Data</a>
                     </li>
                     <li class="has_submenu">
-                        <a href="/docs/guide/12/reference/statistics/statistics_list.html">Geode Statistics List</a>
+                        <a href="/docs/guide/12/reference/statistics_list.html">Geode Statistics List</a>
                         <ul>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_DEF8D3644D3246AB8F06FE09A37DC5C8">Cache Performance (CachePerfStats)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_DEF8D3644D3246AB8F06FE09A37DC5C8">Cache Performance (CachePerfStats)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_EF5C2C59BFC74FFB8607F9571AB9A471">Cache Server (CacheServerStats)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_EF5C2C59BFC74FFB8607F9571AB9A471">Cache Server (CacheServerStats)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_B08C0783BBF9489E8BB48B4AEC597C62">Client-Side Notifications (CacheClientUpdaterStats)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_B08C0783BBF9489E8BB48B4AEC597C62">Client-Side Notifications (CacheClientUpdaterStats)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_04B7D7387E584712B7710B5ED1E876BB">Client-to-Server Messaging Performance (ClientStats)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_04B7D7387E584712B7710B5ED1E876BB">Client-to-Server Messaging Performance (ClientStats)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_6C247F61DB834C079A16BE92789D4692">Client Connection Pool (PoolStats)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_6C247F61DB834C079A16BE92789D4692">Client Connection Pool (PoolStats)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_66C0E7748501480B85209D57D24256D5">Continuous Querying (CQStatistics)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_66C0E7748501480B85209D57D24256D5">Continuous Querying (CQStatistics)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_D4ABED3FF94245C0BEE0F6FC9481E867">Delta Propagation (DeltaPropagationStatistics)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_D4ABED3FF94245C0BEE0F6FC9481E867">Delta Propagation (DeltaPropagationStatistics)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_6C2BECC63A83456190B029DEDB8F4BE3">Disk Space Usage (DiskDirStatistics)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_6C2BECC63A83456190B029DEDB8F4BE3">Disk Space Usage (DiskDirStatistics)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_983BFC6D53C74829A04A91C39E06315F">Disk Usage and Performance (DiskRegionStatistics)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_983BFC6D53C74829A04A91C39E06315F">Disk Usage and Performance (DiskRegionStatistics)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_ACB4161F10D64BC0B15871D003FF6FDF">Distributed System Messaging (DistributionStats)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_ACB4161F10D64BC0B15871D003FF6FDF">Distributed System Messaging (DistributionStats)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_78D346A580724E1EA645E31626EECE40">Distributed Lock Services (DLockStats)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_78D346A580724E1EA645E31626EECE40">Distributed Lock Services (DLockStats)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_5E212DDB0E8640689AD0A4659512E17A">Function Execution (FunctionServiceStatistics)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_5E212DDB0E8640689AD0A4659512E17A">Function Execution (FunctionServiceStatistics)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_C4199A541B1F4B82B6178C416C0FAE4B">Gateway Queue (GatewayStatistics)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_C4199A541B1F4B82B6178C416C0FAE4B">Gateway Queue (GatewayStatistics)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_86A61860024B480592DAC67FFB882538">Indexes (IndexStats)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_86A61860024B480592DAC67FFB882538">Indexes (IndexStats)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_607C3867602E410CAE5FAB26A7FF1CB9">JVM Performance</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_607C3867602E410CAE5FAB26A7FF1CB9">JVM Performance</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_C48B654F973E4B44AD825D459C23A6CD">Locator (LocatorStatistics)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_C48B654F973E4B44AD825D459C23A6CD">Locator (LocatorStatistics)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#LuceneStats">Lucene Indexes (LuceneIndexStats)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#LuceneStats">Lucene Indexes (LuceneIndexStats)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#topic_ohc_tjk_w5">Off-Heap (OffHeapMemoryStats)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#topic_ohc_tjk_w5">Off-Heap (OffHeapMemoryStats)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_923B28F01BC3416786D3AFBD87F22A5E">Operating System Statistics - Linux</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_923B28F01BC3416786D3AFBD87F22A5E">Operating System Statistics - Linux</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_35AC170770C944C3A336D9AEC2D2F7C5">Partitioned Regions (PartitionedRegion&lt;partitioned_region_name&gt;Statistics)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_35AC170770C944C3A336D9AEC2D2F7C5">Partitioned Regions (PartitionedRegion&lt;partitioned_region_name&gt;Statistics)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_374FBD92A3B74F6FA08AA23047929B4F">Region Entry Eviction – Count-Based (LRUStatistics)
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_374FBD92A3B74F6FA08AA23047929B4F">Region Entry Eviction – Count-Based (LRUStatistics)
                                 </a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_3D2AA2BCE5B6485699A7B6ADD1C49FF7">Region Entry Eviction – Size-based (LRUStatistics)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_3D2AA2BCE5B6485699A7B6ADD1C49FF7">Region Entry Eviction – Size-based (LRUStatistics)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_5362EF9AECBC48D69475697109ABEDFA">Server Notifications for All Clients (CacheClientNotifierStatistics)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_5362EF9AECBC48D69475697109ABEDFA">Server Notifications for All Clients (CacheClientNotifierStatistics)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_E03865F509E543D9B8F9462B3DA6255E">Server Notifications for Single Client (CacheClientProxyStatistics)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_E03865F509E543D9B8F9462B3DA6255E">Server Notifications for Single Client (CacheClientProxyStatistics)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_3AB1C0AA55014163A2BBF68E13D25E3A">Server-to-Client Messaging Performance (ClientSubscriptionStats)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_3AB1C0AA55014163A2BBF68E13D25E3A">Server-to-Client Messaging Performance (ClientSubscriptionStats)</a>
                             </li>
                             <li>
-                                <a href="/docs/guide/12/reference/statistics/statistics_list.html#section_55F3AF6413474317902845EE4996CC21">Statistics Collection (StatSampler)</a>
+                                <a href="/docs/guide/12/reference/statistics_list.html#section_55F3AF6413474317902845EE4996CC21">Statistics Collection (StatSampler)</a>
                             </li>
                         </ul>
                     </li>

http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/developing/distributed_regions/how_region_versioning_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/distributed_regions/how_region_versioning_works.html.md.erb b/geode-docs/developing/distributed_regions/how_region_versioning_works.html.md.erb
index c8b98f7..9911d31 100644
--- a/geode-docs/developing/distributed_regions/how_region_versioning_works.html.md.erb
+++ b/geode-docs/developing/distributed_regions/how_region_versioning_works.html.md.erb
@@ -75,7 +75,7 @@ A Geode member or client that receives an update message first compares the upda
 An identical version stamp indicates that multiple Geode members updated the same entry at the same time. To resolve a concurrent update, a Geode member always applies (or keeps) the region entry that has the highest membership ID; the region entry having the lower membership ID is discarded.
 
 **Note:**
-When a Geode member discards an update message (either for an out-of-order update or when resolving a concurrent update), it does not pass the discarded event to an event listener for the region. You can track the number of discarded updates for each member using the `conflatedEvents` statistic. See [Geode Statistics List](../../reference/statistics/statistics_list.html#statistics_list). Some members may discard an update while other members apply the update, depending on the order in which each member receives the update. For this reason, the `conflatedEvents` statistic differs for each Geode member. The example below describes this behavior in more detail.
+When a Geode member discards an update message (either for an out-of-order update or when resolving a concurrent update), it does not pass the discarded event to an event listener for the region. You can track the number of discarded updates for each member using the `conflatedEvents` statistic. See [Geode Statistics List](../../reference/statistics_list.html#statistics_list). Some members may discard an update while other members apply the update, depending on the order in which each member receives the update. For this reason, the `conflatedEvents` statistic differs for each Geode member. The example below describes this behavior in more detail.
 
 The following example shows how a concurrent update is handled in a distributed system of three Geode members. Assume that Members A, B, and C have membership IDs of 1, 2, and 3, respectively. Each member currently stores an entry, X, in their caches at version C2 (the entry was last updated by member C):
 
@@ -110,7 +110,7 @@ A tombstone for a replicated or partitioned region expires after 10 minutes. Exp
 **Note:**
 To avoid out-of-memory errors, a Geode member also initiates garbage collection for tombstones when the amount of free memory drops below 30 percent of total memory.
 
-You can monitor the total number of tombstones in a cache using the `tombstoneCount` statistic in `CachePerfStats`. The `tombstoneGCCount` statistic records the total number of tombstone garbage collection cycles that a member has performed. `replicatedTombstonesSize` and `nonReplicatedTombstonesSize` show the approximate number of bytes that are currently consumed by tombstones in replicated or partitioned regions, and in non-replicated regions, respectively. See [Geode Statistics List](../../reference/statistics/statistics_list.html#statistics_list).
+You can monitor the total number of tombstones in a cache using the `tombstoneCount` statistic in `CachePerfStats`. The `tombstoneGCCount` statistic records the total number of tombstone garbage collection cycles that a member has performed. `replicatedTombstonesSize` and `nonReplicatedTombstonesSize` show the approximate number of bytes that are currently consumed by tombstones in replicated or partitioned regions, and in non-replicated regions, respectively. See [Geode Statistics List](../../reference/statistics_list.html#statistics_list).
 
 ## <a id="topic_321B05044B6641FCAEFABBF5066BD399__section_4D0140E96A3141EB8D983D0A43464097" class="no-quick-link"></a>About Region.clear() Operations
 

http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/managing/disk_storage/disk_free_space_monitoring.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/disk_storage/disk_free_space_monitoring.html.md.erb b/geode-docs/managing/disk_storage/disk_free_space_monitoring.html.md.erb
index 837ac25..0fef3d3 100644
--- a/geode-docs/managing/disk_storage/disk_free_space_monitoring.html.md.erb
+++ b/geode-docs/managing/disk_storage/disk_free_space_monitoring.html.md.erb
@@ -52,6 +52,6 @@ You can obtain statistics on disk space usage and the performance of disk space
 -   `volumeFreeSpaceChecks`
 -   `volumeFreeSpaceTime`
 
-See [Disk Space Usage (DiskDirStatistics)](../../reference/statistics/statistics_list.html#section_6C2BECC63A83456190B029DEDB8F4BE3).
+See [Disk Space Usage (DiskDirStatistics)](../../reference/statistics_list.html#section_6C2BECC63A83456190B029DEDB8F4BE3).
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/managing/heap_use/off_heap_management.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/heap_use/off_heap_management.html.md.erb b/geode-docs/managing/heap_use/off_heap_management.html.md.erb
index 05f33e1..3e1515d 100644
--- a/geode-docs/managing/heap_use/off_heap_management.html.md.erb
+++ b/geode-docs/managing/heap_use/off_heap_management.html.md.erb
@@ -189,7 +189,7 @@ For example:
 
 ## <a id="managing-off-heap-memory__section_o4s_tg5_gv" class="no-quick-link"></a>Tuning Off-heap Memory Usage
 
-Geode collects statistics on off-heap memory usage which you can view with the gfsh `show metrics` command. See [Off-Heap (OffHeapMemoryStats)](../../reference/statistics/statistics_list.html#topic_ohc_tjk_w5) for a description of available off-heap statistics.
+Geode collects statistics on off-heap memory usage which you can view with the gfsh `show metrics` command. See [Off-Heap (OffHeapMemoryStats)](../../reference/statistics_list.html#topic_ohc_tjk_w5) for a description of available off-heap statistics.
 
 Off-heap memory is optimized, by default, for storing values of 128 KB in size. This figure is known as the "maximum optimized stored value size," which we will denote here by *maxOptStoredValSize*. If your data typically runs larger, you can enhance performance by increasing the OFF\_HEAP\_FREE\_LIST\_COUNT system parameter to a number larger than `maxOptStoredValSize/8`, where *maxOptStoredValSize* is expressed in KB (1024 bytes). So, the default values correspond to:
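As a quick arithmetic check of the guidance above, the lower bound for the OFF\_HEAP\_FREE\_LIST\_COUNT parameter can be sketched as follows. The class and method names are illustrative, not a Geode API; only the `maxOptStoredValSize/8` formula comes from the text.

```java
// Sketch: lower bound the guide suggests for OFF_HEAP_FREE_LIST_COUNT,
// i.e. a value larger than maxOptStoredValSize/8, with maxOptStoredValSize
// expressed in KB. Illustrative only; not part of the Geode API.
public class OffHeapFreeListBound {

  static int freeListLowerBound(int maxOptStoredValSizeKb) {
    return maxOptStoredValSizeKb / 8;
  }

  public static void main(String[] args) {
    // Default tuning: values up to 128 KB -> bound of 16
    System.out.println(freeListLowerBound(128));  // prints 16
    // If typical values are 1 MB (1024 KB), choose a count above 128
    System.out.println(freeListLowerBound(1024)); // prints 128
  }
}
```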
 

http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/managing/region_compression/region_compression.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/region_compression/region_compression.html.md.erb b/geode-docs/managing/region_compression/region_compression.html.md.erb
index 754dc85..ac351dd 100644
--- a/geode-docs/managing/region_compression/region_compression.html.md.erb
+++ b/geode-docs/managing/region_compression/region_compression.html.md.erb
@@ -221,6 +221,6 @@ The following statistics provide monitoring for cache compression:
 -   `preCompressedBytes`
 -   `postCompressedBytes`
 
-See [Cache Performance (CachePerfStats)](../../reference/statistics/statistics_list.html#section_DEF8D3644D3246AB8F06FE09A37DC5C8) for statistic descriptions.
+See [Cache Performance (CachePerfStats)](../../reference/statistics_list.html#section_DEF8D3644D3246AB8F06FE09A37DC5C8) for statistic descriptions.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/reference/book_intro.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/reference/book_intro.html.md.erb b/geode-docs/reference/book_intro.html.md.erb
index e05e618..a7390f4 100644
--- a/geode-docs/reference/book_intro.html.md.erb
+++ b/geode-docs/reference/book_intro.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Reference
----
+<% set_title(product_name_long, "Reference") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -19,11 +17,11 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-*Reference* documents Apache Geode properties, region attributes, the `cache.xml` file, cache memory requirements, and statistics.
+*Reference* documents <%=vars.product_name_long%> properties, region attributes, the `cache.xml` file, cache memory requirements, and statistics.
 
--   **[gemfire.properties and gfsecurity.properties (Geode Properties)](../reference/topics/gemfire_properties.html)**
+-   **[gemfire.properties and gfsecurity.properties (<%=vars.product_name%> Properties)](../reference/topics/gemfire_properties.html)**
 
-    You use the `gemfire.properties` settings to join a distributed system and configure system member behavior. Distributed system members include applications, the cache server, the locator, and other Geode processes.
+    You use the `gemfire.properties` settings to join a distributed system and configure system member behavior. Distributed system members include applications, the cache server, the locator, and other <%=vars.product_name%> processes.
 
 -   **[cache.xml](../reference/topics/chapter_overview_cache_xml.html)**
 
@@ -31,18 +29,18 @@ limitations under the License.
 
 -   **[Region Shortcuts](../reference/topics/chapter_overview_regionshortcuts.html)**
 
-    This topic describes the various region shortcuts you can use to configure Geode regions.
+    This topic describes the various region shortcuts you can use to configure <%=vars.product_name%> regions.
 
 -   **[Exceptions and System Failures](../reference/topics/handling_exceptions_and_failures.html)**
 
-    Your application needs to catch certain classes to handle all the exceptions and system failures thrown by Apache Geode.
+    Your application needs to catch certain classes to handle all the exceptions and system failures thrown by <%=vars.product_name_long%>.
 
 -   **[Memory Requirements for Cached Data](../reference/topics/memory_requirements_for_cache_data.html)**
 
-    Geode solutions architects need to estimate resource requirements for meeting application performance, scalability and availability goals.
+    <%=vars.product_name%> solutions architects need to estimate resource requirements for meeting application performance, scalability and availability goals.
 
--   **[Geode Statistics List](../reference/statistics/statistics_list.html)**
+-   **[<%=vars.product_name%> Statistics List](statistics_list.html)**
 
-    This section describes the primary statistics gathered by Geode when statistics are enabled.
+    This section describes the primary statistics gathered by <%=vars.product_name%> when statistics are enabled.
 
 


[25/51] [abbrv] geode git commit: GEODE-2886 : 1. sent IllegalStateException instead of throwing IllegalArgumentException inside WaitUntilFlushedFunction. 2. Added dunit test with invalid indexName to get IllegalStateException to the caller of the WaitUn

Posted by kl...@apache.org.
GEODE-2886: 1. Send an IllegalStateException to the caller instead of
throwing an IllegalArgumentException inside WaitUntilFlushedFunction.
2. Added a dunit test with an invalid indexName so that the caller of
WaitUntilFlushedFunction receives the IllegalStateException.


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/11971d51
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/11971d51
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/11971d51

Branch: refs/heads/feature/GEODE-1279
Commit: 11971d51b946804e5c01c752be95c3174fea3569
Parents: 2f61dd6
Author: Amey Barve <ab...@apache.org>
Authored: Fri Jun 23 16:18:33 2017 +0530
Committer: Amey Barve <ab...@apache.org>
Committed: Thu Aug 17 15:38:41 2017 +0530

----------------------------------------------------------------------
 .../cache/lucene/internal/LuceneServiceImpl.java  | 16 ++++++++++++----
 .../distributed/WaitUntilFlushedFunction.java     |  4 ++--
 .../lucene/LuceneQueriesIntegrationTest.java      | 18 ++++++++++++++++++
 3 files changed, 32 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/11971d51/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java
index 39b5d36..258b8a4 100644
--- a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java
+++ b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/LuceneServiceImpl.java
@@ -473,12 +473,20 @@ public class LuceneServiceImpl implements InternalLuceneService {
         new WaitUntilFlushedFunctionContext(indexName, timeout, unit);
     Execution execution = FunctionService.onRegion(dataRegion);
     ResultCollector rs = execution.setArguments(context).execute(WaitUntilFlushedFunction.ID);
-    List<Boolean> results = (List<Boolean>) rs.getResult();
-    for (Boolean oneResult : results) {
-      if (oneResult == false) {
+    List<Object> results = (List<Object>) rs.getResult();
+    if (results != null) {
+      if (results.get(0) instanceof IllegalStateException) {
         return false;
+      } else {
+        for (Object oneResult : results) {
+          if ((boolean) oneResult == false) {
+            return false;
+          }
+        }
+        return true;
       }
+    } else {
+      return false;
     }
-    return true;
   }
 }
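For readers following the digest, the result handling this commit introduces in `LuceneServiceImpl.waitUntilFlushed` can be isolated as a plain-Java sketch. The class and method names below are illustrative, not Geode code; only the control flow mirrors the patch above (an extra empty-list guard is added for safety).

```java
import java.util.List;

// Plain-Java illustration of the new result handling: the wait fails if the
// function reported an exception as its first result, or if any member
// reported false.
public class FlushResultCheck {

  static boolean allFlushed(List<Object> results) {
    if (results == null || results.isEmpty()) {
      return false; // no results at all counts as "not flushed"
    }
    if (results.get(0) instanceof IllegalStateException) {
      return false; // e.g. the AEQ for the index does not exist
    }
    for (Object oneResult : results) {
      if (!(Boolean) oneResult) {
        return false; // some member has not yet flushed
      }
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(allFlushed(List.of(true, true)));  // true
    System.out.println(allFlushed(List.of(true, false))); // false
    System.out.println(allFlushed(
        List.of(new IllegalStateException("no AEQ"))));   // false
  }
}
```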

http://git-wip-us.apache.org/repos/asf/geode/blob/11971d51/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java
index bae0e74..6c14fcd 100644
--- a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java
+++ b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java
@@ -62,8 +62,8 @@ public class WaitUntilFlushedFunction implements Function, InternalEntity {
       }
 
     } else {
-      throw new IllegalArgumentException(
-          "The AEQ does not exist for the index " + indexName + " region " + region.getFullPath());
+      resultSender.sendException(new IllegalStateException(
+          "The AEQ does not exist for the index " + indexName + " region " + region.getFullPath()));
     }
     resultSender.lastResult(result);
   }

http://git-wip-us.apache.org/repos/asf/geode/blob/11971d51/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java
index fb86e19..779b12a 100644
--- a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java
+++ b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java
@@ -331,6 +331,24 @@ public class LuceneQueriesIntegrationTest extends LuceneIntegrationTest {
   }
 
   @Test()
+  public void testWaitUntilFlushedForException() throws Exception {
+    Map<String, Analyzer> fields = new HashMap<String, Analyzer>();
+    fields.put("name", null);
+    fields.put("lastName", null);
+    fields.put("address", null);
+    luceneService.createIndexFactory().setFields(fields).create(INDEX_NAME, REGION_NAME);
+    Region region = cache.createRegionFactory(RegionShortcut.PARTITION).create(REGION_NAME);
+    final LuceneIndex index = luceneService.getIndex(INDEX_NAME, REGION_NAME);
+
+    // This is to send IllegalStateException from WaitUntilFlushedFunction
+    String nonCreatedIndex = "index2";
+
+    boolean b =
+        luceneService.waitUntilFlushed(nonCreatedIndex, REGION_NAME, 60000, TimeUnit.MILLISECONDS);
+    assertFalse(b);
+  }
+
+  @Test()
   public void shouldAllowQueryOnRegionWithStringValue() throws Exception {
     luceneService.createIndexFactory().setFields(LuceneService.REGION_VALUE_FIELD)
         .create(INDEX_NAME, REGION_NAME);


[41/51] [abbrv] geode git commit: GEODE-3395 Variable-ize product version and name in user guide - Developing

Posted by kl...@apache.org.
http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/transactions/working_with_transactions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/working_with_transactions.html.md.erb b/geode-docs/developing/transactions/working_with_transactions.html.md.erb
index 4a26d4c..d75f6ad 100644
--- a/geode-docs/developing/transactions/working_with_transactions.html.md.erb
+++ b/geode-docs/developing/transactions/working_with_transactions.html.md.erb
@@ -1,6 +1,4 @@
----
-title: Working with Geode Cache Transactions
----
+<% set_title("Working with", product_name, "Cache Transactions") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -21,7 +19,7 @@ limitations under the License.
 <a id="topic_tx2_gs4_5k"></a>
 
 
-This section contains guidelines and additional information on working with Geode and its cache transactions.
+This section contains guidelines and additional information on working with <%=vars.product_name%> and its cache transactions.
 
 -   **[Setting Global Copy on Read](#concept_vx2_gs4_5k)**
 
@@ -134,13 +132,13 @@ Local expiration actions do not cause write conflicts, but distributed expiratio
 
 A transaction that modifies a region in which consistency checking is enabled generates all necessary version information for region updates when the transaction commits.
 
-If a transaction modifies a normal, preloaded or empty region, the transaction is first delegated to a Geode member that holds a replicate for the region. This behavior is similar to the transactional behavior for partitioned regions, where the partitioned region transaction is forwarded to a member that hosts the primary for the partitioned region update.
+If a transaction modifies a normal, preloaded or empty region, the transaction is first delegated to a <%=vars.product_name%> member that holds a replicate for the region. This behavior is similar to the transactional behavior for partitioned regions, where the partitioned region transaction is forwarded to a member that hosts the primary for the partitioned region update.
 
-The limitation for transactions with a normal, preloaded or empty region is that, when consistency checking is enabled, a transaction cannot perform a `localDestroy` or `localInvalidate` operation against the region. Geode throws an `UnsupportedOperationInTransactionException` exception in such cases. An application should use a `Destroy` or `Invalidate` operation in place of a `localDestroy` or `localInvalidate` when consistency checks are enabled.
+The limitation for transactions with a normal, preloaded or empty region is that, when consistency checking is enabled, a transaction cannot perform a `localDestroy` or `localInvalidate` operation against the region. <%=vars.product_name%> throws an `UnsupportedOperationInTransactionException` in such cases. An application should use a `Destroy` or `Invalidate` operation in place of a `localDestroy` or `localInvalidate` when consistency checks are enabled.
 
 ## Suspending and Resuming Transactions
 
-The Geode `CacheTransactionManager` API provides the ability to suspend and resume transactions with the `suspend` and `resume` methods. The ability to suspend and resume is useful when a thread must perform some operations that should not be part of the transaction before the transaction can complete. A complex use case of suspend and resume implements a transaction that spans clients in which only one client at a time will not be suspended.
+The <%=vars.product_name%> `CacheTransactionManager` API provides the ability to suspend and resume transactions with the `suspend` and `resume` methods. The ability to suspend and resume is useful when a thread must perform some operations that should not be part of the transaction before the transaction can complete. A more advanced use of suspend and resume implements a transaction that spans multiple clients, in which only one client at a time holds the active (unsuspended) transaction.
 
 Once a transaction is suspended, it loses the transactional view of the cache. None of the operations done within the transaction are visible to the thread. Any operations that are performed by the thread while the transaction is suspended are not part of the transaction.
 
@@ -150,17 +148,17 @@ Before resuming a transaction, you may want to check if the transaction exists o
 
 If the member with the primary copy of the data crashes, the transactional view associated with that data is lost. The secondary member for the data will not be able to resume any transactions suspended on the crashed member. You will need to take remedial steps to retry the transaction on a new primary copy of the data.
 
-If a suspended transaction is not touched for a period of time, Geode cleans it up automatically. By default, the timeout for a suspended transaction is 30 minutes and can be configured using the system property `gemfire.suspendedtxTimeout`. For example, `gemfire.suspendedtxTimeout=60` specifies a timeout of 60 minutes.
+If a suspended transaction is not touched for a period of time, <%=vars.product_name%> cleans it up automatically. By default, the timeout for a suspended transaction is 30 minutes and can be configured using the system property `gemfire.suspendedtxTimeout`. For example, `gemfire.suspendedtxTimeout=60` specifies a timeout of 60 minutes.
 
 See [Basic Suspend and Resume Transaction Example](transaction_suspend_resume_example.html) for a sample code fragment that suspends and resumes a transaction.
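A minimal sketch of the suspend/resume flow described above, assuming an existing cache and region (variable names are illustrative; this fragment requires a running <%=vars.product_name%> member and is not a complete program):

```java
CacheTransactionManager txMgr = cache.getCacheTransactionManager();

txMgr.begin();
region.put("key-1", "value-1"); // part of the transaction

// Suspend: this thread loses the transactional view.
TransactionId txId = txMgr.suspend();
region.put("key-2", "not-transactional"); // outside the transaction

// Resume later (possibly on another thread) and commit.
if (txMgr.exists(txId)) {
  txMgr.resume(txId);
  region.put("key-3", "value-3");
  txMgr.commit();
}
```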
 
 ## Using Cache Writer and Cache Listener Plug-Ins
 
-All standard Geode application plug-ins work with transactions. In addition, the transaction interface offers specialized plug-ins that support transactional operation.
+All standard <%=vars.product_name%> application plug-ins work with transactions. In addition, the transaction interface offers specialized plug-ins that support transactional operation.
 
-No direct interaction exists between client transactions and client application plug-ins. When a client runs a transaction, Geode calls the plug-ins that are installed on the transaction's server delegate and its server host. Client application plug-ins are not called for operations inside the transaction or for the transaction as a whole. When the transaction is committed, the changes to the server cache are sent to the client cache according to client interest registration. These events can result in calls to the client's `CacheListener`s, as with any other events received from the server.
+No direct interaction exists between client transactions and client application plug-ins. When a client runs a transaction, <%=vars.product_name%> calls the plug-ins that are installed on the transaction's server delegate and its server host. Client application plug-ins are not called for operations inside the transaction or for the transaction as a whole. When the transaction is committed, the changes to the server cache are sent to the client cache according to client interest registration. These events can result in calls to the client's `CacheListener`s, as with any other events received from the server.
 
-The `EntryEvent` that a callback receives has a unique Geode transaction ID, so the cache listener can associate each event, as it occurs, with a particular transaction. The transaction ID of an `EntryEvent` that is not part of a transaction is null to distinguish it from a transaction ID.
+The `EntryEvent` that a callback receives has a unique <%=vars.product_name%> transaction ID, so the cache listener can associate each event, as it occurs, with a particular transaction. The transaction ID of an `EntryEvent` that is not part of a transaction is null to distinguish it from a transaction ID.
 
 -   `CacheLoader`. When a cache loader is called by a transaction operation, values loaded by the cache loader may cause a write conflict when the transaction commits.
 -   `CacheWriter`. During a transaction, if a cache writer exists, its methods are invoked as usual for all operations, as the operations are called in the transactions. The `netWrite` operation is not used. The only cache writer used is the one in the member where the transactional data resides.
@@ -170,9 +168,9 @@ For more information on writing cache event handlers, see [Implementing Cache Ev
 
 ## <a id="concept_ocw_vf1_wk" class="no-quick-link"></a>Configuring Transaction Plug-In Event Handlers
 
-Geode has two types of transaction plug-ins: Transaction Writers and Transaction Listeners. You can optionally install one transaction writer and one or more transaction listeners per cache.
+<%=vars.product_name%> has two types of transaction plug-ins: Transaction Writers and Transaction Listeners. You can optionally install one transaction writer and one or more transaction listeners per cache.
 
-Like JTA global transactions, you can use transaction plug-in event handlers to coordinate Geode cache transaction activity with an external data store. However, you typically use JTA global transactions when Geode is running as a peer data store with your external data stores. Transaction writers and listeners are typically used when Geode is acting as a front end cache to your backend database.
+Like JTA global transactions, you can use transaction plug-in event handlers to coordinate <%=vars.product_name%> cache transaction activity with an external data store. However, you typically use JTA global transactions when <%=vars.product_name%> is running as a peer data store with your external data stores. Transaction writers and listeners are typically used when <%=vars.product_name%> is acting as a front end cache to your backend database.
 
 **Note:**
 You can also use transaction plug-in event handlers when running JTA global transactions.
@@ -181,7 +179,7 @@ You can also use transaction plug-in event handlers when running JTA global tran
 
 When you commit a transaction, if a transaction writer is installed in the cache where the data updates were performed, it is called. The writer can do whatever work you need, including aborting the transaction.
 
+The transaction writer is the last place that an application can roll back a transaction. If the transaction writer throws any exception, the transaction is rolled back. For example, you might use a transaction writer to update a backend data source before the <%=vars.product_name%> cache transaction completes the commit. If the backend data source update fails, the transaction writer implementation can throw a [TransactionWriterException](/releases/latest/javadoc/org/apache/geode/cache/TransactionWriterException.html) to veto the transaction.
+The transaction writer is the last place that an application can rollback a transaction. If the transaction writer throws any exception, the transaction is rolled back. For example, you might use a transaction writer to update a backend data source before the <%=vars.product_name%> cache transaction completes the commit. If the backend data source update fails, the transaction writer implementation can throw a [TransactionWriterException](/releases/latest/javadoc/org/apache/geode/cache/TransactionWriterException.html) to veto the transaction.
 
 A typical usage scenario would be to use the transaction writer to prepare the commit on the external database. Then in a transaction listener, you can apply the commit on the database.
 
@@ -193,7 +191,7 @@ Transaction listeners have access to the transactional view and thus are not aff
 
 A transaction listener can preserve the result of a transaction, perhaps to compare with other transactions, or for reference in case of a failed commit. When a commit fails and the transaction ends, the application cannot just retry the transaction, but must build up the data again. For most applications, the most efficient action is just to start a new transaction and go back through the application logic again.
 
+The rollback and failed commit operations are local to the member where the transactional operations are run. When a successful commit writes to a distributed or partitioned region, however, the transaction results are distributed to other members in the same way as other updates. The transaction listeners on the receiving members reflect the changes the transaction makes in that member, not in the originating member. Any exceptions thrown by the transaction listener are caught by <%=vars.product_name%> and logged.
+The rollback and failed commit operations are local to the member where the transactional operations are run. When a successful commit writes to a distributed or partitioned region, however, the transaction results are distributed to other members the same as other updates. The transaction listener on the receiving members reflect the changes the transaction makes in that member, not the originating member. Any exceptions thrown by the transaction listener are caught by <%=vars.product_name%> and logged.
 
 To configure a transaction listener, add a `cache-transaction-manager` configuration to the cache definition and define one or more instances of `transaction-listener` there. The only parameter to this `transaction-listener` is `URL`, which must be a string, as shown in the following cache.xml example.
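The cache.xml example referenced above falls outside this hunk; a minimal sketch of the structure it describes follows (the listener class name and URL value are placeholders):

```xml
<cache-transaction-manager>
  <transaction-listener>
    <class-name>com.example.MyTransactionListener</class-name>
    <parameter name="URL">
      <string>jdbc:cloudscape:rmi:MyData</string>
    </parameter>
  </transaction-listener>
</cache-transaction-manager>
```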
 


[35/51] [abbrv] geode git commit: GEODE-3235: Deploy jar registers functions which extend FunctionAdapter

Posted by kl...@apache.org.
GEODE-3235: Deploy jar registers functions which extend FunctionAdapter


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/64f33c3e
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/64f33c3e
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/64f33c3e

Branch: refs/heads/feature/GEODE-1279
Commit: 64f33c3e456af775d7ee35b05a67f76cb3a23941
Parents: 82fad64
Author: Jared Stewart <js...@pivotal.io>
Authored: Tue Jul 25 15:32:18 2017 -0700
Committer: Jared Stewart <js...@pivotal.io>
Committed: Thu Aug 17 15:57:59 2017 -0700

----------------------------------------------------------------------
 .../org/apache/geode/internal/DeployedJar.java  |  49 ++++----
 .../internal/deployment/FunctionScanner.java    |  47 ++++++++
 ...loyCommandFunctionRegistrationDUnitTest.java | 118 +++++++++++++++++++
 .../deployment/FunctionScannerTest.java         | 106 +++++++++++++++++
 .../AbstractExtendsFunctionAdapter.java         |  24 ++++
 .../internal/deployment/AbstractFunction.java   |  33 ++++++
 .../deployment/AbstractImplementsFunction.java  |  24 ++++
 ...teExtendsAbstractExtendsFunctionAdapter.java |  23 ++++
 ...ncreteExtendsAbstractImplementsFunction.java |  23 ++++
 .../deployment/ExtendsAbstractFunction.java     |  25 ++++
 .../deployment/ExtendsFunctionAdapter.java      |  25 ++++
 .../internal/deployment/ImplementsFunction.java |  24 ++++
 12 files changed, 494 insertions(+), 27 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/64f33c3e/geode-core/src/main/java/org/apache/geode/internal/DeployedJar.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/DeployedJar.java b/geode-core/src/main/java/org/apache/geode/internal/DeployedJar.java
index 037ef9e..a341ee3 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/DeployedJar.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/DeployedJar.java
@@ -14,19 +14,6 @@
  */
 package org.apache.geode.internal;
 
-import io.github.lukehutch.fastclasspathscanner.FastClasspathScanner;
-import io.github.lukehutch.fastclasspathscanner.scanner.ScanResult;
-import org.apache.commons.collections.CollectionUtils;
-import org.apache.geode.cache.CacheClosedException;
-import org.apache.geode.cache.CacheFactory;
-import org.apache.geode.cache.Declarable;
-import org.apache.geode.cache.execute.Function;
-import org.apache.geode.cache.execute.FunctionService;
-import org.apache.geode.internal.cache.InternalCache;
-import org.apache.geode.internal.logging.LogService;
-import org.apache.geode.pdx.internal.TypeRegistry;
-import org.apache.logging.log4j.Logger;
-
 import java.io.ByteArrayInputStream;
 import java.io.ByteArrayOutputStream;
 import java.io.File;
@@ -38,7 +25,6 @@ import java.lang.reflect.Constructor;
 import java.lang.reflect.Modifier;
 import java.net.MalformedURLException;
 import java.net.URL;
-import java.net.URLClassLoader;
 import java.nio.file.Files;
 import java.security.MessageDigest;
 import java.security.NoSuchAlgorithmException;
@@ -53,9 +39,22 @@ import java.util.jar.JarInputStream;
 import java.util.regex.Pattern;
 import java.util.stream.Stream;
 
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.logging.log4j.Logger;
+
+import org.apache.geode.cache.CacheClosedException;
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.cache.Declarable;
+import org.apache.geode.cache.execute.Function;
+import org.apache.geode.cache.execute.FunctionService;
+import org.apache.geode.internal.cache.InternalCache;
+import org.apache.geode.internal.logging.LogService;
+import org.apache.geode.management.internal.deployment.FunctionScanner;
+import org.apache.geode.pdx.internal.TypeRegistry;
+
 /**
  * ClassLoader for a single JAR file.
- * 
+ *
  * @since GemFire 7.0
  */
 public class DeployedJar {
@@ -123,7 +122,7 @@ public class DeployedJar {
 
   /**
    * Peek into the JAR data and make sure that it is valid JAR content.
-   * 
+   *
    * @param inputStream InputStream containing data to be validated.
    * @return True if the data has JAR content, false otherwise
    */
@@ -149,7 +148,7 @@ public class DeployedJar {
 
   /**
    * Peek into the JAR data and make sure that it is valid JAR content.
-   * 
+   *
    * @param jarBytes Bytes of data to be validated.
    * @return True if the data has JAR content, false otherwise
    */
@@ -171,7 +170,7 @@ public class DeployedJar {
 
     JarInputStream jarInputStream = null;
     try {
-      List<String> functionClasses = findFunctionsInThisJar();
+      Collection<String> functionClasses = findFunctionsInThisJar();
 
       jarInputStream = new JarInputStream(byteArrayInputStream);
       JarEntry jarEntry = jarInputStream.getNextJarEntry();
@@ -259,7 +258,7 @@ public class DeployedJar {
   /**
    * Uses MD5 hashes to determine if the original byte content of this DeployedJar is the same as
    * that past in.
-   * 
+   *
    * @param compareToBytes Bytes to compare the original content to
    * @return True of the MD5 hash is the same o
    */
@@ -281,7 +280,7 @@ public class DeployedJar {
    * Check to see if the class implements the Function interface. If so, it will be registered with
    * FunctionService. Also, if the functions's class was originally declared in a cache.xml file
    * then any properties specified at that time will be reused when re-registering the function.
-   * 
+   *
    * @param clazz Class to check for implementation of the Function class
    * @return A collection of Objects that implement the Function interface.
    */
@@ -333,15 +332,11 @@ public class DeployedJar {
     return registerableFunctions;
   }
 
-  private List<String> findFunctionsInThisJar() throws IOException {
-    URLClassLoader urlClassLoader =
-        new URLClassLoader(new URL[] {this.getFile().getCanonicalFile().toURL()});
-    FastClasspathScanner fastClasspathScanner = new FastClasspathScanner()
-        .removeTemporaryFilesAfterScan(true).overrideClassLoaders(urlClassLoader);
-    ScanResult scanResult = fastClasspathScanner.scan();
-    return scanResult.getNamesOfClassesImplementing(Function.class);
+  protected Collection<String> findFunctionsInThisJar() throws IOException {
+    return new FunctionScanner().findFunctionsInJar(this.file);
   }
 
+
   private Function newFunction(final Class<Function> clazz, final boolean errorOnNoSuchMethod) {
     try {
       final Constructor<Function> constructor = clazz.getConstructor();

http://git-wip-us.apache.org/repos/asf/geode/blob/64f33c3e/geode-core/src/main/java/org/apache/geode/management/internal/deployment/FunctionScanner.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/management/internal/deployment/FunctionScanner.java b/geode-core/src/main/java/org/apache/geode/management/internal/deployment/FunctionScanner.java
new file mode 100644
index 0000000..9b7d6c4
--- /dev/null
+++ b/geode-core/src/main/java/org/apache/geode/management/internal/deployment/FunctionScanner.java
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.management.internal.deployment;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URL;
+import java.net.URLClassLoader;
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.Set;
+
+import io.github.lukehutch.fastclasspathscanner.FastClasspathScanner;
+import io.github.lukehutch.fastclasspathscanner.scanner.ScanResult;
+
+import org.apache.geode.cache.execute.Function;
+import org.apache.geode.cache.execute.FunctionAdapter;
+
+public class FunctionScanner {
+
+  public Collection<String> findFunctionsInJar(File jarFile) throws IOException {
+    URLClassLoader urlClassLoader =
+        new URLClassLoader(new URL[] {jarFile.getCanonicalFile().toURL()});
+    FastClasspathScanner fastClasspathScanner = new FastClasspathScanner()
+        .removeTemporaryFilesAfterScan(true).overrideClassLoaders(urlClassLoader);
+    ScanResult scanResult = fastClasspathScanner.scan();
+
+    Set<String> functionClasses = new HashSet<>();
+
+    functionClasses.addAll(scanResult.getNamesOfClassesImplementing(Function.class));
+    functionClasses.addAll(scanResult.getNamesOfSubclassesOf(FunctionAdapter.class));
+
+    return functionClasses;
+  }
+}
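
The new `FunctionScanner` above collects both direct implementors of `Function` and subclasses of `FunctionAdapter` from a JAR. The same selection rule can be shown with a small self-contained sketch using plain reflection and stand-in types (the nested `Function`/`FunctionAdapter` classes here are local stand-ins, not the Geode API, and `findFunctions` is a hypothetical helper):

```java
import java.util.ArrayList;
import java.util.List;

public class ScanSketch {
  // Stand-ins for the Geode types (assumptions, not the real API).
  interface Function {}
  static abstract class FunctionAdapter implements Function {}

  static class Implements implements Function {}
  static abstract class AbstractImpl implements Function {}
  static class ConcreteExtendsAbstract extends AbstractImpl {}
  static class ExtendsAdapter extends FunctionAdapter {}
  static class Unrelated {}

  // Report every class assignable to Function, including abstract ones,
  // mirroring what the classpath scan in FunctionScanner returns.
  static List<String> findFunctions(Class<?>... candidates) {
    List<String> names = new ArrayList<>();
    for (Class<?> c : candidates) {
      if (Function.class.isAssignableFrom(c) && !c.isInterface()) {
        names.add(c.getSimpleName());
      }
    }
    return names;
  }

  public static void main(String[] args) {
    // prints [Implements, AbstractImpl, ConcreteExtendsAbstract, ExtendsAdapter]
    System.out.println(findFunctions(Implements.class, AbstractImpl.class,
        ConcreteExtendsAbstract.class, ExtendsAdapter.class, Unrelated.class));
  }
}
```

Note that scanning for subclasses of `FunctionAdapter` is what distinguishes this change from the old `DeployedJar` code, which only asked for implementors of `Function`.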

http://git-wip-us.apache.org/repos/asf/geode/blob/64f33c3e/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/DeployCommandFunctionRegistrationDUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/DeployCommandFunctionRegistrationDUnitTest.java b/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/DeployCommandFunctionRegistrationDUnitTest.java
new file mode 100644
index 0000000..6b933bc
--- /dev/null
+++ b/geode-core/src/test/java/org/apache/geode/management/internal/cli/commands/DeployCommandFunctionRegistrationDUnitTest.java
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.management.internal.cli.commands;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+import java.io.File;
+import java.io.Serializable;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.List;
+
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import org.apache.geode.cache.execute.Execution;
+import org.apache.geode.cache.execute.FunctionService;
+import org.apache.geode.distributed.DistributedSystem;
+import org.apache.geode.internal.ClassPathLoader;
+import org.apache.geode.internal.cache.GemFireCacheImpl;
+import org.apache.geode.test.compiler.JarBuilder;
+import org.apache.geode.test.dunit.rules.GfshShellConnectionRule;
+import org.apache.geode.test.dunit.rules.LocatorServerStartupRule;
+import org.apache.geode.test.dunit.rules.MemberVM;
+import org.apache.geode.test.junit.categories.DistributedTest;
+import org.apache.geode.test.junit.rules.serializable.SerializableTemporaryFolder;
+
+@Category(DistributedTest.class)
+public class DeployCommandFunctionRegistrationDUnitTest implements Serializable {
+  private MemberVM locator;
+  private MemberVM server;
+
+  @Rule
+  public SerializableTemporaryFolder temporaryFolder = new SerializableTemporaryFolder();
+
+  @Rule
+  public LocatorServerStartupRule lsRule = new LocatorServerStartupRule();
+
+  @Rule
+  public transient GfshShellConnectionRule gfshConnector = new GfshShellConnectionRule();
+
+  @Before
+  public void setup() throws Exception {
+    locator = lsRule.startLocatorVM(0);
+    server = lsRule.startServerVM(1, locator.getPort());
+
+    gfshConnector.connectAndVerify(locator);
+  }
+
+  @Test
+  public void deployImplements() throws Exception {
+    JarBuilder jarBuilder = new JarBuilder();
+    File source = loadTestResource(
+        "/org/apache/geode/management/internal/deployment/ImplementsFunction.java");
+
+    File outputJar = new File(temporaryFolder.getRoot(), "output.jar");
+    jarBuilder.buildJar(outputJar, source);
+
+    gfshConnector.executeAndVerifyCommand("deploy --jar=" + outputJar.getCanonicalPath());
+    server.invoke(() -> assertThatCanLoad(
+        "org.apache.geode.management.internal.deployment.ImplementsFunction"));
+    server.invoke(() -> assertThatFunctionHasVersion(
+        "org.apache.geode.management.internal.deployment.ImplementsFunction",
+        "ImplementsFunctionResult"));
+  }
+
+  @Test
+  public void deployExtends() throws Exception {
+    JarBuilder jarBuilder = new JarBuilder();
+    File source = loadTestResource(
+        "/org/apache/geode/management/internal/deployment/ExtendsFunctionAdapter.java");
+
+    File outputJar = new File(temporaryFolder.getRoot(), "output.jar");
+    jarBuilder.buildJar(outputJar, source);
+
+    gfshConnector.executeAndVerifyCommand("deploy --jar=" + outputJar.getCanonicalPath());
+    server.invoke(() -> assertThatCanLoad(
+        "org.apache.geode.management.internal.deployment.ExtendsFunctionAdapter"));
+    server.invoke(() -> assertThatFunctionHasVersion(
+        "org.apache.geode.management.internal.deployment.ExtendsFunctionAdapter",
+        "ExtendsFunctionAdapterResult"));
+  }
+
+  private File loadTestResource(String fileName) throws URISyntaxException {
+    URL resourceFileURL = this.getClass().getResource(fileName);
+    assertThat(resourceFileURL).isNotNull();
+
+    URI resourceUri = resourceFileURL.toURI();
+    return new File(resourceUri);
+  }
+
+  private void assertThatFunctionHasVersion(String functionId, String version) {
+    GemFireCacheImpl gemFireCache = GemFireCacheImpl.getInstance();
+    DistributedSystem distributedSystem = gemFireCache.getDistributedSystem();
+    Execution execution = FunctionService.onMember(distributedSystem.getDistributedMember());
+    List<String> result = (List<String>) execution.execute(functionId).getResult();
+    assertThat(result.get(0)).isEqualTo(version);
+  }
+
+  private void assertThatCanLoad(String className) throws ClassNotFoundException {
+    assertThat(ClassPathLoader.getLatest().forName(className)).isNotNull();
+  }
+}

http://git-wip-us.apache.org/repos/asf/geode/blob/64f33c3e/geode-core/src/test/java/org/apache/geode/management/internal/deployment/FunctionScannerTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/management/internal/deployment/FunctionScannerTest.java b/geode-core/src/test/java/org/apache/geode/management/internal/deployment/FunctionScannerTest.java
new file mode 100644
index 0000000..af9ffdf
--- /dev/null
+++ b/geode-core/src/test/java/org/apache/geode/management/internal/deployment/FunctionScannerTest.java
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.management.internal.deployment;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+import java.io.File;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.util.Collection;
+
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TemporaryFolder;
+
+import org.apache.geode.test.compiler.JarBuilder;
+import org.apache.geode.test.junit.categories.IntegrationTest;
+
+@Category(IntegrationTest.class)
+public class FunctionScannerTest {
+  @Rule
+  public TemporaryFolder temporaryFolder = new TemporaryFolder();
+
+  private JarBuilder jarBuilder;
+  private FunctionScanner functionScanner;
+  private File outputJar;
+
+  @Before
+  public void setup() {
+    jarBuilder = new JarBuilder();
+    functionScanner = new FunctionScanner();
+    outputJar = new File(temporaryFolder.getRoot(), "output.jar");
+  }
+
+  @Test
+  public void implementsFunction() throws Exception {
+    File sourceFileOne = loadTestResource("ImplementsFunction.java");
+
+    jarBuilder.buildJar(outputJar, sourceFileOne);
+
+    Collection<String> functionsFoundInJar = functionScanner.findFunctionsInJar(outputJar);
+    assertThat(functionsFoundInJar)
+        .containsExactly("org.apache.geode.management.internal.deployment.ImplementsFunction");
+  }
+
+  @Test
+  public void extendsFunctionAdapter() throws Exception {
+    File sourceFileOne = loadTestResource("ExtendsFunctionAdapter.java");
+
+    jarBuilder.buildJar(outputJar, sourceFileOne);
+
+    Collection<String> functionsFoundInJar = functionScanner.findFunctionsInJar(outputJar);
+    assertThat(functionsFoundInJar)
+        .containsExactly("org.apache.geode.management.internal.deployment.ExtendsFunctionAdapter");
+  }
+
+  @Test
+  public void testConcreteExtendsAbstractExtendsFunctionAdapter() throws Exception {
+    File sourceFileOne = loadTestResource("AbstractExtendsFunctionAdapter.java");
+    File sourceFileTwo = loadTestResource("ConcreteExtendsAbstractExtendsFunctionAdapter.java");
+
+    jarBuilder.buildJar(outputJar, sourceFileOne, sourceFileTwo);
+
+    Collection<String> functionsFoundInJar = functionScanner.findFunctionsInJar(outputJar);
+    assertThat(functionsFoundInJar).containsExactlyInAnyOrder(
+        "org.apache.geode.management.internal.deployment.ConcreteExtendsAbstractExtendsFunctionAdapter",
+        "org.apache.geode.management.internal.deployment.AbstractExtendsFunctionAdapter");
+  }
+
+  @Test
+  public void testConcreteExtendsAbstractImplementsFunction() throws Exception {
+    File sourceFileOne = loadTestResource("AbstractImplementsFunction.java");
+    File sourceFileTwo = loadTestResource("ConcreteExtendsAbstractImplementsFunction.java");
+
+    jarBuilder.buildJar(outputJar, sourceFileOne, sourceFileTwo);
+
+    Collection<String> functionsFoundInJar = functionScanner.findFunctionsInJar(outputJar);
+    assertThat(functionsFoundInJar).containsExactlyInAnyOrder(
+        "org.apache.geode.management.internal.deployment.ConcreteExtendsAbstractImplementsFunction",
+        "org.apache.geode.management.internal.deployment.AbstractImplementsFunction");
+  }
+
+  private File loadTestResource(String fileName) throws URISyntaxException {
+    URL resourceFileURL = this.getClass().getResource(fileName);
+    assertThat(resourceFileURL).isNotNull();
+
+    URI resourceUri = resourceFileURL.toURI();
+    return new File(resourceUri);
+  }
+
+}
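
As the tests above show, the scanner also reports abstract classes (e.g. `AbstractImplementsFunction`), which cannot be instantiated and registered directly. A caller that wants only registerable functions would need to filter those out; a minimal sketch under that assumption (`instantiableOnly` is a hypothetical helper, not part of this change):

```java
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

public class InstantiableFilter {
  // Keep only classes that can actually be instantiated and registered.
  static List<Class<?>> instantiableOnly(List<Class<?>> scanned) {
    List<Class<?>> result = new ArrayList<>();
    for (Class<?> c : scanned) {
      if (!Modifier.isAbstract(c.getModifiers()) && !c.isInterface()) {
        result.add(c);
      }
    }
    return result;
  }

  static abstract class AbstractFn {}
  static class ConcreteFn extends AbstractFn {}

  public static void main(String[] args) {
    List<Class<?>> scanned = new ArrayList<>();
    scanned.add(AbstractFn.class);
    scanned.add(ConcreteFn.class);
    System.out.println(instantiableOnly(scanned)); // only ConcreteFn remains
  }
}
```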

http://git-wip-us.apache.org/repos/asf/geode/blob/64f33c3e/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AbstractExtendsFunctionAdapter.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AbstractExtendsFunctionAdapter.java b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AbstractExtendsFunctionAdapter.java
new file mode 100644
index 0000000..5bcc22c
--- /dev/null
+++ b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AbstractExtendsFunctionAdapter.java
@@ -0,0 +1,24 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.management.internal.deployment;
+
+import org.apache.geode.cache.execute.FunctionAdapter;
+import org.apache.geode.cache.execute.FunctionContext;
+
+public abstract class AbstractExtendsFunctionAdapter extends FunctionAdapter {
+  public void execute(FunctionContext context) {
+    context.getResultSender().lastResult("AbstractExtendsFunctionAdapterResult");
+  }
+}

http://git-wip-us.apache.org/repos/asf/geode/blob/64f33c3e/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AbstractFunction.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AbstractFunction.java b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AbstractFunction.java
new file mode 100644
index 0000000..afc83ab
--- /dev/null
+++ b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AbstractFunction.java
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.management.internal.deployment;
+
+import org.apache.geode.cache.execute.Function;
+import org.apache.geode.cache.execute.FunctionContext;
+
+public class AbstractFunction implements Function {
+  public void execute(FunctionContext context) {
+    context.getResultSender().lastResult("ConcreteResult");
+  }
+
+  public static abstract class AbstractImplementsFunction implements Function {
+    public abstract void execute(FunctionContext context);
+  }
+
+  public static class Concrete extends AbstractImplementsFunction {
+    public void execute(FunctionContext context) {
+      context.getResultSender().lastResult("ConcreteResult");
+    }
+  }
+}

http://git-wip-us.apache.org/repos/asf/geode/blob/64f33c3e/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AbstractImplementsFunction.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AbstractImplementsFunction.java b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AbstractImplementsFunction.java
new file mode 100644
index 0000000..a31399d
--- /dev/null
+++ b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/AbstractImplementsFunction.java
@@ -0,0 +1,24 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.management.internal.deployment;
+
+import org.apache.geode.cache.execute.Function;
+import org.apache.geode.cache.execute.FunctionContext;
+
+public abstract class AbstractImplementsFunction implements Function {
+  public void execute(FunctionContext context) {
+    context.getResultSender().lastResult("AbstractImplementsFunctionResult");
+  }
+}

http://git-wip-us.apache.org/repos/asf/geode/blob/64f33c3e/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ConcreteExtendsAbstractExtendsFunctionAdapter.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ConcreteExtendsAbstractExtendsFunctionAdapter.java b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ConcreteExtendsAbstractExtendsFunctionAdapter.java
new file mode 100644
index 0000000..3515558
--- /dev/null
+++ b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ConcreteExtendsAbstractExtendsFunctionAdapter.java
@@ -0,0 +1,23 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.management.internal.deployment;
+
+import org.apache.geode.cache.execute.FunctionContext;
+
+public class ConcreteExtendsAbstractExtendsFunctionAdapter extends AbstractExtendsFunctionAdapter {
+  public void execute(FunctionContext context) {
+    context.getResultSender().lastResult("ConcreteExtendsAbstractExtendsFunctionAdapter");
+  }
+}

http://git-wip-us.apache.org/repos/asf/geode/blob/64f33c3e/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ConcreteExtendsAbstractImplementsFunction.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ConcreteExtendsAbstractImplementsFunction.java b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ConcreteExtendsAbstractImplementsFunction.java
new file mode 100644
index 0000000..b62f38b
--- /dev/null
+++ b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ConcreteExtendsAbstractImplementsFunction.java
@@ -0,0 +1,23 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.management.internal.deployment;
+
+import org.apache.geode.cache.execute.FunctionContext;
+
+public class ConcreteExtendsAbstractImplementsFunction extends AbstractImplementsFunction {
+  public void execute(FunctionContext context) {
+    context.getResultSender().lastResult("ConcreteExtendsAbstractImplementsFunctionResult");
+  }
+}

http://git-wip-us.apache.org/repos/asf/geode/blob/64f33c3e/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ExtendsAbstractFunction.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ExtendsAbstractFunction.java b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ExtendsAbstractFunction.java
new file mode 100644
index 0000000..cf7c7a2
--- /dev/null
+++ b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ExtendsAbstractFunction.java
@@ -0,0 +1,25 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.management.internal.deployment;
+
+
+import org.apache.geode.cache.execute.FunctionAdapter;
+import org.apache.geode.cache.execute.FunctionContext;
+
+public class ExtendsFunctionAdapter extends FunctionAdapter {
+  public void execute(FunctionContext context) {
+    context.getResultSender().lastResult("ExtendsFunctionAdapterResult");
+  }
+}

http://git-wip-us.apache.org/repos/asf/geode/blob/64f33c3e/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ExtendsFunctionAdapter.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ExtendsFunctionAdapter.java b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ExtendsFunctionAdapter.java
new file mode 100644
index 0000000..cf7c7a2
--- /dev/null
+++ b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ExtendsFunctionAdapter.java
@@ -0,0 +1,25 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.management.internal.deployment;
+
+
+import org.apache.geode.cache.execute.FunctionAdapter;
+import org.apache.geode.cache.execute.FunctionContext;
+
+public class ExtendsFunctionAdapter extends FunctionAdapter {
+  public void execute(FunctionContext context) {
+    context.getResultSender().lastResult("ExtendsFunctionAdapterResult");
+  }
+}

http://git-wip-us.apache.org/repos/asf/geode/blob/64f33c3e/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ImplementsFunction.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ImplementsFunction.java b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ImplementsFunction.java
new file mode 100644
index 0000000..c9fef3c
--- /dev/null
+++ b/geode-core/src/test/resources/org/apache/geode/management/internal/deployment/ImplementsFunction.java
@@ -0,0 +1,24 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.management.internal.deployment;
+
+import org.apache.geode.cache.execute.Function;
+import org.apache.geode.cache.execute.FunctionContext;
+
+public class ImplementsFunction implements Function {
+  public void execute(FunctionContext context) {
+    context.getResultSender().lastResult("ImplementsFunctionResult");
+  }
+}


[46/51] [abbrv] geode git commit: GEODE-3395 Variable-ize product version and name in user guide - Developing

Posted by kl...@apache.org.
GEODE-3395 Variable-ize product version and name in user guide - Developing


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/ed9a8fd4
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/ed9a8fd4
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/ed9a8fd4

Branch: refs/heads/feature/GEODE-1279
Commit: ed9a8fd47a56fa84b810f0e4c4261b299150d1de
Parents: 3bb6a22
Author: Dave Barnes <db...@pivotal.io>
Authored: Wed Aug 16 16:20:25 2017 -0700
Committer: Dave Barnes <db...@pivotal.io>
Committed: Fri Aug 18 10:42:31 2017 -0700

----------------------------------------------------------------------
 geode-docs/developing/book_intro.html.md.erb    |  10 +-
 .../chapter_overview.html.md.erb                |   6 +-
 .../PDX_Serialization_Features.html.md.erb      |  10 +-
 .../auto_serialization.html.md.erb              |   2 +-
 ...ation_with_class_pattern_strings.html.md.erb |   2 +-
 .../chapter_overview.html.md.erb                |  16 +--
 .../data_serialization_options.html.md.erb      |  24 ++---
 .../extending_the_autoserializer.html.md.erb    |   2 +-
 .../gemfire_data_serialization.html.md.erb      |   8 +-
 .../gemfire_pdx_serialization.html.md.erb       |  32 +++---
 .../jsonformatter_pdxinstances.html.md.erb      |  16 ++-
 .../persist_pdx_metadata_to_disk.html.md.erb    |  10 +-
 .../program_application_for_pdx.html.md.erb     |   2 +-
 .../use_pdx_high_level_steps.html.md.erb        |   4 +-
 .../use_pdx_serializable.html.md.erb            |   8 +-
 .../use_pdx_serializer.html.md.erb              |   4 +-
 .../delta_propagation_example.html.md.erb       |   6 +-
 .../delta_propagation_properties.html.md.erb    |   4 +-
 .../how_delta_propagation_works.html.md.erb     |  10 +-
 .../implementing_delta_propagation.html.md.erb  |   4 +-
 .../chapter_overview.html.md.erb                |  14 +--
 .../choosing_level_of_dist.html.md.erb          |   2 +-
 .../how_region_versioning_works.html.md.erb     |  42 ++++----
 .../how_region_versioning_works_wan.html.md.erb |  14 +--
 .../locking_in_global_regions.html.md.erb       |   2 +-
 .../managing_distributed_regions.html.md.erb    |   2 +-
 .../region_entry_versions.html.md.erb           |  22 ++--
 .../events/chapter_overview.html.md.erb         |  16 +--
 ...re_client_server_event_messaging.html.md.erb |   6 +-
 ...figure_multisite_event_messaging.html.md.erb |  10 +-
 ...uring_gateway_concurrency_levels.html.md.erb |  12 +--
 ..._highly_available_gateway_queues.html.md.erb |   4 +-
 ...iguring_highly_available_servers.html.md.erb |   2 +-
 .../events/event_handler_overview.html.md.erb   |   4 +-
 .../filtering_multisite_events.html.md.erb      |  12 +--
 .../events/how_cache_events_work.html.md.erb    |   4 +-
 ...client_server_distribution_works.html.md.erb |   2 +-
 .../events/how_events_work.html.md.erb          |  20 ++--
 ...how_multisite_distribution_works.html.md.erb |   2 +-
 ...mplementing_cache_event_handlers.html.md.erb |   2 +-
 ..._durable_client_server_messaging.html.md.erb |   4 +-
 ...nting_write_behind_event_handler.html.md.erb |  20 ++--
 ...ist_of_event_handlers_and_events.html.md.erb |   6 +-
 ...ne_client_server_event_messaging.html.md.erb |   6 +-
 ..._callbacks_that_modify_the_cache.html.md.erb |  10 +-
 .../eviction/chapter_overview.html.md.erb       |   6 +-
 .../configuring_data_eviction.html.md.erb       |   6 +-
 .../eviction/how_eviction_works.html.md.erb     |   6 +-
 .../expiration/chapter_overview.html.md.erb     |   4 +-
 .../expiration/how_expiration_works.html.md.erb |   4 +-
 .../function_exec/chapter_overview.html.md.erb  |   2 +-
 .../function_execution.html.md.erb              |  18 ++--
 .../how_function_execution_works.html.md.erb    |   6 +-
 .../chapter_overview.html.md.erb                |   2 +-
 .../how_data_loaders_work.html.md.erb           |   2 +-
 .../sync_outside_data.html.md.erb               |   8 +-
 .../chapter_overview.html.md.erb                |  24 ++---
 ...locating_partitioned_region_data.html.md.erb |   4 +-
 .../configuring_bucket_for_pr.html.md.erb       |   2 +-
 .../configuring_ha_for_pr.html.md.erb           |  20 ++--
 ...partitioning_and_data_colocation.html.md.erb |  10 +-
 .../how_partitioning_works.html.md.erb          |   8 +-
 .../how_pr_ha_works.html.md.erb                 |  14 +--
 .../join_query_partitioned_regions.html.md.erb  |   2 +-
 ...partitioning_and_data_colocation.html.md.erb |  12 +--
 .../overview_how_pr_ha_works.html.md.erb        |   6 +-
 ...overview_how_pr_single_hop_works.html.md.erb |   4 +-
 .../rebalancing_pr_data.html.md.erb             |   8 +-
 .../set_enforce_unique_host.html.md.erb         |   4 +-
 .../set_redundancy_zones.html.md.erb            |   2 +-
 ...using_custom_partition_resolvers.html.md.erb |  10 +-
 .../advanced_querying.html.md.erb               |  18 ++--
 .../query_additional/literals.html.md.erb       |   4 +-
 .../query_additional/operators.html.md.erb      |   4 +-
 .../query_debugging.html.md.erb                 |   2 +-
 .../query_language_features.html.md.erb         |  16 +--
 .../using_query_bind_parameters.html.md.erb     |   2 +-
 .../create_multiple_indexes.html.md.erb         |   2 +-
 .../query_index/creating_an_index.html.md.erb   |   2 +-
 .../creating_hash_indexes.html.md.erb           |   2 +-
 .../query_index/indexing_guidelines.html.md.erb |   2 +-
 .../query_index/maintaining_indexes.html.md.erb |   2 +-
 .../query_index/query_index.html.md.erb         |  34 +++----
 .../query_index/query_index_hints.html.md.erb   |   2 +-
 .../the_select_statement.html.md.erb            |   4 +-
 .../query_select/the_where_clause.html.md.erb   |   4 +-
 .../chapter_overview.html.md.erb                |   4 +-
 .../querying_basics/query_basics.html.md.erb    |  22 ++--
 .../querying_partitioned_regions.html.md.erb    |  16 +--
 .../querying_basics/reserved_words.html.md.erb  |   2 +-
 ...ictions_and_unsupported_features.html.md.erb |   2 +-
 .../querying_basics/running_a_query.html.md.erb |   6 +-
 .../supported_character_sets.html.md.erb        |   2 +-
 .../what_is_a_query_string.html.md.erb          |  10 +-
 .../region_options/chapter_overview.html.md.erb |  12 +--
 .../dynamic_region_creation.html.md.erb         |   2 +-
 .../region_options/region_types.html.md.erb     |   6 +-
 .../storage_distribution_options.html.md.erb    |   4 +-
 .../chapter_overview.html.md.erb                |  10 +-
 .../how_persist_overflow_work.html.md.erb       |   6 +-
 .../transactions/JTA_transactions.html.md.erb   | 100 +++++++++----------
 .../transactions/about_transactions.html.md.erb |  18 ++--
 .../cache_plugins_with_jta.html.md.erb          |   8 +-
 .../cache_transaction_performance.html.md.erb   |   2 +-
 .../transactions/cache_transactions.html.md.erb |  26 +++--
 ...ache_transactions_by_region_type.html.md.erb |  16 +--
 .../transactions/chapter_overview.html.md.erb   |  24 ++---
 .../client_server_transactions.html.md.erb      |  16 +--
 ...guring_db_connections_using_JNDI.html.md.erb |   4 +-
 .../how_cache_transactions_work.html.md.erb     |  20 ++--
 .../jca_adapter_example.html.md.erb             |   2 +-
 ...onitor_troubleshoot_transactions.html.md.erb |   8 +-
 .../run_a_cache_transaction.html.md.erb         |  16 ++-
 ...che_transaction_with_external_db.html.md.erb |  16 ++-
 .../transaction_coding_examples.html.md.erb     |  14 +--
 .../transaction_event_management.html.md.erb    |   4 +-
 .../transaction_jta_gemfire_example.html.md.erb |   6 +-
 .../transaction_semantics.html.md.erb           |  14 ++-
 ...ctional_and_nontransactional_ops.html.md.erb |   2 +-
 .../working_with_transactions.html.md.erb       |  28 +++---
 120 files changed, 564 insertions(+), 598 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/book_intro.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/book_intro.html.md.erb b/geode-docs/developing/book_intro.html.md.erb
index 8086b7a..c78f753 100644
--- a/geode-docs/developing/book_intro.html.md.erb
+++ b/geode-docs/developing/book_intro.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Developing with Apache Geode
----
+<% set_title("Developing with", product_name_long) %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -19,13 +17,13 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-*Developing with Apache Geode* explains main concepts of application programming with Apache Geode. It describes how to plan and implement regions, data serialization, event handling, delta propagation, transactions, and more.
+*Developing with <%=vars.product_name_long%>* explains main concepts of application programming with <%=vars.product_name_long%>. It describes how to plan and implement regions, data serialization, event handling, delta propagation, transactions, and more.
 
-For information about Geode REST application development, see [Developing REST Applications for Apache Geode](../rest_apps/book_intro.html).
+For information about Geode REST application development, see [Developing REST Applications for <%=vars.product_name_long%>](../rest_apps/book_intro.html).
 
 -   **[Region Data Storage and Distribution](../developing/region_options/chapter_overview.html)**
 
-    The Apache Geode data storage and distribution models put your data in the right place at the right time. You should understand all the options for data storage in Geode before you start configuring your data regions.
+    The <%=vars.product_name_long%> data storage and distribution models put your data in the right place at the right time. You should understand all the options for data storage in Geode before you start configuring your data regions.
 
 -   **[Partitioned Regions](../developing/partitioned_regions/chapter_overview.html)**
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/continuous_querying/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/continuous_querying/chapter_overview.html.md.erb b/geode-docs/developing/continuous_querying/chapter_overview.html.md.erb
index 3f77edb..865050a 100644
--- a/geode-docs/developing/continuous_querying/chapter_overview.html.md.erb
+++ b/geode-docs/developing/continuous_querying/chapter_overview.html.md.erb
@@ -23,15 +23,15 @@ Continuous querying continuously returns events that match the queries you set u
 
 <a id="continuous__section_779B4E4D06E948618E5792335174E70D"></a>
 
--   **[How Continuous Querying Works](../../developing/continuous_querying/how_continuous_querying_works.html)**
+-   **[How Continuous Querying Works](how_continuous_querying_works.html)**
 
     Clients subscribe to server-side events by using SQL-type query filtering. The server sends all events that modify the query results. CQ event delivery uses the client/server subscription framework.
 
--   **[Implementing Continuous Querying](../../developing/continuous_querying/implementing_continuous_querying.html)**
+-   **[Implementing Continuous Querying](implementing_continuous_querying.html)**
 
     Use continuous querying in your clients to receive continuous updates to queries run on the servers.
 
--   **[Managing Continuous Querying](../../developing/continuous_querying/continuous_querying_whats_next.html)**
+-   **[Managing Continuous Querying](continuous_querying_whats_next.html)**
 
     This topic discusses CQ management options, CQ states, and retrieving initial result sets.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/data_serialization/PDX_Serialization_Features.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/data_serialization/PDX_Serialization_Features.html.md.erb b/geode-docs/developing/data_serialization/PDX_Serialization_Features.html.md.erb
index e6c06f4..6f30d02 100644
--- a/geode-docs/developing/data_serialization/PDX_Serialization_Features.html.md.erb
+++ b/geode-docs/developing/data_serialization/PDX_Serialization_Features.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Geode PDX Serialization Features
----
+<% set_title(product_name, "PDX Serialization Features") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -19,17 +17,17 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode PDX serialization offers several advantages in terms of functionality.
+<%=vars.product_name%> PDX serialization offers several advantages in terms of functionality.
 
 ## <a id="concept_F02E40517C4B42F2A75B133BB507C626__section_A0EEB4DA3E9F4EA4B65FE727D3951EA1" class="no-quick-link"></a>Application Versioning of PDX Domain Objects
 
 Domain objects evolve along with your application code. You might create an address object with two address lines, then realize later that a third line is required for some situations. Or you might realize that a particular field is not used and want to get rid of it. With PDX, you can use old and new versions of domain objects together in a distributed system if the versions differ by the addition or removal of fields. This compatibility lets you gradually introduce modified code and data into the system, without bringing the system down.
 
-Geode maintains a central registry of the PDX domain object metadata. Using the registry, Geode preserves fields in each member's cache regardless of whether the field is defined. When a member receives an object with a registered field that the member is not aware of, the member does not access the field, but preserves it and passes it along with the entire object to other members. When a member receives an object that is missing one or more fields according to the member's version, Geode assigns the Java default values for the field types to the missing fields.
+<%=vars.product_name%> maintains a central registry of the PDX domain object metadata. Using the registry, <%=vars.product_name%> preserves fields in each member's cache regardless of whether the field is defined. When a member receives an object with a registered field that the member is not aware of, the member does not access the field, but preserves it and passes it along with the entire object to other members. When a member receives an object that is missing one or more fields according to the member's version, <%=vars.product_name%> assigns the Java default values for the field types to the missing fields.
 
 ## <a id="concept_F02E40517C4B42F2A75B133BB507C626__section_D68A6A9C2C0C4D32AE7DADA2A4C3104D" class="no-quick-link"></a>Portability of PDX Serializable Objects
 
-When you serialize an object using PDX, Geode stores the object's type information in the central registry. The information is passed among clients and servers, peers, and distributed systems.
+When you serialize an object using PDX, <%=vars.product_name%> stores the object's type information in the central registry. The information is passed among clients and servers, peers, and distributed systems.
 
 This centralization of object type information is advantageous for client/server installations in which clients and servers are written in different languages. Clients pass registry information to servers automatically when they store a PDX serialized object. Clients can run queries and functions against the data in the servers without compatibility between server and the stored objects. One client can store data on the server to be retrieved by another client, with no requirements on the part of the server.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/data_serialization/auto_serialization.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/data_serialization/auto_serialization.html.md.erb b/geode-docs/developing/data_serialization/auto_serialization.html.md.erb
index cb347a9..0ed38fc 100644
--- a/geode-docs/developing/data_serialization/auto_serialization.html.md.erb
+++ b/geode-docs/developing/data_serialization/auto_serialization.html.md.erb
@@ -33,7 +33,7 @@ Your custom PDX autoserializable classes cannot use the `org.apache.geode` packa
 
 **Prerequisites**
 
--   Understand generally how to configure the Geode cache.
+-   Understand generally how to configure the <%=vars.product_name%> cache.
 -   Understand how PDX serialization works and how to configure your application to use `PdxSerializer`.
 
 <a id="auto_serialization__section_43F6E45FF69E470897FD9D002FBE896D"><strong>Procedure</strong></a>

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/data_serialization/autoserialization_with_class_pattern_strings.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/data_serialization/autoserialization_with_class_pattern_strings.html.md.erb b/geode-docs/developing/data_serialization/autoserialization_with_class_pattern_strings.html.md.erb
index b545610..41879e2 100644
--- a/geode-docs/developing/data_serialization/autoserialization_with_class_pattern_strings.html.md.erb
+++ b/geode-docs/developing/data_serialization/autoserialization_with_class_pattern_strings.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Use class pattern strings to name the classes that you want to serialize using Geode's reflection-based autoserializer and to specify object identity fields and to specify fields to exclude from serialization.
+Use class pattern strings to name the classes that you want to serialize using <%=vars.product_name%>'s reflection-based autoserializer and to specify object identity fields and to specify fields to exclude from serialization.
 
 The class pattern strings used to configured the `ReflectionBasedAutoSerializer` are standard regular expressions. For example, this expression would select all classes defined in the `com.company.domain` package and its subpackages:
 
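The hunk above notes that `ReflectionBasedAutoSerializer` class pattern strings are standard regular expressions. A minimal, self-contained sketch of how such a pattern selects a package and its subpackages follows; the concrete pattern `com\.company\.domain\..*` and the class names are illustrative assumptions, since the documentation's own example expression falls outside this excerpt:

```java
import java.util.regex.Pattern;

public class ClassPatternDemo {
    public static void main(String[] args) {
        // Hypothetical pattern: selects classes in com.company.domain
        // and any of its subpackages (standard java.util.regex syntax).
        Pattern pattern = Pattern.compile("com\\.company\\.domain\\..*");

        System.out.println(pattern.matcher("com.company.domain.Customer").matches());     // true
        System.out.println(pattern.matcher("com.company.domain.orders.Order").matches()); // true
        System.out.println(pattern.matcher("org.example.Other").matches());               // false
    }
}
```

In Geode itself, such a pattern string would be handed to the `ReflectionBasedAutoSerializer` rather than compiled directly, but the matching semantics shown here are the same standard regex semantics the documentation describes.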

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/data_serialization/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/data_serialization/chapter_overview.html.md.erb b/geode-docs/developing/data_serialization/chapter_overview.html.md.erb
index 7e13c20..ebc55f0 100644
--- a/geode-docs/developing/data_serialization/chapter_overview.html.md.erb
+++ b/geode-docs/developing/data_serialization/chapter_overview.html.md.erb
@@ -19,21 +19,21 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Data that you manage in Geode must be serialized and deserialized for storage and transmittal between processes. You can choose among several options for data serialization.
+Data that you manage in <%=vars.product_name%> must be serialized and deserialized for storage and transmittal between processes. You can choose among several options for data serialization.
 
--   **[Overview of Data Serialization](../../developing/data_serialization/data_serialization_options.html)**
+-   **[Overview of Data Serialization](data_serialization_options.html)**
 
-    Geode offers serialization options other than Java serialization that give you higher performance and greater flexibility for data storage, transfers, and language types.
+    <%=vars.product_name%> offers serialization options other than Java serialization that give you higher performance and greater flexibility for data storage, transfers, and language types.
 
--   **[Geode PDX Serialization](../../developing/data_serialization/gemfire_pdx_serialization.html)**
+-   **[<%=vars.product_name%> PDX Serialization](gemfire_pdx_serialization.html)**
 
-    Geode's Portable Data eXchange (PDX) is a cross-language data format that can reduce the cost of distributing and serializing your objects. PDX stores data in named fields that you can access individually, to avoid the cost of deserializing the entire data object. PDX also allows you to mix versions of objects where you have added or removed fields.
+    <%=vars.product_name%>'s Portable Data eXchange (PDX) is a cross-language data format that can reduce the cost of distributing and serializing your objects. PDX stores data in named fields that you can access individually, to avoid the cost of deserializing the entire data object. PDX also allows you to mix versions of objects where you have added or removed fields.
 
--   **[Geode Data Serialization (DataSerializable and DataSerializer)](../../developing/data_serialization/gemfire_data_serialization.html)**
+-   **[<%=vars.product_name%> Data Serialization (DataSerializable and DataSerializer)](gemfire_data_serialization.html)**
 
-    Geode's `DataSerializable` interface gives you quick serialization of your objects.
+    <%=vars.product_name%>'s `DataSerializable` interface gives you quick serialization of your objects.
 
--   **[Standard Java Serialization](../../developing/data_serialization/java_serialization.html)**
+-   **[Standard Java Serialization](java_serialization.html)**
 
     You can use standard Java serialization for data you only distribute between Java applications. If you distribute your data between non-Java clients and Java servers, you need to do additional programming to get the data between the various class formats.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/data_serialization/data_serialization_options.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/data_serialization/data_serialization_options.html.md.erb b/geode-docs/developing/data_serialization/data_serialization_options.html.md.erb
index dfe18d0..0115cfc 100644
--- a/geode-docs/developing/data_serialization/data_serialization_options.html.md.erb
+++ b/geode-docs/developing/data_serialization/data_serialization_options.html.md.erb
@@ -19,10 +19,10 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode offers serialization options other than Java serialization that give you higher performance and greater flexibility for data storage, transfers, and language types.
+<%=vars.product_name%> offers serialization options other than Java serialization that give you higher performance and greater flexibility for data storage, transfers, and language types.
 
 <a id="data_serialization_options__section_B1BDB0E7F6814DFD8BACD8D8C5CAA81B"></a>
-All data that Geode moves out of the local cache must be serializable. However, you do not necessarily need to implement `java.io.Serializable` since other serialization options are available in Geode. Region data that must be serializable falls under the following categories:
+All data that <%=vars.product_name%> moves out of the local cache must be serializable. However, you do not necessarily need to implement `java.io.Serializable` since other serialization options are available in <%=vars.product_name%>. Region data that must be serializable falls under the following categories:
 
 -   Partitioned regions
 -   Distributed regions
@@ -35,34 +35,34 @@ All data that Geode moves out of the local cache must be serializable. However,
 **Note:**
 If you are storing objects with the [HTTP Session Management Modules](../../tools_modules/http_session_mgmt/chapter_overview.html), these objects must be serializable since they are serialized before being stored in the region.
 
-To minimize the cost of serialization and deserialization, Geode avoids changing the data format whenever possible. This means your data might be stored in the cache in serialized or deserialized form, depending on how you use it. For example, if a server acts only as a storage location for data distribution between clients, it makes sense to leave the data in serialized form, ready to be transmitted to clients that request it. Partitioned region data is always initially stored in serialized form.
+To minimize the cost of serialization and deserialization, <%=vars.product_name%> avoids changing the data format whenever possible. This means your data might be stored in the cache in serialized or deserialized form, depending on how you use it. For example, if a server acts only as a storage location for data distribution between clients, it makes sense to leave the data in serialized form, ready to be transmitted to clients that request it. Partitioned region data is always initially stored in serialized form.
 
 ## <a id="data_serialization_options__section_691C2CF5A4E24D599070A7AADEDF2BEC" class="no-quick-link"></a>Data Serialization Options
 
 <a id="data_serialization_options__section_44CC2DEEDA0F41D49D416ABA921A6436"></a>
 
-With Geode, you have the option to serialize your domain objects automatically or to implement serialization using one of Geode's interfaces. Enabling automatic serialization means that domain objects are serialized and deserialized without your having to make any code changes to those objects. This automatic serialization is performed by registering your domain objects with a custom `PdxSerializer` called the `ReflectionBasedAutoSerializer`, which uses Java reflection to infer which fields to serialize.
+With <%=vars.product_name%>, you have the option to serialize your domain objects automatically or to implement serialization using one of <%=vars.product_name%>'s interfaces. Enabling automatic serialization means that domain objects are serialized and deserialized without your having to make any code changes to those objects. This automatic serialization is performed by registering your domain objects with a custom `PdxSerializer` called the `ReflectionBasedAutoSerializer`, which uses Java reflection to infer which fields to serialize.
 
-If autoserialization does not meet your needs, you can serialize your objects by implementing one of the Geode interfaces, `PdxSerializable` or `DataSerializable`. You can use these interfaces to replace any standard Java data serialization for better performance. If you cannot or do not want to modify your domain classes, each interface has an alternate serializer class, `PdxSerializer` and `DataSerializer`. To use these, you create your custom serializer class and then associate it with your domain class in the Geode cache configuration.
+If autoserialization does not meet your needs, you can serialize your objects by implementing one of the <%=vars.product_name%> interfaces, `PdxSerializable` or `DataSerializable`. You can use these interfaces to replace any standard Java data serialization for better performance. If you cannot or do not want to modify your domain classes, each interface has an alternate serializer class, `PdxSerializer` and `DataSerializer`. To use these, you create your custom serializer class and then associate it with your domain class in the <%=vars.product_name%> cache configuration.
 
-Geode Data serialization is about 25% faster than PDX serialization, however using PDX serialization will help you to avoid the even larger costs of performing deserialization.
+<%=vars.product_name%> Data serialization is about 25% faster than PDX serialization, however using PDX serialization will help you to avoid the even larger costs of performing deserialization.
 
 <a id="data_serialization_options__section_993B4A298874459BB4A8A0A9811854D9"></a><a id="data_serialization_options__table_ccf00c9f-9b98-47f7-ab30-3d23ecaff0a1"></a>
 
-| Capability                                                                                                                       | Geode Data Serializable | Geode PDX Serializable |
+| Capability                                                                                                                       | <%=vars.product_name%> Data Serializable | <%=vars.product_name%> PDX Serializable |
 |----------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------|-----------------------------------------------------|
 | Implements Java Serializable.                                                                                                    | X                                                    |                                                     |
 | Handles multiple versions of application domain objects, providing the versions differ by the addition or subtraction of fields. |                                                      | X                                                   |
 | Provides single field access of serialized data, without full deserialization - supported also for OQL querying.                 |                                                      | X                                                   |
-| Automatically ported to other languages by Geode                                                    |                                                      | X                                                   |
+| Automatically ported to other languages by <%=vars.product_name%>                                                    |                                                      | X                                                   |
 | Works with .NET clients.                                                                                                         | X                                                    | X                                                   |
 | Works with C++ clients.                                                                                                         | X                                                    | X                                                   |
-| Works with Geode delta propagation.                                                                 | X                                                    | X (See note below.)                                 |
+| Works with <%=vars.product_name%> delta propagation.                                                                 | X                                                    | X (See note below.)                                 |
 
 <span class="tablecap">**Table 1.** Serialization Options: Comparison of Features</span>
 
-**Note:** By default, you can use Geode delta propagation with PDX serialization. However, delta propagation will not work if you have set the Geode property `read-serialized` to "true". In terms of deserialization, to apply a change delta propagation requires a domain class instance and the `fromDelta `method. If you have set `read-serialized` to true, then you will receive a `PdxInstance` instead of a domain class instance and `PdxInstance` does not have the `fromDelta` method required for delta propagation.
+**Note:** By default, you can use <%=vars.product_name%> delta propagation with PDX serialization. However, delta propagation will not work if you have set the <%=vars.product_name%> property `read-serialized` to "true". In terms of deserialization, to apply a change delta propagation requires a domain class instance and the `fromDelta `method. If you have set `read-serialized` to true, then you will receive a `PdxInstance` instead of a domain class instance and `PdxInstance` does not have the `fromDelta` method required for delta propagation.
 
-## <a id="data_serialization_options__section_D90C2C09B95C40B6803CF202CF8008BF" class="no-quick-link"></a>Differences between Geode Serialization (PDX or Data Serializable) and Java Serialization
+## <a id="data_serialization_options__section_D90C2C09B95C40B6803CF202CF8008BF" class="no-quick-link"></a>Differences between <%=vars.product_name%> Serialization (PDX or Data Serializable) and Java Serialization
 
-Geode serialization (either PDX Serialization or Data Serialization) does not support circular object graphs whereas Java serialization does. In Geode serialization, if the same object is referenced more than once in an object graph, the object is serialized for each reference, and deserialization produces multiple copies of the object. By contrast in this situation, Java serialization serializes the object once and when deserializing the object, it produces one instance of the object with multiple references.
+<%=vars.product_name%> serialization (either PDX Serialization or Data Serialization) does not support circular object graphs whereas Java serialization does. In <%=vars.product_name%> serialization, if the same object is referenced more than once in an object graph, the object is serialized for each reference, and deserialization produces multiple copies of the object. By contrast in this situation, Java serialization serializes the object once and when deserializing the object, it produces one instance of the object with multiple references.

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/data_serialization/extending_the_autoserializer.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/data_serialization/extending_the_autoserializer.html.md.erb b/geode-docs/developing/data_serialization/extending_the_autoserializer.html.md.erb
index cfa69f5..47ee92b 100644
--- a/geode-docs/developing/data_serialization/extending_the_autoserializer.html.md.erb
+++ b/geode-docs/developing/data_serialization/extending_the_autoserializer.html.md.erb
@@ -25,7 +25,7 @@ You can extend the `ReflectionBasedAutoSerializer` to handle serialization in a
 
 One of the main use cases for extending the `ReflectionBasedAutoSerializer` is that you want it to handle an object that would currently need to be handled by standard Java serialization. There are several issues with having to use standard Java serialization that can be addressed by extending the PDX `ReflectionBasedAutoSerializer`.
 
--   Each time we transition from a Geode serialized object to an object that will be Java I/O serialized, extra data must get serialized. This can cause a great deal of serialization overhead. This is why it is worth extending the `ReflectionBasedAutoSerializer` to handle any classes that normally would have to be Java I/O serialized.
+-   Each time we transition from a <%=vars.product_name%> serialized object to an object that will be Java I/O serialized, extra data must get serialized. This can cause a great deal of serialization overhead. This is why it is worth extending the `ReflectionBasedAutoSerializer` to handle any classes that normally would have to be Java I/O serialized.
 -   Expanding the number of classes that can use the `ReflectionBasedAutoSerializer` is beneficial when you encounter object graphs. After we use Java I/O serialization on an object, any objects under that object in the object graph will also have to be Java I/O serialized. This includes objects that normally would have been serialized using PDX or `DataSerializable`.
 -   If standard Java I/O serialization is done on an object and you have enabled check-portability, then an exception will be thrown. Even if you are not concerned with the object's portability, you can use this flag to find out what classes would use standard Java serialization (by getting an exception on them) and then enhancing your auto serializer to handle them.
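The per-object metadata cost behind the first bullet can be seen with the JDK alone. This sketch (illustrative names; no product classes involved) compares the raw bytes of a single field against the same value passed through Java I/O serialization, which adds stream headers and class metadata:

```java
import java.io.*;

public class OverheadDemo {
    /** Bytes for one double written directly as a field. */
    static int rawSize() throws IOException {
        ByteArrayOutputStream raw = new ByteArrayOutputStream();
        new DataOutputStream(raw).writeDouble(3.14);
        return raw.size();
    }

    /** Bytes for the same value through Java I/O serialization,
     *  which carries stream headers and class descriptors as well. */
    static int javaSerializedSize() throws IOException {
        ByteArrayOutputStream ser = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(ser)) {
            out.writeObject(Double.valueOf(3.14));
        }
        return ser.size();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(rawSize());                        // 8
        System.out.println(javaSerializedSize() > rawSize()); // true
    }
}
```

Every transition into Java I/O serialization pays this fixed overhead again, which is what makes extending the autoserializer to cover such classes worthwhile.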
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/data_serialization/gemfire_data_serialization.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/data_serialization/gemfire_data_serialization.html.md.erb b/geode-docs/developing/data_serialization/gemfire_data_serialization.html.md.erb
index 24acbfd..96689ec 100644
--- a/geode-docs/developing/data_serialization/gemfire_data_serialization.html.md.erb
+++ b/geode-docs/developing/data_serialization/gemfire_data_serialization.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Geode Data Serialization (DataSerializable and DataSerializer)
----
+<% set_title(product_name, "Data Serialization (DataSerializable and DataSerializer)") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -19,11 +17,11 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode's `DataSerializable` interface gives you quick serialization of your objects.
+<%=vars.product_name%>'s `DataSerializable` interface gives you quick serialization of your objects.
 
 ## <a id="gemfire_data_serialization__section_0C84D6BF5E9748CB865E6BB944A077DE" class="no-quick-link"></a>Data Serialization with the DataSerializable Interface
 
-Geode's `DataSerializable` interface gives you faster and more compact data serialization than the standard Java serialization or Geode PDX serialization. However, while Geode `DataSerializable` interface is generally more performant than Geode's `PdxSerializable`, it requires full deserialization on the server and then reserialization to send the data back to the client.
+<%=vars.product_name%>'s `DataSerializable` interface gives you faster and more compact data serialization than the standard Java serialization or <%=vars.product_name%> PDX serialization. However, while the <%=vars.product_name%> `DataSerializable` interface is generally more performant than <%=vars.product_name%>'s `PdxSerializable`, it requires full deserialization on the server and then reserialization to send the data back to the client.
 
 You can further speed serialization by registering the instantiator for your `DataSerializable` class through `Instantiator`, eliminating the need for reflection to find the right serializer. You can provide your own serialization through the API.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/data_serialization/gemfire_pdx_serialization.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/data_serialization/gemfire_pdx_serialization.html.md.erb b/geode-docs/developing/data_serialization/gemfire_pdx_serialization.html.md.erb
index c8bcdb4..9dd25ec 100644
--- a/geode-docs/developing/data_serialization/gemfire_pdx_serialization.html.md.erb
+++ b/geode-docs/developing/data_serialization/gemfire_pdx_serialization.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Geode PDX Serialization
----
+<% set_title(product_name, "PDX Serialization") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -19,45 +17,45 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode's Portable Data eXchange (PDX) is a cross-language data format that can reduce the cost of distributing and serializing your objects. PDX stores data in named fields that you can access individually, to avoid the cost of deserializing the entire data object. PDX also allows you to mix versions of objects where you have added or removed fields.
+<%=vars.product_name%>'s Portable Data eXchange (PDX) is a cross-language data format that can reduce the cost of distributing and serializing your objects. PDX stores data in named fields that you can access individually, to avoid the cost of deserializing the entire data object. PDX also allows you to mix versions of objects where you have added or removed fields.
 
--   **[Geode PDX Serialization Features](../../developing/data_serialization/PDX_Serialization_Features.html)**
+-   **[<%=vars.product_name%> PDX Serialization Features](PDX_Serialization_Features.html)**
 
-    Geode PDX serialization offers several advantages in terms of functionality.
+    <%=vars.product_name%> PDX serialization offers several advantages in terms of functionality.
 
--   **[High Level Steps for Using PDX Serialization](../../developing/data_serialization/use_pdx_high_level_steps.html)**
+-   **[High Level Steps for Using PDX Serialization](use_pdx_high_level_steps.html)**
 
-    To use PDX serialization, you can configure and use Geode's reflection-based autoserializer, or you can program the serialization of your objects by using the PDX interfaces and classes.
+    To use PDX serialization, you can configure and use <%=vars.product_name%>'s reflection-based autoserializer, or you can program the serialization of your objects by using the PDX interfaces and classes.
 
--   **[Using Automatic Reflection-Based PDX Serialization](../../developing/data_serialization/auto_serialization.html)**
+-   **[Using Automatic Reflection-Based PDX Serialization](auto_serialization.html)**
 
     You can configure your cache to automatically serialize and deserialize domain objects without having to add any extra code to them.
 
--   **[Serializing Your Domain Object with a PdxSerializer](../../developing/data_serialization/use_pdx_serializer.html)**
+-   **[Serializing Your Domain Object with a PdxSerializer](use_pdx_serializer.html)**
 
     For a domain object that you cannot or do not want to modify, use the `PdxSerializer` class to serialize and deserialize the object's fields. You use one `PdxSerializer` implementation for the entire cache, programming it for all of the domain objects that you handle in this way.
 
--   **[Implementing PdxSerializable in Your Domain Object](../../developing/data_serialization/use_pdx_serializable.html)**
+-   **[Implementing PdxSerializable in Your Domain Object](use_pdx_serializable.html)**
 
     For a domain object with source that you can modify, implement the `PdxSerializable` interface in the object and use its methods to serialize and deserialize the object's fields.
 
--   **[Programming Your Application to Use PdxInstances](../../developing/data_serialization/program_application_for_pdx.html)**
+-   **[Programming Your Application to Use PdxInstances](program_application_for_pdx.html)**
 
     A `PdxInstance` is a light-weight wrapper around PDX serialized bytes. It provides applications with run-time access to fields of a PDX serialized object.
 
--   **[Adding JSON Documents to the Geode Cache](../../developing/data_serialization/jsonformatter_pdxinstances.html)**
+-   **[Adding JSON Documents to the <%=vars.product_name%> Cache](jsonformatter_pdxinstances.html)**
 
     The `JSONFormatter` API allows you to put JSON formatted documents into regions and retrieve them later by storing the documents internally as PdxInstances.
 
--   **[Using PdxInstanceFactory to Create PdxInstances](../../developing/data_serialization/using_PdxInstanceFactory.html)**
+-   **[Using PdxInstanceFactory to Create PdxInstances](using_PdxInstanceFactory.html)**
 
     You can use the `PdxInstanceFactory` interface to create a `PdxInstance` from raw data when the domain class is not available on the server.
 
--   **[Persisting PDX Metadata to Disk](../../developing/data_serialization/persist_pdx_metadata_to_disk.html)**
+-   **[Persisting PDX Metadata to Disk](persist_pdx_metadata_to_disk.html)**
 
-    Geode allows you to persist PDX metadata to disk and specify the disk store to use.
+    <%=vars.product_name%> allows you to persist PDX metadata to disk and specify the disk store to use.
 
--   **[Using PDX Objects as Region Entry Keys](../../developing/data_serialization/using_pdx_region_entry_keys.html)**
+-   **[Using PDX Objects as Region Entry Keys](using_pdx_region_entry_keys.html)**
 
     Using PDX objects as region entry keys is highly discouraged.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/data_serialization/jsonformatter_pdxinstances.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/data_serialization/jsonformatter_pdxinstances.html.md.erb b/geode-docs/developing/data_serialization/jsonformatter_pdxinstances.html.md.erb
index 09aaae2..280012b 100644
--- a/geode-docs/developing/data_serialization/jsonformatter_pdxinstances.html.md.erb
+++ b/geode-docs/developing/data_serialization/jsonformatter_pdxinstances.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Adding JSON Documents to the Geode Cache
----
+<% set_title("Adding JSON Documents to the", product_name, "Cache") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -21,9 +19,9 @@ limitations under the License.
 
 The `JSONFormatter` API allows you to put JSON formatted documents into regions and retrieve them later by storing the documents internally as PdxInstances.
 
-Geode supports the use of JSON formatted documents natively. When you add a JSON document to a Geode cache, you call the JSONFormatter APIs to transform them into the PDX format (as a `PdxInstance`), which enables Geode to understand the JSON document at a field level.
+<%=vars.product_name%> supports the use of JSON formatted documents natively. When you add a JSON document to a <%=vars.product_name%> cache, you call the `JSONFormatter` APIs to transform it into the PDX format (as a `PdxInstance`), which enables <%=vars.product_name%> to understand the JSON document at a field level.
 
-In terms of querying and indexing, because the documents are stored internally as PDX, applications can index on any field contained inside the JSON document including any nested field (within JSON objects or JSON arrays.) Any queries run on these stored documents will return PdxInstances as results. To update a JSON document stored in Geode , you can execute a function on the PdxInstance.
+In terms of querying and indexing, because the documents are stored internally as PDX, applications can index on any field contained inside the JSON document, including any nested field (within JSON objects or JSON arrays). Any queries run on these stored documents will return PdxInstances as results. To update a JSON document stored in <%=vars.product_name%>, you can execute a function on the PdxInstance.
 
 You can then use the `JSONFormatter` to convert the PdxInstance results back into the JSON document.
 
@@ -31,14 +29,14 @@ You can then use the `JSONFormatter` to convert the PdxInstance results back int
 
 The `JSONFormatter` class has four static methods that are used to convert JSON documents into PdxInstances and then to convert those PdxInstances back into JSON documents.
 
-You need to call the following methods before putting any JSON document into the Geode region:
+You need to call the following methods before putting any JSON document into a <%=vars.product_name%> region:
 
 -   `fromJSON`. Creates a PdxInstance from a JSON byte array. Returns the PdxInstance.
 -   `fromJSON`. Creates a PdxInstance from a JSON string. Returns the PdxInstance.
 
-After putting the JSON document into a region as a PdxInstance, you can execute standard Geode queries and create indexes on the JSON document in the same manner you would query or index any other Geode PdxInstance.
+After putting the JSON document into a region as a PdxInstance, you can execute standard <%=vars.product_name%> queries and create indexes on the JSON document in the same manner you would query or index any other <%=vars.product_name%> PdxInstance.
 
-After executing a Geode query or calling `region.get`, you can use the following methods to convert a PdxInstance back into the JSON format:
+After executing a <%=vars.product_name%> query or calling `region.get`, you can use the following methods to convert a PdxInstance back into the JSON format:
 
 -   `toJSON`. Reads a PdxInstance and returns a JSON string.
 -   `toJSONByteArray`. Reads a PdxInstance and returns a JSON byte array.
@@ -47,7 +45,7 @@ For more information on using the JSONFormatter, see the Java API documentation
 
 # Sorting Behavior of Serialized JSON Fields
 
-By default, Geode serialization creates a unique pdx typeID for each unique JSON document, even if the
+By default, <%=vars.product_name%> serialization creates a unique pdx typeID for each unique JSON document, even if the
 only difference between the JSON documents is the order in which their fields are specified. 
 
 If you prefer that JSON documents which differ only in the order in which their fields are specified

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/data_serialization/persist_pdx_metadata_to_disk.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/data_serialization/persist_pdx_metadata_to_disk.html.md.erb b/geode-docs/developing/data_serialization/persist_pdx_metadata_to_disk.html.md.erb
index 2d044ce..7b30eae 100644
--- a/geode-docs/developing/data_serialization/persist_pdx_metadata_to_disk.html.md.erb
+++ b/geode-docs/developing/data_serialization/persist_pdx_metadata_to_disk.html.md.erb
@@ -19,22 +19,22 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode allows you to persist PDX metadata to disk and specify the disk store to use.
+<%=vars.product_name%> allows you to persist PDX metadata to disk and specify the disk store to use.
 
 <a id="persist_pdx_metadata_to_disk__section_7F357A8E56B54BFB9A5778C0F89E034E"></a>
 **Prerequisites**
 
--   Understand generally how to configure the Geode cache. See [Basic Configuration and Programming](../../basic_config/book_intro.html).
--   Understand how Geode disk stores work. See [Disk Storage](../../managing/disk_storage/chapter_overview.html).
+-   Understand generally how to configure the <%=vars.product_name%> cache. See [Basic Configuration and Programming](../../basic_config/book_intro.html).
+-   Understand how <%=vars.product_name%> disk stores work. See [Disk Storage](../../managing/disk_storage/chapter_overview.html).
 
 **Procedure**
 
 1.  Set the `<pdx>` attribute `persistent` to true in your cache configuration. This is required for caches that use PDX with persistent regions and with regions that use a gateway sender to distribute events across a WAN. Otherwise, it is optional.
-2.  (Optional) If you want to use a disk store that is not the Geode default disk store, set the `<pdx>` attribute `disk-store-name` to the name of your non-default disk store.
+2.  (Optional) If you want to use a disk store that is not the <%=vars.product_name%> default disk store, set the `<pdx>` attribute `disk-store-name` to the name of your non-default disk store.
     **Note:**
     If you are using PDX serialized objects as region entry keys and you are using persistent regions, then you must configure your PDX disk store to be a different one than the disk store used by the persistent regions.
 
-3.  (Optional) If you later want to rename the PDX types that are persisted to disk, you can do so on your offline disk-stores by executing the `pdx                             rename` command. See [pdx rename](../../tools_modules/gfsh/command-pages/pdx.html).
+3.  (Optional) If you later want to rename the PDX types that are persisted to disk, you can do so on your offline disk-stores by executing the `pdx rename` command. See [pdx rename](../../tools_modules/gfsh/command-pages/pdx.html).
 
 **Example cache.xml:**
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/data_serialization/program_application_for_pdx.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/data_serialization/program_application_for_pdx.html.md.erb b/geode-docs/developing/data_serialization/program_application_for_pdx.html.md.erb
index ae8be23..1ff499e 100644
--- a/geode-docs/developing/data_serialization/program_application_for_pdx.html.md.erb
+++ b/geode-docs/developing/data_serialization/program_application_for_pdx.html.md.erb
@@ -44,7 +44,7 @@ When fetching data in a cache with PDX serialized reads enabled, the safest appr
 
 **Prerequisites**
 
--   Understand generally how to configure the Geode cache. See [Basic Configuration and Programming](../../basic_config/book_intro.html#basic_config_management).
+-   Understand generally how to configure the <%=vars.product_name%> cache. See [Basic Configuration and Programming](../../basic_config/book_intro.html#basic_config_management).
 
 <a id="program_application_for_pdx__section_B3C7C7629DFD4483B32B27F84D64DFCF"></a>
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/data_serialization/use_pdx_high_level_steps.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/data_serialization/use_pdx_high_level_steps.html.md.erb b/geode-docs/developing/data_serialization/use_pdx_high_level_steps.html.md.erb
index c4894b6..c21f488 100644
--- a/geode-docs/developing/data_serialization/use_pdx_high_level_steps.html.md.erb
+++ b/geode-docs/developing/data_serialization/use_pdx_high_level_steps.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-To use PDX serialization, you can configure and use Geode's reflection-based autoserializer, or you can program the serialization of your objects by using the PDX interfaces and classes.
+To use PDX serialization, you can configure and use <%=vars.product_name%>'s reflection-based autoserializer, or you can program the serialization of your objects by using the PDX interfaces and classes.
 
 <a id="concept_A7C8890826394B4293C036DD739835BD__section_7F357A8E56B54BFB9A5778C0F89E034E"></a>
 Optionally, program your application code to deserialize individual fields out of PDX representations of your serialized objects. You may also need to persist your PDX metadata to disk for recovery on startup.
@@ -39,7 +39,7 @@ Optionally, program your application code to deserialize individual fields out o
 
     By using gfsh, this configuration can be propagated across the cluster through the [Cluster Configuration Service](../../configuring/cluster_config/gfsh_persist.html). Alternatively, you would need to configure `pdx read-serialized` in each server's `cache.xml` file.
 
-3.  If you are storing any Geode data on disk, then you must configure PDX serialization to use persistence. See [Persisting PDX Metadata to Disk](persist_pdx_metadata_to_disk.html) for more information.
+3.  If you are storing any <%=vars.product_name%> data on disk, then you must configure PDX serialization to use persistence. See [Persisting PDX Metadata to Disk](persist_pdx_metadata_to_disk.html) for more information.
 4.  (Optional) Wherever you run explicit application code to retrieve and manage your cached entries, you may want to manage your data objects without using full deserialization. To do this, see [Programming Your Application to Use PdxInstances](program_application_for_pdx.html).
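The configuration from the steps above can be sketched as a `cache.xml` fragment (the disk store name is hypothetical, used only for illustration):

```xml
<cache>
  <!-- read-serialized keeps fetched values as PdxInstances (step 2);
       persistent + disk-store-name persist PDX metadata to a
       non-default disk store (step 3). "pdxStore" is a made-up name. -->
  <pdx read-serialized="true" persistent="true" disk-store-name="pdxStore"/>
</cache>
```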
 
 ## PDX and Multi-Site (WAN) Deployments

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/data_serialization/use_pdx_serializable.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/data_serialization/use_pdx_serializable.html.md.erb b/geode-docs/developing/data_serialization/use_pdx_serializable.html.md.erb
index 2716814..7c367f6 100644
--- a/geode-docs/developing/data_serialization/use_pdx_serializable.html.md.erb
+++ b/geode-docs/developing/data_serialization/use_pdx_serializable.html.md.erb
@@ -46,8 +46,8 @@ For a domain object with source that you can modify, implement the `PdxSerializa
     ```
 
 3.  Program `PdxSerializable.toData.`
-    1.  Write each standard Java data field of your domain class using the `PdxWriter` write methods. Geode automatically provides `PdxWriter` to the `toData` method for `PdxSerializable` objects.
-    2.  Call the `PdxWriter` `markIdentifyField` method for each field you want to have Geode use to identify your object. Put this after the field's write method. Geode uses this information to compare objects for operations like distinct queries. If you do not set as least one identity field, then the `equals` and `hashCode` methods will use all PDX fields to compare objects and consequently, will not perform as well. It is important that the fields used by your `equals` and `hashCode` implementations are the same fields that you mark as identity fields.
+    1.  Write each standard Java data field of your domain class using the `PdxWriter` write methods. <%=vars.product_name%> automatically provides `PdxWriter` to the `toData` method for `PdxSerializable` objects.
+    2.  Call the `PdxWriter` `markIdentityField` method for each field you want to have <%=vars.product_name%> use to identify your object. Put this after the field's write method. <%=vars.product_name%> uses this information to compare objects for operations like distinct queries. If you do not set at least one identity field, then the `equals` and `hashCode` methods will use all PDX fields to compare objects and, consequently, will not perform as well. It is important that the fields used by your `equals` and `hashCode` implementations are the same fields that you mark as identity fields.
     3.  For a particular version of your class, you need to consistently write the same named field each time. The field names or number of fields must not change from one instance to another for the same class version.
     4.  For best performance, do fixed width fields first and then variable length fields.
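The write-order rules above can be illustrated with plain JDK streams (a sketch of the principle only, not the actual `PdxWriter`/`PdxReader` API; field names are made up):

```java
import java.io.*;

public class FieldOrderDemo {
    /** Writes fields in a fixed order (fixed-width first, variable-length
     *  last) and reads them back in exactly that order. */
    static String roundTrip() throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(7);            // "id"    (fixed width)
        out.writeDouble(19.99);     // "price" (fixed width)
        out.writeUTF("notebook");   // "name"  (variable length, last)

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        int id = in.readInt();      // reads mirror the write order exactly
        double price = in.readDouble();
        String name = in.readUTF();
        return id + " " + name + " " + price;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip());  // 7 notebook 19.99
    }
}
```

Changing the read order, or the set of fields written for a given class version, would corrupt the round-trip, which is why the consistency rule above matters.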
 
@@ -85,7 +85,7 @@ For a domain object with source that you can modify, implement the `PdxSerializa
 
     Provide the same names that you did in `toData` and call the read operations in the same order as you called the write operations in your `toData` implementation.
 
-    Geode automatically provides `PdxReader` to the `fromData` method for `PdxSerializable` objects.
+    <%=vars.product_name%> automatically provides `PdxReader` to the `fromData` method for `PdxSerializable` objects.
 
     Example `fromData` code:
 
@@ -110,6 +110,6 @@ For a domain object with source that you can modify, implement the `PdxSerializa
 
 **What to do next**
 
--   As needed, configure and program your Geode applications to use `PdxInstance` for selective object deserialization. See [Programming Your Application to Use PdxInstances](program_application_for_pdx.html).
+-   As needed, configure and program your <%=vars.product_name%> applications to use `PdxInstance` for selective object deserialization. See [Programming Your Application to Use PdxInstances](program_application_for_pdx.html).
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/data_serialization/use_pdx_serializer.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/data_serialization/use_pdx_serializer.html.md.erb b/geode-docs/developing/data_serialization/use_pdx_serializer.html.md.erb
index 8feee8f..74b0a1d 100644
--- a/geode-docs/developing/data_serialization/use_pdx_serializer.html.md.erb
+++ b/geode-docs/developing/data_serialization/use_pdx_serializer.html.md.erb
@@ -81,7 +81,7 @@ The `PdxSerializer` `toData` and `fromData` methods differ from those for `PdxSe
 3.  Program `PdxSerializer.toData` to recognize, cast, and handle your domain object:
 
     1.  Write each standard Java data field of your domain class using the `PdxWriter` write methods.
-    2.  Call the `PdxWriter` `markIdentityField` method for each field you want to have Geode use to identify your object. Put this after the field's write method. Geode uses this information to compare objects for operations like distinct queries. If you do not set as least one identity field, then the `equals` and `hashCode` methods will use all PDX fields to compare objects and consequently, will not perform as well. It is important that the fields used by your `equals` and `hashCode` implementations are the same fields that you mark as identity fields.
+    2.  Call the `PdxWriter` `markIdentityField` method for each field you want to have <%=vars.product_name%> use to identify your object. Put this after the field's write method. <%=vars.product_name%> uses this information to compare objects for operations like distinct queries. If you do not set at least one identity field, then the `equals` and `hashCode` methods will use all PDX fields to compare objects and, consequently, will not perform as well. It is important that the fields used by your `equals` and `hashCode` implementations are the same fields that you mark as identity fields.
     3.  For a particular version of your class, you need to consistently write the same named field each time. The field names or number of fields must not change from one instance to another for the same class version.
     4.  For best performance, do fixed width fields first and then variable length fields.
     5.  If desired, you can check the portability of the object before serializing it by adding the `checkPortability` parameter when using the `PdxWriter` `writeObject`, `writeObjectArray`, and `writeField` methods.
@@ -115,7 +115,7 @@ The `PdxSerializer` `toData` and `fromData` methods differ from those for `PdxSe
 
         Provide the same names that you did in `toData` and call the read operations in the same order as you called the write operations in your `toData` implementation.
 
-        Geode provides the domain class type and `PdxReader` to the `fromData` method.
+        <%=vars.product_name%> provides the domain class type and `PdxReader` to the `fromData` method.
 
         Example `fromData` code:
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/delta_propagation/delta_propagation_example.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/delta_propagation/delta_propagation_example.html.md.erb b/geode-docs/developing/delta_propagation/delta_propagation_example.html.md.erb
index 7a81962..f4b0b4a 100644
--- a/geode-docs/developing/delta_propagation/delta_propagation_example.html.md.erb
+++ b/geode-docs/developing/delta_propagation/delta_propagation_example.html.md.erb
@@ -28,9 +28,9 @@ In this example, the feeder client is connected to the first server, and the rec
 
 The example demonstrates the following operations:
 
-1.  In the Feeder client, the application updates the entry object and puts the entry. In response to the `put`, Geode calls `hasDelta`, which returns true, so Geode calls `toDelta` and forwards the extracted delta to the server. If `hasDelta` returned false, Geode would distribute the full entry value.
-2.  In Server1, Geode applies the delta to the cache, distributes the received delta to the server's peers, and forwards it to any other clients with interest in the entry (there are no other clients to Server1 in this example)
-3.  In Server2, Geode applies the delta to the cache and forwards it to its interested clients, which in this case is just the Receiver client.
+1.  In the Feeder client, the application updates the entry object and puts the entry. In response to the `put`, <%=vars.product_name%> calls `hasDelta`, which returns true, so <%=vars.product_name%> calls `toDelta` and forwards the extracted delta to the server. If `hasDelta` returned false, <%=vars.product_name%> would distribute the full entry value.
+2.  In Server1, <%=vars.product_name%> applies the delta to the cache, distributes the received delta to the server's peers, and forwards it to any other clients with interest in the entry (there are no other clients connected to Server1 in this example).
+3.  In Server2, <%=vars.product_name%> applies the delta to the cache and forwards it to its interested clients, which in this case is just the Receiver client.
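The feeder-to-receiver hop above can be sketched with JDK-only code. The `StockQuote` class and its methods here are hypothetical: they mirror the `hasDelta`/`toDelta`/`fromDelta` contract with plain streams rather than implementing the actual `org.apache.geode.Delta` interface.

```java
import java.io.*;

public class StockQuote {
    private double price;
    private boolean priceChanged;   // tracks whether a delta is pending

    public void setPrice(double p) { price = p; priceChanged = true; }
    public double getPrice()       { return price; }
    public boolean hasDelta()      { return priceChanged; }

    public void toDelta(DataOutput out) throws IOException {
        out.writeDouble(price);     // ship only the changed field
        priceChanged = false;
    }

    public void fromDelta(DataInput in) throws IOException {
        price = in.readDouble();    // apply the delta to the local copy
    }

    /** Simulates the feeder-to-receiver hop: extract the delta from one
     *  instance, apply it to another, return the receiver's new price. */
    static double transferDemo() throws IOException {
        StockQuote feeder = new StockQuote();
        StockQuote receiver = new StockQuote();
        feeder.setPrice(42.5);
        if (feeder.hasDelta()) {    // the cache would check this on put()
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            feeder.toDelta(new DataOutputStream(buf));
            receiver.fromDelta(new DataInputStream(
                    new ByteArrayInputStream(buf.toByteArray())));
        }
        return receiver.getPrice();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(transferDemo());  // 42.5
    }
}
```

Only the changed double crosses the wire here; a `hasDelta` result of false would instead trigger distribution of the full value.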
 
 <a id="delta_propagation_example__section_185444FC51FB467587A62DFEC07C9C7D"></a>
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/delta_propagation/delta_propagation_properties.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/delta_propagation/delta_propagation_properties.html.md.erb b/geode-docs/developing/delta_propagation/delta_propagation_properties.html.md.erb
index 0b5b40a..b5fddd1 100644
--- a/geode-docs/developing/delta_propagation/delta_propagation_properties.html.md.erb
+++ b/geode-docs/developing/delta_propagation/delta_propagation_properties.html.md.erb
@@ -54,12 +54,12 @@ Exceptions to this behavior:
 
 Cloning can be expensive, but it ensures that the new object is fully initialized with the delta before any application code sees it.
 
-When cloning is enabled, by default Geode does a deep copy of the object, using serialization. You may be able to improve performance by implementing `java.lang.Cloneable` and then implementing the `clone` method, making a deep copy of anything to which a delta may be applied. The goal is to reduce significantly the overhead of copying the object while still retaining the isolation needed for your deltas.
+When cloning is enabled, by default <%=vars.product_name%> does a deep copy of the object, using serialization. You may be able to improve performance by implementing `java.lang.Cloneable` and then implementing the `clone` method, making a deep copy of anything to which a delta may be applied. The goal is to reduce significantly the overhead of copying the object while still retaining the isolation needed for your deltas.
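A hand-written deep copy of just the delta-affected state might look like the following (hypothetical `Portfolio` class, JDK only; the assumption is that deltas touch only the `positions` list, so only that field needs copying):

```java
import java.util.*;

public class Portfolio implements Cloneable {
    String owner;                                     // immutable, safe to share
    List<Double> positions = new ArrayList<>();

    @Override
    public Portfolio clone() {
        try {
            Portfolio copy = (Portfolio) super.clone();
            copy.positions = new ArrayList<>(positions);  // deep-copy mutable field
            return copy;
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e);              // we are Cloneable; cannot happen
        }
    }

    /** Applies a "delta" to a clone and reports {original, clone} sizes. */
    static int[] cloneIsolationDemo() {
        Portfolio p = new Portfolio();
        p.owner = "alice";
        p.positions.add(10.0);
        Portfolio q = p.clone();
        q.positions.add(20.0);                        // mutate the clone only
        return new int[] { p.positions.size(), q.positions.size() };
    }

    public static void main(String[] args) {
        int[] sizes = cloneIsolationDemo();
        System.out.println(sizes[0] + " " + sizes[1]);  // 1 2
    }
}
```

Copying only the mutable, delta-affected field keeps the isolation guarantee while avoiding the cost of a full serialization-based deep copy.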
 
 Without cloning:
 
 -   It is possible for application code to read the entry value as it is being modified, possibly seeing the value in an intermediate, inconsistent state, with just part of the delta applied. You may choose to resolve this issue by having your application code synchronize on reads and writes.
--   Geode loses any reference to the old value because the old value is transformed in place into the new value. Because of this, your `CacheListener` sees the same new value returned for `EntryEvent.getOldValue` and `EntryEvent.getNewValue` .
+-   <%=vars.product_name%> loses any reference to the old value because the old value is transformed in place into the new value. Because of this, your `CacheListener` sees the same new value returned for `EntryEvent.getOldValue` and `EntryEvent.getNewValue`.
 -   Exceptions thrown from `fromDelta` may leave your cache in an inconsistent state. Without cloning, any interruption of the delta application could leave you with some of the fields in your cached object changed and others unchanged. If you do not use cloning, keep this in mind when you program your error handling in your `fromDelta` implementation.
 
 With cloning:

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/delta_propagation/how_delta_propagation_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/delta_propagation/how_delta_propagation_works.html.md.erb b/geode-docs/developing/delta_propagation/how_delta_propagation_works.html.md.erb
index 7aec9ab..3609734 100644
--- a/geode-docs/developing/delta_propagation/how_delta_propagation_works.html.md.erb
+++ b/geode-docs/developing/delta_propagation/how_delta_propagation_works.html.md.erb
@@ -26,7 +26,7 @@ In most distributed data management systems, the data stored in the system tends
 
 <a id="how_delta_propagation_works__section_ABE3589920D6477BBB2223A583AF169A"></a>
 
-Geode propagates object deltas using methods that you program. The methods are in the `Delta` interface, which you implement in your cached objects' classes. If any of your classes are plain old Java objects, you need to wrap them for this implementation.
+<%=vars.product_name%> propagates object deltas using methods that you program. The methods are in the `Delta` interface, which you implement in your cached objects' classes. If any of your classes are plain old Java objects, you need to wrap them for this implementation.
 
 This figure shows delta propagation for a change to an entry with key, k, and value object, v.
 
@@ -48,21 +48,21 @@ Sometimes `fromDelta` cannot be invoked because there is no object to apply the
 1.  If the system can determine beforehand that the receiver does not have a local copy, it sends the initial message with the full value. This is possible when regions are configured with no local data storage, such as with the region shortcut settings `PARTITION_PROXY` and `REPLICATE_PROXY`. These configurations are used, for example, to provide data update information to listeners and to pass updates forward to clients.
 2.  In less obvious cases, such as when an entry has been locally deleted, first the delta is sent, then the receiver requests a full value and that is sent. Whenever the full value is received, any further distributions to the receiver's peers or clients use the full value.
 
-Geode also does not propagate deltas for:
+<%=vars.product_name%> also does not propagate deltas for:
 
 -   Transactional commit
 -   The `putAll` operation
--   JVMs running Geode versions that do not support delta propagation (6.0 and earlier)
+-   JVMs running <%=vars.product_name%> versions that do not support delta propagation (6.0 and earlier)
 
 ## <a id="how_delta_propagation_works__section_F4A102A74530429F87BEA53C90D5CCFB" class="no-quick-link"></a>Supported Topologies and Limitations
 
 The following topologies support delta propagation (with some limitations):
 
--   **Peer-to-peer**. Geode system members distribute and receive entry changes using delta propagation, with these requirements and caveats:
+-   **Peer-to-peer**. <%=vars.product_name%> system members distribute and receive entry changes using delta propagation, with these requirements and caveats:
     -   Regions must be partitioned or have their scope set to `distributed-ack` or `global`. The region shortcut settings for distributed regions use `distributed-ack` `scope`. Delta propagation does not work for regions with `distributed-no-ack` `scope` because the receiver could not recover if an exception occurred while applying the delta.
     -   For partitioned regions, if a receiving peer does not hold the primary or a secondary copy of the entry, but still requires a value, the system automatically sends the full value.
     -   To receive deltas, a region must be non-empty. The system automatically sends the full value to empty regions. Empty regions can send deltas.
--   **Client/server**. Geode clients can always send deltas to the servers, and servers can usually sent deltas to clients. These configurations require the servers to send full values to the clients, instead of deltas:
+-   **Client/server**. <%=vars.product_name%> clients can always send deltas to the servers, and servers can usually send deltas to clients. These configurations require the servers to send full values to the clients, instead of deltas:
     -   When the client's `gemfire.properties` setting `conflate-events` is set to true, the servers send full values for all regions.
     -   When the server region attribute `enable-subscription-conflation` is set to true and the client `gemfire.properties` setting `conflate-events` is set to `server`, the servers send full values for the region.
     -   When the client region is configured with the `PROXY` client region shortcut setting (empty client region), servers send full values.

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/delta_propagation/implementing_delta_propagation.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/delta_propagation/implementing_delta_propagation.html.md.erb b/geode-docs/developing/delta_propagation/implementing_delta_propagation.html.md.erb
index 4b6ae99..5aacf91 100644
--- a/geode-docs/developing/delta_propagation/implementing_delta_propagation.html.md.erb
+++ b/geode-docs/developing/delta_propagation/implementing_delta_propagation.html.md.erb
@@ -26,12 +26,12 @@ Use the following procedure to implement delta propagation in your distributed s
 
 1.  Study your object types and expected application behavior to determine which regions can benefit from using delta propagation. Delta propagation does not improve performance for all data and data modification scenarios. See [When to Avoid Delta Propagation](when_to_use_delta_prop.html#when_to_use_delta_prop).
 2.  For each region where you are using delta propagation, choose whether to enable cloning using the delta propagation property `cloning-enabled`. Cloning is disabled by default. See [Delta Propagation Properties](delta_propagation_properties.html#delta_propagation_properties).
-3.  If you do not enable cloning, review all associated listener code for dependencies on `EntryEvent.getOldValue`. Without cloning, Geode modifies the entry in place and so loses its reference to the old value. For delta events, the `EntryEvent` methods `getOldValue` and `getNewValue` both return the new value.
+3.  If you do not enable cloning, review all associated listener code for dependencies on `EntryEvent.getOldValue`. Without cloning, <%=vars.product_name%> modifies the entry in place and so loses its reference to the old value. For delta events, the `EntryEvent` methods `getOldValue` and `getNewValue` both return the new value.
 4.  For every class where you want delta propagation, implement `org.apache.geode.Delta` and update your methods to support delta propagation. Exactly how you do this depends on your application and object needs, but these steps describe the basic approach:
     1.  If the class is a plain old Java object (POJO), wrap it for this implementation and update your code to work with the wrapper class.
     2.  Define as transient any extra object fields that you use to manage delta state. This can help performance when the full object is distributed. Whenever standard Java serialization is used, the `transient` keyword tells Java not to serialize the field.
     3.  Study the object contents to decide how to handle delta changes. Delta propagation has the same issues of distributed concurrency control as the distribution of full objects, but on a more detailed level. Some parts of your objects may be able to change independent of one another while others may always need to change together. Send deltas large enough to keep your data logically consistent. If, for example, field A and field B depend on each other, then your delta distributions should either update both fields or neither. As with regular updates, the fewer producers you have on a data region, the lower your likelihood of concurrency issues.
-    4.  In the application code that puts entries, put the fully populated object into the local cache. Even though you are planning to send only deltas, errors on the receiving end could cause Geode to request the full object, so you must provide it to the originating put method. Do this even in empty producers, with regions configured for no local data storage. This usually means doing a get on the entry unless you are sure it does not already exist anywhere in the distributed region.
+    4.  In the application code that puts entries, put the fully populated object into the local cache. Even though you are planning to send only deltas, errors on the receiving end could cause <%=vars.product_name%> to request the full object, so you must provide it to the originating put method. Do this even in empty producers, with regions configured for no local data storage. This usually means doing a get on the entry unless you are sure it does not already exist anywhere in the distributed region.
     5.  Change each field's update method to record information about the update. The information must be sufficient for `toDelta` to encode the delta and any additional required delta information when it is invoked.
     6.  Write `hasDelta` to report on whether a delta is available.
     7.  Write `toDelta` to create a byte stream with the changes to the object and any other information `fromDelta` will need to apply the changes. Before returning from `toDelta`, reset your delta state to indicate that there are no delta changes waiting to be sent.
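The steps above can be sketched in Java. This is a minimal, hypothetical illustration: the `Quote` class and its fields are invented, and a simplified `Delta` interface is declared locally as a stand-in for `org.apache.geode.Delta` so the sketch is self-contained (the real interface's `fromDelta` also declares Geode's `InvalidDeltaException`).

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.io.Serializable;

// Simplified local stand-in for org.apache.geode.Delta, so this compiles
// without the Geode jar; the method shapes mirror the real interface.
interface Delta {
  boolean hasDelta();
  void toDelta(DataOutput out) throws IOException;
  void fromDelta(DataInput in) throws IOException;
}

// Hypothetical cached value class; only the price field participates in deltas.
class Quote implements Delta, Serializable {
  private double price;
  // Transient delta bookkeeping (step 4b): excluded when the full object
  // is serialized and distributed.
  private transient boolean priceChanged;

  public void setPrice(double price) {  // step 5: record enough info for toDelta
    this.price = price;
    this.priceChanged = true;
  }

  public double getPrice() {
    return price;
  }

  @Override
  public boolean hasDelta() {  // step 6: report whether a delta is waiting
    return priceChanged;
  }

  @Override
  public void toDelta(DataOutput out) throws IOException {  // step 7
    out.writeDouble(price);
    priceChanged = false;  // reset delta state before returning
  }

  @Override
  public void fromDelta(DataInput in) throws IOException {
    price = in.readDouble();  // apply the change to the locally cached copy
  }
}
```

On a receiving member, `fromDelta` is invoked against the locally cached copy of the value, so it mutates `this` rather than returning a new object; this is why, without cloning, the old and new values are the same object.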

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/distributed_regions/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/distributed_regions/chapter_overview.html.md.erb b/geode-docs/developing/distributed_regions/chapter_overview.html.md.erb
index d24de37..48fe83d 100644
--- a/geode-docs/developing/distributed_regions/chapter_overview.html.md.erb
+++ b/geode-docs/developing/distributed_regions/chapter_overview.html.md.erb
@@ -19,25 +19,25 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-In addition to basic region management, distributed and replicated regions include options for things like push and pull distribution models, global locking, and region entry versions to ensure consistency across Geode members.
+In addition to basic region management, distributed and replicated regions include options for things like push and pull distribution models, global locking, and region entry versions to ensure consistency across <%=vars.product_name%> members.
 
--   **[How Distribution Works](../../developing/distributed_regions/how_distribution_works.html)**
+-   **[How Distribution Works](how_distribution_works.html)**
 
     To use distributed and replicated regions, you should understand how they work and your options for managing them.
 
--   **[Options for Region Distribution](../../developing/distributed_regions/choosing_level_of_dist.html)**
+-   **[Options for Region Distribution](choosing_level_of_dist.html)**
 
-    You can use distribution with and without acknowledgment, or global locking for your region distribution. Regions that are configured for distribution with acknowledgment can also be configured to resolve concurrent updates consistently across all Geode members that host the region.
+    You can use distribution with and without acknowledgment, or global locking for your region distribution. Regions that are configured for distribution with acknowledgment can also be configured to resolve concurrent updates consistently across all <%=vars.product_name%> members that host the region.
 
--   **[How Replication and Preloading Work](../../developing/distributed_regions/how_replication_works.html)**
+-   **[How Replication and Preloading Work](how_replication_works.html)**
 
     To work with replicated and preloaded regions, you should understand how their data is initialized and maintained in the cache.
 
--   **[Configure Distributed, Replicated, and Preloaded Regions](../../developing/distributed_regions/managing_distributed_regions.html)**
+-   **[Configure Distributed, Replicated, and Preloaded Regions](managing_distributed_regions.html)**
 
     Plan the configuration and ongoing management of your distributed, replicated, and preloaded regions, and configure the regions.
 
--   **[Locking in Global Regions](../../developing/distributed_regions/locking_in_global_regions.html)**
+-   **[Locking in Global Regions](locking_in_global_regions.html)**
 
     In global regions, the system locks entries and the region during updates. You can also explicitly lock the region and its entries as needed by your application. Locking includes system settings that help you optimize performance and locking behavior between your members.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/distributed_regions/choosing_level_of_dist.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/distributed_regions/choosing_level_of_dist.html.md.erb b/geode-docs/developing/distributed_regions/choosing_level_of_dist.html.md.erb
index 3d48ab4..72cfcfe 100644
--- a/geode-docs/developing/distributed_regions/choosing_level_of_dist.html.md.erb
+++ b/geode-docs/developing/distributed_regions/choosing_level_of_dist.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-You can use distribution with and without acknowledgment, or global locking for your region distribution. Regions that are configured for distribution with acknowledgment can also be configured to resolve concurrent updates consistently across all Geode members that host the region.
+You can use distribution with and without acknowledgment, or global locking for your region distribution. Regions that are configured for distribution with acknowledgment can also be configured to resolve concurrent updates consistently across all <%=vars.product_name%> members that host the region.
 
 <a id="choosing_level_of_dist__section_F2528B151DD54CEFA05C4BA655BCF016"></a>
 Each distributed region must have the same scope and concurrency checking setting throughout the distributed system.


[07/51] [abbrv] geode git commit: Merge branch 'develop' of https://git-wip-us.apache.org/repos/asf/geode into develop

Posted by kl...@apache.org.
Merge branch 'develop' of https://git-wip-us.apache.org/repos/asf/geode into develop


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/87bee084
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/87bee084
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/87bee084

Branch: refs/heads/feature/GEODE-1279
Commit: 87bee0843d255187c8a53ccb4ffd57534168f873
Parents: 684f85d 13ad4b6
Author: Udo Kohlmeyer <uk...@pivotal.io>
Authored: Mon Aug 14 15:31:49 2017 -0700
Committer: Udo Kohlmeyer <uk...@pivotal.io>
Committed: Mon Aug 14 15:31:49 2017 -0700

----------------------------------------------------------------------
 geode-book/Gemfile.lock                         |    2 +-
 .../source/subnavs/geode-subnav.erb             |   54 +-
 .../how_region_versioning_works.html.md.erb     |    4 +-
 .../disk_free_space_monitoring.html.md.erb      |    2 +-
 .../heap_use/off_heap_management.html.md.erb    |    2 +-
 .../region_compression.html.md.erb              |    2 +-
 geode-docs/reference/book_intro.html.md.erb     |   20 +-
 .../statistics/statistics_list.html.md.erb      | 1310 ------------------
 .../reference/statistics_list.html.md.erb       | 1310 ++++++++++++++++++
 .../topics/cache-elements-list.html.md.erb      |    4 +-
 .../reference/topics/cache_xml.html.md.erb      |   50 +-
 .../chapter_overview_cache_xml.html.md.erb      |    8 +-
 ...chapter_overview_regionshortcuts.html.md.erb |   54 +-
 .../client-cache-elements-list.html.md.erb      |    2 +-
 .../reference/topics/client-cache.html.md.erb   |   42 +-
 .../topics/gemfire_properties.html.md.erb       |   46 +-
 .../reference/topics/gfe_cache_xml.html.md.erb  |   78 +-
 ...handling_exceptions_and_failures.html.md.erb |   10 +-
 ...mory_requirements_for_cache_data.html.md.erb |   30 +-
 ...on-ascii_strings_in_config_files.html.md.erb |    6 +-
 .../region_shortcuts_reference.html.md.erb      |    2 +-
 21 files changed, 1516 insertions(+), 1522 deletions(-)
----------------------------------------------------------------------



[14/51] [abbrv] geode git commit: GEODE-3412: adding files missing from last commit. This now closes #714

Posted by kl...@apache.org.
GEODE-3412: adding files missing from last commit. This now closes #714


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/bc655eb9
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/bc655eb9
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/bc655eb9

Branch: refs/heads/feature/GEODE-1279
Commit: bc655eb9b3683282705e7449f0f7a720c1a6243d
Parents: a7a197d
Author: Brian Rowe <br...@pivotal.io>
Authored: Tue Aug 15 11:21:51 2017 -0700
Committer: Udo Kohlmeyer <uk...@pivotal.io>
Committed: Tue Aug 15 13:43:07 2017 -0700

----------------------------------------------------------------------
 .../cache/tier/sockets/GenericProtocolServerConnection.java      | 1 +
 .../internal/cache/tier/sockets/ServerConnectionFactory.java     | 1 +
 .../java/org/apache/geode/security/NoOpStreamAuthenticator.java  | 4 +---
 .../main/java/org/apache/geode/security/StreamAuthenticator.java | 2 +-
 .../cache/tier/sockets/GenericProtocolServerConnectionTest.java  | 1 +
 .../geode/protocol/protobuf/ProtobufSimpleAuthenticator.java     | 2 +-
 6 files changed, 6 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/bc655eb9/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnection.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnection.java b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnection.java
index 7c8fb5c..93a7f6f 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnection.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnection.java
@@ -20,6 +20,7 @@ import org.apache.geode.internal.cache.tier.Acceptor;
 import org.apache.geode.internal.cache.tier.CachedRegionHelper;
 import org.apache.geode.internal.security.SecurityService;
 import org.apache.geode.security.SecurityManager;
+import org.apache.geode.security.StreamAuthenticator;
 
 import java.io.IOException;
 import java.io.InputStream;

http://git-wip-us.apache.org/repos/asf/geode/blob/bc655eb9/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionFactory.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionFactory.java b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionFactory.java
index 1d53297..9173f6a 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionFactory.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ServerConnectionFactory.java
@@ -19,6 +19,7 @@ import org.apache.geode.internal.cache.InternalCache;
 import org.apache.geode.internal.cache.tier.Acceptor;
 import org.apache.geode.internal.cache.tier.CachedRegionHelper;
 import org.apache.geode.internal.security.SecurityService;
+import org.apache.geode.security.StreamAuthenticator;
 
 import java.io.IOException;
 import java.net.Socket;

http://git-wip-us.apache.org/repos/asf/geode/blob/bc655eb9/geode-core/src/main/java/org/apache/geode/security/NoOpStreamAuthenticator.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/security/NoOpStreamAuthenticator.java b/geode-core/src/main/java/org/apache/geode/security/NoOpStreamAuthenticator.java
index bca1ec2..0a6dde1 100644
--- a/geode-core/src/main/java/org/apache/geode/security/NoOpStreamAuthenticator.java
+++ b/geode-core/src/main/java/org/apache/geode/security/NoOpStreamAuthenticator.java
@@ -12,9 +12,7 @@
  * or implied. See the License for the specific language governing permissions and limitations under
  * the License.
  */
-package org.apache.geode.internal.cache.tier.sockets;
-
-import org.apache.geode.security.SecurityManager;
+package org.apache.geode.security;
 
 import java.io.IOException;
 import java.io.InputStream;

http://git-wip-us.apache.org/repos/asf/geode/blob/bc655eb9/geode-core/src/main/java/org/apache/geode/security/StreamAuthenticator.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/security/StreamAuthenticator.java b/geode-core/src/main/java/org/apache/geode/security/StreamAuthenticator.java
index 51cbf2e..7db1a2b 100644
--- a/geode-core/src/main/java/org/apache/geode/security/StreamAuthenticator.java
+++ b/geode-core/src/main/java/org/apache/geode/security/StreamAuthenticator.java
@@ -12,7 +12,7 @@
  * or implied. See the License for the specific language governing permissions and limitations under
  * the License.
  */
-package org.apache.geode.internal.cache.tier.sockets;
+package org.apache.geode.security;
 
 import org.apache.geode.security.SecurityManager;
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bc655eb9/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnectionTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnectionTest.java b/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnectionTest.java
index 3dcf343..383fbf0 100644
--- a/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnectionTest.java
+++ b/geode-core/src/test/java/org/apache/geode/internal/cache/tier/sockets/GenericProtocolServerConnectionTest.java
@@ -24,6 +24,7 @@ import org.apache.geode.internal.cache.InternalCache;
 import org.apache.geode.internal.cache.tier.Acceptor;
 import org.apache.geode.internal.cache.tier.CachedRegionHelper;
 import org.apache.geode.internal.security.SecurityService;
+import org.apache.geode.security.NoOpStreamAuthenticator;
 import org.apache.geode.test.junit.categories.UnitTest;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;

http://git-wip-us.apache.org/repos/asf/geode/blob/bc655eb9/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufSimpleAuthenticator.java
----------------------------------------------------------------------
diff --git a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufSimpleAuthenticator.java b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufSimpleAuthenticator.java
index 59c61e2..1517552 100644
--- a/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufSimpleAuthenticator.java
+++ b/geode-protobuf/src/main/java/org/apache/geode/protocol/protobuf/ProtobufSimpleAuthenticator.java
@@ -14,7 +14,7 @@
  */
 package org.apache.geode.protocol.protobuf;
 
-import org.apache.geode.internal.cache.tier.sockets.StreamAuthenticator;
+import org.apache.geode.security.StreamAuthenticator;
 import org.apache.geode.security.AuthenticationFailedException;
 import org.apache.geode.security.SecurityManager;
 


[03/51] [abbrv] geode git commit: GEODE-3395 Variable-ize product version and name in user guide - Reference section

Posted by kl...@apache.org.
http://git-wip-us.apache.org/repos/asf/geode/blob/13ad4b6e/geode-docs/reference/statistics_list.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/reference/statistics_list.html.md.erb b/geode-docs/reference/statistics_list.html.md.erb
new file mode 100644
index 0000000..f26075d
--- /dev/null
+++ b/geode-docs/reference/statistics_list.html.md.erb
@@ -0,0 +1,1310 @@
+---
+title: Geode Statistics List
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements.  See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+<a id="statistics_list"></a>
+
+
+This section describes the primary statistics gathered by Geode when statistics are enabled.
+
+All statistics gathering requires the `statistic-sampling-enabled` property in the `gemfire.properties` file to be true. Statistics that measure time additionally require the `enable-time-statistics` property to be true.
+
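For reference, the two settings mentioned above appear in `gemfire.properties` as plain key-value pairs:

```properties
# gemfire.properties
# Required for all statistics gathering:
statistic-sampling-enabled=true
# Additionally required for statistics that measure time:
enable-time-statistics=true
```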
+Performance statistics are collected for each Java application or cache server that connects to a distributed system.
+
+-   **[Cache Performance (CachePerfStats)](#section_DEF8D3644D3246AB8F06FE09A37DC5C8)**
+
+-   **[Cache Server (CacheServerStats)](#section_EF5C2C59BFC74FFB8607F9571AB9A471)**
+
+-   **[Client-Side Notifications (CacheClientUpdaterStats)](#section_B08C0783BBF9489E8BB48B4AEC597C62)**
+
+-   **[Client-to-Server Messaging Performance (ClientStats)](#section_04B7D7387E584712B7710B5ED1E876BB)**
+
+-   **[Client Connection Pool (PoolStats)](#section_6C247F61DB834C079A16BE92789D4692)**
+
+-   **[Continuous Querying (CQStatistics)](#section_66C0E7748501480B85209D57D24256D5)**
+
+-   **[Delta Propagation (DeltaPropagationStatistics)](#section_D4ABED3FF94245C0BEE0F6FC9481E867)**
+
+-   **[Disk Space Usage (DiskDirStatistics)](#section_6C2BECC63A83456190B029DEDB8F4BE3)**
+
+-   **[Disk Usage and Performance (DiskRegionStatistics)](#section_983BFC6D53C74829A04A91C39E06315F)**
+
+-   **[Distributed System Messaging (DistributionStats)](#section_ACB4161F10D64BC0B15871D003FF6FDF)**
+
+-   **[Distributed Lock Services (DLockStats)](#section_78D346A580724E1EA645E31626EECE40)**
+
+-   **[Function Execution (FunctionServiceStatistics)](#section_5E211DDB0E8640689AD0A4659511E17A)**
+
+-   **[Gateway Queue (GatewayStatistics)](#section_C4199A541B1F4B82B6178C416C0FAE4B)**
+
+-   **[Indexes (IndexStats)](#section_86A61860024B480592DAC67FFB882538)**
+
+-   **[JVM Performance](#section_607C3867602E410CAE5FAB26A7FF1CB9)**
+
+-   **[Locator (LocatorStatistics)](#section_C48B654F973E4B44AD825D459C23A6CD)**
+
+-   **[Lucene Indexes (LuceneIndexStats)](#LuceneStats)**
+
+-   **[Off-Heap (OffHeapMemoryStats)](#topic_ohc_tjk_w5)**
+
+-   **[Operating System Statistics - Linux](#section_923B28F01BC3416786D3AFBD87F22A5E)**
+
+-   **[Partitioned Regions (PartitionedRegion&lt;partitioned\_region\_name&gt;Statistics)](#section_35AC170770C944C3A336D9AEC2D2F7C5)**
+
+-   **[Region Entry Eviction – Count-Based (LRUStatistics)](#section_374FBD92A3B74F6FA08AA23047929B4F)**
+
+-   **[Region Entry Eviction – Size-based (LRUStatistics)](#section_3D2AA2BCE5B6485699A7B6ADD1C49FF7)**
+
+-   **[Server Notifications for All Clients (CacheClientNotifierStatistics)](#section_5362EF9AECBC48D69475697109ABEDFA)**
+
+-   **[Server Notifications for Single Client (CacheClientProxyStatistics)](#section_E03865F509E543D9B8F9462B3DA6255E)**
+
+-   **[Server-to-Client Messaging Performance (ClientSubscriptionStats)](#section_3AB1C0AA55014163A2BBF68E13D25E3A)**
+
+-   **[Statistics Collection (StatSampler)](#section_55F3AF6413474317902845EE4996CC21)**
+
+## <a id="section_DEF8D3644D3246AB8F06FE09A37DC5C8" class="no-quick-link"></a>Cache Performance (CachePerfStats)
+
+Statistics for the Geode cache. These can be used to determine the type and number of cache operations being performed and how much time they consume.
+
+Regarding Geode cache transactions, transaction-related statistics are compiled and stored as properties in the CachePerfStats statistic resource. Because the transaction’s data scope is the cache, these statistics are collected on a per-cache basis.
+
+The primary statistics are:
+
+| Statistic                        | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
+|----------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `cacheListenerCallsCompleted`    | Total number of times a cache listener call has completed.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
+| `cacheListenerCallsInProgress`   | Current number of threads doing a cache listener call.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
+| `cacheListenerCallTime`          | Total time spent doing cache listener calls.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
+| `cacheWriterCallsCompleted`      | Total number of times a cache writer call has completed.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
+| `cacheWriterCallsInProgress`     | Current number of threads doing a cache writer call.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
+| `cacheWriterCallTime`            | Total time spent doing cache writer calls.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
+| `compressions`                   | Total number of compression operations.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
+| `compressTime`                   | Total time, in nanoseconds, spent compressing data.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
+| `conflatedEvents`                | The number of events that were conflated, and not delivered to event listeners or gateway senders on this member. Events are typically conflated because a later event was already applied to the cache, or because a concurrent event was ignored to ensure cache consistency. Note that some members may discard an update while other members apply the update, depending on the order in which each member receives the update. For this reason, the `conflatedEvents` statistic will differ for each <%=vars.product_name%> member. See [Consistency for Region Updates](../developing/distributed_regions/region_entry_versions.html#topic_CF2798D3E12647F182C2CEC4A46E2045). |
+| `creates`                        | The total number of times an entry is added to this cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
+| `decompressions`                 | Total number of decompression operations.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
+| `decompressTime`                 | Total time, in nanoseconds, spent decompressing data.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
+| `destroys`                       | The total number of times a cache object entry has been destroyed in this cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
+| `eventQueueSize`                 | The number of cache events waiting to be processed.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
+| `eventQueueThrottleCount`        | The total number of times a thread was delayed in adding an event to the event queue.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
+| `eventQueueThrottleTime`         | The total amount of time, in nanoseconds, spent delayed by the event queue throttle.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
+| `eventThreads`                   | The number of threads currently processing events.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
+| `getInitialImageKeysReceived`    | Total number of keys received while doing getInitialImage operations.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
+| `getInitialImagesCompleted`      | Total number of times getInitialImages initiated by this cache have completed.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
+| `getInitialImagesInProgress`     | Current number of getInitialImage operations in progress.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
+| `getInitialImageTime`            | Total time spent doing getInitialImages for region creation.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
+| `gets`                           | The total number of times a successful get has been done on this cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
+| `getTime`                        | Total time spent doing get operations from this cache (including netsearch and netload).                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
+| `invalidates`                    | The total number of times an existing cache object entry value in this cache has been invalidated.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
+| `loadsCompleted`                 | Total number of times a load on this cache has completed as a result of either a local get() or a remote netload.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
+| `loadsInProgress`                | Current number of threads in this cache doing a cache load.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
+| `loadTime`                       | Total time spent invoking loaders on this cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
+| `misses`                         | Total number of times a get on the cache did not find a value already in local memory. The number of hits (that is, gets that did not miss) can be calculated by subtracting misses from gets.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
+| `netloadsCompleted`              | Total number of times a network load initiated on this cache has completed.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
+| `netloadsInProgress`             | Current number of threads doing a network load initiated by a get() in this cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
+| `netloadTime`                    | Total time spent doing network loads on this cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
+| `netsearchesCompleted`           | Total number of times network searches initiated by this cache have completed.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
+| `netsearchesInProgress`          | Current number of threads doing a network search initiated by a get() in this cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
+| `netsearchTime`                  | Total time spent doing network searches for cache values.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
+| `nonReplicatedTombstonesSize`    | The approximate number of bytes that are currently consumed by tombstones in non-replicated regions. See [Consistency for Region Updates](../developing/distributed_regions/region_entry_versions.html#topic_CF2798D3E12647F182C2CEC4A46E2045).                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
+| `partitionedRegions`             | The current number of partitioned regions in the cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
+| `postCompressedBytes`            | Total number of bytes after compressing.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
+| `preCompressedBytes`             | Total number of bytes before compressing.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
+| `putAlls`                        | The total number of times a map is added or replaced in this cache as a result of a local operation. Note that this only counts putAlls done explicitly on this cache; it does not count updates pushed from other caches. |
+| `putallTime`                     | Total time spent replacing a map in this cache as a result of a local operation. This includes synchronizing on the map, invoking cache callbacks, sending messages to other caches, and waiting for responses (if required). |
+| `puts`                           | The total number of times an entry is added or replaced in this cache as a result of a local operation (a put(), a create(), or a get() that results in a load, netsearch, or netload of a value). Note that this only counts puts done explicitly on this cache; it does not count updates pushed from other caches. |
+| `putTime`                        | Total time spent adding or replacing an entry in this cache as a result of a local operation. This includes synchronizing on the map, invoking cache callbacks, sending messages to other caches, and waiting for responses (if required).                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
+| `queryExecutions`                | Total number of times a query has been executed. |
+| `queryExecutionTime`             | Total time spent executing queries.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
+| `regions`                        | The current number of regions in the cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
+| `replicatedTombstonesSize`       | The approximate number of bytes that are currently consumed by tombstones in replicated or partitioned regions. See [Consistency for Region Updates](../developing/distributed_regions/region_entry_versions.html#topic_CF2798D3E12647F182C2CEC4A46E2045).                                                                                                                                                                                                                                                                                                                                                                                                                                      |
+| `tombstoneCount`                 | The total number of tombstone entries created for performing concurrency checks. See [Consistency for Region Updates](../developing/distributed_regions/region_entry_versions.html#topic_CF2798D3E12647F182C2CEC4A46E2045).                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
+| `tombstoneGCCount`               | The total number of tombstone garbage collection cycles that a member has performed. See [Consistency for Region Updates](../developing/distributed_regions/region_entry_versions.html#topic_CF2798D3E12647F182C2CEC4A46E2045).                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
+| `txCommitChanges`                | Total number of changes made by committed transactions.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
+| `txCommits`                      | Total number of times a transaction commit has succeeded.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
+| `txCommitTime`                   | The total amount of time, in nanoseconds, spent doing successful transaction commits.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
+| `txConflictCheckTime`            | The total amount of time, in nanoseconds, spent doing conflict checks during transaction commit.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
+| `txFailedLifeTime`               | The total amount of time, in nanoseconds, spent in a transaction before a failed commit. The time measured starts at transaction begin and ends when commit is called.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
+| `txFailureChanges`               | Total number of changes lost by failed transactions.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
+| `txFailures`                     | Total number of times a transaction commit has failed.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
+| `txFailureTime`                  | The total amount of time, in nanoseconds, spent doing failed transaction commits.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
+| `txRollbackChanges`              | Total number of changes lost by explicit transaction rollbacks.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
+| `txRollbackLifeTime`             | The total amount of time, in nanoseconds, spent in a transaction before an explicit rollback. The time measured starts at transaction begin and ends when rollback is called.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |
+| `txRollbacks`                    | Total number of times a transaction has been explicitly rolled back.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
+| `txRollbackTime`                 | The total amount of time, in nanoseconds, spent doing explicit transaction rollbacks.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
+| `txSuccessLifeTime`              | The total amount of time, in nanoseconds, spent in a transaction before a successful commit. The time measured starts at transaction begin and ends when commit is called.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
+| `updates`                        | The total number of updates originating remotely that have been applied to this cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
+| `updateTime`                     | Total time spent performing an update.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
+
+## <a id="section_EF5C2C59BFC74FFB8607F9571AB9A471" class="no-quick-link"></a>Cache Server (CacheServerStats)
+
+Statistics for cache servers and for gateway receivers are recorded in CacheServerStats on the cache server. The primary statistics are:
+
+| Statistic                                 | Description                                                                                                                                    |
+|-------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|
+| `abandonedReadRequests`                   | Number of read operations (requests) abandoned by clients.                                                                                     |
+| `abandonedWriteRequests`                  | Number of write operations (requests) abandoned by clients.                                                                                    |
+| `acceptsInProgress`                       | Current number of server accepts that are attempting to do the initial handshake with the client.                                              |
+| `acceptThreadStarts`                      | Total number of threads created (starts) to deal with an accepted socket. Note that this is not the current number of threads.                 |
+| `batchSize`                               | The size (in bytes) of the batches received.                                                                                                   |
+| `clearRegionRequests`                     | Number of cache client clearRegion requests.                                                                                                   |
+| `clearRegionResponses`                    | Number of clearRegion responses written to the cache client.                                                                                   |
+| `clientNotificationRequests`              | Number of cache client notification requests.                                                                                                  |
+| `clientReadyRequests`                     | Number of cache client ready requests.                                                                                                         |
+| `clientReadyResponses`                    | Number of client ready responses written to the cache client.                                                                                  |
+| `closeConnectionRequests`                 | Number of cache client close connection requests.                                                                                              |
+| `connectionLoad`                          | The load from client to server connections as reported by the load probe installed in this server.                                             |
+| `connectionsTimedOut`                     | Total number of connections that have been timed out by the server because of client inactivity.                                               |
+| `connectionThreads`                       | Current number of threads dealing with a client connection.                                                                                    |
+| `connectionThreadStarts`                  | Total number of threads created (starts) to deal with a client connection. Note that this is not the current number of threads.                |
+| `containsKeyRequests`                     | Number of cache client containsKey requests.                                                                                                   |
+| `containsKeyResponses`                    | Number of containsKey responses written to the cache client.                                                                                   |
+| `currentClientConnections`                | Number of sockets accepted.                                                                                                                    |
+| `currentClients`                          | Number of client virtual machines (clients) connected.                                                                                         |
+| `destroyRegionRequests`                   | Number of cache client destroyRegion requests.                                                                                                 |
+| `destroyRegionResponses`                  | Number of destroyRegion responses written to the cache client.                                                                                 |
+| `destroyRequests`                         | Number of cache client destroy requests.                                                                                                       |
+| `destroyResponses`                        | Number of destroy responses written to the cache client.                                                                                       |
+| `failedConnectionAttempts`                | Number of failed connection attempts.                                                                                                          |
+| `getRequests`                             | Number of cache client get requests.                                                                                                           |
+| `getResponses`                            | Number of get responses written to the cache client.                                                                                           |
+| `loadPerConnection`                       | The estimate of how much load is added for each new connection as reported by the load probe installed in this server.                         |
+| `loadPerQueue`                            | The estimate of how much load would be added for each new subscription connection as reported by the load probe installed in this server.      |
+| `messageBytesBeingReceived`               | Current number of bytes consumed by messages being received or processed.                                                                      |
+| `messagesBeingReceived`                   | Current number of messages being received off the network or being processed after reception.                                                  |
+| `outOfOrderGatewayBatchIds`               | Number of Out of Order batch IDs (batches).                                                                                                    |
+| `processBatchRequests`                    | Number of cache client processBatch requests.                                                                                                  |
+| `processBatchResponses`                   | Number of processBatch responses written to the cache client.                                                                                  |
+| `processBatchTime`                        | Total time, in nanoseconds, spent in processing a cache client processBatch request.                                                           |
+| `processClearRegionTime`                  | Total time, in nanoseconds, spent in processing a cache client clearRegion request, including the time to clear the region from the cache.     |
+| `processClientNotificationTime`           | Total time, in nanoseconds, spent in processing a cache client notification request.                                                           |
+| `processClientReadyTime`                  | Total time, in nanoseconds, spent in processing a cache client ready request.                                                                  |
+| `processCloseConnectionTime`              | Total time, in nanoseconds, spent in processing a cache client close connection request.                                                       |
+| `processContainsKeyTime`                  | Total time spent, in nanoseconds, processing a containsKey request.                                                                            |
+| `processDestroyRegionTime`                | Total time, in nanoseconds, spent in processing a cache client destroyRegion request, including the time to destroy the region from the cache. |
+| `processDestroyTime`                      | Total time, in nanoseconds, spent in processing a cache client destroy request, including the time to destroy an object from the cache.        |
+| `processGetTime`                          | Total time, in nanoseconds, spent in processing a cache client get request, including the time to get an object from the cache.                |
+| `processPutAllTime`                       | Total time, in nanoseconds, spent in processing a cache client putAll request, including the time to put all objects into the cache.           |
+| `processPutTime`                          | Total time, in nanoseconds, spent in processing a cache client put request, including the time to put an object into the cache.                |
+| `processQueryTime`                        | Total time, in nanoseconds, spent in processing a cache client query request, including the time to execute the query.                         |
+| `processUpdateClientNotificationTime`     | Total time, in nanoseconds, spent in processing a client notification update request.                                                          |
+| `putAllRequests`                          | Number of cache client putAll requests.                                                                                                        |
+| `putAllResponses`                         | Number of putAllResponses written to the cache client.                                                                                         |
+| `putRequests`                             | Number of cache client put requests.                                                                                                           |
+| `putResponses`                            | Number of putResponses written to the cache client.                                                                                            |
+| `queryRequests`                           | Number of cache client query requests.                                                                                                         |
+| `queryResponses`                          | Number of query responses written to the cache client.                                                                                         |
+| `queueLoad`                               | The load from subscription queues, as reported by the load probe installed in this server.                                                     |
+| `readClearRegionRequestTime`              | Total time, in nanoseconds, spent in reading clearRegion requests.                                                                             |
+| `readClientNotificationRequestTime`       | Total time, in nanoseconds, spent in reading client notification requests.                                                                     |
+| `readClientReadyRequestTime`              | Total time, in nanoseconds, spent in reading cache client ready requests.                                                                      |
+| `readCloseConnectionRequestTime`          | Total time, in nanoseconds, spent in reading close connection requests.                                                                        |
+| `readContainsKeyRequestTime`              | Total time, in nanoseconds, spent reading containsKey requests.                                                                                |
+| `readDestroyRegionRequestTime`            | Total time, in nanoseconds, spent in reading destroyRegion requests.                                                                           |
+| `readDestroyRequestTime`                  | Total time, in nanoseconds, spent in reading destroy requests.                                                                                 |
+| `readGetRequestTime`                      | Total time, in nanoseconds, spent in reading get requests.                                                                                     |
+| `readProcessBatchRequestTime`             | Total time, in nanoseconds, spent in reading processBatch requests.                                                                            |
+| `readPutAllRequestTime`                   | Total time, in nanoseconds, spent in reading putAll requests.                                                                                  |
+| `readPutRequestTime`                      | Total time, in nanoseconds, spent in reading put requests.                                                                                     |
+| `readQueryRequestTime`                    | Total time, in nanoseconds, spent in reading query requests.                                                                                   |
+| `readUpdateClientNotificationRequestTime` | Total time, in nanoseconds, spent in reading client notification update requests.                                                              |
+| `receivedBytes`                           | Total number of bytes received from clients.                                                                                                   |
+| `sentBytes`                               | Total number of bytes sent to clients.                                                                                                         |
+| `threadQueueSize`                         | Current number of connections waiting for a thread to start processing their message.                                                          |
+| `updateClientNotificationRequests`        | Number of cache client notification update requests.                                                                                           |
+| `writeClearRegionResponseTime`            | Total time, in nanoseconds, spent in writing clearRegion responses.                                                                            |
+| `writeClientReadyResponseTime`            | Total time, in nanoseconds, spent in writing client ready responses.                                                                           |
+| `writeContainsKeyResponseTime`            | Total time, in nanoseconds, spent writing containsKey responses.                                                                               |
+| `writeDestroyRegionResponseTime`          | Total time, in nanoseconds, spent in writing destroyRegion responses.                                                                          |
+| `writeDestroyResponseTime`                | Total time, in nanoseconds, spent in writing destroy responses.                                                                                |
+| `writeGetResponseTime`                    | Total time, in nanoseconds, spent in writing get responses.                                                                                    |
+| `writeProcessBatchResponseTime`           | Total time, in nanoseconds, spent in writing processBatch responses.                                                                           |
+| `writePutAllResponseTime`                 | Total time, in nanoseconds, spent in writing putAll responses.                                                                                 |
+| `writePutResponseTime`                    | Total time, in nanoseconds, spent in writing put responses.                                                                                    |
+| `writeQueryResponseTime`                  | Total time, in nanoseconds, spent in writing query responses.                                                                                  |
+
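+At runtime, many of these server-side statistics surface through the `gfsh` `show metrics` command; a minimal sketch (the member name `server1` is a placeholder):
+
+``` pre
+gfsh> show metrics --member=server1
+```
+
+Values can also be captured over time by enabling `statistic-sampling-enabled` and setting `statistic-archive-file` in the member's properties.
+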
+## <a id="section_B08C0783BBF9489E8BB48B4AEC597C62" class="no-quick-link"></a>Client-Side Notifications (CacheClientUpdaterStats)
+
+Statistics in a client that pertain to server-to-client data pushed from the server over a queue to the client (the client side of the server's `CacheClientNotifierStatistics`):
+
+| Statistic                   | Description                                                                                  |
+|-----------------------------|----------------------------------------------------------------------------------------------|
+| `receivedBytes`             | Total number of bytes received from the server.                                              |
+| `messagesBeingReceived`     | Current number of messages being received off the network or being processed after reception. |
+| `messageBytesBeingReceived` | Current number of bytes consumed by messages being received or processed.                    |
+
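+Programmatically, a client can sample these values through the `Statistics` API; a hedged sketch (the text ID below is illustrative, and the exact ID in a running system may differ):
+
+``` pre
+// Assumes an existing ClientCache named "cache"
+DistributedSystem ds = cache.getDistributedSystem();
+for (Statistics stats : ds.findStatisticsByTextId("CacheClientUpdater")) {
+  long received = stats.getLong("receivedBytes"); // bytes pushed from the server
+}
+```
+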
+## <a id="section_04B7D7387E584712B7710B5ED1E876BB" class="no-quick-link"></a>Client-to-Server Messaging Performance (ClientStats)
+
+These client-side statistics describe all the messages sent from the client to a specific server. The primary statistics are:
+
+| Statistic                              | Description                                                                                   |
+|----------------------------------------|-----------------------------------------------------------------------------------------------|
+| `clearFailures`                        | Total number of clear attempts that have failed.                                              |
+| `clears`                               | Total number of clears completed successfully.                                                |
+| `clearSendFailures`                    | Total number of clearSends that have failed.                                                  |
+| `clearSends`                           | Total number of clearSends that have completed successfully.                                  |
+| `clearSendsInProgress`                 | Current number of clearSends being executed.                                                  |
+| `clearSendTime`                        | Total amount of time, in nanoseconds, spent doing clearSends.                                 |
+| `clearsInProgress`                     | Current number of clears being executed.                                                      |
+| `clearTime`                            | Total amount of time, in nanoseconds, spent doing clears.                                     |
+| `clearTimeouts`                        | Total number of clear attempts that have timed out.                                           |
+| `closeConFailures`                     | Total number of closeCon attempts that have failed.                                           |
+| `closeCons`                            | Total number of closeCons that have completed successfully.                                   |
+| `closeConSendFailures`                 | Total number of closeConSends that have failed.                                               |
+| `closeConSends`                        | Total number of closeConSends that have completed successfully.                               |
+| `closeConSendsInProgress`              | Current number of closeConSends being executed.                                               |
+| `closeConSendTime`                     | Total amount of time, in nanoseconds, spent doing closeConSends.                              |
+| `closeConsInProgress`                  | Current number of closeCons being executed.                                                   |
+| `closeConTime`                         | Total amount of time, in nanoseconds, spent doing closeCons.                                  |
+| `closeConTimeouts`                     | Total number of closeCon attempts that have timed out.                                        |
+| `connections`                          | Current number of connections.                                                                |
+| `connects`                             | Total number of times a connection has been created.                                          |
+| `containsKeyFailures`                  | Total number of containsKey attempts that have failed.                                        |
+| `containsKeys`                         | Total number of containsKeys that completed successfully.                                     |
+| `containsKeySendFailures`              | Total number of containsKeySends that have failed.                                            |
+| `containsKeySends`                     | Total number of containsKeySends that have completed successfully.                            |
+| `containsKeySendsInProgress`           | Current number of containsKeySends being executed.                                            |
+| `containsKeySendTime`                  | Total amount of time, in nanoseconds, spent doing containsKeySends.                           |
+| `containsKeysInProgress`               | Current number of containsKeys being executed.                                                |
+| `containsKeyTime`                      | Total amount of time, in nanoseconds, spent doing containsKeys.                               |
+| `containsKeyTimeouts`                  | Total number of containsKey attempts that have timed out.                                     |
+| `destroyFailures`                      | Total number of destroy attempts that have failed.                                            |
+| `destroyRegionFailures`                | Total number of destroyRegion attempts that have failed.                                      |
+| `destroyRegions`                       | Total number of destroyRegions that have completed successfully.                              |
+| `destroyRegionSendFailures`            | Total number of destroyRegionSends that have failed.                                          |
+| `destroyRegionSends`                   | Total number of destroyRegionSends that have completed successfully.                          |
+| `destroyRegionSendsInProgress`         | Current number of destroyRegionSends being executed.                                          |
+| `destroyRegionSendTime`                | Total amount of time, in nanoseconds, spent doing destroyRegionSends.                         |
+| `destroyRegionsInProgress`             | Current number of destroyRegions being executed.                                              |
+| `destroyRegionTime`                    | Total amount of time, in nanoseconds, spent doing destroyRegions.                             |
+| `destroyRegionTimeouts`                | Total number of destroyRegion attempts that have timed out.                                   |
+| `destroys`                             | Total number of destroys that have completed successfully.                                    |
+| `destroySendFailures`                  | Total number of destroySends that have failed.                                                |
+| `destroySends`                         | Total number of destroySends that have completed successfully.                                |
+| `destroySendsInProgress`               | Current number of destroySends being executed.                                                |
+| `destroySendTime`                      | Total amount of time, in nanoseconds, spent doing destroySends.                               |
+| `destroysInProgress`                   | Current number of destroys being executed.                                                    |
+| `destroyTime`                          | Total amount of time, in nanoseconds, spent doing destroys.                                   |
+| `destroyTimeouts`                      | Total number of destroy attempts that have timed out.                                         |
+| `disconnects`                          | Total number of times a connection has been destroyed.                                        |
+| `gatewayBatchFailures`                 | Total number of gatewayBatch attempts that have failed.                                       |
+| `gatewayBatchs`                        | Total number of gatewayBatch operations completed successfully.                               |
+| `gatewayBatchSendFailures`             | Total number of gatewayBatchSends that have failed.                                           |
+| `gatewayBatchSends`                    | Total number of gatewayBatchSends that have completed successfully.                           |
+| `gatewayBatchSendsInProgress`          | Current number of gatewayBatchSends being executed.                                           |
+| `gatewayBatchSendTime`                 | Total amount of time, in nanoseconds, spent doing gatewayBatchSends.                          |
+| `gatewayBatchsInProgress`              | Current number of gatewayBatch operations being executed.                                     |
+| `gatewayBatchTime`                     | Total amount of time, in nanoseconds, spent doing gatewayBatch operations.                    |
+| `gatewayBatchTimeouts`                 | Total number of gatewayBatch attempts that have timed out.                                    |

<TRUNCATED>

[17/51] [abbrv] geode git commit: GEODE-3395 Variable-ize product version and name in user guide - Tools & Modules

Posted by kl...@apache.org.
http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/weblogic_changing_gf_default_cfg.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/weblogic_changing_gf_default_cfg.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/weblogic_changing_gf_default_cfg.html.md.erb
index 50e41be..03ab078 100644
--- a/geode-docs/tools_modules/http_session_mgmt/weblogic_changing_gf_default_cfg.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/weblogic_changing_gf_default_cfg.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Changing the Default Geode Configuration in the AppServers Module
----
+<% set_title("Changing the Default", product_name, "Configuration in the AppServers Module") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -19,14 +17,14 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-By default, the AppServers module will run Geode automatically with preconfigured settings. You can change these Geode settings.
+By default, the AppServers module will run <%=vars.product_name%> automatically with preconfigured settings. You can change these <%=vars.product_name%> settings.
 
 Here are the default settings:
 
--   Geode peer-to-peer members are discovered using locators.
+-   <%=vars.product_name%> peer-to-peer members are discovered using locators.
 -   The region name is set to `gemfire_modules_sessions`.
 -   The cache region is replicated for peer-to-peer configurations and partitioned (with redundancy turned on) for client/server configurations.
--   Geode clients have local caching turned on and when the local cache needs to evict data, it will evict least-recently-used (LRU) data first.
+-   <%=vars.product_name%> clients have local caching turned on and when the local cache needs to evict data, it will evict least-recently-used (LRU) data first.
 
 **Note:**
 On the application server side, the default inactive interval for session expiration is set to 30 minutes. To change this value, refer to [Session Expiration](tc_additional_info.html#tc_additional_info__section_C7C4365EA2D84636AE1586F187007EC4).
@@ -36,9 +34,9 @@ However, you may want to change this default configuration. For example, you mig
 **Note:**
 You cannot override region attributes on the cache server when using the HTTP Session Management Module. You must place all region attribute definitions in the region attributes template that you customize in your application server. See [Overriding Region Attributes](weblogic_common_configuration_changes.html#weblogic_common_cfg_changes__section_38D803A7E8474188898963F456188543) for more information.
 
-## <a id="weblogic_changing_gf_default_cfg__section_changing_sys_props" class="no-quick-link"></a>Changing Geode Distributed System Properties
+## <a id="weblogic_changing_gf_default_cfg__section_changing_sys_props" class="no-quick-link"></a>Changing <%=vars.product_name%> Distributed System Properties
 
-To edit Geode system properties, you must add properties to Geode Session Filter definition in the application's web.xml file. As mentioned previously, this can be done by using the **-p** option to the `modify_war` script. All Geode system properties should be prefixed with the string **gemfire.property**. For example:
+To edit <%=vars.product_name%> system properties, you must add properties to the <%=vars.product_name%> Session Filter definition in the application's web.xml file. As mentioned previously, this can be done by using the **-p** option to the `modify_war` script. All <%=vars.product_name%> system properties should be prefixed with the string **gemfire.property**. For example:
 
 -   **-p gemfire.property.locators=hostname\[10334\]**
 -   **-p gemfire.property.cache-xml-file=/u01/weblogic/conf/cache.xml**.
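
For illustration, a hypothetical `modify_war` invocation combining these options might look like the following (the war file name is a placeholder; check `modify_war -h` for the exact flags in your installation):

``` pre
$ modify_war -w mywebapp.war \
    -p gemfire.property.locators=hostname[10334] \
    -p gemfire.property.cache-xml-file=/u01/weblogic/conf/cache.xml
```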
@@ -64,19 +62,19 @@ To edit Geode system properties, you must add properties to Geode Session Filter
 </filter>
 ```
 
-This example specifies that the file name for Geode's cache XML configuration is `cache-peer.xml`.
+This example specifies that the file name for <%=vars.product_name%>'s cache XML configuration is `cache-peer.xml`.
 
-The list of configurable `server.xml` system properties include any of the properties that can be specified in Geode's `gemfire.properties` file. The following list contains some of the more common parameters that can be configured.
+The list of configurable `server.xml` system properties includes any of the properties that can be specified in <%=vars.product_name%>'s `gemfire.properties` file. The following list contains some of the more common parameters that can be configured.
 
 | Parameter                               | Description                                                                                                                                                                                 | Default                                                                 |
 |-----------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------|
 | cache-xml-file                          | Name of the cache configuration file.                                                                                                                                                       | `cache-peer.xml` for peer-to-peer, `cache-client.xml` for client/server |
-| locators (only for peer-to-peer config) | (required) list of locators (host\[port\]) used by Geode members; if a single locator listens on its default port, then set this value to `"localhost[10334]"` | Empty string                                                            |
-| log-file                                | Name of the Geode log file.                                                                                                                                    | `gemfire_modules.log`                                                   |
-| statistic-archive-file                  | Name of the Geode statistics file.                                                                                                                             | `gemfire_modules.gfs`                                                   |
-| statistic-sampling-enabled              | Whether Geode statistics sampling is enabled.                                                                                                                  | false                                                                   |
+| locators (only for peer-to-peer config) | (required) list of locators (host\[port\]) used by <%=vars.product_name%> members; if a single locator listens on its default port, then set this value to `"localhost[10334]"` | Empty string                                                            |
+| log-file                                | Name of the <%=vars.product_name%> log file.                                                                                                                                    | `gemfire_modules.log`                                                   |
+| statistic-archive-file                  | Name of the <%=vars.product_name%> statistics file.                                                                                                                             | `gemfire_modules.gfs`                                                   |
+| statistic-sampling-enabled              | Whether <%=vars.product_name%> statistics sampling is enabled.                                                                                                                  | false                                                                   |
 
-In addition to the standard Geode system properties, the following cache-specific properties can also be configured.
+In addition to the standard <%=vars.product_name%> system properties, the following cache-specific properties can also be configured.
 
 | Parameter              | Description                                                                                      | Default      |
 |------------------------|--------------------------------------------------------------------------------------------------|--------------|
@@ -84,14 +82,14 @@ In addition to the standard Geode system properties, the following cache-specifi
 | evictionHeapPercentage | Percentage of heap at which session eviction begins.                                             | 80.0         |
 | rebalance              | Whether a rebalance of the cache should be done when the application server instance is started. | false        |
 
-Although these properties are not part of the standard Geode system properties, they apply to the entire JVM instance. For more information about managing the heap, refer to [Managing Heap and Off-heap Memory](../../managing/heap_use/heap_management.html#resource_manager).
+Although these properties are not part of the standard <%=vars.product_name%> system properties, they apply to the entire JVM instance. For more information about managing the heap, refer to [Managing Heap and Off-heap Memory](../../managing/heap_use/heap_management.html#resource_manager).
 
 **Note:**
-It is important to note that the Geode Distributed System is a singleton within the entire application server JVM. As such it is important to ensure that different web applications, within the same container, set (or expect) the same cache configuration. When the application server starts, the first web application to start that uses Geode Session Caching will determine the overall configuration of the distributed system since it will trigger the creation of the distributed system.
+It is important to note that the <%=vars.product_name%> Distributed System is a singleton within the entire application server JVM. As such, it is important to ensure that different web applications within the same container set (or expect) the same cache configuration. When the application server starts, the first web application that uses <%=vars.product_name%> Session Caching determines the overall configuration of the distributed system, since it triggers the distributed system's creation.
 
 ## <a id="weblogic_changing_gf_default_cfg__section_changing_cache_config_props" class="no-quick-link"></a>Changing Cache Configuration Properties
 
-To edit Geode cache properties (such as the name and the characteristics of the cache region), you must configure these using a filter initialization parameter prefix of **gemfire.cache** with the `modify_war` script. For example:
+To edit <%=vars.product_name%> cache properties (such as the name and the characteristics of the cache region), you must configure these using a filter initialization parameter prefix of **gemfire.cache** with the `modify_war` script. For example:
 
 **-p gemfire.cache.region\_name=custom\_sessions**
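
Inside the filter definition in `web.xml`, such a parameter is expressed as a standard servlet init-param; a sketch using the example value above:

``` pre
<init-param>
  <param-name>gemfire.cache.region_name</param-name>
  <param-value>custom_sessions</param-value>
</init-param>
```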
 
@@ -115,11 +113,11 @@ To edit Geode cache properties (such as the name and the characteristics of the
 The following parameters are the cache configuration parameters that can be added to the filter definition as initialization parameters.
 
 <dt>**enable\_debug\_listener**</dt>
-<dd>Whether to enable a debug listener in the session region; if this parameter is set to true, info-level messages are logged to the Geode log when sessions are created, updated, invalidated or expired.</dd>
+<dd>Whether to enable a debug listener in the session region; if this parameter is set to true, info-level messages are logged to the <%=vars.product_name%> log when sessions are created, updated, invalidated or expired.</dd>
 
 Default: `false`
 
-The Geode API equivalent to setting this parameter:
+The <%=vars.product_name%> API equivalent to setting this parameter:
 
 ``` pre
 // Create factory
@@ -134,7 +132,7 @@ factory.addCacheListener(new DebugCacheListener());
 
 Default: `false` for peer-to-peer, `true` for client/server
 
-The Geode API equivalent to setting this parameter:
+The <%=vars.product_name%> API equivalent to setting this parameter:
 
 ``` pre
 // For peer-to-peer members: 
@@ -148,7 +146,7 @@ ClientCache.createClientRegionFactory(CACHING_PROXY_HEAP_LRU)
 
 Default: REPLICATE for peer-to-peer, PARTITION\_REDUNDANT for client/server
 
-The Geode API equivalent to setting this parameter:
+The <%=vars.product_name%> API equivalent to setting this parameter:
 
 ``` pre
 // Creates a region factory for the specified region shortcut 
@@ -160,7 +158,7 @@ Cache.createRegionFactory(regionAttributesId);
 
 Default: gemfire\_modules\_sessions
 
-The Geode API equivalent to setting this parameter:
+The <%=vars.product_name%> API equivalent to setting this parameter:
 
 ``` pre
 // Creates a region with the specified name 
@@ -172,8 +170,8 @@ RegionFactory.create(regionName);
 
 Default: delta\_queued
 
-Delta replication can be configured to occur immediately when HttpSession.setAttribute() is called (delta\_immediate) or when the HTTP request has completed processing (delta\_queued). If the latter mode is configured, all attribute updates for a particular request are 'batched' and multiple updates to the same attribute are collapsed. Depending on the number of attributes updates within a given request, delta\_queued may provide a significant performance gain. For complete session attribute integrity across the cache, delta\_immediate is recommended. Note that this option is specific to this module and there is no equivalent Geode API to enable it.
+Delta replication can be configured to occur immediately when HttpSession.setAttribute() is called (delta\_immediate) or when the HTTP request has completed processing (delta\_queued). If the latter mode is configured, all attribute updates for a particular request are 'batched' and multiple updates to the same attribute are collapsed. Depending on the number of attribute updates within a given request, delta\_queued may provide a significant performance gain. For complete session attribute integrity across the cache, delta\_immediate is recommended. Note that this option is specific to this module and there is no equivalent <%=vars.product_name%> API to enable it.
 
--   **[Common Geode Configuration Changes for AppServers](weblogic_common_configuration_changes.html)**
+-   **[Common <%=vars.product_name%> Configuration Changes for AppServers](weblogic_common_configuration_changes.html)**
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/weblogic_common_configuration_changes.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/weblogic_common_configuration_changes.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/weblogic_common_configuration_changes.html.md.erb
index 7669fdd..f44f5b3 100644
--- a/geode-docs/tools_modules/http_session_mgmt/weblogic_common_configuration_changes.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/weblogic_common_configuration_changes.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Common Geode Configuration Changes for AppServers
----
+<% set_title("Common", product_name, "Configuration Changes for AppServers") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/http_session_mgmt/weblogic_setting_up_the_module.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/http_session_mgmt/weblogic_setting_up_the_module.html.md.erb b/geode-docs/tools_modules/http_session_mgmt/weblogic_setting_up_the_module.html.md.erb
index e2d9f37..1f12d54 100644
--- a/geode-docs/tools_modules/http_session_mgmt/weblogic_setting_up_the_module.html.md.erb
+++ b/geode-docs/tools_modules/http_session_mgmt/weblogic_setting_up_the_module.html.md.erb
@@ -31,7 +31,7 @@ $ modify_war -h
 
 To modify your war or ear file manually, make the following updates:
 
--   **web.xml** needs a filter and listener added as follows. If you have your own filters, the Geode Module filter **must** be the first one.
+-   **web.xml** needs a filter and listener added as follows. If you have your own filters, the <%=vars.product_name%> Module filter **must** be the first one.
 
     ``` pre
     <filter>
@@ -63,7 +63,7 @@ To modify your war or ear file manually, make the following updates:
     -   geode-modules-session jar
     -   slf4j-api jar
     -   slf4j-jdk14 jar
--   Add the following jar files from the `$GEODE/lib` directory to the `WEB-INF/lib` directory of the war, where `$GEODE` is set to the Geode product installation:
+-   Add the following jar files from the `$GEODE/lib` directory to the `WEB-INF/lib` directory of the war, where `$GEODE` is set to the <%=vars.product_name%> product installation:
     -   antlr jar
     -   fastutil jar
     -   geode-core jar
@@ -105,7 +105,7 @@ If you are deploying an ear file:
 
 <img src="../../images_svg/http_module_p2p_with_locator.svg" id="weblogic_setting_up_the_module__image_86E949E0F1AD4E9EB67605EFA4E97E13" class="image" />
 
-To run Geode in a peer-to-peer configuration, use the `modify_war` script with options
+To run <%=vars.product_name%> in a peer-to-peer configuration, use the `modify_war` script with options
 `-t peer-to-peer`,  `-p gemfire.property.locators=localhost[10334]`, and `-p gemfire.property.cache-xml-file=<moduleDir>/conf/cache-peer.xml`
 to result in the following `web.xml` content:
 
@@ -130,9 +130,9 @@ to result in the following `web.xml` content:
 
 <img src="../../images_svg/http_module_cs_with_locator.svg" id="weblogic_setting_up_the_module__image_BDF2273487EA4FEB9895D02A6F6FD445" class="image" />
 
-To run Geode in a client/server configuration, you make the application server operate as a Geode client. Use the `-t client-server` option to the `modify_war` script. This adds the following filter to application server's `web.xml` file:
+To run <%=vars.product_name%> in a client/server configuration, you make the application server operate as a <%=vars.product_name%> client. Use the `-t client-server` option to the `modify_war` script. This adds the following filter to the application server's `web.xml` file:
 
-To run Geode in a client/server configuration, you make the application server operate as a Geode client. Use the `modify_war` script with options
+To run <%=vars.product_name%> in a client/server configuration, you make the application server operate as a <%=vars.product_name%> client. Use the `modify_war` script with options
 `-t client-server` and `-p gemfire.property.cache-xml-file=<module dir>/conf/cache-client.xml`
 to result in the following `web.xml` content:
 
@@ -174,15 +174,15 @@ $ gfsh start server \
 <moduleDir>/lib/geode-modules-session-internal-1.0.0.jar
 ```
 
-Once the application server is started, the Geode client will automatically launch within the application server process.
+Once the application server is started, the <%=vars.product_name%> client will automatically launch within the application server process.
 
-## <a id="weblogic_setting_up_the_module__section_3E186713737E4D5383E23B41CDFED59B" class="no-quick-link"></a>Verifying that Geode Started
+## <a id="weblogic_setting_up_the_module__section_3E186713737E4D5383E23B41CDFED59B" class="no-quick-link"></a>Verifying that <%=vars.product_name%> Started
 
-You can verify that Geode has successfully started by inspecting the application server log file. For example:
+You can verify that <%=vars.product_name%> has successfully started by inspecting the application server log file. For example:
 
 ``` pre
 info 2016/04/18 10:04:18.685 PDT <localhost-startStop-2> tid=0x1a]
-Initializing Geode Modules
+Initializing <%=vars.product_name%> Modules
 Java version:   1.0.0 user1 041816 2016-11-18 08:46:17 -0700
 javac 1.8.0_92
 Native version: native code unavailable
@@ -191,7 +191,7 @@ Source repository: develop
 Running on: /192.0.2.0, 8 cpu(s), x86_64 Mac OS X 10.11.4
 ```
 
-Information is also logged within the Geode log file, which by default is named `gemfire_modules.<date>.log`.
+Information is also logged within the <%=vars.product_name%> log file, which by default is named `gemfire_modules.<date>.log`.
 
 ## <a id="weblogic_setting_up_the_module__section_E0E0E5A1C9484D4AA13878273F16A920" class="no-quick-link"></a>Configuring Non-Sticky Session Replication
 

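As a concrete illustration of the `web.xml` filter addition described above, a minimal fragment might look like the following. The filter class name is an assumption based on the geode-modules-session jar and should be verified against the module version you installed:

``` pre
<filter>
  <filter-name>gemfire-session-filter</filter-name>
  <!-- Class name assumed from the geode-modules-session jar; verify locally -->
  <filter-class>org.apache.geode.modules.session.filter.SessionCachingFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>gemfire-session-filter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```

Because the module's filter must be first, this `<filter>` entry should precede any other filter declarations in the descriptor.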
http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/lucene_integration.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/lucene_integration.html.md.erb b/geode-docs/tools_modules/lucene_integration.html.md.erb
index 625d659..9bf65c4 100644
--- a/geode-docs/tools_modules/lucene_integration.html.md.erb
+++ b/geode-docs/tools_modules/lucene_integration.html.md.erb
@@ -23,8 +23,8 @@ We assume that the reader is familiar with Apache Lucene's indexing and search f
 
 The Apache Lucene integration:
 
-- enables users to create Lucene indexes on data stored in Geode
-- provides high availability of indexes using Geode's HA capabilities to store the indexes in memory
+- enables users to create Lucene indexes on data stored in <%=vars.product_name%>
+- provides high availability of indexes using <%=vars.product_name%>'s HA capabilities to store the indexes in memory
 - optionally stores indexes on disk
 - updates the indexes asynchronously to minimize impacting write latency
 - provides scalability by partitioning index data

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/pulse/pulse-auth.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/pulse/pulse-auth.html.md.erb b/geode-docs/tools_modules/pulse/pulse-auth.html.md.erb
index 1d791e0..1b5cef0 100644
--- a/geode-docs/tools_modules/pulse/pulse-auth.html.md.erb
+++ b/geode-docs/tools_modules/pulse/pulse-auth.html.md.erb
@@ -23,7 +23,7 @@ Pulse requires all users to authenticate themselves before they can use the Puls
 
 If you run Pulse in embedded mode, the Pulse application runs on the JMX Manager node and no JMX authentication is required. You do not need to specify valid JMX credentials to start an embedded Pulse application.
 
-If you host Pulse on a Web Application server (non-embedded mode) and you configure JMX authentication on the Geode manager node, then the Pulse Web application must authenticate itself with the manager node when it starts. Specify the credentials of a valid JMX user account in the `pulse.properties` file, as described in [Hosting Pulse on a Web Application Server](pulse-hosted.html).
+If you host Pulse on a Web Application server (non-embedded mode) and you configure JMX authentication on the <%=vars.product_name%> manager node, then the Pulse Web application must authenticate itself with the manager node when it starts. Specify the credentials of a valid JMX user account in the `pulse.properties` file, as described in [Hosting Pulse on a Web Application Server](pulse-hosted.html).
 
 **Note:**
 The credentials that you specify must have both read and write privileges in the JMX Manager node. See [Configuring a JMX Manager](../../managing/management/jmx_manager_operations.html#topic_263072624B8D4CDBAD18B82E07AA44B6).
@@ -34,7 +34,7 @@ You can configure Pulse to use HTTPS in either embedded or non-embedded mode.
 
 In non-embedded mode where you are running Pulse on a standalone Web application server, you must use the Web server's SSL configuration to make the HTTP requests secure.
 
-In embedded mode, Geode uses an embedded Jetty server to host the
+In embedded mode, <%=vars.product_name%> uses an embedded Jetty server to host the
 Pulse Web application. To make the embedded server use HTTPS, you must
 enable the `http` SSL component in
 `gemfire.properties` or `gfsecurity-properties`.

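To enable HTTPS for embedded Pulse as described above, the `http` SSL component is enabled in `gemfire.properties`. A hedged sketch, where the keystore and truststore paths and passwords are placeholders to replace with your own:

``` pre
# Enable SSL for the embedded HTTP service that hosts Pulse
ssl-enabled-components=http
ssl-keystore=/path/to/keystore.jks
ssl-keystore-password=changeit
ssl-truststore=/path/to/truststore.jks
ssl-truststore-password=changeit
```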
http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/pulse/pulse-embedded.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/pulse/pulse-embedded.html.md.erb b/geode-docs/tools_modules/pulse/pulse-embedded.html.md.erb
index 955e554..a613296 100644
--- a/geode-docs/tools_modules/pulse/pulse-embedded.html.md.erb
+++ b/geode-docs/tools_modules/pulse/pulse-embedded.html.md.erb
@@ -19,15 +19,15 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Use Pulse in embedded mode to monitor a Geode deployment directly from a Geode JMX Manager. By
+Use Pulse in embedded mode to monitor a <%=vars.product_name%> deployment directly from a <%=vars.product_name%> JMX Manager. By
 default, the embedded Pulse application connects to the local JMX Manager that hosts the Pulse
-application. Optionally, configure Pulse to connect to a Geode system of your choice.
+application. Optionally, configure Pulse to connect to a <%=vars.product_name%> system of your choice.
 
 To run Pulse in embedded mode:
 
-1.  Configure a Geode member to run as a JMX Manager node, specifying the HTTP port on which you
+1.  Configure a <%=vars.product_name%> member to run as a JMX Manager node, specifying the HTTP port on which you
 will access the Pulse Web application (port 7070 by default). For example, the following command
-starts a Geode locator as a JMX Manager node, using the default HTTP port 7070 for the Pulse
+starts a <%=vars.product_name%> locator as a JMX Manager node, using the default HTTP port 7070 for the Pulse
 application:
 
     ``` pre
@@ -36,7 +36,7 @@ application:
     ```
 
     **Note:**
-    Geode locators become JMX Manager nodes by default. To start a non-locator member as a JMX
+    <%=vars.product_name%> locators become JMX Manager nodes by default. To start a non-locator member as a JMX
     Manager node, include the `--J=-Dgemfire.jmx-manager=true` option. To specify a non-default port
     number for the HTTP service that hosts the Pulse application, include the
     `--J=-Dgemfire.http-service-port=port_number` option when starting the JMX Manager node.
@@ -48,7 +48,7 @@ application:
     started a manager process earlier, use the `connect` command in `gfsh` to connect to that
     process.
 
-2.  Access the embedded Pulse application from a Web browser. If you are connected to the Geode
+2.  Access the embedded Pulse application from a Web browser. If you are connected to the <%=vars.product_name%>
 cluster using gfsh, use the `start pulse` command to load the correct URL in your browser:
 
     ``` pre

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/pulse/pulse-hosted.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/pulse/pulse-hosted.html.md.erb b/geode-docs/tools_modules/pulse/pulse-hosted.html.md.erb
index 4ce25e9..09cd79d 100644
--- a/geode-docs/tools_modules/pulse/pulse-hosted.html.md.erb
+++ b/geode-docs/tools_modules/pulse/pulse-hosted.html.md.erb
@@ -23,11 +23,11 @@ Host Pulse on a dedicated Web application server to make the Pulse application a
 
 To host Pulse on a Web application server:
 
-1.  Set the `http-service-port` property to zero (`-Dgemfire.http-service-port=0`) when you start your Geode JMX Manager nodes. Setting this property to zero disables the embedded Web server for hosting the Pulse application.
+1.  Set the `http-service-port` property to zero (`-Dgemfire.http-service-port=0`) when you start your <%=vars.product_name%> JMX Manager nodes. Setting this property to zero disables the embedded Web server for hosting the Pulse application.
 
-2.  Deploy the Pulse Web application to your application server. Geode installs the
+2.  Deploy the Pulse Web application to your application server. <%=vars.product_name%> installs the
 `geode-pulse-n.n.n.war` file (where `n.n.n` is a version number) in the `tools/Pulse` subdirectory
-of your Geode installation directory. Depending on your application server, you may need to copy the
+of your <%=vars.product_name%> installation directory. Depending on your application server, you may need to copy the
 `pulse.war` file to a deployment directory or use a configuration tool to deploy the file.
 
 3.  Stop the Web application server and locate the Pulse configuration in the `WEB-INF/classes` subdirectory.
@@ -48,17 +48,17 @@ of your Geode installation directory. Depending on your application server, you
     <tbody>
     <tr class="odd">
     <td><code class="ph codeph">pulse.useLocator</code></td>
-    <td>Specify &quot;true&quot; to configure Pulse to connect to a Geode Locator member, or &quot;false&quot; to connect directly to a JMX Manager.
-    <p>When Pulse connects to a Geode locator, the locator provides the address and port of an available JMX Manager to use for monitoring the distributed system. In most production deployments, you should connect Pulse to a locator instance; this allows Pulse to provide monitoring services using any available JMX Manager.</p>
+    <td>Specify &quot;true&quot; to configure Pulse to connect to a <%=vars.product_name%> Locator member, or &quot;false&quot; to connect directly to a JMX Manager.
+    <p>When Pulse connects to a <%=vars.product_name%> locator, the locator provides the address and port of an available JMX Manager to use for monitoring the distributed system. In most production deployments, you should connect Pulse to a locator instance; this allows Pulse to provide monitoring services using any available JMX Manager.</p>
     <p>If you specify &quot;false,&quot; Pulse connects directly to a specific JMX Manager. If this manager is not available, the Pulse connection fails, even if another JMX Manager is available in the distributed system.</p></td>
     </tr>
     <tr class="even">
     <td><code class="ph codeph">pulse.host</code></td>
-    <td>Specify the DNS name or IP address of the Geode locator or JMX Manager machine to which Pulse should connect. You specify either a locator or JMX Manager address depending on how you configured the <code class="ph codeph">pulse.useLocator</code> property.</td>
+    <td>Specify the DNS name or IP address of the <%=vars.product_name%> locator or JMX Manager machine to which Pulse should connect. You specify either a locator or JMX Manager address depending on how you configured the <code class="ph codeph">pulse.useLocator</code> property.</td>
     </tr>
     <tr class="odd">
     <td><code class="ph codeph">pulse.port</code></td>
-    <td>Specify the port number of the Geode locator or the HTTP port number of the JMX Manager to which Pulse should connect. You specify either a locator or JMX Manager port depending on how you configured the <code class="ph codeph">pulse.useLocator</code> property.
+    <td>Specify the port number of the <%=vars.product_name%> locator or the HTTP port number of the JMX Manager to which Pulse should connect. You specify either a locator or JMX Manager port depending on how you configured the <code class="ph codeph">pulse.useLocator</code> property.
     <p>If you configured <code class="ph codeph">pulse.useLocator=false</code>, then <code class="ph codeph">pulse.port</code> must correspond to the <code class="ph codeph">http-service-port</code> setting of the JMX Manager.</p></td>
     </tr>
     </tbody>

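Putting the three properties in the table above together, a `pulse.properties` fragment for the recommended locator-based configuration might look like this (the host name is a placeholder; 10334 is the default locator port):

``` pre
# Connect through a locator so Pulse can use any available JMX Manager
pulse.useLocator=true
pulse.host=locator.example.com
pulse.port=10334
```

With `pulse.useLocator=false`, `pulse.host` and `pulse.port` would instead name a specific JMX Manager and its `http-service-port`.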
http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/pulse/pulse-overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/pulse/pulse-overview.html.md.erb b/geode-docs/tools_modules/pulse/pulse-overview.html.md.erb
index ec723d2..c954c4a 100644
--- a/geode-docs/tools_modules/pulse/pulse-overview.html.md.erb
+++ b/geode-docs/tools_modules/pulse/pulse-overview.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Geode Pulse
----
+<% set_title(product_name, "Pulse") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -19,11 +17,11 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode Pulse is a Web Application that provides a graphical dashboard for monitoring vital, real-time health and performance of Geode clusters, members, and regions.
+<%=vars.product_name%> Pulse is a Web Application that provides a graphical dashboard for monitoring vital, real-time health and performance of <%=vars.product_name%> clusters, members, and regions.
 
-Use Pulse to examine total memory, CPU, and disk space used by members, uptime statistics, client connections, WAN connections, and critical notifications. Pulse communicates with a Geode JMX manager to provide a complete view of your Geode deployment. You can drill down from a high-level cluster view to examine individual members and even regions within a member, to filter the type of information and level of detail.
+Use Pulse to examine total memory, CPU, and disk space used by members, uptime statistics, client connections, WAN connections, and critical notifications. Pulse communicates with a <%=vars.product_name%> JMX manager to provide a complete view of your <%=vars.product_name%> deployment. You can drill down from a high-level cluster view to examine individual members and even regions within a member, to filter the type of information and level of detail.
 
-By default, Geode Pulse runs in an embedded container within a Geode JMX manager node. You can optionally deploy Pulse to a Web application server of your choice, so that the tool runs independently of your Geode clusters. Hosting Pulse on an application server also enables you to use SSL for accessing the application.
+By default, <%=vars.product_name%> Pulse runs in an embedded container within a <%=vars.product_name%> JMX manager node. You can optionally deploy Pulse to a Web application server of your choice, so that the tool runs independently of your <%=vars.product_name%> clusters. Hosting Pulse on an application server also enables you to use SSL for accessing the application.
 
 -   **[Pulse System Requirements](pulse-requirements.html)**
 
@@ -32,7 +30,7 @@ By default, Geode Pulse runs in an embedded container within a Geode JMX manager
 
 -   **[Running Pulse in Embedded Mode (Quick Start)](pulse-embedded.html)**
 
-    Use Pulse in embedded mode to monitor a Geode deployment directly from a Geode JMX Manager. By default, the embedded Pulse application connects to the local JMX Manager that hosts the Pulse application. Optionally, configure Pulse to connect to a Geode system of your choice.
+    Use Pulse in embedded mode to monitor a <%=vars.product_name%> deployment directly from a <%=vars.product_name%> JMX Manager. By default, the embedded Pulse application connects to the local JMX Manager that hosts the Pulse application. Optionally, configure Pulse to connect to a <%=vars.product_name%> system of your choice.
 
 -   **[Hosting Pulse on a Web Application Server](pulse-hosted.html)**
 
@@ -40,10 +38,10 @@ By default, Geode Pulse runs in an embedded container within a Geode JMX manager
 
 -   **[Configuring Pulse Authentication](pulse-auth.html)**
 
-    Pulse requires all users to authenticate themselves before they can use the Pulse Web application. If you have configured JMX authentication on the Geode JMX Manager node, the Pulse Web application itself may also need to authenticate itself to the Geode JMX Manager node on startup.
+    Pulse requires all users to authenticate themselves before they can use the Pulse Web application. If you have configured JMX authentication on the <%=vars.product_name%> JMX Manager node, the Pulse Web application itself may also need to authenticate itself to the <%=vars.product_name%> JMX Manager node on startup.
 
 -   **[Using Pulse Views](pulse-views.html)**
 
-    Pulse provides a variety of different views to help you monitor Geode clusters, members, and regions.
+    Pulse provides a variety of different views to help you monitor <%=vars.product_name%> clusters, members, and regions.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/pulse/pulse-views.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/pulse/pulse-views.html.md.erb b/geode-docs/tools_modules/pulse/pulse-views.html.md.erb
index d3bb367..017a749 100644
--- a/geode-docs/tools_modules/pulse/pulse-views.html.md.erb
+++ b/geode-docs/tools_modules/pulse/pulse-views.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Pulse provides a variety of different views to help you monitor Geode clusters, members, and regions.
+Pulse provides a variety of different views to help you monitor <%=vars.product_name%> clusters, members, and regions.
 
 The following sections provide an overview of the main Pulse views:
 
@@ -31,25 +31,25 @@ The following sections provide an overview of the main Pulse views:
 
 # <a id="topic_F0ECE9E8179541CCA3D6C5F4FBA84404__section_9794B5754E474E10ABFBCD8B1DA240F8" class="no-quick-link"></a>Cluster View
 
-The cluster view is a high-level overview of the Geode distributed system. It is displayed immediately after you log into Pulse. Information displays around the perimeter of the cluster view show statistics such as memory usage, JVM pauses, and throughput. You can use the cluster view to drill down into details for individual members and regions in the distributed system.
+The cluster view is a high-level overview of the <%=vars.product_name%> distributed system. It is displayed immediately after you log into Pulse. Information displays around the perimeter of the cluster view show statistics such as memory usage, JVM pauses, and throughput. You can use the cluster view to drill down into details for individual members and regions in the distributed system.
 
 <img src="../../images/pulse_cluster_view.png" id="topic_F0ECE9E8179541CCA3D6C5F4FBA84404__image_CC7B54903DF24030850E55965CDB6EC4" class="image imageleft" width="624" />
 
 Use these basic controls while in Cluster view:
 
-1.  Click Members or Data to display information about Geode members or data regions in the distributed system.
-2.  Click the display icons to display the Geode members using icon view, block view, or table view. Note that icon view is available only when displaying Members.
+1.  Click Members or Data to display information about <%=vars.product_name%> members or data regions in the distributed system.
+2.  Click the display icons to display the <%=vars.product_name%> members using icon view, block view, or table view. Note that icon view is available only when displaying Members.
 
-    For example, the following shows Geode Members displayed in table view:
+    For example, the following shows <%=vars.product_name%> Members displayed in table view:
 
     <img src="../../images/member_view_list.png" id="topic_F0ECE9E8179541CCA3D6C5F4FBA84404__image_npw_sq3_wn" class="image" />
-    -   While in block view or table view, click the name of a Geode member to display additional information in the [Member View](#topic_F0ECE9E8179541CCA3D6C5F4FBA84404__section_3629814A3DF64D31A190495782DB0DBF).
+    -   While in block view or table view, click the name of a <%=vars.product_name%> member to display additional information in the [Member View](#topic_F0ECE9E8179541CCA3D6C5F4FBA84404__section_3629814A3DF64D31A190495782DB0DBF).
     -   Click Topology, Server Groups, or Redundancy Zones to filter the view based on all members in the topology, configured server groups, or configured redundancy zones.
-    The following shows Geode Regions displayed in table view:
+    The following shows <%=vars.product_name%> Regions displayed in table view:
     <img src="../../images/pulse-region-detail.png" id="topic_F0ECE9E8179541CCA3D6C5F4FBA84404__image_glp_1jr_54" class="image" />
-    -   While in block view or table view, click the name of a Geode region to display additional information in the [Region View](#topic_F0ECE9E8179541CCA3D6C5F4FBA84404__section_D151776BAC8B4704A71F37F8B5CE063D).
+    -   While in block view or table view, click the name of a <%=vars.product_name%> region to display additional information in the [Region View](#topic_F0ECE9E8179541CCA3D6C5F4FBA84404__section_D151776BAC8B4704A71F37F8B5CE063D).
 
-3.  While in icon view, click a host machine icon to display the Geode members on that machine.
+3.  While in icon view, click a host machine icon to display the <%=vars.product_name%> members on that machine.
 4.  In the Alerts pane, click the severity tabs to filter the message display by the level of severity.
 
 **Cluster View Screen Components**
@@ -128,8 +128,8 @@ The following table describes the data pieces displayed on the Cluster View scre
 <td>Host Machine</td>
 <td>When you mouse over a machine icon in Topology View, a pop-up appears with the following machine statistics:
 <ul>
-<li><em>CPU Usage</em>. Percentage of CPU being used by Geode processes on the machine.</li>
-<li><em>Memory Usage</em>. Amount of memory (in MB) being used by Geode processes.</li>
+<li><em>CPU Usage</em>. Percentage of CPU being used by <%=vars.product_name%> processes on the machine.</li>
+<li><em>Memory Usage</em>. Amount of memory (in MB) being used by <%=vars.product_name%> processes.</li>
 <li><em>Load Avg</em>. Average number of threads on the host machine that are in the run queue or are waiting for disk I/O over the last minute. Corresponds to the Linux System statistic loadAverage1. If the load average is not available, a negative value is shown.</li>
 <li><em>Sockets</em>. Number of sockets currently open on the machine.</li>
 </ul></td>
@@ -138,14 +138,14 @@ The following table describes the data pieces displayed on the Cluster View scre
 <td>Member</td>
 <td>When you mouse over a member icon in Graphical View, a pop-up appears with the following member statistics:
 <ul>
-<li><em>CPU Usage</em>. Percentage of CPU being used by the Geode member process.</li>
+<li><em>CPU Usage</em>. Percentage of CPU being used by the <%=vars.product_name%> member process.</li>
 <li><em>Threads</em>. Number of threads running on the member.</li>
 <li><em>JVM Pauses</em>. Number of times the JVM used by the member process has paused due to garbage collection or excessive CPU usage.</li>
 <li><em>Regions</em>. Number of regions hosted on the member process.</li>
 <li><em>Clients</em>. Number of clients currently connected to the member process.</li>
 <li><em>Gateway Sender</em>. Number of gateway senders configured on the member.</li>
 <li><em>Port</em>. Server port of the cache server member where clients can connect and perform cache operations.</li>
-<li><em>GemFire Version</em>. The version of the Geode member.</li>
+<li><em>GemFire Version</em>. The version of the <%=vars.product_name%> member.</li>
 </ul></td>
 </tr>
 <tr class="odd">
@@ -156,7 +156,7 @@ The following table describes the data pieces displayed on the Cluster View scre
 <li><em>Name</em>. Name of the member.</li>
 <li><em>Host</em>. Hostname or IP address where the member is running.</li>
 <li><em>Heap Usage</em>. Amount of JVM heap memory being used by the member process.</li>
-<li><em>CPU Usage</em>. Percentage of CPU being used by the Geode member process.</li>
+<li><em>CPU Usage</em>. Percentage of CPU being used by the <%=vars.product_name%> member process.</li>
 <li><em>Uptime</em>. How long the member has been up and running.</li>
 <li><em>Clients</em>. Number of clients currently connected to the member. It will have a value only if the member acts as a CacheServer.</li>
 </ul></td>
@@ -198,7 +198,7 @@ The following table describes the data pieces displayed on the Cluster View scre
 
 # <a id="topic_F0ECE9E8179541CCA3D6C5F4FBA84404__section_3629814A3DF64D31A190495782DB0DBF" class="no-quick-link"></a>Member View
 
-When you select an individual Geode member in Cluster View, Pulse displays the regions available on that member, as well as member-specific information such as the configured listen ports.
+When you select an individual <%=vars.product_name%> member in Cluster View, Pulse displays the regions available on that member, as well as member-specific information such as the configured listen ports.
 
 <img src="../../images/pulse_member_view.png" id="topic_F0ECE9E8179541CCA3D6C5F4FBA84404__image_EDBD3D333B2741DCAA5CB94719B507B7" class="image imageleft" width="624" />
 
@@ -330,7 +330,7 @@ The following table describes the data elements displayed on the Member View scr
 
 # <a id="topic_F0ECE9E8179541CCA3D6C5F4FBA84404__section_D151776BAC8B4704A71F37F8B5CE063D" class="no-quick-link"></a>Region View
 
-The Pulse Region View provides a comprehensive overview of all regions in the Geode distributed system:
+The Pulse Region View provides a comprehensive overview of all regions in the <%=vars.product_name%> distributed system:
 
 <img src="../../images/pulse_data_view.png" id="topic_F0ECE9E8179541CCA3D6C5F4FBA84404__image_A533852E38654E79BE5628E938E170EB" class="image imageleft" width="624" />
 
@@ -362,13 +362,13 @@ The following table describes the data elements displayed on the Region View scr
 <tbody>
 <tr class="odd">
 <td><strong>Region Members</strong></td>
-<td>Lists information about Geode members that host the region, either in block view or table view.</td>
+<td>Lists information about <%=vars.product_name%> members that host the region, either in block view or table view.</td>
 </tr>
 <tr class="even">
 <td>Region Member (Detail View)</td>
 <td>When you hover over a region member in block view, a pop-up appears with the following data fields:
 <ul>
-<li><em>Member Name</em>. The name of the Geode member hosting the region.</li>
+<li><em>Member Name</em>. The name of the <%=vars.product_name%> member hosting the region.</li>
 <li><em>EntryCount</em>. Number of entries for the region on that member.</li>
 <li><em>EntrySize</em>. The aggregate entry size (in bytes) of all entries on that member. For replicated regions this field will only provide a value if the eviction algorithm has been set to EvictionAlgorithm#LRU_MEMORY. All partition regions will have this value. However, the value includes redundant entries and will also count the size of all the secondary entries on the node.</li>
 <li><em>Accessor</em>. Indicates whether the member is an accessor member.</li>
@@ -409,7 +409,7 @@ The following table describes the data elements displayed on the Region View scr
 
 # <a id="topic_F0ECE9E8179541CCA3D6C5F4FBA84404__sec_pulsedatabrowser" class="no-quick-link"></a>Data Browser
 
-The Pulse Data Browser enables you to query region data. Note that there are two key attributes available on DistributedSystemMXBean (see [List of Geode JMX MBeans](../../managing/management/list_of_mbeans.html#topic_4BCF867697C3456D96066BAD7F39FC8B)) that you can use to configure limits for the result sets displayed in Data Browser:
+The Pulse Data Browser enables you to query region data. Note that there are two key attributes available on DistributedSystemMXBean (see [List of <%=vars.product_name%> JMX MBeans](../../managing/management/list_of_mbeans.html#topic_4BCF867697C3456D96066BAD7F39FC8B)) that you can use to configure limits for the result sets displayed in Data Browser:
 
 -   `QueryResultSetLimit` limits the number of rows that Data Browser queries return. 1000 rows are displayed by default.
 -   `QueryCollectionsDepth` limits the number of elements of a collection that Data Browser queries return. This attribute applies to query results that contain collections such as Map, List, and so forth. The default value is 100 elements.
@@ -423,7 +423,7 @@ The following shows an example Data Browser view:
 Use these basic controls while in Data Browser view:
 
 1.  Search for the name of a specific region.
-2.  Select one or more regions to display the Geode members that host those regions. The hosting Geode members appear in the Region Members section.
+2.  Select one or more regions to display the <%=vars.product_name%> members that host those regions. The hosting <%=vars.product_name%> members appear in the Region Members section.
 3.  Select one or more members from the Region Members section to restrict query results to those members.
 4.  Type in the text of a query to execute. See [Querying](../../developing/querying_basics/chapter_overview.html).
 5.  Display a list of previously executed queries. Double-click on a query from the history list to copy it to the Query Editor, or delete the query from your history.
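The interaction of the two Data Browser limits can be sketched outside of Geode. The following is an illustrative Python sketch of the truncation behavior only; the function name and data shapes are invented for the example, and this is not Geode API:

```python
# Illustrative sketch only -- mimics the *effect* of QueryResultSetLimit and
# QueryCollectionsDepth on what Data Browser displays; not the Geode implementation.

def apply_browser_limits(rows, result_set_limit=1000, collections_depth=100):
    """Truncate query results the way the two DistributedSystemMXBean
    attributes bound the result sets shown in Data Browser."""
    limited_rows = rows[:result_set_limit]          # QueryResultSetLimit caps rows
    truncated = []
    for row in limited_rows:
        if isinstance(row, dict):                   # Map-like collection
            truncated.append(dict(list(row.items())[:collections_depth]))
        elif isinstance(row, list):                 # List-like collection
            truncated.append(row[:collections_depth])
        else:
            truncated.append(row)                   # scalar value, untouched
    return truncated

rows = [list(range(500)), {i: i for i in range(500)}, "scalar"] * 600
result = apply_browser_limits(rows, result_set_limit=1000, collections_depth=100)
print(len(result))        # 1000 -- rows beyond the result-set limit are dropped
print(len(result[0]))     # 100  -- each collection is cut to the depth limit
```

Raising `QueryResultSetLimit` trades browser responsiveness for completeness; the depth limit keeps a single very large Map or List from dominating the display.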

http://git-wip-us.apache.org/repos/asf/geode/blob/bb988caa/geode-docs/tools_modules/redis_adapter.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/tools_modules/redis_adapter.html.md.erb b/geode-docs/tools_modules/redis_adapter.html.md.erb
index 697fc4e..82f9ed5 100644
--- a/geode-docs/tools_modules/redis_adapter.html.md.erb
+++ b/geode-docs/tools_modules/redis_adapter.html.md.erb
@@ -19,13 +19,13 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-The Geode Redis adapter allows Geode to function as a drop-in replacement for a Redis data store, letting Redis applications take advantage of Geode’s scaling capabilities without changing their client code. Redis clients connect to a Geode server in the same way they connect to a Redis server, using an IP address and a port number.
+The <%=vars.product_name%> Redis adapter allows <%=vars.product_name%> to function as a drop-in replacement for a Redis data store, letting Redis applications take advantage of <%=vars.product_name%>’s scaling capabilities without changing their client code. Redis clients connect to a <%=vars.product_name%> server in the same way they connect to a Redis server, using an IP address and a port number.
 
 -   **[Using the Redis Adapter](#using-the-redis-adapter)**
 
 -   **[How the Redis Adapter Works](#how-the-redis-adapter-works)**
 
--   **[Advantages of Geode over a Redis Server](#advantages-of-geode-over-redis)**
+-   **[Advantages of <%=vars.product_name%> over a Redis Server](#advantages-of-geode-over-redis)**
 
 ## <a id="using-the-redis-adapter" class="no-quick-link"></a>Using the Redis Adapter
 
@@ -33,17 +33,17 @@ To use the Redis Adapter, you will need three pieces of information:
 
 1.  The port number through which clients will communicate
 2.  The IP address of the host where the server is to reside
-3.  A choice of which attributes you will use for a Geode partitioned region
+3.  A choice of which attributes you will use for a <%=vars.product_name%> partitioned region
 
 The IP address and port number should be the same ones coded in the Redis clients.
 
-In order to take advantage of Geode’s scaling capabilities, you should specify the Geode region as one of the types that use the PARTITION data policy. PARTITION is the default. Other possibilities include PARTITION\_REDUNDANT and PARTITION\_PERSISTENT. (See [“Region Shortcuts Quick Reference”](../reference/topics/region_shortcuts_table.html) for a complete list.)
+In order to take advantage of <%=vars.product_name%>’s scaling capabilities, you should specify the <%=vars.product_name%> region as one of the types that use the PARTITION data policy. PARTITION is the default. Other possibilities include PARTITION\_REDUNDANT and PARTITION\_PERSISTENT. (See [“Region Shortcuts Quick Reference”](../reference/topics/region_shortcuts_table.html) for a complete list.)
 
-To implement a Geode instance using the Redis Adapter:
+To implement a <%=vars.product_name%> instance using the Redis Adapter:
 
-1.  Install Geode on the system where the server is to reside.
-2.  Use gfsh to start a Geode server, specifying the three configuration options described above:
-    -   Use `--redis-port` to specify the port. This parameter is required -- the Geode server will listen on this port for Redis commands.
+1.  Install <%=vars.product_name%> on the system where the server is to reside.
+2.  Use gfsh to start a <%=vars.product_name%> server, specifying the three configuration options described above:
+    -   Use `--redis-port` to specify the port. This parameter is required -- the <%=vars.product_name%> server will listen on this port for Redis commands.
     -   Use `--redis-bind-address` to specify the IP address of the server host. This parameter is optional. If not specified, the default is determined from the /etc/hosts file.
     -   Use `--J=-Dgemfireredis.regiontype` to specify the region type. This parameter is optional. If not specified, regiontype is set to PARTITION.
 
@@ -58,7 +58,7 @@ Redis clients can then connect to the server at localhost:11211.
 
 ## <a id="how-the-redis-adapter-works" class="no-quick-link"></a>How the Redis Adapter Works
 
-The Geode Redis Adapter supports all Redis data structures, including
+The <%=vars.product_name%> Redis Adapter supports all Redis data structures, including
 
 -   String
 -   List
@@ -67,22 +67,22 @@ The Geode Redis Adapter supports all Redis data structures, including
 -   SortedSet
 -   HyperLogLog
 
-In Geode these data structures are implemented using partitioned regions. In most cases, Geode allocates one partitioned region for each data structure. For example, each Sorted Set is allocated its own partitioned region, in which the key is the user data and the value is the user-provided score, and entries are indexed by score. The two exceptions to this design are data types String and HyperLogLog. All Strings are allocated to a single partitioned region. Similarly, all HyperLogLogs are allocated to a single region. Regions use Geode’s OQL and indexes.
+In <%=vars.product_name%> these data structures are implemented using partitioned regions. In most cases, <%=vars.product_name%> allocates one partitioned region for each data structure. For example, each Sorted Set is allocated its own partitioned region, in which the key is the user data and the value is the user-provided score, and entries are indexed by score. The two exceptions to this design are data types String and HyperLogLog. All Strings are allocated to a single partitioned region. Similarly, all HyperLogLogs are allocated to a single region. Regions use <%=vars.product_name%>’s OQL and indexes.
 
-The Geode Redis Adapter supports all Redis commands for each of the Redis data structures. (See the Javadocs for the GemFireRedisServer class for a detailed list.) The Geode server’s responses to Redis commands are identical to those of a Redis server with the following exceptions, resulting from Geode’s more extensive partitioning model:
+The <%=vars.product_name%> Redis Adapter supports all Redis commands for each of the Redis data structures. (See the Javadocs for the GemFireRedisServer class for a detailed list.) The <%=vars.product_name%> server’s responses to Redis commands are identical to those of a Redis server with the following exceptions, resulting from <%=vars.product_name%>’s more extensive partitioning model:
 
 -   Any command that removes keys and returns a count of removed entries will return a count of how many entries have been removed from the local VM, rather than a total count of items removed across all members. However, all entries will be removed.
 -   Any command that returns a count of newly set members has an unspecified return value. The command will work just as the Redis protocol states, but the count will not necessarily reflect the number set compared to the number overridden.
--   Transactions work just as they would on a Redis instance; they are local transactions. Transactions cannot be executed on data that is not local to the executing server, that is on a partitioned region in a different server instance, or that is on a persistent region that does not have transactions enabled. Also, you cannot watch or unwatch keys, as all keys within a Geode transaction are watched by default.
+-   Transactions work just as they would on a Redis instance; they are local transactions. Transactions cannot be executed on data that is not local to the executing server, that is on a partitioned region in a different server instance, or that is on a persistent region that does not have transactions enabled. Also, you cannot watch or unwatch keys, as all keys within a <%=vars.product_name%> transaction are watched by default.
 
-## <a id="advantages-of-geode-over-redis" class="no-quick-link"></a>Advantages of Geode over a Redis Server
+## <a id="advantages-of-geode-over-redis" class="no-quick-link"></a>Advantages of <%=vars.product_name%> over a Redis Server
 
-Geode’s primary advantage is its **scalability**. While the Redis server is single threaded, Geode supports high concurrency. Many Redis clients can execute commands on the Geode server simultaneously.
+<%=vars.product_name%>’s primary advantage is its **scalability**. While the Redis server is single threaded, <%=vars.product_name%> supports high concurrency. Many Redis clients can execute commands on the <%=vars.product_name%> server simultaneously.
 
-Geode supports **stored procedures**, which can execute on the server and report results to the requesting client.
+<%=vars.product_name%> supports **stored procedures**, which can execute on the server and report results to the requesting client.
 
-Geode architecture and management features help detect and resolve **network partitioning** problems without explicit management on the part of the Redis client.
+<%=vars.product_name%> architecture and management features help detect and resolve **network partitioning** problems without explicit management on the part of the Redis client.
 
-Geode **WAN replication** allows the data store to expand horizontally, across physically distant sites, while maintaining data consistency.
+<%=vars.product_name%> **WAN replication** allows the data store to expand horizontally, across physically distant sites, while maintaining data consistency.
 
 

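Putting the steps from this page together, a minimal end-to-end session might look like the following gfsh/redis-cli sketch. The server name is invented for the example; the flags, port, and region type are the ones described above:

```shell
# Start a Geode server that speaks the Redis protocol on port 11211.
# --redis-port is required; the other two options are optional.
gfsh start server --name=redisServer \
  --redis-port=11211 \
  --redis-bind-address=localhost \
  --J=-Dgemfireredis.regiontype=PARTITION_REDUNDANT

# Any stock Redis client now works unchanged:
redis-cli -h localhost -p 11211 SET company "Apache Geode"
redis-cli -h localhost -p 11211 GET company
```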

[20/51] [abbrv] geode git commit: GEODE-3249: Validate internal client/server messages

Posted by kl...@apache.org.
GEODE-3249: Validate internal client/server messages

This is a squashed commit of the following from feature/GEODE-3249b:

commit c16b151e57169733186f0c029d1957da32d59635
    "spotless" fixes

commit f8e7ddd5e4696907ce60a14f581ef1ca83e65232

    GEODE-3249: Validate internal client/server messages

    This was merely a matter of changing the server to require the credentials
    and changing the client to send credentials.  I removed the general overriding
    of AbstractOp.processSecureBytes() because it made no sense.  If the server
    sends a secure byte "part" in a message the client is obligated to process
    it or the next message it sends will cause a security violation.

    I've added a server-side property that folks can set to allow old clients
    to continue to work.  This must be used to roll the servers forward to the
    new version that contains this change.  Clients must then be rolled
    forward & the servers can then be rolled once again without the property set.

    The system property is
      geode.allow-internal-messages-without-credentials=true
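For operators, the rolling-upgrade sequence described above amounts to the following sketch (gfsh syntax assumed; the server name is invented):

```shell
# Step 1: roll each server forward to the new version with the
# compatibility property set, so pre-upgrade clients keep working.
gfsh start server --name=server1 \
  --J=-Dgeode.allow-internal-messages-without-credentials=true

# Step 2: roll all clients forward to the new version.

# Step 3: restart the servers once more without the property.
gfsh start server --name=server1
```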


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/6be38cad
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/6be38cad
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/6be38cad

Branch: refs/heads/feature/GEODE-1279
Commit: 6be38cad729d56f355c7586ec994bfef933c5e65
Parents: bb988ca
Author: Bruce Schuchardt <bs...@pivotal.io>
Authored: Wed Aug 16 08:17:46 2017 -0700
Committer: Bruce Schuchardt <bs...@pivotal.io>
Committed: Wed Aug 16 08:44:41 2017 -0700

----------------------------------------------------------------------
 .../geode/cache/client/internal/AbstractOp.java |  92 ++++++++--------
 .../cache/client/internal/AddPDXEnumOp.java     |  14 ---
 .../cache/client/internal/AddPDXTypeOp.java     |  14 ---
 .../client/internal/CloseConnectionOp.java      |   3 -
 .../geode/cache/client/internal/CommitOp.java   |   3 -
 .../client/internal/GetClientPRMetaDataOp.java  |   3 -
 .../GetClientPartitionAttributesOp.java         |   3 -
 .../cache/client/internal/GetEventValueOp.java  |   3 -
 .../client/internal/GetFunctionAttributeOp.java |  13 ---
 .../cache/client/internal/GetPDXEnumByIdOp.java |  14 ---
 .../cache/client/internal/GetPDXEnumsOp.java    |  13 ---
 .../client/internal/GetPDXIdForEnumOp.java      |  13 ---
 .../client/internal/GetPDXIdForTypeOp.java      |  14 ---
 .../cache/client/internal/GetPDXTypeByIdOp.java |  13 ---
 .../cache/client/internal/GetPDXTypesOp.java    |  13 ---
 .../cache/client/internal/MakePrimaryOp.java    |   3 -
 .../geode/cache/client/internal/PingOp.java     |   1 +
 .../cache/client/internal/PrimaryAckOp.java     |   3 -
 .../geode/cache/client/internal/PutOp.java      |   4 +-
 .../cache/client/internal/ReadyForEventsOp.java |   3 -
 .../internal/RegisterDataSerializersOp.java     |  13 ---
 .../internal/RegisterInstantiatorsOp.java       |  13 ---
 .../geode/cache/client/internal/RollbackOp.java |   3 -
 .../geode/cache/client/internal/SizeOp.java     |   3 -
 .../cache/client/internal/TXFailoverOp.java     |   3 -
 .../client/internal/TXSynchronizationOp.java    |   3 -
 .../internal/cache/tier/sockets/Message.java    |   1 +
 .../cache/tier/sockets/ServerConnection.java    |  76 +++++++------
 .../cache/tier/sockets/command/AddPdxType.java  |   1 +
 .../tier/sockets/command/GetPDXIdForType.java   |   1 +
 .../ClientAuthenticationPart2DUnitTest.java     |  32 ++++++
 .../security/ClientAuthenticationTestCase.java  | 106 +++++++++++++++++++
 .../security/ClientAuthorizationTestCase.java   |   3 +-
 .../geode/security/SecurityTestUtils.java       |   3 +-
 .../test/dunit/standalone/VersionManager.java   |   2 -
 .../client/internal/GatewaySenderBatchOp.java   |   3 -
 36 files changed, 237 insertions(+), 271 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/AbstractOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/AbstractOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/AbstractOp.java
index c4035f9..f39b6fa 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/AbstractOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/AbstractOp.java
@@ -140,6 +140,50 @@ public abstract class AbstractOp implements Op {
   }
 
   /**
+   * Process the security information in a response from the server. If the server sends a security
+   * "part" we must process it so all subclasses should allow this method to be invoked.
+   *
+   * @see ServerConnection#updateAndGetSecurityPart()
+   */
+  protected void processSecureBytes(Connection cnx, Message message) throws Exception {
+    if (cnx.getServer().getRequiresCredentials()) {
+      if (!message.isSecureMode()) {
+        // This can be seen during shutdown
+        if (logger.isDebugEnabled()) {
+          logger.trace(LogMarker.BRIDGE_SERVER,
+              "Response message from {} for {} has no secure part.", cnx, this);
+        }
+        return;
+      }
+      byte[] partBytes = message.getSecureBytes();
+      if (partBytes == null) {
+        if (logger.isDebugEnabled()) {
+          logger.debug("Response message for {} has no bytes in secure part.", this);
+        }
+        return;
+      }
+      byte[] bytes = ((ConnectionImpl) cnx).getHandShake().decryptBytes(partBytes);
+      DataInputStream dis = new DataInputStream(new ByteArrayInputStream(bytes));
+      cnx.setConnectionID(dis.readLong());
+    }
+  }
+
+  /**
+   * New implementations of AbstractOp should override this method to return false if the
+   * implementation should be excluded from client authentication. e.g. PingOp#needsUserId()
+   * <P/>
+   * Also, such an operation's <code>MessageType</code> must be added in the 'if' condition in
+   * {@link ServerConnection#updateAndGetSecurityPart()}
+   *
+   * @return boolean
+   * @see AbstractOp#sendMessage(Connection)
+   * @see ServerConnection#updateAndGetSecurityPart()
+   */
+  protected boolean needsUserId() {
+    return true;
+  }
+
+  /**
    * Attempts to read a response to this operation by reading it from the given connection, and
    * returning it.
    * 
@@ -174,38 +218,6 @@ public abstract class AbstractOp implements Op {
   }
 
   /**
-   * New implementations of AbstractOp should override this method if the implementation should be
-   * excluded from client authentication. e.g. PingOp#processSecureBytes(Connection cnx, Message
-   * message)
-   * 
-   * @see AbstractOp#sendMessage(Connection)
-   * @see AbstractOp#needsUserId()
-   * @see ServerConnection#updateAndGetSecurityPart()
-   */
-  protected void processSecureBytes(Connection cnx, Message message) throws Exception {
-    if (cnx.getServer().getRequiresCredentials()) {
-      if (!message.isSecureMode()) {
-        // This can be seen during shutdown
-        if (logger.isDebugEnabled()) {
-          logger.trace(LogMarker.BRIDGE_SERVER,
-              "Response message from {} for {} has no secure part.", cnx, this);
-        }
-        return;
-      }
-      byte[] partBytes = message.getSecureBytes();
-      if (partBytes == null) {
-        if (logger.isDebugEnabled()) {
-          logger.debug("Response message for {} has no bytes in secure part.", this);
-        }
-        return;
-      }
-      byte[] bytes = ((ConnectionImpl) cnx).getHandShake().decryptBytes(partBytes);
-      DataInputStream dis = new DataInputStream(new ByteArrayInputStream(bytes));
-      cnx.setConnectionID(dis.readLong());
-    }
-  }
-
-  /**
    * By default just create a normal one part msg. Subclasses can override this.
    */
   protected Message createResponseMessage() {
@@ -405,22 +417,6 @@ public abstract class AbstractOp implements Op {
   protected abstract void endAttempt(ConnectionStats stats, long start);
 
   /**
-   * New implementations of AbstractOp should override this method to return false if the
-   * implementation should be excluded from client authentication. e.g. PingOp#needsUserId()
-   * <P/>
-   * Also, such an operation's <code>MessageType</code> must be added in the 'if' condition in
-   * {@link ServerConnection#updateAndGetSecurityPart()}
-   * 
-   * @return boolean
-   * @see AbstractOp#sendMessage(Connection)
-   * @see AbstractOp#processSecureBytes(Connection, Message)
-   * @see ServerConnection#updateAndGetSecurityPart()
-   */
-  protected boolean needsUserId() {
-    return true;
-  }
-
-  /**
    * Subclasses for AbstractOp should override this method to return false in this message should
    * not participate in any existing transaction
    * 

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/AddPDXEnumOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/AddPDXEnumOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/AddPDXEnumOp.java
index ca7790a..857d1d3 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/AddPDXEnumOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/AddPDXEnumOp.java
@@ -75,25 +75,11 @@ public class AddPDXEnumOp {
       stats.endAddPdxType(start, hasTimedOut(), hasFailed());
     }
 
-    @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
-    protected boolean needsUserId() {
-      return false;
-    }
-
     // Don't send the transaction id for this message type.
     @Override
     protected boolean participateInTransaction() {
       return false;
     }
 
-    // override since this is not a message subject to security
-    @Override
-    protected void sendMessage(Connection cnx) throws Exception {
-      getMessage().clearMessageHasSecurePartFlag();
-      getMessage().send(false);
-    }
   }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/AddPDXTypeOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/AddPDXTypeOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/AddPDXTypeOp.java
index 88c8551..4eb137d 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/AddPDXTypeOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/AddPDXTypeOp.java
@@ -75,25 +75,11 @@ public class AddPDXTypeOp {
       stats.endAddPdxType(start, hasTimedOut(), hasFailed());
     }
 
-    @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
-    protected boolean needsUserId() {
-      return false;
-    }
-
     // Don't send the transaction id for this message type.
     @Override
     protected boolean participateInTransaction() {
       return false;
     }
 
-    // override since this is not a message subject to security
-    @Override
-    protected void sendMessage(Connection cnx) throws Exception {
-      getMessage().clearMessageHasSecurePartFlag();
-      getMessage().send(false);
-    }
   }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/CloseConnectionOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/CloseConnectionOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/CloseConnectionOp.java
index ffcdc39..ea8a8b5 100755
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/CloseConnectionOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/CloseConnectionOp.java
@@ -54,9 +54,6 @@ public class CloseConnectionOp {
     }
 
     @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
     protected boolean needsUserId() {
       return false;
     }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/CommitOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/CommitOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/CommitOp.java
index edffb2b..f44d62d 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/CommitOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/CommitOp.java
@@ -72,9 +72,6 @@ public class CommitOp {
     }
 
     @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
     protected boolean needsUserId() {
       return false;
     }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetClientPRMetaDataOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetClientPRMetaDataOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetClientPRMetaDataOp.java
index 2ba3e3a..cc25416 100755
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetClientPRMetaDataOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetClientPRMetaDataOp.java
@@ -68,9 +68,6 @@ public class GetClientPRMetaDataOp {
     }
 
     @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
     protected boolean needsUserId() {
       return false;
     }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetClientPartitionAttributesOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetClientPartitionAttributesOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetClientPartitionAttributesOp.java
index 49567dd..ba7463e 100755
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetClientPartitionAttributesOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetClientPartitionAttributesOp.java
@@ -73,9 +73,6 @@ public class GetClientPartitionAttributesOp {
     }
 
     @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
     protected boolean needsUserId() {
       return false;
     }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetEventValueOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetEventValueOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetEventValueOp.java
index 3fb5fcf..8804e05 100755
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetEventValueOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetEventValueOp.java
@@ -59,9 +59,6 @@ public class GetEventValueOp {
     }
 
     @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
     protected boolean needsUserId() {
       return false;
     }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetFunctionAttributeOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetFunctionAttributeOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetFunctionAttributeOp.java
index c7edbfe..dea49a2 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetFunctionAttributeOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetFunctionAttributeOp.java
@@ -63,18 +63,5 @@ public class GetFunctionAttributeOp {
       stats.endGet(start, hasTimedOut(), hasFailed());
     }
 
-    @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
-    protected boolean needsUserId() {
-      return false;
-    }
-
-    @Override
-    protected void sendMessage(Connection cnx) throws Exception {
-      getMessage().clearMessageHasSecurePartFlag();
-      getMessage().send(false);
-    }
   }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXEnumByIdOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXEnumByIdOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXEnumByIdOp.java
index 7bbf740..dc94fe5 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXEnumByIdOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXEnumByIdOp.java
@@ -72,24 +72,10 @@ public class GetPDXEnumByIdOp {
       stats.endGetPDXTypeById(start, hasTimedOut(), hasFailed());
     }
 
-    @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
-    protected boolean needsUserId() {
-      return false;
-    }
-
     // Don't send the transaction id for this message type.
     @Override
     protected boolean participateInTransaction() {
       return false;
     }
-
-    @Override
-    protected void sendMessage(Connection cnx) throws Exception {
-      getMessage().clearMessageHasSecurePartFlag();
-      getMessage().send(false);
-    }
   }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXEnumsOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXEnumsOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXEnumsOp.java
index be4c092..3158eb3 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXEnumsOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXEnumsOp.java
@@ -84,22 +84,9 @@ public class GetPDXEnumsOp {
     protected void endAttempt(ConnectionStats stats, long start) {}
 
     @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
-    protected boolean needsUserId() {
-      return false;
-    }
-
-    @Override
     protected boolean participateInTransaction() {
       return false;
     }
 
-    @Override
-    protected void sendMessage(Connection cnx) throws Exception {
-      getMessage().clearMessageHasSecurePartFlag();
-      getMessage().send(false);
-    }
   }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXIdForEnumOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXIdForEnumOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXIdForEnumOp.java
index d87371c..9ad85f0 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXIdForEnumOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXIdForEnumOp.java
@@ -94,24 +94,11 @@ public class GetPDXIdForEnumOp {
       stats.endGetPDXTypeById(start, hasTimedOut(), hasFailed());
     }
 
-    @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
-    protected boolean needsUserId() {
-      return false;
-    }
-
     // Don't send the transaction id for this message type.
     @Override
     protected boolean participateInTransaction() {
       return false;
     }
 
-    @Override
-    protected void sendMessage(Connection cnx) throws Exception {
-      getMessage().clearMessageHasSecurePartFlag();
-      getMessage().send(false);
-    }
   }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXIdForTypeOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXIdForTypeOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXIdForTypeOp.java
index 27f600e..cc0cd65 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXIdForTypeOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXIdForTypeOp.java
@@ -93,24 +93,10 @@ public class GetPDXIdForTypeOp {
       stats.endGetPDXTypeById(start, hasTimedOut(), hasFailed());
     }
 
-    @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
-    protected boolean needsUserId() {
-      return false;
-    }
-
     // Don't send the transaction id for this message type.
     @Override
     protected boolean participateInTransaction() {
       return false;
     }
-
-    @Override
-    protected void sendMessage(Connection cnx) throws Exception {
-      getMessage().clearMessageHasSecurePartFlag();
-      getMessage().send(false);
-    }
   }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXTypeByIdOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXTypeByIdOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXTypeByIdOp.java
index bee50b5..826d4cd 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXTypeByIdOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXTypeByIdOp.java
@@ -72,24 +72,11 @@ public class GetPDXTypeByIdOp {
       stats.endGetPDXTypeById(start, hasTimedOut(), hasFailed());
     }
 
-    @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
-    protected boolean needsUserId() {
-      return false;
-    }
-
     // Don't send the transaction id for this message type.
     @Override
     protected boolean participateInTransaction() {
       return false;
     }
 
-    @Override
-    protected void sendMessage(Connection cnx) throws Exception {
-      getMessage().clearMessageHasSecurePartFlag();
-      getMessage().send(false);
-    }
   }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXTypesOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXTypesOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXTypesOp.java
index 5256924..9186680 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXTypesOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/GetPDXTypesOp.java
@@ -84,22 +84,9 @@ public class GetPDXTypesOp {
     protected void endAttempt(ConnectionStats stats, long start) {}
 
     @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
-    protected boolean needsUserId() {
-      return false;
-    }
-
-    @Override
     protected boolean participateInTransaction() {
       return false;
     }
 
-    @Override
-    protected void sendMessage(Connection cnx) throws Exception {
-      getMessage().clearMessageHasSecurePartFlag();
-      getMessage().send(false);
-    }
   }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/MakePrimaryOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/MakePrimaryOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/MakePrimaryOp.java
index e1d3d50..0332507 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/MakePrimaryOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/MakePrimaryOp.java
@@ -49,9 +49,6 @@ public class MakePrimaryOp {
     }
 
     @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
     protected boolean needsUserId() {
       return false;
     }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/PingOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/PingOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/PingOp.java
index 2e52542..fb07b39 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/PingOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/PingOp.java
@@ -53,6 +53,7 @@ public class PingOp {
 
     @Override
     protected void processSecureBytes(Connection cnx, Message message) throws Exception {
+      super.processSecureBytes(cnx, message);
       Message.MESSAGE_TYPE.set(null);
     }
 

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/PrimaryAckOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/PrimaryAckOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/PrimaryAckOp.java
index e380e99..d7d32a7 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/PrimaryAckOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/PrimaryAckOp.java
@@ -56,9 +56,6 @@ public class PrimaryAckOp {
     }
 
     @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
     protected boolean needsUserId() {
       return false;
     }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/PutOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/PutOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/PutOp.java
index 447ed38..1390c2d 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/PutOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/PutOp.java
@@ -409,9 +409,7 @@ public class PutOp {
 
     @Override
     protected void processSecureBytes(Connection cnx, Message message) throws Exception {
-      if (!this.isMetaRegionPutOp) {
-        super.processSecureBytes(cnx, message);
-      }
+      super.processSecureBytes(cnx, message);
     }
 
     @Override

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/ReadyForEventsOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/ReadyForEventsOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/ReadyForEventsOp.java
index f6d0ccb..12e15b4 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/ReadyForEventsOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/ReadyForEventsOp.java
@@ -48,9 +48,6 @@ public class ReadyForEventsOp {
     }
 
     @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
     protected boolean needsUserId() {
       return false;
     }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/RegisterDataSerializersOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/RegisterDataSerializersOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/RegisterDataSerializersOp.java
index 5b25961..b40a840 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/RegisterDataSerializersOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/RegisterDataSerializersOp.java
@@ -117,18 +117,5 @@ public class RegisterDataSerializersOp {
       stats.endRegisterDataSerializers(start, hasTimedOut(), hasFailed());
     }
 
-    @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
-    protected boolean needsUserId() {
-      return false;
-    }
-
-    @Override
-    protected void sendMessage(Connection cnx) throws Exception {
-      getMessage().clearMessageHasSecurePartFlag();
-      getMessage().send(false);
-    }
   }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/RegisterInstantiatorsOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/RegisterInstantiatorsOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/RegisterInstantiatorsOp.java
index 114bebe..40ce619 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/RegisterInstantiatorsOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/RegisterInstantiatorsOp.java
@@ -150,18 +150,5 @@ public class RegisterInstantiatorsOp {
       stats.endRegisterInstantiators(start, hasTimedOut(), hasFailed());
     }
 
-    @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
-    protected boolean needsUserId() {
-      return false;
-    }
-
-    @Override
-    protected void sendMessage(Connection cnx) throws Exception {
-      getMessage().clearMessageHasSecurePartFlag();
-      getMessage().send(false);
-    }
   }
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/RollbackOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/RollbackOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/RollbackOp.java
index 4704f3a..97cb2e6 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/RollbackOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/RollbackOp.java
@@ -80,9 +80,6 @@ public class RollbackOp {
     }
 
     @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
     protected boolean needsUserId() {
       return false;
     }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/SizeOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/SizeOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/SizeOp.java
index ac8c95e..fb3ffec 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/SizeOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/SizeOp.java
@@ -75,9 +75,6 @@ public class SizeOp {
     }
 
     @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
     protected boolean needsUserId() {
       return false;
     }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/TXFailoverOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/TXFailoverOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/TXFailoverOp.java
index 17fc701..0995981 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/TXFailoverOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/TXFailoverOp.java
@@ -74,9 +74,6 @@ public class TXFailoverOp {
     }
 
     @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
     protected boolean needsUserId() {
       return false;
     }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/cache/client/internal/TXSynchronizationOp.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/cache/client/internal/TXSynchronizationOp.java b/geode-core/src/main/java/org/apache/geode/cache/client/internal/TXSynchronizationOp.java
index 0c4086c..a02d463 100644
--- a/geode-core/src/main/java/org/apache/geode/cache/client/internal/TXSynchronizationOp.java
+++ b/geode-core/src/main/java/org/apache/geode/cache/client/internal/TXSynchronizationOp.java
@@ -147,9 +147,6 @@ public class TXSynchronizationOp {
     }
 
     @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
     protected boolean needsUserId() {
       return false;
     }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/Message.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/Message.java b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/Message.java
index 1f9ef91..b7835a3 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/Message.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/Message.java
@@ -1029,6 +1029,7 @@ public class Message {
     sb.append("type=").append(MessageType.getString(this.messageType));
     sb.append("; payloadLength=").append(this.payloadLength);
     sb.append("; numberOfParts=").append(this.numberOfParts);
+    sb.append("; hasSecurePart=").append(isSecureMode());
     sb.append("; transactionId=").append(this.transactionId);
     sb.append("; currentPart=").append(this.currentPart);
     sb.append("; messageModified=").append(this.messageModified);

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ServerConnection.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ServerConnection.java b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ServerConnection.java
index 870d0ff..394d261 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ServerConnection.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/ServerConnection.java
@@ -84,6 +84,17 @@ public abstract class ServerConnection implements Runnable {
    */
   private static final int TIMEOUT_BUFFER_FOR_CONNECTION_CLEANUP_MS = 5000;
 
+  public static final String ALLOW_INTERNAL_MESSAGES_WITHOUT_CREDENTIALS_NAME =
+      "geode.allow-internal-messages-without-credentials";
+
+  /**
+   * This property allows a rolling upgrade from a pre-1.2.1 cluster to a post-1.2.1 cluster.
+   * Normally, internal messages that can affect server state require credentials, but before
+   * 1.2.1 this was not the case. See GEODE-3249.
+   */
+  private static final boolean ALLOW_INTERNAL_MESSAGES_WITHOUT_CREDENTIALS =
+      Boolean.getBoolean(ALLOW_INTERNAL_MESSAGES_WITHOUT_CREDENTIALS_NAME);
+
   private Map commands;
 
   private final SecurityService securityService;
@@ -764,7 +775,8 @@ public abstract class ServerConnection implements Runnable {
 
         // if a subject exists for this uniqueId, binds the subject to this thread so that we can do
         // authorization later
-        if (AcceptorImpl.isIntegratedSecurity() && !isInternalMessage()
+        if (AcceptorImpl.isIntegratedSecurity()
+            && !isInternalMessage(this.requestMsg, ALLOW_INTERNAL_MESSAGES_WITHOUT_CREDENTIALS)
             && this.communicationMode != Acceptor.GATEWAY_TO_GATEWAY) {
           long uniqueId = getUniqueId();
           Subject subject = this.clientUserAuths.getSubject(uniqueId);
@@ -1068,7 +1080,8 @@ public abstract class ServerConnection implements Runnable {
     if (AcceptorImpl.isAuthenticationRequired()
         && this.handshake.getVersion().compareTo(Version.GFE_65) >= 0
         && (this.communicationMode != Acceptor.GATEWAY_TO_GATEWAY)
-        && (!this.requestMsg.getAndResetIsMetaRegion()) && (!isInternalMessage())) {
+        && (!this.requestMsg.getAndResetIsMetaRegion())
+        && (!isInternalMessage(this.requestMsg, ALLOW_INTERNAL_MESSAGES_WITHOUT_CREDENTIALS))) {
       setSecurityPart();
       return this.securePart;
     } else {
@@ -1081,34 +1094,37 @@ public abstract class ServerConnection implements Runnable {
     return null;
   }
 
-  private boolean isInternalMessage() {
-    return (this.requestMsg.messageType == MessageType.CLIENT_READY
-        || this.requestMsg.messageType == MessageType.CLOSE_CONNECTION
-        || this.requestMsg.messageType == MessageType.GETCQSTATS_MSG_TYPE
-        || this.requestMsg.messageType == MessageType.GET_CLIENT_PARTITION_ATTRIBUTES
-        || this.requestMsg.messageType == MessageType.GET_CLIENT_PR_METADATA
-        || this.requestMsg.messageType == MessageType.INVALID
-        || this.requestMsg.messageType == MessageType.MAKE_PRIMARY
-        || this.requestMsg.messageType == MessageType.MONITORCQ_MSG_TYPE
-        || this.requestMsg.messageType == MessageType.PERIODIC_ACK
-        || this.requestMsg.messageType == MessageType.PING
-        || this.requestMsg.messageType == MessageType.REGISTER_DATASERIALIZERS
-        || this.requestMsg.messageType == MessageType.REGISTER_INSTANTIATORS
-        || this.requestMsg.messageType == MessageType.REQUEST_EVENT_VALUE
-        || this.requestMsg.messageType == MessageType.ADD_PDX_TYPE
-        || this.requestMsg.messageType == MessageType.GET_PDX_ID_FOR_TYPE
-        || this.requestMsg.messageType == MessageType.GET_PDX_TYPE_BY_ID
-        || this.requestMsg.messageType == MessageType.SIZE
-        || this.requestMsg.messageType == MessageType.TX_FAILOVER
-        || this.requestMsg.messageType == MessageType.TX_SYNCHRONIZATION
-        || this.requestMsg.messageType == MessageType.GET_FUNCTION_ATTRIBUTES
-        || this.requestMsg.messageType == MessageType.ADD_PDX_ENUM
-        || this.requestMsg.messageType == MessageType.GET_PDX_ID_FOR_ENUM
-        || this.requestMsg.messageType == MessageType.GET_PDX_ENUM_BY_ID
-        || this.requestMsg.messageType == MessageType.GET_PDX_TYPES
-        || this.requestMsg.messageType == MessageType.GET_PDX_ENUMS
-        || this.requestMsg.messageType == MessageType.COMMIT
-        || this.requestMsg.messageType == MessageType.ROLLBACK);
+  public boolean isInternalMessage(Message message, boolean allowOldInternalMessages) {
+    int messageType = message.getMessageType();
+    boolean isInternalMessage = messageType == MessageType.PING
+        || messageType == MessageType.USER_CREDENTIAL_MESSAGE
+        || messageType == MessageType.REQUEST_EVENT_VALUE || messageType == MessageType.MAKE_PRIMARY
+        || messageType == MessageType.REMOVE_USER_AUTH || messageType == MessageType.CLIENT_READY
+        || messageType == MessageType.SIZE || messageType == MessageType.TX_FAILOVER
+        || messageType == MessageType.TX_SYNCHRONIZATION || messageType == MessageType.COMMIT
+        || messageType == MessageType.ROLLBACK || messageType == MessageType.CLOSE_CONNECTION
+        || messageType == MessageType.INVALID || messageType == MessageType.PERIODIC_ACK
+        || messageType == MessageType.GET_CLIENT_PR_METADATA
+        || messageType == MessageType.GET_CLIENT_PARTITION_ATTRIBUTES;
+
+    // we allow older clients to not send credentials for a handful of messages
+    // if and only if a system property is set. This allows a rolling upgrade
+    // to be performed.
+    if (!isInternalMessage && allowOldInternalMessages) {
+      isInternalMessage = messageType == MessageType.GETCQSTATS_MSG_TYPE
+          || messageType == MessageType.MONITORCQ_MSG_TYPE
+          || messageType == MessageType.REGISTER_DATASERIALIZERS
+          || messageType == MessageType.REGISTER_INSTANTIATORS
+          || messageType == MessageType.ADD_PDX_TYPE
+          || messageType == MessageType.GET_PDX_ID_FOR_TYPE
+          || messageType == MessageType.GET_PDX_TYPE_BY_ID
+          || messageType == MessageType.GET_FUNCTION_ATTRIBUTES
+          || messageType == MessageType.ADD_PDX_ENUM
+          || messageType == MessageType.GET_PDX_ID_FOR_ENUM
+          || messageType == MessageType.GET_PDX_ENUM_BY_ID
+          || messageType == MessageType.GET_PDX_TYPES || messageType == MessageType.GET_PDX_ENUMS;
+    }
+    return isInternalMessage;
   }
 
   public void run() {

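The rolling-upgrade escape hatch added above is read once, at class-load time, via `Boolean.getBoolean`. A minimal sketch of how the flag behaves follows; only the property name is taken from the patch above, while the demo class itself is hypothetical:

```java
// Hypothetical demo class; only the property name comes from the patch above.
public class AllowOldMessagesFlagDemo {

  /** Same name as ServerConnection.ALLOW_INTERNAL_MESSAGES_WITHOUT_CREDENTIALS_NAME. */
  public static final String PROP = "geode.allow-internal-messages-without-credentials";

  /**
   * Boolean.getBoolean returns true only when the system property exists and
   * equals "true" (case-insensitive); an absent or malformed value reads as false.
   */
  public static boolean allowOldInternalMessages() {
    return Boolean.getBoolean(PROP);
  }

  public static void main(String[] args) {
    System.setProperty(PROP, "true");
    System.out.println(allowOldInternalMessages()); // prints "true"
    System.clearProperty(PROP);
    System.out.println(allowOldInternalMessages()); // prints "false"
  }
}
```

Note that ServerConnection caches the value in a static final field, so the property must be set before the class loads — for example on the server JVM command line with `-Dgeode.allow-internal-messages-without-credentials=true` (with `gfsh start server`, presumably via the `--J` option); setting it at runtime after class load would have no effect.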
http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/command/AddPdxType.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/command/AddPdxType.java b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/command/AddPdxType.java
index cb4b261..041e12f 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/command/AddPdxType.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/command/AddPdxType.java
@@ -24,6 +24,7 @@ import org.apache.geode.internal.cache.tier.sockets.BaseCommand;
 import org.apache.geode.internal.cache.tier.sockets.Message;
 import org.apache.geode.internal.cache.tier.sockets.ServerConnection;
 import org.apache.geode.internal.logging.LogService;
+import org.apache.geode.internal.security.AuthorizeRequest;
 import org.apache.geode.internal.security.SecurityService;
 import org.apache.geode.pdx.internal.PdxType;
 import org.apache.geode.pdx.internal.TypeRegistry;

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/command/GetPDXIdForType.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/command/GetPDXIdForType.java b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/command/GetPDXIdForType.java
index caa0661..f2172ef 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/command/GetPDXIdForType.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/tier/sockets/command/GetPDXIdForType.java
@@ -22,6 +22,7 @@ import org.apache.geode.internal.cache.tier.MessageType;
 import org.apache.geode.internal.cache.tier.sockets.BaseCommand;
 import org.apache.geode.internal.cache.tier.sockets.Message;
 import org.apache.geode.internal.cache.tier.sockets.ServerConnection;
+import org.apache.geode.internal.security.AuthorizeRequest;
 import org.apache.geode.internal.security.SecurityService;
 import org.apache.geode.pdx.internal.PdxType;
 import org.apache.geode.pdx.internal.TypeRegistry;

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/test/java/org/apache/geode/security/ClientAuthenticationPart2DUnitTest.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/security/ClientAuthenticationPart2DUnitTest.java b/geode-core/src/test/java/org/apache/geode/security/ClientAuthenticationPart2DUnitTest.java
index 3cf2efc..5a78535 100644
--- a/geode-core/src/test/java/org/apache/geode/security/ClientAuthenticationPart2DUnitTest.java
+++ b/geode-core/src/test/java/org/apache/geode/security/ClientAuthenticationPart2DUnitTest.java
@@ -14,10 +14,16 @@
  */
 package org.apache.geode.security;
 
+import static org.mockito.Mockito.*;
+
+import org.junit.Assert;
 import org.junit.Ignore;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
+import org.apache.geode.internal.cache.tier.MessageType;
+import org.apache.geode.internal.cache.tier.sockets.Message;
+import org.apache.geode.internal.cache.tier.sockets.ServerConnection;
 import org.apache.geode.test.junit.categories.DistributedTest;
 import org.apache.geode.test.junit.categories.SecurityTest;
 
@@ -33,6 +39,32 @@ public class ClientAuthenticationPart2DUnitTest extends ClientAuthenticationTest
     doTestNoCredentials(true);
   }
 
+  // GEODE-3249
+  @Test
+  public void testNoCredentialsForMultipleUsersCantRegisterMetadata() throws Exception {
+    doTestNoCredentialsCantRegisterMetadata(true);
+  }
+
+  @Test
+  public void testServerConnectionAcceptsOldInternalMessagesIfAllowed() throws Exception {
+
+    ServerConnection serverConnection = mock(ServerConnection.class);
+    when(serverConnection.isInternalMessage(any(Message.class), any(Boolean.class)))
+        .thenCallRealMethod();
+
+    int[] oldInternalMessages = new int[] {MessageType.ADD_PDX_TYPE, MessageType.ADD_PDX_ENUM,
+        MessageType.REGISTER_INSTANTIATORS, MessageType.REGISTER_DATASERIALIZERS};
+
+    for (int i = 0; i < oldInternalMessages.length; i++) {
+      Message message = mock(Message.class);
+      when(message.getMessageType()).thenReturn(oldInternalMessages[i]);
+
+      serverConnection.setRequestMsg(message);
+      Assert.assertFalse(serverConnection.isInternalMessage(message, false));
+      Assert.assertTrue(serverConnection.isInternalMessage(message, true));
+    }
+  }
+
   @Test
   public void testInvalidCredentialsForMultipleUsers() throws Exception {
     doTestInvalidCredentials(true);

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/test/java/org/apache/geode/security/ClientAuthenticationTestCase.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/security/ClientAuthenticationTestCase.java b/geode-core/src/test/java/org/apache/geode/security/ClientAuthenticationTestCase.java
index 1293aff..0ecd72f 100644
--- a/geode-core/src/test/java/org/apache/geode/security/ClientAuthenticationTestCase.java
+++ b/geode-core/src/test/java/org/apache/geode/security/ClientAuthenticationTestCase.java
@@ -24,11 +24,22 @@ import static org.apache.geode.test.dunit.IgnoredException.*;
 import static org.apache.geode.test.dunit.LogWriterUtils.*;
 import static org.apache.geode.test.dunit.Wait.*;
 
+import java.io.DataInput;
+import java.io.DataOutput;
 import java.io.IOException;
 import java.util.Properties;
 import javax.net.ssl.SSLException;
 import javax.net.ssl.SSLHandshakeException;
 
+import org.apache.geode.DataSerializer;
+import org.apache.geode.cache.client.Pool;
+import org.apache.geode.cache.client.PoolManager;
+import org.apache.geode.cache.client.internal.ExecutablePool;
+import org.apache.geode.cache.client.internal.RegisterDataSerializersOp;
+import org.apache.geode.internal.HeapDataOutputStream;
+import org.apache.geode.internal.InternalDataSerializer;
+import org.apache.geode.internal.Version;
+import org.apache.geode.internal.cache.EventID;
 import org.apache.geode.security.generator.CredentialGenerator;
 import org.apache.geode.security.generator.DummyCredentialGenerator;
 import org.apache.geode.test.dunit.Host;
@@ -52,6 +63,37 @@ public abstract class ClientAuthenticationTestCase extends JUnit4DistributedTest
       {AuthenticationRequiredException.class.getName(),
           AuthenticationFailedException.class.getName(), SSLHandshakeException.class.getName()};
 
+
+  public static enum Color {
+    red, orange, yellow, green, blue, indigo, violet
+  }
+
+
+  public static class MyDataSerializer extends DataSerializer {
+    public MyDataSerializer() {}
+
+    @Override
+    public Class<?>[] getSupportedClasses() {
+      return new Class[] {Color.class};
+    }
+
+    public int getId() {
+      return 1073741824;
+    }
+
+    @Override
+    public boolean toData(Object object, DataOutput output) {
+      return true;
+    }
+
+    @Override
+    public Object fromData(DataInput in) throws IOException, ClassNotFoundException {
+      return Color.red;
+    }
+  }
+
+
+
   @Override
   public final void postSetUp() throws Exception {
     final Host host = Host.getHost(0);
@@ -172,6 +214,70 @@ public abstract class ClientAuthenticationTestCase extends JUnit4DistributedTest
     }
   }
 
+  protected void doTestNoCredentialsCantRegisterMetadata(final boolean multiUser) throws Exception {
+    CredentialGenerator gen = new DummyCredentialGenerator();
+    Properties extraProps = gen.getSystemProperties();
+    Properties javaProps = gen.getJavaProperties();
+    String authenticator = gen.getAuthenticator();
+    String authInit = gen.getAuthInit();
+
+    // Start the servers
+    int locPort1 = getLocatorPort();
+    int locPort2 = getLocatorPort();
+    String locString = getAndClearLocatorString();
+
+    int port1 = createServer1(extraProps, javaProps, authenticator, locPort1, locString);
+    int port2 = server2
+        .invoke(() -> createCacheServer(locPort2, locString, authenticator, extraProps, javaProps));
+
+    // Start first client with valid credentials
+    Properties credentials1 = gen.getValidCredentials(1);
+    Properties javaProps1 = gen.getJavaProperties();
+
+    createClient1NoException(multiUser, authInit, port1, port2, credentials1, javaProps1);
+
+    // Trying to create the region on client2
+    if (gen.classCode().equals(CredentialGenerator.ClassCode.SSL)) {
+      // For SSL the exception may not come since the server can close socket
+      // before handshake message is sent from client. However exception
+      // should come in any region operations.
+      client2.invoke(
+          () -> createCacheClient(null, null, null, port1, port2, 0, multiUser, NO_EXCEPTION));
+      client2.invoke(() -> doPuts(2, OTHER_EXCEPTION));
+
+    } else {
+      client2.invoke(
+          () -> createCacheClient(null, null, null, port1, port2, 0, multiUser, AUTHREQ_EXCEPTION));
+
+      // Try to register a PDX type with the server
+      client2.invoke("register a PDX type", () -> {
+        HeapDataOutputStream outputStream = new HeapDataOutputStream(100, Version.CURRENT);
+        try {
+          DataSerializer.writeObject(new Employee(106l, "David", "Copperfield"), outputStream);
+          throw new Error("operation should have been rejected");
+        } catch (UnsupportedOperationException e) {
+          // "UnsupportedOperationException: Use Pool APIs for doing operations when
+          // multiuser-secure-mode-enabled is set to true."
+        }
+      });
+
+      // Try to register a DataSerializer with the server
+      client2.invoke("register a data serializer", () -> {
+        EventID eventId = InternalDataSerializer.generateEventId();
+        Pool pool = PoolManager.getAll().values().iterator().next();
+        try {
+          RegisterDataSerializersOp.execute((ExecutablePool) pool,
+              new DataSerializer[] {new MyDataSerializer()}, eventId);
+          throw new Error("operation should have been rejected");
+        } catch (UnsupportedOperationException e) {
+          // "UnsupportedOperationException: Use Pool APIs for doing operations when
+          // multiuser-secure-mode-enabled is set to true."
+        }
+      });
+    }
+
+  }
+
   protected void doTestInvalidCredentials(final boolean multiUser) throws Exception {
     CredentialGenerator gen = new DummyCredentialGenerator();
     Properties extraProps = gen.getSystemProperties();

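The registration attempts above are both rejected with the same message when `multiuser-secure-mode-enabled` is set. A minimal, stdlib-only Java sketch of that guard pattern (class and method names are hypothetical, not Geode's internals) mirrors what the test asserts:

```java
// Hypothetical sketch of the guard the test above exercises: when
// multiuser-secure-mode-enabled is set, connection-level operations that
// are not routed through a Pool are rejected up front.
public class MultiUserModeGuardSketch {

  static boolean multiUserSecureModeEnabled = true;

  // Stands in for a registration op (PDX type / DataSerializer) that
  // needs a server connection but was not issued through a Pool API.
  static void registerThroughDefaultConnection() {
    if (multiUserSecureModeEnabled) {
      throw new UnsupportedOperationException(
          "Use Pool APIs for doing operations when "
              + "multiuser-secure-mode-enabled is set to true.");
    }
    // ...would otherwise write the registration message to the server
  }

  public static void main(String[] args) {
    try {
      registerThroughDefaultConnection();
      throw new Error("operation should have been rejected");
    } catch (UnsupportedOperationException expected) {
      System.out.println("rejected: " + expected.getMessage());
    }
  }
}
```

The try/throw-Error/catch shape is the same one the DUnit test uses: reaching the `Error` line means the rejection never happened.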
http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/test/java/org/apache/geode/security/ClientAuthorizationTestCase.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/security/ClientAuthorizationTestCase.java b/geode-core/src/test/java/org/apache/geode/security/ClientAuthorizationTestCase.java
index 9d3f721..a4fd365 100644
--- a/geode-core/src/test/java/org/apache/geode/security/ClientAuthorizationTestCase.java
+++ b/geode-core/src/test/java/org/apache/geode/security/ClientAuthorizationTestCase.java
@@ -288,7 +288,8 @@ public abstract class ClientAuthorizationTestCase extends JUnit4DistributedTestC
 
     final int numOps = indices.length;
     System.out.println("Got doOp for op: " + op.toString() + ", numOps: " + numOps + ", indices: "
-        + indicesToString(indices) + ", expect: " + expectedResult);
+        + indicesToString(indices) + ", expect: " + expectedResult + " flags: "
+        + OpFlags.description(flags));
     boolean exceptionOccurred = false;
     boolean breakLoop = false;
 

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/test/java/org/apache/geode/security/SecurityTestUtils.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/security/SecurityTestUtils.java b/geode-core/src/test/java/org/apache/geode/security/SecurityTestUtils.java
index b1c0907..e69f36d 100644
--- a/geode-core/src/test/java/org/apache/geode/security/SecurityTestUtils.java
+++ b/geode-core/src/test/java/org/apache/geode/security/SecurityTestUtils.java
@@ -1825,7 +1825,7 @@ public class SecurityTestUtils {
 
   // ------------------------------- inner classes ----------------------------
 
-  private static class Employee implements PdxSerializable {
+  public static class Employee implements PdxSerializable {
 
     private Long Id;
     private String fname;
@@ -1854,4 +1854,5 @@ public class SecurityTestUtils {
       out.writeString("lname", lname);
     }
   }
+
 }

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-core/src/test/java/org/apache/geode/test/dunit/standalone/VersionManager.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/test/dunit/standalone/VersionManager.java b/geode-core/src/test/java/org/apache/geode/test/dunit/standalone/VersionManager.java
index 739b690..8eefa01 100755
--- a/geode-core/src/test/java/org/apache/geode/test/dunit/standalone/VersionManager.java
+++ b/geode-core/src/test/java/org/apache/geode/test/dunit/standalone/VersionManager.java
@@ -45,8 +45,6 @@ public class VersionManager {
     instance = new VersionManager();
     final String fileName = "geodeOldVersionClasspaths.txt";
     instance.findVersions(fileName);
-    System.out
-        .println("VersionManager has loaded the following classpaths:\n" + instance.classPaths);
   }
 
   public static VersionManager getInstance() {

http://git-wip-us.apache.org/repos/asf/geode/blob/6be38cad/geode-wan/src/main/java/org/apache/geode/cache/client/internal/GatewaySenderBatchOp.java
----------------------------------------------------------------------
diff --git a/geode-wan/src/main/java/org/apache/geode/cache/client/internal/GatewaySenderBatchOp.java b/geode-wan/src/main/java/org/apache/geode/cache/client/internal/GatewaySenderBatchOp.java
index b8616a9..d7c721d 100755
--- a/geode-wan/src/main/java/org/apache/geode/cache/client/internal/GatewaySenderBatchOp.java
+++ b/geode-wan/src/main/java/org/apache/geode/cache/client/internal/GatewaySenderBatchOp.java
@@ -224,9 +224,6 @@ public class GatewaySenderBatchOp {
     }
 
     @Override
-    protected void processSecureBytes(Connection cnx, Message message) throws Exception {}
-
-    @Override
     protected boolean needsUserId() {
       return false;
     }


[26/51] [abbrv] geode git commit: GEODE-2886 : 1. updated to throw IllegalStateException from WaitUntilFlushedFunction.java and corresponding testcase change.

Posted by kl...@apache.org.
GEODE-2886 : 1. updated to throw IllegalStateException from
WaitUntilFlushedFunction.java and corresponding testcase change.


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/37201519
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/37201519
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/37201519

Branch: refs/heads/feature/GEODE-1279
Commit: 37201519de5968ba265e133a92ececf0c3892bb0
Parents: a1c3fc7
Author: Amey Barve <ab...@apache.org>
Authored: Tue Aug 8 18:41:22 2017 +0530
Committer: Amey Barve <ab...@apache.org>
Committed: Thu Aug 17 15:47:30 2017 +0530

----------------------------------------------------------------------
 .../internal/distributed/WaitUntilFlushedFunction.java |  4 +---
 .../cache/lucene/LuceneQueriesIntegrationTest.java     | 13 +++++++++----
 2 files changed, 10 insertions(+), 7 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/37201519/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java
index ca77873..6c0b8b7 100644
--- a/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java
+++ b/geode-lucene/src/main/java/org/apache/geode/cache/lucene/internal/distributed/WaitUntilFlushedFunction.java
@@ -61,10 +61,8 @@ public class WaitUntilFlushedFunction implements Function, InternalEntity {
       }
 
     } else {
-      IllegalStateException illegalStateException = new IllegalStateException(
+      throw new IllegalStateException(
           "The AEQ does not exist for the index " + indexName + " region " + region.getFullPath());
-      logger.error(illegalStateException.getMessage());
-      resultSender.lastResult(result);
     }
     resultSender.lastResult(result);
   }

http://git-wip-us.apache.org/repos/asf/geode/blob/37201519/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java
----------------------------------------------------------------------
diff --git a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java
index 2044c68..2c46b4c 100644
--- a/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java
+++ b/geode-lucene/src/test/java/org/apache/geode/cache/lucene/LuceneQueriesIntegrationTest.java
@@ -342,10 +342,15 @@ public class LuceneQueriesIntegrationTest extends LuceneIntegrationTest {
 
     // This is to send IllegalStateException from WaitUntilFlushedFunction
     String nonCreatedIndex = "index2";
-
-    boolean b =
-        luceneService.waitUntilFlushed(nonCreatedIndex, REGION_NAME, 60000, TimeUnit.MILLISECONDS);
-    assertFalse(b);
+    boolean result = false;
+    try {
+      result = luceneService.waitUntilFlushed(nonCreatedIndex, REGION_NAME, 60000,
+          TimeUnit.MILLISECONDS);
+    } catch (Exception ex) {
+      assertEquals(ex.getMessage(),
+          "java.lang.IllegalStateException: The AEQ does not exist for the index index2 region /index");
+      assertFalse(result);
+    }
   }
 
   @Test()

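The commit above replaces log-and-continue with a thrown `IllegalStateException`, and the test now catches it instead of asserting on a return value. A self-contained sketch of that fail-fast pattern (hypothetical names, not the Geode classes; the message text follows the diff above):

```java
public class FailFastSketch {

  // Stands in for waitUntilFlushed(): instead of logging the problem and
  // sending a default result, the missing-AEQ case now throws.
  static boolean waitUntilFlushed(String indexName, String regionPath, boolean aeqExists) {
    if (!aeqExists) {
      throw new IllegalStateException(
          "The AEQ does not exist for the index " + indexName + " region " + regionPath);
    }
    return true; // flush completed
  }

  public static void main(String[] args) {
    boolean result = false;
    try {
      result = waitUntilFlushed("index2", "/index", false);
    } catch (IllegalStateException ex) {
      // The caller sees the failure immediately instead of a silent false.
      System.out.println("caught: " + ex.getMessage());
    }
    System.out.println("result=" + result);
  }
}
```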

[45/51] [abbrv] geode git commit: GEODE-3395 Variable-ize product version and name in user guide - Developing

Posted by kl...@apache.org.
http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/distributed_regions/how_region_versioning_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/distributed_regions/how_region_versioning_works.html.md.erb b/geode-docs/developing/distributed_regions/how_region_versioning_works.html.md.erb
index 9911d31..83fedc1 100644
--- a/geode-docs/developing/distributed_regions/how_region_versioning_works.html.md.erb
+++ b/geode-docs/developing/distributed_regions/how_region_versioning_works.html.md.erb
@@ -21,32 +21,32 @@ limitations under the License.
 
 <a id="topic_7A4B6C6169BD4B1ABD356294F744D236"></a>
 
-Geode performs different consistency checks depending on the type of region you have configured.
+<%=vars.product_name%> performs different consistency checks depending on the type of region you have configured.
 
 ## <a id="topic_7A4B6C6169BD4B1ABD356294F744D236__section_B090F5FB87D84104A7BE4BCEA6BAE6B7" class="no-quick-link"></a>Partitioned Region Consistency
 
-For a partitioned region, Geode maintains consistency by routing all updates on a given key to the Geode member that holds the primary copy of that key. That member holds a lock on the key while distributing updates to other members that host a copy of the key. Because all updates to a partitioned region are serialized on the primary Geode member, all members apply the updates in the same order and consistency is maintained at all times. See [Understanding Partitioning](../partitioned_regions/how_partitioning_works.html).
+For a partitioned region, <%=vars.product_name%> maintains consistency by routing all updates on a given key to the <%=vars.product_name%> member that holds the primary copy of that key. That member holds a lock on the key while distributing updates to other members that host a copy of the key. Because all updates to a partitioned region are serialized on the primary <%=vars.product_name%> member, all members apply the updates in the same order and consistency is maintained at all times. See [Understanding Partitioning](../partitioned_regions/how_partitioning_works.html).
 
 ## <a id="topic_7A4B6C6169BD4B1ABD356294F744D236__section_72DFB366C8F14ADBAF2A136669ECAB1E" class="no-quick-link"></a>Replicated Region Consistency
 
-For a replicated region, any member that hosts the region can update a key and distribute that update to other members without locking the key. It is possible that two members can update the same key at the same time (a concurrent update). It is also possible that, due to network latency, an update in one member is distributed to other members at a later time, after those members have already applied more recent updates to the key (an out-of-order update). By default, Geode members perform conflict checking before applying region updates in order to detect and consistently resolve concurrent and out-of-order updates. Conflict checking ensures that region data eventually becomes consistent on all members that host the region. The conflict checking behavior for replicated regions is summarized as follows:
+For a replicated region, any member that hosts the region can update a key and distribute that update to other members without locking the key. It is possible that two members can update the same key at the same time (a concurrent update). It is also possible that, due to network latency, an update in one member is distributed to other members at a later time, after those members have already applied more recent updates to the key (an out-of-order update). By default, <%=vars.product_name%> members perform conflict checking before applying region updates in order to detect and consistently resolve concurrent and out-of-order updates. Conflict checking ensures that region data eventually becomes consistent on all members that host the region. The conflict checking behavior for replicated regions is summarized as follows:
 
 -   If two members update the same key at the same time, conflict checking ensures that all members eventually apply the same value, which is the value of one of the two concurrent updates.
 -   If a member receives an out-of-order update (an update that is received after one or more recent updates were applied), conflict checking ensures that the out-of-order update is discarded and not applied to the cache.
 
-[How Consistency Checking Works for Replicated Regions](#topic_C5B74CCDD909403C815639339AA03758) and [How Destroy and Clear Operations Are Resolved](#topic_321B05044B6641FCAEFABBF5066BD399) provide more details about how Geode performs conflict checking when applying an update.
+[How Consistency Checking Works for Replicated Regions](#topic_C5B74CCDD909403C815639339AA03758) and [How Destroy and Clear Operations Are Resolved](#topic_321B05044B6641FCAEFABBF5066BD399) provide more details about how <%=vars.product_name%> performs conflict checking when applying an update.
 
 ## <a id="topic_7A4B6C6169BD4B1ABD356294F744D236__section_313045F430EE459CB411CAAE7B00F3D8" class="no-quick-link"></a>Non-Replicated Regions and Client Cache Consistency
 
 When a member receives an update for an entry in a non-replicated region and applies an update, it performs conflict checking in the same way as for a replicated region. However, if the member initiates an operation on an entry that is not present in the region, it first passes that operation to a member that hosts a replicate. The member that hosts the replica generates and provides the version information necessary for subsequent conflict checking. See [How Consistency Checking Works for Replicated Regions](#topic_C5B74CCDD909403C815639339AA03758).
 
-Client caches also perform consistency checking in the same way when they receive an update for a region entry. However, all region operations that originate in the client cache are first passed onto an available Geode server, which generates the version information necessary for subsequent conflict checking.
+Client caches also perform consistency checking in the same way when they receive an update for a region entry. However, all region operations that originate in the client cache are first passed onto an available <%=vars.product_name%> server, which generates the version information necessary for subsequent conflict checking.
 
 ## <a id="topic_B64891585E7F4358A633C792F10FA23E" class="no-quick-link"></a>Configuring Consistency Checking
 
-Geode enables consistency checking by default. You cannot disable consistency checking for persistent regions. For all other regions, you can explicitly enable or disable consistency checking by setting the `concurrency-checks-enabled` region attribute in `cache.xml` to "true" or "false."
+<%=vars.product_name%> enables consistency checking by default. You cannot disable consistency checking for persistent regions. For all other regions, you can explicitly enable or disable consistency checking by setting the `concurrency-checks-enabled` region attribute in `cache.xml` to "true" or "false."
 
-All Geode members that host a region must use the same `concurrency-checks-enabled` setting for that region.
+All <%=vars.product_name%> members that host a region must use the same `concurrency-checks-enabled` setting for that region.
 
 A client cache can disable consistency checking for a region even if server caches enable consistency checking for the same region. This configuration ensures that the client sees all events for the region, but it does not prevent the client cache region from becoming out-of-sync with the server cache.
 
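The `concurrency-checks-enabled` attribute described above is set per region in `cache.xml`; a minimal fragment (region name illustrative) might look like:

```xml
<cache>
  <region name="exampleRegion">
    <!-- Consistency checking is on by default; set to "false" only for
         non-persistent regions, and use the same value on every member
         that hosts this region. -->
    <region-attributes refid="REPLICATE" concurrency-checks-enabled="true"/>
  </region>
</cache>
```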
@@ -65,21 +65,21 @@ If you cannot support the additional overhead in your deployment, you can disabl
 
 ## <a id="topic_C5B74CCDD909403C815639339AA03758" class="no-quick-link"></a>How Consistency Checking Works for Replicated Regions
 
-Each region stores version and timestamp information for use in conflict detection. Geode members use the recorded information to detect and resolve conflicts consistently before applying a distributed update.
+Each region stores version and timestamp information for use in conflict detection. <%=vars.product_name%> members use the recorded information to detect and resolve conflicts consistently before applying a distributed update.
 
 <a id="topic_C5B74CCDD909403C815639339AA03758__section_763B071061C94D1E82E8883325294547"></a>
-By default, each entry in a region stores the ID of the Geode member that last updated the entry, as well as a version stamp for the entry that is incremented each time an update occurs. The version information is stored in each local entry, and the version stamp is distributed to other Geode members when the local entry is updated.
+By default, each entry in a region stores the ID of the <%=vars.product_name%> member that last updated the entry, as well as a version stamp for the entry that is incremented each time an update occurs. The version information is stored in each local entry, and the version stamp is distributed to other <%=vars.product_name%> members when the local entry is updated.
 
-A Geode member or client that receives an update message first compares the update version stamp with the version stamp recorded in its local cache. If the update version stamp is larger, it represents a newer version of the entry, so the receiving member applies the update locally and updates the version information. A smaller update version stamp indicates an out-of-order update, which is discarded.
+A <%=vars.product_name%> member or client that receives an update message first compares the update version stamp with the version stamp recorded in its local cache. If the update version stamp is larger, it represents a newer version of the entry, so the receiving member applies the update locally and updates the version information. A smaller update version stamp indicates an out-of-order update, which is discarded.
 
-An identical version stamp indicates that multiple Geode members updated the same entry at the same time. To resolve a concurrent update, a Geode member always applies (or keeps) the region entry that has the highest membership ID; the region entry having the lower membership ID is discarded.
+An identical version stamp indicates that multiple <%=vars.product_name%> members updated the same entry at the same time. To resolve a concurrent update, a <%=vars.product_name%> member always applies (or keeps) the region entry that has the highest membership ID; the region entry having the lower membership ID is discarded.
 
 **Note:**
-When a Geode member discards an update message (either for an out-of-order update or when resolving a concurrent update), it does not pass the discarded event to an event listener for the region. You can track the number of discarded updates for each member using the `conflatedEvents` statistic. See [Geode Statistics List](../../reference/statistics_list.html#statistics_list). Some members may discard an update while other members apply the update, depending on the order in which each member receives the update. For this reason, the `conflatedEvents` statistic differs for each Geode member. The example below describes this behavior in more detail.
+When a <%=vars.product_name%> member discards an update message (either for an out-of-order update or when resolving a concurrent update), it does not pass the discarded event to an event listener for the region. You can track the number of discarded updates for each member using the `conflatedEvents` statistic. See [<%=vars.product_name%> Statistics List](../../reference/statistics_list.html#statistics_list). Some members may discard an update while other members apply the update, depending on the order in which each member receives the update. For this reason, the `conflatedEvents` statistic differs for each <%=vars.product_name%> member. The example below describes this behavior in more detail.
 
-The following example shows how a concurrent update is handled in a distributed system of three Geode members. Assume that Members A, B, and C have membership IDs of 1, 2, and 3, respectively. Each member currently stores an entry, X, in their caches at version C2 (the entry was last updated by member C):
+The following example shows how a concurrent update is handled in a distributed system of three <%=vars.product_name%> members. Assume that Members A, B, and C have membership IDs of 1, 2, and 3, respectively. Each member currently stores an entry, X, in their caches at version C2 (the entry was last updated by member C):
 
-**Step 1:** An application updates entry X on Geode member A at the same time another application updates entry X on member C. Each member increments the version stamp for the entry and records the version stamp with their member ID in their local caches. In this case the entry was originally at version C2, so each member updates the version to 3 (A3 and C3, respectively) in their local caches.
+**Step 1:** An application updates entry X on <%=vars.product_name%> member A at the same time another application updates entry X on member C. Each member increments the version stamp for the entry and records the version stamp with their member ID in their local caches. In this case the entry was originally at version C2, so each member updates the version to 3 (A3 and C3, respectively) in their local caches.
 
 <img src="../../images_svg/region_entry_versions_1.svg" id="topic_C5B74CCDD909403C815639339AA03758__image_nt5_ptw_4r" class="image" />
 
@@ -101,27 +101,27 @@ At this point, all members that host the region have achieved a consistent state
 
 ## <a id="topic_321B05044B6641FCAEFABBF5066BD399" class="no-quick-link"></a>How Destroy and Clear Operations Are Resolved
 
-When consistency checking is enabled for a region, a Geode member does not immediately remove an entry from the region when an application destroys the entry. Instead, the member retains the entry with its current version stamp for a period of time in order to detect possible conflicts with operations that have occurred. The retained entry is referred to as a *tombstone*. Geode retains tombstones for partitioned regions and non-replicated regions as well as for replicated regions, in order to provide consistency.
+When consistency checking is enabled for a region, a <%=vars.product_name%> member does not immediately remove an entry from the region when an application destroys the entry. Instead, the member retains the entry with its current version stamp for a period of time in order to detect possible conflicts with operations that have occurred. The retained entry is referred to as a *tombstone*. <%=vars.product_name%> retains tombstones for partitioned regions and non-replicated regions as well as for replicated regions, in order to provide consistency.
 
 A tombstone in a client cache or a non-replicated region expires after 8 minutes, at which point the tombstone is immediately removed from the cache.
 
-A tombstone for a replicated or partitioned region expires after 10 minutes. Expired tombstones are eligible for garbage collection by the Geode member. Garbage collection is automatically triggered after 100,000 tombstones of any type have timed out in the local Geode member. You can optionally set the `gemfire.tombstone-gc-threshold` property to a value smaller than 100000 to perform garbage collection more frequently.
+A tombstone for a replicated or partitioned region expires after 10 minutes. Expired tombstones are eligible for garbage collection by the <%=vars.product_name%> member. Garbage collection is automatically triggered after 100,000 tombstones of any type have timed out in the local <%=vars.product_name%> member. You can optionally set the `gemfire.tombstone-gc-threshold` property to a value smaller than 100000 to perform garbage collection more frequently.
 
 **Note:**
-To avoid out-of-memory errors, a Geode member also initiates garbage collection for tombstones when the amount of free memory drops below 30 percent of total memory.
+To avoid out-of-memory errors, a <%=vars.product_name%> member also initiates garbage collection for tombstones when the amount of free memory drops below 30 percent of total memory.
 
-You can monitor the total number of tombstones in a cache using the `tombstoneCount` statistic in `CachePerfStats`. The `tombstoneGCCount` statistic records the total number of tombstone garbage collection cycles that a member has performed. `replicatedTombstonesSize` and `nonReplicatedTombstonesSize` show the approximate number of bytes that are currently consumed by tombstones in replicated or partitioned regions, and in non-replicated regions, respectively. See [Geode Statistics List](../../reference/statistics_list.html#statistics_list).
+You can monitor the total number of tombstones in a cache using the `tombstoneCount` statistic in `CachePerfStats`. The `tombstoneGCCount` statistic records the total number of tombstone garbage collection cycles that a member has performed. `replicatedTombstonesSize` and `nonReplicatedTombstonesSize` show the approximate number of bytes that are currently consumed by tombstones in replicated or partitioned regions, and in non-replicated regions, respectively. See [<%=vars.product_name%> Statistics List](../../reference/statistics_list.html#statistics_list).
 
 ## <a id="topic_321B05044B6641FCAEFABBF5066BD399__section_4D0140E96A3141EB8D983D0A43464097" class="no-quick-link"></a>About Region.clear() Operations
 
-Region entry version stamps and tombstones ensure consistency only when individual entries are destroyed. A `Region.clear()` operation, however, operates on all entries in a region at once. To provide consistency for `Region.clear()` operations, Geode obtains a distributed read/write lock for the region, which blocks all concurrent updates to the region. Any updates that were initiated before the clear operation are allowed to complete before the region is cleared.
+Region entry version stamps and tombstones ensure consistency only when individual entries are destroyed. A `Region.clear()` operation, however, operates on all entries in a region at once. To provide consistency for `Region.clear()` operations, <%=vars.product_name%> obtains a distributed read/write lock for the region, which blocks all concurrent updates to the region. Any updates that were initiated before the clear operation are allowed to complete before the region is cleared.
 
 ## <a id="topic_32ACFA5542C74F3583ECD30467F352B0" class="no-quick-link"></a>Transactions with Consistent Regions
 
 A transaction that modifies a region having consistency checking enabled generates all necessary version information for region updates when the transaction commits.
 
-If a transaction modifies a normal, preloaded or empty region, the transaction is first delegated to a Geode member that holds a replicate for the region. This behavior is similar to the transactional behavior for partitioned regions, where the partitioned region transaction is forwarded to a member that hosts the primary for the partitioned region update.
+If a transaction modifies a normal, preloaded or empty region, the transaction is first delegated to a <%=vars.product_name%> member that holds a replicate for the region. This behavior is similar to the transactional behavior for partitioned regions, where the partitioned region transaction is forwarded to a member that hosts the primary for the partitioned region update.
 
-The limitation for transactions on normal, preloaded or or empty regions is that, when consistency checking is enabled, a transaction cannot perform a `localDestroy` or `localInvalidate` operation against the region. Geode throws an `UnsupportedOperationInTransactionException` exception in such cases. An application should use a `Destroy` or `Invalidate` operation in place of a `localDestroy` or `localInvalidate` when consistency checks are enabled.
+The limitation for transactions on normal, preloaded or empty regions is that, when consistency checking is enabled, a transaction cannot perform a `localDestroy` or `localInvalidate` operation against the region. <%=vars.product_name%> throws an `UnsupportedOperationInTransactionException` exception in such cases. An application should use a `Destroy` or `Invalidate` operation in place of a `localDestroy` or `localInvalidate` when consistency checks are enabled.
 
 

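The resolution rule described in the documentation above (a larger version stamp wins; identical stamps tie-break on the highest membership ID) can be sketched in self-contained Java (names hypothetical, not Geode's internal classes):

```java
public class VersionCheckSketch {

  // Per-entry version info: which member last wrote it, at what version.
  static final class VersionStamp {
    final int memberId;
    final int version;
    VersionStamp(int memberId, int version) {
      this.memberId = memberId;
      this.version = version;
    }
  }

  /** Returns true if the incoming update should be applied, false if discarded. */
  static boolean shouldApply(VersionStamp local, VersionStamp incoming) {
    if (incoming.version > local.version) {
      return true;  // newer update: apply
    }
    if (incoming.version < local.version) {
      return false; // out-of-order update: discard
    }
    // Concurrent update: identical version stamps; highest membership ID wins.
    return incoming.memberId > local.memberId;
  }

  public static void main(String[] args) {
    VersionStamp c2 = new VersionStamp(3, 2); // entry X at version C2
    VersionStamp a3 = new VersionStamp(1, 3); // member A's concurrent update
    VersionStamp c3 = new VersionStamp(3, 3); // member C's concurrent update

    System.out.println(shouldApply(c2, a3)); // newer version stamp: apply
    System.out.println(shouldApply(a3, c3)); // same version, member 3 > 1: apply
    System.out.println(shouldApply(c3, a3)); // same version, member 1 < 3: discard
  }
}
```

With the document's example, member B ends up applying whichever of A3/C3 carries the higher membership ID, regardless of arrival order, which is how all three members converge on the same value.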
http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/distributed_regions/how_region_versioning_works_wan.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/distributed_regions/how_region_versioning_works_wan.html.md.erb b/geode-docs/developing/distributed_regions/how_region_versioning_works_wan.html.md.erb
index 275d496..b939ea8 100644
--- a/geode-docs/developing/distributed_regions/how_region_versioning_works_wan.html.md.erb
+++ b/geode-docs/developing/distributed_regions/how_region_versioning_works_wan.html.md.erb
@@ -19,21 +19,21 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-When two or more Geode systems are configured to distribute events over a WAN, each system performs local consistency checking before it distributes an event to a configured gateway sender. Discarded events are not distributed across the WAN.
+When two or more <%=vars.product_name%> systems are configured to distribute events over a WAN, each system performs local consistency checking before it distributes an event to a configured gateway sender. Discarded events are not distributed across the WAN.
 
-Regions can also be configured to distribute updates to other Geode clusters over a WAN. With a distributed WAN configuration, multiple gateway senders asynchronously queue and send region updates to another Geode cluster. It is possible for multiple sites to send updates to the same region entry at the same time. It is also possible that, due to a slow WAN connection, a cluster might receive region updates after a considerable delay, and after it has applied more recent updates to a region. To ensure that WAN-replicated regions eventually reach a consistent state, Geode first ensures that each cluster performs consistency checking to regions before queuing updates to a gateway sender for WAN distribution. In order words, region conflicts are first detected and resolved in the local cluster, using the techniques described in the previous sections.
+Regions can also be configured to distribute updates to other <%=vars.product_name%> clusters over a WAN. With a distributed WAN configuration, multiple gateway senders asynchronously queue and send region updates to another <%=vars.product_name%> cluster. It is possible for multiple sites to send updates to the same region entry at the same time. It is also possible that, due to a slow WAN connection, a cluster might receive region updates after a considerable delay, and after it has applied more recent updates to a region. To ensure that WAN-replicated regions eventually reach a consistent state, <%=vars.product_name%> first ensures that each cluster performs consistency checking on regions before queuing updates to a gateway sender for WAN distribution. In other words, region conflicts are first detected and resolved in the local cluster, using the techniques described in the previous sections.
 
-When a Geode cluster in a WAN configuration receives a distributed update, conflict checking is performed to ensure that all sites apply updates in the same way. This ensures that regions eventually reach a consistent state across all Geode clusters. The default conflict checking behavior for WAN-replicated regions is summarized as follows:
+When a <%=vars.product_name%> cluster in a WAN configuration receives a distributed update, conflict checking is performed to ensure that all sites apply updates in the same way. This ensures that regions eventually reach a consistent state across all <%=vars.product_name%> clusters. The default conflict checking behavior for WAN-replicated regions is summarized as follows:
 
--   If an update is received from the same Geode cluster that last updated the region entry, then there is no conflict and the update is applied.
--   If an update is received from a different Geode cluster than the one that last updated the region entry, then a potential conflict exists. A cluster applies the update only when the update has a timestamp that is later than the timestamp currently recorded in the cache.
+-   If an update is received from the same <%=vars.product_name%> cluster that last updated the region entry, then there is no conflict and the update is applied.
+-   If an update is received from a different <%=vars.product_name%> cluster than the one that last updated the region entry, then a potential conflict exists. A cluster applies the update only when the update has a timestamp that is later than the timestamp currently recorded in the cache.
 
 **Note:**
-If you use the default conflict checking feature for WAN deployments, you must ensure that all Geode members in all clusters synchronize their system clocks. For example, use a common NTP server for all Geode members that participate in a WAN deployment.
+If you use the default conflict checking feature for WAN deployments, you must ensure that all <%=vars.product_name%> members in all clusters synchronize their system clocks. For example, use a common NTP server for all <%=vars.product_name%> members that participate in a WAN deployment.
 
 As an alternative to the default conflict checking behavior for WAN deployments, you can develop and deploy a custom conflict resolver for handling region events that are distributed over a WAN. Using a custom resolver enables you to handle conflicts using criteria other than, or in addition to, timestamp information. For example, you might always prioritize updates that originate from a particular site, given that the timestamp value is within a certain range.
 
-When a gateway sender distributes an event to another Geode site, it adds the distributed system ID of the local cluster, as well as a timestamp for the event. In a default configuration, the cluster that receives the event examines the timestamp to determine whether or not the event should be applied. If the timestamp of the update is earlier than the local timestamp, the cluster discards the event. If the timestamp is the same as the local timestamp, then the entry having the highest distributed system ID is applied (or kept).
+When a gateway sender distributes an event to another <%=vars.product_name%> site, it adds the distributed system ID of the local cluster, as well as a timestamp for the event. In a default configuration, the cluster that receives the event examines the timestamp to determine whether or not the event should be applied. If the timestamp of the update is earlier than the local timestamp, the cluster discards the event. If the timestamp is the same as the local timestamp, then the entry having the highest distributed system ID is applied (or kept).
 
 You can override the default consistency checking for WAN events by installing a conflict resolver plug-in for the region. If a conflict resolver is installed, then any event that can potentially cause a conflict (any event that originated from a different distributed system ID than the ID that last modified the entry) is delivered to the conflict resolver. The resolver plug-in then makes the sole determination for which update to apply or keep.
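The default rule just described (newer timestamp wins, with ties broken by the highest distributed system ID) can be sketched as plain Java logic. This is an illustrative helper, not the actual product code:

```java
// Sketch of the default WAN conflict rule for updates arriving from a
// *different* distributed system than the one that last modified the entry.
// Illustrative only; same-site updates are applied without this check.
public class WanConflictRule {
    public static boolean shouldApply(long newTimestamp, int newDsId,
                                      long currentTimestamp, int currentDsId) {
        if (newTimestamp > currentTimestamp) {
            return true;                   // later timestamp wins
        }
        if (newTimestamp == currentTimestamp) {
            return newDsId > currentDsId;  // tie broken by highest DS ID
        }
        return false;                      // earlier update is discarded
    }
}
```

A custom conflict resolver plug-in replaces exactly this decision with application-defined criteria.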
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/distributed_regions/locking_in_global_regions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/distributed_regions/locking_in_global_regions.html.md.erb b/geode-docs/developing/distributed_regions/locking_in_global_regions.html.md.erb
index 2dc0c8a..313a19b 100644
--- a/geode-docs/developing/distributed_regions/locking_in_global_regions.html.md.erb
+++ b/geode-docs/developing/distributed_regions/locking_in_global_regions.html.md.erb
@@ -26,7 +26,7 @@ In regions with global scope, locking helps ensure cache consistency.
 
 Locking of regions and entries is done in two ways:
 
-1.  **Implicit**. Geode automatically locks global regions and their data entries during most operations. Region invalidation and destruction do not acquire locks.
+1.  **Implicit**. <%=vars.product_name%> automatically locks global regions and their data entries during most operations. Region invalidation and destruction do not acquire locks.
 2.  **Explicit**. You can use the API to explicitly lock the region and its entries. Do this to guarantee atomicity in tasks with multi-step distributed operations. The `Region` methods `org.apache.geode.cache.Region.getDistributedLock` and `org.apache.geode.cache.Region.getRegionDistributedLock` return instances of `java.util.concurrent.locks.Lock` for a region and a specified key.
 
     **Note:**

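The explicit locking described above follows the standard lock/try/finally pattern. The sketch below is self-contained: a `ReentrantLock` stands in for the `Lock` that `Region.getDistributedLock` would return against a real cache:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ExplicitLockSketch {
    // In a real cache: Lock entryLock = region.getDistributedLock(key);
    // A ReentrantLock is used here so the sketch runs without a cluster.
    public static String updateAtomically(Lock entryLock, String oldValue) {
        entryLock.lock();
        try {
            // multi-step operation that must appear atomic to other members
            return oldValue + "-updated";
        } finally {
            entryLock.unlock();  // always release, even if the update fails
        }
    }

    public static void main(String[] args) {
        Lock lock = new ReentrantLock();
        System.out.println(updateAtomically(lock, "v1")); // prints v1-updated
    }
}
```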
http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/distributed_regions/managing_distributed_regions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/distributed_regions/managing_distributed_regions.html.md.erb b/geode-docs/developing/distributed_regions/managing_distributed_regions.html.md.erb
index 7d4cf37..0149d96 100644
--- a/geode-docs/developing/distributed_regions/managing_distributed_regions.html.md.erb
+++ b/geode-docs/developing/distributed_regions/managing_distributed_regions.html.md.erb
@@ -57,7 +57,7 @@ Before you begin, understand [Basic Configuration and Programming](../../basic_c
     </region-attributes>
     ```
 
-4.  If you are using `global` scope, program any explicit locking you need in addition to the automated locking provided by Geode.
+4.  If you are using `global` scope, program any explicit locking you need in addition to the automated locking provided by <%=vars.product_name%>.
 
 ## <a id="configure_distributed_region__section_6F53FB58B8A84D0F8086AFDB08A649F9" class="no-quick-link"></a>Local Destroy and Invalidate in the Replicated Region
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/distributed_regions/region_entry_versions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/distributed_regions/region_entry_versions.html.md.erb b/geode-docs/developing/distributed_regions/region_entry_versions.html.md.erb
index 9aa8dde..53d1483 100644
--- a/geode-docs/developing/distributed_regions/region_entry_versions.html.md.erb
+++ b/geode-docs/developing/distributed_regions/region_entry_versions.html.md.erb
@@ -22,29 +22,29 @@ limitations under the License.
 <a id="topic_CF2798D3E12647F182C2CEC4A46E2045"></a>
 
 
-Geode ensures that all copies of a region eventually reach a consistent state on all members and clients that host the region, including Geode members that distribute region events.
+<%=vars.product_name%> ensures that all copies of a region eventually reach a consistent state on all members and clients that host the region, including <%=vars.product_name%> members that distribute region events.
 
--   **[Consistency Checking by Region Type](../../developing/distributed_regions/how_region_versioning_works.html#topic_7A4B6C6169BD4B1ABD356294F744D236)**
+-   **[Consistency Checking by Region Type](how_region_versioning_works.html#topic_7A4B6C6169BD4B1ABD356294F744D236)**
 
-    Geode performs different consistency checks depending on the type of region you have configured.
+    <%=vars.product_name%> performs different consistency checks depending on the type of region you have configured.
 
--   **[Configuring Consistency Checking](../../developing/distributed_regions/how_region_versioning_works.html#topic_B64891585E7F4358A633C792F10FA23E)**
+-   **[Configuring Consistency Checking](how_region_versioning_works.html#topic_B64891585E7F4358A633C792F10FA23E)**
 
-    Geode enables consistency checking by default. You cannot disable consistency checking for persistent regions. For all other regions, you can explicitly enable or disable consistency checking by setting the `concurrency-checks-enabled` region attribute in `cache.xml` to "true" or "false."
+    <%=vars.product_name%> enables consistency checking by default. You cannot disable consistency checking for persistent regions. For all other regions, you can explicitly enable or disable consistency checking by setting the `concurrency-checks-enabled` region attribute in `cache.xml` to "true" or "false."
 
--   **[Overhead for Consistency Checks](../../developing/distributed_regions/how_region_versioning_works.html#topic_0BDACA590B2C4974AC9C450397FE70B2)**
+-   **[Overhead for Consistency Checks](how_region_versioning_works.html#topic_0BDACA590B2C4974AC9C450397FE70B2)**
 
     Consistency checking requires additional overhead for storing and distributing version and timestamp information, as well as for maintaining destroyed entries for a period of time to meet consistency requirements.
 
--   **[How Consistency Checking Works for Replicated Regions](../../developing/distributed_regions/how_region_versioning_works.html#topic_C5B74CCDD909403C815639339AA03758)**
+-   **[How Consistency Checking Works for Replicated Regions](how_region_versioning_works.html#topic_C5B74CCDD909403C815639339AA03758)**
 
-    Each region stores version and timestamp information for use in conflict detection. Geode members use the recorded information to detect and resolve conflicts consistently before applying a distributed update.
+    Each region stores version and timestamp information for use in conflict detection. <%=vars.product_name%> members use the recorded information to detect and resolve conflicts consistently before applying a distributed update.
 
--   **[How Destroy and Clear Operations Are Resolved](../../developing/distributed_regions/how_region_versioning_works.html#topic_321B05044B6641FCAEFABBF5066BD399)**
+-   **[How Destroy and Clear Operations Are Resolved](how_region_versioning_works.html#topic_321B05044B6641FCAEFABBF5066BD399)**
 
-    When consistency checking is enabled for a region, a Geode member does not immediately remove an entry from the region when an application destroys the entry. Instead, the member retains the entry with its current version stamp for a period of time in order to detect possible conflicts with operations that have occurred. The retained entry is referred to as a *tombstone*. Geode retains tombstones for partitioned regions and non-replicated regions as well as for replicated regions, in order to provide consistency.
+    When consistency checking is enabled for a region, a <%=vars.product_name%> member does not immediately remove an entry from the region when an application destroys the entry. Instead, the member retains the entry with its current version stamp for a period of time in order to detect possible conflicts with operations that have occurred. The retained entry is referred to as a *tombstone*. <%=vars.product_name%> retains tombstones for partitioned regions and non-replicated regions as well as for replicated regions, in order to provide consistency.
 
--   **[Transactions with Consistent Regions](../../developing/distributed_regions/how_region_versioning_works.html#topic_32ACFA5542C74F3583ECD30467F352B0)**
+-   **[Transactions with Consistent Regions](how_region_versioning_works.html#topic_32ACFA5542C74F3583ECD30467F352B0)**
 
     A transaction that modifies a region having consistency checking enabled generates all necessary version information for region updates when the transaction commits.
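As a reference point, the `concurrency-checks-enabled` attribute mentioned in the list above is set on `<region-attributes>` in `cache.xml`. A minimal, illustrative fragment:

```
<region name="exampleRegion">
  <region-attributes concurrency-checks-enabled="true">
  </region-attributes>
</region>
```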
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/chapter_overview.html.md.erb b/geode-docs/developing/events/chapter_overview.html.md.erb
index f5b46f4..1a26c08 100644
--- a/geode-docs/developing/events/chapter_overview.html.md.erb
+++ b/geode-docs/developing/events/chapter_overview.html.md.erb
@@ -19,26 +19,26 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode provides versatile and reliable event distribution and handling for your cached data and system member events.
+<%=vars.product_name%> provides versatile and reliable event distribution and handling for your cached data and system member events.
 
--   **[How Events Work](../../developing/events/how_events_work.html)**
+-   **[How Events Work](how_events_work.html)**
 
+    Members in your <%=vars.product_name%> distributed system receive cache updates from other members through cache events. The other members can be peers to the member, clients or servers, or other distributed systems.
+    Members in your <%=vars.product_name%> distributed system receive cache updates from other members through cache events. The other members can be peers to the member, clients or servers or other distributed systems.
 
--   **[Implementing Geode Event Handlers](../../developing/events/event_handler_overview.html)**
+-   **[Implementing <%=vars.product_name%> Event Handlers](event_handler_overview.html)**
 
     You can specify event handlers for region and region entry operations and for administrative events.
 
--   **[Configuring Peer-to-Peer Event Messaging](../../developing/events/configure_p2p_event_messaging.html)**
+-   **[Configuring Peer-to-Peer Event Messaging](configure_p2p_event_messaging.html)**
 
     You can receive events from distributed system peers for any region that is not a local region. Local regions receive only local cache events.
 
--   **[Configuring Client/Server Event Messaging](../../developing/events/configure_client_server_event_messaging.html)**
+-   **[Configuring Client/Server Event Messaging](configure_client_server_event_messaging.html)**
 
     You can receive events from your servers for server-side cache events and query result changes.
 
--   **[Configuring Multi-Site (WAN) Event Queues](../../developing/events/configure_multisite_event_messaging.html)**
+-   **[Configuring Multi-Site (WAN) Event Queues](configure_multisite_event_messaging.html)**
 
-    In a multi-site (WAN) installation, Geode uses gateway sender queues to distribute events for regions that are configured with a gateway sender. AsyncEventListeners also use an asynchronous event queue to distribute events for configured regions. This section describes additional options for configuring the event queues that are used by gateway senders or AsyncEventListener implementations.
+    In a multi-site (WAN) installation, <%=vars.product_name%> uses gateway sender queues to distribute events for regions that are configured with a gateway sender. AsyncEventListeners also use an asynchronous event queue to distribute events for configured regions. This section describes additional options for configuring the event queues that are used by gateway senders or AsyncEventListener implementations.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/configure_client_server_event_messaging.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/configure_client_server_event_messaging.html.md.erb b/geode-docs/developing/events/configure_client_server_event_messaging.html.md.erb
index ebd4a3a..77701bd 100644
--- a/geode-docs/developing/events/configure_client_server_event_messaging.html.md.erb
+++ b/geode-docs/developing/events/configure_client_server_event_messaging.html.md.erb
@@ -72,10 +72,10 @@ To receive entry events in the client from the server:
     4.  To have events enqueued for your clients during client downtime, configure durable client/server messaging.
     5.  Write any continuous queries (CQs) that you want to run to receive continuously streaming updates to client queries. CQ events do not update the client cache. If you have dependencies between CQs and/or interest registrations, so that you want the two types of subscription events to arrive as closely together as possible on the client, use a single server pool for everything. Using different pools can lead to time differences in the delivery of events because the pools might use different servers to process and deliver the event messages.

 
--   **[Configuring Highly Available Servers](../../developing/events/configuring_highly_available_servers.html)**
+-   **[Configuring Highly Available Servers](configuring_highly_available_servers.html)**
 
--   **[Implementing Durable Client/Server Messaging](../../developing/events/implementing_durable_client_server_messaging.html)**
+-   **[Implementing Durable Client/Server Messaging](implementing_durable_client_server_messaging.html)**
 
--   **[Tuning Client/Server Event Messaging](../../developing/events/tune_client_server_event_messaging.html)**
+-   **[Tuning Client/Server Event Messaging](tune_client_server_event_messaging.html)**
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/configure_multisite_event_messaging.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/configure_multisite_event_messaging.html.md.erb b/geode-docs/developing/events/configure_multisite_event_messaging.html.md.erb
index 5756652..9fb887a 100644
--- a/geode-docs/developing/events/configure_multisite_event_messaging.html.md.erb
+++ b/geode-docs/developing/events/configure_multisite_event_messaging.html.md.erb
@@ -19,20 +19,20 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-In a multi-site (WAN) installation, Geode uses gateway sender queues to distribute events for regions that are configured with a gateway sender. AsyncEventListeners also use an asynchronous event queue to distribute events for configured regions. This section describes additional options for configuring the event queues that are used by gateway senders or AsyncEventListener implementations.
+In a multi-site (WAN) installation, <%=vars.product_name%> uses gateway sender queues to distribute events for regions that are configured with a gateway sender. AsyncEventListeners also use an asynchronous event queue to distribute events for configured regions. This section describes additional options for configuring the event queues that are used by gateway senders or AsyncEventListener implementations.
 
 <a id="configure_multisite_event_messaging__section_1BBF77E166E84F7CA110385FD03D8453"></a>
 Before you begin, set up your multi-site (WAN) installation or configure asynchronous event queues and AsyncEventListener implementations. See [Configuring a Multi-site (WAN) System](../../topologies_and_comm/multi_site_configuration/setting_up_a_multisite_system.html#setting_up_a_multisite_system) or [Implementing an AsyncEventListener for Write-Behind Cache Event Handling](implementing_write_behind_event_handler.html#implementing_write_behind_cache_event_handling).
 
--   **[Persisting an Event Queue](../../developing/events/configuring_highly_available_gateway_queues.html)**
+-   **[Persisting an Event Queue](configuring_highly_available_gateway_queues.html)**
 
     You can configure a gateway sender queue or an asynchronous event queue to persist data to disk similar to the way in which replicated regions are persisted.
 
--   **[Configuring Dispatcher Threads and Order Policy for Event Distribution](../../developing/events/configuring_gateway_concurrency_levels.html)**
+-   **[Configuring Dispatcher Threads and Order Policy for Event Distribution](configuring_gateway_concurrency_levels.html)**
 
-    By default, Geode uses multiple dispatcher threads to process region events simultaneously in a gateway sender queue for distribution between sites, or in an asynchronous event queue for distributing events for write-behind caching. With serial queues, you can also configure the ordering policy for dispatching those events.
+    By default, <%=vars.product_name%> uses multiple dispatcher threads to process region events simultaneously in a gateway sender queue for distribution between sites, or in an asynchronous event queue for distributing events for write-behind caching. With serial queues, you can also configure the ordering policy for dispatching those events.
 
--   **[Conflating Events in a Queue](../../developing/events/conflate_multisite_gateway_queue.html)**
+-   **[Conflating Events in a Queue](conflate_multisite_gateway_queue.html)**
 
     Conflating a queue improves distribution performance. When conflation is enabled, only the latest queued value is sent for a particular key.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/configuring_gateway_concurrency_levels.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/configuring_gateway_concurrency_levels.html.md.erb b/geode-docs/developing/events/configuring_gateway_concurrency_levels.html.md.erb
index e59d3b4..f637064 100644
--- a/geode-docs/developing/events/configuring_gateway_concurrency_levels.html.md.erb
+++ b/geode-docs/developing/events/configuring_gateway_concurrency_levels.html.md.erb
@@ -19,9 +19,9 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-By default, Geode uses multiple dispatcher threads to process region events simultaneously in a gateway sender queue for distribution between sites, or in an asynchronous event queue for distributing events for write-behind caching. With serial queues, you can also configure the ordering policy for dispatching those events.
+By default, <%=vars.product_name%> uses multiple dispatcher threads to process region events simultaneously in a gateway sender queue for distribution between sites, or in an asynchronous event queue for distributing events for write-behind caching. With serial queues, you can also configure the ordering policy for dispatching those events.
 
-By default, a gateway sender queue or asynchronous event queue uses 5 dispatcher threads per queue. This provides support for applications that have the ability to process queued events concurrently for distribution to another Geode site or listener. If your application does not require concurrent distribution, or if you do not have enough resources to support the requirements of multiple dispatcher threads, then you can configure a single dispatcher thread to process a queue.
+By default, a gateway sender queue or asynchronous event queue uses 5 dispatcher threads per queue. This provides support for applications that have the ability to process queued events concurrently for distribution to another <%=vars.product_name%> site or listener. If your application does not require concurrent distribution, or if you do not have enough resources to support the requirements of multiple dispatcher threads, then you can configure a single dispatcher thread to process a queue.
 
 -   [Using Multiple Dispatcher Threads to Process a Queue](configuring_gateway_concurrency_levels.html#concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_20E8EFCE89EB4DC7AA822D03C8E0F470)
 -   [Performance and Memory Considerations](configuring_gateway_concurrency_levels.html#concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_C4C83B5C0FDD4913BA128365EE7E4E35)
@@ -30,9 +30,9 @@ By default, a gateway sender queue or asynchronous event queue uses 5 dispatcher
 
 ## <a id="concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_20E8EFCE89EB4DC7AA822D03C8E0F470" class="no-quick-link"></a>Using Multiple Dispatcher Threads to Process a Queue
 
-When multiple dispatcher threads are configured for a parallel queue, Geode simply uses multiple threads to process the contents of each individual queue. The total number of queues that are created is still determined by the number of Geode members that host the region.
+When multiple dispatcher threads are configured for a parallel queue, <%=vars.product_name%> simply uses multiple threads to process the contents of each individual queue. The total number of queues that are created is still determined by the number of <%=vars.product_name%> members that host the region.
 
-When multiple dispatcher threads are configured for a serial queue, Geode creates an additional copy of the queue for each thread on each member that hosts the queue. To obtain the maximum throughput, increase the number of dispatcher threads until your network is saturated.
+When multiple dispatcher threads are configured for a serial queue, <%=vars.product_name%> creates an additional copy of the queue for each thread on each member that hosts the queue. To obtain the maximum throughput, increase the number of dispatcher threads until your network is saturated.
 
 The following diagram illustrates a serial gateway sender queue that is configured with multiple dispatcher threads.
 <img src="../../images/MultisiteConcurrency_WAN_Gateway.png" id="concept_6C52A037E39E4FD6AE4C6A982A4A1A85__image_093DAC58EBEE456485562C92CA79899F" class="image" width="624" />
@@ -48,8 +48,8 @@ When a serial gateway sender or an asynchronous event queue uses multiple dispat
 
 When using multiple `dispatcher-threads` (greater than 1) with a serial event queue, you can also configure the `order-policy` that those threads use to distribute events from the queue. The valid order policy values are:
 
--   **key (default)**. All updates to the same key are distributed in order. Geode preserves key ordering by placing all updates to the same key in the same dispatcher thread queue. You typically use key ordering when updates to entries have no relationship to each other, such as for an application that uses a single feeder to distribute stock updates to several other systems.
--   **thread**. All region updates from a given thread are distributed in order. Geode preserves thread ordering by placing all region updates from the same thread into the same dispatcher thread queue. In general, use thread ordering when updates to one region entry affect updates to another region entry.
+-   **key (default)**. All updates to the same key are distributed in order. <%=vars.product_name%> preserves key ordering by placing all updates to the same key in the same dispatcher thread queue. You typically use key ordering when updates to entries have no relationship to each other, such as for an application that uses a single feeder to distribute stock updates to several other systems.
+-   **thread**. All region updates from a given thread are distributed in order. <%=vars.product_name%> preserves thread ordering by placing all region updates from the same thread into the same dispatcher thread queue. In general, use thread ordering when updates to one region entry affect updates to another region entry.
 -   **partition**. All region events that share the same partitioning key are distributed in order. Specify partition ordering when applications use a [PartitionResolver](/releases/latest/javadoc/org/apache/geode/cache/PartitionResolver.html) to implement [custom partitioning](../partitioned_regions/using_custom_partition_resolvers.html). With partition ordering, all entries that share the same "partitioning key" (RoutingObject) are placed into the same dispatcher thread queue.
 
 You cannot configure the `order-policy` for a parallel event queue, because parallel queues cannot preserve event ordering for regions. Only the ordering of events for a given partition (or in a given queue of a distributed region) can be preserved.
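For example, a serial gateway sender with three dispatcher threads and the `thread` ordering policy could be created with gfsh as follows (the sender id and remote distributed system id are illustrative):

```
gfsh>create gateway-sender --id=sender1 \
  --remote-distributed-system-id=2 \
  --parallel=false \
  --dispatcher-threads=3 \
  --order-policy=thread
```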

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/configuring_highly_available_gateway_queues.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/configuring_highly_available_gateway_queues.html.md.erb b/geode-docs/developing/events/configuring_highly_available_gateway_queues.html.md.erb
index 28339e2..3f570ef 100644
--- a/geode-docs/developing/events/configuring_highly_available_gateway_queues.html.md.erb
+++ b/geode-docs/developing/events/configuring_highly_available_gateway_queues.html.md.erb
@@ -23,7 +23,7 @@ You can configure a gateway sender queue or an asynchronous event queue to persi
 
 <a id="configuring_highly_available_gateway_queues__section_7EB2A7E38B074AAAA06D22C59687CB8A"></a>
 Persisting a queue provides high availability for the event messaging that the sender performs. For example, if a persistent gateway sender queue exits for any reason, when the member that hosts the sender restarts it automatically reloads the queue and resumes sending messages. If an asynchronous event queue exits for any reason, write-behind caching can resume where it left off when the queue is brought back online.
-Geode persists an event queue if you set the `enable-persistence` attribute to true. The queue is persisted to the disk store specified in the queue's `disk-store-name` attribute, or to the default disk store if you do not specify a store name.
+<%=vars.product_name%> persists an event queue if you set the `enable-persistence` attribute to true. The queue is persisted to the disk store specified in the queue's `disk-store-name` attribute, or to the default disk store if you do not specify a store name.
 
 You must configure the event queue to use persistence if you are using persistent regions. The use of non-persistent event queues with persistent regions is not supported.
 
@@ -69,7 +69,7 @@ In the example below the gateway sender queue uses "diskStoreA" for persistence
     --maximum-queue-memory=100
     ```
 
-If you were to configure 10 dispatcher threads for the serial gateway sender, then the total maximum memory for the gateway sender queue would be 1000MB on each Geode member that hosted the sender, because Geode creates a separate copy of the queue per thread..
+If you were to configure 10 dispatcher threads for the serial gateway sender, then the total maximum memory for the gateway sender queue would be 1000MB on each <%=vars.product_name%> member that hosted the sender, because <%=vars.product_name%> creates a separate copy of the queue per thread.
 
 The following example shows a similar configuration for an asynchronous event queue:
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/configuring_highly_available_servers.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/configuring_highly_available_servers.html.md.erb b/geode-docs/developing/events/configuring_highly_available_servers.html.md.erb
index 48cd174..d78221d 100644
--- a/geode-docs/developing/events/configuring_highly_available_servers.html.md.erb
+++ b/geode-docs/developing/events/configuring_highly_available_servers.html.md.erb
@@ -50,6 +50,6 @@ The following table describes the different values for the subscription-redundan
 | &gt; 0                  | Sets the precise number of secondary servers to use for backup to the primary. |
 | -1                      | Every server that is not the primary is to be used as a secondary.             |
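Subscription redundancy is configured on the client's server pool. A minimal, illustrative `cache.xml` fragment (locator host and port are placeholders):

```
<pool name="examplePool" subscription-enabled="true" subscription-redundancy="1">
  <locator host="localhost" port="10334"/>
</pool>
```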
 
--   **[Highly Available Client/Server Event Messaging](../../developing/events/ha_event_messaging_whats_next.html)**
+-   **[Highly Available Client/Server Event Messaging](ha_event_messaging_whats_next.html)**
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/event_handler_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/event_handler_overview.html.md.erb b/geode-docs/developing/events/event_handler_overview.html.md.erb
index 22a053f..868a063 100644
--- a/geode-docs/developing/events/event_handler_overview.html.md.erb
+++ b/geode-docs/developing/events/event_handler_overview.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Implementing Geode Event Handlers
----
+<% set_title("Implementing", product_name, "Event Handlers") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/filtering_multisite_events.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/filtering_multisite_events.html.md.erb b/geode-docs/developing/events/filtering_multisite_events.html.md.erb
index 505cd9c..3ee6ac5 100644
--- a/geode-docs/developing/events/filtering_multisite_events.html.md.erb
+++ b/geode-docs/developing/events/filtering_multisite_events.html.md.erb
@@ -19,21 +19,21 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-You can optionally create gateway sender and/or gateway receiver filters to control which events are queued and distributed to a remote site, or to modify the data stream that is transmitted between Geode sites.
+You can optionally create gateway sender and/or gateway receiver filters to control which events are queued and distributed to a remote site, or to modify the data stream that is transmitted between <%=vars.product_name%> sites.
 
 You can implement and deploy two different types of filter for multi-site events:
 
 -   `GatewayEventFilter`. A `GatewayEventFilter` implementation determines whether a region event is placed in a gateway sender queue and/or whether an event in a gateway queue is distributed to a remote site. You can optionally add one or more `GatewayEventFilter` implementations to a gateway sender, either in the `cache.xml` configuration file or using the Java API.
 
-    Geode makes a synchronous call to the filter's `beforeEnqueue` method before it places a region event in the gateway sender queue. The filter returns a boolean value that specifies whether the event should be added to the queue.
+    <%=vars.product_name%> makes a synchronous call to the filter's `beforeEnqueue` method before it places a region event in the gateway sender queue. The filter returns a boolean value that specifies whether the event should be added to the queue.
 
-    Geode asynchronously calls the filter's `beforeTransmit` method to determine whether the gateway sender dispatcher thread should distribute the event to a remote gateway receiver.
+    <%=vars.product_name%> asynchronously calls the filter's `beforeTransmit` method to determine whether the gateway sender dispatcher thread should distribute the event to a remote gateway receiver.
 
-    For events that are distributed to another site, Geode calls the listener's `afterAcknowledgement` method to indicate that is has received an ack from the remote site after the event was received.
+    For events that are distributed to another site, <%=vars.product_name%> calls the listener's `afterAcknowledgement` method to indicate that it has received an ack from the remote site after the event was received.
 
--   GatewayTransportFilter. Use a `GatewayTransportFilter` implementation to process the TCP stream that sends a batch of events that is distributed from one Geode cluster to another over a WAN. A `GatewayTransportFilter` is typically used to perform encryption or compression on the data that distributed. You install the same `GatewayTransportFilter` implementation on both a gateway sender and gateway receiver.
+-   GatewayTransportFilter. Use a `GatewayTransportFilter` implementation to process the TCP stream that sends a batch of events that is distributed from one <%=vars.product_name%> cluster to another over a WAN. A `GatewayTransportFilter` is typically used to perform encryption or compression on the data that is distributed. You install the same `GatewayTransportFilter` implementation on both a gateway sender and gateway receiver.
 
-    When a gateway sender processes a batch of events for distribution, Geode delivers the stream to the `getInputStream` method of a configured `GatewayTransportFilter` implementation. The filter processes and returns the stream, which is then transmitted to the gateway receiver. When the gateway receiver receives the batch, Geode calls the `getOutputStream` method of a configured filter, which again processes and returns the stream so that the events can be applied in the local cluster.
+    When a gateway sender processes a batch of events for distribution, <%=vars.product_name%> delivers the stream to the `getInputStream` method of a configured `GatewayTransportFilter` implementation. The filter processes and returns the stream, which is then transmitted to the gateway receiver. When the gateway receiver receives the batch, <%=vars.product_name%> calls the `getOutputStream` method of a configured filter, which again processes and returns the stream so that the events can be applied in the local cluster.
 
 ## <a id="topic_E97BB68748F14987916CD1A50E4B4542__section_E20B4A8A98FD4EDAAA8C14B8059AA7F7" class="no-quick-link"></a>Configuring Multi-Site Event Filters
 

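The transport-filter behavior described above (compress on send, decompress on receive, same implementation installed on both sides) can be sketched with GZIP streams. The real interface is `org.apache.geode.cache.wan.GatewayTransportFilter`; the minimal stand-in interface below exists only so this example is self-contained and runnable.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Stand-in for org.apache.geode.cache.wan.GatewayTransportFilter, which
// exposes the same two stream-wrapping methods.
interface TransportFilter {
    InputStream getInputStream(InputStream in) throws IOException;
    OutputStream getOutputStream(OutputStream out) throws IOException;
}

public class GzipTransportFilter implements TransportFilter {
    @Override
    public InputStream getInputStream(InputStream in) throws IOException {
        return new GZIPInputStream(in);     // decompress the incoming batch
    }

    @Override
    public OutputStream getOutputStream(OutputStream out) throws IOException {
        return new GZIPOutputStream(out);   // compress the outgoing batch
    }

    // Round-trip helper used only to demonstrate the filter: compress a batch
    // as the sender side would, then decompress it as the receiver side would.
    public static byte[] roundTrip(byte[] batch) throws IOException {
        GzipTransportFilter filter = new GzipTransportFilter();
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (OutputStream out = filter.getOutputStream(compressed)) {
            out.write(batch);
        }
        ByteArrayOutputStream restored = new ByteArrayOutputStream();
        try (InputStream in =
                filter.getInputStream(new ByteArrayInputStream(compressed.toByteArray()))) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                restored.write(buf, 0, n);
            }
        }
        return restored.toByteArray();
    }
}
```

Because both directions use the same implementation, installing it on only one side of the WAN link would corrupt every batch; the symmetry is the point of the design.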
http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/how_cache_events_work.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/how_cache_events_work.html.md.erb b/geode-docs/developing/events/how_cache_events_work.html.md.erb
index e54371a..4492092 100644
--- a/geode-docs/developing/events/how_cache_events_work.html.md.erb
+++ b/geode-docs/developing/events/how_cache_events_work.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-When a region or entry operation is performed, Geode distributes the associated events in the distributed system according to system and cache configurations.
+When a region or entry operation is performed, <%=vars.product_name%> distributes the associated events in the distributed system according to system and cache configurations.
 
 <a id="how_cache_events_work__section_7864A275FDB549FD8E2D046DD59CB9F4"></a>
 Install a cache listener for a region in each system member that needs to receive notification of region and entry changes.
@@ -68,4 +68,4 @@ In the following figure:
 
 ## <a id="how_cache_events_work__section_B4DCA51DDF7F44699E7355277172BEF0" class="no-quick-link"></a>Managing Events in Multi-threaded Applications
 
-For partitioned regions, Geode guarantees ordering of events across threads, but for distributed regions it doesn’t. For multi-threaded applications that create distributed regions, you need to use your application synchronization to make sure that one operation completes before the next one begins. Distribution through the distributed-no-ack queue can work with multiple threads if you set the `conserve-sockets` attribute to true. Then the threads share one queue, preserving the order of the events in distributed regions. Different threads can invoke the same listener, so if you allow different threads to send events, it can result in concurrent invocations of the listener. This is an issue only if the threads have some shared state - if they are incrementing a serial number, for example, or adding their events to a log queue. Then you need to make your code thread safe.
+For partitioned regions, <%=vars.product_name%> guarantees ordering of events across threads, but for distributed regions it does not. For multi-threaded applications that create distributed regions, you must use application-level synchronization to make sure that one operation completes before the next one begins. Distribution through the distributed-no-ack queue can work with multiple threads if you set the `conserve-sockets` attribute to true. Then the threads share one queue, preserving the order of the events in distributed regions. Different threads can invoke the same listener, so if you allow different threads to send events, it can result in concurrent invocations of the listener. This is an issue only if the threads have some shared state, such as a serial number they increment or a log queue they append to; in that case you must make your code thread safe.

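The shared-state hazard described above (a listener incrementing a serial number from several threads) can be made safe without locks by using an atomic counter. This sketch models a listener callback; it is not the Geode `CacheListener` API, and the names are illustrative.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a listener that keeps shared state across concurrent invocations.
// AtomicLong makes the increment thread safe; a plain long field here could
// lose updates when multiple threads deliver events at once.
public class SerialNumberingListener {
    private final AtomicLong serialNumber = new AtomicLong();

    /** May be called concurrently; returns this event's serial number. */
    public long onEvent(String key) {
        return serialNumber.incrementAndGet();
    }

    /** Total serial numbers issued so far. */
    public long issued() {
        return serialNumber.get();
    }

    public static void main(String[] args) throws InterruptedException {
        SerialNumberingListener listener = new SerialNumberingListener();
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 1000; i++) {
                    listener.onEvent("key");
                }
            });
            threads[t].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(listener.issued()); // 4000: no increments are lost
    }
}
```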
http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/how_client_server_distribution_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/how_client_server_distribution_works.html.md.erb b/geode-docs/developing/events/how_client_server_distribution_works.html.md.erb
index be7e4f2..88e7698 100644
--- a/geode-docs/developing/events/how_client_server_distribution_works.html.md.erb
+++ b/geode-docs/developing/events/how_client_server_distribution_works.html.md.erb
@@ -122,7 +122,7 @@ Once interest is registered, the server continually monitors region activities a
 
 ## <a id="how_client_server_distribution_works__section_928BB60066414BEB9FAA7FB3120334A3" class="no-quick-link"></a>Server Failover
 
-When a server hosting a subscription queue fails, the queueing responsibilities pass to another server. How this happens depends on whether the new server is a secondary server. In any case, all failover activities are carried out automatically by the Geode system.
+When a server hosting a subscription queue fails, the queueing responsibilities pass to another server. How this happens depends on whether the new server is a secondary server. In any case, all failover activities are carried out automatically by the <%=vars.product_name%> system.
 
 -   **Non-HA failover:** The client fails over without high availability if it is not configured for redundancy or if all secondaries also fail before new secondaries can be initialized. As soon as it can attach to a server, the client goes through an automatic reinitialization process. In this process, the failover code on the client side silently destroys all entries of interest to the client and refetches them from the new server, essentially reinitializing the client cache from the new server’s cache. For the notify all configuration, this clears and reloads all of the entries for the client regions that are connected to the server. For notify by subscription, it clears and reloads only the entries in the region interest lists. To reduce failover noise, the events caused by the local entry destruction and refetching are blocked by the failover code and do not reach the client cache listeners. Because of this, your clients could receive some out-of-sequence events during and after a server failover. For example, entries that exist on the failed server and not on its replacement are destroyed and never recreated during a failover. Because the destruction events are blocked, the client ends up with entries removed from its cache with no associated destroy events.
 -   **HA failover:** If your client pool is configured with redundancy and a secondary server is available at the time the primary fails, the failover is invisible to the client. The secondary server resumes queueing activities as soon as the primary loss is detected. The secondary might resend a few events, which are discarded automatically by the client message tracking activities.

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/how_events_work.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/how_events_work.html.md.erb b/geode-docs/developing/events/how_events_work.html.md.erb
index 2ac899a..291bbb2 100644
--- a/geode-docs/developing/events/how_events_work.html.md.erb
+++ b/geode-docs/developing/events/how_events_work.html.md.erb
@@ -19,11 +19,11 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Members in your Geode distributed system receive cache updates from other members through cache events. The other members can be peers to the member, clients or servers or other distributed systems.
+Members in your <%=vars.product_name%> distributed system receive cache updates from other members through cache events. The other members can be peers in the same distributed system, clients or servers, or members of other distributed systems.
 
 ## <a id="how_events_work__section_6C75098DDBB84944ADE57F2088330D5A" class="no-quick-link"></a>Events Features
 
-These are the primary features of Geode events:
+These are the primary features of <%=vars.product_name%> events:
 
 -   Content-based events
 -   Asynchronous event notifications with conflation
@@ -46,7 +46,7 @@ Both kinds of events can be generated by a single member operation.
 **Note:**
 You can handle one of these categories of events in a single system member. You cannot handle both cache and administrative events in a single member.
 
-Because Geode maintains the order of administrative events and the order of cache events separately, using cache events and administrative events in a single process can cause unexpected results.
+Because <%=vars.product_name%> maintains the order of administrative events and the order of cache events separately, using cache events and administrative events in a single process can cause unexpected results.
 
 ## <a id="how_events_work__section_4BCDB22AB927478EBF1035B0DE230DD3" class="no-quick-link"></a>Event Cycle
 
@@ -92,20 +92,20 @@ During a cache operation, event handlers are called at various stages of the ope
 **Note:**
 An `EntryEvent` contains both the old value and the new value of the entry, which helps to indicate the value that was replaced by the cache operation on a particular key.
 
--   **[Peer-to-Peer Event Distribution](../../developing/events/how_cache_events_work.html)**
+-   **[Peer-to-Peer Event Distribution](how_cache_events_work.html)**
 
-    When a region or entry operation is performed, Geode distributes the associated events in the distributed system according to system and cache configurations.
+    When a region or entry operation is performed, <%=vars.product_name%> distributes the associated events in the distributed system according to system and cache configurations.
 
--   **[Client-to-Server Event Distribution](../../developing/events/how_client_server_distribution_works.html)**
+-   **[Client-to-Server Event Distribution](how_client_server_distribution_works.html)**
 
     Clients and servers distribute events according to client activities and according to interest registered by the client in server-side cache changes.
 
--   **[Multi-Site (WAN) Event Distribution](../../developing/events/how_multisite_distribution_works.html)**
+-   **[Multi-Site (WAN) Event Distribution](how_multisite_distribution_works.html)**
 
-    Geode distributes a subset of cache events between distributed systems, with a minimum impact on each system's performance. Events are distributed only for regions that you configure to use a gateway sender for distribution.
+    <%=vars.product_name%> distributes a subset of cache events between distributed systems, with a minimum impact on each system's performance. Events are distributed only for regions that you configure to use a gateway sender for distribution.
 
--   **[List of Event Handlers and Events](../../developing/events/list_of_event_handlers_and_events.html)**
+-   **[List of Event Handlers and Events](list_of_event_handlers_and_events.html)**
 
-    Geode provides many types of events and event handlers to help you manage your different data and application needs.
+    <%=vars.product_name%> provides many types of events and event handlers to help you manage your different data and application needs.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/how_multisite_distribution_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/how_multisite_distribution_works.html.md.erb b/geode-docs/developing/events/how_multisite_distribution_works.html.md.erb
index 7678e7a..ee518e7 100644
--- a/geode-docs/developing/events/how_multisite_distribution_works.html.md.erb
+++ b/geode-docs/developing/events/how_multisite_distribution_works.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode distributes a subset of cache events between distributed systems, with a minimum impact on each system's performance. Events are distributed only for regions that you configure to use a gateway sender for distribution.
+<%=vars.product_name%> distributes a subset of cache events between distributed systems, with a minimum impact on each system's performance. Events are distributed only for regions that you configure to use a gateway sender for distribution.
 
 ## <a id="how_multisite_distribution_works__section_A16562611E094C88B12BC149D5EEEEBA" class="no-quick-link"></a>Queuing Events for Distribution
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/implementing_cache_event_handlers.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/implementing_cache_event_handlers.html.md.erb b/geode-docs/developing/events/implementing_cache_event_handlers.html.md.erb
index 0e639e7..7afcc4d 100644
--- a/geode-docs/developing/events/implementing_cache_event_handlers.html.md.erb
+++ b/geode-docs/developing/events/implementing_cache_event_handlers.html.md.erb
@@ -22,7 +22,7 @@ limitations under the License.
 Depending on your installation and configuration, cache events can come from local operations, peers, servers, and remote sites. Event handlers register their interest in one or more events and are notified when the events occur.
 
 <a id="implementing_cache_event_handlers__section_9286E8C6B3C54089888E1680B4F43692"></a>
-For each type of handler, Geode provides a convenience class with empty stubs for the interface callback methods.
+For each type of handler, <%=vars.product_name%> provides a convenience class with empty stubs for the interface callback methods.
 
 **Note:**
 Write-behind cache listeners are created by extending the `AsyncEventListener` interface, and they are configured with an `AsyncEventQueue` that you assign to one or more regions.

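The note above (an `AsyncEventListener` configured through an `AsyncEventQueue` that you assign to regions) is typically wired up in `cache.xml`. The fragment below is illustrative only: the listener class name and disk store name are placeholders, not part of any shipped configuration.

```xml
<!-- Illustrative cache.xml fragment: a persistent write-behind queue wired to
     a region. com.example.MyAsyncEventListener and diskStoreA are placeholders. -->
<async-event-queue id="writeBehindQueue" persistent="true" disk-store-name="diskStoreA">
  <async-event-listener>
    <class-name>com.example.MyAsyncEventListener</class-name>
  </async-event-listener>
</async-event-queue>
<region name="customers" refid="PARTITION">
  <region-attributes async-event-queue-ids="writeBehindQueue"/>
</region>
```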
http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/implementing_durable_client_server_messaging.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/implementing_durable_client_server_messaging.html.md.erb b/geode-docs/developing/events/implementing_durable_client_server_messaging.html.md.erb
index 150a79d..a1d1792 100644
--- a/geode-docs/developing/events/implementing_durable_client_server_messaging.html.md.erb
+++ b/geode-docs/developing/events/implementing_durable_client_server_messaging.html.md.erb
@@ -54,7 +54,7 @@ Use one of the following methods:
 
 The `durable-client-id` indicates that the client is durable and gives the server an identifier to correlate the client to its durable messages. For a non-durable client, this id is an empty string. The ID can be any number that is unique among the clients attached to servers in the same distributed system.
 
-The `durable-client-timeout` tells the server how long to wait for client reconnect. When this timeout is reached, the server stops storing to the client's message queue and discards any stored messages. The default is 300 seconds. This is a tuning parameter. If you change it, take into account the normal activity of your application, the average size of your messages, and the level of risk you can handle, both in lost messages and in the servers' capacity to store enqueued messages. Assuming that no messages are being removed from the queue, how long can the server run before the queue reaches the maximum capacity? How many durable clients can the server handle? To assist with tuning, use the Geode message queue statistics for durable clients through the disconnect and reconnect cycles.
+The `durable-client-timeout` tells the server how long to wait for client reconnect. When this timeout is reached, the server stops storing to the client's message queue and discards any stored messages. The default is 300 seconds. This is a tuning parameter. If you change it, take into account the normal activity of your application, the average size of your messages, and the level of risk you can handle, both in lost messages and in the servers' capacity to store enqueued messages. Assuming that no messages are being removed from the queue, how long can the server run before the queue reaches the maximum capacity? How many durable clients can the server handle? To assist with tuning, use the <%=vars.product_name%> message queue statistics for durable clients through the disconnect and reconnect cycles.
 
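The two properties discussed above are set on the client, for example in `gemfire.properties`. The values below are illustrative, not recommendations; choose the timeout against your own queue capacity and message volume as described.

```properties
# gemfire.properties on the durable client (values are illustrative)
durable-client-id=31
# stop queueing and discard stored messages after 200 seconds offline
durable-client-timeout=200
```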
 ## <a id="implementing_durable_client_server_messaging__section_BB5DCCE0582E4FE8B62DE473512FC704" class="no-quick-link"></a>Configure Durable Subscriptions and Continuous Queries
 
@@ -169,7 +169,7 @@ During initialization, the client cache is not blocked from doing operations, so
 -   Client cache operations by the application.
 -   Callbacks triggered by replaying old events from the queue
 
-Geode handles the conflicts between the application and interest registrations so they do not create cache update conflicts. But you must program your event handlers so they don't conflict with current operations. This is true for all event handlers, but it is especially important for those used in durable clients. Your handlers may receive events well after the fact and you must ensure your programming takes that into account.
+<%=vars.product_name%> handles the conflicts between the application and interest registrations so they do not create cache update conflicts. But you must program your event handlers so they don't conflict with current operations. This is true for all event handlers, but it is especially important for those used in durable clients. Your handlers may receive events well after the fact and you must ensure your programming takes that into account.
 
 This figure shows the three concurrent procedures during the initialization process. The application begins operations immediately on the client (step 1), while the client’s cache ready message (also step 1) triggers a series of queue operations on the servers (starting with step 2 on the primary server). At the same time, the client registers interest (step 2 on the client) and receives a response from the server. Message B2 applies to an entry in Region A, so the cache listener handles B2’s event. Because B2 comes before the marker, the client does not apply the update to the cache.
 


[37/51] [abbrv] geode git commit: GEODE-3444: remove the redundant method calls.

Posted by kl...@apache.org.
GEODE-3444: remove the redundant method calls.


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/04867000
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/04867000
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/04867000

Branch: refs/heads/feature/GEODE-1279
Commit: 04867000f8ad33b34947c35278ad944380ff5f95
Parents: 1a67d46
Author: eshu <es...@pivotal.io>
Authored: Thu Aug 17 16:47:39 2017 -0700
Committer: eshu <es...@pivotal.io>
Committed: Thu Aug 17 16:47:39 2017 -0700

----------------------------------------------------------------------
 .../src/main/java/org/apache/geode/internal/cache/TXState.java   | 4 ----
 1 file changed, 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/04867000/geode-core/src/main/java/org/apache/geode/internal/cache/TXState.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/internal/cache/TXState.java b/geode-core/src/main/java/org/apache/geode/internal/cache/TXState.java
index b01dacf..662f7b0 100644
--- a/geode-core/src/main/java/org/apache/geode/internal/cache/TXState.java
+++ b/geode-core/src/main/java/org/apache/geode/internal/cache/TXState.java
@@ -1010,7 +1010,6 @@ public class TXState implements TXStateInterface {
             writer.beforeCommit(event);
           }
         } catch (TransactionWriterException twe) {
-          cleanup();
           throw new CommitConflictException(twe);
         } catch (VirtualMachineError err) {
           // cleanup(); this allocates objects so I don't think we can do it - that leaves the TX
@@ -1021,7 +1020,6 @@ public class TXState implements TXStateInterface {
           // now, so don't let this thread continue.
           throw err;
         } catch (Throwable t) {
-          cleanup(); // rollback the transaction!
           // Whenever you catch Error or Throwable, you must also
           // catch VirtualMachineError (see above). However, there is
           // _still_ a possibility that you are dealing with a cascading
@@ -1031,8 +1029,6 @@ public class TXState implements TXStateInterface {
           throw new CommitConflictException(t);
         }
       }
-
-
     } catch (CommitConflictException commitConflict) {
       cleanup();
       this.proxy.getTxMgr().noteCommitFailure(opStart, this.jtaLifeTime, this);


[12/51] [abbrv] geode git commit: GEODE-3314 - Refactoring of DLockService to improve developer QoL. This now closes #683

Posted by kl...@apache.org.
GEODE-3314 - Refactoring of DLockService to improve developer QoL. This now closes #683

* Write characterization tests for DLockService.
* Remove debugging code.
* Remove dead code.
* Remove comments.
* Extract the local lock granting into a separate function.

Between the characterization tests we've written and the existing DUnit
tests, the coverage should be fairly adequate.

Signed-off-by: Hitesh Khamesra <hk...@pivotal.io>
Signed-off-by: Galen O'Sullivan <go...@pivotal.io>


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/190cfed8
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/190cfed8
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/190cfed8

Branch: refs/heads/feature/GEODE-1279
Commit: 190cfed880da17b947eb520948866062b9aafe0b
Parents: a3c0eba
Author: Galen O'Sullivan <go...@pivotal.io>
Authored: Wed Aug 2 11:29:21 2017 -0700
Committer: Udo Kohlmeyer <uk...@pivotal.io>
Committed: Tue Aug 15 10:08:40 2017 -0700

----------------------------------------------------------------------
 .../internal/locks/DLockRequestProcessor.java   |   7 +
 .../internal/locks/DLockService.java            | 284 +++++--------------
 .../distributed/internal/locks/DLockToken.java  |  12 +-
 .../DLockServiceCharacterizationTests.java      | 124 ++++++++
 4 files changed, 211 insertions(+), 216 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/190cfed8/geode-core/src/main/java/org/apache/geode/distributed/internal/locks/DLockRequestProcessor.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/distributed/internal/locks/DLockRequestProcessor.java b/geode-core/src/main/java/org/apache/geode/distributed/internal/locks/DLockRequestProcessor.java
index 3f42adb..96f692b 100755
--- a/geode-core/src/main/java/org/apache/geode/distributed/internal/locks/DLockRequestProcessor.java
+++ b/geode-core/src/main/java/org/apache/geode/distributed/internal/locks/DLockRequestProcessor.java
@@ -196,6 +196,13 @@ public class DLockRequestProcessor extends ReplyProcessor21 {
     return this.response.leaseExpireTime;
   }
 
+  /**
+   * @param interruptible whether this lock request may be interrupted
+   * @param lockId the id used for this lock request
+   * @return whether the lock was obtained
+   * @throws InterruptedException only possible if interruptible is true.
+   */
   protected boolean requestLock(boolean interruptible, int lockId) throws InterruptedException {
     final boolean isDebugEnabled_DLS = logger.isTraceEnabled(LogMarker.DLS);
 

http://git-wip-us.apache.org/repos/asf/geode/blob/190cfed8/geode-core/src/main/java/org/apache/geode/distributed/internal/locks/DLockService.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/distributed/internal/locks/DLockService.java b/geode-core/src/main/java/org/apache/geode/distributed/internal/locks/DLockService.java
index 522b700..f0377b4 100644
--- a/geode-core/src/main/java/org/apache/geode/distributed/internal/locks/DLockService.java
+++ b/geode-core/src/main/java/org/apache/geode/distributed/internal/locks/DLockService.java
@@ -17,7 +17,6 @@ package org.apache.geode.distributed.internal.locks;
 
 import org.apache.geode.CancelCriterion;
 import org.apache.geode.CancelException;
-import org.apache.geode.InternalGemFireError;
 import org.apache.geode.InternalGemFireException;
 import org.apache.geode.StatisticsFactory;
 import org.apache.geode.SystemFailure;
@@ -74,17 +73,6 @@ public class DLockService extends DistributedLockService {
   public static final long NOT_GRANTOR_SLEEP = Long
       .getLong(DistributionConfig.GEMFIRE_PREFIX + "DLockService.notGrantorSleep", 100).longValue();
 
-  public static final boolean DEBUG_DISALLOW_NOT_HOLDER = Boolean
-      .getBoolean(DistributionConfig.GEMFIRE_PREFIX + "DLockService.debug.disallowNotHolder");
-
-  public static final boolean DEBUG_LOCK_REQUEST_LOOP = Boolean
-      .getBoolean(DistributionConfig.GEMFIRE_PREFIX + "DLockService.debug.disallowLockRequestLoop");
-
-  public static final int DEBUG_LOCK_REQUEST_LOOP_COUNT = Integer
-      .getInteger(
-          DistributionConfig.GEMFIRE_PREFIX + "DLockService.debug.disallowLockRequestLoopCount", 20)
-      .intValue();
-
   public static final boolean DEBUG_NONGRANTOR_DESTROY_LOOP = Boolean
       .getBoolean(DistributionConfig.GEMFIRE_PREFIX + "DLockService.debug.nonGrantorDestroyLoop");
 
@@ -93,9 +81,6 @@ public class DLockService extends DistributedLockService {
           DistributionConfig.GEMFIRE_PREFIX + "DLockService.debug.nonGrantorDestroyLoopCount", 20)
       .intValue();
 
-  public static final boolean DEBUG_ENFORCE_SAFE_EXIT =
-      Boolean.getBoolean(DistributionConfig.GEMFIRE_PREFIX + "DLockService.debug.enforceSafeExit");
-
   public static final boolean AUTOMATE_FREE_RESOURCES =
       Boolean.getBoolean(DistributionConfig.GEMFIRE_PREFIX + "DLockService.automateFreeResources");
 
@@ -1381,16 +1366,12 @@ public class DLockService extends DistributedLockService {
       final boolean disallowReentrant, final boolean disableAlerts) throws InterruptedException {
     checkDestroyed();
 
-    final boolean isDebugEnabled_DLS = logger.isTraceEnabled(LogMarker.DLS);
-
     boolean interrupted = Thread.interrupted();
     if (interrupted && interruptible) {
       throw new InterruptedException();
     }
 
-    boolean safeExit = true;
-    try { // try-block for abnormalExit and safeExit
-
+    try {
       long statStart = getStats().startLockWait();
       long startTime = getLockTimeStamp(dm);
 
@@ -1408,9 +1389,7 @@ public class DLockService extends DistributedLockService {
       if (waitLimit < 0)
         waitLimit = Long.MAX_VALUE;
 
-      if (isDebugEnabled_DLS) {
-        logger.trace(LogMarker.DLS, "{}, name: {} - entering lock()", this, name);
-      }
+      logger.trace(LogMarker.DLS, "{}, name: {} - entering lock()", this, name);
 
       DLockToken token = getOrCreateToken(name);
       boolean gotLock = false;
@@ -1433,29 +1412,7 @@ public class DLockService extends DistributedLockService {
         int lockId = -1;
         incActiveLocks();
 
-        int loopCount = 0;
         while (keepTrying) {
-          if (DEBUG_LOCK_REQUEST_LOOP) {
-            loopCount++;
-            if (loopCount > DEBUG_LOCK_REQUEST_LOOP_COUNT) {
-              Integer count = Integer.valueOf(DEBUG_LOCK_REQUEST_LOOP_COUNT);
-              String s =
-                  LocalizedStrings.DLockService_DEBUG_LOCKINTERRUPTIBLY_HAS_GONE_HOT_AND_LOOPED_0_TIMES
-                      .toLocalizedString(count);
-
-              InternalGemFireError e = new InternalGemFireError(s);
-              logger.error(LogMarker.DLS,
-                  LocalizedMessage.create(
-                      LocalizedStrings.DLockService_DEBUG_LOCKINTERRUPTIBLY_HAS_GONE_HOT_AND_LOOPED_0_TIMES,
-                      count),
-                  e);
-              throw e;
-            }
-            /*
-             * if (loopCount > 1) { Thread.sleep(1000); }
-             */
-          }
-
           checkDestroyed();
           interrupted = Thread.interrupted() || interrupted; // clear
           if (interrupted && interruptible) {
@@ -1469,10 +1426,8 @@ public class DLockService extends DistributedLockService {
           synchronized (token) {
             token.checkForExpiration();
             if (token.isLeaseHeldByCurrentThread()) {
-              if (isDebugEnabled_DLS) {
-                logger.trace(LogMarker.DLS, "{} , name: {} - lock() is reentrant: {}", this, name,
-                    token);
-              }
+              logger.trace(LogMarker.DLS, "{} , name: {} - lock() is reentrant: {}", this, name,
+                  token);
               reentrant = true;
               if (reentrant && disallowReentrant) {
                 throw new IllegalStateException(
@@ -1480,8 +1435,6 @@ public class DLockService extends DistributedLockService {
                         .toLocalizedString(new Object[] {Thread.currentThread(), token}));
               }
               recursionBefore = token.getRecursion();
-              leaseExpireTime = token.getLeaseExpireTime(); // moved here from processor null-check
-                                                            // under gotLock
               lockId = token.getLeaseId(); // keep lockId
               if (lockId < 0) {
                 // loop back around due to expiration
@@ -1500,156 +1453,48 @@ public class DLockService extends DistributedLockService {
             lockId = -1; // reset lockId back to -1
           }
 
-          DLockRequestProcessor processor = null;
-
-          // if reentrant w/ infinite lease TODO: remove false to restore this...
-          if (false && reentrant && leaseTimeMillis == Long.MAX_VALUE) {
-            // Optimization:
-            // thread is reentering lock and lease time is infinite so no
-            // need to trouble the poor grantor
-            gotLock = true;
-            // check for race condition...
-            Assert.assertTrue(token.isLeaseHeldByCurrentThread());
-          }
-
-          // non-reentrant or reentrant w/ non-infinite lease
-          else {
-            processor = createRequestProcessor(theLockGrantorId, name, threadId, startTime,
-                requestLeaseTime, requestWaitTime, reentrant, tryLock, disableAlerts);
-            if (reentrant) {
-              // check for race condition... reentrant expired already...
-              // related to bug 32765, but client-side... see bug 33402
-              synchronized (token) {
-                if (!token.isLeaseHeldByCurrentThread()) {
-                  reentrant = false;
-                  recursionBefore = -1;
-                  token.checkForExpiration();
-                }
+          DLockRequestProcessor processor = createRequestProcessor(theLockGrantorId, name, threadId,
+              startTime, requestLeaseTime, requestWaitTime, reentrant, tryLock, disableAlerts);
+          if (reentrant) {
+            // check for race condition... reentrant expired already...
+            // related to bug 32765, but client-side... see bug 33402
+            synchronized (token) {
+              if (!token.isLeaseHeldByCurrentThread()) {
+                reentrant = false;
+                recursionBefore = -1;
+                token.checkForExpiration();
               }
-            } else {
-              // set lockId since this is the first granting (non-reentrant)
-              lockId = processor.getProcessorId();
             }
+          } else {
+            // set lockId since this is the first granting (non-reentrant)
+            lockId = processor.getProcessorId();
+          }
 
-            try {
-              safeExit = false;
-              gotLock = processor.requestLock(interruptible, lockId);
-            } catch (InterruptedException e) { // LOST INTERRUPT
-              if (interruptible) {
-                // TODO: BUG 37158: this can cause a stuck lock
-                throw e;
-              } else {
-                interrupted = true;
-                Assert.assertTrue(false,
-                    "Non-interruptible lock is trying to throw InterruptedException");
-              }
-            }
-            if (isDebugEnabled_DLS) {
-              logger.trace(LogMarker.DLS, "Grantor {} replied {}", theLockGrantorId,
-                  processor.getResponseCodeString());
-            }
-          } // else: non-reentrant or reentrant w/ non-infinite lease
+          gotLock = processor.requestLock(interruptible, lockId); // can throw
+                                                                  // InterruptedException
+
+          logger.trace(LogMarker.DLS, "Grantor {} replied {}", theLockGrantorId,
+              processor.getResponseCodeString());
 
           if (gotLock) {
-            // if (processor != null) (cannot be null)
-            { // TODO: can be null after restoring above optimization
-              // non-reentrant lock needs to getLeaseExpireTime
-              leaseExpireTime = processor.getLeaseExpireTime();
-            }
+            leaseExpireTime = processor.getLeaseExpireTime();
             int recursion = recursionBefore + 1;
 
-            boolean granted = false;
-            boolean needToReleaseOrphanedGrant = false;
-
-            Assert.assertHoldsLock(this.destroyLock, false);
-            synchronized (this.lockGrantorIdLock) {
-              if (!checkLockGrantorId(theLockGrantorId)) {
-                safeExit = true;
-                // race: grantor changed
-                if (isDebugEnabled_DLS) {
-                  logger.trace(LogMarker.DLS,
-                      "Cannot honor grant from {} because {} is now a grantor.", theLockGrantorId,
-                      this.lockGrantorId);
-                }
-                continue;
-              } else if (isDestroyed()) {
-                // race: dls was destroyed
-                if (isDebugEnabled_DLS) {
-                  logger.trace(LogMarker.DLS,
-                      "Cannot honor grant from {} because this lock service has been destroyed.",
-                      theLockGrantorId);
-                }
-                needToReleaseOrphanedGrant = true;
-              } else {
-                safeExit = true;
-                synchronized (this.tokens) {
-                  checkDestroyed();
-                  Assert.assertTrue(token == basicGetToken(name));
-                  RemoteThread rThread =
-                      new RemoteThread(getDistributionManager().getId(), threadId);
-                  granted = token.grantLock(leaseExpireTime, lockId, recursion, rThread);
-                } // tokens sync
-              }
-            }
-
-            if (needToReleaseOrphanedGrant /* && processor != null */) {
-              processor.getResponse().releaseOrphanedGrant(this.dm);
-              safeExit = true;
+            if (!grantLocalDLockAfterObtainingRemoteLock(name, token, threadId, leaseExpireTime,
+                lockId, theLockGrantorId, processor, recursion)) {
               continue;
             }
 
-            if (!granted) {
-              Assert.assertTrue(granted, "Failed to perform client-side granting on " + token
-                  + " which was granted by " + theLockGrantorId);
-            }
-
-            // make sure token is THE instance in the map to avoid race with
-            // freeResources... ok to overwrite a newer instance too since only
-            // one thread will own the lock at a time
-            // synchronized (tokens) { // TODO: verify if this is needed
-            // synchronized (token) {
-            // if (tokens.put(name, token) == null) {
-            // getStats().incTokens(1);
-            // }
-            // }
-            // }
-
-            if (isDebugEnabled_DLS) {
-              logger.trace(LogMarker.DLS, "{}, name: {} - granted lock: {}", this, name, token);
-            }
+            logger.trace(LogMarker.DLS, "{}, name: {} - granted lock: {}", this, name, token);
             keepTrying = false;
-          } // gotLock is true
-
-          // grantor replied destroyed (getLock is false)
-          else if (processor.repliedDestroyed()) {
-            safeExit = true;
-            checkDestroyed();
-            // should have thrown LockServiceDestroyedException
+          } else if (processor.repliedDestroyed()) {
+            checkDestroyed(); // throws LockServiceDestroyedException
             Assert.assertTrue(isDestroyed(),
                 "Grantor reports service " + this + " is destroyed: " + name);
-          } // grantor replied destroyed
-
-          // grantor replied NOT_GRANTOR or departed (getLock is false)
-          else if (processor.repliedNotGrantor() || processor.hadNoResponse()) {
-            safeExit = true;
+          } else if (processor.repliedNotGrantor() || processor.hadNoResponse()) {
             notLockGrantorId(theLockGrantorId, 0, TimeUnit.MILLISECONDS);
             // keepTrying is still true... loop back around
-          } // grantor replied NOT_GRANTOR or departed
-
-          // grantor replied NOT_HOLDER for reentrant lock (getLock is false)
-          else if (processor.repliedNotHolder()) {
-            safeExit = true;
-            if (DEBUG_DISALLOW_NOT_HOLDER) {
-              String s = LocalizedStrings.DLockService_DEBUG_GRANTOR_REPORTS_NOT_HOLDER_FOR_0
-                  .toLocalizedString(token);
-              InternalGemFireError e = new InternalGemFireError(s);
-              logger.error(LogMarker.DLS,
-                  LocalizedMessage.create(
-                      LocalizedStrings.DLockService_DEBUG_GRANTOR_REPORTS_NOT_HOLDER_FOR_0, token),
-                  e);
-              throw e;
-            }
-
+          } else if (processor.repliedNotHolder()) {
             // fix part of bug 32765 - reentrant/expiration problem
             // probably expired... try to get non-reentrant lock
             reentrant = false;
@@ -1675,7 +1520,6 @@ public class DLockService extends DistributedLockService {
 
           // TODO: figure out when this else case can actually happen...
           else {
-            safeExit = true;
             // either dlock service is suspended or tryLock failed
             // fixed the math here... bug 32765
             if (waitLimit > token.getCurrentTime() + 20) {
@@ -1685,10 +1529,8 @@ public class DLockService extends DistributedLockService {
           }
 
         } // while (keepTrying)
-      } // try-block for end stats, token cleanup, and interrupt check
-
-      // finally-block for end stats, token cleanup, and interrupt check
-      finally {
+          // try-block for end stats, token cleanup, and interrupt check
+      } finally {
         getStats().endLockWait(statStart, gotLock);
 
         // cleanup token if failed to get lock
@@ -1711,26 +1553,50 @@ public class DLockService extends DistributedLockService {
         blockedOn.set(null);
       }
 
-      if (isDebugEnabled_DLS) {
-        logger.trace(LogMarker.DLS, "{}, name: {} - exiting lock() returning {}", this, name,
-            gotLock);
-      }
+      logger.trace(LogMarker.DLS, "{}, name: {} - exiting lock() returning {}", this, name,
+          gotLock);
       return gotLock;
-    } // try-block for abnormalExit and safeExit
-
-    // finally-block for abnormalExit and safeExit
-    finally {
-      if (isDebugEnabled_DLS) {
-        logger.trace(LogMarker.DLS, "{}, name: {} - exiting lock() without returning value", this,
-            name);
-      }
+    } finally {
+      logger.trace(LogMarker.DLS, "{}, name: {} - exiting lock() without returning value", this,
+          name);
       if (interrupted) {
         Thread.currentThread().interrupt();
       }
-      if (DEBUG_ENFORCE_SAFE_EXIT) {
-        Assert.assertTrue(safeExit);
+    }
+  }
+
+  private boolean grantLocalDLockAfterObtainingRemoteLock(Object name, DLockToken token,
+      int threadId, long leaseExpireTime, int lockId, LockGrantorId theLockGrantorId,
+      DLockRequestProcessor processor, int recursion) {
+    boolean needToReleaseOrphanedGrant = false;
+
+    Assert.assertHoldsLock(this.destroyLock, false);
+    synchronized (this.lockGrantorIdLock) {
+      if (!checkLockGrantorId(theLockGrantorId)) {
+        // race: grantor changed
+        logger.trace(LogMarker.DLS, "Cannot honor grant from {} because {} is now a grantor.",
+            theLockGrantorId, this.lockGrantorId);
+      } else if (isDestroyed()) {
+        // race: dls was destroyed
+        logger.trace(LogMarker.DLS,
+            "Cannot honor grant from {} because this lock service has been destroyed.",
+            theLockGrantorId);
+        needToReleaseOrphanedGrant = true;
+      } else {
+        synchronized (this.tokens) {
+          checkDestroyed();
+          Assert.assertTrue(token == basicGetToken(name));
+          RemoteThread rThread = new RemoteThread(getDistributionManager().getId(), threadId);
+          token.grantLock(leaseExpireTime, lockId, recursion, rThread);
+          return true;
+        } // tokens sync
       }
     }
+
+    if (needToReleaseOrphanedGrant) {
+      processor.getResponse().releaseOrphanedGrant(this.dm);
+    }
+    return false;
   }
 
   /**
@@ -2547,11 +2413,11 @@ public class DLockService extends DistributedLockService {
   /**
    * Called by grantor recovery to return set of locks held by this process. Synchronizes on
    * lockGrantorIdLock, tokens map, and each lock token.
-   * 
+   *
    * @param newlockGrantorId the newly recovering grantor
    */
-  Set getLockTokensForRecovery(LockGrantorId newlockGrantorId) {
-    Set heldLockSet = Collections.EMPTY_SET;
+  Set<DLockRemoteToken> getLockTokensForRecovery(LockGrantorId newlockGrantorId) {
+    Set<DLockRemoteToken> heldLockSet = Collections.EMPTY_SET;
 
     LockGrantorId currentLockGrantorId = null;
     synchronized (this.lockGrantorIdLock) {
@@ -2589,7 +2455,7 @@ public class DLockService extends DistributedLockService {
               // add token to heldLockSet
               else {
                 if (heldLockSet == Collections.EMPTY_SET) {
-                  heldLockSet = new HashSet();
+                  heldLockSet = new HashSet<>();
                 }
                 heldLockSet.add(DLockRemoteToken.createFromDLockToken(token));
               }

http://git-wip-us.apache.org/repos/asf/geode/blob/190cfed8/geode-core/src/main/java/org/apache/geode/distributed/internal/locks/DLockToken.java
----------------------------------------------------------------------
diff --git a/geode-core/src/main/java/org/apache/geode/distributed/internal/locks/DLockToken.java b/geode-core/src/main/java/org/apache/geode/distributed/internal/locks/DLockToken.java
index c67de67..3e85171 100644
--- a/geode-core/src/main/java/org/apache/geode/distributed/internal/locks/DLockToken.java
+++ b/geode-core/src/main/java/org/apache/geode/distributed/internal/locks/DLockToken.java
@@ -87,7 +87,8 @@ public class DLockToken {
   private Thread thread;
 
   /**
-   * Number of threads currently using this lock token.
+   * Number of usages of this lock token. usageCount = recursion + (# of threads waiting for this
+   * lock). It's weird, I know.
    */
   private int usageCount = 0;
 
@@ -230,10 +231,9 @@ public class DLockToken {
   // -------------------------------------------------------------------------
 
   /**
-   * Destroys this lock token. Caller must synchronize on this lock token.
+   * Destroys this lock token.
    */
   synchronized void destroy() {
-    // checkDestroyed();
     this.destroyed = true;
   }
 
@@ -302,14 +302,14 @@ public class DLockToken {
    * @param remoteThread identity of the leasing thread
    * @return true if lease for this lock token is successfully granted
    */
-  synchronized boolean grantLock(long newLeaseExpireTime, int newLeaseId, int newRecursion,
+  synchronized void grantLock(long newLeaseExpireTime, int newLeaseId, int newRecursion,
       RemoteThread remoteThread) {
 
     Assert.assertTrue(remoteThread != null);
     Assert.assertTrue(newLeaseId > -1, "Invalid attempt to grant lock with leaseId " + newLeaseId);
 
     checkDestroyed();
-    checkForExpiration();
+    checkForExpiration(); // TODO: this should throw.
 
     this.ignoreForRecovery = false;
     this.leaseExpireTime = newLeaseExpireTime;
@@ -321,8 +321,6 @@ public class DLockToken {
     if (logger.isTraceEnabled(LogMarker.DLS)) {
       logger.trace(LogMarker.DLS, "[DLockToken.grantLock.client] granted {}", this);
     }
-
-    return true;
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/geode/blob/190cfed8/geode-core/src/test/java/org/apache/geode/distributed/internal/locks/DLockServiceCharacterizationTests.java
----------------------------------------------------------------------
diff --git a/geode-core/src/test/java/org/apache/geode/distributed/internal/locks/DLockServiceCharacterizationTests.java b/geode-core/src/test/java/org/apache/geode/distributed/internal/locks/DLockServiceCharacterizationTests.java
new file mode 100644
index 0000000..ba300c4
--- /dev/null
+++ b/geode-core/src/test/java/org/apache/geode/distributed/internal/locks/DLockServiceCharacterizationTests.java
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.geode.distributed.internal.locks;
+
+import static org.apache.geode.distributed.ConfigurationProperties.MCAST_PORT;
+import static org.awaitility.Awaitility.await;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.cache.ExpirationAction;
+import org.apache.geode.cache.ExpirationAttributes;
+import org.apache.geode.cache.RegionShortcut;
+import org.apache.geode.cache.Scope;
+import org.apache.geode.distributed.DistributedLockService;
+import org.apache.geode.internal.cache.DistributedRegion;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.util.Properties;
+import java.util.concurrent.TimeUnit;
+
+public class DLockServiceCharacterizationTests {
+  private Cache cache;
+  private DistributedRegion testRegion;
+  private DistributedLockService dLockService;
+
+  @Before
+  public void setUp() {
+    Properties properties = new Properties();
+    properties.setProperty(MCAST_PORT, "0");
+
+    cache = new CacheFactory(properties).create();
+    testRegion = (DistributedRegion) cache.createRegionFactory(RegionShortcut.REPLICATE)
+        .setScope(Scope.GLOBAL)
+        .setEntryTimeToLive(new ExpirationAttributes(1, ExpirationAction.DESTROY))
+        .create("testRegion");
+    testRegion.becomeLockGrantor();
+
+    dLockService = DLockService.create("testService", cache.getDistributedSystem());
+  }
+
+  @After
+  public void tearDown() {
+    cache.close();
+  }
+
+  @Test
+  public void reentrantLockIncreasesReentrancy() {
+    assertTrue(dLockService.lock("key1", -1, -1));
+    DLockToken key1 = ((DLockService) dLockService).getToken("key1");
+
+    assertEquals(0, key1.getRecursion());
+    assertEquals(1, key1.getUsageCount());
+    // reentrancy + 1
+    assertTrue(dLockService.lock("key1", -1, -1));
+
+    assertEquals(1, key1.getRecursion());
+    assertEquals(2, key1.getUsageCount());
+
+    dLockService.unlock("key1");
+    assertEquals(0, key1.getRecursion());
+    assertEquals(1, key1.getUsageCount());
+
+    dLockService.unlock("key1");
+    assertTokenIsUnused(key1);
+  }
+
+  @Test
+  public void threadWaitingOnLockIncreasesUsageCount() {
+    assertTrue(dLockService.lock("key1", -1, -1));
+    DLockToken key1 = ((DLockService) dLockService).getToken("key1");
+
+    assertEquals(0, key1.getRecursion());
+    assertEquals(1, key1.getUsageCount());
+    assertEquals(Thread.currentThread(), key1.getThread());
+
+    Thread otherThread = new Thread(() -> dLockService.lock("key1", -1, -1));
+    otherThread.start();
+
+    // otherThread should be waiting for lock.
+
+    await("other thread is waiting on this lock").atMost(3, TimeUnit.SECONDS)
+        .until(() -> key1.getUsageCount() == 2);
+    assertEquals(0, key1.getRecursion());
+    assertEquals(Thread.currentThread(), key1.getThread());
+
+    dLockService.unlock("key1");
+
+    await("other thread has acquired this lock").atMost(3, TimeUnit.SECONDS)
+        .until(() -> key1.getThread() == otherThread);
+
+    assertEquals(0, key1.getRecursion());
+    assertEquals(1, key1.getUsageCount());
+
+    // We can unlock from a different thread than locked it.
+    dLockService.unlock("key1");
+
+    assertTokenIsUnused(key1);
+  }
+
+  private void assertTokenIsUnused(DLockToken dLockToken) {
+    assertEquals(0, dLockToken.getRecursion());
+    assertEquals(0, dLockToken.getUsageCount());
+    assertEquals(null, dLockToken.getThread());
+    assertEquals(null, dLockToken.getLesseeThread());
+    assertEquals(-1, dLockToken.getLeaseId());
+  }
+}
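
The characterization tests above pin down the usage-count bookkeeping that `DLockToken`'s javadoc describes: the count reflects the current holder, plus re-entries by that thread, plus threads blocked waiting. As a rough single-threaded sketch of that invariant — the `TokenModel` class below is hypothetical and not Geode's `DLockToken`; waiter arrival/handoff are simulated by direct method calls rather than real blocked threads — the state transitions the tests observe look like this:

```java
// Hypothetical model of DLockToken's usage-count semantics, as observed by
// DLockServiceCharacterizationTests. Not Geode code; illustration only.
public class TokenModel {
  private boolean held = false; // a thread currently holds the lease
  private int recursion = 0;    // re-entries by the holding thread
  private int waiters = 0;      // threads blocked waiting for the lock

  // usageCount as the tests assert it: holder + re-entries + waiters.
  int usageCount() {
    return (held ? 1 : 0) + recursion + waiters;
  }

  int getRecursion() {
    return recursion;
  }

  void lock() {              // first acquisition or reentrant re-acquisition
    if (held) {
      recursion++;
    } else {
      held = true;
    }
  }

  void unlock() {            // unwind one re-entry, or release entirely
    if (recursion > 0) {
      recursion--;
    } else {
      held = false;
    }
  }

  void waiterArrives() {     // simulates a second thread blocking on lock()
    waiters++;
  }

  void waiterAcquires() {    // simulates the waiter winning after unlock()
    waiters--;
    held = true;
  }

  private static void check(boolean condition, String message) {
    if (!condition) {
      throw new AssertionError(message);
    }
  }

  public static void main(String[] args) {
    TokenModel t = new TokenModel();

    // reentrantLockIncreasesReentrancy
    t.lock();
    check(t.usageCount() == 1 && t.getRecursion() == 0, "first lock");
    t.lock(); // reentrant
    check(t.usageCount() == 2 && t.getRecursion() == 1, "reentrant lock");
    t.unlock();
    check(t.usageCount() == 1 && t.getRecursion() == 0, "first unlock");
    t.unlock();
    check(t.usageCount() == 0, "token unused");

    // threadWaitingOnLockIncreasesUsageCount
    t.lock();
    t.waiterArrives();
    check(t.usageCount() == 2 && t.getRecursion() == 0, "waiter counted");
    t.unlock();
    t.waiterAcquires();
    check(t.usageCount() == 1, "handoff to waiter");

    System.out.println("ok");
  }
}
```

This also makes the javadoc's "usageCount = recursion + (# of threads waiting)" read less strangely: the holder itself contributes one usage, so a held, non-reentrant, uncontended token has a usage count of 1.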


[48/51] [abbrv] geode git commit: GEODE-3395 Variable-ize product version and name in user guide - Developing (top-level book file)

Posted by kl...@apache.org.
GEODE-3395 Variable-ize product version and name in user guide - Developing (top-level book file)


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/1c04aabb
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/1c04aabb
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/1c04aabb

Branch: refs/heads/feature/GEODE-1279
Commit: 1c04aabb76b1e990899069fc864c8b96f8f63300
Parents: 36daa9a
Author: Dave Barnes <db...@pivotal.io>
Authored: Fri Aug 18 16:00:46 2017 -0700
Committer: Dave Barnes <db...@pivotal.io>
Committed: Fri Aug 18 16:00:46 2017 -0700

----------------------------------------------------------------------
 geode-docs/developing/book_intro.html.md.erb | 40 +++++++++++------------
 1 file changed, 20 insertions(+), 20 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/1c04aabb/geode-docs/developing/book_intro.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/book_intro.html.md.erb b/geode-docs/developing/book_intro.html.md.erb
index c78f753..72c2c69 100644
--- a/geode-docs/developing/book_intro.html.md.erb
+++ b/geode-docs/developing/book_intro.html.md.erb
@@ -19,53 +19,53 @@ limitations under the License.
 
 *Developing with <%=vars.product_name_long%>* explains main concepts of application programming with <%=vars.product_name_long%>. It describes how to plan and implement regions, data serialization, event handling, delta propagation, transactions, and more.
 
-For information about Geode REST application development, see [Developing REST Applications for <%=vars.product_name_long%>](../rest_apps/book_intro.html).
+For information about <%=vars.product_name%> REST application development, see [Developing REST Applications for <%=vars.product_name_long%>](../rest_apps/book_intro.html).
 
--   **[Region Data Storage and Distribution](../developing/region_options/chapter_overview.html)**
+-   **[Region Data Storage and Distribution](region_options/chapter_overview.html)**
 
-    The <%=vars.product_name_long%> data storage and distribution models put your data in the right place at the right time. You should understand all the options for data storage in Geode before you start configuring your data regions.
+    The <%=vars.product_name_long%> data storage and distribution models put your data in the right place at the right time. You should understand all the options for data storage in <%=vars.product_name%> before you start configuring your data regions.
 
--   **[Partitioned Regions](../developing/partitioned_regions/chapter_overview.html)**
+-   **[Partitioned Regions](partitioned_regions/chapter_overview.html)**
 
     In addition to basic region management, partitioned regions include options for high availability, data location control, and data balancing across the distributed system.
 
--   **[Distributed and Replicated Regions](../developing/distributed_regions/chapter_overview.html)**
+-   **[Distributed and Replicated Regions](distributed_regions/chapter_overview.html)**
 
-    In addition to basic region management, distributed and replicated regions include options for things like push and pull distribution models, global locking, and region entry versions to ensure consistency across Geode members.
+    In addition to basic region management, distributed and replicated regions include options for things like push and pull distribution models, global locking, and region entry versions to ensure consistency across <%=vars.product_name%> members.
 
--   **[Consistency for Region Updates](../developing/distributed_regions/region_entry_versions.html)**
+-   **[Consistency for Region Updates](distributed_regions/region_entry_versions.html)**
 
-    Geode ensures that all copies of a region eventually reach a consistent state on all members and clients that host the region, including Geode members that distribute region events.
+    <%=vars.product_name%> ensures that all copies of a region eventually reach a consistent state on all members and clients that host the region, including <%=vars.product_name%> members that distribute region events.
 
--   **[General Region Data Management](../developing/management_all_region_types/chapter_overview.html)**
+-   **[General Region Data Management](management_all_region_types/chapter_overview.html)**
 
     For all regions, you have options to control memory use, back up your data to disk, and keep stale data out of your cache.
 
--   **[Data Serialization](../developing/data_serialization/chapter_overview.html)**
+-   **[Data Serialization](data_serialization/chapter_overview.html)**
 
-    Data that you manage in Geode must be serialized and deserialized for storage and transmittal between processes. You can choose among several options for data serialization.
+    Data that you manage in <%=vars.product_name%> must be serialized and deserialized for storage and transmittal between processes. You can choose among several options for data serialization.
 
--   **[Events and Event Handling](../developing/events/chapter_overview.html)**
+-   **[Events and Event Handling](events/chapter_overview.html)**
 
-    Geode provides versatile and reliable event distribution and handling for your cached data and system member events.
+    <%=vars.product_name%> provides versatile and reliable event distribution and handling for your cached data and system member events.
 
--   **[Delta Propagation](../developing/delta_propagation/chapter_overview.html)**
+-   **[Delta Propagation](delta_propagation/chapter_overview.html)**
 
     Delta propagation allows you to reduce the amount of data you send over the network by including only changes to objects rather than the entire object.
 
--   **[Querying](../developing/querying_basics/chapter_overview.html)**
+-   **[Querying](querying_basics/chapter_overview.html)**
 
-    Geode provides a SQL-like querying language called OQL that allows you to access data stored in Geode regions.
+    <%=vars.product_name%> provides a SQL-like querying language called OQL that allows you to access data stored in <%=vars.product_name%> regions.
 
--   **[Continuous Querying](../developing/continuous_querying/chapter_overview.html)**
+-   **[Continuous Querying](continuous_querying/chapter_overview.html)**
 
     Continuous querying continuously returns events that match the queries you set up.
 
--   **[Transactions](../developing/transactions/chapter_overview.html)**
+-   **[Transactions](transactions/chapter_overview.html)**
 
-    Geode provides a transactions API, with `begin`, `commit`, and `rollback` methods. These methods are much the same as the familiar relational database transactions methods.
+    <%=vars.product_name%> provides a transactions API, with `begin`, `commit`, and `rollback` methods. These methods are much the same as the familiar relational database transactions methods.
 
--   **[Function Execution](../developing/function_exec/chapter_overview.html)**
+-   **[Function Execution](function_exec/chapter_overview.html)**
 
     A function is a body of code that resides on a server and that an application can invoke from a client or from another server without the need to send the function code itself. The caller can direct a data-dependent function to operate on a particular dataset, or can direct a data-independent function to operate on a particular server, member, or member group.
 


[22/51] [abbrv] geode git commit: GEODE-3395 Variable-ize product version and name in user guide - REST apps

Posted by kl...@apache.org.
GEODE-3395 Variable-ize product version and name in user guide - REST apps


Project: http://git-wip-us.apache.org/repos/asf/geode/repo
Commit: http://git-wip-us.apache.org/repos/asf/geode/commit/d291a457
Tree: http://git-wip-us.apache.org/repos/asf/geode/tree/d291a457
Diff: http://git-wip-us.apache.org/repos/asf/geode/diff/d291a457

Branch: refs/heads/feature/GEODE-1279
Commit: d291a457f5237b94f554047f33852b33d690a23a
Parents: 9b7dd54
Author: Dave Barnes <db...@pivotal.io>
Authored: Wed Aug 16 11:58:29 2017 -0700
Committer: Dave Barnes <db...@pivotal.io>
Committed: Wed Aug 16 11:58:29 2017 -0700

----------------------------------------------------------------------
 geode-docs/rest_apps/book_intro.html.md.erb     | 40 ++++++++++----------
 .../rest_apps/chapter_overview.html.md.erb      | 18 ++++-----
 .../rest_apps/delete_all_data.html.md.erb       |  2 +-
 .../rest_apps/delete_data_for_key.html.md.erb   |  2 +-
 .../delete_data_for_multiple_keys.html.md.erb   |  2 +-
 .../rest_apps/develop_rest_apps.html.md.erb     | 40 ++++++++++----------
 .../get_execute_adhoc_query.html.md.erb         |  4 +-
 geode-docs/rest_apps/get_functions.html.md.erb  |  4 +-
 geode-docs/rest_apps/get_queries.html.md.erb    |  2 +-
 .../rest_apps/get_region_data.html.md.erb       |  2 +-
 .../rest_apps/get_region_key_data.html.md.erb   |  2 +-
 .../rest_apps/get_region_keys.html.md.erb       |  2 +-
 geode-docs/rest_apps/get_regions.html.md.erb    |  2 +-
 geode-docs/rest_apps/get_servers.html.md.erb    |  2 +-
 .../rest_apps/head_region_size.html.md.erb      |  4 +-
 geode-docs/rest_apps/ping_service.html.md.erb   |  2 +-
 .../rest_apps/post_create_query.html.md.erb     |  2 +-
 .../post_execute_functions.html.md.erb          |  2 +-
 .../rest_apps/post_if_absent_data.html.md.erb   |  2 +-
 .../put_multiple_values_for_keys.html.md.erb    |  2 +-
 .../rest_apps/put_replace_data.html.md.erb      |  2 +-
 .../rest_apps/put_update_cas_data.html.md.erb   |  2 +-
 .../rest_apps/put_update_data.html.md.erb       |  2 +-
 .../rest_apps/put_update_query.html.md.erb      |  2 +-
 geode-docs/rest_apps/rest_admin.html.md.erb     |  4 +-
 .../rest_apps/rest_api_reference.html.md.erb    | 14 +++----
 geode-docs/rest_apps/rest_examples.html.md.erb  |  6 +--
 geode-docs/rest_apps/rest_functions.html.md.erb | 10 ++---
 geode-docs/rest_apps/rest_prereqs.html.md.erb   | 14 +++----
 geode-docs/rest_apps/rest_queries.html.md.erb   | 14 +++----
 geode-docs/rest_apps/rest_regions.html.md.erb   | 32 ++++++++--------
 geode-docs/rest_apps/setup_config.html.md.erb   | 16 ++++----
 .../rest_apps/troubleshooting.html.md.erb       |  8 ++--
 geode-docs/rest_apps/using_swagger.html.md.erb  | 10 ++---
 34 files changed, 134 insertions(+), 140 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/book_intro.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/book_intro.html.md.erb b/geode-docs/rest_apps/book_intro.html.md.erb
index c909dc7..b8b6596 100644
--- a/geode-docs/rest_apps/book_intro.html.md.erb
+++ b/geode-docs/rest_apps/book_intro.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Developing REST Applications for Apache Geode
----
+<% set_title("Developing REST Applications for", product_name_long) %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -19,41 +17,41 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-*Developing REST Applications for Apache Geode* provides background and instructions on how to program REST applications with Apache Geode. Geode REST APIs allow you to access region data, queries and functions in your Geode deployment in wide variety of programming languages.
+*Developing REST Applications for <%=vars.product_name_long%>* provides background and instructions on how to program REST applications with <%=vars.product_name_long%>. <%=vars.product_name%> REST APIs allow you to access region data, queries and functions in your <%=vars.product_name%> deployment in a wide variety of programming languages.
 
 **Note:**
-This documentation covers the **v1** release of Geode REST APIs for developing applications.
+This documentation covers the **v1** release of <%=vars.product_name%> REST APIs for developing applications.
 
--   **[Geode REST API Overview](../rest_apps/chapter_overview.html)**
+-   **[<%=vars.product_name%> REST API Overview](chapter_overview.html)**
 
-    By using the Geode REST application interface, you can immediately access Geode's data management capabilities in languages other than the natively supported Java language.
+    By using the <%=vars.product_name%> REST application interface, you can immediately access <%=vars.product_name%>'s data management capabilities in languages other than the natively supported Java language.
 
--   **[Prerequisites and Limitations for Writing REST Applications](../rest_apps/rest_prereqs.html)**
+-   **[Prerequisites and Limitations for Writing REST Applications](rest_prereqs.html)**
 
-    Before development, understand the prerequisites and limitations of the current REST implementation in Geode.
+    Before development, understand the prerequisites and limitations of the current REST implementation in <%=vars.product_name%>.
 
--   **[Setup and Configuration](../rest_apps/setup_config.html)**
+-   **[Setup and Configuration](setup_config.html)**
 
-    The Apache Geode developer REST interface runs as an embedded HTTP or HTTPS service (Jetty server) within a Geode data node.
+    The <%=vars.product_name_long%> developer REST interface runs as an embedded HTTP or HTTPS service (Jetty server) within a <%=vars.product_name%> data node.
 
--   **[Using the Swagger UI to Browse REST APIs](../rest_apps/using_swagger.html)**
+-   **[Using the Swagger UI to Browse REST APIs](using_swagger.html)**
 
-    Apache Geode Developer REST APIs are integrated with the Swagger™ framework. This framework provides a browser-based test client that allows you to visualize and try out Geode REST APIs.
+    <%=vars.product_name_long%> Developer REST APIs are integrated with the Swagger™ framework. This framework provides a browser-based test client that allows you to visualize and try out <%=vars.product_name%> REST APIs.
 
--   **[Developing REST Applications](../rest_apps/develop_rest_apps.html)**
+-   **[Developing REST Applications](develop_rest_apps.html)**
 
-    This section provides guidelines on writing REST client applications for Geode.
+    This section provides guidelines on writing REST client applications for <%=vars.product_name%>.
 
--   **[Sample REST Applications](../rest_apps/rest_examples.html)**
+-   **[Sample REST Applications](rest_examples.html)**
 
-    This section provides examples that illustrate how multiple clients, both REST and native, can access the same Geode region data.
+    This section provides examples that illustrate how multiple clients, both REST and native, can access the same <%=vars.product_name%> region data.
 
--   **[Troubleshooting and FAQ](../rest_apps/troubleshooting.html)**
+-   **[Troubleshooting and FAQ](troubleshooting.html)**
 
-    This section provides troubleshooting guidance and frequently asked questions about Geode Developer REST APIs.
+    This section provides troubleshooting guidance and frequently asked questions about <%=vars.product_name%> Developer REST APIs.
 
--   **[Apache Geode REST API Reference](../rest_apps/rest_api_reference.html)**
+-   **[<%=vars.product_name_long%> REST API Reference](rest_api_reference.html)**
 
-    This section summarizes all available Apache Geode REST API resources and endpoints.
+    This section summarizes all available <%=vars.product_name_long%> REST API resources and endpoints.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/chapter_overview.html.md.erb b/geode-docs/rest_apps/chapter_overview.html.md.erb
index 8f29c08..5eec90f 100644
--- a/geode-docs/rest_apps/chapter_overview.html.md.erb
+++ b/geode-docs/rest_apps/chapter_overview.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Geode REST API Overview
----
+<% set_title(product_name, "REST API Overview") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -19,23 +17,23 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-By using the Geode REST application interface, you can immediately access Geode's data management capabilities in languages other than the natively supported Java language.
+By using the <%=vars.product_name%> REST application interface, you can immediately access <%=vars.product_name%>'s data management capabilities in languages other than the natively supported Java language.
 
-You can write REST-enabled client applications for Geode in a variety of languages that use the open
+You can write REST-enabled client applications for <%=vars.product_name%> in a variety of languages that use the open
 and standard HTTP protocol&mdash;for example, Ruby, Python, JavaScript and Scala&mdash;as well as
 already supported languages such as Java.
 
-When you access Geode through the REST interface, objects are stored in Geode as PdxInstances. A PdxInstance is a light-weight wrapper around PDX serialized bytes. It provides applications with run-time access to fields of a PDX serialized object. This interoperable format allows your Java applications to operate on the same data as your REST applications.
+When you access <%=vars.product_name%> through the REST interface, objects are stored in <%=vars.product_name%> as PdxInstances. A PdxInstance is a light-weight wrapper around PDX serialized bytes. It provides applications with run-time access to fields of a PDX serialized object. This interoperable format allows your Java applications to operate on the same data as your REST applications.
 
-As an added benefit, because Geode's REST interface stores objects as PdxInstances, you do not need
+As an added benefit, because <%=vars.product_name%>'s REST interface stores objects as PdxInstances, you do not need
 to write corresponding Java classes to translate JSON data (which you must do with other REST
 interface providers such as Oracle Coherence). For example, consider the use case where a non-Java
-REST client application (Python, Ruby or Scala) performs Geode region operations with JSON data that
-represents employee data. Since the object is stored in Geode as a PdxInstance that can be
+REST client application (Python, Ruby or Scala) performs <%=vars.product_name%> region operations with JSON data that
+represents employee data. Since the object is stored in <%=vars.product_name%> as a PdxInstance that can be
 automatically mapped to JSON, the user does not need to write a corresponding Employee.java class
 and also does not need to worry about related issues such as keeping the Employee object in the
 CLASSPATH.
 
-See [Geode PDX Serialization](../developing/data_serialization/gemfire_pdx_serialization.html#gemfire_pdx_serialization) for more information on PDX serialization.
+See [<%=vars.product_name%> PDX Serialization](../developing/data_serialization/gemfire_pdx_serialization.html#gemfire_pdx_serialization) for more information on PDX serialization.
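
As an illustrative client-side sketch of the JSON-to-PdxInstance flow described above: a non-Java client simply PUTs a JSON document, and no corresponding Java class is required. The host, port, region name, and field names below are hypothetical, and the `/gemfire-api/v1` base path follows the endpoint form used elsewhere in these pages.

```python
import json
import urllib.request

def build_put_request(base_url, region, key, value):
    """Build a PUT request that stores a JSON document under `key`.

    The server keeps the posted JSON in PDX-serialized form, so the
    client never needs an Employee.java class on its side.
    """
    url = f"{base_url}/gemfire-api/v1/{region}/{key}"
    body = json.dumps(value).encode("utf-8")
    req = urllib.request.Request(url, data=body, method="PUT")
    req.add_header("Content-Type", "application/json")
    return req

# Hypothetical usage; urllib.request.urlopen(req) would send it to a
# running cluster with a REST-enabled server on localhost:7070.
req = build_put_request("http://localhost:7070", "employees", "1",
                        {"firstName": "Jane", "lastName": "Doe"})
```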
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/delete_all_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/delete_all_data.html.md.erb b/geode-docs/rest_apps/delete_all_data.html.md.erb
index a7a7e1a..32ea8eb 100644
--- a/geode-docs/rest_apps/delete_all_data.html.md.erb
+++ b/geode-docs/rest_apps/delete_all_data.html.md.erb
@@ -51,6 +51,6 @@ Response Payload: null
 | Status Code               | Description                                                                                                                      |
 |---------------------------|----------------------------------------------------------------------------------------------------------------------------------|
 | 404 NOT FOUND             | Returned if the region is not found.                                                                                             |
-| 500 INTERNAL SERVER ERROR | Error encountered at Geode server. Check the HTTP response body for a stack trace of the exception. |
+| 500 INTERNAL SERVER ERROR | Error encountered at <%=vars.product_name%> server. Check the HTTP response body for a stack trace of the exception. |
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/delete_data_for_key.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/delete_data_for_key.html.md.erb b/geode-docs/rest_apps/delete_data_for_key.html.md.erb
index 339942e..1c5f4a7 100644
--- a/geode-docs/rest_apps/delete_data_for_key.html.md.erb
+++ b/geode-docs/rest_apps/delete_data_for_key.html.md.erb
@@ -51,6 +51,6 @@ Response Payload: null
 | Status Code               | Description                                                                                                                      |
 |---------------------------|----------------------------------------------------------------------------------------------------------------------------------|
 | 404 NOT FOUND             | Returned if the region or specified key is not found.                                                                            |
-| 500 INTERNAL SERVER ERROR | Error encountered at Geode server. Check the HTTP response body for a stack trace of the exception. |
+| 500 INTERNAL SERVER ERROR | Error encountered at <%=vars.product_name%> server. Check the HTTP response body for a stack trace of the exception. |
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/delete_data_for_multiple_keys.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/delete_data_for_multiple_keys.html.md.erb b/geode-docs/rest_apps/delete_data_for_multiple_keys.html.md.erb
index 8a7071d..1c4016b 100644
--- a/geode-docs/rest_apps/delete_data_for_multiple_keys.html.md.erb
+++ b/geode-docs/rest_apps/delete_data_for_multiple_keys.html.md.erb
@@ -51,6 +51,6 @@ Response Payload: null
 | Status Code               | Description                                                                                                                      |
 |---------------------------|----------------------------------------------------------------------------------------------------------------------------------|
 | 404 NOT FOUND             | Returned if either the region or one or more of the specified keys is not found.                                                 |
-| 500 INTERNAL SERVER ERROR | Error encountered at Geode server. Check the HTTP response body for a stack trace of the exception. |
+| 500 INTERNAL SERVER ERROR | Error encountered at <%=vars.product_name%> server. Check the HTTP response body for a stack trace of the exception. |
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/develop_rest_apps.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/develop_rest_apps.html.md.erb b/geode-docs/rest_apps/develop_rest_apps.html.md.erb
index d48dcb1..336488f 100644
--- a/geode-docs/rest_apps/develop_rest_apps.html.md.erb
+++ b/geode-docs/rest_apps/develop_rest_apps.html.md.erb
@@ -21,33 +21,33 @@ limitations under the License.
 <a id="topic_lvp_cd5_m4"></a>
 
 
-This section provides guidelines on writing REST client applications for Geode.
+This section provides guidelines on writing REST client applications for <%=vars.product_name%>.
 
-You can browse, query, update and delete data stored in your Geode deployment. You can also manage and execute pre-deployed functions on Geode members.
+You can browse, query, update and delete data stored in your <%=vars.product_name%> deployment. You can also manage and execute pre-deployed functions on <%=vars.product_name%> members.
 
--   **[Working with Regions](../rest_apps/develop_rest_apps.html#topic_qhs_f25_m4)**
+-   **[Working with Regions](#topic_qhs_f25_m4)**
 
-    The Geode REST APIs provide basic CRUD (create, read, update and delete) operations for data entries stored in your regions.
+    The <%=vars.product_name%> REST APIs provide basic CRUD (create, read, update and delete) operations for data entries stored in your regions.
 
--   **[Working with Queries](../rest_apps/develop_rest_apps.html#topic_fcn_g25_m4)**
+-   **[Working with Queries](#topic_fcn_g25_m4)**
 
-    Geode supports the use of queries to extract data from its regions. Using REST APIs, you can create and execute either prepared or ad-hoc queries on Geode regions. You can also update and delete prepared queries.
+    <%=vars.product_name%> supports the use of queries to extract data from its regions. Using REST APIs, you can create and execute either prepared or ad-hoc queries on <%=vars.product_name%> regions. You can also update and delete prepared queries.
 
--   **[Working with Functions](../rest_apps/develop_rest_apps.html#topic_rbc_h25_m4)**
+-   **[Working with Functions](#topic_rbc_h25_m4)**
 
-    Geode REST APIs support the discovery and execution of predefined Geode functions on your cluster deployments.
+    <%=vars.product_name%> REST APIs support the discovery and execution of predefined <%=vars.product_name%> functions on your cluster deployments.
 
 ## <a id="topic_qhs_f25_m4" class="no-quick-link"></a>Working with Regions
 
-The Geode REST APIs provide basic CRUD (create, read, update and delete) operations for data entries stored in your regions.
+The <%=vars.product_name%> REST APIs provide basic CRUD (create, read, update and delete) operations for data entries stored in your regions.
 
-Regions are the resources of the Geode REST API. Each region represents a resource or a collection of resources.
+Regions are the resources of the <%=vars.product_name%> REST API. Each region represents a resource or a collection of resources.
 
-You cannot create or delete the regions themselves with the REST APIs, but you can work with the data stored within predefined Geode regions. Use the [gfsh](../tools_modules/gfsh/chapter_overview.html) command utility to add, configure or delete regions in your Geode deployment. Any additions or modifications to regions made through `gfsh` are then accessible by the REST APIs.
+You cannot create or delete the regions themselves with the REST APIs, but you can work with the data stored within predefined <%=vars.product_name%> regions. Use the [gfsh](../tools_modules/gfsh/chapter_overview.html) command utility to add, configure or delete regions in your <%=vars.product_name%> deployment. Any additions or modifications to regions made through `gfsh` are then accessible by the REST APIs.
 
 ## Listing Available Regions
 
-The main resource endpoint to the Geode API is [GET /gemfire-api/v1](get_regions.html#topic_itv_mg5_m4). Use this endpoint to discover which regions are available in your cluster.
+The main resource endpoint to the <%=vars.product_name%> API is [GET /gemfire-api/v1](get_regions.html#topic_itv_mg5_m4). Use this endpoint to discover which regions are available in your cluster.
 
 Example call:
 
@@ -307,7 +307,7 @@ Accept: application/json
 
 **Modifying existing entries**
 
-Geode provides three different options for this type of operation. To update a value for the key, you can use:
+<%=vars.product_name%> provides three different options for this type of operation. To update a value for the key, you can use:
 
 ``` pre
 PUT /gemfire/v1/{region}/{key}
@@ -438,7 +438,7 @@ If any of the supplied keys are not found in the region, the request will fail a
 
 ## <a id="topic_fcn_g25_m4" class="no-quick-link"></a>Working with Queries
 
-Geode supports the use of queries to extract data from its regions. Using REST APIs, you can create and execute either prepared or ad-hoc queries on Geode regions. You can also update and delete prepared queries.
+<%=vars.product_name%> supports the use of queries to extract data from its regions. Using REST APIs, you can create and execute either prepared or ad-hoc queries on <%=vars.product_name%> regions. You can also update and delete prepared queries.
 
 ## Listing Queries
 
@@ -448,7 +448,7 @@ To find out which predefined and named queries are available in your deployment,
 GET /gemfire-api/v1/queries
 ```
 
-All queries that have been predefined and assigned IDs in Geode are listed.
+All queries that have been predefined and assigned IDs in <%=vars.product_name%> are listed.
 
 ## <a id="topic_fcn_g25_m4__section_t4h_wtp_y4" class="no-quick-link"></a>Creating a New Query
 
@@ -623,18 +623,18 @@ http://localhost:7070/gemfire-api/v1/queries/adhoc?q="SELECT * FROM /customers"
 
 ## <a id="topic_rbc_h25_m4" class="no-quick-link"></a>Working with Functions
 
-Geode REST APIs support the discovery and execution of predefined Geode functions on your cluster deployments.
+<%=vars.product_name%> REST APIs support the discovery and execution of predefined <%=vars.product_name%> functions on your cluster deployments.
 
-Before you can access functions using REST APIs, you must have already defined and registered the functions in your Geode deployment. Additionally, any domain objects that are being accessed by the functions must be available on the CLASSPATH of the server running the REST endpoint service.
+Before you can access functions using REST APIs, you must have already defined and registered the functions in your <%=vars.product_name%> deployment. Additionally, any domain objects that are being accessed by the functions must be available on the CLASSPATH of the server running the REST endpoint service.
 
 You can do the following with functions:
 
--   List all functions available in the Geode cluster.
+-   List all functions available in the <%=vars.product_name%> cluster.
 -   Execute a function, optionally specifying the region and members and/or member groups that are targeted by the function
 
 ## Listing Functions
 
-To list all functions that are currently registered and deployed in the Geode cluster, use the following endpoint:
+To list all functions that are currently registered and deployed in the <%=vars.product_name%> cluster, use the following endpoint:
 
 ``` pre
 GET /gemfire-api/v1/functions
@@ -644,7 +644,7 @@ The list of returned functions includes the functionId, which you can use to exe
 
 ## Executing Functions
 
-To execute a function on a Geode cluster, use the following endpoint:
+To execute a function on a <%=vars.product_name%> cluster, use the following endpoint:
 
 ``` pre
 POST /gemfire-api/v1/functions/{functionId}?[&onRegion=regionname|&onMembers=member1,member2,...,memberN|&onGroups=group1,group2,...,groupN]

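
As a sketch of how a client might assemble the function-execution URL shown above, with its mutually exclusive `onRegion`/`onMembers`/`onGroups` filters (the host, port, and function names here are hypothetical):

```python
from urllib.parse import urlencode

def function_exec_url(base_url, function_id, on_region=None,
                      on_members=None, on_groups=None):
    """Build the POST URL for executing a registered function.

    Mirrors POST /gemfire-api/v1/functions/{functionId} with at most
    one of the optional onRegion/onMembers/onGroups filters supplied.
    """
    params = {}
    if on_region:
        params["onRegion"] = on_region
    if on_members:
        params["onMembers"] = ",".join(on_members)
    if on_groups:
        params["onGroups"] = ",".join(on_groups)
    url = f"{base_url}/gemfire-api/v1/functions/{function_id}"
    # urlencode percent-escapes the comma-separated member list.
    return f"{url}?{urlencode(params)}" if params else url
```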
http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/get_execute_adhoc_query.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/get_execute_adhoc_query.html.md.erb b/geode-docs/rest_apps/get_execute_adhoc_query.html.md.erb
index dc31407..5d13c45 100644
--- a/geode-docs/rest_apps/get_execute_adhoc_query.html.md.erb
+++ b/geode-docs/rest_apps/get_execute_adhoc_query.html.md.erb
@@ -113,7 +113,7 @@ Content-Type: application/json
 </tr>
 <tr class="odd">
 <td>500 INTERNAL SERVER ERROR</td>
-<td>Error encountered at Geode server. Check the HTTP response body for a stack trace of the exception. Some possible exceptions include:
+<td>Error encountered at <%=vars.product_name%> server. Check the HTTP response body for a stack trace of the exception. Some possible exceptions include:
 <ul>
 <li>A function was applied to a parameter that is improper for that function!</li>
 <li>Bind parameter is not of the expected type!</li>
@@ -123,7 +123,7 @@ Content-Type: application/json
 <li>Query execution time is exceeded max query execution time (gemfire.Cache.MAX_QUERY_EXECUTION_TIME) configured!</li>
 <li>Data referenced in from clause is not available for querying!</li>
 <li>Query execution gets canceled due to low memory conditions and the resource manager critical heap percentage has been set!</li>
-<li>Server has encountered while executing Adhoc query!</li>
+<li>Server has encountered an error while executing Adhoc query!</li>
 </ul></td>
 </tr>
 </tbody>

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/get_functions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/get_functions.html.md.erb b/geode-docs/rest_apps/get_functions.html.md.erb
index 9ae6867..3fa54a1 100644
--- a/geode-docs/rest_apps/get_functions.html.md.erb
+++ b/geode-docs/rest_apps/get_functions.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-List all registered Geode functions in the cluster.
+List all registered <%=vars.product_name%> functions in the cluster.
 
 ## Resource URL
 
@@ -62,6 +62,6 @@ Location: https://localhost:8080/gemfire-api/v1/functions
 | Status Code               | Description                                                                                                                      |
 |---------------------------|----------------------------------------------------------------------------------------------------------------------------------|
 | 404 NOT FOUND             | Returned if no functions are found in the cluster.                                                                               |
-| 500 INTERNAL SERVER ERROR | Error encountered at Geode server. Check the HTTP response body for a stack trace of the exception. |
+| 500 INTERNAL SERVER ERROR | Error encountered at <%=vars.product_name%> server. Check the HTTP response body for a stack trace of the exception. |
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/get_queries.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/get_queries.html.md.erb b/geode-docs/rest_apps/get_queries.html.md.erb
index ba86e55..54ff487 100644
--- a/geode-docs/rest_apps/get_queries.html.md.erb
+++ b/geode-docs/rest_apps/get_queries.html.md.erb
@@ -87,7 +87,7 @@ Location: http://localhost:8080/gemfire-api/v1/queries
 </tr>
 <tr class="even">
 <td>500 INTERNAL SERVER ERROR</td>
-<td>Error encountered at Geode server. Check the HTTP response body for a stack trace of the exception.</td>
+<td>Error encountered at <%=vars.product_name%> server. Check the HTTP response body for a stack trace of the exception.</td>
 </tr>
 </tbody>
 </table>

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/get_region_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/get_region_data.html.md.erb b/geode-docs/rest_apps/get_region_data.html.md.erb
index 4cae9cd..1a20d4d 100644
--- a/geode-docs/rest_apps/get_region_data.html.md.erb
+++ b/geode-docs/rest_apps/get_region_data.html.md.erb
@@ -129,4 +129,4 @@ Date: Sat, 18 Jan 2014 21:03:08 GMT
 |--------------------|----------------------------------------------------------------------------------------------------------------------------------|
 | 400 BAD REQUEST    | Limit parameter **X** is not valid! The specified limit value must be ALL or an integer.                                            |
 | 404 NOT FOUND      | Returned if region does not exist.                                                                                               |
-| 500 INTERNAL ERROR | Error encountered at Geode server. Check the HTTP response body for a stack trace of the exception. |
+| 500 INTERNAL ERROR | Error encountered at <%=vars.product_name%> server. Check the HTTP response body for a stack trace of the exception. |

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/get_region_key_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/get_region_key_data.html.md.erb b/geode-docs/rest_apps/get_region_key_data.html.md.erb
index fce7501..e8cbb7e 100644
--- a/geode-docs/rest_apps/get_region_key_data.html.md.erb
+++ b/geode-docs/rest_apps/get_region_key_data.html.md.erb
@@ -84,4 +84,4 @@ Date: Sat, 18 Jan 2014 21:27:59 GMT
 |---------------------------|-----------------------------------------------------------------------------------------------------|
 | 400 BAD REQUEST           | Returned if the supplied key is not found in the region.                                            |
 | 404 NOT FOUND             | Returned if the region or specified key is not found.                                               |
-| 500 INTERNAL SERVER ERROR | Error encountered at Geode server. Check the HTTP response body for a stack trace of the exception. |
+| 500 INTERNAL SERVER ERROR | Error encountered at <%=vars.product_name%> server. Check the HTTP response body for a stack trace of the exception. |

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/get_region_keys.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/get_region_keys.html.md.erb b/geode-docs/rest_apps/get_region_keys.html.md.erb
index fa7bbd0..1d50da3 100644
--- a/geode-docs/rest_apps/get_region_keys.html.md.erb
+++ b/geode-docs/rest_apps/get_region_keys.html.md.erb
@@ -62,6 +62,6 @@ Date: Sat, 18 Jan 2014 21:20:05 GMT
 |---------------------------|----------------------------------------------------------------------------------------------------------------------------------|
 | 404 NOT FOUND             | Specified region does not exist.                                                                                                 |
 | 405 METHOD NOT ALLOWED    | Returned if any HTTP request method other than GET (for example, POST, PUT, DELETE, etc.) is used.                               |
-| 500 INTERNAL SERVER ERROR | Error encountered at Geode server. Check the HTTP response body for a stack trace of the exception. |
+| 500 INTERNAL SERVER ERROR | Error encountered at <%=vars.product_name%> server. Check the HTTP response body for a stack trace of the exception. |
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/get_regions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/get_regions.html.md.erb b/geode-docs/rest_apps/get_regions.html.md.erb
index b8e6add..fee8e11 100644
--- a/geode-docs/rest_apps/get_regions.html.md.erb
+++ b/geode-docs/rest_apps/get_regions.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-List all available resources (regions) in the Geode cluster.
+List all available resources (regions) in the <%=vars.product_name%> cluster.
 
 ## Resource URL
 

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/get_servers.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/get_servers.html.md.erb b/geode-docs/rest_apps/get_servers.html.md.erb
index 1ac585f..efc4205 100644
--- a/geode-docs/rest_apps/get_servers.html.md.erb
+++ b/geode-docs/rest_apps/get_servers.html.md.erb
@@ -59,6 +59,6 @@ Content-Type: application/json; charset=utf-8
 
 | Status Code               | Description                                                                                 |
 |---------------------------|---------------------------------------------------------------------------------------------|
-| 500 INTERNAL SERVER ERROR | Returned if Geode throws an error while executing the request. |
+| 500 INTERNAL SERVER ERROR | Returned if <%=vars.product_name%> throws an error while executing the request. |
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/head_region_size.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/head_region_size.html.md.erb b/geode-docs/rest_apps/head_region_size.html.md.erb
index 3a83bb6..c332d07 100644
--- a/geode-docs/rest_apps/head_region_size.html.md.erb
+++ b/geode-docs/rest_apps/head_region_size.html.md.erb
@@ -55,8 +55,8 @@ Resource-Count: 8192
 
 | Status Code               | Description                                                                                 |
 |---------------------------|---------------------------------------------------------------------------------------------|
-| 400 Bad Request           | Returned if Geode throws an error while executing the request. |
+| 400 Bad Request           | Returned if <%=vars.product_name%> throws an error while executing the request. |
 | 404 Resource Not Found    | Region does not exist.                                                                      |
-| 500 Internal Server Error | Geode has thown an error or exception.                         |
+| 500 Internal Server Error | <%=vars.product_name%> has thrown an error or exception.                         |
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/ping_service.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/ping_service.html.md.erb b/geode-docs/rest_apps/ping_service.html.md.erb
index d6339e3..690aad9 100644
--- a/geode-docs/rest_apps/ping_service.html.md.erb
+++ b/geode-docs/rest_apps/ping_service.html.md.erb
@@ -49,6 +49,6 @@ GET /gemfire/v1/ping
 | Status Code               | Description                                                                                |
 |---------------------------|--------------------------------------------------------------------------------------------|
 | 404 NOT FOUND             | The Developer REST API service is not available.                                           |
-| 500 INTERNAL SERVER ERROR | Encountered error at server. Check the Geode exception trace. |
+| 500 INTERNAL SERVER ERROR | Encountered error at server. Check the <%=vars.product_name%> exception trace. |
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/post_create_query.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/post_create_query.html.md.erb b/geode-docs/rest_apps/post_create_query.html.md.erb
index 9fdba2a..4e3b3a0 100644
--- a/geode-docs/rest_apps/post_create_query.html.md.erb
+++ b/geode-docs/rest_apps/post_create_query.html.md.erb
@@ -107,7 +107,7 @@ Location: http://localhost:8080/gemfire-api/v1/queries/selectOrders
 </tr>
 <tr class="even">
 <td>500 INTERNAL SERVER ERROR</td>
-<td>Error encountered at Geode server. Check the HTTP response body for a stack trace of the exception.
+<td>Error encountered at <%=vars.product_name%> server. Check the HTTP response body for a stack trace of the exception.
 <ul>
 <li>Query store does not exist!</li>
 </ul></td>

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/post_execute_functions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/post_execute_functions.html.md.erb b/geode-docs/rest_apps/post_execute_functions.html.md.erb
index 32d8cd7..816d8fc 100644
--- a/geode-docs/rest_apps/post_execute_functions.html.md.erb
+++ b/geode-docs/rest_apps/post_execute_functions.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Execute Geode function on entire cluster or on a specified region, members and member groups.
+Execute a <%=vars.product_name%> function on the entire cluster or on a specified region, members, or member groups.
 
 ## Resource URL
 

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/post_if_absent_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/post_if_absent_data.html.md.erb b/geode-docs/rest_apps/post_if_absent_data.html.md.erb
index 89925e3..7c4c347 100644
--- a/geode-docs/rest_apps/post_if_absent_data.html.md.erb
+++ b/geode-docs/rest_apps/post_if_absent_data.html.md.erb
@@ -105,7 +105,7 @@ Location: http://localhost:8080/gemfire-api/v1/orders/2
 | 400 BAD REQUEST           | Returned if JSON content is malformed.                                                                                           |
 | 404 NOT FOUND             | Returned if the specified region does not exist.                                                                                 |
 | 409 CONFLICT              | Returned if the provided key already exists in the region.                                                                       |
-| 500 INTERNAL SERVER ERROR | Error encountered at Geode server. Check the HTTP response body for a stack trace of the exception. |
+| 500 INTERNAL SERVER ERROR | Error encountered at <%=vars.product_name%> server. Check the HTTP response body for a stack trace of the exception. |
 
 ## Example Error Response
 
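As a sketch of the put-if-absent flow documented in the hunk above, a client assembles a POST with the key as a URL query parameter. The host, port, region name (`orders`), key, and JSON payload below are hypothetical examples, not values from the commit:

```shell
# Sketch: build a put-if-absent (POST) request for the Developer REST API.
# Host, port, region ("orders"), and key ("2") are hypothetical examples.
BASE="http://localhost:8080/gemfire-api/v1"
REGION="orders"
KEY="2"
URL="$BASE/$REGION?key=$KEY"
echo "$URL"
# Against a running REST-enabled server the request itself would be:
#   curl -i -X POST "$URL" \
#        -H "Content-Type: application/json" \
#        -d '{"@type":"org.example.Order","id":2}'
# A 409 CONFLICT response means the key already exists in the region;
# a 500 response carries the server-side stack trace in the body.
```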

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/put_multiple_values_for_keys.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/put_multiple_values_for_keys.html.md.erb b/geode-docs/rest_apps/put_multiple_values_for_keys.html.md.erb
index e0e0c03..1c305dd 100644
--- a/geode-docs/rest_apps/put_multiple_values_for_keys.html.md.erb
+++ b/geode-docs/rest_apps/put_multiple_values_for_keys.html.md.erb
@@ -101,4 +101,4 @@ Response-payload: null
 | 400 BAD REQUEST           | Returned if one or more of the supplied keys is an invalid format.                                                               |
 | 404 NOT FOUND             | Returned if the region is not found.                                                                                             |
 | 414 REQUEST URI TOO LONG  | Returned if the URI is longer than the system component can handle. Limiting the size to 2000 bytes will work for most components.   |
-| 500 INTERNAL SERVER ERROR | Error encountered at Geode server. Check the HTTP response body for a stack trace of the exception. |
+| 500 INTERNAL SERVER ERROR | Error encountered at <%=vars.product_name%> server. Check the HTTP response body for a stack trace of the exception. |

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/put_replace_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/put_replace_data.html.md.erb b/geode-docs/rest_apps/put_replace_data.html.md.erb
index 323ce0c..12e8dd6 100644
--- a/geode-docs/rest_apps/put_replace_data.html.md.erb
+++ b/geode-docs/rest_apps/put_replace_data.html.md.erb
@@ -80,4 +80,4 @@ Response Payload: null
 |---------------------------|----------------------------------------------------------------------------------------------------------------------------------|
 | 400 BAD REQUEST           | Returned if the supplied key is not present in the region.                                                                       |
 | 404 NOT FOUND             | Returned if the region is not found.                                                                                             |
-| 500 INTERNAL SERVER ERROR | Error encountered at Geode server. Check the HTTP response body for a stack trace of the exception. |
+| 500 INTERNAL SERVER ERROR | Error encountered at <%=vars.product_name%> server. Check the HTTP response body for a stack trace of the exception. |

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/put_update_cas_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/put_update_cas_data.html.md.erb b/geode-docs/rest_apps/put_update_cas_data.html.md.erb
index 05360af..d900055 100644
--- a/geode-docs/rest_apps/put_update_cas_data.html.md.erb
+++ b/geode-docs/rest_apps/put_update_cas_data.html.md.erb
@@ -177,7 +177,7 @@ Response Payload: null
 | 400 BAD REQUEST           | Returned if the supplied key is not present in the region.                                                                       |
 | 404 NOT FOUND             | Returned if the region is not found.                                                                                             |
 | 409 CONFLICT              | Returned if the provided @old value of the key does not match the current value of the key.                                      |
-| 500 INTERNAL SERVER ERROR | Error encountered at Geode server. Check the HTTP response body for a stack trace of the exception. |
+| 500 INTERNAL SERVER ERROR | Error encountered at <%=vars.product_name%> server. Check the HTTP response body for a stack trace of the exception. |
 
 ## Example Error Response
 

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/put_update_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/put_update_data.html.md.erb b/geode-docs/rest_apps/put_update_data.html.md.erb
index d6c68c4..9285e0e 100644
--- a/geode-docs/rest_apps/put_update_data.html.md.erb
+++ b/geode-docs/rest_apps/put_update_data.html.md.erb
@@ -75,7 +75,7 @@ Response Payload:  null
 |---------------------------|----------------------------------------------------------------------------------------------------------------------------------|
 | 400 BAD REQUEST           | Returned if supplied key is an invalid format.                                                                                   |
 | 404 NOT FOUND             | Returned if the region is not found.                                                                                             |
-| 500 INTERNAL SERVER ERROR | Error encountered at Geode server. Check the HTTP response body for a stack trace of the exception. |
+| 500 INTERNAL SERVER ERROR | Error encountered at <%=vars.product_name%> server. Check the HTTP response body for a stack trace of the exception. |
 
 ## Implementation Notes
 

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/put_update_query.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/put_update_query.html.md.erb b/geode-docs/rest_apps/put_update_query.html.md.erb
index 339f4d7..c96e14e 100644
--- a/geode-docs/rest_apps/put_update_query.html.md.erb
+++ b/geode-docs/rest_apps/put_update_query.html.md.erb
@@ -99,7 +99,7 @@ Response Payload:  null
 </tr>
 <tr class="odd">
 <td>500 INTERNAL SERVER ERROR</td>
-<td>Error encountered at Geode server. Check the HTTP response body for a stack trace of the exception.</td>
+<td>Error encountered at <%=vars.product_name%> server. Check the HTTP response body for a stack trace of the exception.</td>
 </tr>
 </tbody>
 </table>

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/rest_admin.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/rest_admin.html.md.erb b/geode-docs/rest_apps/rest_admin.html.md.erb
index 527a835..9d6cbd5 100644
--- a/geode-docs/rest_apps/rest_admin.html.md.erb
+++ b/geode-docs/rest_apps/rest_admin.html.md.erb
@@ -21,11 +21,11 @@ limitations under the License.
 
 Administrative endpoints provide management and monitoring functionality for the REST API interface.
 
--   **[\[HEAD | GET\] /gemfire-api/v1/ping](../rest_apps/ping_service.html)**
+-   **[\[HEAD | GET\] /gemfire-api/v1/ping](ping_service.html)**
 
     Mechanism to check for REST API server and service availability.
 
--   **[GET /gemfire-api/v1/servers](../rest_apps/get_servers.html)**
+-   **[GET /gemfire-api/v1/servers](get_servers.html)**
 
     Mechanism to obtain a list of all members in the distributed system that are running the REST API service.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/rest_api_reference.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/rest_api_reference.html.md.erb b/geode-docs/rest_apps/rest_api_reference.html.md.erb
index 15f75e5..d39aba5 100644
--- a/geode-docs/rest_apps/rest_api_reference.html.md.erb
+++ b/geode-docs/rest_apps/rest_api_reference.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Apache Geode REST API Reference
----
+<% set_title(product_name_long, "REST API Reference") %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -19,22 +17,22 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-This section summarizes all available Apache Geode REST API resources and endpoints.
+This section summarizes all available <%=vars.product_name_long%> REST API resources and endpoints.
 
 **Note:**
-This documentation covers the **v1** release of Geode REST APIs for developing applications.
+This documentation covers the **v1** release of <%=vars.product_name%> REST APIs for developing applications.
 
 -   **[Region Endpoints](rest_regions.html)**
 
-    A Geode region is how Geode logically groups data within its cache. Regions stores data as entries, which are key-value pairs. Using the REST APIs you can read, add (or update), and delete region data.
+    A <%=vars.product_name%> region is how <%=vars.product_name%> logically groups data within its cache. Regions store data as entries, which are key-value pairs. Using the REST APIs, you can read, add (or update), and delete region data.
 
 -   **[Query Endpoints](rest_queries.html)**
 
-    Geode uses a query syntax based on OQL (Object Query Language) to query region data. Since Geode regions are key-value stores, values can range from simple byte arrays to complex nested objects.
+    <%=vars.product_name%> uses a query syntax based on OQL (Object Query Language) to query region data. Since <%=vars.product_name%> regions are key-value stores, values can range from simple byte arrays to complex nested objects.
 
 -   **[Function Endpoints](rest_functions.html)**
 
-    Geode functions allows you to write and execute server-side transactions and data operations. These may include anything ranging from initializing components or third-party services or aggregating data.
+    <%=vars.product_name%> functions allow you to write and execute server-side transactions and data operations. These may include anything from initializing components or third-party services to aggregating data.
 
 -   **[Administrative Endpoints](rest_admin.html)**
 

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/rest_examples.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/rest_examples.html.md.erb b/geode-docs/rest_apps/rest_examples.html.md.erb
index afa4f70..e40393d 100644
--- a/geode-docs/rest_apps/rest_examples.html.md.erb
+++ b/geode-docs/rest_apps/rest_examples.html.md.erb
@@ -22,7 +22,7 @@ limitations under the License.
 <a id="topic_lvp_cd5_m4"></a>
 
 
-This section provides examples that illustrate how multiple clients, both REST and native, can access the same Geode region data.
+This section provides examples that illustrate how multiple clients, both REST and native, can access the same <%=vars.product_name%> region data.
 
 **Note:**
 You must set PDX read-serialized to true when starting the cache server to achieve interoperability between different clients. See [Setup and Configuration](setup_config.html#topic_e21_qc5_m4) for instructions on starting up REST-enabled cache servers.
@@ -30,7 +30,7 @@ You must set PDX read-serialized to true when starting the cache server to achie
 The following examples demonstrate the following:
 
 1.  A Java REST client creates a Person object on key 1. This client references the following supporting examples (also provided):
-    1.  Geode cache client
+    1.  <%=vars.product_name%> cache client
     2.  REST client utility
     3.  Date Time utility
     4.  Person class
@@ -109,7 +109,7 @@ package org.apache.geode.restclient;
 }
 ```
 
-## \#1a. Geode Cache Java Client (MyJavaClient.java)
+## \#1a. <%=vars.product_name%> Cache Java Client (MyJavaClient.java)
 
 ``` pre
 package org.apache.geode.javaclient;

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/rest_functions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/rest_functions.html.md.erb b/geode-docs/rest_apps/rest_functions.html.md.erb
index f7ad25c..3f5c4fb 100644
--- a/geode-docs/rest_apps/rest_functions.html.md.erb
+++ b/geode-docs/rest_apps/rest_functions.html.md.erb
@@ -19,14 +19,14 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode functions allows you to write and execute server-side transactions and data operations. These may include anything ranging from initializing components or third-party services or aggregating data.
+<%=vars.product_name%> functions allow you to write and execute server-side transactions and data operations. These may include anything from initializing components or third-party services to aggregating data.
 
--   **[GET /gemfire-api/v1/functions](../rest_apps/get_functions.html)**
+-   **[GET /gemfire-api/v1/functions](get_functions.html)**
 
-    List all registered Geode functions in the cluster.
+    List all registered <%=vars.product_name%> functions in the cluster.
 
--   **[POST /gemfire-api/v1/functions/{functionId}](../rest_apps/post_execute_functions.html)**
+-   **[POST /gemfire-api/v1/functions/{functionId}](post_execute_functions.html)**
 
-    Execute Geode function on entire cluster or on a specified region, members and member groups.
+    Execute a <%=vars.product_name%> function on the entire cluster or on a specified region, members, or member groups.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/rest_prereqs.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/rest_prereqs.html.md.erb b/geode-docs/rest_apps/rest_prereqs.html.md.erb
index 55fe050..08c0f4d 100644
--- a/geode-docs/rest_apps/rest_prereqs.html.md.erb
+++ b/geode-docs/rest_apps/rest_prereqs.html.md.erb
@@ -19,17 +19,17 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Before development, it is important to understand the prerequisites and limitations of the Geode REST implementation.
+Before development, it is important to understand the prerequisites and limitations of the <%=vars.product_name%> REST implementation.
 
-Geode and REST-enabled applications accessing Geode are subject to the following rules and limitations:
+<%=vars.product_name%> and REST-enabled applications accessing <%=vars.product_name%> are subject to the following rules and limitations:
 
--   All domain objects, functions and function-arg classes must be properly configured and registered in the Geode deployment. Any functions that you wish to execute through the REST API must be available on the target member’s CLASSPATH.
+-   All domain objects, functions and function-arg classes must be properly configured and registered in the <%=vars.product_name%> deployment. Any functions that you wish to execute through the REST API must be available on the target member’s CLASSPATH.
 -   The current implementation supports only the **application/json** MIME type. Other return types (XML, objects, and so on) are not supported. Plain text is supported as a return type for some error messages.
 -   Keys are strictly of type String. For example, the request `PUT http://localhost:8080/gemfire-api/v1/customers/123.456` will add an entry for key ("123.456") of type String.
--   Some special formats of JSON documents are not supported in Geode REST. See [Key Types and JSON Support](troubleshooting.html#concept_gsv_zd5_m4) for examples.
--   To achieve interoperability between Geode Java clients (or Geode native clients) and REST clients, the following rules must be followed:
-    -   All Geode Java and native client classes operating on data also accessed by the REST interface must be PDX serialized, either via PDX autoserialization or by implementing `PdxSerializable`.
-    -   Geode Java clients and native clients can retrieve REST-enabled data either as a `PdxInstance` or as an actual object by using the `PdxInstance.getObject` method. If you use the latter method, you must first declare the object type (@type) in your POST or PUT request payload when creating the object in REST; and secondly, the Java client must have the actual domain class in its CLASSPATH.
+-   Some special formats of JSON documents are not supported in <%=vars.product_name%> REST. See [Key Types and JSON Support](troubleshooting.html#concept_gsv_zd5_m4) for examples.
+-   To achieve interoperability between <%=vars.product_name%> Java clients (or <%=vars.product_name%> native clients) and REST clients, the following rules must be followed:
+    -   All <%=vars.product_name%> Java and native client classes operating on data also accessed by the REST interface must be PDX serialized, either via PDX autoserialization or by implementing `PdxSerializable`.
+    -   <%=vars.product_name%> Java clients and native clients can retrieve REST-enabled data either as a `PdxInstance` or as an actual object by using the `PdxInstance.getObject` method. If you use the latter method, you must first declare the object type (@type) in your POST or PUT request payload when creating the object in REST; and second, the Java client must have the actual domain class in its CLASSPATH.
 -   Objects returned by REST-invoked functions must be returned as PdxInstance objects or other data types that can be written to JSON. You cannot return Java objects.
 -   REST client applications do not support single hop access or notification features.
 -   Specifying subregions as endpoints is not supported.

http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/rest_queries.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/rest_queries.html.md.erb b/geode-docs/rest_apps/rest_queries.html.md.erb
index 529d5f4..cf1de3e 100644
--- a/geode-docs/rest_apps/rest_queries.html.md.erb
+++ b/geode-docs/rest_apps/rest_queries.html.md.erb
@@ -19,29 +19,29 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode uses a query syntax based on OQL (Object Query Language) to query region data. Since Geode regions are key-value stores, values can range from simple byte arrays to complex nested objects.
+<%=vars.product_name%> uses a query syntax based on OQL (Object Query Language) to query region data. Since <%=vars.product_name%> regions are key-value stores, values can range from simple byte arrays to complex nested objects.
 
--   **[GET /gemfire-api/v1/queries](../rest_apps/get_queries.html)**
+-   **[GET /gemfire-api/v1/queries](get_queries.html)**
 
     List all parameterized queries by ID or name.
 
--   **[POST /gemfire-api/v1/queries?id=&lt;queryId&gt;&q=&lt;OQL-statement&gt;](../rest_apps/post_create_query.html)**
+-   **[POST /gemfire-api/v1/queries?id=&lt;queryId&gt;&q=&lt;OQL-statement&gt;](post_create_query.html)**
 
     Create (prepare) the specified parameterized query and assign the corresponding ID for lookup.
 
--   **[POST /gemfire-api/v1/queries/{queryId}](../rest_apps/post_execute_query.html)**
+-   **[POST /gemfire-api/v1/queries/{queryId}](post_execute_query.html)**
 
     Execute the specified named query passing in scalar values for query parameters in the POST body.
 
--   **[PUT /gemfire-api/v1/queries/{queryId}](../rest_apps/put_update_query.html)**
+-   **[PUT /gemfire-api/v1/queries/{queryId}](put_update_query.html)**
 
     Update a named, parameterized query.
 
--   **[DELETE /gemfire-api/v1/queries/{queryId}](../rest_apps/delete_named_query.html)**
+-   **[DELETE /gemfire-api/v1/queries/{queryId}](delete_named_query.html)**
 
     Delete the specified named query.
 
--   **[GET /gemfire-api/v1/queries/adhoc?q=&lt;OQL-statement&gt;](../rest_apps/get_execute_adhoc_query.html)**
+-   **[GET /gemfire-api/v1/queries/adhoc?q=&lt;OQL-statement&gt;](get_execute_adhoc_query.html)**
 
     Run an unnamed (unidentified), ad-hoc query passed as a URL parameter.
 

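The query-endpoint listing in the hunk above requires the OQL statement to be URL-encoded when it travels in the `q` query parameter. The sketch below shows that encoding step; the query id `selectOrders`, the OQL statement, host, and port are hypothetical, and the encoding helper assumes `python3` is available on the client machine:

```shell
# Sketch: prepare a named query via POST /gemfire-api/v1/queries?id=...&q=...
# Query id "selectOrders" and the OQL statement are hypothetical examples.
OQL='SELECT * FROM /orders o WHERE o.quantity > $1'
# URL-encode the OQL statement before placing it in the q parameter
# (assumes python3 is installed on the client).
ENCODED=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$OQL")
CREATE_URL="http://localhost:8080/gemfire-api/v1/queries?id=selectOrders&q=$ENCODED"
echo "$CREATE_URL"
# Executing the prepared query later binds $1 from the POST body:
#   curl -X POST http://localhost:8080/gemfire-api/v1/queries/selectOrders \
#        -H "Content-Type: application/json" \
#        -d '[{"@type":"int","@value":100}]'
```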
http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/rest_regions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/rest_regions.html.md.erb b/geode-docs/rest_apps/rest_regions.html.md.erb
index dd49f6f..47a8a45 100644
--- a/geode-docs/rest_apps/rest_regions.html.md.erb
+++ b/geode-docs/rest_apps/rest_regions.html.md.erb
@@ -19,63 +19,63 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-A Geode region is how Geode logically groups data within its cache. Regions stores data as entries, which are key-value pairs. Using the REST APIs you can read, add (or update), and delete region data.
+A <%=vars.product_name%> region is how <%=vars.product_name%> logically groups data within its cache. Regions store data as entries, which are key-value pairs. Using the REST APIs, you can read, add (or update), and delete region data.
 
 See also [Data Regions](../basic_config/data_regions/chapter_overview.html#data_regions) for more information on working with regions.
 
--   **[GET /gemfire-api/v1](../rest_apps/get_regions.html)**
+-   **[GET /gemfire-api/v1](get_regions.html)**
 
-    List all available resources (regions) in the Geode cluster.
+    List all available resources (regions) in the <%=vars.product_name%> cluster.
 
--   **[GET /gemfire-api/v1/{region}](../rest_apps/get_region_data.html)**
+-   **[GET /gemfire-api/v1/{region}](get_region_data.html)**
 
     Read data for the region. The optional limit URL query parameter specifies the number of values from the Region that will be returned. The default limit is 50. If the user specifies a limit of “ALL”, then all entry values for the region will be returned.
 
--   **[GET /gemfire-api/v1/{region}/keys](../rest_apps/get_region_keys.html)**
+-   **[GET /gemfire-api/v1/{region}/keys](get_region_keys.html)**
 
     List all keys for the specified region.
 
--   **[GET /gemfire-api/v1/{region}/{key}](../rest_apps/get_region_key_data.html)**
+-   **[GET /gemfire-api/v1/{region}/{key}](get_region_key_data.html)**
 
     Read data for a specific key in the region.
 
--   **[GET /gemfire-api/v1/{region}/{key1},{key2},...,{keyN}](../rest_apps/get_region_data_for_multiple_keys.html)**
+-   **[GET /gemfire-api/v1/{region}/{key1},{key2},...,{keyN}](get_region_data_for_multiple_keys.html)**
 
     Read data for multiple keys in the region.
 
--   **[HEAD /gemfire-api/v1/{region}](../rest_apps/head_region_size.html)**
+-   **[HEAD /gemfire-api/v1/{region}](head_region_size.html)**
 
     An HTTP HEAD request that returns region's size (number of entries) within the HEADERS, which is a response without the content-body. Region size is specified in the pre-defined header named "Resource-Count".
 
--   **[POST /gemfire-api/v1/{region}?key=&lt;key&gt;](../rest_apps/post_if_absent_data.html)**
+-   **[POST /gemfire-api/v1/{region}?key=&lt;key&gt;](post_if_absent_data.html)**
 
     Create (put-if-absent) data in region.
 
--   **[PUT /gemfire-api/v1/{region}/{key}](../rest_apps/put_update_data.html)**
+-   **[PUT /gemfire-api/v1/{region}/{key}](put_update_data.html)**
 
     Update or insert (put) data for key in region.
 
--   **[PUT /gemfire-api/v1/{region}/{key1},{key2},...{keyN}](../rest_apps/put_multiple_values_for_keys.html)**
+-   **[PUT /gemfire-api/v1/{region}/{key1},{key2},...{keyN}](put_multiple_values_for_keys.html)**
 
     Update or insert (put) data for multiple keys in the region.
 
--   **[PUT /gemfire-api/v1/{region}/{key}?op=REPLACE](../rest_apps/put_replace_data.html)**
+-   **[PUT /gemfire-api/v1/{region}/{key}?op=REPLACE](put_replace_data.html)**
 
     Update (replace) data with key(s) if and only if the key(s) exists in region. The Key(s) must be present in the Region for the update to occur.
 
--   **[PUT /gemfire-api/v1/{region}/{key}?op=CAS](../rest_apps/put_update_cas_data.html)**
+-   **[PUT /gemfire-api/v1/{region}/{key}?op=CAS](put_update_cas_data.html)**
 
     Update (compare-and-set) value having key with a new value if and only if the "@old" value sent matches the current value having key in region.
 
--   **[DELETE /gemfire-api/v1/{region}](../rest_apps/delete_all_data.html)**
+-   **[DELETE /gemfire-api/v1/{region}](delete_all_data.html)**
 
     Delete all entries in the region.
 
--   **[DELETE /gemfire-api/v1/{region}/{key}](../rest_apps/delete_data_for_key.html)**
+-   **[DELETE /gemfire-api/v1/{region}/{key}](delete_data_for_key.html)**
 
     Delete entry for specified key in the region.
 
--   **[DELETE /gemfire-api/v1/{region}/{key1},{key2},...{keyN}](../rest_apps/delete_data_for_multiple_keys.html)**
+-   **[DELETE /gemfire-api/v1/{region}/{key1},{key2},...{keyN}](delete_data_for_multiple_keys.html)**
 
     Delete entries for multiple keys in the region.
 

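Several of the region endpoints listed above address multiple keys in one request by joining the keys with commas in the URL path. A minimal sketch (region name `orders`, host, port, and keys are hypothetical):

```shell
# Sketch: address multiple keys in one request by comma-joining them
# in the URL path. Region ("orders") and keys are hypothetical.
BASE="http://localhost:8080/gemfire-api/v1"
KEYS="1,2,3"
MULTI_URL="$BASE/orders/$KEYS"
echo "$MULTI_URL"
# GET    "$MULTI_URL"  -> read values for keys 1, 2, and 3
# DELETE "$MULTI_URL"  -> delete entries for keys 1, 2, and 3
```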
http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/setup_config.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/setup_config.html.md.erb b/geode-docs/rest_apps/setup_config.html.md.erb
index 7ba24ea..ba2445f 100644
--- a/geode-docs/rest_apps/setup_config.html.md.erb
+++ b/geode-docs/rest_apps/setup_config.html.md.erb
@@ -19,12 +19,12 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-The Apache Geode Developer REST interface runs as an embedded HTTP or HTTPS service (Jetty server) within one
-or more Geode servers.
+The <%=vars.product_name_long%> Developer REST interface runs as an embedded HTTP or HTTPS service (Jetty server) within one
+or more <%=vars.product_name%> servers.
 
 # REST API Libraries
 
-All Geode REST interface classes and required JAR files are distributed as a WAR file with the Geode product distribution. You can find the file in the following location:
+All <%=vars.product_name%> REST interface classes and required JAR files are distributed as a WAR file with the <%=vars.product_name%> product distribution. You can find the file in the following location:
 
 <code>
 <i>install-dir</i>/tools/Extensions/geode-web-api-<i>n.n.n.</i>war
@@ -59,7 +59,7 @@ following:
 
 # <a id="setup_config_starting_rest" class="no-quick-link"></a> Starting the REST API Service
 
-To start a REST API service-enabled Geode deployment, configure PDX serialization for your
+To start a REST API service-enabled <%=vars.product_name%> deployment, configure PDX serialization for your
 cluster, then start the service on one or more server nodes.
 
 ## Configure PDX for your cluster
@@ -104,7 +104,7 @@ To configure PDX in your cluster, perform the following steps:
 ## Start the REST API Service on One or More Servers
 
 As described above, you can start the REST API service on a server by using `gfsh start server --start-rest-api`,
-or by setting the Geode property `start-dev-rest-api` to `true`. 
+or by setting the <%=vars.product_name%> property `start-dev-rest-api` to `true`. 
 If you wish to start the service on multiple servers, use `http-service-bind-address` and `http-service-port` to
 identify the cache server and specific port that will host REST services. If you do not specify
 the `http-service-port`, the default port is 7070, which may collide with other locators and servers.
@@ -157,7 +157,7 @@ start-dev-rest-api=true
 
 ## Verify That The Service is Running
 
-Verify that the Geode REST API service is up and running. To validate this, you can perform the following checks:
+Verify that the <%=vars.product_name%> REST API service is up and running. To validate this, you can perform the following checks:
 
 1.  Test the list resources endpoint (this step assumes that you have regions defined on your cluster):
 
@@ -192,7 +192,7 @@ APIs](using_swagger.html#concept_rlr_y3c_54) for more information.
 
 To turn on integrated security, start your servers and locators with the security-manager property
 set in your gemfire.properties file or on the gfsh command-line.
-The following example uses the sample implementation that is included in the Geode source,
+The following example uses the sample implementation that is included in the <%=vars.product_name%> source,
 `org.apache.geode.examples.security.ExampleSecurityManager`.
 
 This implementation requires a JSON security configuration file which defines the allowed users and their corresponding
@@ -230,7 +230,7 @@ http://super-user:1234567@localhost:8080/geode/v1
 
 # <a id="setup_config_implementing_auth" class="no-quick-link"></a>Programmatic Startup
 
-You can also start and configure Geode REST services programmatically. For example:
+You can also start and configure <%=vars.product_name%> REST services programmatically. For example:
 
 ``` pre
 import org.apache.geode.distributed.ServerLauncher;

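The hunk above cuts off just as the programmatic-startup code block begins, at the `ServerLauncher` import. For readers of the archive, here is a minimal sketch of what such an embedded launcher can look like — the member name and printed URL are illustrative assumptions, the property keys mirror the `gemfire.properties` settings shown earlier in this diff, and the block needs the Geode server jars on the classpath to compile and run:

``` pre
import org.apache.geode.distributed.ServerLauncher;

// Sketch only: start a cache server with the Developer REST API enabled.
// Property names follow the start-dev-rest-api / http-service-* keys
// referenced in the section above.
public class MyEmbeddedRestServer {
  public static void main(String[] args) {
    ServerLauncher launcher = new ServerLauncher.Builder()
        .setMemberName("embedded-rest-server")         // assumed member name
        .set("start-dev-rest-api", "true")             // enable the REST API
        .set("http-service-port", "8080")              // avoid the 7070 default
        .set("http-service-bind-address", "localhost")
        .setPdxReadSerialized(true)                    // PDX setup per the section above
        .build();

    launcher.start();
    System.out.println("REST service listening on http://localhost:8080/geode/v1");
  }
}
```

Once started this way, the same verification steps apply as for a `gfsh`-started server, e.g. requesting the list-resources endpoint at `/geode/v1`.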
http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/troubleshooting.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/troubleshooting.html.md.erb b/geode-docs/rest_apps/troubleshooting.html.md.erb
index 0f47fff..5177311 100644
--- a/geode-docs/rest_apps/troubleshooting.html.md.erb
+++ b/geode-docs/rest_apps/troubleshooting.html.md.erb
@@ -21,7 +21,7 @@ limitations under the License.
 <a id="topic_r5z_lj5_m4"></a>
 
 
-This section provides troubleshooting guidance and frequently asked questions about Geode Developer REST APIs.
+This section provides troubleshooting guidance and frequently asked questions about <%=vars.product_name%> Developer REST APIs.
 
 ## Checking if the REST API Service is Up and Running
 
@@ -45,15 +45,15 @@ If the server is not available, your client will receive an HTTP error code and
 
 ## Key Types and JSON Support
 
-When defining regions (your REST resources), you must only use scalar values for keys and also set value constraints in order to avoid producing JSON that cannot be parsed by Geode.
+When defining regions (your REST resources), you must only use scalar values for keys and also set value constraints in order to avoid producing JSON that cannot be parsed by <%=vars.product_name%>.
 
-If Geode regions are not defined with scalar values as keys and value constraints, then you may receive the following error message (even though the JSON is technically valid) in your REST client applications:
+If <%=vars.product_name%> regions are not defined with scalar values as keys and value constraints, then you may receive the following error message (even though the JSON is technically valid) in your REST client applications:
 
 ``` pre
 Json doc specified in request body is malformed..!!'
 ```
 
-For example, the following JSON documents are not supported by Geode:
+For example, the following JSON documents are not supported by <%=vars.product_name%>:
 
 ## Unsupported JSON Example 1
 

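The troubleshooting hunk above explains that regions must be defined with scalar keys and value constraints, or clients receive the `malformed` error even for syntactically valid JSON. A client can cheaply pre-check entries before issuing a REST request; the sketch below is illustrative only — the function names are ours, not part of any Geode API:

```python
import json

def is_scalar(value):
    """Scalar JSON types that are safe to use as region keys."""
    return isinstance(value, (str, int, float, bool))

def validate_entry(key, value):
    """Reject entries with non-scalar keys before sending them to the
    REST API, where they would otherwise trigger the error above."""
    if not is_scalar(key):
        raise TypeError("region keys must be scalar, got %r" % (type(key),))
    # The value must serialize to JSON matching the region's value constraint.
    return json.dumps(value)

# A well-formed entry: scalar key, JSON-serializable value.
payload = validate_entry("customer1", {"firstName": "Jane", "lastName": "Doe"})
```

A non-scalar key such as a tuple or list makes `validate_entry` raise `TypeError` instead of producing a request body the server would reject.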
http://git-wip-us.apache.org/repos/asf/geode/blob/d291a457/geode-docs/rest_apps/using_swagger.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/rest_apps/using_swagger.html.md.erb b/geode-docs/rest_apps/using_swagger.html.md.erb
index 995f33a..70f0d96 100644
--- a/geode-docs/rest_apps/using_swagger.html.md.erb
+++ b/geode-docs/rest_apps/using_swagger.html.md.erb
@@ -19,18 +19,18 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Apache Geode Developer REST APIs are integrated with the Swagger™ framework. This framework provides a browser-based test client that allows you to visualize and try out Geode REST APIs.
+<%=vars.product_name_long%> Developer REST APIs are integrated with the Swagger™ framework. This framework provides a browser-based test client that allows you to visualize and try out <%=vars.product_name%> REST APIs.
 
-Swagger application JARs are included in the Geode REST application WAR; you do not need to install any additional libraries to use Swagger.
+Swagger application JARs are included in the <%=vars.product_name%> REST application WAR; you do not need to install any additional libraries to use Swagger.
 
 The following example demonstrates how to access the Swagger UI to browse the APIs.
 
-1.  Start a Geode Locator and a Developer REST API-enabled server as described in [Setup and Configuration](setup_config.html#topic_e21_qc5_m4). 
+1.  Start a <%=vars.product_name%> Locator and a Developer REST API-enabled server as described in [Setup and Configuration](setup_config.html#topic_e21_qc5_m4). 
 Specify an `http-service-port` for the developer REST service, as the default port, 7070, is already taken by the locator. For example:
 
     ``` pre
     gfsh>start locator --name=locator1
-    Starting a Geode Locator in /Users/admin/apache-geode-1.2.0/locator1...
+    Starting a <%=vars.product_name%> Locator in /Users/admin/apache-geode-1.2.0/locator1...
     ....
     gfsh>start server --name=server1  --start-rest-api=true \
     --http-service-bind-address=localhost --J=-Dgemfire.http-service-port=8080
@@ -60,7 +60,7 @@ Specify an `http-service-port` for the developer REST service, as the default po
 7.  Add an entry to the region by expanding the **POST /v1/{region}** endpoint. <img src="../images/swagger_post_region.png" id="concept_rlr_y3c_54__image_sfk_c2m_x4" class="image" />
 8.  Click the **Try it out!** button to see the response body and response code. <img src="../images/swagger_post_region_response.png" id="concept_rlr_y3c_54__image_pmx_k2m_x4" class="image" />
 
-You can use the Swagger interface to try out additional Geode API endpoints and view sample responses.
+You can use the Swagger interface to try out additional <%=vars.product_name%> API endpoints and view sample responses.
 
 For more information on Swagger, see the [Swagger website](http://swagger.io/) and the [OpenAPI specification](https://github.com/OAI/OpenAPI-Specification).