Posted to commits@geode.apache.org by km...@apache.org on 2016/10/14 22:17:32 UTC

[34/94] [abbrv] [partial] incubator-geode git commit: GEODE-1952 Consolidated docs under a single geode-docs directory

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/distributed_regions/how_region_versioning_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/distributed_regions/how_region_versioning_works.html.md.erb b/geode-docs/developing/distributed_regions/how_region_versioning_works.html.md.erb
new file mode 100644
index 0000000..7e4c551
--- /dev/null
+++ b/geode-docs/developing/distributed_regions/how_region_versioning_works.html.md.erb
@@ -0,0 +1,110 @@
+---
+title: Consistency Checking by Region Type
+---
+
+<a id="topic_7A4B6C6169BD4B1ABD356294F744D236"></a>
+
+Geode performs different consistency checks depending on the type of region you have configured.
+
+## <a id="topic_7A4B6C6169BD4B1ABD356294F744D236__section_B090F5FB87D84104A7BE4BCEA6BAE6B7" class="no-quick-link"></a>Partitioned Region Consistency
+
+For a partitioned region, Geode maintains consistency by routing all updates on a given key to the Geode member that holds the primary copy of that key. That member holds a lock on the key while distributing updates to other members that host a copy of the key. Because all updates to a partitioned region are serialized on the primary Geode member, all members apply the updates in the same order and consistency is maintained at all times. See [Understanding Partitioning](../partitioned_regions/how_partitioning_works.html).
+
+## <a id="topic_7A4B6C6169BD4B1ABD356294F744D236__section_72DFB366C8F14ADBAF2A136669ECAB1E" class="no-quick-link"></a>Replicated Region Consistency
+
+For a replicated region, any member that hosts the region can update a key and distribute that update to other members without locking the key. It is possible that two members can update the same key at the same time (a concurrent update). It is also possible that, due to network latency, an update in one member is distributed to other members at a later time, after those members have already applied more recent updates to the key (an out-of-order update). By default, Geode members perform conflict checking before applying region updates in order to detect and consistently resolve concurrent and out-of-order updates. Conflict checking ensures that region data eventually becomes consistent on all members that host the region. The conflict checking behavior for replicated regions is summarized as follows:
+
+-   If two members update the same key at the same time, conflict checking ensures that all members eventually apply the same value, which is the value of one of the two concurrent updates.
+-   If a member receives an out-of-order update (an update that is received after one or more recent updates were applied), conflict checking ensures that the out-of-order update is discarded and not applied to the cache.
+
+[How Consistency Checking Works for Replicated Regions](#topic_C5B74CCDD909403C815639339AA03758) and [How Destroy and Clear Operations Are Resolved](#topic_321B05044B6641FCAEFABBF5066BD399) provide more details about how Geode performs conflict checking when applying an update.
+
+## <a id="topic_7A4B6C6169BD4B1ABD356294F744D236__section_313045F430EE459CB411CAAE7B00F3D8" class="no-quick-link"></a>Non-Replicated Regions and Client Cache Consistency
+
+When a member receives and applies an update for an entry in a non-replicated region, it performs conflict checking in the same way as for a replicated region. However, if the member initiates an operation on an entry that is not present in the region, it first passes that operation to a member that hosts a replica of the region. The member that hosts the replica generates and provides the version information necessary for subsequent conflict checking. See [How Consistency Checking Works for Replicated Regions](#topic_C5B74CCDD909403C815639339AA03758).
+
+Client caches also perform consistency checking in the same way when they receive an update for a region entry. However, all region operations that originate in the client cache are first passed on to an available Geode server, which generates the version information necessary for subsequent conflict checking.
+
+## <a id="topic_B64891585E7F4358A633C792F10FA23E" class="no-quick-link"></a>Configuring Consistency Checking
+
+Geode enables consistency checking by default. You cannot disable consistency checking for persistent regions. For all other regions, you can explicitly enable or disable consistency checking by setting the `concurrency-checks-enabled` region attribute in `cache.xml` to "true" or "false."
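+
+For example, the following sketch disables consistency checking for a replicated region in `cache.xml` (the region name is illustrative):
+
+``` pre
+<region name="exampleRegion">
+  <region-attributes refid="REPLICATE" concurrency-checks-enabled="false"/>
+</region>
+```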
+
+All Geode members that host a region must use the same `concurrency-checks-enabled` setting for that region.
+
+A client cache can disable consistency checking for a region even if server caches enable consistency checking for the same region. This configuration ensures that the client sees all events for the region, but it does not prevent the client cache region from becoming out-of-sync with the server cache.
+
+See [&lt;region-attributes&gt;](../../reference/topics/cache_xml.html#region-attributes).
+
+**Note:**
+Regions that do not enable consistency checking remain subject to race conditions. Concurrent updates may result in one or more members having different values for the same key. Network latency can result in older updates being applied to a key after more recent updates have occurred.
+
+## <a id="topic_0BDACA590B2C4974AC9C450397FE70B2" class="no-quick-link"></a>Overhead for Consistency Checks
+
+Consistency checking requires additional overhead for storing and distributing version and timestamp information, as well as for maintaining destroyed entries for a period of time to meet consistency requirements.
+
+To provide consistency checking, each region entry uses an additional 16 bytes. When an entry is deleted, a tombstone entry of approximately 13 bytes is created and maintained until the tombstone expires or is garbage-collected in the member. (When an entry is destroyed, the member temporarily retains the entry with its current version stamp to detect possible conflicts with operations that have occurred. The retained entry is referred to as a tombstone.) See [How Destroy and Clear Operations Are Resolved](#topic_321B05044B6641FCAEFABBF5066BD399).
+
+If you cannot support the additional overhead in your deployment, you can disable consistency checks by setting `concurrency-checks-enabled` to "false" for each region. See [Consistency for Region Updates](region_entry_versions.html#topic_CF2798D3E12647F182C2CEC4A46E2045).
+
+## <a id="topic_C5B74CCDD909403C815639339AA03758" class="no-quick-link"></a>How Consistency Checking Works for Replicated Regions
+
+Each region stores version and timestamp information for use in conflict detection. Geode members use the recorded information to detect and resolve conflicts consistently before applying a distributed update.
+
+<a id="topic_C5B74CCDD909403C815639339AA03758__section_763B071061C94D1E82E8883325294547"></a>
+By default, each entry in a region stores the ID of the Geode member that last updated the entry, as well as a version stamp for the entry that is incremented each time an update occurs. The version information is stored in each local entry, and the version stamp is distributed to other Geode members when the local entry is updated.
+
+A Geode member or client that receives an update message first compares the update version stamp with the version stamp recorded in its local cache. If the update version stamp is larger, it represents a newer version of the entry, so the receiving member applies the update locally and updates the version information. A smaller update version stamp indicates an out-of-order update, which is discarded.
+
+An identical version stamp indicates that multiple Geode members updated the same entry at the same time. To resolve a concurrent update, a Geode member always applies (or keeps) the region entry that has the highest membership ID; the region entry having the lower membership ID is discarded.
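+
+The decision rule described above can be sketched as follows (an illustrative Java fragment, not Geode's internal implementation; version stamps and membership IDs are simplified to integers):
+
+``` pre
+// Sketch: decide whether to apply a distributed update to the local entry
+boolean shouldApplyUpdate(long updateVersion, int updateMemberId,
+                          long localVersion, int localMemberId) {
+  if (updateVersion > localVersion) {
+    return true;              // newer version: apply the update
+  }
+  if (updateVersion < localVersion) {
+    return false;             // out-of-order update: discard it
+  }
+  // identical versions indicate a concurrent update:
+  // the update from the member with the highest membership ID wins
+  return updateMemberId > localMemberId;
+}
+```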
+
+**Note:**
+When a Geode member discards an update message (either for an out-of-order update or when resolving a concurrent update), it does not pass the discarded event to an event listener for the region. You can track the number of discarded updates for each member using the `conflatedEvents` statistic. See [Geode Statistics List](../../reference/statistics/statistics_list.html#statistics_list). Some members may discard an update while other members apply the update, depending on the order in which each member receives the update. For this reason, the `conflatedEvents` statistic differs for each Geode member. The example below describes this behavior in more detail.
+
+The following example shows how a concurrent update is handled in a distributed system of three Geode members. Assume that members A, B, and C have membership IDs of 1, 2, and 3, respectively. Each member currently stores an entry, X, in its cache at version C2 (the entry was last updated by member C):
+
+**Step 1:** An application updates entry X on Geode member A at the same time another application updates entry X on member C. Each member increments the version stamp for the entry and records the version stamp with their member ID in their local caches. In this case the entry was originally at version C2, so each member updates the version to 3 (A3 and C3, respectively) in their local caches.
+
+<img src="../../images_svg/region_entry_versions_1.svg" id="topic_C5B74CCDD909403C815639339AA03758__image_nt5_ptw_4r" class="image" />
+
+**Step 2:** Member A distributes its update message to members B and C.
+
+Member B compares the update version stamp (3) to its recorded version stamp (2) and applies the update to its local cache as version A3. On this member the update is applied for the time being and passed on to configured event listeners.
+
+Member C compares the update version stamp (3) to its recorded version stamp (3) and identifies a concurrent update. To resolve the conflict, member C next compares the membership ID of the update to the membership ID stored in its local cache. Because the membership ID of the update (A3) is lower than the ID stored in the cache (C3), member C discards the update (and increments the `conflatedEvents` statistic).
+
+<img src="../../images_svg/region_entry_versions_2.svg" id="topic_C5B74CCDD909403C815639339AA03758__image_ocs_35b_pr" class="image" />
+
+**Step 3:** Member C distributes the update message to members A and B.
+
+Members A and B compare the update version stamp (3) to their recorded version stamps (3) and identify the concurrent update. To resolve the conflict, both members compare the membership ID of the update with the membership ID stored in their local caches. Because the membership ID of A in the cached value is lower than the membership ID of C in the update, both members record the update C3 in their local caches, overwriting the previous value.
+
+At this point, all members that host the region have achieved a consistent state for the concurrent updates on members A and C.
+
+<img src="../../images_svg/region_entry_versions_3.svg" id="topic_C5B74CCDD909403C815639339AA03758__image_gsv_k5b_pr" class="image" />
+
+## <a id="topic_321B05044B6641FCAEFABBF5066BD399" class="no-quick-link"></a>How Destroy and Clear Operations Are Resolved
+
+When consistency checking is enabled for a region, a Geode member does not immediately remove an entry from the region when an application destroys the entry. Instead, the member retains the entry with its current version stamp for a period of time in order to detect possible conflicts with operations that have occurred. The retained entry is referred to as a *tombstone*. Geode retains tombstones for partitioned regions and non-replicated regions as well as for replicated regions, in order to provide consistency.
+
+A tombstone in a client cache or a non-replicated region expires after 8 minutes, at which point the tombstone is immediately removed from the cache.
+
+A tombstone for a replicated or partitioned region expires after 10 minutes. Expired tombstones are eligible for garbage collection by the Geode member. Garbage collection is automatically triggered after 100,000 tombstones of any type have timed out in the local Geode member. You can optionally set the `gemfire.tombstone-gc-threshold` property to a value smaller than 100000 to perform garbage collection more frequently.
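+
+For example, the following gfsh sketch sets the property when starting a server (the member name and threshold value are illustrative):
+
+``` pre
+gfsh>start server --name=server1 --J=-Dgemfire.tombstone-gc-threshold=50000
+```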
+
+**Note:**
+To avoid out-of-memory errors, a Geode member also initiates garbage collection for tombstones when the amount of free memory drops below 30 percent of total memory.
+
+You can monitor the total number of tombstones in a cache using the `tombstoneCount` statistic in `CachePerfStats`. The `tombstoneGCCount` statistic records the total number of tombstone garbage collection cycles that a member has performed. `replicatedTombstonesSize` and `nonReplicatedTombstonesSize` show the approximate number of bytes that are currently consumed by tombstones in replicated or partitioned regions, and in non-replicated regions, respectively. See [Geode Statistics List](../../reference/statistics/statistics_list.html#statistics_list).
+
+## <a id="topic_321B05044B6641FCAEFABBF5066BD399__section_4D0140E96A3141EB8D983D0A43464097" class="no-quick-link"></a>About Region.clear() Operations
+
+Region entry version stamps and tombstones ensure consistency only when individual entries are destroyed. A `Region.clear()` operation, however, operates on all entries in a region at once. To provide consistency for `Region.clear()` operations, Geode obtains a distributed read/write lock for the region, which blocks all concurrent updates to the region. Any updates that were initiated before the clear operation are allowed to complete before the region is cleared.
+
+## <a id="topic_32ACFA5542C74F3583ECD30467F352B0" class="no-quick-link"></a>Transactions with Consistent Regions
+
+A transaction that modifies a region having consistency checking enabled generates all necessary version information for region updates when the transaction commits.
+
+If a transaction modifies a normal, preloaded or empty region, the transaction is first delegated to a Geode member that holds a replicate for the region. This behavior is similar to the transactional behavior for partitioned regions, where the partitioned region transaction is forwarded to a member that hosts the primary for the partitioned region update.
+
+The limitation for transactions on normal, preloaded, or empty regions is that, when consistency checking is enabled, a transaction cannot perform a `localDestroy` or `localInvalidate` operation against the region. Geode throws an `UnsupportedOperationInTransactionException` in such cases. An application should use a distributed `destroy` or `invalidate` operation in place of a `localDestroy` or `localInvalidate` when consistency checks are enabled.
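+
+The following Java sketch illustrates the distinction (the cache, region, and key are assumed to exist; names are illustrative):
+
+``` pre
+CacheTransactionManager txMgr = cache.getCacheTransactionManager();
+txMgr.begin();
+// Allowed: a distributed destroy generates version information at commit
+region.destroy("key-1");
+// Not allowed when consistency checking is enabled; throws
+// UnsupportedOperationInTransactionException:
+// region.localDestroy("key-1");
+txMgr.commit();
+```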
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/distributed_regions/how_region_versioning_works_wan.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/distributed_regions/how_region_versioning_works_wan.html.md.erb b/geode-docs/developing/distributed_regions/how_region_versioning_works_wan.html.md.erb
new file mode 100644
index 0000000..0ce2f04
--- /dev/null
+++ b/geode-docs/developing/distributed_regions/how_region_versioning_works_wan.html.md.erb
@@ -0,0 +1,25 @@
+---
+title:  How Consistency Is Achieved in WAN Deployments
+---
+
+When two or more Geode systems are configured to distribute events over a WAN, each system performs local consistency checking before it distributes an event to a configured gateway sender. Discarded events are not distributed across the WAN.
+
+Regions can also be configured to distribute updates to other Geode clusters over a WAN. With a distributed WAN configuration, multiple gateway senders asynchronously queue and send region updates to another Geode cluster. It is possible for multiple sites to send updates to the same region entry at the same time. It is also possible that, due to a slow WAN connection, a cluster might receive region updates after a considerable delay, after it has already applied more recent updates to a region. To ensure that WAN-replicated regions eventually reach a consistent state, Geode ensures that each cluster performs consistency checking on regions before queuing updates to a gateway sender for WAN distribution. In other words, region conflicts are first detected and resolved in the local cluster, using the techniques described in the previous sections.
+
+When a Geode cluster in a WAN configuration receives a distributed update, conflict checking is performed to ensure that all sites apply updates in the same way. This ensures that regions eventually reach a consistent state across all Geode clusters. The default conflict checking behavior for WAN-replicated regions is summarized as follows:
+
+-   If an update is received from the same Geode cluster that last updated the region entry, then there is no conflict and the update is applied.
+-   If an update is received from a different Geode cluster than the one that last updated the region entry, then a potential conflict exists. A cluster applies the update only when the update has a timestamp that is later than the timestamp currently recorded in the cache.
+
+**Note:**
+If you use the default conflict checking feature for WAN deployments, you must ensure that all Geode members in all clusters synchronize their system clocks. For example, use a common NTP server for all Geode members that participate in a WAN deployment.
+
+As an alternative to the default conflict checking behavior for WAN deployments, you can develop and deploy a custom conflict resolver for handling region events that are distributed over a WAN. Using a custom resolver enables you to handle conflicts using criteria other than, or in addition to, timestamp information. For example, you might always prioritize updates that originate from a particular site, given that the timestamp value is within a certain range.
+
+When a gateway sender distributes an event to another Geode site, it adds the distributed system ID of the local cluster, as well as a timestamp for the event. In a default configuration, the cluster that receives the event examines the timestamp to determine whether or not the event should be applied. If the timestamp of the update is earlier than the local timestamp, the cluster discards the event. If the timestamp is the same as the local timestamp, then the entry having the highest distributed system ID is applied (or kept).
+
+You can override the default consistency checking for WAN events by installing a conflict resolver plug-in for the region. If a conflict resolver is installed, then any event that can potentially cause a conflict (any event that originated from a different distributed system ID than the ID that last modified the entry) is delivered to the conflict resolver. The resolver plug-in then makes the sole determination for which update to apply or keep.
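+
+As an illustration, the following Java sketch of a resolver prioritizes updates from one site and otherwise keeps the entry with the newer timestamp. It assumes the resolver interfaces reside in `org.apache.geode.cache.util`, and the preferred site ID is hypothetical:
+
+``` pre
+import org.apache.geode.cache.util.GatewayConflictHelper;
+import org.apache.geode.cache.util.GatewayConflictResolver;
+import org.apache.geode.cache.util.TimestampedEntryEvent;
+
+public class PreferSiteOneResolver implements GatewayConflictResolver {
+  private static final int PREFERRED_SITE_ID = 1; // hypothetical site ID
+
+  public void onEvent(TimestampedEntryEvent event, GatewayConflictHelper helper) {
+    // Returning without calling the helper allows the event to be applied
+    if (event.getNewDistributedSystemID() == PREFERRED_SITE_ID) {
+      return;
+    }
+    // Otherwise keep the existing entry if it carries the newer timestamp
+    if (event.getNewTimestamp() < event.getOldTimestamp()) {
+      helper.disallowEvent();
+    }
+  }
+}
+```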
+
+See "Implementing a GatewayConflictResolver" under [Resolving Conflicting Events](../events/resolving_multisite_conflicts.html#topic_E97BB68748F14987916CD1A50E4B4542) to configure a custom resolver.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/distributed_regions/how_replication_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/distributed_regions/how_replication_works.html.md.erb b/geode-docs/developing/distributed_regions/how_replication_works.html.md.erb
new file mode 100644
index 0000000..73bc5e1
--- /dev/null
+++ b/geode-docs/developing/distributed_regions/how_replication_works.html.md.erb
@@ -0,0 +1,34 @@
+---
+title:  How Replication and Preloading Work
+---
+
+To work with replicated and preloaded regions, you should understand how their data is initialized and maintained in the cache.
+
+<a id="how_replication_works__section_C75BB463A0584491ABD982A55E5A050F"></a>
+Replicated and preloaded regions are configured by using one of the `REPLICATE` region shortcut settings, or by setting the region attribute `data-policy` to `replicate`, `persistent-replicate`, or `preloaded`.
+
+## <a id="how_replication_works__section_B4E76BBCC6104A27BC0A8ECA6B9CDF91" class="no-quick-link"></a>Initialization of Replicated and Preloaded Regions
+
+At region creation, the system initializes the preloaded or replicated region with the most complete and up-to-date data set it can find. The system uses these data sources to initialize the new region, following this order of preference:
+
+1.  Another replicated region that is already defined in the distributed system.
+2.  For a persistent replicate region only: disk files, followed by a union of all copies of the region in the distributed cache.
+3.  For a preloaded region only: another preloaded region that is already defined in the distributed system.
+4.  The union of all copies of the region in the distributed cache.
+
+<img src="../../images_svg/distributed_replica_preload.svg" id="how_replication_works__image_5F50EBA30CE3408091F07A198F821741" class="image" />
+
+While a region is being initialized from a replicated or preloaded region, if the source region crashes, the initialization starts over.
+
+If a union of regions is used for initialization, as in the figure, and one of the individual source regions goes away during the initialization (due to cache closure, member crash, or region destruction), the new region may contain a partial data set from the crashed source region. When this happens, there is no warning logged or exception thrown. The new region still has a complete set of the remaining members' regions.
+
+## <a id="how_replication_works__section_6BE7555A711E4CA490B02E58B5DDE396" class="no-quick-link"></a>Behavior of Replicated and Preloaded Regions After Initialization
+
+Once initialized, a preloaded region operates like a region with a `normal` `data-policy`, receiving distributions only for entries it has defined in the local cache.
+
+<img src="../../images_svg/distributed_preload.svg" id="how_replication_works__image_994CA599B1004D3F95E1BB7C4FAC2AEF" class="image" />
+
+If the region is configured as a replicated region, it receives all new creations in the distributed region from the other members. This is the push distribution model. Unlike the preloaded region, the replicated region has a contract that states it will hold all entries that are present anywhere in the distributed region.
+
+<img src="../../images_svg/distributed_replica.svg" id="how_replication_works__image_2E7F3EB6213A47FEA3ABE32FD2CB1503" class="image" />
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/distributed_regions/locking_in_global_regions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/distributed_regions/locking_in_global_regions.html.md.erb b/geode-docs/developing/distributed_regions/locking_in_global_regions.html.md.erb
new file mode 100644
index 0000000..6a6e030
--- /dev/null
+++ b/geode-docs/developing/distributed_regions/locking_in_global_regions.html.md.erb
@@ -0,0 +1,92 @@
+---
+title:  Locking in Global Regions
+---
+
+In global regions, the system locks entries and the region during updates. You can also explicitly lock the region and its entries as needed by your application. Locking includes system settings that help you optimize performance and locking behavior between your members.
+
+<a id="locking_in_global_regions__section_065B3A57CCCA4F17821D170A312B6675"></a>
+In regions with global scope, locking helps ensure cache consistency.
+
+Locking of regions and entries is done in two ways:
+
+1.  **Implicit**. Geode automatically locks global regions and their data entries during most operations. Region invalidation and destruction do not acquire locks.
+2.  **Explicit**. You can use the API to explicitly lock the region and its entries. Do this to guarantee atomicity in tasks with multi-step distributed operations. The `Region` methods `org.apache.geode.cache.Region.getDistributedLock` and `org.apache.geode.cache.Region.getRegionDistributedLock` return instances of `java.util.concurrent.locks.Lock` for a region and a specified key.
+
+    **Note:**
+    You must use the `Region` API to lock regions and region entries. Do not use the `DistributedLockService` in the `org.apache.geode.distributed` package. That service is available only for locking in arbitrary distributed applications. It is not compatible with the `Region` locking methods.
+
+## <a id="locking_in_global_regions__section_5B47F9C5C27A4B789A3498AC553BB1FB" class="no-quick-link"></a>Lock Timeouts
+
+Getting a lock on a region or entry is a two-step process of getting a lock instance for the entity and then using the instance to set the lock. Once you have the lock, you hold it for your operations, then release it for someone else to use. You can set limits on the time spent waiting to get a lock and the time spent holding it. Both implicit and explicit locking operations are affected by the timeouts:
+
+-   The lock timeout limits the wait to get a lock. The cache attribute `lock-timeout` governs implicit lock requests. For explicit locking, specify the wait time through your calls to the instance of `java.util.concurrent.locks.Lock` returned from the `Region` API. You can wait a specific amount of time, return immediately either with or without the lock, or wait indefinitely, as shown in the sketch after this list.
+
+    ``` pre
+    <cache lock-timeout="60"> 
+    </cache>
+    ```
+
+    gfsh:
+
+    ``` pre
+    gfsh>alter runtime --lock-timeout=60 
+    ```
+
+-   The lock lease limits how long a lock can be held before it is automatically released. A timed lock allows the application to recover when a member fails to release an obtained lock within the lease time. For all locking, this timeout is set with the cache attribute `lock-lease`.
+
+    ``` pre
+    <cache lock-lease="120"> </cache>
+    ```
+
+    gfsh:
+
+    ``` pre
+    gfsh>alter runtime --lock-lease=120
+    ```
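+
+The following Java sketch shows the explicit wait options described in the first bullet above (it assumes a region field as in the examples below, uses `java.util.concurrent.TimeUnit`, and `tryLock` with a timeout can throw `InterruptedException`; the key name is illustrative):
+
+``` pre
+Lock entryLock = this.currRegion.getDistributedLock("key-1");
+
+// Wait up to 60 seconds to get the lock
+if (entryLock.tryLock(60, TimeUnit.SECONDS)) {
+  try {
+    // ... operations guarded by the entry lock ...
+  } finally {
+    entryLock.unlock();
+  }
+}
+
+// Or return immediately, with or without the lock
+boolean locked = entryLock.tryLock();
+
+// Or wait indefinitely
+entryLock.lock();
+```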
+
+## <a id="locking_in_global_regions__section_031727F04D114B42944872360A386907" class="no-quick-link"></a>Optimize Locking Performance
+
+For each global region, one of the members with the region defined will be assigned the job of lock grantor. The lock grantor runs the lock service that receives lock requests from system members, queues them as needed, and grants them in the order received.
+
+The lock grantor is at a slight advantage over other members, as it is the only one that does not have to send a message to request a lock. The grantor's requests cost the least for the same reason. Thus, you can optimize locking in a region by assigning lock grantor status to the member that acquires the most locks. This may be the member that performs the most puts and thus requires the most implicit locks, or it may be the member that performs many explicit locks.
+
+The lock grantor is assigned as follows:
+
+-   Any member with the region defined that requests lock grantor status is assigned it. Thus at any time, the most recent member to make the request is the lock grantor.
+-   If no member requests lock grantor status for a region, or if the current lock grantor goes away, the system assigns a lock grantor from the members that have the region defined in their caches.
+
+You can request lock grantor status:
+
+1.  At region creation through the `is-lock-grantor` attribute. You can retrieve this attribute through the region method, `getAttributes`, to see whether you requested to be lock grantor for the region.
+    **Note:**
+    The `is-lock-grantor` attribute does not change after region creation.
+
+2.  After region creation through the region `becomeLockGrantor` method. Changing lock grantors should be done with care, however, as doing so takes cycles from other operations. In particular, be careful to avoid creating a situation where you have members vying for lock grantor status.
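+
+A minimal Java sketch of both approaches (the cache and region name are illustrative):
+
+``` pre
+Region region = cache.getRegion("globalRegion");
+
+// 1. Check whether this member requested lock grantor status at region creation
+boolean requestedGrantor = region.getAttributes().isLockGrantor();
+
+// 2. Explicitly take over lock grantor duties for the region
+if (!requestedGrantor) {
+  region.becomeLockGrantor();
+}
+```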
+
+## <a id="locking_in_global_regions__section_34661E38DFF9420B89C1A2B25F232D53" class="no-quick-link"></a>Examples
+
+These two examples show entry locking and unlocking. Note how the entry's `Lock` object is obtained and then its `lock` method is invoked to actually set the lock. The example program stores the entry lock information in a hash table for future reference.
+
+``` pre
+/* Lock a data entry */ 
+HashMap lockedItemsMap = new HashMap(); 
+...
+  String entryKey = ... 
+  if (!lockedItemsMap.containsKey(entryKey)) 
+  { 
+    Lock lock = this.currRegion.getDistributedLock(entryKey); 
+    lock.lock(); 
+    lockedItemsMap.put(entryKey, lock); 
+  } 
+  ...
+```
+
+``` pre
+/* Unlock a data entry */ 
+  String entryKey = ... 
+  if (lockedItemsMap.containsKey(entryKey)) 
+  { 
+    Lock lock = (Lock) lockedItemsMap.remove(entryKey);
+    lock.unlock();
+  }
+```

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/distributed_regions/managing_distributed_regions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/distributed_regions/managing_distributed_regions.html.md.erb b/geode-docs/developing/distributed_regions/managing_distributed_regions.html.md.erb
new file mode 100644
index 0000000..f36d8ca
--- /dev/null
+++ b/geode-docs/developing/distributed_regions/managing_distributed_regions.html.md.erb
@@ -0,0 +1,47 @@
+---
+title:  Configure Distributed, Replicated, and Preloaded Regions
+---
+
+Plan the configuration and ongoing management of your distributed, replicated, and preloaded regions, and configure the regions.
+
+<a id="configure_distributed_region__section_11E9E1B3EB5845D9A4FB226A992B8D0D"></a>
+Before you begin, understand [Basic Configuration and Programming](../../basic_config/book_intro.html).
+
+1.  Choose the region shortcut setting that most closely matches your region configuration. See **`org.apache.geode.cache.RegionShortcut`** or [Region Shortcuts](../../reference/topics/chapter_overview_regionshortcuts.html#concept_ymp_rkz_4df). To create a replicated region, use one of the `REPLICATE` shortcut settings. To create a preloaded region, set your region `data-policy` to `preloaded`. This `cache.xml` declaration creates a replicated region:
+
+    ``` pre
+    <region-attributes refid="REPLICATE"> 
+    </region-attributes>
+    ```
+
+    You can also use gfsh to configure a region. For example:
+
+    ``` pre
+    gfsh>create region --name=regionA --type=REPLICATE
+    ```
+
+    See [Region Types](../region_options/region_types.html#region_types).
+
+2.  Choose the level of distribution for your region. The region shortcuts in `RegionShortcut` for distributed regions use `distributed-ack` scope. If you need a different scope, set the `region-attributes` `scope` to `distributed-no-ack` or `global`.
+
+    Example:
+
+    ``` pre
+    <region-attributes refid="REPLICATE" scope="distributed-no-ack"> 
+    </region-attributes>
+    ```
+
+3.  If you are using the `distributed-ack` scope, optionally enable concurrency checks for the region.
+
+    Example:
+
+    ``` pre
+    <region-attributes refid="REPLICATE" scope="distributed-ack" concurrency-checks-enabled="true"> 
+    </region-attributes>
+    ```
+
+4.  If you are using `global` scope, program any explicit locking you need in addition to the automated locking provided by Geode.
+
+## <a id="configure_distributed_region__section_6F53FB58B8A84D0F8086AFDB08A649F9" class="no-quick-link"></a>Local Destroy and Invalidate in the Replicated Region
+
+Of the operations that affect only the local cache, only local region destroy is allowed in a replicated region. Other local operations are not configurable or throw exceptions. For example, you cannot use local destroy as the expiration action on a replicated region. This is because local operations such as entry invalidation and destruction remove data from the local cache only. A replicated region would no longer be complete if data were removed locally but left intact in the other copies of the region.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/distributed_regions/region_entry_versions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/distributed_regions/region_entry_versions.html.md.erb b/geode-docs/developing/distributed_regions/region_entry_versions.html.md.erb
new file mode 100644
index 0000000..1781fc7
--- /dev/null
+++ b/geode-docs/developing/distributed_regions/region_entry_versions.html.md.erb
@@ -0,0 +1,34 @@
+---
+title: Consistency for Region Updates
+---
+
+<a id="topic_CF2798D3E12647F182C2CEC4A46E2045"></a>
+
+
+Geode ensures that all copies of a region eventually reach a consistent state on all members and clients that host the region, including Geode members that distribute region events.
+
+-   **[Consistency Checking by Region Type](../../developing/distributed_regions/how_region_versioning_works.html#topic_7A4B6C6169BD4B1ABD356294F744D236)**
+
+    Geode performs different consistency checks depending on the type of region you have configured.
+
+-   **[Configuring Consistency Checking](../../developing/distributed_regions/how_region_versioning_works.html#topic_B64891585E7F4358A633C792F10FA23E)**
+
+    Geode enables consistency checking by default. You cannot disable consistency checking for persistent regions. For all other regions, you can explicitly enable or disable consistency checking by setting the `concurrency-checks-enabled` region attribute in `cache.xml` to "true" or "false."
+
+-   **[Overhead for Consistency Checks](../../developing/distributed_regions/how_region_versioning_works.html#topic_0BDACA590B2C4974AC9C450397FE70B2)**
+
+    Consistency checking requires additional overhead for storing and distributing version and timestamp information, as well as for maintaining destroyed entries for a period of time to meet consistency requirements.
+
+-   **[How Consistency Checking Works for Replicated Regions](../../developing/distributed_regions/how_region_versioning_works.html#topic_C5B74CCDD909403C815639339AA03758)**
+
+    Each region stores version and timestamp information for use in conflict detection. Geode members use the recorded information to detect and resolve conflicts consistently before applying a distributed update.
+
+-   **[How Destroy and Clear Operations Are Resolved](../../developing/distributed_regions/how_region_versioning_works.html#topic_321B05044B6641FCAEFABBF5066BD399)**
+
+    When consistency checking is enabled for a region, a Geode member does not immediately remove an entry from the region when an application destroys the entry. Instead, the member retains the entry with its current version stamp for a period of time in order to detect possible conflicts with operations that have occurred. The retained entry is referred to as a *tombstone*. Geode retains tombstones for partitioned regions and non-replicated regions as well as for replicated regions, in order to provide consistency.
+
+-   **[Transactions with Consistent Regions](../../developing/distributed_regions/how_region_versioning_works.html#topic_32ACFA5542C74F3583ECD30467F352B0)**
+
+    A transaction that modifies a region having consistency checking enabled generates all necessary version information for region updates when the transaction commits.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/events/cache_event_handler_examples.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/cache_event_handler_examples.html.md.erb b/geode-docs/developing/events/cache_event_handler_examples.html.md.erb
new file mode 100644
index 0000000..858003d
--- /dev/null
+++ b/geode-docs/developing/events/cache_event_handler_examples.html.md.erb
@@ -0,0 +1,138 @@
+---
+title:  Cache Event Handler Examples
+---
+
+Some examples of cache event handlers.
+
+## <a id="cache_event_handler_examples__section_F2790678E9DE4A81B73A4B6346CB210B" class="no-quick-link"></a>Declaring and Loading an Event Handler with Parameters
+
+This `cache.xml` declaration defines an event handler for a region. The handler is a cache listener designed to communicate changes to a DB2 database. The declaration includes the listener's parameters, which are the database path, username, and password.
+
+``` pre
+<region name="exampleRegion"> 
+  <region-attributes> 
+  . . . 
+    <cache-listener> 
+      <class-name>JDBCListener</class-name> 
+      <parameter name="url"> 
+        <string>jdbc:db2:SAMPLE</string> 
+      </parameter> 
+      <parameter name="username"> 
+        <string>gfeadmin</string> 
+      </parameter> 
+      <parameter name="password"> 
+        <string>admin1</string> 
+      </parameter> 
+    </cache-listener> 
+  </region-attributes> 
+  </region>
+```
+
+This code listing shows part of the implementation of the `JDBCListener` declared in the `cache.xml`. This listener implements the `Declarable` interface. When an entry is created in the cache, this listener's `afterCreate` callback method is triggered to update the database. Here the listener's properties, provided in the `cache.xml`, are passed into the `Declarable.init` method and used to create a database connection.
+
+``` pre
+. . .
+public class JDBCListener
+extends CacheListenerAdapter
+implements Declarable {
+  public void afterCreate(EntryEvent e) {
+  . . .
+    // Initialize the database driver and connection using input parameters
+    Driver driver = (Driver) Class.forName(DRIVER_NAME).newInstance();
+    Connection connection =
+      DriverManager.getConnection(_url, _username, _password);
+    System.out.println(connection);
+        . . .
+  }
+    . . .
+  public void init(Properties props) {
+    this._url = props.getProperty("url");
+    this._username = props.getProperty("username");
+    this._password = props.getProperty("password");
+  }
+}
+```
+
+## <a id="cache_event_handler_examples__section_2B4275C1AE744794AAD22530E5ECA8CC" class="no-quick-link"></a>Installing an Event Handler Through the API
+
+This listing defines a cache listener using the `RegionFactory` method `addCacheListener`.
+
+``` pre
+Region newReg = cache.createRegionFactory()
+          .addCacheListener(new SimpleCacheListener())
+          .create(name);
+ 
+```
+
+You can create a cache writer similarly, using the `RegionFactory` method `setCacheWriter`, like this:
+
+``` pre
+Region newReg = cache.createRegionFactory()
+          .setCacheWriter(new SimpleCacheWriter())
+          .create(name);
+ 
+```
+
+## <a id="cache_event_handler_examples__section_C62E9535C43B4BC5A7AA7B8B4125D1EB" class="no-quick-link"></a>Installing Multiple Listeners on a Region
+
+XML:
+
+``` pre
+<region name="exampleRegion">
+  <region-attributes>
+    . . .
+    <cache-listener>
+      <class-name>myCacheListener1</class-name>
+    </cache-listener>
+    <cache-listener>
+      <class-name>myCacheListener2</class-name>
+    </cache-listener>
+    <cache-listener>
+      <class-name>myCacheListener3</class-name>
+    </cache-listener>
+  </region-attributes>
+</region>
+```
+
+API:
+
+``` pre
+CacheListener listener1 = new myCacheListener1(); 
+CacheListener listener2 = new myCacheListener2(); 
+CacheListener listener3 = new myCacheListener3(); 
+
+Region nr = cache.createRegionFactory()
+  .initCacheListeners(new CacheListener[]
+    {listener1, listener2, listener3})
+  .setScope(Scope.DISTRIBUTED_NO_ACK)
+  .create(name);
+```
+
+## <a id="cache_event_handler_examples__section_3AF3D7C9927F491F8BACDB72834E42AA" class="no-quick-link"></a>Installing a Write-Behind Cache Listener
+
+``` pre
+<!-- AsyncEventQueue with a listener that performs the write-behind work -->
+<cache>
+   <async-event-queue id="sampleQueue" persistent="true"
+    disk-store-name="exampleStore" parallel="false">
+      <async-event-listener>
+         <class-name>MyAsyncListener</class-name>
+         <parameter name="url"> 
+           <string>jdbc:db2:SAMPLE</string> 
+         </parameter> 
+         <parameter name="username"> 
+           <string>gfeadmin</string> 
+         </parameter> 
+         <parameter name="password"> 
+           <string>admin1</string> 
+         </parameter> 
+      </async-event-listener>
+   </async-event-queue>
+
+   <!-- Add the AsyncEventQueue to each region that uses the write-behind listener -->
+   <region name="data">
+      <region-attributes async-event-queue-ids="sampleQueue">
+      </region-attributes>
+   </region>
+</cache>
+```

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/events/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/chapter_overview.html.md.erb b/geode-docs/developing/events/chapter_overview.html.md.erb
new file mode 100644
index 0000000..52e1905
--- /dev/null
+++ b/geode-docs/developing/events/chapter_overview.html.md.erb
@@ -0,0 +1,27 @@
+---
+title:  Events and Event Handling
+---
+
+Geode provides versatile and reliable event distribution and handling for your cached data and system member events.
+
+-   **[How Events Work](../../developing/events/how_events_work.html)**
+
+    Members in your Geode distributed system receive cache updates from other members through cache events. The other members can be peers of the member, its clients or servers, or members of other distributed systems.
+
+-   **[Implementing Geode Event Handlers](../../developing/events/event_handler_overview.html)**
+
+    You can specify event handlers for region and region entry operations and for administrative events.
+
+-   **[Configuring Peer-to-Peer Event Messaging](../../developing/events/configure_p2p_event_messaging.html)**
+
+    You can receive events from distributed system peers for any region that is not a local region. Local regions receive only local cache events.
+
+-   **[Configuring Client/Server Event Messaging](../../developing/events/configure_client_server_event_messaging.html)**
+
+    You can receive events from your servers for server-side cache events and query result changes.
+
+-   **[Configuring Multi-Site (WAN) Event Queues](../../developing/events/configure_multisite_event_messaging.html)**
+
+    In a multi-site (WAN) installation, Geode uses gateway sender queues to distribute events for regions that are configured with a gateway sender. AsyncEventListeners also use an asynchronous event queue to distribute events for configured regions. This section describes additional options for configuring the event queues that are used by gateway senders or AsyncEventListener implementations.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/events/configure_client_server_event_messaging.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/configure_client_server_event_messaging.html.md.erb b/geode-docs/developing/events/configure_client_server_event_messaging.html.md.erb
new file mode 100644
index 0000000..2d6185e
--- /dev/null
+++ b/geode-docs/developing/events/configure_client_server_event_messaging.html.md.erb
@@ -0,0 +1,64 @@
+---
+title:  Configuring Client/Server Event Messaging
+---
+
+You can receive events from your servers for server-side cache events and query result changes.
+
+<a id="receiving_events_from_servers__section_F21FB253CCC244708CB953B6D5866A91"></a>
+For cache updates, you can configure clients to receive entry keys and values, or just entry keys, with the data retrieved lazily when requested. Continuous queries run against server cache events, with the server sending the deltas for your query result sets.
+
+Before you begin, set up your client/server installation and configure and program your basic event messaging.
+
+Servers receive updates for all entry events in their clients' regions.
+
+To receive entry events in the client from the server:
+
+1.  Set the client pool `subscription-enabled` to true (a minimal pool declaration appears in the sketch after this list). See [&lt;pool&gt;](../../reference/topics/client-cache.html#cc-pool).
+2.  Program the client to register interest in the entries you need.
+
+    **Note:**
+    This must be done through the API.
+
+    Register interest in all keys, a key list, individual keys, or keys matching a regular expression. By default, no entries are registered to receive updates. You also specify whether the server sends values with entry update events or only the keys.
+
+    1.  Get an instance of the region where you want to register interest.
+    2.  Use the region's `registerInterest`\* methods to specify the entries you want. Examples:
+
+        ``` pre
+        // Register interest in a single key and download its entry 
+        // at this time, if it is available in the server cache 
+        Region region1 = . . . ;
+        region1.registerInterest("key-1"); 
+                            
+        // Register interest in a list of keys but do not do an initial bulk load;
+        // do not send values for create/update events - just send the key with an invalidation
+        Region region2 = . . . ; 
+        List list = new ArrayList();
+        list.add("key-1"); 
+        list.add("key-2"); 
+        list.add("key-3"); 
+        list.add("key-4");
+        region2.registerInterest(list, InterestResultPolicy.NONE, false); 
+                            
+        // Register interest in all keys and download all available keys now
+        Region region3 = . . . ;
+        region3.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS); 
+                            
+        // Register Interest in all keys matching a regular expression 
+        Region region1 = . . . ; 
+        region1.registerInterestRegex("[a-zA-Z]+_[0-9]+"); 
+        ```
+
+        You can call the register interest methods multiple times for a single region. Each interest registration adds to the server's list of registered interest criteria for the client. So if a client registers interest in key 'A', then registers interest in regular expression "B\*", the server will send updates for all entries with key 'A' or keys beginning with the letter 'B'.
+
+    3.  For highly available event messaging, configure server redundancy. See [Configuring Highly Available Servers](configuring_highly_available_servers.html).
+    4.  To have events enqueued for your clients during client downtime, configure durable client/server messaging.
+    5.  Write any continuous queries (CQs) that you want to run to receive continuously streaming updates to client queries. CQ events do not update the client cache. If you have dependencies between CQs and/or interest registrations, such that you want the two types of subscription events to arrive as closely together as possible on the client, use a single server pool for everything. Using different pools can lead to time differences in the delivery of events, because the pools might use different servers to process and deliver the event messages.
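+
+As a sketch for step 1, here is a minimal client `pool` declaration with subscription enabled (the pool name, host, and port are illustrative):
+
+``` pre
+<pool name="subscriptionPool" subscription-enabled="true">
+  <locator host="localhost" port="10334"/>
+</pool>
+```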
+
+-   **[Configuring Highly Available Servers](../../developing/events/configuring_highly_available_servers.html)**
+
+-   **[Implementing Durable Client/Server Messaging](../../developing/events/implementing_durable_client_server_messaging.html)**
+
+-   **[Tuning Client/Server Event Messaging](../../developing/events/tune_client_server_event_messaging.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/events/configure_multisite_event_messaging.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/configure_multisite_event_messaging.html.md.erb b/geode-docs/developing/events/configure_multisite_event_messaging.html.md.erb
new file mode 100644
index 0000000..6bd1e8b
--- /dev/null
+++ b/geode-docs/developing/events/configure_multisite_event_messaging.html.md.erb
@@ -0,0 +1,22 @@
+---
+title:  Configuring Multi-Site (WAN) Event Queues
+---
+
+In a multi-site (WAN) installation, Geode uses gateway sender queues to distribute events for regions that are configured with a gateway sender. AsyncEventListeners also use an asynchronous event queue to distribute events for configured regions. This section describes additional options for configuring the event queues that are used by gateway senders or AsyncEventListener implementations.
+
+<a id="configure_multisite_event_messaging__section_1BBF77E166E84F7CA110385FD03D8453"></a>
+Before you begin, set up your multi-site (WAN) installation or configure asynchronous event queues and AsyncEventListener implementations. See [Configuring a Multi-site (WAN) System](../../topologies_and_comm/multi_site_configuration/setting_up_a_multisite_system.html#setting_up_a_multisite_system) or [Implementing an AsyncEventListener for Write-Behind Cache Event Handling](implementing_write_behind_event_handler.html#implementing_write_behind_cache_event_handling).
+
+-   **[Persisting an Event Queue](../../developing/events/configuring_highly_available_gateway_queues.html)**
+
+    You can configure a gateway sender queue or an asynchronous event queue to persist data to disk similar to the way in which replicated regions are persisted.
+
+-   **[Configuring Dispatcher Threads and Order Policy for Event Distribution](../../developing/events/configuring_gateway_concurrency_levels.html)**
+
+    By default, Geode uses multiple dispatcher threads to process region events simultaneously in a gateway sender queue for distribution between sites, or in an asynchronous event queue for distributing events for write-behind caching. With serial queues, you can also configure the ordering policy for dispatching those events.
+
+-   **[Conflating Events in a Queue](../../developing/events/conflate_multisite_gateway_queue.html)**
+
+    Conflating a queue improves distribution performance. When conflation is enabled, only the latest queued value is sent for a particular key.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/events/configure_p2p_event_messaging.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/configure_p2p_event_messaging.html.md.erb b/geode-docs/developing/events/configure_p2p_event_messaging.html.md.erb
new file mode 100644
index 0000000..73c7d74
--- /dev/null
+++ b/geode-docs/developing/events/configure_p2p_event_messaging.html.md.erb
@@ -0,0 +1,33 @@
+---
+title:  Configuring Peer-to-Peer Event Messaging
+---
+
+You can receive events from distributed system peers for any region that is not a local region. Local regions receive only local cache events.
+
+<a id="configuring_event_distribution__section_7D5B1F0C0EF24E58BB3C335CB4EA9A3C"></a>
+Peer distribution is done according to the region's configuration.
+
+-   Replicated regions always receive all events from peers and require no further configuration. Replicated regions are configured using the `REPLICATE` region shortcut settings.
+-   For non-replicated regions, decide whether you want to receive all entry events from the distributed cache or only events for the data you have stored locally. To configure:
+    -   To receive all events, set the `subscription-attributes` `interest-policy` to `all`:
+
+        ``` pre
+        <region-attributes> 
+            <subscription-attributes interest-policy="all"/> 
+        </region-attributes>
+        ```
+
+    -   To receive events just for the data you have stored locally, set the `subscription-attributes` `interest-policy` to `cache-content` or do not set it (`cache-content` is the default):
+
+        ``` pre
+        <region-attributes> 
+            <subscription-attributes interest-policy="cache-content"/> 
+        </region-attributes>
+        ```
+
+    For partitioned regions, this only affects the receipt of events, as the data is stored according to the region partitioning. Partitioned regions with interest policy of `all` can create network bottlenecks, so if you can, run listeners in every member that hosts the partitioned region data and use the `cache-content` interest policy.
+
+**Note:**
+You can also configure Regions using the gfsh command-line interface. See [Region Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_EF03119A40EE492984F3B6248596E1DD).
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/events/configuring_gateway_concurrency_levels.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/configuring_gateway_concurrency_levels.html.md.erb b/geode-docs/developing/events/configuring_gateway_concurrency_levels.html.md.erb
new file mode 100644
index 0000000..5d001c3
--- /dev/null
+++ b/geode-docs/developing/events/configuring_gateway_concurrency_levels.html.md.erb
@@ -0,0 +1,141 @@
+---
+title:  Configuring Dispatcher Threads and Order Policy for Event Distribution
+---
+
+By default, Geode uses multiple dispatcher threads to process region events simultaneously in a gateway sender queue for distribution between sites, or in an asynchronous event queue for distributing events for write-behind caching. With serial queues, you can also configure the ordering policy for dispatching those events.
+
+By default, a gateway sender queue or asynchronous event queue uses 5 dispatcher threads per queue. This provides support for applications that have the ability to process queued events concurrently for distribution to another Geode site or listener. If your application does not require concurrent distribution, or if you do not have enough resources to support the requirements of multiple dispatcher threads, then you can configure a single dispatcher thread to process a queue.
+
+-   [Using Multiple Dispatcher Threads to Process a Queue](configuring_gateway_concurrency_levels.html#concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_20E8EFCE89EB4DC7AA822D03C8E0F470)
+-   [Performance and Memory Considerations](configuring_gateway_concurrency_levels.html#concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_C4C83B5C0FDD4913BA128365EE7E4E35)
+-   [Configuring the Ordering Policy for Serial Queues](configuring_gateway_concurrency_levels.html#concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_4835BA30CDFD4B658BD2576F6BC2E23F)
+-   [Examples: Configuring Dispatcher Threads and Ordering Policy for a Serial Gateway Sender Queue](configuring_gateway_concurrency_levels.html#concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_752F08F9064B4F67A80DA0A994671EA0)
+
+## <a id="concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_20E8EFCE89EB4DC7AA822D03C8E0F470" class="no-quick-link"></a>Using Multiple Dispatcher Threads to Process a Queue
+
+When multiple dispatcher threads are configured for a parallel queue, Geode simply uses multiple threads to process the contents of each individual queue. The total number of queues that are created is still determined by the number of Geode members that host the region.
+
+When multiple dispatcher threads are configured for a serial queue, Geode creates an additional copy of the queue for each thread on each member that hosts the queue. To obtain the maximum throughput, increase the number of dispatcher threads until your network is saturated.
+
+The following diagram illustrates a serial gateway sender queue that is configured with multiple dispatcher threads.
+<img src="../../images/MultisiteConcurrency_WAN_Gateway.png" id="concept_6C52A037E39E4FD6AE4C6A982A4A1A85__image_093DAC58EBEE456485562C92CA79899F" class="image" width="624" />
+
+## <a id="concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_C4C83B5C0FDD4913BA128365EE7E4E35" class="no-quick-link"></a>Performance and Memory Considerations
+
+When a serial gateway sender or an asynchronous event queue uses multiple dispatcher threads, consider the following:
+
+-   Queue attributes are repeated for each copy of the queue that is created for a dispatcher thread. That is, each concurrent queue points to the same disk store, so the same disk directories are used. If persistence is enabled and overflow occurs, the threads that insert entries into the queues compete for the disk. This applies to application threads and dispatcher threads, so it can affect application performance.
+-   The `maximum-queue-memory` setting applies to each copy of the serial queue. If you configure 10 dispatcher threads and the maximum queue memory is set to 100MB, then the total maximum queue memory for the queue is 1000MB on each member that hosts the queue.
+
+## <a id="concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_4835BA30CDFD4B658BD2576F6BC2E23F" class="no-quick-link"></a>Configuring the Ordering Policy for Serial Queues
+
+When using multiple `dispatcher-threads` (greater than 1) with a serial event queue, you can also configure the `order-policy` that those threads use to distribute events from the queue. The valid order policy values are:
+
+-   **key (default)**. All updates to the same key are distributed in order. Geode preserves key ordering by placing all updates to the same key in the same dispatcher thread queue. You typically use key ordering when updates to entries have no relationship to each other, such as for an application that uses a single feeder to distribute stock updates to several other systems.
+-   **thread**. All region updates from a given thread are distributed in order. Geode preserves thread ordering by placing all region updates from the same thread into the same dispatcher thread queue. In general, use thread ordering when updates to one region entry affect updates to another region entry.
+-   **partition**. All region events that share the same partitioning key are distributed in order. Specify partition ordering when applications use a [PartitionResolver](/releases/latest/javadoc/org/apache/geode/cache/PartitionResolver.html) to implement [custom partitioning](../partitioned_regions/using_custom_partition_resolvers.html). With partition ordering, all entries that share the same "partitioning key" (RoutingObject) are placed into the same dispatcher thread queue (see the resolver sketch below).
+
+You cannot configure the `order-policy` for a parallel event queue, because parallel queues cannot preserve event ordering for regions. Only the ordering of events for a given partition (or in a given queue of a distributed region) can be preserved.
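+
+For partition ordering, the grouping key is whatever your `PartitionResolver` returns as the routing object. The following is a minimal sketch of such a resolver; the `AccountPartitionResolver` name and the "accountId|tradeId" key scheme are hypothetical, chosen only to illustrate how entries come to share a dispatcher thread queue.
+
+``` pre
+import org.apache.geode.cache.EntryOperation;
+import org.apache.geode.cache.PartitionResolver;
+
+// Hypothetical resolver: keys are Strings of the form "accountId|tradeId".
+// All trades for one account share a routing object, so with
+// order-policy="partition" they share a dispatcher thread queue.
+public class AccountPartitionResolver implements PartitionResolver<String, Object> {
+
+  @Override
+  public Object getRoutingObject(EntryOperation<String, Object> opDetails) {
+    String key = opDetails.getKey();
+    // Entries with equal routing objects land in the same bucket and,
+    // with partition ordering, in the same dispatcher queue.
+    return key.substring(0, key.indexOf('|'));
+  }
+
+  @Override
+  public String getName() {
+    return "AccountPartitionResolver";
+  }
+
+  @Override
+  public void close() {} // no resources held by this sketch
+}
+```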
+
+## <a id="concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_752F08F9064B4F67A80DA0A994671EA0" class="no-quick-link"></a>Examples: Configuring Dispatcher Threads and Ordering Policy for a Serial Gateway Sender Queue
+
+To increase the number of dispatcher threads and set the ordering policy for a serial gateway sender, use one of the following mechanisms.
+
+-   **cache.xml configuration**
+
+    ``` pre
+    <cache>
+      <gateway-sender id="NY" parallel="false" 
+       remote-distributed-system-id="1"
+       enable-persistence="true"
+       disk-store-name="gateway-disk-store"
+       maximum-queue-memory="200"
+       dispatcher-threads="7" order-policy="key"/> 
+       ... 
+    </cache>
+    ```
+
+-   **Java API configuration**
+
+    ``` pre
+    Cache cache = new CacheFactory().create();
+
+    GatewaySenderFactory gateway = cache.createGatewaySenderFactory();
+    gateway.setParallel(false);
+    gateway.setPersistenceEnabled(true);
+    gateway.setDiskStoreName("gateway-disk-store");
+    gateway.setMaximumQueueMemory(200);
+    gateway.setDispatcherThreads(7);
+    gateway.setOrderPolicy(OrderPolicy.KEY);
+    GatewaySender sender = gateway.create("NY", "1");
+    sender.start();
+    ```
+
+-   **gfsh:**
+
+    ``` pre
+    gfsh>create gateway-sender --id="NY" 
+       --parallel=false 
+       --remote-distributed-system-id="1"
+       --enable-persistence=true
+       --disk-store-name="gateway-disk-store"
+       --maximum-queue-memory=200
+       --dispatcher-threads=7 
+       --order-policy="key"
+    ```
+
+The following examples show how to set dispatcher threads and ordering policy for an asynchronous event queue:
+
+-   **cache.xml configuration**
+
+    ``` pre
+    <cache>
+       <async-event-queue id="sampleQueue" persistent="true"
+        disk-store-name="async-disk-store" parallel="false"
+        dispatcher-threads="7" order-policy="key">
+          <async-event-listener>
+             <class-name>MyAsyncEventListener</class-name>
+             <parameter name="url"> 
+               <string>jdbc:db2:SAMPLE</string> 
+             </parameter> 
+             <parameter name="username"> 
+               <string>gfeadmin</string> 
+             </parameter> 
+             <parameter name="password"> 
+               <string>admin1</string> 
+             </parameter> 
+          </async-event-listener>
+       </async-event-queue>
+    ...
+    </cache>
+    ```
+
+-   **Java API configuration**
+
+    ``` pre
+    Cache cache = new CacheFactory().create();
+    AsyncEventQueueFactory factory = cache.createAsyncEventQueueFactory();
+    factory.setPersistent(true);
+    factory.setDiskStoreName("async-disk-store");
+    factory.setParallel(false);
+    factory.setDispatcherThreads(7);
+    factory.setOrderPolicy(OrderPolicy.KEY);
+    AsyncEventListener listener = new MyAsyncEventListener();
+    AsyncEventQueue sampleQueue = factory.create("customerWB", listener);
+    ```
+
+-   **gfsh:**
+
+    ``` pre
+    gfsh>create async-event-queue --id="sampleQueue" --persistent=true
+    --disk-store="async-disk-store" --parallel=false
+    --dispatcher-threads=7 --order-policy="key"
+    --listener=MyAsyncEventListener 
+    --listener-param=url#jdbc:db2:SAMPLE 
+    --listener-param=username#gfeadmin 
+    --listener-param=password#admin1
+    ```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/events/configuring_highly_available_gateway_queues.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/configuring_highly_available_gateway_queues.html.md.erb b/geode-docs/developing/events/configuring_highly_available_gateway_queues.html.md.erb
new file mode 100644
index 0000000..a674a45
--- /dev/null
+++ b/geode-docs/developing/events/configuring_highly_available_gateway_queues.html.md.erb
@@ -0,0 +1,102 @@
+---
+title:  Persisting an Event Queue
+---
+
+You can configure a gateway sender queue or an asynchronous event queue to persist data to disk similar to the way in which replicated regions are persisted.
+
+<a id="configuring_highly_available_gateway_queues__section_7EB2A7E38B074AAAA06D22C59687CB8A"></a>
+Persisting a queue provides high availability for the event messaging that the sender performs. For example, if a member that hosts a persistent gateway sender queue exits for any reason, the member automatically reloads the queue and resumes sending messages when it restarts. If an asynchronous event queue exits for any reason, write-behind caching can resume where it left off when the queue is brought back online.
+
+Geode persists an event queue if you set the `enable-persistence` attribute to true. The queue is persisted to the disk store specified in the queue's `disk-store-name` attribute, or to the default disk store if you do not specify a store name.
+
+You must configure the event queue to use persistence if you are using persistent regions. The use of non-persistent event queues with persistent regions is not supported.
+
+When you enable persistence for a queue, the `maximum-queue-memory` attribute determines how much memory the queue can consume before it overflows to disk. By default, this value is set to 100MB.
+
+**Note:**
+If you configure a parallel queue and/or you configure multiple dispatcher threads for a queue, the values that are defined in the `maximum-queue-memory` and `disk-store-name` attributes apply to each instance of the queue.
+
+In the example below, the gateway sender queue uses "diskStoreA" for persistence and overflow, and the queue has a maximum queue memory of 100MB:
+
+-   XML example:
+
+    ``` pre
+    <cache>
+      <gateway-sender id="persistedsender1" parallel="false" 
+       remote-distributed-system-id="1"
+       enable-persistence="true"
+       disk-store-name="diskStoreA"
+       maximum-queue-memory="100"/> 
+       ... 
+    </cache>
+    ```
+
+-   API example:
+
+    ``` pre
+    Cache cache = new CacheFactory().create();
+
+    GatewaySenderFactory gateway = cache.createGatewaySenderFactory();
+    gateway.setParallel(false);
+    gateway.setPersistenceEnabled(true);
+    gateway.setDiskStoreName("diskStoreA");
+    gateway.setMaximumQueueMemory(100); 
+    GatewaySender sender = gateway.create("persistedsender1", "1");
+    sender.start();
+    ```
+
+-   gfsh:
+
+    ``` pre
+    gfsh>create gateway-sender --id="persistedsender1" --parallel=false 
+    --remote-distributed-system-id=1 --enable-persistence=true --disk-store-name=diskStoreA 
+    --maximum-queue-memory=100
+    ```
+
+If you were to configure 10 dispatcher threads for the serial gateway sender, then the total maximum memory for the gateway sender queue would be 1000MB on each Geode member that hosts the sender, because Geode creates a separate copy of the queue for each thread.
+
+The following example shows a similar configuration for an asynchronous event queue:
+
+-   XML example:
+
+    ``` pre
+    <cache>
+       <async-event-queue id="persistentAsyncQueue" persistent="true"
+        disk-store-name="diskStoreA" parallel="true">
+          <async-event-listener>
+             <class-name>MyAsyncEventListener</class-name>
+             <parameter name="url"> 
+               <string>jdbc:db2:SAMPLE</string> 
+             </parameter> 
+             <parameter name="username"> 
+               <string>gfeadmin</string> 
+             </parameter> 
+             <parameter name="password"> 
+               <string>admin1</string> 
+             </parameter> 
+          </async-event-listener>
+       </async-event-queue>
+    ...
+    </cache>
+    ```
+
+-   API example:
+
+    ``` pre
+    Cache cache = new CacheFactory().create();
+    AsyncEventQueueFactory factory = cache.createAsyncEventQueueFactory();
+    factory.setPersistent(true);
+    factory.setDiskStoreName("diskStoreA");
+    factory.setParallel(true);
+    AsyncEventListener listener = new MyAsyncEventListener();
+    AsyncEventQueue persistentAsyncQueue = factory.create("customerWB", listener);
+    ```
+
+-   gfsh:
+
+    ``` pre
+    gfsh>create async-event-queue --id="persistentAsyncQueue" --persistent=true 
+    --disk-store="diskStoreA" --parallel=true --listener=MyAsyncEventListener 
+    --listener-param=url#jdbc:db2:SAMPLE --listener-param=username#gfeadmin --listener-param=password#admin1
+    ```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/events/configuring_highly_available_servers.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/configuring_highly_available_servers.html.md.erb b/geode-docs/developing/events/configuring_highly_available_servers.html.md.erb
new file mode 100644
index 0000000..3b80d96
--- /dev/null
+++ b/geode-docs/developing/events/configuring_highly_available_servers.html.md.erb
@@ -0,0 +1,38 @@
+---
+title:  Configuring Highly Available Servers
+---
+
+<a id="configuring_highly_available_servers__section_7EB2A7E38B074AAAA06D22C59687CB8A"></a>
+With highly-available servers, one of the backups steps in and takes over messaging with no interruption in service if the client's primary server crashes.
+
+To configure high availability, set the `subscription-redundancy` in the client's pool configuration. This setting indicates the number of secondary servers to use. For example:
+
+``` pre
+<!-- Run one secondary server -->
+<pool name="red1" subscription-enabled="true" subscription-redundancy="1"> 
+  <locator host="nick" port="41111"/> 
+  <locator host="nora" port="41111"/> 
+</pool> 
+```
+
+``` pre
+<!-- Use all available servers as secondaries. One is primary, the rest are secondaries -->
+<pool name="redX" subscription-enabled="true" subscription-redundancy="-1"> 
+  <locator host="nick" port="41111"/> 
+  <locator host="nora" port="41111"/> 
+</pool> 
+```
+
+When redundancy is enabled, secondary servers maintain queue backups while the primary server pushes events to the client. If the primary server fails, one of the secondary servers steps in as primary to provide uninterrupted event messaging to the client.
+
+The following table describes the valid values for the `subscription-redundancy` setting; a programmatic pool configuration sketch follows the table:
+
+| subscription-redundancy | Description                                                                    |
+|-------------------------|--------------------------------------------------------------------------------|
+| 0                       | No secondary servers are configured, so high availability is disabled.         |
+| &gt; 0                  | Sets the precise number of secondary servers to use for backup to the primary. |
+| -1                      | Every server that is not the primary is to be used as a secondary.             |
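+
+The same redundancy can be configured programmatically on the client. The following is a sketch only, reusing the locator host names and port from the XML examples above; `setPoolSubscriptionRedundancy` corresponds to the `subscription-redundancy` pool attribute.
+
+``` pre
+import org.apache.geode.cache.client.ClientCache;
+import org.apache.geode.cache.client.ClientCacheFactory;
+
+// Sketch: client cache whose default pool keeps one redundant
+// (secondary) subscription queue, mirroring the "red1" XML example.
+ClientCache cache = new ClientCacheFactory()
+    .addPoolLocator("nick", 41111)
+    .addPoolLocator("nora", 41111)
+    .setPoolSubscriptionEnabled(true)   // subscription-enabled="true"
+    .setPoolSubscriptionRedundancy(1)   // subscription-redundancy="1"
+    .create();
+```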
+
+-   **[Highly Available Client/Server Event Messaging](../../developing/events/ha_event_messaging_whats_next.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/events/conflate_multisite_gateway_queue.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/conflate_multisite_gateway_queue.html.md.erb b/geode-docs/developing/events/conflate_multisite_gateway_queue.html.md.erb
new file mode 100644
index 0000000..e2e7ff1
--- /dev/null
+++ b/geode-docs/developing/events/conflate_multisite_gateway_queue.html.md.erb
@@ -0,0 +1,113 @@
+---
+title:  Conflating Events in a Queue
+---
+
+Conflating a queue improves distribution performance. When conflation is enabled, only the latest queued value is sent for a particular key.
+
+<a id="conflate_multisite_gateway_queue__section_294AD2E2328E4D6B8D6A73966F7B3B14"></a>
+**Note:**
+Do not use conflation if your receiving applications depend on the specific ordering of entry modifications, or if they need to be notified of every change to an entry.
+
+Conflation is most useful when a single entry is updated frequently, but other sites only need to know the current value of the entry (rather than the value of each update). When an update is added to a queue that has conflation enabled, if there is already an update message in the queue for the entry key, then the existing message assumes the value of the new update and the new update is dropped, as shown here for key A.
+
+<img src="../../images/MultiSite-4.gif" id="conflate_multisite_gateway_queue__image_27219DAAB6D643348641389DBAEA1E94" class="image" />
+
+**Note:**
+This method of conflation is different from the one used for server-to-client subscription queue conflation and peer-to-peer distribution within a distributed system.
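+
+As an illustrative model only (not Geode's internal implementation), the effect of this conflation on a queue resembles a `LinkedHashMap`: an update for a key that is already queued replaces the queued value in place, while updates for new keys append to the end.
+
+``` pre
+import java.util.LinkedHashMap;
+import java.util.Map;
+
+// Illustrative model of gateway queue conflation, not Geode internals:
+// the queued entry keeps its position and takes on the latest value.
+Map<String, String> queue = new LinkedHashMap<>();
+queue.put("A", "1");
+queue.put("B", "7");
+queue.put("A", "2");        // conflated: "A" keeps its slot, value becomes "2"
+System.out.println(queue);  // prints {A=2, B=7}
+```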
+
+## <a id="conflate_multisite_gateway_queue__section_207FA6BF0F734F9A91EAACB136F8D6B5" class="no-quick-link"></a>Examples: Configuring Conflation for a Gateway Sender Queue
+
+To enable conflation for a gateway sender queue, use one of the following mechanisms:
+
+-   **cache.xml configuration**
+
+    ``` pre
+    <cache>
+      <gateway-sender id="NY" parallel="true" 
+       remote-distributed-system-id="1"
+       enable-persistence="true"
+       disk-store-name="gateway-disk-store"
+       enable-batch-conflation="true"/> 
+       ... 
+    </cache>
+    ```
+
+-   **Java API configuration**
+
+    ``` pre
+    Cache cache = new CacheFactory().create();
+
+    GatewaySenderFactory gateway = cache.createGatewaySenderFactory();
+    gateway.setParallel(true);
+    gateway.setPersistenceEnabled(true);
+    gateway.setDiskStoreName("gateway-disk-store");
+    gateway.setBatchConflationEnabled(true);
+    GatewaySender sender = gateway.create("NY", "1");
+    sender.start();
+    ```
+
+    Entry updates in the current, in-process batch are not eligible for conflation.
+
+-   **gfsh:**
+
+    ``` pre
+    gfsh>create gateway-sender --id="NY" --parallel=true 
+       --remote-distributed-system-id="1"
+       --enable-persistence=true
+       --disk-store-name="gateway-disk-store"
+       --enable-batch-conflation=true
+    ```
+
+The following examples show how to configure conflation for an asynchronous event queue:
+
+-   **cache.xml configuration**
+
+    ``` pre
+    <cache>
+       <async-event-queue id="sampleQueue" persistent="true"
+        disk-store-name="async-disk-store" parallel="false"
+        enable-batch-conflation="true">
+          <async-event-listener>
+             <class-name>MyAsyncEventListener</class-name>
+             <parameter name="url"> 
+               <string>jdbc:db2:SAMPLE</string> 
+             </parameter> 
+             <parameter name="username"> 
+               <string>gfeadmin</string> 
+             </parameter> 
+             <parameter name="password"> 
+               <string>admin1</string> 
+             </parameter> 
+          </async-event-listener>
+       </async-event-queue>
+    ...
+    </cache>
+    ```
+
+-   **Java API configuration**
+
+    ``` pre
+    Cache cache = new CacheFactory().create();
+    AsyncEventQueueFactory factory = cache.createAsyncEventQueueFactory();
+    factory.setPersistent(true);
+    factory.setDiskStoreName("async-disk-store");
+    factory.setParallel(false);
+    factory.setBatchConflationEnabled(true);
+    AsyncEventListener listener = new MyAsyncEventListener();
+    AsyncEventQueue sampleQueue = factory.create("customerWB", listener);
+    ```
+
+    Entry updates in the current, in-process batch are not eligible for conflation.
+
+-   **gfsh:**
+
+    ``` pre
+    gfsh>create async-event-queue --id="sampleQueue" --persistent=true 
+    --disk-store="async-disk-store" --parallel=false 
+    --listener=MyAsyncEventListener 
+    --listener-param=url#jdbc:db2:SAMPLE 
+    --listener-param=username#gfeadmin 
+    --listener-param=password#admin1
+    ```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/events/conflate_server_subscription_queue.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/conflate_server_subscription_queue.html.md.erb b/geode-docs/developing/events/conflate_server_subscription_queue.html.md.erb
new file mode 100644
index 0000000..f6d4990
--- /dev/null
+++ b/geode-docs/developing/events/conflate_server_subscription_queue.html.md.erb
@@ -0,0 +1,36 @@
+---
+title:  Conflate the Server Subscription Queue
+---
+
+<a id="conflate_the_server_subscription_queue__section_1791DFB89502480EB57F81D16AC0EBAC"></a>
+Conflating the server subscription queue can save space in the server and time in message processing.
+
+Enable conflation at the server level in the server region configuration:
+
+``` pre
+<region ... >
+  <region-attributes enable-subscription-conflation="true" /> 
+</region>
+```
+
+Override the server setting as needed, on a per-client basis, in the client's `gemfire.properties` (or programmatically, as sketched after the settings list):
+
+``` pre
+conflate-events=false
+```
+
+Valid `conflate-events` settings are:
+-   `server`, which uses the server settings
+-   `true`, which conflates everything sent to the client
+-   `false`, which does not conflate anything sent to this client
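+
+As a sketch, the same per-client override can be supplied programmatically by passing the `conflate-events` property when the client cache is created (the locator host name and port here are placeholders):
+
+``` pre
+import java.util.Properties;
+import org.apache.geode.cache.client.ClientCache;
+import org.apache.geode.cache.client.ClientCacheFactory;
+
+// Sketch: disable subscription queue conflation for this client,
+// overriding the server-side region setting.
+Properties props = new Properties();
+props.setProperty("conflate-events", "false"); // server | true | false
+
+ClientCache cache = new ClientCacheFactory(props)
+    .addPoolLocator("nick", 41111)
+    .setPoolSubscriptionEnabled(true)
+    .create();
+```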
+
+Conflation can both improve performance and reduce the amount of memory required on the server for queuing. The client receives only the latest available update in the queue for a particular entry key. Conflation is disabled by default.
+
+Conflation is particularly useful when a single entry is updated often and the intermediate updates don't require processing by the client. With conflation, if an entry is updated and there is already an update in the queue for its key, the existing update is removed and the new update is placed at the end of the queue. Conflation is only done on messages that are not in the process of being sent to the client.
+
+<img src="../../images/ClientServerAdvancedTopics-7.gif" id="conflate_the_server_subscription_queue__image_FA77FD2857464D17BF2ED5B3CC62687A" class="image" />
+
+**Note:**
+This method of conflation is different from the one used for multi-site gateway sender queue conflation. It is the same as the method used for the conflation of peer-to-peer distribution messages within a single distributed system.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/events/event_handler_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/event_handler_overview.html.md.erb b/geode-docs/developing/events/event_handler_overview.html.md.erb
new file mode 100644
index 0000000..c020e3a
--- /dev/null
+++ b/geode-docs/developing/events/event_handler_overview.html.md.erb
@@ -0,0 +1,23 @@
+---
+title:  Implementing Geode Event Handlers
+---
+
+You can specify event handlers for region and region entry operations and for administrative events.
+
+-   **[Implementing Cache Event Handlers](implementing_cache_event_handlers.html)**
+
+    Depending on your installation and configuration, cache events can come from local operations, peers, servers, and remote sites. Event handlers register their interest in one or more events and are notified when the events occur.
+
+-   **[Implementing an AsyncEventListener for Write-Behind Cache Event Handling](implementing_write_behind_event_handler.html)**
+
+    An `AsyncEventListener` asynchronously processes batches of events after they have been applied to a region. You can use an `AsyncEventListener` implementation as a write-behind cache event handler to synchronize region updates with a database.
+
+-   **[How to Safely Modify the Cache from an Event Handler Callback](writing_callbacks_that_modify_the_cache.html)**
+
+    Event handlers are synchronous. If you need to change the cache or perform any other distributed operation from event handler callbacks, be careful to avoid activities that might block and affect your overall system performance.
+
+-   **[Cache Event Handler Examples](cache_event_handler_examples.html)**
+
+    Some examples of cache event handlers.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/events/filtering_multisite_events.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/filtering_multisite_events.html.md.erb b/geode-docs/developing/events/filtering_multisite_events.html.md.erb
new file mode 100644
index 0000000..a46d18e
--- /dev/null
+++ b/geode-docs/developing/events/filtering_multisite_events.html.md.erb
@@ -0,0 +1,109 @@
+---
+title:  Filtering Events for Multi-Site (WAN) Distribution
+---
+
+You can optionally create gateway sender and/or gateway receiver filters to control which events are queued and distributed to a remote site, or to modify the data stream that is transmitted between Geode sites.
+
+You can implement and deploy two different types of filter for multi-site events:
+
+-   `GatewayEventFilter`. A `GatewayEventFilter` implementation determines whether a region event is placed in a gateway sender queue and/or whether an event in a gateway queue is distributed to a remote site. You can optionally add one or more `GatewayEventFilter` implementations to a gateway sender, either in the `cache.xml` configuration file or using the Java API.
+
+    Geode makes a synchronous call to the filter's `beforeEnqueue` method before it places a region event in the gateway sender queue. The filter returns a boolean value that specifies whether the event should be added to the queue.
+
+    Geode asynchronously calls the filter's `beforeTransmit` method to determine whether the gateway sender dispatcher thread should distribute the event to a remote gateway receiver.
+
+    For events that are distributed to another site, Geode calls the filter's `afterAcknowledgement` method to indicate that the remote site has acknowledged receipt of the event.
+
+-   `GatewayTransportFilter`. Use a `GatewayTransportFilter` implementation to process the TCP stream that sends a batch of events that is distributed from one Geode cluster to another over a WAN. A `GatewayTransportFilter` is typically used to perform encryption or compression on the data that is distributed. You install the same `GatewayTransportFilter` implementation on both a gateway sender and a gateway receiver.
+
+    When a gateway sender processes a batch of events for distribution, Geode delivers the stream to the `getInputStream` method of a configured `GatewayTransportFilter` implementation. The filter processes and returns the stream, which is then transmitted to the gateway receiver. When the gateway receiver receives the batch, Geode calls the `getOutputStream` method of a configured filter, which again processes and returns the stream so that the events can be applied in the local cluster. Minimal sketches of both filter interfaces follow this list.
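+
+The following is a minimal sketch of the event filter interface. The key-prefix rule is a hypothetical choice made for illustration; this is not the `SampleEventFilter` class referenced in the configuration examples below.
+
+``` pre
+import org.apache.geode.cache.wan.GatewayEventFilter;
+import org.apache.geode.cache.wan.GatewayQueueEvent;
+
+// Sketch: keep events whose keys carry a hypothetical "local_" prefix
+// out of the sender queue, and distribute everything else.
+public class PrefixEventFilter implements GatewayEventFilter {
+
+  @Override
+  public boolean beforeEnqueue(GatewayQueueEvent event) {
+    // Returning false keeps the event out of the gateway sender queue.
+    return !event.getKey().toString().startsWith("local_");
+  }
+
+  @Override
+  public boolean beforeTransmit(GatewayQueueEvent event) {
+    return true; // transmit every event that was queued
+  }
+
+  @Override
+  public void afterAcknowledgement(GatewayQueueEvent event) {
+    // Invoked after the remote site acknowledges receipt of the event.
+  }
+
+  @Override
+  public void close() {}
+}
+```
+
+A transport filter wraps the batch stream on both sides of the WAN connection, so the same class is installed on the sender and the receiver. The checksum wrapper below is again a hypothetical example; a production filter would more typically compress or encrypt the stream.
+
+``` pre
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.zip.Adler32;
+import java.util.zip.CheckedInputStream;
+import java.util.zip.CheckedOutputStream;
+import org.apache.geode.cache.wan.GatewayTransportFilter;
+
+// Sketch: observe the batch stream through Adler-32 checksumming
+// wrappers as it is read and written.
+public class ChecksumTransportFilter implements GatewayTransportFilter {
+
+  @Override
+  public InputStream getInputStream(InputStream stream) {
+    return new CheckedInputStream(stream, new Adler32());
+  }
+
+  @Override
+  public OutputStream getOutputStream(OutputStream stream) {
+    return new CheckedOutputStream(stream, new Adler32());
+  }
+
+  @Override
+  public void close() {}
+}
+```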
+
+## <a id="topic_E97BB68748F14987916CD1A50E4B4542__section_E20B4A8A98FD4EDAAA8C14B8059AA7F7" class="no-quick-link"></a>Configuring Multi-Site Event Filters
+
+You install a `GatewayEventFilter` implementation on a configured gateway sender to control which events are queued and distributed. You install a `GatewayTransportFilter` implementation on both a gateway sender and a gateway receiver to process the stream of batched events that is distributed between two sites:
+
+-   **XML example**
+
+    ``` pre
+    <cache>
+      <gateway-sender id="remoteA" parallel="true" remote-distributed-system-id="1"> 
+        <gateway-event-filter>
+          <class-name>org.apache.geode.util.SampleEventFilter</class-name>
+          <parameter name="param1">
+            <string>"value1"</string>
+          </parameter>
+        </gateway-event-filter>
+        <gateway-transport-filter>
+          <class-name>org.apache.geode.util.SampleTransportFilter</class-name>
+          <parameter name="param1">
+            <string>"value1"</string>
+          </parameter>
+        </gateway-transport-filter>
+      </gateway-sender> 
+    </cache>
+    ```
+
+    ``` pre
+    <cache>
+      ...
+      <gateway-receiver start-port="1530" end-port="1551"> 
+        <gateway-transport-filter>
+          <class-name>org.apache.geode.util.SampleTransportFilter</class-name>
+          <parameter name="param1">
+            <string>"value1"</string>
+          </parameter>
+        </gateway-transport-filter>
+      </gateway-receiver>
+    </cache>
+    ```
+
+-   **gfsh example**
+
+    ``` pre
+    gfsh>create gateway-sender --id=remoteA --parallel=true --remote-distributed-system-id="1" 
+    --gateway-event-filter=org.apache.geode.util.SampleEventFilter 
+    --gateway-transport-filter=org.apache.geode.util.SampleTransportFilter
+    ```
+
+    See [create gateway-sender](../../tools_modules/gfsh/command-pages/create.html#topic_hg2_bjz_ck).
+
+    ``` pre
+    gfsh>create gateway-receiver --start-port=1530 --end-port=1551 \
+    --gateway-transport-filter=org.apache.geode.util.SampleTransportFilter
+    ```
+
+    **Note:**
+    You cannot specify parameters and values for the Java class you specify with the `--gateway-transport-filter` option.
+
+    See [create gateway-receiver](../../tools_modules/gfsh/command-pages/create.html#topic_a4x_pb1_dk).
+
+-   **API example**
+
+    ``` pre
+    Cache cache = new CacheFactory().create();
+
+    GatewayEventFilter efilter = new SampleEventFilter();
+    GatewayTransportFilter tfilter = new SampleTransportFilter();
+
+    GatewaySenderFactory gateway = cache.createGatewaySenderFactory();
+    gateway.setParallel(true);
+    gateway.addGatewayEventFilter(efilter);
+    gateway.addTransportFilter(tfilter);
+    GatewaySender sender = gateway.create("remoteA", "1");
+    sender.start();
+    ```
+
+    ``` pre
+    Cache cache = new CacheFactory().create();
+
+    GatewayTransportFilter tfilter = new SampleTransportFilter();
+
+    GatewayReceiverFactory gateway = cache.createGatewayReceiverFactory();
+    gateway.setStartPort(1530);
+    gateway.setEndPort(1551);
+    gateway.addTransportFilter(tfilter);
+    GatewayReceiver receiver = gateway.create();
+    receiver.start();
+    ```
+
+