Posted to commits@geode.apache.org by km...@apache.org on 2016/10/12 17:11:52 UTC

[32/76] [abbrv] [partial] incubator-geode git commit: GEODE-1952 Consolidated docs under a single geode-docs directory

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/events/list_of_event_handlers_and_events.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/list_of_event_handlers_and_events.html.md.erb b/geode-docs/developing/events/list_of_event_handlers_and_events.html.md.erb
new file mode 100644
index 0000000..5f63db1
--- /dev/null
+++ b/geode-docs/developing/events/list_of_event_handlers_and_events.html.md.erb
@@ -0,0 +1,164 @@
+---
+title:  List of Event Handlers and Events
+---
+
+Geode provides many types of events and event handlers to help you manage your different data and application needs.
+
+## <a id="event_handlers_and_events__section_E7B7502F673B43E794884D0F6BF537CF" class="no-quick-link"></a>Event Handlers
+
+Use either cache handlers or membership handlers in any single application. Do not use both. The event handlers in this table are cache handlers unless otherwise noted.
+
+<table>
+<colgroup>
+<col width="25%" />
+<col width="25%" />
+<col width="50%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Handler API</th>
+<th>Events received</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td><code class="ph codeph">AsyncEventListener</code></td>
+<td><code class="ph codeph">AsyncEvent</code></td>
+<td><p>Tracks changes in a region for write-behind processing. Extends the <code class="ph codeph">CacheCallback</code> interface. You install a write-back cache listener to an <code class="ph codeph">AsyncEventQueue</code> instance. You can then add the <code class="ph codeph">AsyncEventQueue</code> instance to one or more regions for write-behind processing. See [Implementing an AsyncEventListener for Write-Behind Cache Event Handling](implementing_write_behind_event_handler.html#implementing_write_behind_cache_event_handling).</p></td>
+</tr>
+<tr>
+<td><code class="ph codeph">CacheCallback</code></td>
+<td> </td>
+<td>Superinterface of all cache event listeners. Functions only to clean up resources that the callback allocated.</td>
+</tr>
+<tr>
+<td><code class="ph codeph">CacheListener</code></td>
+<td><code class="ph codeph">RegionEvent</code>, <code class="ph codeph">EntryEvent</code></td>
+<td>Tracks changes to a region and its data entries. Responds synchronously. Extends <code class="ph codeph">CacheCallback</code> interface. Installed in region. Receives only local cache events. Install one in every member where you want the events handled by this listener. In a partitioned region, the cache listener only fires in the primary data store. Listeners on secondaries are not fired.</td>
+</tr>
+<tr>
+<td><code class="ph codeph">CacheWriter</code></td>
+<td><code class="ph codeph">RegionEvent</code>, <code class="ph codeph">EntryEvent</code></td>
+<td>Receives events for <em>pending</em> changes to the region and its data entries in this member or one of its peers. Has the ability to abort the operations in question. Extends <code class="ph codeph">CacheCallback</code> interface. Installed in region. Receives events from anywhere in the distributed region, so you can install one in one member for the entire distributed region. Receives events only in primary data store in partitioned regions, so install one in every data store.</td>
+</tr>
+<tr>
+<td><code class="ph codeph">ClientMembershipListener</code>
+<p>(org.apache.geode.management.membership.ClientMembershipListener)</p></td>
+<td><code class="ph codeph">ClientMembershipEvent</code></td>
+<td>One of the interfaces that replaces the deprecated Admin APIs. You can use the ClientMembershipListener to receive membership events only about clients. This listener's callback methods are invoked when this process detects connection changes to clients. Callback methods include <code class="ph codeph">memberCrashed</code>, <code class="ph codeph">memberJoined</code>, and <code class="ph codeph">memberLeft</code> (graceful exit).</td>
+</tr>
+<tr>
+<td><code class="ph codeph">CqListener</code></td>
+<td><code class="ph codeph">CqEvent</code></td>
+<td>Receives events from the server cache that satisfy a client-specified query. Extends <code class="ph codeph">CacheCallback</code> interface. Installed in the client inside a <code class="ph codeph">CqQuery</code>.</td>
+</tr>
+<tr>
+<td><code class="ph codeph">GatewayConflictResolver</code></td>
+<td><code class="ph codeph">TimestampedEntryEvent</code></td>
+<td>Decides whether to apply a potentially conflicting event to a region that is distributed over a WAN configuration. This event handler is called only when the distributed system ID of an update event is different from the ID that last updated the region entry.</td>
+</tr>
+<tr>
+<td><code class="ph codeph">MembershipListener</code>
+<p>(org.apache.geode.management.membership.MembershipListener)</p></td>
+<td><code class="ph codeph">MembershipEvent</code></td>
+<td>Use this interface to receive membership events only about peers. This listener's callback methods are invoked when peer members join or leave the Geode distributed system. Callback methods include <code class="ph codeph">memberCrashed</code>, <code class="ph codeph">memberJoined</code>, and <code class="ph codeph">memberLeft</code> (graceful exit).</td>
+</tr>
+<tr>
+<td><code class="ph codeph">RegionMembershipListener</code></td>
+<td><code class="ph codeph">RegionEvent</code></td>
+<td>Provides after-event notification when a region with the same name has been created in another member and when other members hosting the region join or leave the distributed system. Extends <code class="ph codeph">CacheCallback</code> and <code class="ph codeph">CacheListener</code>. Installed in region as a <code class="ph codeph">CacheListener</code>.</td>
+</tr>
+<tr>
+<td><code class="ph codeph">TransactionListener</code></td>
+<td><code class="ph codeph">TransactionEvent</code> with embedded list of <code class="ph codeph">EntryEvent</code></td>
+<td>Tracks the outcome of transactions and changes to data entries in the transaction.
+<div class="note note">
+**Note:**
+<p>Multiple transactions on the same cache can cause concurrent invocation of <code class="ph codeph">TransactionListener</code> methods, so implement methods that do the appropriate synchronizing of the multiple threads for thread-safe operation.</p>
+</div>
+Extends <code class="ph codeph">CacheCallback</code> interface. Installed in cache using transaction manager. Works with region-level listeners if needed.</td>
+</tr>
+<tr>
+<td><code class="ph codeph">TransactionWriter</code></td>
+<td><code class="ph codeph">TransactionEvent</code> with embedded list of <code class="ph codeph">EntryEvent</code></td>
+<td>Receives events for <em>pending</em> transaction commits. Has the ability to abort the transaction. Extends <code class="ph codeph">CacheCallback</code> interface. Installed in cache using transaction manager. At most one writer is called per transaction. Install a writer in every transaction host.</td>
+</tr>
+<tr>
+<td><code class="ph codeph">UniversalMembershipListenerAdapter</code>
+<p>(org.apache.geode.management.membership.UniversalMembershipListenerAdapter)</p></td>
+<td><code class="ph codeph">MembershipEvent</code> and <code class="ph codeph">ClientMembershipEvent</code></td>
+<td>One of the interfaces that replaces the deprecated Admin APIs. Provides a wrapper for MembershipListener and ClientMembershipListener callbacks for both clients and peers.</td>
+</tr>
+</tbody>
+</table>
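+
+For illustration, here is a minimal sketch of a cache listener built on the listener adapter and installed on a region at creation time. The class name, region name, and log output are illustrative only, not part of the Geode examples:
+
+``` pre
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.cache.EntryEvent;
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.RegionShortcut;
+import org.apache.geode.cache.util.CacheListenerAdapter;
+
+// Extends the adapter so only the callbacks of interest need to be overridden.
+public class LoggingCacheListener extends CacheListenerAdapter<String, String> {
+  @Override
+  public void afterCreate(EntryEvent<String, String> event) {
+    System.out.println("created " + event.getKey() + " = " + event.getNewValue());
+  }
+
+  @Override
+  public void afterUpdate(EntryEvent<String, String> event) {
+    System.out.println("updated " + event.getKey() + " to " + event.getNewValue());
+  }
+
+  public static void main(String[] args) {
+    Cache cache = new CacheFactory().create();
+    // Install the listener in the member where the events should be handled.
+    Region<String, String> region =
+        cache.<String, String>createRegionFactory(RegionShortcut.REPLICATE)
+            .addCacheListener(new LoggingCacheListener())
+            .create("exampleRegion");
+    region.put("k1", "v1"); // triggers afterCreate in this member
+  }
+}
+```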
+
+## <a id="event_handlers_and_events__section_48C81FE4C1934DBBB287925A6F7A473D" class="no-quick-link"></a>Events
+
+The events in this table are cache events unless otherwise noted.
+
+<table>
+<colgroup>
+<col width="25%" />
+<col width="25%" />
+<col width="50%" />
+</colgroup>
+<thead>
+<tr>
+<th>Event</th>
+<th>Passed to handler ...</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td><code class="ph codeph">AsyncEvent</code></td>
+<td><code class="ph codeph">AsyncEventListener</code></td>
+<td>Provides information about a single event in the cache for asynchronous, write-behind processing.</td>
+</tr>
+<tr>
+<td><code class="ph codeph">CacheEvent</code></td>
+<td> </td>
+<td>Superinterface to <code class="ph codeph">RegionEvent</code> and <code class="ph codeph">EntryEvent</code>. This defines common event methods, and contains data needed to diagnose the circumstances of the event, including a description of the operation being performed, information about where the event originated, and any callback argument passed to the method that generated this event.</td>
+</tr>
+<tr>
+<td><code class="ph codeph">ClientMembershipEvent</code></td>
+<td><code class="ph codeph">ClientMembershipListener</code></td>
+<td>An event delivered to a <code class="ph codeph">ClientMembershipListener</code> when this process detects connection changes to servers or clients.</td>
+</tr>
+<tr>
+<td><code class="ph codeph">CqEvent</code></td>
+<td><code class="ph codeph">CqListener</code></td>
+<td>Provides information about a change to the results of a continuous query running on a server on behalf of a client. <code class="ph codeph">CqEvent</code>s are processed on the client.</td>
+</tr>
+<tr>
+<td><code class="ph codeph">EntryEvent</code></td>
+<td><code class="ph codeph">CacheListener</code>, <code class="ph codeph">CacheWriter</code>, <code class="ph codeph">TransactionListener</code> (inside the <code class="ph codeph">TransactionEvent</code>)</td>
+<td>Extends <code class="ph codeph">CacheEvent</code> for entry events. Contains information about an event affecting a data entry in the cache. The information includes the key, the value before this event, and the value after this event. <code class="ph codeph">EntryEvent.getNewValue</code> returns the current value of the data entry. <code class="ph codeph">EntryEvent.getOldValue</code> returns the value before this event if it is available. For a partitioned region, returns the old value if the local cache holds the primary copy of the entry. <code class="ph codeph">EntryEvent</code> provides the Geode transaction ID if available.
+<p>You can retrieve serialized values from <code class="ph codeph">EntryEvent</code> using the <code class="ph codeph">getSerialized</code>* methods. This is useful if you get values from one region's events just to put them into a separate cache region. There is no counterpart <code class="ph codeph">put</code> function as the put recognizes that the value is serialized and bypasses the serialization step.</p></td>
+</tr>
+<tr>
+<td><code class="ph codeph">MembershipEvent</code> (membership event)</td>
+<td><code class="ph codeph">MembershipListener</code></td>
+<td><p>An event that describes the member that originated this event. Instances of this are delivered to a <code class="ph codeph">MembershipListener</code> when a member has joined or left the distributed system.</p></td>
+</tr>
+<tr>
+<td><code class="ph codeph">RegionEvent</code></td>
+<td><code class="ph codeph">CacheListener</code>, <code class="ph codeph">CacheWriter</code>, <code class="ph codeph">RegionMembershipListener</code></td>
+<td>Extends <code class="ph codeph">CacheEvent</code> for region events. Provides information about operations that affect the whole region, such as reinitialization of the region after being destroyed.</td>
+</tr>
+<tr>
+<td><code class="ph codeph">TimestampedEntryEvent</code></td>
+<td><code class="ph codeph">GatewayConflictResolver</code></td>
+<td>Extends <code class="ph codeph">EntryEvent</code> to include a timestamp and distributed system ID associated with the event. The conflict resolver can compare the timestamp and ID in the event with the values stored in the entry to decide whether the local system should apply the potentially conflicting event.</td>
+</tr>
+<tr>
+<td><code class="ph codeph">TransactionEvent</code></td>
+<td><code class="ph codeph">TransactionListener</code>, <code class="ph codeph">TransactionWriter</code></td>
+<td>Describes the work done in a transaction. This event may be for a pending or committed transaction, or for the work abandoned by an explicit rollback or failed commit. The work is represented by an ordered list of <code class="ph codeph">EntryEvent</code> instances. The entry events are listed in the order in which the operations were performed in the transaction.
+<p>As the transaction operations are performed, the entry events are conflated, with only the last event for each entry remaining in the list. So if entry A is modified, then entry B, then entry A, the list will contain the event for entry B followed by the second event for entry A.</p></td>
+</tr>
+</tbody>
+</table>
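+
+As the table notes, a `CacheWriter` sees *pending* changes and can veto them by throwing `CacheWriterException`. A minimal sketch follows; the null-value rule is purely illustrative:
+
+``` pre
+import org.apache.geode.cache.CacheWriterException;
+import org.apache.geode.cache.EntryEvent;
+import org.apache.geode.cache.util.CacheWriterAdapter;
+
+// Aborts any pending update whose new value is null. The exception is
+// propagated back to the thread that attempted the operation.
+public class NonNullWriter extends CacheWriterAdapter<String, String> {
+  @Override
+  public void beforeUpdate(EntryEvent<String, String> event) throws CacheWriterException {
+    if (event.getNewValue() == null) {
+      throw new CacheWriterException("null values are not allowed for key " + event.getKey());
+    }
+  }
+}
+```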
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/events/resolving_multisite_conflicts.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/resolving_multisite_conflicts.html.md.erb b/geode-docs/developing/events/resolving_multisite_conflicts.html.md.erb
new file mode 100644
index 0000000..31fe7a3
--- /dev/null
+++ b/geode-docs/developing/events/resolving_multisite_conflicts.html.md.erb
@@ -0,0 +1,63 @@
+---
+title:  Resolving Conflicting Events
+---
+
+You can optionally create a `GatewayConflictResolver` cache plug-in to decide whether a potentially conflicting event that was delivered from another site should be applied to the local cache.
+
+By default, all regions perform consistency checks when a member applies an update received either from another cluster member or from a remote cluster over the WAN. The default consistency checking for WAN events is described in [How Consistency Is Achieved in WAN Deployments](../distributed_regions/how_region_versioning_works_wan.html#topic_fpy_z3h_j5).
+
+You can override the default consistency checking behavior by writing and configuring a custom `GatewayConflictResolver`. The `GatewayConflictResolver` implementation can use the timestamp and distributed system ID included in a WAN update event to determine whether or not to apply the update. For example, you may decide that updates from a particular cluster should always "win" a conflict when the timestamp difference between updates is less than some fixed period of time.
+
+## <a id="topic_E97BB68748F14987916CD1A50E4B4542__section_E20B4A8A98FD4EDAAA8C14B8059AA7F7" class="no-quick-link"></a>Implementing a GatewayConflictResolver
+
+**Note:**
+A `GatewayConflictResolver` implementation is called only for update events that could cause a conflict in the region. This corresponds to update events that have a different distributed system ID than the distributed system that last updated the region entry. If the same distributed system ID makes consecutive updates to a region entry, no conflict is possible, and the `GatewayConflictResolver` is not called.
+
+**Procedure**
+
+1.  Program the event handler:
+    1.  Create a class that implements the `GatewayConflictResolver` interface.
+    2.  If you want to declare the handler in `cache.xml`, implement the `org.apache.geode.cache.Declarable` interface as well.
+    3.  Implement the handler's `onEvent()` method to determine whether the WAN event should be allowed. `onEvent()` receives both a `TimestampedEntryEvent` and a `GatewayConflictHelper` instance. `TimestampedEntryEvent` has methods for obtaining the timestamp and distributed system ID of both the update event and the current region entry. Use methods in the `GatewayConflictHelper` to either disallow the update event (retaining the existing region entry value) or provide an alternate value.
+
+        **Example:**
+
+        ``` pre
+         public void onEvent(TimestampedEntryEvent event, GatewayConflictHelper helper) {
+            if (event.getOperation().isUpdate()) {
+              ShoppingCart oldCart = (ShoppingCart)event.getOldValue();
+              ShoppingCart newCart = (ShoppingCart)event.getNewValue();
+              oldCart.updateFromConflictingState(newCart);
+              helper.changeEventValue(oldCart);
+            }
+          }
+        ```
+
+        **Note:**
+        In order to maintain consistency in the region, your conflict resolver must always resolve two events in the same way regardless of which event it receives first.
+
+2.  Install the conflict resolver for the cache, using either the `cache.xml` file or the Java API.
+
+    **cache.xml**
+
+    ``` pre
+    <cache>
+         ... 
+        <gateway-conflict-resolver>
+          <class-name>myPackage.MyConflictResolver</class-name>
+        </gateway-conflict-resolver>
+        ...
+    </cache>
+    ```
+
+    **Java API**
+
+    ``` pre
+    // Create or obtain the cache
+    Cache cache = new CacheFactory().create();
+
+    // Create and add a conflict resolver
+cache.setGatewayConflictResolver(new MyConflictResolver());
+    ```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/events/tune_client_message_tracking_timeout.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/tune_client_message_tracking_timeout.html.md.erb b/geode-docs/developing/events/tune_client_message_tracking_timeout.html.md.erb
new file mode 100644
index 0000000..0874bb7
--- /dev/null
+++ b/geode-docs/developing/events/tune_client_message_tracking_timeout.html.md.erb
@@ -0,0 +1,26 @@
+---
+title:  Tune the Client's Subscription Message Tracking Timeout
+---
+
+<a id="tune_client_message_tracking_timeout__section_C655A41D47694BDC9164E5D83C23FA7C"></a>
+If the client pool's `subscription-message-tracking-timeout` is set too low, your client will discard tracking records for live threads, increasing the likelihood of processing duplicate events from those threads.
+
+This setting is especially important in systems where it is vital to avoid or greatly minimize duplicate events. If you detect that duplicate messages are being processed by your clients, increasing the timeout may help. Setting `subscription-message-tracking-timeout` may not completely eliminate duplicate entries, but careful configuration can help minimize occurrences.
+
+Duplicates are monitored by keeping track of message sequence IDs from the source thread where the operation originated. For a long-running system, you would not want to track this information for very long periods or the information may be kept long enough for a thread ID to be recycled. If this happens, messages from a new thread may be discarded mistakenly as duplicates of messages from an old thread with the same ID. In addition, maintaining this tracking information for old threads uses memory that might be freed up for other things.
+
+To minimize duplicates and reduce the size of the message tracking list, set your client `subscription-message-tracking-timeout` higher than double the sum of these times:
+
+-   The longest time your originating threads might wait between operations
+-   For redundant servers add:
+    -   The server's `message-sync-interval`
+    -   Total time required for failover (usually 7-10 seconds, including the time to detect failure)
+
+You risk losing live thread tracking records if you set the value lower than this. This could result in your client processing duplicate event messages into its cache for the associated threads. It is worth working to set the `subscription-message-tracking-timeout` as low as you reasonably can.
+
+``` pre
+<!-- Set the tracking timeout to 70 seconds -->
+<pool name="client" subscription-enabled="true" subscription-message-tracking-timeout="70000"> 
+    ...
+</pool>
+```
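+
+The same guideline can be applied when the pool is defined through the client API. The sketch below is illustrative only: the 20-second longest gap, 5-second `message-sync-interval`, and 10-second failover window are assumed values, not defaults; substitute measurements from your own system.
+
+``` pre
+// Derive the tracking timeout from the guideline above, then apply it to the pool.
+int longestGapMillis = 20000;          // longest pause between operations on a source thread (assumed)
+int messageSyncIntervalMillis = 5000;  // server message-sync-interval, for redundant servers (assumed)
+int failoverMillis = 10000;            // time to detect the failure and fail over (assumed)
+int trackingTimeout = 2 * (longestGapMillis + messageSyncIntervalMillis + failoverMillis); // 70000 ms
+
+ClientCache cache = new ClientCacheFactory()
+    .addPoolServer("localhost", 40404)
+    .setPoolSubscriptionEnabled(true)
+    .setPoolSubscriptionMessageTrackingTimeout(trackingTimeout)
+    .create();
+```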

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/events/tune_client_server_event_messaging.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/tune_client_server_event_messaging.html.md.erb b/geode-docs/developing/events/tune_client_server_event_messaging.html.md.erb
new file mode 100644
index 0000000..59206bc
--- /dev/null
+++ b/geode-docs/developing/events/tune_client_server_event_messaging.html.md.erb
@@ -0,0 +1,20 @@
+---
+title:  Tuning Client/Server Event Messaging
+---
+
+<a id="client_server_event_messaging__section_0894E034A285456EA01D5903248F9B3B"></a>
+The server uses an asynchronous messaging queue to send events to its clients. Every event in the queue originates in an operation performed by a thread in a client, a server, or an application in the server's or some other distributed system. The event message has a unique identifier composed of the originating thread's ID combined with its member's distributed system member ID, and the sequential ID of the operation. So the event messages originating in any single thread can be grouped and ordered by time from lowest sequence ID to highest. Servers and clients track the highest sequential ID for each member thread ID.
+
+A single client thread receives and processes messages from the server, tracking received messages to make sure it does not process duplicate sends. It does this using the process IDs from originating threads.
+
+<img src="../../images_svg/tune_cs_event_messaging.svg" id="client_server_event_messaging__image_656BDF5E745F4C6D92C844C423102948" class="image" />
+
+The client's message tracking list holds the highest sequence ID of any message received for each originating thread. The list can become quite large in systems where there are many different threads coming and going and doing work on the cache. After a thread dies, its tracking entry is not needed. To avoid maintaining tracking information for threads that have died, the client expires entries that have had no activity for more than the `subscription-message-tracking-timeout`.
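+
+Conceptually, the tracking list behaves like a map from originating thread to the highest sequence ID received so far, as in the simplified sketch below. This is only an illustration of the idea, not Geode's internal implementation:
+
+``` pre
+import java.util.HashMap;
+import java.util.Map;
+
+// Simplified model: remember the highest sequence ID seen per originating
+// thread and drop anything at or below it as a duplicate.
+public class MessageTracker {
+  private final Map<String, Long> highestSeenByThread = new HashMap<>();
+
+  /** Returns true if the message is new and should be processed. */
+  public synchronized boolean record(String originatingThreadId, long sequenceId) {
+    Long previous = highestSeenByThread.get(originatingThreadId);
+    if (previous != null && sequenceId <= previous) {
+      return false; // duplicate of a message already processed
+    }
+    highestSeenByThread.put(originatingThreadId, sequenceId);
+    return true;
+  }
+
+  /** Entries for idle threads are expired; Geode uses the
+      subscription-message-tracking-timeout for this purpose. */
+  public synchronized void expire(String originatingThreadId) {
+    highestSeenByThread.remove(originatingThreadId);
+  }
+}
+```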
+
+-   **[Conflate the Server Subscription Queue](../../developing/events/conflate_server_subscription_queue.html)**
+
+-   **[Limit the Server's Subscription Queue Memory Use](../../developing/events/limit_server_subscription_queue_size.html)**
+
+-   **[Tune the Client's Subscription Message Tracking Timeout](../../developing/events/tune_client_message_tracking_timeout.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/events/writing_callbacks_that_modify_the_cache.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/writing_callbacks_that_modify_the_cache.html.md.erb b/geode-docs/developing/events/writing_callbacks_that_modify_the_cache.html.md.erb
new file mode 100644
index 0000000..299fd87
--- /dev/null
+++ b/geode-docs/developing/events/writing_callbacks_that_modify_the_cache.html.md.erb
@@ -0,0 +1,48 @@
+---
+title:  How to Safely Modify the Cache from an Event Handler Callback
+---
+
+Event handlers are synchronous. If you need to change the cache or perform any other distributed operation from event handler callbacks, be careful to avoid activities that might block and affect your overall system performance.
+
+## <a id="writing_callbacks_that_modify_the_cache__section_98E49363C91945DEB0A3B2FD9A209969" class="no-quick-link"></a>Operations to Avoid in Event Handlers
+
+Do not perform distributed operations of any kind directly from your event handler. Geode is a highly distributed system and many operations that may seem local invoke distributed operations.
+
+These are common distributed operations that can get you into trouble:
+
+-   Calling `Region` methods, on the event's region or any other region.
+-   Using the Geode `DistributedLockService`.
+-   Modifying region attributes.
+-   Executing a function through the Geode `FunctionService`.
+
+To be on the safe side, do not make any calls to the Geode API directly from your event handler. Make all Geode API calls from within a separate thread or executor.
+
+## <a id="writing_callbacks_that_modify_the_cache__section_78648D4177E14EA695F0B059E336137C" class="no-quick-link"></a>How to Perform Distributed Operations Based on Events
+
+If you need to use the Geode API from your handlers, make your work asynchronous to the event handler. You can spawn a separate thread or use a solution like the `java.util.concurrent.Executor` interface.
+
+This example shows a serial executor where the callback creates a `Runnable` that can be pulled off a queue and run by another object. This preserves the ordering of events.
+
+``` pre
+public void afterCreate(EntryEvent event) {
+  final Region otherRegion = cache.getRegion("/otherRegion");
+  final Object key = event.getKey();
+  final Object val = event.getNewValue();
+
+  serialExecutor.execute(new Runnable() {
+    public void run() {
+      try {
+        otherRegion.create(key, val);
+      }
+      catch (org.apache.geode.cache.RegionDestroyedException e) {
+        ...
+      }
+      catch (org.apache.geode.cache.EntryExistsException e) {
+        ...
+      }
+    }
+  });
+}
+```
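+
+The `serialExecutor` above is not defined in the snippet. One way to obtain a single-threaded, order-preserving executor is sketched below; the field and class names are assumptions for illustration:
+
+``` pre
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+
+import org.apache.geode.cache.EntryEvent;
+import org.apache.geode.cache.util.CacheListenerAdapter;
+
+public class ForwardingListener extends CacheListenerAdapter<Object, Object> {
+  // A single worker thread runs tasks in submission order, preserving event
+  // order while keeping Geode API calls off the callback thread.
+  private final ExecutorService serialExecutor = Executors.newSingleThreadExecutor();
+
+  @Override
+  public void afterCreate(EntryEvent<Object, Object> event) {
+    // ... submit a Runnable to serialExecutor, as in the example above ...
+  }
+}
+```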
+
+For additional information on the `Executor`, see the `SerialExecutor` example on the Oracle Java web site.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/eviction/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/eviction/chapter_overview.html.md.erb b/geode-docs/developing/eviction/chapter_overview.html.md.erb
new file mode 100644
index 0000000..4920294
--- /dev/null
+++ b/geode-docs/developing/eviction/chapter_overview.html.md.erb
@@ -0,0 +1,17 @@
+---
+title:  Eviction
+---
+
+Use eviction to control data region size.
+
+<a id="eviction__section_C3409270DD794822B15E819E2276B21A"></a>
+
+-   **[How Eviction Works](../../developing/eviction/how_eviction_works.html)**
+
+    Eviction settings cause Apache Geode to work to keep a region's resource use under a specified level by removing least recently used (LRU) entries to make way for new entries.
+
+-   **[Configure Data Eviction](../../developing/eviction/configuring_data_eviction.html)**
+
+    Use eviction controllers to configure the eviction-attributes region attribute settings to keep your region within a specified limit.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/eviction/configuring_data_eviction.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/eviction/configuring_data_eviction.html.md.erb b/geode-docs/developing/eviction/configuring_data_eviction.html.md.erb
new file mode 100644
index 0000000..42f3dbd
--- /dev/null
+++ b/geode-docs/developing/eviction/configuring_data_eviction.html.md.erb
@@ -0,0 +1,71 @@
+---
+title:  Configure Data Eviction
+---
+
+Use eviction controllers to configure the eviction-attributes region attribute settings to keep your region within a specified limit.
+
+<a id="configuring_data_eviction__section_8515EC9635C342C0916EE9E6120E2AC9"></a>
+Eviction controllers monitor region and memory use and, when the limit is reached, remove older entries to make way for new data. For heap percentage, the controller used is the Geode resource manager, configured in conjunction with the JVM's garbage collector for optimum performance.
+
+Configure data eviction as follows. You do not need to perform these steps in the sequence shown.
+
+1.  Decide whether to evict based on:
+    -   Entry count (useful if your entry sizes are relatively uniform).
+    -   Total bytes used. In partitioned regions, this is set using `local-max-memory`. In non-partitioned regions, it is set in `eviction-attributes`.
+    -   Percentage of application heap used. This uses the Geode resource manager. When the manager determines that eviction is required, the manager orders the eviction controller to start evicting from all regions where the eviction algorithm is set to `lru-heap-percentage`. Eviction continues until the manager calls a halt. Geode evicts the least recently used entry hosted by the member for the region. See [Managing Heap and Off-heap Memory](../../managing/heap_use/heap_management.html#resource_manager).
+
+2.  Decide what action to take when the limit is reached:
+    -   Locally destroy the entry.
+    -   Overflow the entry data to disk. See [Persistence and Overflow](../storing_data_on_disk/chapter_overview.html).
+
+3.  Decide the maximum amount of data to allow in the member for the eviction measurement indicated. This is the maximum for all storage for the region in the member. For partitioned regions, this is the total for all buckets stored in the member for the region, including any secondary buckets used for redundancy.
+4.  Decide whether to program a custom sizer for your region. If you are able to provide such a class, it might be faster than the standard sizing done by Geode. Your custom class must follow the guidelines for defining custom classes and, additionally, must implement `org.apache.geode.cache.util.ObjectSizer`. See [Requirements for Using Custom Classes in Data Caching](../../basic_config/data_entries_custom_classes/using_custom_classes.html).
+
+**Note:**
+You can also configure Regions using the gfsh command-line interface; however, you cannot configure `eviction-attributes` using gfsh. See [Region Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_EF03119A40EE492984F3B6248596E1DD) and [Disk Store Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_1ACC91B493EE446E89EC7DBFBBAE00EA).
+
+Examples:
+
+``` pre
+// Create an LRU memory eviction controller with max bytes of 1000 MB
+// Use a custom class for measuring the size of each object in the region 
+<region-attributes refid="REPLICATE"> 
+  <eviction-attributes> 
+    <lru-memory-size maximum="1000" action="overflow-to-disk"> 
+      <class-name>com.myLib.MySizer</class-name> 
+      <parameter name="name"> 
+        <string>Super Sizer</string> 
+      </parameter> 
+    </lru-memory-size> 
+  </eviction-attributes> 
+  </region-attributes>
+```
+
+``` pre
+// Create a memory eviction controller on a partitioned region with max bytes of 512 MB
+<region name="demoPR">
+  <region-attributes refid="PARTITION">
+    <partition-attributes local-max-memory="512" total-num-buckets="13"/>
+    <eviction-attributes>
+       <lru-memory-size action="local-destroy">
+         <class-name>org.apache.geode.cache.util.ObjectSizer</class-name>
+       </lru-memory-size>
+    </eviction-attributes>
+  </region-attributes>
+</region>
+            
+```
+
+``` pre
+// Configure a partitioned region for heap LRU eviction. The resource manager controls the limits. 
+<region-attributes refid="PARTITION_HEAP_LRU"> 
+</region-attributes>
+```
+
+``` pre
+Region currRegion = cache.createRegionFactory()
+    .setEvictionAttributes(EvictionAttributes.createLRUHeapAttributes(EvictionAction.LOCAL_DESTROY))
+    .create("root");
+```
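+
+For comparison, this sketch configures entry-count eviction with overflow to disk through the same API; the 1000-entry limit is an arbitrary illustrative value:
+
+``` pre
+Region currRegion = cache.createRegionFactory()
+    .setEvictionAttributes(EvictionAttributes.createLRUEntryAttributes(1000, EvictionAction.OVERFLOW_TO_DISK))
+    .create("root");
+```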
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/eviction/how_eviction_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/eviction/how_eviction_works.html.md.erb b/geode-docs/developing/eviction/how_eviction_works.html.md.erb
new file mode 100644
index 0000000..01d87d6
--- /dev/null
+++ b/geode-docs/developing/eviction/how_eviction_works.html.md.erb
@@ -0,0 +1,19 @@
+---
+title:  How Eviction Works
+---
+
+Eviction settings cause Apache Geode to work to keep a region's resource use under a specified level by removing least recently used (LRU) entries to make way for new entries.
+
+<a id="how_eviction_works__section_C3409270DD794822B15E819E2276B21A"></a>
+You can configure eviction based on entry count, percentage of available heap, or absolute memory usage. You also configure what to do when you need to evict: destroy entries or overflow them to disk. See [Persistence and Overflow](../storing_data_on_disk/chapter_overview.html).
+
+When Geode determines that adding or updating an entry would take the region over the specified level, it overflows or removes enough older entries to make room. For entry count eviction, this means a one-to-one trade of an older entry for the newer one. For the memory settings, the number of older entries that need to be removed to make space depends entirely on the relative sizes of the older and newer entries.
+
+## <a id="how_eviction_works__section_69E2AA453EDE4E088D1C3332C071AFE1" class="no-quick-link"></a>Eviction in Partitioned Regions
+
+In partitioned regions, Geode removes the oldest entry it can find *in the bucket where the new entry operation is being performed*. Geode maintains LRU entry information on a bucket-by-bucket basis, as the cost of maintaining information across the partitioned region would be too great a performance hit.
+
+-   For memory and entry count eviction, LRU eviction is done in the bucket where the new entry operation is being performed until the overall size of the combined buckets in the member has dropped enough to perform the operation without going over the limit.
+-   For heap eviction, each partitioned region bucket is treated as if it were a separate region, with each eviction action only considering the LRU for the bucket, and not the partitioned region as a whole.
+
+Because of this, eviction in partitioned regions may leave older entries for the region in other buckets in the local data store as well as in other stores in the distributed system. It may also leave entries in a primary copy that it evicts from a secondary copy or vice-versa.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/expiration/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/expiration/chapter_overview.html.md.erb b/geode-docs/developing/expiration/chapter_overview.html.md.erb
new file mode 100644
index 0000000..31ad4b2
--- /dev/null
+++ b/geode-docs/developing/expiration/chapter_overview.html.md.erb
@@ -0,0 +1,15 @@
+---
+title:  Expiration
+---
+
+Use expiration to keep data current by removing stale entries. You can also use it to remove entries you are not using so your region uses less space. Expired entries are reloaded the next time they are requested.
+
+-   **[How Expiration Works](../../developing/expiration/how_expiration_works.html)**
+
+    Expiration removes old entries and entries that you are not using. You can destroy or invalidate entries.
+
+-   **[Configure Data Expiration](../../developing/expiration/configuring_data_expiration.html)**
+
+    Configure the type of expiration and the expiration action to use.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/expiration/configuring_data_expiration.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/expiration/configuring_data_expiration.html.md.erb b/geode-docs/developing/expiration/configuring_data_expiration.html.md.erb
new file mode 100644
index 0000000..0fae74a
--- /dev/null
+++ b/geode-docs/developing/expiration/configuring_data_expiration.html.md.erb
@@ -0,0 +1,66 @@
+---
+title:  Configure Data Expiration
+---
+
+Configure the type of expiration and the expiration action to use.
+
+<a id="configuring_data_expiration__section_ADB8302125624E01A808EA5E4FF79A5C"></a>
+
+-   Set the region's `statistics-enabled` attribute to true.
+
+    The statistics used for expiration are available directly to the application through the `CacheStatistics` object returned by the `Region` and `Region.Entry` `getStatistics` methods. The `CacheStatistics` object also provides a method for resetting the statistics counters.
+
+-   Set the expiration attributes by expiration type, with the max times and expiration actions. See the region attributes listings for `entry-time-to-live`, `entry-idle-time`, `region-time-to-live`, and `region-idle-time` in [&lt;region-attributes&gt;](../../reference/topics/cache_xml.html#region-attributes).
+
+    For partitioned regions, to ensure reliable read behavior, use the `time-to-live` attributes, not the `idle-time` attributes. In addition, you cannot use `local-destroy` or `local-invalidate` expiration actions in partitioned regions.
+
+    Replicated regions example:
+
+    ``` pre
+    // Setting standard expiration on an entry
+    <region-attributes statistics-enabled="true"> 
+      <entry-idle-time> 
+        <expiration-attributes timeout="60" action="local-invalidate"/> 
+      </entry-idle-time> 
+    </region-attributes> 
+    ```
+
+-   Override the region-wide settings for specific entries, if required by your application. To do this:
+    1.  Program a custom expiration class that implements `org.apache.geode.cache.CustomExpiry`. Example:
+
+        ``` pre
+        // Custom expiration class
+        // Use the key for a region entry to set entry-specific expiration timeouts of 
+        //   10 seconds for even-numbered keys with a DESTROY action on the expired entries
+        //   Leave the default region setting for all odd-numbered keys. 
+        public class MyClass implements CustomExpiry, Declarable 
+        { 
+            private static final ExpirationAttributes CUSTOM_EXPIRY = 
+                    new ExpirationAttributes(10, ExpirationAction.DESTROY); 
+            public ExpirationAttributes getExpiry(Entry entry) 
+            { 
+                int key = (Integer)entry.getKey(); 
+                return key % 2 == 0 ? CUSTOM_EXPIRY : null; 
+            }
+        }
+        ```
+    2.  Define the class inside the expiration attributes settings for the region. Example:
+
+
+        ``` pre
+        <!-- Set default entry idle timeout expiration for the region --> 
+        <!-- Pass entries to custom expiry class for expiration overrides -->
+        <region-attributes statistics-enabled="true"> 
+            <entry-idle-time> 
+                <expiration-attributes timeout="60" action="local-invalidate"> 
+                    <custom-expiry> 
+                        <class-name>com.company.mypackage.MyClass</class-name> 
+                    </custom-expiry> 
+                </expiration-attributes> 
+            </entry-idle-time> 
+        </region-attributes>
+        ```
+
+You can also configure Regions using the gfsh command-line interface; however, you cannot configure `custom-expiry` using gfsh. See [Region Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_EF03119A40EE492984F3B6248596E1DD).
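+
+If you prefer the API over `cache.xml`, the equivalent settings can be applied on the region factory. This sketch assumes the same 60-second idle timeout and the custom expiry class from the example above; the region name is illustrative:
+
+``` pre
+// Statistics must be enabled for expiration to operate.
+Region myRegion = cache.createRegionFactory(RegionShortcut.REPLICATE)
+    .setStatisticsEnabled(true)
+    .setEntryIdleTimeout(new ExpirationAttributes(60, ExpirationAction.LOCAL_INVALIDATE))
+    .setCustomEntryIdleTimeout(new MyClass())
+    .create("exampleRegion");
+```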
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/expiration/how_expiration_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/expiration/how_expiration_works.html.md.erb b/geode-docs/developing/expiration/how_expiration_works.html.md.erb
new file mode 100644
index 0000000..5c3be02
--- /dev/null
+++ b/geode-docs/developing/expiration/how_expiration_works.html.md.erb
@@ -0,0 +1,53 @@
+---
+title:  How Expiration Works
+---
+
+Expiration removes old entries and entries that you are not using. You can destroy or invalidate entries.
+
+<a id="how_expiration_works__section_94FDBB821CDE49C48A0EFA6ED4DE194F"></a>
+Expiration activities in distributed regions can be distributed or local. Thus, one cache could control expiration for a number of caches in the system.
+
+This figure shows two basic expiration settings for a producer/consumer system. The producer member (on the right) populates the region from a database and the data is automatically distributed throughout the system. The data is valid only for one hour, so the producer performs a distributed destroy on entries that are an hour old. The other applications are consumers. The consumers free up space in their caches by removing their local copies of the entries for which there is no local interest (idle-time expiration). Requests for entries that have expired on the consumers will be forwarded to the producer.
+
+<img src="../../images_svg/expiration.svg" id="how_expiration_works__image_3D674825D1434830A8242D77CC89289F" class="image" />
+
+## <a id="how_expiration_works__section_B6C55A610F4243ED8F1986E8A98858CF" class="no-quick-link"></a>Expiration Types
+
+Apache Geode uses the following expiration types:
+
+-   **Time to live (TTL)**. The amount of time, in seconds, the object may remain in the cache after the last creation or update. For entries, the counter is set to zero for create and put operations. Region counters are reset when the region is created and when an entry has its counter reset. The TTL expiration attributes are `region-time-to-live` and `entry-time-to-live`.
+-   **Idle timeout**. The amount of time, in seconds, the object may remain in the cache after the last access. The idle timeout counter for an object is reset any time its TTL counter is reset. In addition, an entry's idle timeout counter is reset any time the entry is accessed through a get operation or a netSearch. The idle timeout counter for a region is reset whenever the idle timeout is reset for one of its entries. Idle timeout expiration attributes are: `region-idle-time` and `entry-idle-time`.
+
+## <a id="how_expiration_works__section_BA995343EF584104B9853CFE4CAD88AD" class="no-quick-link"></a>Expiration Actions
+
+Apache Geode uses the following expiration actions:
+
+-   destroy
+-   local destroy
+-   invalidate (default)
+-   local invalidate
+
+## <a id="how_expiration_works__section_AB4AB9E57D434159AA6E9B402E5E599D" class="no-quick-link"></a>Partitioned Regions and Entry Expiration
+
+For overall region performance, idle time expiration in partitioned regions may expire some entries sooner than expected. To ensure reliable read behavior across the partitioned region, we recommend that you use `entry-time-to-live` for entry expiration in partitioned regions instead of `entry-idle-time`.
+
+Expiration in partitioned regions is executed in the primary copy, based on the primary's last accessed and last updated statistics.
+
+-   Entry updates are always done in the primary copy, resetting the primary copy's last updated and last accessed statistics.
+-   Entry retrieval uses the most convenient available copy of the data, which may be one of the secondary copies. This provides the best performance at the cost of possibly not updating the primary copy's statistic for last accessed time.
+
+When the primary expires entries, it does not request last accessed statistics from the secondaries, as the performance hit would be too great. It expires entries based solely on the last time the entries were accessed in the primary copy.
+
+You cannot use `local-destroy` or `local-invalidate` expiration actions in a partitioned region.
+
+## <a id="how_expiration_works__section_expiration_settings_and_netSearch" class="no-quick-link"></a>Interaction Between Expiration Settings and `netSearch`
+
+Before `netSearch` retrieves an entry value from a remote cache, it validates the *remote* entry's statistics against the *local* region's expiration settings. Entries that would have already expired in the local cache are passed over. Once validated, the entry is brought into the local cache and the local access and update statistics are updated for the local copy. The last accessed time is reset and the last modified time is updated to the time in the remote cache, with corrections made for system clock differences. Thus the local entry is assigned the true last time the entry was modified in the distributed system. The `netSearch` operation has no effect on the expiration counters in remote caches.
+
+The `netSearch` method operates only on distributed regions with a data-policy of empty, normal and preloaded.
+
+## Configuring the Number of Threads for Expiration
+
+You can use the `gemfire.EXPIRY_THREADS` system property to increase the number of threads that handle expiration. By default, one thread handles expiration, and it is possible for the thread to become overloaded when entries expire faster than the thread can expire them. If a single thread is handling too many expirations, it can result in an OutOfMemoryError (OOME). Set the `gemfire.EXPIRY_THREADS` system property to the desired number when starting the cache server.
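+
+For example, the property can be set on the JVM before the cache is created. Four threads is an arbitrary illustrative value:
+
+``` pre
+// Set the property before creating the cache so the expiration machinery reads it.
+System.setProperty("gemfire.EXPIRY_THREADS", "4");
+Cache cache = new CacheFactory().create();
+```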
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/function_exec/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/function_exec/chapter_overview.html.md.erb b/geode-docs/developing/function_exec/chapter_overview.html.md.erb
new file mode 100644
index 0000000..ead23a9
--- /dev/null
+++ b/geode-docs/developing/function_exec/chapter_overview.html.md.erb
@@ -0,0 +1,19 @@
+---
+title:  Function Execution
+---
+
+A function is a body of code that resides on a server and that an application can invoke from a client or from another server without the need to send the function code itself. The caller can direct a data-dependent function to operate on a particular dataset, or can direct a data-independent function to operate on a particular server, member, or member group.
+
+<a id="function_exec__section_CBD5B04ACC554029B5C710CE8E244FEA">The function execution service provides solutions for a variety of use cases, including:</a>
+
+-   An application needs to perform an operation on the data associated with a key. A registered server-side function can retrieve the data, operate on it, and put it back, with all processing performed locally to the server.
+-   An application needs to initialize some of its components once on each server, which might be used later by executed functions.
+-   A third-party service, such as a messaging service, requires initialization and startup.
+-   Any arbitrary aggregation operation requires iteration over local data sets that can be done more efficiently through a single call to the cache server.
+-   An external resource needs provisioning that can be done by executing a function on a server.
+
+-   **[How Function Execution Works](how_function_execution_works.html)**
+
+-   **[Executing a Function in Apache Geode](function_execution.html)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/function_exec/function_execution.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/function_exec/function_execution.html.md.erb b/geode-docs/developing/function_exec/function_execution.html.md.erb
new file mode 100644
index 0000000..70fb1a3
--- /dev/null
+++ b/geode-docs/developing/function_exec/function_execution.html.md.erb
@@ -0,0 +1,237 @@
+---
+title:  Executing a Function in Apache Geode
+---
+
+<a id="function_execution__section_BE483D79B81C49EE9855F506ED5AB014"></a>
+This procedure assumes that you have already defined the members and regions where you want to run functions.
+
+Main tasks:
+
+1.  Write the function code.
+2.  Register the function on all servers where you want to execute the function. The easiest way to register a function is to use the `gfsh` `deploy` command to deploy the JAR file containing the function code. Deploying the JAR automatically registers the function for you. See [Register the Function Automatically by Deploying a JAR](function_execution.html#function_execution__section_164E27B88EC642BA8D2359B18517B624) for details. Alternatively, you can write the XML or application code to register the function. See [Register the Function Programmatically](function_execution.html#function_execution__section_1D1056F843044F368FB76F47061FCD50) for details.
+3.  Write the application code to run the function and, if the function returns results, to handle the results.
+4.  If your function returns results and you need special results handling, code a custom `ResultsCollector` implementation and use it in your function execution.
+
+## <a id="function_execution__section_7D43B0C628D54F579D5C434D3DF69B3C" class="no-quick-link"></a>Write the Function Code
+
+To write the function code, you implement the `Function` interface or extend the `FunctionAdapter` class. Both are in the `org.apache.geode.cache.execute` package. The adapter class provides some default implementations for methods, which you can override.
+
+Code the methods you need for the function. These steps do not have to be done in this order.
+
+1.  Code `getId` to return a unique name for your function. You can use this name to access the function through the `FunctionService` API.
+2.  For high availability:
+    1.  Code `isHa` to return true to indicate to Geode that it can re-execute your function after one or more members fail.
+    2.  Code your function to return a result.
+    3.  Code `hasResult` to return true.
+
+3.  Code `hasResult` to return true if your function returns results to be processed and false if your function does not return any data (a fire-and-forget function). `FunctionAdapter` `hasResult` returns true by default.
+4.  If the function will be executed on a region, code `optimizeForWrite` to return false if your function only reads from the cache, and true if your function updates the cache. The method only works if, when you are running the function, the `Execution` object is obtained through a `FunctionService` `onRegion` call. `FunctionAdapter` `optimizeForWrite` returns false by default.
+5.  Code the `execute` method to perform the work of the function.
+    1.  Make `execute` thread safe to accommodate simultaneous invocations.
+    2.  For high availability, code `execute` to accommodate multiple identical calls to the function. Use the `RegionFunctionContext` `isPossibleDuplicate` to determine whether the call may be a high-availability re-execution. This boolean is set to true on execution failure and is false otherwise.
+        **Note:**
+        The `isPossibleDuplicate` boolean can be set following a failure from another member's execution of the function, so it only indicates that the execution might be a repeat run in the current member.
+
+    3.  Use the function context to get information about the execution and the data:
+        -   The context holds the function ID, the `ResultSender` object for passing results back to the originator, and function arguments provided by the member where the function originated.
+        -   The context provided to the function is the `FunctionContext`, which is automatically extended to `RegionFunctionContext` if you get the `Execution` object through a `FunctionService` `onRegion` call.
+        -   For data dependent functions, the `RegionFunctionContext` holds the `Region` object, the `Set` of key filters, and a boolean indicating multiple identical calls to the function, for high availability implementations.
+        -   For partitioned regions, the `PartitionRegionHelper` provides access to additional information and data for the region. For single regions, use `getLocalDataForContext`. For colocated regions, use `getLocalColocatedRegions`.
+            **Note:**
+            When you use `PartitionRegionHelper.getLocalDataForContext`, `putIfAbsent` may not return expected results if you are working on the local data set instead of the region.
+
+    4.  To propagate an error condition or exception back to the caller of the function, throw a FunctionException from the `execute` method. Geode transmits the exception back to the caller as if it had been thrown on the calling side. See the Java API documentation for [FunctionException](/releases/latest/javadoc/org/apache/geode/cache/execute/FunctionException.html) for more information.
+
+Example function code:
+
+``` pre
+package quickstart;
+
+import java.io.Serializable;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.Set;
+
+import org.apache.geode.cache.execute.FunctionAdapter;
+import org.apache.geode.cache.execute.FunctionContext;
+import org.apache.geode.cache.execute.FunctionException;
+import org.apache.geode.cache.execute.RegionFunctionContext;
+import org.apache.geode.cache.partition.PartitionRegionHelper;
+
+public class MultiGetFunction extends FunctionAdapter {
+
+  public void execute(FunctionContext fc) { 
+    if(! (fc instanceof RegionFunctionContext)){
+       throw new FunctionException("This is a data aware function, and has "
+           + "to be called using FunctionService.onRegion.");
+    }
+    RegionFunctionContext context = (RegionFunctionContext)fc;
+    Set keys = context.getFilter();
+    Set keysTillSecondLast = new HashSet(); 
+    int setSize = keys.size();
+    Iterator keysIterator = keys.iterator();
+    for(int i = 0; i < (setSize -1); i++)
+    {
+      keysTillSecondLast.add(keysIterator.next());
+    }
+    for (Object k : keysTillSecondLast) {
+      context.getResultSender().sendResult(
+          (Serializable)PartitionRegionHelper.getLocalDataForContext(context)
+              .get(k));
+    }
+    Object lastResult = keysIterator.next();
+    context.getResultSender().lastResult(
+        (Serializable)PartitionRegionHelper.getLocalDataForContext(context)
+            .get(lastResult));
+  }
+
+  public String getId() {
+    return getClass().getName();
+  }
+}
+```
+
+## <a id="function_execution__section_164E27B88EC642BA8D2359B18517B624" class="no-quick-link"></a>Register the Function Automatically by Deploying a JAR
+
+When you deploy a JAR file that contains a Function (in other words, contains a class that implements the Function interface), the Function will be automatically registered via the `FunctionService.registerFunction` method.
+
+To register a function by using `gfsh`:
+
+1.  Package your class files into a JAR file.
+2.  Start a `gfsh` prompt. If necessary, start a Locator and connect to the Geode distributed system where you want to run the function.
+3.  At the gfsh prompt, type the following command:
+
+    ``` pre
+    gfsh>deploy --jar=group1_functions.jar
+    ```
+
+    where group1\_functions.jar corresponds to the JAR file that you created in step 1.
+
+If another JAR file is deployed (either with the same JAR filename or another filename) with the same Function, the new implementation of the Function will be registered, overwriting the old one. If a JAR file is undeployed, any Functions that were auto-registered at the time of deployment will be unregistered. Since deploying a JAR file that has the same name multiple times results in the JAR being un-deployed and re-deployed, Functions in the JAR will be unregistered and re-registered each time this occurs. If a Function with the same ID is registered from multiple differently named JAR files, the Function will be unregistered if either of those JAR files is re-deployed or un-deployed.
+
+See [Deploying Application JARs to Apache Geode Members](../../configuring/cluster_config/deploying_application_jars.html#concept_4436C021FB934EC4A330D27BD026602C) for more details on deploying JAR files.
+
+## <a id="function_execution__section_1D1056F843044F368FB76F47061FCD50" class="no-quick-link"></a>Register the Function Programmatically
+
+This section applies to functions that are invoked using the `Execution.execute(String functionId)` signature. When this method is invoked, the calling application sends the function ID to all members where the `Function.execute` is to be run. Receiving members use the ID to look up the function in the local `FunctionService`. In order to do the lookup, all of the receiving members must have previously registered the function with the function service.
+
+The alternative to this is the `Execution.execute(Function function)` signature. When this method is invoked, the calling application serializes the instance of `Function` and sends it to all members where the `Function.execute` is to be run. Receiving members deserialize the `Function` instance, create a new local instance of it, and run execute from that. This option is not available for non-Java client invocation of functions on servers.
+
+Your Java servers must register functions that are invoked by non-Java clients. You may want to use registration in other cases to avoid the overhead of sending `Function` instances between members.
+
+Register your function using one of these methods:
+
+-   XML:
+
+    ``` pre
+    <cache>
+        ...
+        <function-service>
+          <function>
+            <class-name>com.bigFatCompany.tradeService.cache.func.TradeCalc</class-name>
+          </function>
+        </function-service>
+    </cache>
+    ```
+
+-   Java:
+
+    ``` pre
+    myFunction myFun = new myFunction();
+    FunctionService.registerFunction(myFun);
+    ```
+
+    **Note:**
+    Modifying a function instance after registration has no effect on the registered function. If you want to execute a new function, you must register it with a different identifier.
+
+## <a id="function_execution__section_6A0F4C9FB77C477DA5D995705C8BDD5E" class="no-quick-link"></a>Run the Function
+
+This assumes you've already followed the steps for writing and registering the function.
+
+In every member where you want to explicitly execute the function and process the results, you can use the `gfsh` command line to run the function or you can write an application to run the function.
+
+**Running the Function Using gfsh**
+
+1.  Start a gfsh prompt.
+2.  If necessary, start a Locator and connect to the Geode distributed system where you want to run the function.
+3.  At the gfsh prompt, type the following command:
+
+    ``` pre
+    gfsh> execute function --id=function_id
+    ```
+
+    Where *function\_id* equals the unique ID assigned to the function. You can obtain this ID using the `Function.getId` method.
+
+See [Function Execution Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_8BB061D1A7A9488C819FE2B7881A1278) for more `gfsh` commands related to functions.
+
+**Running the Function via API Calls**
+
+1.  Use one of the `FunctionService` `on*` methods to create an `Execution` object. The `on*` methods (`onRegion`, `onMembers`, and so on) define the highest level where the function is run. For colocated partitioned regions, use `onRegion` and specify any one of the colocated regions. A function run using `onRegion` is referred to as a data-dependent function; the others are data-independent functions.
+2.  Use the `Execution` object as needed for additional function configuration. You can:
+    -   Provide a key `Set` to `withFilter` to narrow the execution scope for `onRegion` `Execution` objects. You can retrieve the key set in your `Function` `execute` method through `RegionFunctionContext.getFilter`.
+    -   Provide function arguments to `withArgs`. You can retrieve these in your `Function` `execute` method through `FunctionContext.getArguments`.
+    -   Define a custom `ResultCollector` and pass it to `withCollector`.
+
+3.  Call the `Execution` object's `execute` method to run the function.
+4.  If the function returns results, call `getResult` from the results collector returned from `execute` and code your application to do whatever it needs to do with the results.
+    **Note:**
+    For high availability, you must call the `getResult` method.
+
+Example of running the function, from an application in one of the executing members:
+
+``` pre
+MultiGetFunction function = new MultiGetFunction();
+FunctionService.registerFunction(function);
+    
+writeToStdout("Press Enter to continue.");
+stdinReader.readLine();
+    
+Set keysForGet = new HashSet();
+keysForGet.add("KEY_4");
+keysForGet.add("KEY_9");
+keysForGet.add("KEY_7");
+
+Execution execution = FunctionService.onRegion(exampleRegion)
+    .withFilter(keysForGet)
+    .withArgs(Boolean.TRUE)
+    .withCollector(new MyArrayListResultCollector());
+
+ResultCollector rc = execution.execute(function);
+// Retrieve results, if the function returns results
+List result = (List)rc.getResult();
+```
+
+## <a id="function_execution__section_F2AFE056650B4BF08BC865F746BFED38" class="no-quick-link"></a>Write a Custom Results Collector
+
+This topic applies to functions that return results.
+
+When you execute a function that returns results, the function stores the results into a `ResultCollector` and returns the `ResultCollector` object. The calling application can then retrieve the results through the `ResultCollector` `getResult` method. Example:
+
+``` pre
+ResultCollector rc = execution.execute(function);
+List result = (List)rc.getResult();
+```
+
+Geode's default `ResultCollector` collects all results into an `ArrayList`. Its `getResult` methods block until all results are received. Then they return the full result set.
+
+To customize results collecting (a sketch of a custom collector follows these steps):
+
+1.  Write a class that implements `ResultCollector` and code the methods to store and retrieve the results as you need. Note that the methods are of two types:
+    1.  `addResult` and `endResults` are called by Geode as results arrive from the `ResultSender` calls in the `Function` `execute` method
+    2.  `getResult` is available to your executing application (the one that calls `Execution.execute`) to retrieve the results
+
+2.  Use high availability for `onRegion` functions that have been coded for it:
+    1.  Code the `ResultCollector` `clearResults` method to remove any partial results data. This readies the instance for a clean function re-execution.
+    2.  When you invoke the function, call the result collector's `getResult` method. This enables the high availability functionality.
+
+3.  In the member that calls the function execution, create the `Execution` object using the `withCollector` method, passing it your custom collector. Example:
+
+    ``` pre
+    Execution execution = FunctionService.onRegion(exampleRegion)
+        .withFilter(keysForGet)
+        .withArgs(Boolean.TRUE)
+        .withCollector(new MyArrayListResultCollector());
+    ```
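+
+The following is a minimal sketch of a collector like the `MyArrayListResultCollector` used in this topic's examples. For brevity it does not block in `getResult` until all results arrive, which the default collector does and a production collector normally should:
+
+``` pre
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.geode.cache.execute.FunctionException;
+import org.apache.geode.cache.execute.ResultCollector;
+import org.apache.geode.distributed.DistributedMember;
+
+public class MyArrayListResultCollector
+    implements ResultCollector<Object, List<Object>> {
+
+  private final List<Object> results =
+      Collections.synchronizedList(new ArrayList<Object>());
+
+  public void addResult(DistributedMember memberID, Object result) {
+    // Called by Geode as each result arrives from an executing member.
+    results.add(result);
+  }
+
+  public void endResults() {
+    // Called by Geode after the last result has been added.
+  }
+
+  public List<Object> getResult() throws FunctionException {
+    // Called by the application that invoked Execution.execute.
+    return results;
+  }
+
+  public List<Object> getResult(long timeout, TimeUnit unit)
+      throws FunctionException {
+    return results;
+  }
+
+  public void clearResults() {
+    // Called by Geode before re-executing a highly available function,
+    // so any partial results from the failed run are discarded.
+    results.clear();
+  }
+}
+```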
+
+## <a id="function_execution__section_638E1FB9B08F4CC4B62C07DDB3661C14" class="no-quick-link"></a>Targeting Single Members of a Member Group or Entire Member Groups
+
+To execute a data-independent function on a group of members or on one member in a group of members, you can write your own nested function. You need one nested function for invoking the function from a client on a server, and a different nested function for invoking it from a server on all members. A sketch of the client-to-server case follows.
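+
+This hypothetical sketch shows an "outer" function that a client could invoke on a server. The server-side `execute` method re-dispatches an inner, already registered function (here called `"innerFunctionId"`) to the members of a member group and relays the collected results back to the caller. The group name and inner function ID are placeholders:
+
+``` pre
+import java.io.Serializable;
+
+import org.apache.geode.cache.execute.Execution;
+import org.apache.geode.cache.execute.Function;
+import org.apache.geode.cache.execute.FunctionContext;
+import org.apache.geode.cache.execute.FunctionService;
+import org.apache.geode.cache.execute.ResultCollector;
+
+public class GroupDispatchFunction implements Function {
+
+  public void execute(FunctionContext context) {
+    // The group name and inner function ID are hard-coded for brevity;
+    // they could instead be passed in through context.getArguments().
+    Execution onGroup = FunctionService.onMembers("group1");
+    ResultCollector<?, ?> rc = onGroup.execute("innerFunctionId");
+    // Relay the inner function's collected results back to the caller.
+    context.getResultSender().lastResult((Serializable) rc.getResult());
+  }
+
+  public String getId() {
+    return getClass().getName();
+  }
+
+  public boolean hasResult() {
+    return true;
+  }
+
+  public boolean isHA() {
+    return false;
+  }
+
+  public boolean optimizeForWrite() {
+    return false;
+  }
+}
+```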

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/function_exec/how_function_execution_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/function_exec/how_function_execution_works.html.md.erb b/geode-docs/developing/function_exec/how_function_execution_works.html.md.erb
new file mode 100644
index 0000000..19959e8
--- /dev/null
+++ b/geode-docs/developing/function_exec/how_function_execution_works.html.md.erb
@@ -0,0 +1,114 @@
+---
+title:  How Function Execution Works
+---
+
+## <a id="how_function_execution_works__section_881D2FF6761B4D689DDB46C650E2A2E1" class="no-quick-link"></a>Where Functions Are Executed
+
+You can execute data-independent functions or data-dependent functions in Geode in the following places:
+
+**For Data-independent Functions**
+
+-   **On a specific member or members:** Execute the function within a peer-to-peer distributed system, specifying the member or members where you want to run the function by using the `FunctionService` methods `onMember()` and `onMembers()`.
+-   **On a specific server or set of servers:** If you are connected to a distributed system as a client, you can execute the function on a server or servers configured for a specific connection pool, or on a server or servers connected to a given cache using the default connection pool. For data-independent functions on client/server architectures, a client invokes `FunctionService` methods `onServer()` or `onServers()`. (See [How Client/Server Connections Work](../../topologies_and_comm/topology_concepts/how_the_pool_manages_connections.html) for details regarding pool connections.)
+-   **On member groups or on a single member within each member group:** You can organize members into logical member groups. (See [Configuring and Running a Cluster](../../configuring/chapter_overview.html#concept_lrh_gyq_s4) for more information about using member groups.) You can invoke a data-independent function on all members in a specified member group or member groups, or execute the function on only one member of each specified member group.
+
+**For Data-dependent Functions**
+
+-   **On a region:** If you are executing a data-dependent function, specify a region and, optionally, a set of keys on which to run the function. The method `FunctionService.onRegion()` directs a data-dependent function to execute on a specific region.
+
+See the `org.apache.geode.cache.execute.FunctionService` Java API documentation for more details.
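+
+The following fragment sketches these entry points. It assumes `pool` is a configured client `Pool`, `clientCache` is a connected `ClientCache`, and `exampleRegion` is a region hosting the target data:
+
+``` pre
+// Data-independent, peer-to-peer: all members, or members of specific groups
+Execution onAllMembers = FunctionService.onMembers();
+Execution onGroupMembers = FunctionService.onMembers("group1", "group2");
+
+// Data-independent, client/server: servers reached through a pool, or
+// through the client cache's default pool
+Execution onOneServer = FunctionService.onServer(pool);
+Execution onCacheServers = FunctionService.onServers(clientCache);
+
+// Data-dependent: routed to the members hosting the region's data,
+// optionally narrowed with a key filter at execution time
+Execution onRegionData = FunctionService.onRegion(exampleRegion);
+```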
+
+## <a id="how_function_execution_works__section_E0C4B7D2E4414F099788A5A441FF0E03" class="no-quick-link"></a>How Functions Are Executed
+
+The following things occur when executing a function:
+
+1.  When you call the `execute` method on the `Execution` object, Geode invokes the function on all members where it needs to run. The locations are determined by the `FunctionService` `on*` method calls, region configuration, and any filters.
+2.  If the function returns results, each result is passed to the `addResult` method of a `ResultCollector` object as it arrives.
+3.  The originating member collects results using `ResultCollector.getResult`.
+
+## <a id="how_function_execution_works__section_14FF9932C7134C5584A14246BB4D4FF6" class="no-quick-link"></a>Highly Available Functions
+
+Generally, function execution errors are returned to the calling application. You can code for high availability for `onRegion` functions that return a result, so Geode automatically retries a function if it does not execute successfully. You must code and configure the function to be highly available, and the calling application must invoke the function using the results collector `getResult` method.
+
+When a failure (such as an execution error or member crash while executing) occurs, the system responds by:
+
+1.  Waiting for all calls to return
+2.  Setting a boolean indicating a re-execution
+3.  Calling the result collector's `clearResults` method
+4.  Executing the function
+
+For client regions, the system retries the execution according to the `org.apache.geode.cache.client.Pool` `retryAttempts` setting. If every attempt fails, the final exception is returned to the caller of the `getResult` method.
+
+For member calls, the system retries until either it succeeds or no data remains in the system for the function to operate on.
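+
+A sketch of both pieces, using the placeholder names `exampleRegion` and `function`: the function itself must report that it is highly available and returns a result, and the caller must retrieve results through `getResult`.
+
+``` pre
+// In the Function implementation, declare the function to be highly
+// available and result-returning; both are required for retries:
+//     public boolean isHA() { return true; }
+//     public boolean hasResult() { return true; }
+
+// In the calling application, invoke the function and call getResult;
+// for client regions, the number of retries is governed by the pool's
+// retry-attempts setting.
+ResultCollector rc = FunctionService.onRegion(exampleRegion)
+    .execute(function);
+List result = (List) rc.getResult();
+```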
+
+## <a id="how_function_execution_works__section_A0FD54B73E9A453AA38FC4A4D5282351" class="no-quick-link"></a>Function Execution Scenarios
+
+[Server-distributed System](#how_function_execution_works__fig_server_distributed_system) shows the sequence of events for a data-independent function invoked from a client on all available servers.
+
+<a id="how_function_execution_works__fig_server_distributed_system"></a>
+
+<span class="figtitleprefix">Figure: </span>Server-distributed System
+
+<img src="../../images/FuncExecOnServers.png" alt="A diagram showing the sequence of events for a data-independent function invoked from a client on all available servers" id="how_function_execution_works__image_993D1FD7705E40EA801CF0656C4E91E5" class="image" />
+
+The client contacts a locator to obtain host and port identifiers for each server in the distributed system and issues calls to each server. As the instigator of the calls, the client also receives the call results.
+
+[Peer-to-peer Distributed System](#how_function_execution_works__fig_peer_distributed_system) shows the sequence of events for a data-independent function executed against members in a peer-to-peer distributed system.
+
+<a id="how_function_execution_works__fig_peer_distributed_system"></a>
+
+<span class="figtitleprefix">Figure: </span>Peer-to-peer Distributed System
+
+<img src="../../images/FuncExecOnMembers.png" alt="The sequence of events for a data-independent function executed against members in a peer-to-peer distributed system." id="how_function_execution_works__image_041832B370AA4241980B8C2632DD1DC8" class="image" />
+
+You can think of `onMembers()` as the peer-to-peer counterpart of a client-server call to `onServers()`. Because it is called from a peer of other members in the distributed system, an `onMembers()` function invocation has access to detailed metadata and does not require the services of a locator. The caller invokes the function on itself, if appropriate, as well as other members in the distributed system and collects the results of all of the function executions.
+
+[Data-dependent Function on a Region](#how_function_execution_works__fig_data_dependent_function_region) shows a data-dependent function run on a region.
+
+<a id="how_function_execution_works__fig_data_dependent_function_region"></a>
+
+<span class="figtitleprefix">Figure: </span>Data-dependent Function on a Region
+
+<img src="../../images/FuncExecOnRegionNoMetadata.png" alt="The path followed when the client lacks detailed metadata regarding target locations" id="how_function_execution_works__image_68742923936F4EEC8E50819F5CEECBCC" class="image" />
+
+An `onRegion()` call requires more detailed metadata than a locator provides in its host:port identifier. This diagram shows the path followed when the client lacks detailed metadata regarding target locations, as on the first call or when previously obtained metadata is no longer up to date.
+
+The first time a client invokes a function to be executed on a particular region of a distributed system, the client's knowledge of target locations is limited to the host and port information provided by the locator. Given only this limited information, the client sends its execution request to whichever server is next in line to be called according to the pool allocation algorithm. Because it is a participant in the distributed system, that server has access to detailed metadata and can dispatch the function call to the appropriate target locations. When the server returns results to the client, it sets a flag indicating whether a request to a different server would have provided a more direct path to the intended target. To improve efficiency, the client requests a copy of the metadata. With additional details regarding the bucket layout for the region, the client can act as its own dispatcher on subsequent calls and identify multiple targets for itself, eliminating at least one hop.
+
+After it has obtained current metadata, the client can act as its own dispatcher on subsequent calls, identifying multiple targets for itself and eliminating one hop, as shown in [Data-dependent function after obtaining current metadata](#how_function_execution_works__fig_data_dependent_function_obtaining_current_metadata).
+
+<a id="how_function_execution_works__fig_data_dependent_function_obtaining_current_metadata"></a>
+
+<span class="figtitleprefix">Figure: </span>Data-dependent function after obtaining current metadata
+
+<img src="../../images/FuncExecOnRegionWithMetadata.png" alt="A diagram showing the client acting as its own dispatcher after having obtained current metadata." class="image" />
+
+[Data-dependent Function on a Region with Keys](#how_function_execution_works__fig_data_dependent_function_region_keys) shows the same data-dependent function with the added specification of a set of keys on which to run.
+
+<a id="how_function_execution_works__fig_data_dependent_function_region_keys"></a>
+
+<span class="figtitleprefix">Figure: </span>Data-dependent Function on a Region with Keys
+
+<img src="../../images/FuncExecOnRegionWithFilter.png" alt="A data-dependent function on a region with specification of keys on which to run" id="how_function_execution_works__image_7FA8BE5D02F24CF8B49186C6FEB786BD" class="image" />
+
+Servers that do not hold any keys are left out of the function execution.
+
+[Peer-to-peer Data-dependent Function](#how_function_execution_works__fig_peer_data_dependent_function) shows a peer-to-peer data-dependent call.
+
+<a id="how_function_execution_works__fig_peer_data_dependent_function"></a>
+
+<span class="figtitleprefix">Figure: </span>Peer-to-peer Data-dependent Function
+
+<img src="../../images/FuncExecOnRegionPeersWithFilter.png" alt="A data-dependent function where the caller is not an external client" id="how_function_execution_works__image_9B8E914BA80E4BBA99856E9603A9BDA0" class="image" />
+
+The caller is a member of the distributed system, not an external client, so the function runs in the caller's distributed system. Note the similarities between this diagram and the preceding figure ([Data-dependent Function on a Region with Keys](#how_function_execution_works__fig_data_dependent_function_region_keys)), which shows a client-server model where the client has up-to-date metadata regarding target locations within the distributed system.
+
+[Client-server system with Up-to-date Target Metadata](#how_function_execution_works__fig_client_server_system_target_metadata) demonstrates a sequence of steps in a call to a highly available function in a client-server system in which the client has up-to-date metadata regarding target locations.
+
+<a id="how_function_execution_works__fig_client_server_system_target_metadata"></a>
+
+<span class="figtitleprefix">Figure: </span>Client-server system with Up-to-date Target Metadata
+
+<img src="../../images/FuncExecOnRegionHAWithFilter.png" alt="A sequence of steps in a call to a highly available function in a client-server system in which the client has up-to-date metadata regarding target locations" id="how_function_execution_works__image_05E94BB0EBF349FF8822158F2001F313" class="image" />
+
+In this example, three primary keys (X, Y, Z) and their secondary copies (X', Y', Z') are distributed among three servers. Because `optimizeForWrite` is `true`, the system first attempts to invoke the function where the primary keys reside: Server 1 and Server 2. Suppose, however, that Server 2 is off-line for some reason, so the call targeted for key Y fails. Because `isHA` is set to `true`, the call is retried on Server 1 (which succeeded the first time, so likely will do so again) and Server 3, where key Y' resides. This time, the function call returns successfully. Calls to highly available functions retry until they obtain a successful result or they reach a retry limit.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/management_all_region_types/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/management_all_region_types/chapter_overview.html.md.erb b/geode-docs/developing/management_all_region_types/chapter_overview.html.md.erb
new file mode 100644
index 0000000..dc44e87
--- /dev/null
+++ b/geode-docs/developing/management_all_region_types/chapter_overview.html.md.erb
@@ -0,0 +1,23 @@
+---
+title:  General Region Data Management
+---
+
+For all regions, you have options to control memory use, back up your data to disk, and keep stale data out of your cache.
+
+-   **[Persistence and Overflow](../../developing/storing_data_on_disk/chapter_overview.html)**
+
+    You can persist data on disk for backup purposes and overflow it to disk to free up memory without completely removing the data from your cache.
+
+-   **[Eviction](../../developing/eviction/chapter_overview.html)**
+
+    Use eviction to control data region size.
+
+-   **[Expiration](../../developing/expiration/chapter_overview.html)**
+
+    Use expiration to keep data current by removing stale entries. You can also use it to remove entries you are not using so your region uses less space. Expired entries are reloaded the next time they are requested.
+
+-   **[Keeping the Cache in Sync with Outside Data Sources](../../developing/outside_data_sources/sync_outside_data.html)**
+
+    Keep your distributed cache in sync with an outside data source by programming and installing application plug-ins for your region.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/outside_data_sources/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/outside_data_sources/chapter_overview.html.md.erb b/geode-docs/developing/outside_data_sources/chapter_overview.html.md.erb
new file mode 100644
index 0000000..fc4f5ac
--- /dev/null
+++ b/geode-docs/developing/outside_data_sources/chapter_overview.html.md.erb
@@ -0,0 +1,17 @@
+---
+title:  Overview of Outside Data Sources
+---
+
+Apache Geode has application plug-ins to read data into the cache and write it out.
+
+<a id="outside_data_sources__section_100B707BB812430E8D9CFDE3BE4698D1"></a>
+The application plug-ins:
+
+1.  Load data on cache misses using an implementation of `org.apache.geode.cache.CacheLoader`. The `CacheLoader.load` method is called when a `get` operation cannot find the value in the cache. The value returned from the loader is put into the cache and returned to the `get` operation. You might use this in conjunction with data expiration to remove stale data, and with your other data-loading applications, which might be triggered by events in the outside data source. See [Configure Data Expiration](../expiration/configuring_data_expiration.html).
+2.  Write data out to the data source using the cache event handlers, `CacheWriter` and `CacheListener`. For implementation details, see [Implementing Cache Event Handlers](../events/implementing_cache_event_handlers.html).
+    -   `CacheWriter` is run synchronously. Before performing any operation on a region entry, if any cache writers are defined for the region in the distributed system, the system invokes the most convenient writer. In partitioned and distributed regions, cache writers are usually defined in only a subset of the caches holding the region, often in only one cache. The cache writer can abort the region entry operation. (A write-through sketch follows this list.)
+    -   `CacheListener` is run synchronously after the cache is updated. This listener works only on local cache events, so install your listener in every cache where you want it to handle events. You can install multiple cache listeners in any of your caches.
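+
+The following is a hypothetical write-through sketch of the `CacheWriter` case. It extends the `CacheWriterAdapter` convenience class so that only the callbacks of interest need to be overridden; the persistence call itself is a placeholder for your own JDBC or other data-source code:
+
+``` pre
+import org.apache.geode.cache.CacheWriterException;
+import org.apache.geode.cache.EntryEvent;
+import org.apache.geode.cache.util.CacheWriterAdapter;
+
+public class DatabaseWriter extends CacheWriterAdapter<String, Object> {
+
+  public void beforeCreate(EntryEvent<String, Object> event) throws CacheWriterException {
+    // Persist the pending entry before Geode applies it to the region.
+    writeToDatabase(event.getKey(), event.getNewValue());
+  }
+
+  public void beforeUpdate(EntryEvent<String, Object> event) throws CacheWriterException {
+    writeToDatabase(event.getKey(), event.getNewValue());
+  }
+
+  private void writeToDatabase(String key, Object value) throws CacheWriterException {
+    // Placeholder for the application's own persistence logic.
+    // Throwing a CacheWriterException here aborts the region operation.
+  }
+}
+```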
+
+In addition to using application plug-ins, you can also configure external JNDI database sources in your cache.xml and use these data sources in transactions. See [Configuring Database Connections Using JNDI](../transactions/configuring_db_connections_using_JNDI.html#topic_A5E3A67C808D48C08E1F0DC167C5C494) for more information.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/outside_data_sources/how_data_loaders_work.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/outside_data_sources/how_data_loaders_work.html.md.erb b/geode-docs/developing/outside_data_sources/how_data_loaders_work.html.md.erb
new file mode 100644
index 0000000..dd39ec8
--- /dev/null
+++ b/geode-docs/developing/outside_data_sources/how_data_loaders_work.html.md.erb
@@ -0,0 +1,35 @@
+---
+title:  How Data Loaders Work
+---
+
+By default, a region has no data loader defined. Plug an application-defined loader into any region by setting the region attribute `cache-loader` on the members that host data for the region.
+
+<a id="how_data_loaders_work__section_1E600469D223498DB49446434CE9B0B4"></a>
+The loader is called on cache misses during get operations, and it populates the cache with the new entry value in addition to returning the value to the calling thread.
+
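+For example, a minimal sketch assuming `exampleRegion` is configured with a cache loader such as the `SimpleCacheLoader` shown in [Implement a Data Loader](implementing_data_loaders.html):
+
+``` pre
+// "key7" is not yet in the cache, so this get() call invokes the region's
+// cache loader; the loaded value is put into the region and returned here.
+Object value = exampleRegion.get("key7");
+```
+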
+A loader can be configured to load data into the Geode cache from an outside data store. To do the reverse operation, writing data from the Geode cache to an outside data store, use a cache writer event handler. See [Implementing Cache Event Handlers](../events/implementing_cache_event_handlers.html).
+
+How to install your cache loader depends on the type of region.
+
+## <a id="how_data_loaders_work__section_5CD65D559F1A490DAB5ED9326860FE8D" class="no-quick-link"></a>Data Loading in Partitioned Regions
+
+Because of the huge amounts of data they can handle, partitioned regions support partitioned loading. Each cache loader loads only the data entries in the member where the loader is defined. If data redundancy is configured, data is loaded only if the member holds the primary copy. So you must install a cache loader in every member where the partition attribute `local-max-memory` is not zero.
+
+If you depend on a JDBC connection, every data store must have a connection to the data source, as shown in the following figure. Here the three members require three connections. See [Configuring Database Connections Using JNDI](../transactions/configuring_db_connections_using_JNDI.html#topic_A5E3A67C808D48C08E1F0DC167C5C494) for information on how to configure data sources.
+
+**Note:**
+Partitioned regions generally require more JDBC connections than distributed regions.
+
+<img src="../../images_svg/cache_data_loader.svg" id="how_data_loaders_work__image_CD7CE9BD22ED4782AB6B296187AB983A" class="image" />
+
+## <a id="how_data_loaders_work__section_6A2CE777CE9E4BD682B881F6986CF66C" class="no-quick-link"></a>Data Loading in Distributed Regions
+
+In a non-partitioned distributed region, a cache loader defined in one member is available to all members that have the region defined. Loaders are usually defined in just a subset of the caches holding the region. When a loader is needed, all available loaders for the region are invoked, starting with the most convenient loader, until the data is loaded or all loaders have been tried.
+
+In the following figure, these members of one distributed system can be running on different machines. Loading for the distributed region is performed from M1.
+
+<img src="../../images_svg/cache_data_loader_2.svg" id="how_data_loaders_work__image_3C39A50218D64EF28A5448EB01A4C6EC" class="image" />
+
+## <a id="how_data_loaders_work__section_BE33D9AB27104D1BB8AC8BFCE11A063E" class="no-quick-link"></a>Data Loading in Local Regions
+
+For local regions, the cache loader is available only in the member where it is defined. If a loader is defined, it is called whenever a value is not found in the local cache.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/outside_data_sources/implementing_data_loaders.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/outside_data_sources/implementing_data_loaders.html.md.erb b/geode-docs/developing/outside_data_sources/implementing_data_loaders.html.md.erb
new file mode 100644
index 0000000..2b65b44
--- /dev/null
+++ b/geode-docs/developing/outside_data_sources/implementing_data_loaders.html.md.erb
@@ -0,0 +1,71 @@
+---
+title:  Implement a Data Loader
+---
+
+To program a data loader and configure your region to use it:
+
+1. Program your loader.
+
+2. Install your loader in each member region where you need it.
+
+## <a id="implementing_data_loaders__section_88076AF5EC184FE88AAF4C806A0CA9DF" class="no-quick-link"></a>Program your loader
+To program your loader:
+
+1.  Implement `org.apache.geode.cache.CacheLoader`.
+
+2.  If you want to declare the loader in your `cache.xml`, implement the `org.apache.geode.cache.Declarable` interface as well.
+
+3.  Program the single `CacheLoader` `load` method to do whatever your application requires for retrieving the value from outside the cache. If you need to run `Region` API calls from your loader, spawn separate threads for them. Do not make direct calls to `Region` methods from your load method implementation as it could cause the cache loader to block, hurting the performance of the distributed system. For example:
+
+    ``` pre
+    public class SimpleCacheLoader implements CacheLoader, Declarable {
+        public Object load(LoaderHelper helper) {
+            String key = (String) helper.getKey();
+            System.out.println(" Loader called to retrieve value for " + key);
+            // Create a value using the suffix number of the key (key1, key2, etc.)
+            return "LoadedValue" + (Integer.parseInt(key.substring(3)));
+        }
+        public void close() {
+            // do nothing
+        }
+        public void init(Properties props) {
+            // do nothing
+        }
+    }
+    ```
+
+## Install your loader in each member region
+To install your loader in each member region where you need it:
+
+1. In a partitioned region, install the cache loader in every data store for the region (`partition-attributes` `local-max-memory` &gt; 0).
+
+2. In a distributed region, install the loader in the members where it makes sense to do so. Cache loaders are usually defined in only a subset of the members holding the region. You might, for example, assign the job of loading from a database to one or two members for a region hosted by many more members. This can be done to reduce the number of connections when the outside source is a database.
+
+    Use one of these methods to install the loader:
+    -   XML:
+
+        ``` pre
+        <region-attributes>
+            <cache-loader>
+                <class-name>myCacheLoader</class-name>
+            </cache-loader>
+        </region-attributes>
+        ```
+    -   XML with parameters (the declared parameter is passed to the loader's `init` method, as sketched after this list):
+
+        ``` pre
+        <cache-loader>
+            <class-name>com.company.data.DatabaseLoader</class-name>
+            <parameter name="URL">
+                <string>jdbc:cloudscape:rmi:MyData</string>
+            </parameter>
+        </cache-loader>
+        ```
+    -   Java:
+
+        ``` pre
+        RegionFactory<String,Object> rf = cache.createRegionFactory(REPLICATE);
+        rf.setCacheLoader(new QuoteLoader());
+        quotes = rf.create("NASDAQ Quotes");
+        ```
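+
+For the XML-with-parameters case above, the declared `URL` parameter is delivered to the loader through the `Declarable` `init` method. A hypothetical sketch of how `com.company.data.DatabaseLoader` might pick it up (the field name and connection logic are illustrative only):
+
+``` pre
+public class DatabaseLoader implements CacheLoader, Declarable {
+    private String url;
+
+    public void init(Properties props) {
+        // Receives the <parameter> values declared in cache.xml
+        this.url = props.getProperty("URL");
+    }
+
+    public Object load(LoaderHelper helper) {
+        // Use this.url to connect to the database and fetch the value
+        // for helper.getKey(); returning null means no value was found.
+        return null;
+    }
+
+    public void close() {
+        // Release any database resources here
+    }
+}
+```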
+
+**Note:**
+You can also configure regions using the gfsh command-line interface; however, you cannot configure a `cache-loader` using gfsh. See [Region Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_EF03119A40EE492984F3B6248596E1DD).
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/outside_data_sources/sync_outside_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/outside_data_sources/sync_outside_data.html.md.erb b/geode-docs/developing/outside_data_sources/sync_outside_data.html.md.erb
new file mode 100644
index 0000000..54e4f48
--- /dev/null
+++ b/geode-docs/developing/outside_data_sources/sync_outside_data.html.md.erb
@@ -0,0 +1,19 @@
+---
+title:  Keeping the Cache in Sync with Outside Data Sources
+---
+
+Keep your distributed cache in sync with an outside data source by programming and installing application plug-ins for your region.
+
+-   **[Overview of Outside Data Sources](../../developing/outside_data_sources/chapter_overview.html)**
+
+    Apache Geode has application plug-ins to read data into the cache and write it out.
+
+-   **[How Data Loaders Work](../../developing/outside_data_sources/how_data_loaders_work.html)**
+
+    By default, a region has no data loader defined. Plug an application-defined loader into any region by setting the region attribute `cache-loader` on the members that host data for the region.
+
+-   **[Implement a Data Loader](../../developing/outside_data_sources/implementing_data_loaders.html)**
+
+    Program a data loader and configure your region to use it.
+
+