Posted to commits@geode.apache.org by db...@apache.org on 2016/10/06 20:02:24 UTC

[43/51] [partial] incubator-geode git commit: Set aside hibernate cache docs until the corresponding code is mainstreamed.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/eviction/how_eviction_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/eviction/how_eviction_works.html.md.erb b/geode-docs/developing/eviction/how_eviction_works.html.md.erb
deleted file mode 100644
index ee702ea..0000000
--- a/geode-docs/developing/eviction/how_eviction_works.html.md.erb
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title:  How Eviction Works
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-Eviction settings cause Apache Geode to keep a region's resource use at or below a specified level by removing least recently used (LRU) entries to make way for new entries.
-
-<a id="how_eviction_works__section_C3409270DD794822B15E819E2276B21A"></a>
-You configure eviction based on entry count, percentage of available heap, or absolute memory usage. You also configure what to do when entries must be evicted: destroy them or overflow them to disk. See [Persistence and Overflow](../storing_data_on_disk/chapter_overview.html).
-
-When Geode determines that adding or updating an entry would take the region over the specified level, it overflows or removes enough older entries to make room. For entry count eviction, this means a one-to-one trade of an older entry for the newer one. For the memory settings, the number of older entries that need to be removed to make space depends entirely on the relative sizes of the older and newer entries.
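-
-As an illustration, here is a minimal sketch of configuring entry-count eviction through the Java API (the `cache` variable, the region name, and the 1000-entry limit are assumptions for this example; overflow to disk also requires a configured disk store):
-
-``` pre
-// Sketch: overflow LRU entries to disk once the region holds 1000 entries
-RegionFactory<String, String> regionFactory =
-    cache.createRegionFactory(RegionShortcut.PARTITION);
-regionFactory.setEvictionAttributes(
-    EvictionAttributes.createLRUEntryAttributes(1000, EvictionAction.OVERFLOW_TO_DISK));
-Region<String, String> region = regionFactory.create("exampleRegion");
-```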
-
-## <a id="how_eviction_works__section_69E2AA453EDE4E088D1C3332C071AFE1" class="no-quick-link"></a>Eviction in Partitioned Regions
-
-In partitioned regions, Geode removes the oldest entry it can find *in the bucket where the new entry operation is being performed*. Geode maintains LRU entry information on a bucket-by-bucket basis, because the cost of maintaining LRU information across the entire partitioned region would be too great a performance hit.
-
--   For memory and entry count eviction, LRU eviction is done in the bucket where the new entry operation is being performed until the overall size of the combined buckets in the member has dropped enough to perform the operation without going over the limit.
--   For heap eviction, each partitioned region bucket is treated as if it were a separate region, with each eviction action only considering the LRU for the bucket, and not the partitioned region as a whole.
-
-Because of this, eviction in partitioned regions may leave older entries for the region in other buckets in the local data store, as well as in other stores in the distributed system. It may also evict an entry from a secondary copy while leaving it in the primary copy, or vice versa.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/expiration/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/expiration/chapter_overview.html.md.erb b/geode-docs/developing/expiration/chapter_overview.html.md.erb
deleted file mode 100644
index 546af32..0000000
--- a/geode-docs/developing/expiration/chapter_overview.html.md.erb
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title:  Expiration
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-Use expiration to keep data current by removing stale entries. You can also use it to remove entries you are not using so your region uses less space. Expired entries are reloaded the next time they are requested.
-
--   **[How Expiration Works](../../developing/expiration/how_expiration_works.html)**
-
-    Expiration removes old entries and entries that you are not using. You can destroy or invalidate entries.
-
--   **[Configure Data Expiration](../../developing/expiration/configuring_data_expiration.html)**
-
-    Configure the type of expiration and the expiration action to use.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/expiration/configuring_data_expiration.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/expiration/configuring_data_expiration.html.md.erb b/geode-docs/developing/expiration/configuring_data_expiration.html.md.erb
deleted file mode 100644
index 74c1e48..0000000
--- a/geode-docs/developing/expiration/configuring_data_expiration.html.md.erb
+++ /dev/null
@@ -1,83 +0,0 @@
----
-title:  Configure Data Expiration
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-Configure the type of expiration and the expiration action to use.
-
-<a id="configuring_data_expiration__section_ADB8302125624E01A808EA5E4FF79A5C"></a>
-
--   Set the region's `statistics-enabled` attribute to true.
-
-    The statistics used for expiration are available directly to the application through the `CacheStatistics` object returned by the `Region` and `Region.Entry` `getStatistics` methods. The `CacheStatistics` object also provides a method for resetting the statistics counters.
-
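-    For example, a sketch of reading and resetting these statistics through the API (assuming a `Region` named `region` with statistics enabled):
-
-    ``` pre
-    // Sketch: inspect and reset the statistics used for expiration
-    CacheStatistics stats = region.getStatistics();
-    long lastAccessed = stats.getLastAccessedTime();
-    long lastModified = stats.getLastModifiedTime();
-    stats.resetCounts(); // reset the statistics counters
-    ```
-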
--   Set the expiration attributes by expiration type, with the max times and expiration actions. See the region attributes listings for `entry-time-to-live`, `entry-idle-time`, `region-time-to-live`, and `region-idle-time` in [&lt;region-attributes&gt;](../../reference/topics/cache_xml.html#region-attributes).
-
-    For partitioned regions, to ensure reliable read behavior, use the `time-to-live` attributes, not the `idle-time` attributes. In addition, you cannot use `local-destroy` or `local-invalidate` expiration actions in partitioned regions.
-
-    Replicated regions example:
-
-    ``` pre
-    // Setting standard expiration on an entry
-    <region-attributes statistics-enabled="true"> 
-      <entry-idle-time> 
-        <expiration-attributes timeout="60" action="local-invalidate"/> 
-      </entry-idle-time> 
-    </region-attributes> 
-    ```
-
--   Override the region-wide settings for specific entries, if required by your application. To do this:
-    1.  Program a custom expiration class that implements `org.apache.geode.cache.CustomExpiry`. Example:
-
-        ``` pre
-        // Custom expiration class
-        // Use the key for a region entry to set entry-specific expiration timeouts of
-        //   10 seconds for even-numbered keys, with a DESTROY action on the expired entries.
-        //   Leave the default region setting for all odd-numbered keys.
-        import java.util.Properties;
-
-        import org.apache.geode.cache.CustomExpiry;
-        import org.apache.geode.cache.Declarable;
-        import org.apache.geode.cache.ExpirationAction;
-        import org.apache.geode.cache.ExpirationAttributes;
-        import org.apache.geode.cache.Region.Entry;
-
-        public class MyClass implements CustomExpiry, Declarable
-        {
-            private static final ExpirationAttributes CUSTOM_EXPIRY =
-                    new ExpirationAttributes(10, ExpirationAction.DESTROY);
-
-            public ExpirationAttributes getExpiry(Entry entry)
-            {
-                int key = (Integer)entry.getKey();
-                return key % 2 == 0 ? CUSTOM_EXPIRY : null;
-            }
-
-            public void init(Properties props) {} // required by Declarable
-
-            public void close() {} // required by CacheCallback
-        }
-        ```
-    2.  Define the class inside the expiration attributes settings for the region. Example:
-
-
-        ``` pre
-        <!-- Set default entry idle timeout expiration for the region --> 
-        <!-- Pass entries to custom expiry class for expiration overrides -->
-        <region-attributes statistics-enabled="true"> 
-            <entry-idle-time> 
-                <expiration-attributes timeout="60" action="local-invalidate"> 
-                    <custom-expiry> 
-                        <class-name>com.company.mypackage.MyClass</class-name> 
-                    </custom-expiry> 
-                </expiration-attributes> 
-            </entry-idle-time> 
-        </region-attributes>
-        ```
-
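-The same configuration can be sketched through the Java API (assuming a `RegionFactory` named `regionFactory`; `MyClass` is the custom expiry class from the example above):
-
-``` pre
-// Sketch: default idle-time expiration plus a custom-expiry override
-regionFactory.setStatisticsEnabled(true);
-regionFactory.setEntryIdleTimeout(
-    new ExpirationAttributes(60, ExpirationAction.LOCAL_INVALIDATE));
-regionFactory.setCustomEntryIdleTimeout(new MyClass());
-```
-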
-You can also configure regions using the `gfsh` command-line interface; however, you cannot configure `custom-expiry` using gfsh. See [Region Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_EF03119A40EE492984F3B6248596E1DD).
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/expiration/how_expiration_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/expiration/how_expiration_works.html.md.erb b/geode-docs/developing/expiration/how_expiration_works.html.md.erb
deleted file mode 100644
index 4ec5015..0000000
--- a/geode-docs/developing/expiration/how_expiration_works.html.md.erb
+++ /dev/null
@@ -1,70 +0,0 @@
----
-title:  How Expiration Works
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-Expiration removes old entries and entries that you are not using. You can destroy or invalidate entries.
-
-<a id="how_expiration_works__section_94FDBB821CDE49C48A0EFA6ED4DE194F"></a>
-Expiration activities in distributed regions can be distributed or local. Thus, one cache could control expiration for a number of caches in the system.
-
-This figure shows two basic expiration settings for a producer/consumer system. The producer member (on the right) populates the region from a database and the data is automatically distributed throughout the system. The data is valid only for one hour, so the producer performs a distributed destroy on entries that are an hour old. The other applications are consumers. The consumers free up space in their caches by removing their local copies of the entries for which there is no local interest (idle-time expiration). Requests for entries that have expired on the consumers will be forwarded to the producer.
-
-<img src="../../images_svg/expiration.svg" id="how_expiration_works__image_3D674825D1434830A8242D77CC89289F" class="image" />
-
-## <a id="how_expiration_works__section_B6C55A610F4243ED8F1986E8A98858CF" class="no-quick-link"></a>Expiration Types
-
-Apache Geode uses the following expiration types:
-
--   **Time to live (TTL)**. The amount of time, in seconds, the object may remain in the cache after the last creation or update. For entries, the counter is set to zero for create and put operations. Region counters are reset when the region is created and when an entry has its counter reset. The TTL expiration attributes are `region-time-to-live` and `entry-time-to-live`.
--   **Idle timeout**. The amount of time, in seconds, the object may remain in the cache after the last access. The idle timeout counter for an object is reset any time its TTL counter is reset. In addition, an entry's idle timeout counter is reset any time the entry is accessed through a get operation or a `netSearch`. The idle timeout counter for a region is reset whenever the idle timeout is reset for one of its entries. Idle timeout expiration attributes are: `region-idle-time` and `entry-idle-time`.
-
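-As an illustration, here is a sketch of setting both expiration types programmatically (the `RegionFactory` named `regionFactory` and the timeout values are assumptions for this example):
-
-``` pre
-// Sketch: destroy entries one hour after the last update and
-// invalidate them after ten idle minutes
-regionFactory.setStatisticsEnabled(true); // expiration requires statistics
-regionFactory.setEntryTimeToLive(
-    new ExpirationAttributes(3600, ExpirationAction.DESTROY));
-regionFactory.setEntryIdleTimeout(
-    new ExpirationAttributes(600, ExpirationAction.INVALIDATE));
-```
-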
-## <a id="how_expiration_works__section_BA995343EF584104B9853CFE4CAD88AD" class="no-quick-link"></a>Expiration Actions
-
-Apache Geode uses the following expiration actions:
-
--   destroy
--   local destroy
--   invalidate (default)
--   local invalidate
-
-## <a id="how_expiration_works__section_AB4AB9E57D434159AA6E9B402E5E599D" class="no-quick-link"></a>Partitioned Regions and Entry Expiration
-
-For overall region performance, idle time expiration in partitioned regions may expire some entries sooner than expected. To ensure reliable read behavior across the partitioned region, we recommend that you use `entry-time-to-live` for entry expiration in partitioned regions instead of `entry-idle-time`.
-
-Expiration in partitioned regions is executed in the primary copy, based on the primary's last accessed and last updated statistics.
-
--   Entry updates are always done in the primary copy, resetting the primary copy's last updated and last accessed statistics.
--   Entry retrieval uses the most convenient available copy of the data, which may be one of the secondary copies. This provides the best performance at the cost of possibly not updating the primary copy's statistic for last accessed time.
-
-When the primary expires entries, it does not request last accessed statistics from the secondaries, as the performance hit would be too great. It expires entries based solely on the last time the entries were accessed in the primary copy.
-
-You cannot use `local-destroy` or `local-invalidate` expiration actions in a partitioned region.
-
-## <a id="how_expiration_works__section_expiration_settings_and_netSearch" class="no-quick-link"></a>Interaction Between Expiration Settings and `netSearch`
-
-Before `netSearch` retrieves an entry value from a remote cache, it validates the *remote* entry's statistics against the *local* region's expiration settings. Entries that would have already expired in the local cache are passed over. Once validated, the entry is brought into the local cache and the local access and update statistics are updated for the local copy. The last accessed time is reset and the last modified time is updated to the time in the remote cache, with corrections made for system clock differences. Thus the local entry is assigned the true last time the entry was modified in the distributed system. The `netSearch` operation has no effect on the expiration counters in remote caches.
-
-The `netSearch` method operates only on distributed regions with a data policy of empty, normal, or preloaded.
-
-## Configuring the Number of Threads for Expiration
-
-You can use the `gemfire.EXPIRY_THREADS` system property to increase the number of threads that handle expiration. By default, a single thread handles expiration, and it can become overloaded when entries expire faster than it can remove them, which can result in an `OutOfMemoryError`. Set the `gemfire.EXPIRY_THREADS` system property to the desired number of threads when starting the cache server.
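-
-For example, a sketch of passing the property when starting a server (the server name is an assumption for this example; `--J` passes JVM options through to the server process):
-
-``` pre
-gfsh>start server --name=server1 --J=-Dgemfire.EXPIRY_THREADS=4
-```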
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/function_exec/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/function_exec/chapter_overview.html.md.erb b/geode-docs/developing/function_exec/chapter_overview.html.md.erb
deleted file mode 100644
index c85e9c8..0000000
--- a/geode-docs/developing/function_exec/chapter_overview.html.md.erb
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title:  Function Execution
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-A function is a body of code that resides on a server and that an application can invoke from a client or from another server without the need to send the function code itself. The caller can direct a data-dependent function to operate on a particular dataset, or can direct a data-independent function to operate on a particular server, member, or member group.
-
-<a id="function_exec__section_CBD5B04ACC554029B5C710CE8E244FEA">The function execution service provides solutions for a variety of use cases, including:</a>
-
--   An application needs to perform an operation on the data associated with a key. A registered server-side function can retrieve the data, operate on it, and put it back, with all processing performed locally to the server.
--   An application needs to initialize some of its components once on each server, which might be used later by executed functions.
--   A third-party service, such as a messaging service, requires initialization and startup.
--   Any arbitrary aggregation operation that requires iteration over local data sets can be done more efficiently through a single call to the cache server.
--   An external resource needs provisioning that can be done by executing a function on a server.
-
--   **[How Function Execution Works](how_function_execution_works.html)**
-
--   **[Executing a Function in Apache Geode](function_execution.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/function_exec/function_execution.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/function_exec/function_execution.html.md.erb b/geode-docs/developing/function_exec/function_execution.html.md.erb
deleted file mode 100644
index 4d5e0b8..0000000
--- a/geode-docs/developing/function_exec/function_execution.html.md.erb
+++ /dev/null
@@ -1,254 +0,0 @@
----
-title:  Executing a Function in Apache Geode
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-<a id="function_execution__section_BE483D79B81C49EE9855F506ED5AB014"></a>
-This procedure assumes that you have already defined the members and regions where you want to run functions.
-
-Main tasks:
-
-1.  Write the function code.
-2.  Register the function on all servers where you want to execute the function. The easiest way to register a function is to use the `gfsh` `deploy` command to deploy the JAR file containing the function code. Deploying the JAR automatically registers the function for you. See [Register the Function Automatically by Deploying a JAR](function_execution.html#function_execution__section_164E27B88EC642BA8D2359B18517B624) for details. Alternatively, you can write the XML or application code to register the function. See [Register the Function Programmatically](function_execution.html#function_execution__section_1D1056F843044F368FB76F47061FCD50) for details.
-3.  Write the application code to run the function and, if the function returns results, to handle the results.
-4.  If your function returns results and you need special results handling, code a custom `ResultsCollector` implementation and use it in your function execution.
-
-## <a id="function_execution__section_7D43B0C628D54F579D5C434D3DF69B3C" class="no-quick-link"></a>Write the Function Code
-
-To write the function code, you implement the `Function` interface or extend the `FunctionAdapter` class. Both are in the `org.apache.geode.cache.execute` package. The adapter class provides some default implementations for methods, which you can override.
-
-Code the methods you need for the function. These steps do not have to be done in this order.
-
-1.  Code `getId` to return a unique name for your function. You can use this name to access the function through the `FunctionService` API.
-2.  For high availability:
-    1.  Code `isHA` to return true to indicate to Geode that it can re-execute your function after one or more members fail.
-    2.  Code your function to return a result.
-    3.  Code `hasResult` to return true.
-
-3.  Code `hasResult` to return true if your function returns results to be processed and false if your function does not return any data (a "fire and forget" function). `FunctionAdapter` `hasResult` returns true by default.
-4.  If the function will be executed on a region, code `optimizeForWrite` to return false if your function only reads from the cache, and true if your function updates the cache. The method only works if, when you are running the function, the `Execution` object is obtained through a `FunctionService` `onRegion` call. `FunctionAdapter` `optimizeForWrite` returns false by default.
-5.  Code the `execute` method to perform the work of the function.
-    1.  Make `execute` thread safe to accommodate simultaneous invocations.
-    2.  For high availability, code `execute` to accommodate multiple identical calls to the function. Use the `RegionFunctionContext` `isPossibleDuplicate` to determine whether the call may be a high-availability re-execution. This boolean is set to true on execution failure and is false otherwise.
-        **Note:**
-        The `isPossibleDuplicate` boolean can be set following a failure from another member's execution of the function, so it only indicates that the execution might be a repeat run in the current member.
-
-    3.  Use the function context to get information about the execution and the data:
-        -   The context holds the function ID, the `ResultSender` object for passing results back to the originator, and function arguments provided by the member where the function originated.
-        -   The context provided to the function is the `FunctionContext`, which is automatically extended to `RegionFunctionContext` if you get the `Execution` object through a `FunctionService` `onRegion` call.
-        -   For data dependent functions, the `RegionFunctionContext` holds the `Region` object, the `Set` of key filters, and a boolean indicating multiple identical calls to the function, for high availability implementations.
-        -   For partitioned regions, the `PartitionRegionHelper` provides access to additional information and data for the region. For single regions, use `getLocalDataForContext`. For colocated regions, use `getLocalColocatedRegions`.
-            **Note:**
-            When you use `PartitionRegionHelper.getLocalDataForContext`, `putIfAbsent` may not return expected results if you are working on the local data set instead of the region.
-
-    4.  To propagate an error condition or exception back to the caller of the function, throw a `FunctionException` from the `execute` method. Geode transmits the exception back to the caller as if it had been thrown on the calling side. See the Java API documentation for [FunctionException](/releases/latest/javadoc/org/apache/geode/cache/execute/FunctionException.html) for more information.
-
-Example function code:
-
-``` pre
-package quickstart;
-
-import java.io.Serializable;
-import java.util.HashSet;
-import java.util.Iterator;
-import java.util.Set;
-
-import org.apache.geode.cache.execute.FunctionAdapter;
-import org.apache.geode.cache.execute.FunctionContext;
-import org.apache.geode.cache.execute.FunctionException;
-import org.apache.geode.cache.execute.RegionFunctionContext;
-import org.apache.geode.cache.partition.PartitionRegionHelper;
-
-public class MultiGetFunction extends FunctionAdapter {
-
-  public void execute(FunctionContext fc) {
-    if (!(fc instanceof RegionFunctionContext)) {
-      throw new FunctionException("This is a data aware function, and has "
-          + "to be called using FunctionService.onRegion.");
-    }
-    RegionFunctionContext context = (RegionFunctionContext)fc;
-    Set keys = context.getFilter();
-    Set keysTillSecondLast = new HashSet(); 
-    int setSize = keys.size();
-    Iterator keysIterator = keys.iterator();
-    for (int i = 0; i < setSize - 1; i++) {
-      keysTillSecondLast.add(keysIterator.next());
-    }
-    for (Object k : keysTillSecondLast) {
-      context.getResultSender().sendResult(
-          (Serializable)PartitionRegionHelper.getLocalDataForContext(context)
-              .get(k));
-    }
-    Object lastKey = keysIterator.next();
-    context.getResultSender().lastResult(
-        (Serializable)PartitionRegionHelper.getLocalDataForContext(context)
-            .get(lastKey));
-  }
-
-  public String getId() {
-    return getClass().getName();
-  }
-}
-```
-
-## <a id="function_execution__section_164E27B88EC642BA8D2359B18517B624" class="no-quick-link"></a>Register the Function Automatically by Deploying a JAR
-
-When you deploy a JAR file that contains a Function (in other words, contains a class that implements the Function interface), the Function will be automatically registered via the `FunctionService.registerFunction` method.
-
-To register a function by using `gfsh`:
-
-1.  Package your class files into a JAR file.
-2.  Start a `gfsh` prompt. If necessary, start a Locator and connect to the Geode distributed system where you want to run the function.
-3.  At the gfsh prompt, type the following command:
-
-    ``` pre
-    gfsh>deploy --jar=group1_functions.jar
-    ```
-
-    where group1\_functions.jar corresponds to the JAR file that you created in step 1.
-
-If another JAR file is deployed (either with the same JAR filename or another filename) with the same Function, the new implementation of the Function will be registered, overwriting the old one. If a JAR file is undeployed, any Functions that were auto-registered at the time of deployment will be unregistered. Since deploying a JAR file that has the same name multiple times results in the JAR being un-deployed and re-deployed, Functions in the JAR will be unregistered and re-registered each time this occurs. If a Function with the same ID is registered from multiple differently named JAR files, the Function will be unregistered if either of those JAR files is re-deployed or un-deployed.
-
-See [Deploying Application JARs to Apache Geode Members](../../configuring/cluster_config/deploying_application_jars.html#concept_4436C021FB934EC4A330D27BD026602C) for more details on deploying JAR files.
-
-## <a id="function_execution__section_1D1056F843044F368FB76F47061FCD50" class="no-quick-link"></a>Register the Function Programmatically
-
-This section applies to functions that are invoked using the `Execution.execute(String functionId)` signature. When this method is invoked, the calling application sends the function ID to all members where the `Function.execute` is to be run. Receiving members use the ID to look up the function in the local `FunctionService`. In order to do the lookup, all of the receiving members must have previously registered the function with the function service.
-
-The alternative to this is the `Execution.execute(Function function)` signature. When this method is invoked, the calling application serializes the instance of `Function` and sends it to all members where the `Function.execute` is to be run. Receiving members deserialize the `Function` instance, create a new local instance of it, and run execute from that. This option is not available for non-Java client invocation of functions on servers.
-
-Your Java servers must register functions that are invoked by non-Java clients. You may want to use registration in other cases to avoid the overhead of sending `Function` instances between members.
-
-Register your function using one of these methods:
-
--   XML:
-
-    ``` pre
-    <cache>
-        ...
-        <function-service>
-          <function>
-            <class-name>com.bigFatCompany.tradeService.cache.func.TradeCalc</class-name>
-          </function>
-        </function-service>
-        ...
-    </cache>
-    ```
-
--   Java:
-
-    ``` pre
-    MyFunction myFun = new MyFunction();
-    FunctionService.registerFunction(myFun);
-    ```
-
-    **Note:**
-    Modifying a function instance after registration has no effect on the registered function. If you want to execute a new function, you must register it with a different identifier.
-
-## <a id="function_execution__section_6A0F4C9FB77C477DA5D995705C8BDD5E" class="no-quick-link"></a>Run the Function
-
-This assumes you've already followed the steps for writing and registering the function.
-
-In every member where you want to explicitly execute the function and process the results, you can use the `gfsh` command line to run the function or you can write an application to run the function.
-
-**Running the Function Using gfsh**
-
-1.  Start a gfsh prompt.
-2.  If necessary, start a Locator and connect to the Geode distributed system where you want to run the function.
-3.  At the gfsh prompt, type the following command:
-
-    ``` pre
-    gfsh> execute function --id=function_id
-    ```
-
-    Where *function\_id* equals the unique ID assigned to the function. You can obtain this ID using the `Function.getId` method.
-
-See [Function Execution Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_8BB061D1A7A9488C819FE2B7881A1278) for more `gfsh` commands related to functions.
-
-**Running the Function via API Calls**
-
-1.  Use one of the `FunctionService` `on*` methods to create an `Execution` object. The `on*` methods, `onRegion`, `onMembers`, etc., define the highest level where the function is run. For colocated partitioned regions, use `onRegion` and specify any one of the colocated regions. A function run using `onRegion` is referred to as a data-dependent function; the others are data-independent functions.
-2.  Use the `Execution` object as needed for additional function configuration. You can:
-    -   Provide a key `Set` to `withFilter` to narrow the execution scope for `onRegion` `Execution` objects. You can retrieve the key set in your `Function` `execute` method through `RegionFunctionContext.getFilter`.
-    -   Provide function arguments to `withArgs`. You can retrieve these in your `Function` `execute` method through `FunctionContext.getArguments`.
-    -   Define a custom `ResultCollector`.
-
-3.  Call the `Execution` object's `execute` method to run the function.
-4.  If the function returns results, call `getResult` from the results collector returned from `execute` and code your application to do whatever it needs to do with the results.
-    **Note:**
-    For high availability, you must call the `getResult` method.
-
-Example of running the function from an executing member:
-
-``` pre
-MultiGetFunction function = new MultiGetFunction();
-FunctionService.registerFunction(function);
-    
-writeToStdout("Press Enter to continue.");
-stdinReader.readLine();
-    
-Set keysForGet = new HashSet();
-keysForGet.add("KEY_4");
-keysForGet.add("KEY_9");
-keysForGet.add("KEY_7");
-
-Execution execution = FunctionService.onRegion(exampleRegion)
-    .withFilter(keysForGet)
-    .withArgs(Boolean.TRUE)
-    .withCollector(new MyArrayListResultCollector());
-
-ResultCollector rc = execution.execute(function);
-// Retrieve results, if the function returns results
-List result = (List)rc.getResult();
-```
-
-## <a id="function_execution__section_F2AFE056650B4BF08BC865F746BFED38" class="no-quick-link"></a>Write a Custom Results Collector
-
-This topic applies to functions that return results.
-
-When you execute a function that returns results, the function stores the results into a `ResultCollector` and returns the `ResultCollector` object. The calling application can then retrieve the results through the `ResultCollector` `getResult` method. Example:
-
-``` pre
-ResultCollector rc = execution.execute(function);
-List result = (List)rc.getResult();
-```
-
-Geode's default `ResultCollector` collects all results into an `ArrayList`. Its `getResult` methods block until all results are received. Then they return the full result set.
-
-To customize results collecting:
-
-1.  Write a class that implements `ResultCollector` and code the methods to store and retrieve the results as you need. Note that the methods are of two types:
-    1.  `addResult` and `endResults` are called by Geode as results arrive from the `ResultSender` calls (`sendResult` and `lastResult`) in the `Function` instance
-    2.  `getResult` is available to your executing application (the one that calls `Execution.execute`) to retrieve the results
-
-2.  Use high availability for `onRegion` functions that have been coded for it:
-    1.  Code the `ResultCollector` `clearResults` method to remove any partial results data. This readies the instance for a clean function re-execution.
-    2.  When you invoke the function, call the result collector `getResult` method. This enables the high availability functionality.
-
-3.  In your member that calls the function execution, create the `Execution` object using the `withCollector` method, and passing it your custom collector. Example:
-
-    ``` pre
-    Execution execution = FunctionService.onRegion(exampleRegion)
-        .withFilter(keysForGet)
-        .withArgs(Boolean.TRUE)
-        .withCollector(new MyArrayListResultCollector());
-    ```
-
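-The `MyArrayListResultCollector` used in the examples above is not shown in these docs; a minimal sketch of what such a collector might look like follows (the generic types and the blocking strategy are assumptions for this example):
-
-``` pre
-package quickstart;
-
-import java.io.Serializable;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.concurrent.TimeUnit;
-
-import org.apache.geode.cache.execute.FunctionException;
-import org.apache.geode.cache.execute.ResultCollector;
-import org.apache.geode.distributed.DistributedMember;
-
-public class MyArrayListResultCollector
-    implements ResultCollector<Serializable, List<Serializable>> {
-
-  private final List<Serializable> results = new ArrayList<Serializable>();
-  private boolean complete = false;
-
-  // Called by Geode as each result arrives from a function's ResultSender
-  public synchronized void addResult(DistributedMember member, Serializable result) {
-    results.add(result);
-  }
-
-  // Called by Geode after the last result has been received
-  public synchronized void endResults() {
-    complete = true;
-    notifyAll();
-  }
-
-  // Called by Geode before a high-availability re-execution
-  public synchronized void clearResults() {
-    results.clear();
-    complete = false;
-  }
-
-  // Called by the application; blocks until all results have arrived
-  public synchronized List<Serializable> getResult() throws FunctionException {
-    while (!complete) {
-      try {
-        wait();
-      } catch (InterruptedException e) {
-        throw new FunctionException(e);
-      }
-    }
-    return results;
-  }
-
-  public synchronized List<Serializable> getResult(long timeout, TimeUnit unit)
-      throws FunctionException, InterruptedException {
-    long remaining = unit.toMillis(timeout);
-    long deadline = System.currentTimeMillis() + remaining;
-    while (!complete && remaining > 0) {
-      wait(remaining);
-      remaining = deadline - System.currentTimeMillis();
-    }
-    return results;
-  }
-}
-```
-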
-## <a id="function_execution__section_638E1FB9B08F4CC4B62C07DDB3661C14" class="no-quick-link"></a>Targeting Single Members of a Member Group or Entire Member Groups
-
-To execute a data independent function on a group of members or one member in a group of members, you can write your own nested function. You will need to write one nested function if you are executing the function from client to server and another nested function if you are executing a function from server to all members.
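-
-As a sketch, the outer call might look like this (the group names and the registered `proxyFunction` are assumptions for this example; the proxy function would itself re-invoke the real function on the members it reaches):
-
-``` pre
-// Sketch: run a function on one member of each named member group
-Execution execution = FunctionService.onMember("group1", "group2");
-ResultCollector rc = execution.execute(proxyFunction);
-```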

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/function_exec/how_function_execution_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/function_exec/how_function_execution_works.html.md.erb b/geode-docs/developing/function_exec/how_function_execution_works.html.md.erb
deleted file mode 100644
index a72045f..0000000
--- a/geode-docs/developing/function_exec/how_function_execution_works.html.md.erb
+++ /dev/null
@@ -1,131 +0,0 @@
----
-title:  How Function Execution Works
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-## <a id="how_function_execution_works__section_881D2FF6761B4D689DDB46C650E2A2E1" class="no-quick-link"></a>Where Functions Are Executed
-
-You can execute data-independent functions or data-dependent functions in Geode in the following places:
-
-**For Data-independent Functions**
-
--   **On a specific member or members—**Execute the function within a peer-to-peer distributed system, specifying the member or members where you want to run the function by using `FunctionService` methods `onMember()` and `onMembers()`.
--   **On a specific server or set of servers—**If you are connected to a distributed system as a client, you can execute the function on a server or servers configured for a specific connection pool, or on a server or servers connected to a given cache using the default connection pool. For data-independent functions on client/server architectures, a client invokes `FunctionService` methods `onServer()` or `onServers()`. (See [How Client/Server Connections Work](../../topologies_and_comm/topology_concepts/how_the_pool_manages_connections.html) for details regarding pool connections.)
--   **On member groups or on a single member within each member group—**You can organize members into logical member groups. (See [Configuring and Running a Cluster](../../configuring/chapter_overview.html#concept_lrh_gyq_s4) for more information about using member groups.) You can invoke a data-independent function on all members in a specified member group or member groups, or execute the function on only one member of each specified member group.
-
-**For Data-dependent Functions**
-
--   **On a region—**If you are executing a data-dependent function, specify a region and, optionally, a set of keys on which to run the function. The method `FunctionService.onRegion()` directs a data-dependent function to execute on a specific region.
-
-See the `org.apache.geode.cache.execute.FunctionService` Java API documentation for more details.
-
-## <a id="how_function_execution_works__section_E0C4B7D2E4414F099788A5A441FF0E03" class="no-quick-link"></a>How Functions Are Executed
-
-The following things occur when executing a function:
-
-1.  When you call the `execute` method on the `Execution` object, Geode invokes the function on all members where it needs to run. The locations are determined by the `FunctionService` `on*` method calls, region configuration, and any filters.
-2.  If the function has results, they are returned to the `addResult` method call in a `ResultCollector` object.
-3.  The originating member collects results using `ResultCollector.getResult`.
-
-## <a id="how_function_execution_works__section_14FF9932C7134C5584A14246BB4D4FF6" class="no-quick-link"></a>Highly Available Functions
-
-Generally, function execution errors are returned to the calling application. You can code for high availability for `onRegion` functions that return a result, so Geode automatically retries a function if it does not execute successfully. You must code and configure the function to be highly available, and the calling application must invoke the function using the results collector `getResult` method.
-
-When a failure (such as an execution error or member crash while executing) occurs, the system responds by:
-
-1.  Waiting for all calls to return
-2.  Setting a boolean indicating a re-execution
-3.  Calling the result collector's `clearResults` method
-4.  Executing the function
-
-For client regions, the system retries the execution according to `org.apache.geode.cache.client.Pool` `retryAttempts`. If the function fails to run every time, the final exception is returned to the `getResult` method.
-
-For member calls, the system retries until either it succeeds or no data remains in the system for the function to operate on.
-
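-Sketched in code, the requirements on the function side look something like this (assuming the `FunctionAdapter` base class used elsewhere in these docs; the actual work is elided):
-
-``` pre
-public class MyHaFunction extends FunctionAdapter {
-  public boolean isHA() { return true; }      // allow automatic re-execution
-  public boolean hasResult() { return true; } // HA functions must return results
-
-  public void execute(FunctionContext fc) {
-    RegionFunctionContext context = (RegionFunctionContext) fc;
-    if (context.isPossibleDuplicate()) {
-      // may be a retry after a failure; keep the work idempotent
-    }
-    // ... do the work ...
-    context.getResultSender().lastResult("done");
-  }
-
-  public String getId() {
-    return getClass().getName();
-  }
-}
-```
-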
-## <a id="how_function_execution_works__section_A0FD54B73E9A453AA38FC4A4D5282351" class="no-quick-link"></a>Function Execution Scenarios
-
-[Server-distributed System](#how_function_execution_works__fig_server_distributed_system) shows the sequence of events for a data-independent function invoked from a client on all available servers.
-
-<a id="how_function_execution_works__fig_server_distributed_system"></a>
-
-<span class="figtitleprefix">Figure: </span>Server-distributed System
-
-<img src="../../images/FuncExecOnServers.png" alt="A diagram showing the sequence of events for a data-independent function invoked from a client on all available servers" id="how_function_execution_works__image_993D1FD7705E40EA801CF0656C4E91E5" class="image" />
-
-The client contacts a locator to obtain host and port identifiers for each server in the distributed system and issues calls to each server. As the instigator of the calls, the client also receives the call results.
-
-[Peer-to-peer Distributed System](#how_function_execution_works__fig_peer_distributed_system) shows the sequence of events for a data-independent function executed against members in a peer-to-peer distributed system.
-
-<a id="how_function_execution_works__fig_peer_distributed_system"></a>
-
-<span class="figtitleprefix">Figure: </span>Peer-to-peer Distributed System
-
-<img src="../../images/FuncExecOnMembers.png" alt="The sequence of events for a data-independent function executed against members in a peer-to-peer distributed system." id="how_function_execution_works__image_041832B370AA4241980B8C2632DD1DC8" class="image" />
-
-You can think of `onMembers()` as the peer-to-peer counterpart of a client-server call to `onServers()`. Because it is called from a peer of other members in the distributed system, an `onMembers()` function invocation has access to detailed metadata and does not require the services of a locator. The caller invokes the function on itself, if appropriate, as well as other members in the distributed system and collects the results of all of the function executions.
-
-[Data-dependent Function on a Region](#how_function_execution_works__fig_data_dependent_function_region) shows a data-dependent function run on a region.
-
-<a id="how_function_execution_works__fig_data_dependent_function_region"></a>
-
-<span class="figtitleprefix">Figure: </span>Data-dependent Function on a Region
-
-<img src="../../images/FuncExecOnRegionNoMetadata.png" alt="The path followed when the client lacks detailed metadata regarding target locations" id="how_function_execution_works__image_68742923936F4EEC8E50819F5CEECBCC" class="image" />
-
-An `onRegion()` call requires more detailed metadata than a locator provides in its host:port identifier. This diagram shows the path followed when the client lacks detailed metadata regarding target locations, as on the first call or when previously obtained metadata is no longer up to date.
-
-The first time a client invokes a function to be executed on a particular region of a distributed system, the client's knowledge of target locations is limited to the host and port information provided by the locator. Given only this limited information, the client sends its execution request to whichever server is next in line to be called according to the pool allocation algorithm. Because it is a participant in the distributed system, that server has access to detailed metadata and can dispatch the function call to the appropriate target locations. When the server returns results to the client, it sets a flag indicating whether a request to a different server would have provided a more direct path to the intended target. To improve efficiency, the client requests a copy of the metadata. With additional details regarding the bucket layout for the region, the client can act as its own dispatcher on subsequent calls and identify multiple targets for itself, eliminating at least one hop.
-
-After it has obtained current metadata, the client can act as its own dispatcher on subsequent calls, identifying multiple targets for itself and eliminating one hop, as shown in [Data-dependent function after obtaining current metadata](#how_function_execution_works__fig_data_dependent_function_obtaining_current_metadata).
-
-<a id="how_function_execution_works__fig_data_dependent_function_obtaining_current_metadata"></a>
-
-<span class="figtitleprefix">Figure: </span>Data-dependent function after obtaining current metadata
-
-<img src="../../images/FuncExecOnRegionWithMetadata.png" alt="A diagram showing the client acting as its own dispatcher after having obtained current metadata." class="image" />
-
-[Data-dependent Function on a Region with Keys](#how_function_execution_works__fig_data_dependent_function_region_keys) shows the same data-dependent function with the added specification of a set of keys on which to run.
-
-<a id="how_function_execution_works__fig_data_dependent_function_region_keys"></a>
-
-<span class="figtitleprefix">Figure: </span>Data-dependent Function on a Region with Keys
-
-<img src="../../images/FuncExecOnRegionWithFilter.png" alt="A data-dependent function on a region with specification of keys on which to run" id="how_function_execution_works__image_7FA8BE5D02F24CF8B49186C6FEB786BD" class="image" />
-
-Servers that do not hold any keys are left out of the function execution.
-
-[Peer-to-peer Data-dependent Function](#how_function_execution_works__fig_peer_data_dependent_function) shows a peer-to-peer data-dependent call.
-
-<a id="how_function_execution_works__fig_peer_data_dependent_function"></a>
-
-<span class="figtitleprefix">Figure: </span>Peer-to-peer Data-dependent Function
-
-<img src="../../images/FuncExecOnRegionPeersWithFilter.png" alt="A data-dependent function where the caller is not an external client" id="how_function_execution_works__image_9B8E914BA80E4BBA99856E9603A9BDA0" class="image" />
-
-The caller is a member of the distributed system, not an external client, so the function runs in the caller's distributed system. Note the similarities between this diagram and the preceding figure ([Data-dependent Function on a Region with Keys](#how_function_execution_works__fig_data_dependent_function_region_keys)), which shows a client-server model where the client has up-to-date metadata regarding target locations within the distributed system.
-
-[Client-server system with Up-to-date Target Metadata](#how_function_execution_works__fig_client_server_system_target_metadata) demonstrates a sequence of steps in a call to a highly available function in a client-server system in which the client has up-to-date metadata regarding target locations.
-
-<a id="how_function_execution_works__fig_client_server_system_target_metadata"></a>
-
-<span class="figtitleprefix">Figure: </span>Client-server system with Up-to-date Target Metadata
-
-<img src="../../images/FuncExecOnRegionHAWithFilter.png" alt="A sequence of steps in a call to a highly available function in a client-server system in which the client has up-to-date metadata regarding target locations" id="how_function_execution_works__image_05E94BB0EBF349FF8822158F2001F313" class="image" />
-
-In this example, three primary keys (X, Y, Z) and their secondary copies (X', Y', Z') are distributed among three servers. Because `optimizeForWrite` is `true`, the system first attempts to invoke the function where the primary keys reside: Server 1 and Server 2. Suppose, however, that Server 2 is off-line for some reason, so the call targeted for key Y fails. Because `isHA` is set to `true`, the call is retried on Server 1 (which succeeded the first time, so likely will do so again) and Server 3, where key Y' resides. This time, the function call returns successfully. Calls to highly available functions retry until they obtain a successful result or they reach a retry limit.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/management_all_region_types/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/management_all_region_types/chapter_overview.html.md.erb b/geode-docs/developing/management_all_region_types/chapter_overview.html.md.erb
deleted file mode 100644
index ea43679..0000000
--- a/geode-docs/developing/management_all_region_types/chapter_overview.html.md.erb
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title:  General Region Data Management
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-For all regions, you have options to control memory use, back up your data to disk, and keep stale data out of your cache.
-
--   **[Persistence and Overflow](../../developing/storing_data_on_disk/chapter_overview.html)**
-
-    You can persist data on disk for backup purposes and overflow it to disk to free up memory without completely removing the data from your cache.
-
--   **[Eviction](../../developing/eviction/chapter_overview.html)**
-
-    Use eviction to control data region size.
-
--   **[Expiration](../../developing/expiration/chapter_overview.html)**
-
-    Use expiration to keep data current by removing stale entries. You can also use it to remove entries you are not using so your region uses less space. Expired entries are reloaded the next time they are requested.
-
--   **[Keeping the Cache in Sync with Outside Data Sources](../../developing/outside_data_sources/sync_outside_data.html)**
-
-    Keep your distributed cache in sync with an outside data source by programming and installing application plug-ins for your region.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/outside_data_sources/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/outside_data_sources/chapter_overview.html.md.erb b/geode-docs/developing/outside_data_sources/chapter_overview.html.md.erb
deleted file mode 100644
index a008ede..0000000
--- a/geode-docs/developing/outside_data_sources/chapter_overview.html.md.erb
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title:  Overview of Outside Data Sources
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-Apache Geode has application plug-ins to read data into the cache and write it out.
-
-<a id="outside_data_sources__section_100B707BB812430E8D9CFDE3BE4698D1"></a>
-The application plug-ins:
-
-1.  Load data on cache misses using an implementation of `org.apache.geode.cache.CacheLoader`. The `CacheLoader.load` method is called when a `get` operation can't find the value in the cache. The value returned from the loader is put into the cache and returned to the `get` operation. You might use this in conjunction with data expiration to remove old data, and with other data-loading applications that might be prompted by events in the outside data source. See [Configure Data Expiration](../expiration/configuring_data_expiration.html).
-2.  Write data out to the data source using the cache event handlers, `CacheWriter` and `CacheListener`. For implementation details, see [Implementing Cache Event Handlers](../events/implementing_cache_event_handlers.html).
-    -   `CacheWriter` is run synchronously. Before performing any operation on a region entry, if any cache writers are defined for the region in the distributed system, the system invokes the most convenient writer. In partitioned and distributed regions, cache writers are usually defined in only a subset of the caches holding the region, often in only one cache. The cache writer can abort the region entry operation. A sketch of a write-through cache writer appears after this list.
-    -   `CacheListener` is run synchronously after the cache is updated. This listener works only on local cache events, so install your listener in every cache where you want it to handle events. You can install multiple cache listeners in any of your caches.
-
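-As an illustration, here is a minimal sketch of a write-through cache writer (the `CacheWriterAdapter` convenience base class is part of the API; the package name and `writeToDatabase` are hypothetical):
-
-``` pre
-package com.example;
-
-import org.apache.geode.cache.CacheWriterException;
-import org.apache.geode.cache.EntryEvent;
-import org.apache.geode.cache.util.CacheWriterAdapter;
-
-public class DbWriter extends CacheWriterAdapter<String, String> {
-  @Override
-  public void beforeCreate(EntryEvent<String, String> event) throws CacheWriterException {
-    // Throwing CacheWriterException here aborts the region entry operation
-    writeToDatabase(event.getKey(), event.getNewValue()); // hypothetical DAO call
-  }
-
-  private void writeToDatabase(String key, String value) {
-    // ... JDBC or other external-store logic would go here ...
-  }
-}
-```
-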
-In addition to using application plug-ins, you can also configure external JNDI database sources in your cache.xml and use these data sources in transactions. See [Configuring Database Connections Using JNDI](../transactions/configuring_db_connections_using_JNDI.html#topic_A5E3A67C808D48C08E1F0DC167C5C494) for more information.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/outside_data_sources/how_data_loaders_work.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/outside_data_sources/how_data_loaders_work.html.md.erb b/geode-docs/developing/outside_data_sources/how_data_loaders_work.html.md.erb
deleted file mode 100644
index 4f309a0..0000000
--- a/geode-docs/developing/outside_data_sources/how_data_loaders_work.html.md.erb
+++ /dev/null
@@ -1,52 +0,0 @@
----
-title:  How Data Loaders Work
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-By default, a region has no data loader defined. Plug an application-defined loader into any region by setting the region attribute `cache-loader` on the members that host data for the region.
-
-<a id="how_data_loaders_work__section_1E600469D223498DB49446434CE9B0B4"></a>
-The loader is called on cache misses during get operations, and it populates the cache with the new entry value in addition to returning the value to the calling thread.
-
-A loader can be configured to load data into the Geode cache from an outside data store. To do the reverse operation, writing data from the Geode cache to an outside data store, use a cache writer event handler. See [Implementing Cache Event Handlers](../events/implementing_cache_event_handlers.html).
-
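-For example, a single `get` call is all it takes to engage an installed loader. This is a minimal sketch, assuming a region named `quotes` whose loader is already installed:
-
-``` pre
-// On a miss, Geode invokes the region's cache loader, puts the returned
-// value into the cache, and returns it to this get call.
-Object value = quotes.get("Q-100");
-```
-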
-How to install your cache loader depends on the type of region.
-
-## <a id="how_data_loaders_work__section_5CD65D559F1A490DAB5ED9326860FE8D" class="no-quick-link"></a>Data Loading in Partitioned Regions
-
-Because of the huge amounts of data they can handle, partitioned regions support partitioned loading. Each cache loader loads only the data entries in the member where the loader is defined. If data redundancy is configured, data is loaded only if the member holds the primary copy. For this reason, you must install a cache loader in every member where the partition attribute `local-max-memory` is not zero.
-
-If you depend on a JDBC connection, every data store must have a connection to the data source, as shown in the following figure. Here the three members require three connections. See [Configuring Database Connections Using JNDI](../transactions/configuring_db_connections_using_JNDI.html#topic_A5E3A67C808D48C08E1F0DC167C5C494) for information on how to configure data sources.
-
-**Note:**
-Partitioned regions generally require more JDBC connections than distributed regions.
-
-<img src="../../images_svg/cache_data_loader.svg" id="how_data_loaders_work__image_CD7CE9BD22ED4782AB6B296187AB983A" class="image" />
-
-## <a id="how_data_loaders_work__section_6A2CE777CE9E4BD682B881F6986CF66C" class="no-quick-link"></a>Data Loading in Distributed Regions
-
-In a non-partitioned distributed region, a cache loader defined in one member is available to all members that have the region defined. Loaders are usually defined in just a subset of the caches holding the region. When a loader is needed, all available loaders for the region are invoked, starting with the most convenient loader, until the data is loaded or all loaders have been tried.
-
-In the following figure, the members of one distributed system can be running on different machines. Loading for the distributed region is performed from M1.
-
-<img src="../../images_svg/cache_data_loader_2.svg" id="how_data_loaders_work__image_3C39A50218D64EF28A5448EB01A4C6EC" class="image" />
-
-## <a id="how_data_loaders_work__section_BE33D9AB27104D1BB8AC8BFCE11A063E" class="no-quick-link"></a>Data Loading in Local Regions
-
-For local regions, the cache loader is available only in the member where it is defined. If a loader is defined, it is called whenever a value is not found in the local cache.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/outside_data_sources/implementing_data_loaders.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/outside_data_sources/implementing_data_loaders.html.md.erb b/geode-docs/developing/outside_data_sources/implementing_data_loaders.html.md.erb
deleted file mode 100644
index 7c20fbd..0000000
--- a/geode-docs/developing/outside_data_sources/implementing_data_loaders.html.md.erb
+++ /dev/null
@@ -1,88 +0,0 @@
----
-title:  Implement a Data Loader
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-To program a data loader and configure your region to use it:
-
-1. Program your loader.
-
-2. Install your loader in each member region where you need it.
-
-## <a id="implementing_data_loaders__section_88076AF5EC184FE88AAF4C806A0CA9DF" class="no-quick-link"></a>Program your loader
-To program your loader:
-
-1.  Implement `org.apache.geode.cache.CacheLoader`.
-
-2.  If you want to declare the loader in your `cache.xml`, implement the `org.apache.geode.cache.Declarable` interface as well.
-
-3.  Program the single `CacheLoader` `load` method to do whatever your application requires for retrieving the value from outside the cache. If you need to run `Region` API calls from your loader, spawn separate threads for them. Do not make direct calls to `Region` methods from your load method implementation as it could cause the cache loader to block, hurting the performance of the distributed system. For example:
-
-    ``` pre
-    import java.util.Properties;
-
-    import org.apache.geode.cache.CacheLoader;
-    import org.apache.geode.cache.Declarable;
-    import org.apache.geode.cache.LoaderHelper;
-
-    public class SimpleCacheLoader implements CacheLoader, Declarable {
-        public Object load(LoaderHelper helper) {
-            String key = (String) helper.getKey();
-            System.out.println(" Loader called to retrieve value for " + key);
-            // Create a value using the suffix number of the key (key1, key2, etc.)
-            return "LoadedValue" + Integer.parseInt(key.substring(3));
-        }
-        public void close() {
-            // do nothing
-        }
-        public void init(Properties props) {
-            // do nothing
-        }
-    }
-    ```
-
-## Install your loader in each member region
-To install your loader in each member region where you need it:
-
-1. In a partitioned region, install the cache loader in every data store for the region (`partition-attributes` `local-max-memory` &gt; 0).
-
-2. In a distributed region, install the loader in the members where it makes sense to do so. Cache loaders are usually defined in only a subset of the members holding the region. You might, for example, assign the job of loading from a database to one or two members for a region hosted by many more members. This can be done to reduce the number of connections when the outside source is a database.
-
-    Use one of these methods to install the loader:
-    -   XML:
-
-        ``` pre
-        <region-attributes>
-            <cache-loader>
-                <class-name>myCacheLoader</class-name>
-            </cache-loader>
-        </region-attributes>
-        ```
-    -   XML with parameters:
-
-        ``` pre
-        <cache-loader>
-            <class-name>com.company.data.DatabaseLoader</class-name>
-            <parameter name="URL">
-                <string>jdbc:cloudscape:rmi:MyData</string>
-            </parameter>
-        </cache-loader>
-        ```
-    -   Java:
-
-        ``` pre
-        RegionFactory<String,Object> rf = cache.createRegionFactory(REPLICATE);
-        rf.setCacheLoader(new QuoteLoader());
-        quotes = rf.create("NASDAQ Quotes");
-        ```
-
-**Note:**
-You can also configure regions using the gfsh command-line interface; however, you cannot configure a `cache-loader` using gfsh. See [Region Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_EF03119A40EE492984F3B6248596E1DD).
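-
-To see the pieces working together, the following is a minimal sketch that installs the `SimpleCacheLoader` shown above and triggers it with a `get`. The region name `quotes` is an assumption for illustration:
-
-``` pre
-import org.apache.geode.cache.Cache;
-import org.apache.geode.cache.CacheFactory;
-import org.apache.geode.cache.Region;
-import org.apache.geode.cache.RegionFactory;
-import org.apache.geode.cache.RegionShortcut;
-
-public class LoaderExample {
-    public static void main(String[] args) {
-        Cache cache = new CacheFactory().create();
-        RegionFactory<String, Object> rf =
-            cache.createRegionFactory(RegionShortcut.REPLICATE);
-        rf.setCacheLoader(new SimpleCacheLoader());
-        Region<String, Object> quotes = rf.create("quotes");
-
-        // "key7" is not in the cache, so the loader runs and returns "LoadedValue7".
-        System.out.println(quotes.get("key7"));
-        cache.close();
-    }
-}
-```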
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/outside_data_sources/sync_outside_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/outside_data_sources/sync_outside_data.html.md.erb b/geode-docs/developing/outside_data_sources/sync_outside_data.html.md.erb
deleted file mode 100644
index 728b664..0000000
--- a/geode-docs/developing/outside_data_sources/sync_outside_data.html.md.erb
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title:  Keeping the Cache in Sync with Outside Data Sources
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-Keep your distributed cache in sync with an outside data source by programming and installing application plug-ins for your region.
-
--   **[Overview of Outside Data Sources](../../developing/outside_data_sources/chapter_overview.html)**
-
-    Apache Geode has application plug-ins to read data into the cache and write it out.
-
--   **[How Data Loaders Work](../../developing/outside_data_sources/how_data_loaders_work.html)**
-
-    By default, a region has no data loader defined. Plug an application-defined loader into any region by setting the region attribute `cache-loader` on the members that host data for the region.
-
--   **[Implement a Data Loader](../../developing/outside_data_sources/implementing_data_loaders.html)**
-
-    Program a data loader and configure your region to use it.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/partitioned_regions/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/chapter_overview.html.md.erb b/geode-docs/developing/partitioned_regions/chapter_overview.html.md.erb
deleted file mode 100644
index c92921b..0000000
--- a/geode-docs/developing/partitioned_regions/chapter_overview.html.md.erb
+++ /dev/null
@@ -1,60 +0,0 @@
----
-title:  Partitioned Regions
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-In addition to basic region management, partitioned regions include options for high availability, data location control, and data balancing across the distributed system.
-
--   **[Understanding Partitioning](../../developing/partitioned_regions/how_partitioning_works.html)**
-
-    To use partitioned regions, you should understand how they work and your options for managing them.
-
--   **[Configuring Partitioned Regions](../../developing/partitioned_regions/managing_partitioned_regions.html)**
-
-    Plan the configuration and ongoing management of your partitioned region for host and accessor members and configure the regions for startup.
-
--   **[Configuring the Number of Buckets for a Partitioned Region](../../developing/partitioned_regions/configuring_bucket_for_pr.html)**
-
-    Decide how many buckets to assign to your partitioned region and set the configuration accordingly.
-
--   **[Custom-Partitioning and Colocating Data](../../developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html)**
-
-    You can customize how Apache Geode groups your partitioned region data with custom partitioning and data colocation.
-
--   **[Configuring High Availability for Partitioned Regions](../../developing/partitioned_regions/overview_how_pr_ha_works.html)**
-
-    By default, Apache Geode stores only a single copy of your partitioned region data among the region's data stores. You can configure Geode to maintain redundant copies of your partitioned region data for high availability.
-
--   **[Configuring Single-Hop Client Access to Server-Partitioned Regions](../../developing/partitioned_regions/overview_how_pr_single_hop_works.html)**
-
-    Single-hop data access enables the client pool to track where a partitioned region's data is hosted in the servers. To access a single entry, the client directly contacts the server that hosts the key, in a single hop.
-
--   **[Rebalancing Partitioned Region Data](../../developing/partitioned_regions/rebalancing_pr_data.html)**
-
-    In a distributed system with minimal contention among the concurrent threads that read and update member data, you can use rebalancing to dynamically increase or decrease your data and processing capacity.
-
--   **[Checking Redundancy in Partitioned Regions](../../developing/partitioned_regions/checking_region_redundancy.html)**
-
-    Under some circumstances, it can be important to verify that your partitioned region data is redundant and that upon member restart, redundancy has been recovered properly across partitioned region members.
-
--   **[Moving Partitioned Region Data to Another Member](../../developing/partitioned_regions/moving_partitioned_data.html)**
-
-    You can use the `PartitionRegionHelper` `moveBucketByKey` and `moveData` methods to explicitly move partitioned region data from one member to another.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/partitioned_regions/checking_region_redundancy.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/checking_region_redundancy.html.md.erb b/geode-docs/developing/partitioned_regions/checking_region_redundancy.html.md.erb
deleted file mode 100644
index 1e57742..0000000
--- a/geode-docs/developing/partitioned_regions/checking_region_redundancy.html.md.erb
+++ /dev/null
@@ -1,55 +0,0 @@
----
-title:  Checking Redundancy in Partitioned Regions
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-Under some circumstances, it can be important to verify that your partitioned region data is redundant and that upon member restart, redundancy has been recovered properly across partitioned region members.
-
-You can verify partitioned region redundancy by making sure that the `numBucketsWithoutRedundancy` statistic is **zero** for all your partitioned regions. To check this statistic, use the following `gfsh` command:
-
-``` pre
-gfsh>show metrics --categories=partition --region=region_name
-```
-
-For example:
-
-``` pre
-gfsh>show metrics --categories=partition --region=posts
-
-Cluster-wide Region Metrics
-Category  |           Metric            | Value
---------- | --------------------------- | -----
-partition | putLocalRate                | 0
-          | putRemoteRate               | 0
-          | putRemoteLatency            | 0
-          | putRemoteAvgLatency         | 0
-          | bucketCount                 | 1
-          | primaryBucketCount          | 1
-          | numBucketsWithoutRedundancy | 1
-          | minBucketSize               | 1
-          | maxBucketSize               | 0
-          | totalBucketSize             | 1
-          | averageBucketSize           | 1
-      
-```
-
-If you have `startup-recovery-delay=-1` configured for your partitioned region, you must perform a rebalance on your region after you restart any members in your cluster in order to recover redundancy.
-
-If you have `startup-recovery-delay` set to a low number, you may need to allow extra time until the region has recovered redundancy.
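-
-You can also check redundancy programmatically from a member. The following is a minimal sketch using `org.apache.geode.cache.partition.PartitionRegionHelper`; the `RedundancyCheck` wrapper class is an assumption for illustration:
-
-``` pre
-import org.apache.geode.cache.Region;
-import org.apache.geode.cache.partition.PartitionRegionHelper;
-import org.apache.geode.cache.partition.PartitionRegionInfo;
-
-public class RedundancyCheck {
-    // Prints the number of buckets missing redundant copies; zero means
-    // the region has its configured redundancy for every bucket.
-    public static void printLowRedundancy(Region<?, ?> region) {
-        PartitionRegionInfo info = PartitionRegionHelper.getPartitionRegionInfo(region);
-        System.out.println("Low-redundancy buckets: " + info.getLowRedundancyBucketCount());
-    }
-}
-```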
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/partitioned_regions/colocating_partitioned_region_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/colocating_partitioned_region_data.html.md.erb b/geode-docs/developing/partitioned_regions/colocating_partitioned_region_data.html.md.erb
deleted file mode 100644
index c20e30e..0000000
--- a/geode-docs/developing/partitioned_regions/colocating_partitioned_region_data.html.md.erb
+++ /dev/null
@@ -1,128 +0,0 @@
----
-title:  Colocate Data from Different Partitioned Regions
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-By default, Geode allocates the data locations for a partitioned region independent of the data locations for any other partitioned region. You can change this policy for any group of partitioned regions, so that cross-region, related data is all hosted by the same member. This colocation speeds queries and other operations that access data from the regions.
-
-<a id="colocating_partitioned_region_data__section_131EC040055E48A6B35E981B5C845A65"></a>
-**Note:**
-If you are colocating data between regions and custom partitioning the data in the regions, all colocated regions must use partitioning mechanisms that return the same routing object. The most common approach, though not the only one, is for all colocated regions to use the same custom PartitionResolver. See [Custom-Partition Your Region Data](using_custom_partition_resolvers.html).
-
-Data colocation between partitioned regions generally improves the performance of data-intensive operations. You can reduce network hops for iterative operations on related data sets. Compute-heavy applications that are data-intensive can significantly increase overall throughput. For example, a query run on a patient's health records, insurance, and billing information is more efficient if all data is grouped in a single member. Similarly, a financial risk analytical application runs faster if all trades, risk sensitivities, and reference data associated with a single instrument are together.
-
-**Prerequisites**
-
-<a id="colocating_partitioned_region_data__section_5A8D752F02834146A37D9430F1CA32DA"></a>
-
--   Understand how to configure and create your partitioned regions. See [Understanding Partitioning](how_partitioning_works.html) and [Configuring Partitioned Regions](managing_partitioned_regions.html#configure_partitioned_regions).
--   (Optional) Understand how to custom-partition your data. See [Custom-Partition Your Region Data](using_custom_partition_resolvers.html).
--   (Optional) If you want your colocated regions to be highly available, understand how high availability for partitioned regions works. See [Understanding High Availability for Partitioned Regions](how_pr_ha_works.html#how_pr_ha_works).
--   (Optional) Understand how to persist your region data. See [Configure Region Persistence and Overflow](../storing_data_on_disk/storing_data_on_disk.html).
-
-**Procedure**
-
-1.  Identify one region as the central region, with which data in the other regions is explicitly colocated. If you use persistence for any of the regions, you must persist the central region.
-    1.  Create the central region before you create the others, either in cache.xml or in your code. Regions in the XML are created before regions in the code, so if you create any of your colocated regions in the XML, you must create the central region in the XML before the others. Geode verifies its existence when the others are created and throws an `IllegalStateException` if the central region is not there. Do not add any colocation specifications to this central region.
-    2.  For all other regions, in the region partition attributes, provide the central region's name in the `colocated-with` attribute. Use one of these methods:
-        -   XML:
-
-            ``` pre
-            <cache> 
-                <region name="trades"> 
-                    <region-attributes> 
-                        <partition-attributes>  
-                            ...
-                        </partition-attributes> 
-                    </region-attributes> 
-                </region> 
-                <region name="trade_history"> 
-                    <region-attributes> 
-                        <partition-attributes colocated-with="trades">   
-                            ...
-                        </partition-attributes> 
-                    </region-attributes> 
-                </region> 
-            </cache> 
-            ```
-        -   Java:
-
-            ``` pre
-            PartitionAttributes attrs = ...
-            Region trades = new RegionFactory().setPartitionAttributes(attrs).create("trades");
-            ...
-            attrs = new PartitionAttributesFactory().setColocatedWith(trades.getFullPath()).create();
-            Region trade_history = new RegionFactory().setPartitionAttributes(attrs).create("trade_history");
-            ```
-        -   gfsh:
-
-            ``` pre
-            gfsh>create region --name="trades" --type=PARTITION
-            gfsh>create region --name="trade_history" --type=PARTITION --colocated-with="trades"
-            ```
-
-2.  For each of the colocated regions, use the same values for these partition attributes related to bucket management:
-    -   `recovery-delay`
-    -   `redundant-copies`
-    -   `startup-recovery-delay`
-    -   `total-num-buckets`
-
-3.  If you custom partition your region data, provide the same custom resolver to all colocated regions (a sketch of such a resolver follows this procedure):
-    -   XML:
-
-        ``` pre
-        <cache> 
-            <region name="trades"> 
-                <region-attributes> 
-                    <partition-attributes>  
-                    <partition-resolver name="TradesPartitionResolver"> 
-                        <class-name>myPackage.TradesPartitionResolver
-                        </class-name>
-                    </partition-resolver> 
-                    </partition-attributes> 
-                </region-attributes> 
-            </region> 
-            <region name="trade_history"> 
-                <region-attributes> 
-                    <partition-attributes colocated-with="trades">   
-                    <partition-resolver name="TradesPartitionResolver"> 
-                        <class-name>myPackage.TradesPartitionResolver
-                        </class-name>
-                    </partition-resolver> 
-                    </partition-attributes> 
-                </region-attributes> 
-            </region> 
-        </cache> 
-        ```
-    -   Java:
-
-        ``` pre
-        PartitionResolver resolver = new TradesPartitionResolver();
-        PartitionAttributes attrs = 
-            new PartitionAttributesFactory()
-            .setPartitionResolver(resolver).create();
-        Region trades = new RegionFactory().setPartitionAttributes(attrs).create("trades");
-        attrs = new PartitionAttributesFactory()
-            .setColocatedWith(trades.getFullPath()).setPartitionResolver(resolver).create();
-        Region trade_history = new RegionFactory().setPartitionAttributes(attrs).create("trade_history");
-        ```
-    -   gfsh:
-
-        You cannot specify a partition resolver using gfsh.
-
-4.  If you want to persist data in the colocated regions, persist the central region and then persist the other regions as needed. Use the same disk store for all of the colocated regions that you persist.
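-
-For reference, here is a minimal sketch of the kind of resolver named in the step 3 examples. The key format is an assumption for illustration: it supposes keys of the form `tradeId|suffix`, so related entries in both regions return the same routing object (the trade id) and are grouped into the same bucket.
-
-``` pre
-package myPackage;
-
-import org.apache.geode.cache.EntryOperation;
-import org.apache.geode.cache.PartitionResolver;
-
-public class TradesPartitionResolver implements PartitionResolver<String, Object> {
-    @Override
-    public Object getRoutingObject(EntryOperation<String, Object> opDetails) {
-        // Route on the trade id portion of the key, which colocated
-        // regions share for related entries.
-        String key = opDetails.getKey();
-        return key.substring(0, key.indexOf('|'));
-    }
-
-    @Override
-    public String getName() {
-        return "TradesPartitionResolver";
-    }
-
-    @Override
-    public void close() {}
-}
-```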
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/partitioned_regions/configure_pr_single_hop.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/configure_pr_single_hop.html.md.erb b/geode-docs/developing/partitioned_regions/configure_pr_single_hop.html.md.erb
deleted file mode 100644
index a7eeeb2..0000000
--- a/geode-docs/developing/partitioned_regions/configure_pr_single_hop.html.md.erb
+++ /dev/null
@@ -1,39 +0,0 @@
----
-title:  Configure Client Single-Hop Access to Server-Partitioned Regions
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-Configure your client/server system for direct, single-hop access to partitioned region data in the servers.
-
-This requires a client/server installation that uses one or more partitioned regions on the server.
-
-1.  Verify that the client's pool attribute `pr-single-hop-enabled` is unset or set to true. It is true by default. (A client-side configuration sketch follows this list.)
-2.  If possible, leave the pool's `max-connections` at the default unlimited setting (-1).
-3.  If possible, use a custom data resolver to partition your server region data according to your clients' data use patterns. See [Custom-Partition Your Region Data](using_custom_partition_resolvers.html). Include the server's partition resolver implementation in the client's `CLASSPATH`. The server passes the name of the resolver for each custom partitioned region, so the client uses the proper one. If the server does not use a partition resolver, the default partitioning between server and client matches, so single hop works.
-4.  Add single-hop considerations to your overall server load balancing plan. Single-hop uses data location rather than least-loaded server to pick the servers for single-key operations. Poorly balanced single-hop data access can affect overall client/server load balancing. Some counterbalancing is done automatically because the servers with more single-key operations become more loaded and are less likely to be picked for other operations.
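-
-The following is a minimal client-side sketch of these settings using the `ClientCacheFactory` API; the locator host and port are placeholders:
-
-``` pre
-import org.apache.geode.cache.client.ClientCache;
-import org.apache.geode.cache.client.ClientCacheFactory;
-
-ClientCache clientCache = new ClientCacheFactory()
-    .addPoolLocator("locator-host", 10334)
-    .setPoolPRSingleHopEnabled(true) // true is already the default
-    .setPoolMaxConnections(-1)       // default: unlimited connections
-    .create();
-```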
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/partitioned_regions/configuring_bucket_for_pr.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/configuring_bucket_for_pr.html.md.erb b/geode-docs/developing/partitioned_regions/configuring_bucket_for_pr.html.md.erb
deleted file mode 100644
index ccb7e71..0000000
--- a/geode-docs/developing/partitioned_regions/configuring_bucket_for_pr.html.md.erb
+++ /dev/null
@@ -1,70 +0,0 @@
----
-title:  Configuring the Number of Buckets for a Partitioned Region
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-Decide how many buckets to assign to your partitioned region and set the configuration accordingly.
-
-<a id="configuring_total_buckets__section_DF52B2BF467F4DB4B8B3D16A79EFCA39"></a>
-The total number of buckets for the partitioned region determines the granularity of data storage and thus how evenly the data can be distributed. Geode distributes the buckets as evenly as possible across the data stores. The number of buckets is fixed after region creation.
-
-The partition attribute `total-num-buckets` sets the number for the entire partitioned region across all participating members. Set it using one of the following:
-
--   XML:
-
-    ``` pre
-    <region name="PR1"> 
-      <region-attributes refid="PARTITION"> 
-        <partition-attributes total-num-buckets="7"/> 
-      </region-attributes> 
-    </region> 
-    ```
-
--   Java:
-
-    ``` pre
-    RegionFactory rf = 
-        cache.createRegionFactory(RegionShortcut.PARTITION);
-    rf.setPartitionAttributes(new PartitionAttributesFactory().setTotalNumBuckets(7).create());
-    custRegion = rf.create("customer");
-    ```
-
--   gfsh:
-
-    Use the `--total-num-buckets` parameter of the `create region` command. For example:
-
-    ``` pre
-    gfsh>create region --name="PR1" --type=PARTITION --total-num-buckets=7
-    ```
-
-## <a id="configuring_total_buckets__section_C956D9BA41C546F89D07DCFE901E539F" class="no-quick-link"></a>Calculate the Total Number of Buckets for a Partitioned Region
-
-Follow these guidelines to calculate the total number of buckets for the partitioned region:
-
--   Use a prime number. This provides the most even distribution.
-   Make it at least four times as large as the number of data stores you expect to have for the region. The larger the ratio of buckets to data stores, the more evenly the load can be spread across the members. There is a trade-off between load balancing and overhead, however: managing a bucket introduces significant overhead, especially with higher levels of redundancy. For example, with three data stores this guideline calls for at least 12 buckets; rounding up to the prime number 13 yields the even distribution shown in the second figure below.
-
-You are trying to avoid the situation where some members have significantly more data entries than others. For example, compare the next two figures. This figure shows a region with three data stores and seven buckets. If all the entries are accessed at about the same rate, this configuration creates a hot spot in member M3, which has about fifty percent more data than the other data stores. M3 is likely to be a slow receiver and potential point of failure.
-
-<img src="../../images_svg/partitioned_data_buckets_1.svg" id="configuring_total_buckets__image_04B05CE3C732430C84D967A062D9EDDA" class="image" />
-
-Configuring more buckets gives you fewer entries in a bucket and a more balanced data distribution. This figure uses the same data as before but increases the number of buckets to 13. Now the data entries are distributed more evenly.
-
-<img src="../../images_svg/partitioned_data_buckets_2.svg" id="configuring_total_buckets__image_326202046D07414391BA5CBA474920CA" class="image" />
-