Posted to commits@geode.apache.org by km...@apache.org on 2016/10/12 17:11:51 UTC

[31/76] [abbrv] [partial] incubator-geode git commit: GEODE-1952 Consolidated docs under a single geode-docs directory

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/chapter_overview.html.md.erb b/geode-docs/developing/partitioned_regions/chapter_overview.html.md.erb
new file mode 100644
index 0000000..56e65e1
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/chapter_overview.html.md.erb
@@ -0,0 +1,43 @@
+---
+title:  Partitioned Regions
+---
+
+In addition to basic region management, partitioned regions include options for high availability, data location control, and data balancing across the distributed system.
+
+-   **[Understanding Partitioning](../../developing/partitioned_regions/how_partitioning_works.html)**
+
+    To use partitioned regions, you should understand how they work and your options for managing them.
+
+-   **[Configuring Partitioned Regions](../../developing/partitioned_regions/managing_partitioned_regions.html)**
+
+    Plan the configuration and ongoing management of your partitioned region for host and accessor members and configure the regions for startup.
+
+-   **[Configuring the Number of Buckets for a Partitioned Region](../../developing/partitioned_regions/configuring_bucket_for_pr.html)**
+
+    Decide how many buckets to assign to your partitioned region and set the configuration accordingly.
+
+-   **[Custom-Partitioning and Colocating Data](../../developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html)**
+
+    You can customize how Apache Geode groups your partitioned region data with custom partitioning and data colocation.
+
+-   **[Configuring High Availability for Partitioned Regions](../../developing/partitioned_regions/overview_how_pr_ha_works.html)**
+
+    By default, Apache Geode stores only a single copy of your partitioned region data among the region's data stores. You can configure Geode to maintain redundant copies of your partitioned region data for high availability.
+
+-   **[Configuring Single-Hop Client Access to Server-Partitioned Regions](../../developing/partitioned_regions/overview_how_pr_single_hop_works.html)**
+
+    Single-hop data access enables the client pool to track where a partitioned region's data is hosted in the servers. To access a single entry, the client directly contacts the server that hosts the key, in a single hop.
+
+-   **[Rebalancing Partitioned Region Data](../../developing/partitioned_regions/rebalancing_pr_data.html)**
+
+    In a distributed system with minimal contention among the concurrent threads that read and update member data, you can use rebalancing to dynamically increase or decrease your data and processing capacity.
+
+-   **[Checking Redundancy in Partitioned Regions](../../developing/partitioned_regions/checking_region_redundancy.html)**
+
+    Under some circumstances, it can be important to verify that your partitioned region data is redundant and that upon member restart, redundancy has been recovered properly across partitioned region members.
+
+-   **[Moving Partitioned Region Data to Another Member](../../developing/partitioned_regions/moving_partitioned_data.html)**
+
+    You can use the `PartitionRegionHelper` `moveBucketByKey` and `moveData` methods to explicitly move partitioned region data from one member to another.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/checking_region_redundancy.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/checking_region_redundancy.html.md.erb b/geode-docs/developing/partitioned_regions/checking_region_redundancy.html.md.erb
new file mode 100644
index 0000000..a35de98
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/checking_region_redundancy.html.md.erb
@@ -0,0 +1,38 @@
+---
+title:  Checking Redundancy in Partitioned Regions
+---
+
+Under some circumstances, it can be important to verify that your partitioned region data is redundant and that upon member restart, redundancy has been recovered properly across partitioned region members.
+
+You can verify partitioned region redundancy by making sure that the `numBucketsWithoutRedundancy` statistic is **zero** for all your partitioned regions. To check this statistic, use the following `gfsh` command:
+
+``` pre
+gfsh>show metrics --categories=partition --region=region_name
+```
+
+For example:
+
+``` pre
+gfsh>show metrics --categories=partition --region=posts
+
+Cluster-wide Region Metrics
+--------- | --------------------------- | -----
+partition | putLocalRate                | 0
+          | putRemoteRate               | 0
+          | putRemoteLatency            | 0
+          | putRemoteAvgLatency         | 0
+          | bucketCount                 | 1
+          | primaryBucketCount          | 1
+          | numBucketsWithoutRedundancy | 1
+          | minBucketSize               | 1
+          | maxBucketSize               | 0
+          | totalBucketSize             | 1
+          | averageBucketSize           | 1
+      
+```
+
+If you have `start-recovery-delay=-1` configured for your partitioned region, you will need to perform a rebalance on your region after you restart any members in your cluster in order to recover redundancy.
+
+If you have `start-recovery-delay` set to a low number, you may need to wait a short time after member restart while the region recovers redundancy.
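+
+As a hedged sketch (an API-based alternative to the statistics check above; the `region` variable is assumed to already reference your partitioned region), redundancy can also be inspected programmatically:
+
+``` pre
+PartitionRegionInfo info =
+    PartitionRegionHelper.getPartitionRegionInfo(region);
+// Zero means every bucket currently has its configured redundancy.
+int lowRedundancyBuckets = info.getLowRedundancyBucketCount();
+if (lowRedundancyBuckets > 0) {
+  System.out.println(lowRedundancyBuckets
+      + " buckets have not yet recovered redundancy");
+}
+```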
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/colocating_partitioned_region_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/colocating_partitioned_region_data.html.md.erb b/geode-docs/developing/partitioned_regions/colocating_partitioned_region_data.html.md.erb
new file mode 100644
index 0000000..f8f13a6
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/colocating_partitioned_region_data.html.md.erb
@@ -0,0 +1,111 @@
+---
+title:  Colocate Data from Different Partitioned Regions
+---
+
+By default, Geode allocates the data locations for a partitioned region independent of the data locations for any other partitioned region. You can change this policy for any group of partitioned regions, so that cross-region, related data is all hosted by the same member. This colocation speeds queries and other operations that access data from the regions.
+
+<a id="colocating_partitioned_region_data__section_131EC040055E48A6B35E981B5C845A65"></a>
+**Note:**
+If you are colocating data between regions and custom partitioning the data in the regions, all colocated regions must use partitioning mechanisms that return the same routing object. The most common approach, though not the only one, is for all colocated regions to use the same custom PartitionResolver. See [Custom-Partition Your Region Data](using_custom_partition_resolvers.html).
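+
+As a minimal sketch (the `CustomerResolver` and `TradeKey` names are hypothetical), a resolver shared by colocated regions returns the same routing object for related entries:
+
+``` pre
+public class CustomerResolver implements PartitionResolver, Declarable {
+
+  public String getName() {
+    return "CustomerResolver";
+  }
+
+  // Entries with the same customer id produce the same routing object,
+  // so related data across colocated regions lands in matching buckets.
+  public Object getRoutingObject(EntryOperation opDetails) {
+    TradeKey key = (TradeKey) opDetails.getKey();
+    return key.getCustomerId();
+  }
+
+  public void close() {}
+
+  public void init(Properties props) {}
+}
+```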
+
+Data colocation between partitioned regions generally improves the performance of data-intensive operations. You can reduce network hops for iterative operations on related data sets. Compute-heavy applications that are data-intensive can significantly increase overall throughput. For example, a query run on a patient's health records, insurance, and billing information is more efficient if all data is grouped in a single member. Similarly, a financial risk analytical application runs faster if all trades, risk sensitivities, and reference data associated with a single instrument are together.
+
+**Prerequisites**
+
+<a id="colocating_partitioned_region_data__section_5A8D752F02834146A37D9430F1CA32DA"></a>
+
+-   Understand how to configure and create your partitioned regions. See [Understanding Partitioning](how_partitioning_works.html) and [Configuring Partitioned Regions](managing_partitioned_regions.html#configure_partitioned_regions).
+-   (Optional) Understand how to custom-partition your data. See [Custom-Partition Your Region Data](using_custom_partition_resolvers.html).
+-   (Optional) If you want your colocated regions to be highly available, understand how high availability for partitioned regions works. See [Understanding High Availability for Partitioned Regions](how_pr_ha_works.html#how_pr_ha_works).
+-   (Optional) Understand how to persist your region data. See [Configure Region Persistence and Overflow](../storing_data_on_disk/storing_data_on_disk.html).
+
+**Procedure**
+
+1.  Identify one region as the central region, with which data in the other regions is explicitly colocated. If you use persistence for any of the regions, you must persist the central region.
+    1.  Create the central region before you create the others, either in cache.xml or in your code. Regions declared in cache.xml are created before regions created in code, so if you declare any of your colocated regions in the XML, you must declare the central region in the XML before the others. Geode verifies the central region's existence when the other regions are created and throws an `IllegalStateException` if it is not there. Do not add any colocation specifications to this central region.
+    2.  For all other regions, in the region partition attributes, provide the central region's name in the `colocated-with` attribute. Use one of these methods:
+        -   XML:
+
+            ``` pre
+            <cache> 
+                <region name="trades"> 
+                    <region-attributes> 
+                        <partition-attributes>  
+                            ...
+                        </partition-attributes> 
+                    </region-attributes> 
+                </region> 
+                <region name="trade_history"> 
+                    <region-attributes> 
+                        <partition-attributes colocated-with="trades">   
+                            ...
+                        </partition-attributes> 
+                    </region-attributes> 
+                </region> 
+            </cache> 
+            ```
+        -   Java:
+
+            ``` pre
+            PartitionAttributes attrs = ...
+            Region trades = cache.createRegionFactory(RegionShortcut.PARTITION)
+                .setPartitionAttributes(attrs).create("trades");
+            ...
+            attrs = new PartitionAttributesFactory()
+                .setColocatedWith(trades.getFullPath()).create();
+            Region trade_history = cache.createRegionFactory(RegionShortcut.PARTITION)
+                .setPartitionAttributes(attrs).create("trade_history");
+            ```
+        -   gfsh:
+
+            ``` pre
+            gfsh>create region --name="trades" --type=PARTITION
+            gfsh>create region --name="trade_history" --type=PARTITION --colocated-with="trades"
+            ```
+
+2.  For each of the colocated regions, use the same values for these partition attributes related to bucket management:
+    -   `recovery-delay`
+    -   `redundant-copies`
+    -   `startup-recovery-delay`
+    -   `total-num-buckets`
+
+3.  If you custom partition your region data, provide the same custom resolver to all colocated regions:
+    -   XML:
+
+        ``` pre
+        <cache> 
+            <region name="trades"> 
+                <region-attributes> 
+                    <partition-attributes>  
+                        <partition-resolver name="TradesPartitionResolver"> 
+                            <class-name>myPackage.TradesPartitionResolver</class-name>
+                        </partition-resolver>
+                    </partition-attributes> 
+                </region-attributes> 
+            </region> 
+            <region name="trade_history"> 
+                <region-attributes> 
+                    <partition-attributes colocated-with="trades">   
+                        <partition-resolver name="TradesPartitionResolver"> 
+                            <class-name>myPackage.TradesPartitionResolver</class-name>
+                        </partition-resolver>
+                    </partition-attributes> 
+                </region-attributes> 
+            </region> 
+        </cache> 
+        ```
+    -   Java:
+
+        ``` pre
+        PartitionResolver resolver = new TradesPartitionResolver();
+        PartitionAttributes attrs = 
+            new PartitionAttributesFactory()
+            .setPartitionResolver(resolver).create();
+        Region trades = cache.createRegionFactory(RegionShortcut.PARTITION)
+            .setPartitionAttributes(attrs).create("trades");
+        attrs = new PartitionAttributesFactory()
+            .setColocatedWith(trades.getFullPath()).setPartitionResolver(resolver).create();
+        Region trade_history = cache.createRegionFactory(RegionShortcut.PARTITION)
+            .setPartitionAttributes(attrs).create("trade_history");
+        ```
+    -   gfsh:
+
+        You cannot specify a partition resolver using gfsh.
+
+4.  If you want to persist data in the colocated regions, persist the central region and then persist the other regions as needed. Use the same disk store for all of the colocated regions that you persist.
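+
+    As a hedged illustration (the disk store name is hypothetical), persisting two colocated regions to the same disk store might look like this:
+
+    ``` pre
+    // One disk store holds the data for both colocated regions.
+    DiskStore store = cache.createDiskStoreFactory().create("colocatedStore");
+
+    // Persist the central region first.
+    Region trades = cache.createRegionFactory(RegionShortcut.PARTITION_PERSISTENT)
+        .setDiskStoreName("colocatedStore")
+        .create("trades");
+
+    // Then persist the colocated region, pointing at the same disk store.
+    Region tradeHistory = cache.createRegionFactory(RegionShortcut.PARTITION_PERSISTENT)
+        .setDiskStoreName("colocatedStore")
+        .setPartitionAttributes(new PartitionAttributesFactory()
+            .setColocatedWith(trades.getFullPath()).create())
+        .create("trade_history");
+    ```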
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/configure_pr_single_hop.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/configure_pr_single_hop.html.md.erb b/geode-docs/developing/partitioned_regions/configure_pr_single_hop.html.md.erb
new file mode 100644
index 0000000..5518905
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/configure_pr_single_hop.html.md.erb
@@ -0,0 +1,22 @@
+---
+title:  Configure Client Single-Hop Access to Server-Partitioned Regions
+---
+
+Configure your client/server system for direct, single-hop access to partitioned region data in the servers.
+
+This requires a client/server installation that uses one or more partitioned regions on the server.
+
+1.  Verify that the client's pool attribute `pr-single-hop-enabled` is either not set or is set to true. It is true by default.
+2.  If possible, leave the pool's `max-connections` at the default unlimited setting (-1).
+3.  If possible, use a custom data resolver to partition your server region data according to your clients' data use patterns. See [Custom-Partition Your Region Data](using_custom_partition_resolvers.html). Include the server's partition resolver implementation in the client's `CLASSPATH`. The server passes the name of the resolver for each custom-partitioned region, so the client uses the proper one. If the server does not use a partition resolver, the default partitioning between server and client matches, so single hop works.
+4.  Add single-hop considerations to your overall server load balancing plan. Single-hop access uses data location rather than the least-loaded server to pick the servers for single-key operations. Poorly balanced single-hop data access can affect overall client/server load balancing. Some counterbalancing occurs automatically, because servers with more single-key operations become more loaded and are less likely to be picked for other operations.
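+
+As a rough sketch (the locator host, port, and pool name are hypothetical), a client pool configured for single-hop access might be created programmatically like this:
+
+``` pre
+PoolFactory pf = PoolManager.createFactory();
+pf.addLocator("locator-host", 10334);
+// Single hop is enabled by default; set explicitly here for clarity.
+pf.setPRSingleHopEnabled(true);
+// Leave max-connections unlimited (-1, the default) so single hop
+// can open connections to each server as needed.
+pf.setMaxConnections(-1);
+Pool pool = pf.create("clientPool");
+```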
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/configuring_bucket_for_pr.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/configuring_bucket_for_pr.html.md.erb b/geode-docs/developing/partitioned_regions/configuring_bucket_for_pr.html.md.erb
new file mode 100644
index 0000000..7ee7133
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/configuring_bucket_for_pr.html.md.erb
@@ -0,0 +1,53 @@
+---
+title:  Configuring the Number of Buckets for a Partitioned Region
+---
+
+Decide how many buckets to assign to your partitioned region and set the configuration accordingly.
+
+<a id="configuring_total_buckets__section_DF52B2BF467F4DB4B8B3D16A79EFCA39"></a>
+The total number of buckets for the partitioned region determines the granularity of data storage and thus how evenly the data can be distributed. Geode distributes the buckets as evenly as possible across the data stores. The number of buckets is fixed after region creation.
+
+The partition attribute `total-num-buckets` sets the number for the entire partitioned region across all participating members. Set it using one of the following:
+
+-   XML:
+
+    ``` pre
+    <region name="PR1"> 
+      <region-attributes refid="PARTITION"> 
+        <partition-attributes total-num-buckets="7"/> 
+      </region-attributes> 
+    </region> 
+    ```
+
+-   Java:
+
+    ``` pre
+    RegionFactory rf = 
+        cache.createRegionFactory(RegionShortcut.PARTITION);
+    rf.setPartitionAttributes(new PartitionAttributesFactory().setTotalNumBuckets(7).create());
+    custRegion = rf.create("customer");
+    ```
+
+-   gfsh:
+
+    Use the <span class="keyword parmname">--total-num-buckets</span> parameter of the `create region` command. For example:
+
+    ``` pre
+    gfsh>create region --name="PR1" --type=PARTITION --total-num-buckets=7
+    ```
+
+## <a id="configuring_total_buckets__section_C956D9BA41C546F89D07DCFE901E539F" class="no-quick-link"></a>Calculate the Total Number of Buckets for a Partitioned Region
+
+Follow these guidelines to calculate the total number of buckets for the partitioned region:
+
+-   Use a prime number. This provides the most even distribution.
+-   Make it at least four times as large as the number of data stores you expect to have for the region. For example, with three data stores, four times that number is 12, so the next prime, 13, is a reasonable choice. The larger the ratio of buckets to data stores, the more evenly the load can be spread across the members. Note that there is a trade-off between load balancing and overhead, however: managing a bucket introduces significant overhead, especially with higher levels of redundancy.
+
+You are trying to avoid the situation where some members have significantly more data entries than others. For example, compare the next two figures. This figure shows a region with three data stores and seven buckets. If all the entries are accessed at about the same rate, this configuration creates a hot spot in member M3, which has about fifty percent more data than the other data stores. M3 is likely to be a slow receiver and potential point of failure.
+
+<img src="../../images_svg/partitioned_data_buckets_1.svg" id="configuring_total_buckets__image_04B05CE3C732430C84D967A062D9EDDA" class="image" />
+
+Configuring more buckets gives you fewer entries in a bucket and a more balanced data distribution. This figure uses the same data as before but increases the number of buckets to 13. Now the data entries are distributed more evenly.
+
+<img src="../../images_svg/partitioned_data_buckets_2.svg" id="configuring_total_buckets__image_326202046D07414391BA5CBA474920CA" class="image" />
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/configuring_ha_for_pr.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/configuring_ha_for_pr.html.md.erb b/geode-docs/developing/partitioned_regions/configuring_ha_for_pr.html.md.erb
new file mode 100644
index 0000000..a9a98fb
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/configuring_ha_for_pr.html.md.erb
@@ -0,0 +1,41 @@
+---
+title:  Configure High Availability for a Partitioned Region
+---
+
+Configure in-memory high availability for your partitioned region. Set other high-availability options, like redundancy zones and redundancy recovery strategies.
+
+Here are the main steps for configuring high availability for a partitioned region. See later sections for details.
+
+1.  Set the number of redundant copies the system should maintain of the region data. See [Set the Number of Redundant Copies](set_pr_redundancy.html#set_pr_redundancy). 
+2.  (Optional) If you want to group your data store members into redundancy zones, configure them accordingly. See [Configure Redundancy Zones for Members](set_redundancy_zones.html#set_redundancy_zones). 
+3.  (Optional) If you want Geode to only place redundant copies on different physical machines, configure for that. See [Set Enforce Unique Host](set_enforce_unique_host.html#set_pr_redundancy). 
+4.  Decide how to manage redundancy recovery and change Geode's default behavior as needed. 
+    - **After a member crashes**. If you want automatic redundancy recovery, change the configuration for that. See [Configure Member Crash Redundancy Recovery for a Partitioned Region](set_crash_redundancy_recovery.html#set_crash_redundancy_recovery). 
+    - **After a member joins**. If you do *not* want immediate, automatic redundancy recovery, change the configuration for that. See [Configure Member Join Redundancy Recovery for a Partitioned Region](set_join_redundancy_recovery.html#set_join_redundancy_recovery). 
+
+5.  Decide how many buckets Geode should attempt to recover in parallel when performing redundancy recovery. By default, the system recovers up to 8 buckets in parallel. Use the `gemfire.MAX_PARALLEL_BUCKET_RECOVERIES` system property to increase or decrease the maximum number of buckets to recover in parallel any time redundancy recovery is performed.
+6.  For all but fixed partitioned regions, review the points at which you kick off rebalancing. Redundancy recovery is done automatically at the start of any rebalancing. This is most important if you run with no automated recovery after member crashes or joins. See [Rebalancing Partitioned Region Data](rebalancing_pr_data.html#rebalancing_pr_data). 
+
+During runtime, you can add capacity by adding new members for the region. For regions that do not use fixed partitioning, you can also kick off a rebalancing operation to spread the region buckets among all members.
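+
+As an illustrative sketch only (the region name and attribute values are hypothetical), the redundancy-related partition attributes can be set together in Java:
+
+``` pre
+PartitionAttributes attrs = new PartitionAttributesFactory()
+    // Maintain one redundant copy of each bucket.
+    .setRedundantCopies(1)
+    // Wait 60 seconds after a member crashes before recovering
+    // redundancy on the remaining members (-1, the default,
+    // disables crash recovery).
+    .setRecoveryDelay(60000)
+    // Recover redundancy immediately when a new member joins (the default).
+    .setStartupRecoveryDelay(0)
+    .create();
+Region pr = cache.createRegionFactory(RegionShortcut.PARTITION)
+    .setPartitionAttributes(attrs).create("PR1");
+```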
+
+-   **[Set the Number of Redundant Copies](../../developing/partitioned_regions/set_pr_redundancy.html)**
+
+    Configure in-memory high availability for your partitioned region by specifying the number of secondary copies you want to maintain in the region's data stores.
+
+-   **[Configure Redundancy Zones for Members](../../developing/partitioned_regions/set_redundancy_zones.html)**
+
+    Group members into redundancy zones so Geode will separate redundant data copies into different zones.
+
+-   **[Set Enforce Unique Host](../../developing/partitioned_regions/set_enforce_unique_host.html)**
+
+    Configure Geode to use only unique physical machines for redundant copies of partitioned region data.
+
+-   **[Configure Member Crash Redundancy Recovery for a Partitioned Region](../../developing/partitioned_regions/set_crash_redundancy_recovery.html)**
+
+    Configure whether and how redundancy is recovered in a partitioned region after a member crashes.
+
+-   **[Configure Member Join Redundancy Recovery for a Partitioned Region](../../developing/partitioned_regions/set_join_redundancy_recovery.html)**
+
+    Configure whether and how redundancy is recovered in a partitioned region after a member joins.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/custom_partitioning_and_data_colocation.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/custom_partitioning_and_data_colocation.html.md.erb b/geode-docs/developing/partitioned_regions/custom_partitioning_and_data_colocation.html.md.erb
new file mode 100644
index 0000000..0cd5f63
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/custom_partitioning_and_data_colocation.html.md.erb
@@ -0,0 +1,41 @@
+---
+title:  Understanding Custom Partitioning and Data Colocation
+---
+
+Custom partitioning and data colocation can be used separately or in conjunction with one another.
+
+## <a id="custom_partitioning_and_data_colocation__section_ABFEE9CB17AF44F1AE252AC10FB5E999" class="no-quick-link"></a>Custom Partitioning
+
+Use custom partitioning to group like entries into buckets within a region. By default, Geode assigns new entries to buckets based on the entry key contents. With custom partitioning, you can assign your entries to buckets in whatever way you want.
+
+You can generally get better performance if you use custom partitioning to group similar data within a region. For example, a query run on all accounts created in January runs faster if all January account data is hosted by a single member. Grouping all data for a single customer can improve performance of data operations that work on customer data. Data-aware function execution takes advantage of custom partitioning.
+
+This figure shows a region with customer data that is grouped into buckets by customer.
+
+<img src="../../images_svg/custom_partitioned.svg" id="custom_partitioning_and_data_colocation__image_1D37D547D3244171BB9CADAEC88E7649" class="image" />
+
+With custom partitioning, you have two choices:
+
+-   **Standard custom partitioning**. With standard partitioning, you group entries into buckets, but you do not specify where the buckets reside. Geode always keeps the entries in the buckets you have specified, but may move the buckets around for load balancing.
+-   **Fixed custom partitioning**. With fixed partitioning, you group entries into buckets as with standard partitioning, and you additionally specify the exact member where each data entry resides. You do this by assigning the data entry to a bucket and to a partition, and by naming specific members as primary and secondary hosts of each partition.
+
+    This gives you complete control over the locations of your primary and any secondary buckets for the region. This can be useful when you want to store specific data on specific physical machines or when you need to keep data close to certain hardware elements.
+
+    Fixed partitioning has these requirements and caveats:
+
+    -   Geode cannot rebalance fixed partition region data because it cannot move the buckets around among the host members. You must carefully consider your expected data loads for the partitions you create.
+    -   With fixed partitioning, the region configuration is different between host members. Each member identifies the named partitions it hosts and whether it is hosting the primary copy or a secondary copy. You then program a fixed partition resolver to return the partition id, so each entry is placed on the right members. Only one member can be primary for a particular partition name, and that member cannot be the partition's secondary.
+
+## <a id="custom_partitioning_and_data_colocation__section_D2C66951FE38426F9C05050D2B9028D8" class="no-quick-link"></a>Data Colocation Between Regions
+
+With data colocation, Geode stores entries that are related across multiple data regions in a single member. Geode does this by storing all of the regions' buckets with the same ID together in the same member. During rebalancing operations, Geode moves these bucket groups together or not at all.
+
+So, for example, if you have one region with customer contact information and another region with customer orders, you can use colocation to keep all contact information and all orders for a single customer in a single member. This way, any operation done for a single customer uses the cache of only a single member.
+
+This figure shows two regions with data colocation where the data is partitioned by customer type.
+
+<img src="../../images_svg/colocated_partitioned_regions.svg" id="custom_partitioning_and_data_colocation__image_525AC474950F473ABCDE8E372583C5DF" class="image" />
+
+Data colocation requires the same data partitioning mechanism for all of the colocated regions. You can use the default partitioning provided by Geode or custom partitioning.
+
+You must use the same high availability settings across your colocated regions.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/how_partitioning_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/how_partitioning_works.html.md.erb b/geode-docs/developing/partitioned_regions/how_partitioning_works.html.md.erb
new file mode 100644
index 0000000..68e8dd2
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/how_partitioning_works.html.md.erb
@@ -0,0 +1,41 @@
+---
+title:  Understanding Partitioning
+---
+
+To use partitioned regions, you should understand how they work and your options for managing them.
+
+<a id="how_partitioning_works__section_B540C49A80124551853AFCE2DE6BCFE8"></a>
+During operation, a partitioned region looks like one large virtual region, with the same logical view held by all of the members where the region is defined.
+<img src="../../images_svg/how_partitioning_works_1.svg" id="how_partitioning_works__image_305566EA091A4CBBB108BE0EA7658C0A" class="image" />
+
+For each member where you define the region, you can choose how much space to allow for region data storage, including no local storage at all. The member can access all region data regardless of how much is stored locally.
+<img src="../../images_svg/how_partitioning_works_2.svg" id="how_partitioning_works__image_773C91B76D5E4739A1F81D9DF918BCDB" class="image" />
+
+A distributed system can have multiple partitioned regions, and it can mix partitioned regions with distributed regions and local regions. The usual requirement for unique region names, except for regions with local scope, still applies. A single member can host multiple partitioned regions.
+
+## <a id="how_partitioning_works__section_260C2455FC8C40A094B39BF585D06B7D" class="no-quick-link"></a>Data Partitioning
+
+Geode automatically determines the physical location of data in the members that host a partitioned region's data. Geode breaks partitioned region data into units of storage known as buckets and stores each bucket in a region host member. Buckets are distributed in accordance with the member's region attribute settings.
+
+When an entry is created, it is assigned to a bucket. Keys are grouped together in a bucket and always remain there. If the configuration allows, the buckets may be moved between members to balance the load.
+
+You must run the data stores needed to accommodate storage for the partitioned region's buckets. You can start new data stores on the fly. When a new data store creates the region, it takes responsibility for as many buckets as allowed by the partitioned region and member configuration.
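+
+As a hedged sketch (the member roles and memory figure are hypothetical), the same region can be defined as a data store on one member and as an accessor with no local storage on another:
+
+``` pre
+// On a data store member: host up to 512 MB of bucket data locally.
+PartitionAttributes storeAttrs = new PartitionAttributesFactory()
+    .setLocalMaxMemory(512).create();
+Region pr = cache.createRegionFactory(RegionShortcut.PARTITION)
+    .setPartitionAttributes(storeAttrs).create("PR1");
+
+// On an accessor member: local-max-memory of 0 means no local storage,
+// but the member can still access all of the region's data.
+PartitionAttributes accessorAttrs = new PartitionAttributesFactory()
+    .setLocalMaxMemory(0).create();
+Region prAccessor = cache.createRegionFactory(RegionShortcut.PARTITION)
+    .setPartitionAttributes(accessorAttrs).create("PR1");
+```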
+
+You can customize how Geode groups your partitioned region data with custom partitioning and data colocation.
+
+## <a id="how_partitioning_works__section_155F9D4AB539473F848FD05E413B21B3" class="no-quick-link"></a>Partitioned Region Operation
+
+A partitioned region operates much like a non-partitioned region with distributed scope. Most of the standard `Region` methods are available, although some methods that are normally local operations become distributed operations, because they work on the partitioned region as a whole instead of the local cache. For example, a `put` or `create` into a partitioned region may not actually be stored into the cache of the member that called the operation. The retrieval of any entry requires no more than one hop between members.
+
+Partitioned regions support the client/server model, just like other regions. If you need to connect dozens of clients to a single partitioned region, using servers greatly improves performance.
+
+## <a id="how_partitioning_works__section_3B47A291ADAB4988AF9D0DF34BC2CDAC" class="no-quick-link"></a>Additional Information About Partitioned Regions
+
+Keep the following in mind about partitioned regions:
+
+-   Partitioned regions never run asynchronously. Operations in partitioned regions always wait for acknowledgement from the caches containing the original data entry and any redundant copies.
+-   A partitioned region needs a cache loader in every region data store (`local-max-memory` &gt; 0).
+-   Geode distributes the data buckets as evenly as possible across all members storing the partitioned region data, within the limits of any custom partitioning or data colocation that you use. The number of buckets allotted for the partitioned region determines the granularity of data storage and thus how evenly the data can be distributed. The number of buckets is a total for the entire region across the distributed system.
+-   In rebalancing data for the region, Geode moves buckets, but does not move data around inside the buckets.
+-   You can query partitioned regions, but there are certain limitations. See [Querying Partitioned Regions](../querying_basics/querying_partitioned_regions.html#querying_partitioned_regions) for more information.
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/how_pr_ha_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/how_pr_ha_works.html.md.erb b/geode-docs/developing/partitioned_regions/how_pr_ha_works.html.md.erb
new file mode 100644
index 0000000..5082cc4
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/how_pr_ha_works.html.md.erb
@@ -0,0 +1,44 @@
+---
+title:  Understanding High Availability for Partitioned Regions
+---
+
+With high availability, each member that hosts data for the partitioned region gets some primary copies and some redundant (secondary) copies.
+
+<a id="how_pr_ha_works__section_04FDCC6C2130496F8B33B9DF5CDED362"></a>
+
+With redundancy, if one member fails, operations continue on the partitioned region with no interruption of service:
+
+-   If the member hosting the primary copy is lost, Geode makes a secondary copy the primary. This might cause a temporary loss of redundancy, but not a loss of data.
+-   Whenever there are not enough secondary copies to satisfy redundancy, the system works to recover redundancy by assigning another member as secondary and copying the data to it.
+
+**Note:**
+You can still lose cached data when you are using redundancy if enough members go down in a short enough time span.
+
+You can configure how the system works to recover redundancy when it is not satisfied. You can configure recovery to take place immediately or, if you want to give replacement members a chance to start up, you can configure a wait period. Redundancy recovery is also automatically attempted during any partitioned data rebalancing operation. Use the `gemfire.MAX_PARALLEL_BUCKET_RECOVERIES` system property to configure the maximum number of buckets that are recovered in parallel. By default, up to 8 buckets are recovered in parallel any time the system attempts to recover redundancy.
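+
+As a minimal sketch (the value 12 is arbitrary), the property is typically passed as a JVM argument such as `-Dgemfire.MAX_PARALLEL_BUCKET_RECOVERIES=12` when the server starts, or set programmatically before the cache is created:
+
+``` pre
+// Must be set before the cache is created for Geode to pick it up.
+System.setProperty("gemfire.MAX_PARALLEL_BUCKET_RECOVERIES", "12");
+```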
+
+Without redundancy, the loss of any of the region's data stores causes the loss of some of the region's cached data. Generally, you should not use redundancy when your applications can directly read from another data source, or when write performance is more important than read performance.
+
+## <a id="how_pr_ha_works__section_7045530D601F4C65A062B5FDD0DD9206" class="no-quick-link"></a>Controlling Where Your Primaries and Secondaries Reside
+
+By default, Geode places your primary and secondary data copies for you, avoiding placement of two copies on the same physical machine. If there are not enough machines to keep different copies separate, Geode places copies on the same physical machine. You can change this behavior, so Geode only places copies on separate machines.
+
+You can also control which members store your primary and secondary data copies. Geode provides two options:
+
+-   **Fixed custom partitioning**. This option is set for the region. Fixed partitioning gives you absolute control over where your region data is hosted. With fixed partitioning, you provide Geode with the code that specifies the bucket and data store for each data entry in the region. When you use this option with redundancy, you specify the primary and secondary data stores. Fixed partitioning does not participate in rebalancing because all bucket locations are fixed by you.
+-   **Redundancy zones**. This option is set at the member level. Redundancy zones let you separate primary and secondary copies by member groups, or zones. You assign each data host to a zone. Then Geode places redundant copies in different redundancy zones, the same as it places redundant copies on different physical machines. You can use this to split data copies across different machine racks or networks. This option allows you to add members on the fly and use rebalancing to redistribute the data load, with redundant data maintained in separate zones. When you use redundancy zones, Geode will not place two copies of the data in the same zone, so make sure you have enough zones.
+
+## <a id="how_pr_ha_works__section_87A2429B6277497184926E08E64B81C6" class="no-quick-link"></a>Running Processes in Virtual Machines
+
+By default, Geode stores redundant copies on different machines. When you run your processes in virtual machines, the normal view of the machine becomes the VM and not the physical machine. If you run multiple VMs on the same physical machine, you could end up storing partitioned region primary buckets in separate VMs, but on the same physical machine as your secondaries. If the physical machine fails, you can lose data. When you run in VMs, you can configure Geode to identify the physical machine and store redundant copies on different physical machines.
+
+## <a id="how_pr_ha_works__section_CAB9440BABD6484D99525766E937CB55" class="no-quick-link"></a>Reads and Writes in Highly-Available Partitioned Regions
+
+Geode treats reads and writes differently in highly-available partitioned regions than in other regions because the data is available in multiple members:
+
+-   Write operations (like `put` and `create`) go to the primary for the data keys and then are distributed synchronously to the redundant copies. Events are sent to the members configured with `subscription-attributes` `interest-policy` set to `all`.
+-   Read operations go to any member holding a copy of the data, with the local cache favored, so a read intensive system can scale much better and handle higher loads.
+
+In this figure, M1 is reading W, Y, and Z. It gets W directly from its local copy. Since it doesn't have a local copy of Y or Z, it goes to a cache that does, picking the source cache at random.
+
+<img src="../../images_svg/partitioned_data_HA.svg" id="how_pr_ha_works__image_574D1A1E641944D2A2DE68C4618D84B4" class="image" />
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/how_pr_single_hop_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/how_pr_single_hop_works.html.md.erb b/geode-docs/developing/partitioned_regions/how_pr_single_hop_works.html.md.erb
new file mode 100644
index 0000000..9002719
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/how_pr_single_hop_works.html.md.erb
@@ -0,0 +1,31 @@
+---
+title:  Understanding Client Single-Hop Access to Server-Partitioned Regions
+---
+
+With single-hop access the client connects to every server, so more connections are generally used. This works fine for smaller installations, but is a barrier to scaling.
+
+If you have a large installation with many clients, you may want to disable single hop by setting the pool attribute `pr-single-hop-enabled` to false in your pool declarations.
+
+Without single hop, the client uses whatever server connection is available, the same as with all other operations. The server that receives the request determines the data location and contacts the host, which might be a different server, so more requests require multiple hops within the server system.
+
+**Note:**
+Single hop is used for the following operations: `put`, `get`, `destroy`, `putAll`, `getAll`, `removeAll` and `onRegion` function execution.
+
+Even with single hop access enabled, you will occasionally see some multiple-hop behavior. To perform single-hop data access, clients automatically get metadata from the servers about where the entry buckets are hosted. The metadata is maintained lazily. It is only updated after a single-hop operation ends up needing multiple hops, an indicator of stale metadata in the client.
+
+## <a id="how_pr_single_hop_works__section_AE4A6DA0064C4D5280336DD65CB107CC" class="no-quick-link"></a>Single Hop and the Pool max-connections Setting
+
+Avoid limiting the pool's `max-connections` setting when single hop is enabled. Limiting the pool's connections with single hop can cause connection thrashing, throughput loss, and server log bloat.
+
+If you need to limit the pool's connections, either disable single hop or keep a close watch on your system for these negative effects.
+
+Setting no limit on connections, however, can result in too many connections to your servers, possibly causing you to run up against your system's file handle limits. Review your anticipated connection use and make sure your servers are able to accommodate it.
+
+## <a id="how_pr_single_hop_works__section_99F27B724E5F4008BC8878D1CB4B9821" class="no-quick-link"></a>Balancing Single-Hop Server Connection Use
+
+Single-hop gives the biggest benefits when data access is well balanced across your servers. In particular, the loads for client/server connections can get out of balance if you have these in combination:
+
+-   Servers that are empty data accessors or that do not host the data the clients access through single-key operations
+-   Many single-key operations from the clients
+
+If data access is greatly out of balance, clients can thrash trying to get to the data servers. In this case, it might be faster to disable single hop and go through servers that do not host the data.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/join_query_partitioned_regions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/join_query_partitioned_regions.html.md.erb b/geode-docs/developing/partitioned_regions/join_query_partitioned_regions.html.md.erb
new file mode 100644
index 0000000..b8399e2
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/join_query_partitioned_regions.html.md.erb
@@ -0,0 +1,80 @@
+---
+title:  Performing an Equi-Join Query on Partitioned Regions
+---
+
+To perform equi-join operations between partitioned regions, or between partitioned and replicated regions, use the `query.execute` method and supply it with a function execution context. Geode's FunctionService executor is required because join operations are not yet directly supported on partitioned regions without a function execution context.
+
+See [Partitioned Region Query Restrictions](../query_additional/partitioned_region_query_restrictions.html#concept_5353476380D44CC1A7F586E5AE1CE7E8) for more information on partitioned region query limitations.
+
+For example, let's say your equi-join query is the following:
+
+``` pre
+SELECT DISTINCT * FROM /QueryRegion1 r1,
+/QueryRegion2 r2 WHERE r1.ID = r2.ID
+```
+
+In this example, QueryRegion2 is colocated with QueryRegion1, and both regions hold the same type of data objects.
+
+On the server side:
+
+``` pre
+Function prQueryFunction1 = new QueryFunction();
+FunctionService.registerFunction(prQueryFunction1);
+
+public class QueryFunction extends FunctionAdapter {
+   @Override
+   public void execute(FunctionContext context) {
+     Cache cache = CacheFactory.getAnyInstance();
+     QueryService queryService = cache.getQueryService();
+     ArrayList arguments = (ArrayList)(context.getArguments());
+     String qstr = (String)arguments.get(0);
+     try {
+       Query query = queryService.newQuery(qstr);
+       // Executing against the RegionFunctionContext restricts the
+       // query to the partitioned region data local to this member.
+       SelectResults result =
+           (SelectResults)query.execute((RegionFunctionContext)context);
+       context.getResultSender().sendResult((ArrayList)result.asList());
+       context.getResultSender().lastResult(null);
+     } catch (Exception e) {
+       // handle exception
+     }
+   }
+}
+```
+
+On the server side, `Query.execute()` operates on the local data of the partitioned region.
+
+On the client side:
+
+``` pre
+ 
+Function function = new QueryFunction();
+String queryString = "SELECT DISTINCT * FROM /QueryRegion1 r1, "
+    + "/QueryRegion2 r2 WHERE r1.ID = r2.ID";
+ArrayList argList = new ArrayList();
+argList.add(queryString);
+Object result = FunctionService.onRegion(CacheFactory.getAnyInstance()
+     .getRegion("QueryRegion1" ))
+     .withArgs(argList).execute(function).getResult();
+ArrayList resultList = (ArrayList)result;
+resultList.trimToSize();
+List queryResults = null;
+if (resultList.size() != 0) {
+   queryResults = new ArrayList();
+   for (Object obj : resultList) {
+      if (obj != null) {
+         queryResults.addAll((ArrayList)obj);
+      }
+   }
+}
+```
+
+On the client side, note that you can specify a bucket filter while invoking FunctionService.onRegion(). In this case, the query engine relies on FunctionService to direct the query to specific nodes.
+
+**Additional Notes on Using the Query.execute and RegionFunctionContext APIs**
+
+You can also pass multiple parameters (besides the query itself) to the query function by specifying them in the client-side code (`FunctionService.onRegion(..).withArgs()`). You can then handle the parameters inside the function on the server side using `context.getArguments`. The order in which you specify the parameters does not matter, as long as the parameter handling on the server matches the order used in the client.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/managing_partitioned_regions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/managing_partitioned_regions.html.md.erb b/geode-docs/developing/partitioned_regions/managing_partitioned_regions.html.md.erb
new file mode 100644
index 0000000..fd7494f
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/managing_partitioned_regions.html.md.erb
@@ -0,0 +1,25 @@
+---
+title:  Configuring Partitioned Regions
+---
+
+Plan the configuration and ongoing management of your partitioned region for host and accessor members and configure the regions for startup.
+
+<a id="configure_partitioned_regions__section_241583D88E244AB6AB5CD05BF55F6A0A"></a>
+Before you begin, understand [Basic Configuration and Programming](../../basic_config/book_intro.html).
+
+1.  Start your region configuration using one of the `PARTITION` region shortcut settings. See [Region Shortcuts and Custom Named Region Attributes](../../basic_config/data_regions/region_shortcuts.html).
+2.  If you need high availability for your partitioned region, configure for that. See [Configure High Availability for a Partitioned Region](configuring_ha_for_pr.html).
+3.  Estimate the amount of space needed for the region. If you use redundancy, this is the maximum for all primary and secondary copies stored in the member. For example, with a redundancy of one, each region data entry requires twice the space it would with no redundancy, because the entry is stored twice. See [Memory Requirements for Cached Data](../../reference/topics/memory_requirements_for_cache_data.html#calculating_memory_requirements).
+4.  Configure the total number of buckets for the region. This number must be the same for colocated regions. See [Configuring the Number of Buckets for a Partitioned Region](configuring_bucket_for_pr.html#configuring_total_buckets).
+5.  Configure your members' data storage and data loading for the region:
+    1.  You can have members with no local data storage and members with varying amounts of storage. Determine the maximum memory available in your different member types for this region. You set these values in the `partition-attributes` `local-max-memory` setting. This is the only setting in `partition-attributes` that can vary between members. Use these maximum values and your estimates for region memory requirements to help you determine how many members to start out with for the region.
+    2.  For members that store data for the region (`local-max-memory` greater than 0), define a data loader. See [Implement a Data Loader](../outside_data_sources/implementing_data_loaders.html#implementing_data_loaders).
+    3.  If you have members with no local data storage (`local-max-memory` set to 0), review your system startup/shutdown procedures. Make sure there is always at least one member with local data storage running when any members with no storage are running.
+
+6.  If you want to custom partition the data in your region or colocate data between multiple regions, code and configure accordingly. See [Understanding Custom Partitioning and Data Colocation](custom_partitioning_and_data_colocation.html#custom_partitioning_and_data_colocation).
+7.  Plan your partition rebalancing strategy and configure and program for that. See [Rebalancing Partitioned Region Data](rebalancing_pr_data.html#rebalancing_pr_data).
+
+**Note:**
+To configure a partitioned region using gfsh, see [gfsh Command Help](../../tools_modules/gfsh/gfsh_command_index.html#concept_C291647179C5407A876CC7FCF91CF756).
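+
+As an illustrative sketch only (the region name, memory limit, bucket count, and loader class are hypothetical), several of these steps might come together in Java as follows:
+
+``` pre
+PartitionAttributes attrs = new PartitionAttributesFactory()
+    // Step 4: the same total bucket count must be used by colocated regions.
+    .setTotalNumBuckets(113)
+    // Step 5a: allow up to 512 MB of local storage on this member.
+    .setLocalMaxMemory(512)
+    .create();
+Region pr = cache.createRegionFactory(RegionShortcut.PARTITION)
+    .setPartitionAttributes(attrs)
+    // Step 5b: members that store data define a loader.
+    .setCacheLoader(new MyDataLoader())
+    .create("PR1");
+```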
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/moving_partitioned_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/moving_partitioned_data.html.md.erb b/geode-docs/developing/partitioned_regions/moving_partitioned_data.html.md.erb
new file mode 100644
index 0000000..054f7fe
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/moving_partitioned_data.html.md.erb
@@ -0,0 +1,59 @@
+---
+title:  Moving Partitioned Region Data to Another Member
+---
+
+You can use the `PartitionRegionHelper` `moveBucketByKey` and `moveData` methods to explicitly move partitioned region data from one member to another.
+
+The `moveBucketByKey` method moves the bucket that contains the specified key from a source member to a destination member. For example, you could use the method to move a popular product item to a new, empty member to reduce load on the source member.
+
+For example:
+
+``` pre
+Object product = ...
+Region r = ...
+DistributedSystem ds = ...
+String memberName = ...
+
+//Find the member that is currently hosting the product.
+Set<DistributedMember> sourceMembers =
+PartitionRegionHelper.getAllMembersForKey(r, product);
+
+//Find the member to move the product to.
+DistributedMember destination = ds.findDistributedMember(memberName);
+
+//In this example we assume there is always at least one source.
+//In practice, you should check that at least one source
+//for the data is available.
+DistributedMember source = sourceMembers.iterator().next();
+
+//Move the bucket to the new node. The bucket will
+//be moved when this method completes. It throws an exception
+//if there is a problem or invalid arguments.
+PartitionRegionHelper.moveBucketByKey(r, source, destination, product);
+```
+
+See the Java API documentation for `org.apache.geode.cache.partition.PartitionRegionHelper.moveBucketByKey` for more details.
+
+The `moveData` method moves data up to a given percentage (measured in bytes) from a source member to a destination member. For example, you could use this method to move a specified percentage of data from an overloaded member to another member to improve distribution.
+
+For example:
+
+``` pre
+Region r = ...
+DistributedSystem ds = ...
+String sourceName = ...
+String destName = ...
+
+//Find the source member.
+DistributedMember source = ds.findDistributedMember(sourceName);
+DistributedMember destination = ds.findDistributedMember(destName);
+
+//Move up to 20% of the data from the source to the destination node.
+PartitionRegionHelper.moveData(r, source, destination, 20);
+```
+
+See the Java API documentation for `org.apache.geode.cache.partition.PartitionRegionHelper.moveData` for more details.
+
+For more information on partitioned regions and rebalancing, see [Partitioned Regions](chapter_overview.html).
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html.md.erb b/geode-docs/developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html.md.erb
new file mode 100644
index 0000000..3cf5c10
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html.md.erb
@@ -0,0 +1,19 @@
+---
+title:  Custom-Partitioning and Colocating Data
+---
+
+You can customize how Apache Geode groups your partitioned region data with custom partitioning and data colocation.
+
+-   **[Understanding Custom Partitioning and Data Colocation](../../developing/partitioned_regions/custom_partitioning_and_data_colocation.html)**
+
+    Custom partitioning and data colocation can be used separately or in conjunction with one another.
+
+-   **[Custom-Partition Your Region Data](../../developing/partitioned_regions/using_custom_partition_resolvers.html)**
+
+    By default, Geode partitions each data entry into a bucket using a hashing policy on the key. Additionally, the physical location of the key-value pair is abstracted away from the application. You can change these policies for a partitioned region. You can provide your own data partitioning resolver and you can additionally specify which members host which data buckets.
+
+-   **[Colocate Data from Different Partitioned Regions](../../developing/partitioned_regions/colocating_partitioned_region_data.html)**
+
+    By default, Geode allocates the data locations for a partitioned region independent of the data locations for any other partitioned region. You can change this policy for any group of partitioned regions, so that cross-region, related data is all hosted by the same member. This colocation speeds queries and other operations that access data from the regions.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/overview_how_pr_ha_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/overview_how_pr_ha_works.html.md.erb b/geode-docs/developing/partitioned_regions/overview_how_pr_ha_works.html.md.erb
new file mode 100644
index 0000000..7b182c5
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/overview_how_pr_ha_works.html.md.erb
@@ -0,0 +1,15 @@
+---
+title:  Configuring High Availability for Partitioned Regions
+---
+
+By default, Apache Geode stores only a single copy of your partitioned region data among the region's data stores. You can configure Geode to maintain redundant copies of your partitioned region data for high availability.
+
+-   **[Understanding High Availability for Partitioned Regions](../../developing/partitioned_regions/how_pr_ha_works.html)**
+
+    With high availability, each member that hosts data for the partitioned region gets some primary copies and some redundant (secondary) copies.
+
+-   **[Configure High Availability for a Partitioned Region](../../developing/partitioned_regions/configuring_ha_for_pr.html)**
+
+    Configure in-memory high availability for your partitioned region. Set other high-availability options, like redundancy zones and redundancy recovery strategies.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/overview_how_pr_single_hop_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/overview_how_pr_single_hop_works.html.md.erb b/geode-docs/developing/partitioned_regions/overview_how_pr_single_hop_works.html.md.erb
new file mode 100644
index 0000000..651c851
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/overview_how_pr_single_hop_works.html.md.erb
@@ -0,0 +1,15 @@
+---
+title:  Configuring Single-Hop Client Access to Server-Partitioned Regions
+---
+
+Single-hop data access enables the client pool to track where a partitioned region's data is hosted in the servers. To access a single entry, the client directly contacts the server that hosts the key, in a single hop.
+
+-   **[Understanding Client Single-Hop Access to Server-Partitioned Regions](../../developing/partitioned_regions/how_pr_single_hop_works.html)**
+
+    With single-hop access, the client connects to every server, so more connections are generally used. This works fine for smaller installations, but is a barrier to scaling.
+
+-   **[Configure Client Single-Hop Access to Server-Partitioned Regions](../../developing/partitioned_regions/configure_pr_single_hop.html)**
+
+    Configure your client/server system for direct, single-hop access to partitioned region data in the servers.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/rebalancing_pr_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/rebalancing_pr_data.html.md.erb b/geode-docs/developing/partitioned_regions/rebalancing_pr_data.html.md.erb
new file mode 100644
index 0000000..93c31e7
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/rebalancing_pr_data.html.md.erb
@@ -0,0 +1,89 @@
+---
+title:  Rebalancing Partitioned Region Data
+---
+
+In a distributed system with minimal contention among the concurrent threads that read from or update the members, you can use rebalancing to dynamically increase or decrease your data and processing capacity.
+
+<a id="rebalancing_pr_data__section_D3649ADD28DB4FF78C47A3E428C80510"></a>
+Rebalancing is a member operation. It affects all partitioned regions defined by the member, regardless of whether the member hosts data for the regions. The rebalancing operation performs two tasks:
+
+1.  If the configured partition region redundancy is not satisfied, rebalancing does what it can to recover redundancy. See [Configure High Availability for a Partitioned Region](configuring_ha_for_pr.html).
+2.  Rebalancing moves the partitioned region data buckets between host members as needed to establish the fairest possible balance of data and processing load across the distributed system.
+
+For efficiency, when starting multiple members, trigger the rebalance a single time, after you have added all members.
+
+**Note:**
+If you have transactions running in your system, be careful in planning your rebalancing operations. Rebalancing may move data between members, which could cause a running transaction to fail with a `TransactionDataRebalancedException`. Fixed custom partitioning prevents rebalancing altogether. All other data partitioning strategies allow rebalancing and can result in this exception unless you run your transactions and your rebalancing operations at different times.
+
+Kick off a rebalance using one of the following:
+
+-   `gfsh` command. First, start a `gfsh` prompt and connect to the Geode distributed system. Then type the following command:
+
+    ``` pre
+    gfsh>rebalance
+    ```
+
+    Optionally, you can specify regions to include in or exclude from rebalancing, specify a time-out for the rebalance operation, or just [simulate a rebalance operation](rebalancing_pr_data.html#rebalancing_pr_data__section_495FEE48ED60433BADB7D36C73279C89). Type `help rebalance` or see [rebalance](../../tools_modules/gfsh/command-pages/rebalance.html) for more information.
+
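+    For example, a minimal sketch (the region name `PR1` is hypothetical) that confines rebalancing to one region and sets a two-minute time-out:
+
+    ``` pre
+    gfsh>rebalance --include-region=/PR1 --time-out=120
+    ```
+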
+-   API call:
+
+    ``` pre
+    ResourceManager manager = cache.getResourceManager(); 
+    RebalanceOperation op = manager.createRebalanceFactory().start(); 
+// Wait until the rebalance is complete, and then get the results
+RebalanceResults results = op.getResults(); 
+// These are some of the details we can get about the run from the API
+System.out.println("Took " + results.getTotalTime() + " milliseconds\n"); 
+System.out.println("Transferred " + results.getTotalBucketTransferBytes() + " bytes\n");
+    ```
+
+You can also simulate a rebalance through the API, to see whether it is worth running:
+
+``` pre
+ResourceManager manager = cache.getResourceManager(); 
+RebalanceOperation op = manager.createRebalanceFactory().simulate(); 
+RebalanceResults results = op.getResults(); 
+System.out.println("Rebalance would transfer " + results.getTotalBucketTransferBytes() +" bytes "); 
+System.out.println(" and create " + results.getTotalBucketCreatesCompleted() + " buckets.\n");
+```
+
+## <a id="rebalancing_pr_data__section_1592413D533D454D9E5ACFCDC4685DD1" class="no-quick-link"></a>How Partitioned Region Rebalancing Works
+
+The rebalancing operation runs asynchronously.
+
+By default, rebalancing is performed on one partitioned region at a time. For regions that have colocated data, the rebalancing works on the regions as a group, maintaining the data colocation between the regions.
+
+You can optionally rebalance multiple regions in parallel by setting the `gemfire.resource.manager.threads` system property. Setting this property to a value greater than 1 enables Geode to rebalance multiple regions in parallel, any time a rebalance operation is initiated using the API.
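+
+For example, a sketch of setting this system property when a member starts, via the `gfsh` `--J` option (the server name is hypothetical):
+
+``` pre
+gfsh>start server --name=server1 --J=-Dgemfire.resource.manager.threads=2
+```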
+
+You can continue to use your partitioned regions normally while rebalancing is in progress. Read operations, write operations, and function executions continue while data is moving. If a function is executing on a local data set, you may see a performance degradation if that data moves to another host during function execution. Future function invocations are routed to the correct member.
+
+Geode tries to ensure that each member uses the same percentage of its available space for each partitioned region. A member's available space for a region is set with the `partition-attributes` `local-max-memory` setting.
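+
+For example, a minimal sketch that caps a member's share of a hypothetical partitioned region at 512 megabytes:
+
+``` pre
+<region name="PR1">
+  <region-attributes refid="PARTITION">
+    <partition-attributes local-max-memory="512"/>
+  </region-attributes>
+</region>
+```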
+
+Partitioned region rebalancing:
+
+-   Does not allow the `local-max-memory` setting to be exceeded unless LRU eviction is enabled with overflow to disk.
+-   Places multiple copies of the same bucket on different host IP addresses whenever possible.
+-   Resets entry time to live and idle time statistics during bucket migration.
+-   Replaces offline members.
+
+## <a id="rebalancing_pr_data__section_BE71EE52DE1A4275BC7854CA597797F4" class="no-quick-link"></a>When to Rebalance a Partitioned Region
+
+You typically want to trigger rebalancing when capacity is increased or reduced through member startup, shutdown, or failure.
+
+You may also need to rebalance when:
+
+-   You use redundancy for high availability and have configured your region to not automatically recover redundancy after a loss. In this case, Geode only restores redundancy when you invoke a rebalance. See [Configure High Availability for a Partitioned Region](configuring_ha_for_pr.html).
+-   You have uneven hashing of data. Uneven hashing can occur if your keys do not have a hash code method that ensures uniform distribution, or if you use a `PartitionResolver` to colocate your partitioned region data (see [Colocate Data from Different Partitioned Regions](colocating_partitioned_region_data.html#colocating_partitioned_region_data)). In either case, some buckets may receive more data than others. Rebalancing can be used to even out the load between data stores by putting fewer buckets on members that are hosting large buckets.
+
+## <a id="rebalancing_pr_data__section_495FEE48ED60433BADB7D36C73279C89" class="no-quick-link"></a>How to Simulate Region Rebalancing
+
+You can simulate the rebalance operation before moving any actual data around by executing the `rebalance` command with the following option:
+
+``` pre
+gfsh>rebalance --simulate
+```
+
+**Note:**
+If you are using `heap_lru` for data eviction, you may notice a difference between your simulated results and your actual rebalancing results. This discrepancy can occur if the VM starts evicting entries after you execute the simulation; the actual rebalance operation then makes different decisions based on the newer heap size.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/set_crash_redundancy_recovery.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/set_crash_redundancy_recovery.html.md.erb b/geode-docs/developing/partitioned_regions/set_crash_redundancy_recovery.html.md.erb
new file mode 100644
index 0000000..4c7311e
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/set_crash_redundancy_recovery.html.md.erb
@@ -0,0 +1,43 @@
+---
+title:  Configure Member Crash Redundancy Recovery for a Partitioned Region
+---
+
+Configure whether and how redundancy is recovered in a partitioned region after a member crashes.
+
+<a id="set_crash_redundancy_recovery__section_86CF741758E54DA29519E9CDDF1BC393"></a>
+Use the partition attribute `recovery-delay` to specify member crash redundancy recovery.
+
+| recovery-delay partition attribute | Effect following a member failure                                                    |
+|------------------------------------|--------------------------------------------------------------------------------------|
+| -1                                 | No automatic recovery of redundancy following a member failure. This is the default. |
+| long greater than or equal to 0    | Number of milliseconds to wait after a member failure before recovering redundancy.  |
+
+By default, redundancy is not recovered after a member crashes. If you expect to quickly restart most crashed members, combining this default setting with member join redundancy recovery can help you avoid unnecessary data shuffling while members are down. If you wait for lost members to rejoin, redundancy recovery uses the newly restarted members, and partitioning stays better balanced with less processing.
+
+Set crash redundancy recovery using one of the following:
+
+-   XML:
+
+    ``` pre
+    <!-- Give a crashed member 10 seconds to restart
+         before recovering redundancy -->
+    <region name="PR1"> 
+      <region-attributes refid="PARTITION"> 
+        <partition-attributes recovery-delay="10000"/> 
+      </region-attributes> 
+    </region> 
+    ```
+
+-   Java:
+
+    ``` pre
+    PartitionAttributes pa = new PartitionAttributesFactory().setRecoveryDelay(10000).create(); 
+    ```
+
+-   gfsh:
+
+    ``` pre
+    gfsh>create region --name="PR1" type=PARTITION --recovery-delay=10000
+    ```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/set_enforce_unique_host.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/set_enforce_unique_host.html.md.erb b/geode-docs/developing/partitioned_regions/set_enforce_unique_host.html.md.erb
new file mode 100644
index 0000000..19cc1ec
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/set_enforce_unique_host.html.md.erb
@@ -0,0 +1,17 @@
+---
+title:  Set Enforce Unique Host
+---
+
+Configure Geode to use only unique physical machines for redundant copies of partitioned region data.
+
+Understand how to set a member's `gemfire.properties` settings. See [Reference](../../reference/book_intro.html#reference).
+
+Using the `gemfire.properties` setting `enforce-unique-host`, configure your members so Geode always uses different physical machines for redundant copies of partitioned region data. The default for this setting is `false`.
+
+Example:
+
+``` pre
+enforce-unique-host=true
+```
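+
+If you start members with `gfsh`, a sketch of one way to pass this setting is as a system property prefixed with `gemfire.` (the server name is hypothetical):
+
+``` pre
+gfsh>start server --name=server1 --J=-Dgemfire.enforce-unique-host=true
+```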
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/set_join_redundancy_recovery.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/set_join_redundancy_recovery.html.md.erb b/geode-docs/developing/partitioned_regions/set_join_redundancy_recovery.html.md.erb
new file mode 100644
index 0000000..4fe790e
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/set_join_redundancy_recovery.html.md.erb
@@ -0,0 +1,49 @@
+---
+title:  Configure Member Join Redundancy Recovery for a Partitioned Region
+---
+
+Configure whether and how redundancy is recovered in a partitioned region after a member joins.
+
+<a id="set_join_redundancy_recovery__section_D6FB0D69CC454B53B9CF1E656A44465C"></a>
+Use the partition attribute `startup-recovery-delay` to specify member join redundancy recovery.
+
+| startup-recovery-delay partition attribute | Effect following a member join                                                                                                                                                                                               |
+|--------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| -1                                         | No automatic recovery of redundancy after a new member comes online. If you use this and the default `recovery-delay` setting, you can only recover redundancy by kicking off rebalancing through `gfsh` or an API call. |
+| long greater than or equal to **0**        | Number of milliseconds to wait after a member joins before recovering redundancy. The default is 0 (zero), which causes immediate redundancy recovery whenever a new partitioned region host joins.                   |
+
+Setting this to a value higher than the default of 0 allows multiple new members to join before redundancy recovery kicks in. With multiple members present during recovery, the system spreads redundancy recovery among them. With no delay, if multiple members are started in close succession, the system may choose only the first member started for most or all of the redundancy recovery.
+
+**Note:**
+Satisfying redundancy is not the same as adding capacity. If redundancy is satisfied, new members do not take buckets until you invoke a rebalance.
+
+**Note:**
+With parallel recovery introduced in version 8.2, redundancy may be recovered more quickly than in previous versions. For this reason, it is even more important to configure `startup-recovery-delay` to an appropriate value if you intend to restart multiple members at once. Set `startup-recovery-delay` to a value that ensures all members are up and available *before* redundancy recovery kicks in.
+
+Set join redundancy recovery using one of the following:
+
+-   XML:
+
+    ``` pre
+    <!-- Wait 5 seconds after a new member joins before
+         recovering redundancy -->
+    <region name="PR1"> 
+      <region-attributes refid="PARTITION"> 
+        <partition-attributes startup-recovery-delay="5000"/> 
+      </region-attributes> 
+    </region> 
+    ```
+
+-   Java:
+
+    ``` pre
+    PartitionAttributes pa = new PartitionAttributesFactory().setStartupRecoveryDelay(5000).create(); 
+    ```
+
+-   gfsh:
+
+    ``` pre
+    gfsh>create region --name="PR1" --type=PARTITION --startup-recovery-delay=5000
+    ```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/set_pr_redundancy.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/set_pr_redundancy.html.md.erb b/geode-docs/developing/partitioned_regions/set_pr_redundancy.html.md.erb
new file mode 100644
index 0000000..ae12721
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/set_pr_redundancy.html.md.erb
@@ -0,0 +1,34 @@
+---
+title:  Set the Number of Redundant Copies
+---
+
+Configure in-memory high availability for your partitioned region by specifying the number of secondary copies you want to maintain in the region's data stores.
+
+Specify the number of redundant copies you want for your partitioned region data in the partition attribute `redundant-copies` setting. The default setting is 0. 
+
+For example:
+
+-   XML:
+
+    ``` pre
+    <region name="PR1"> 
+      <region-attributes refid="PARTITION"> 
+        <partition-attributes redundant-copies="1"/> 
+      </region-attributes> 
+    </region> 
+    ```
+
+-   Java:
+
+    ``` pre
+    PartitionAttributes pa = 
+        new PartitionAttributesFactory().setRedundantCopies(1).create(); 
+    ```
+
+-   gfsh:
+
+    ``` pre
+    gfsh>create region --name="PR1" --type=PARTITION --redundant-copies=1
+    ```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/set_redundancy_zones.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/set_redundancy_zones.html.md.erb b/geode-docs/developing/partitioned_regions/set_redundancy_zones.html.md.erb
new file mode 100644
index 0000000..f5a0a10
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/set_redundancy_zones.html.md.erb
@@ -0,0 +1,23 @@
+---
+title:  Configure Redundancy Zones for Members
+---
+
+Group members into redundancy zones so Geode will separate redundant data copies into different zones.
+
+Understand how to set a member's `gemfire.properties` settings. See [Reference](../../reference/book_intro.html#reference).
+
+Group your partitioned region hosts into redundancy zones with the `gemfire.properties` setting `redundancy-zone`.
+
+For example, if you have redundancy set to 1, so that you have one primary and one secondary copy of each data entry, you can split the primary and secondary copies between two machine racks by defining one redundancy zone for each rack. To do this, set this zone in the `gemfire.properties` for all members that run on one rack:
+``` pre
+redundancy-zone=rack1
+```
+
+You would set this zone in the `gemfire.properties` for all members on the other rack:
+``` pre
+redundancy-zone=rack2
+```
+
+Each secondary copy would be hosted on the rack opposite the rack where its primary copy is hosted.
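+
+As a sketch, assuming members are started with `gfsh` and named for their racks, the zones could also be supplied as `gemfire.`-prefixed system properties:
+
+``` pre
+gfsh>start server --name=server_rack1 --J=-Dgemfire.redundancy-zone=rack1
+gfsh>start server --name=server_rack2 --J=-Dgemfire.redundancy-zone=rack2
+```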
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/partitioned_regions/using_custom_partition_resolvers.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/using_custom_partition_resolvers.html.md.erb b/geode-docs/developing/partitioned_regions/using_custom_partition_resolvers.html.md.erb
new file mode 100644
index 0000000..df9e4b3
--- /dev/null
+++ b/geode-docs/developing/partitioned_regions/using_custom_partition_resolvers.html.md.erb
@@ -0,0 +1,204 @@
+---
+title:  Custom-Partition Your Region Data
+---
+
+By default, Geode partitions each data entry into a bucket using a hashing policy on the key. Additionally, the physical location of the key-value pair is abstracted away from the application. You can change these policies for a partitioned region. You can provide your own data partitioning resolver and you can additionally specify which members host which data buckets.
+
+<a id="custom_partition_region_data__section_CF05CE974C9C4AF78430DA55601D2158"></a>
+**Note:**
+If you are colocating data between regions and custom partitioning the data in the regions, all colocated regions must use the same custom partitioning mechanism. See [Colocate Data from Different Partitioned Regions](colocating_partitioned_region_data.html#colocating_partitioned_region_data).
+
+<a id="custom_partition_region_data__section_1D7043815DF24308ABE4C78BFDFEE686"></a>
+
+For standard partitioning, use `org.apache.geode.cache.PartitionResolver`. To implement fixed partitioning, use `org.apache.geode.cache.FixedPartitionResolver`.
+
+<a id="custom_partition_region_data__section_5A8D752F02834146A37D9430F1CA32DA"></a>
+
+**Prerequisites**
+
+-   Create partitioned regions. See [Understanding Partitioning](how_partitioning_works.html) and [Configuring Partitioned Regions](managing_partitioned_regions.html#configure_partitioned_regions).
+-   Decide whether to use standard custom partitioning or fixed custom partitioning. See [Understanding Custom Partitioning and Data Colocation](custom_partitioning_and_data_colocation.html#custom_partitioning_and_data_colocation).
+-   If you also want to colocate data from multiple regions, understand how to colocate. See [Colocate Data from Different Partitioned Regions](colocating_partitioned_region_data.html#colocating_partitioned_region_data).
+
+**Procedure**
+
+1.  Using `org.apache.geode.cache.PartitionResolver` (standard partitioning) or `org.apache.geode.cache.FixedPartitionResolver` (fixed partitioning), implement the standard partitioning resolver or the fixed partitioning resolver in one of the following locations, listed here in the search order used by Geode:
+    -   **Custom class**. You provide this class as the partition resolver at region creation.
+    -   **Entry key**. You use the implementing key object for every operation on the region entries.
+    -   **Cache callback argument**. This implementation restricts you to using methods that accept a cache callback argument to manage the region entries. For a full list of the methods that take a callback argument, see the `Region` Javadocs.
+
+2.  If you need the resolver's `getName` method, program that.
+3.  Program the resolver's `getRoutingObject` method to return the routing object for each entry, based on how you want to group the entries. Give the same routing object to entries you want to group together. Geode will place the entries in the same bucket.
+
+    **Note:**
+    Only fields on the key should be used when creating the routing object. Do not use the value or additional metadata for this purpose.
+
+    For example, here is an implementation on a region key object that groups the entries by month and year:
+
+    ``` pre
+    public class TradeKey implements PartitionResolver 
+    { 
+        private String tradeID; 
+        private Month month; 
+        private Year year; 
+        public TradeKey(){ } 
+        public TradeKey(Month month, Year year)
+        { 
+            this.month = month; 
+            this.year = year; 
+        } 
+        public Serializable getRoutingObject(EntryOperation opDetails)
+        { 
+            // Entries with the same month and year get the same routing
+            // object, so Geode groups them into the same bucket
+            return this.month + "-" + this.year; 
+        } 
+        public String getName() 
+        { 
+            return "TradeKeyResolver"; 
+        } 
+        public void close() { } 
+    }
+    ```
+
+4.  For fixed partitioning only, program and configure additional fixed partitioning pieces:
+    1.  Set the fixed partition attributes for each member.
+
+        These attributes define the data stored for the region by the member and must be different for different members. See `org.apache.geode.cache.FixedPartitionAttributes` for definitions of the attributes. Define each `partition-name` in your data host members for the region. For each partition name, in the member you want to host the primary copy, define it with `is-primary` set to `true`. In every member you want to host the secondary copy, define it with `is-primary` set to `false` (the default). The number of secondaries must match the number of redundant copies you have defined for the region. See [Configure High Availability for a Partitioned Region](configuring_ha_for_pr.html).
+
+        **Note:**
+        Buckets for a partition are hosted only by the members that have defined the partition name in their `FixedPartitionAttributes`.
+
+        These examples set the partition attributes for a member to be the primary host for the "Q1" partition data and a secondary host for "Q3" partition data.
+        -   XML:
+
+            ``` pre
+            <cache>
+               <region name="Trades">
+                  <region-attributes>
+                     <partition-attributes redundant-copies="1">
+                       <partition-resolver name="QuarterFixedPartitionResolver">
+                          <class-name>myPackage.QuarterFixedPartitionResolver</class-name>
+                       </partition-resolver>
+                       <fixed-partition-attributes partition-name="Q1" is-primary="true"/>
+                       <fixed-partition-attributes partition-name="Q3" is-primary="false" num-buckets="6"/>
+                     </partition-attributes> 
+                  </region-attributes>
+               </region>
+            </cache>
+            ```
+        -   Java:
+
+
+            ``` pre
+            FixedPartitionAttributes fpa1 = FixedPartitionAttributes.createFixedPartition("Q1", true);
+            FixedPartitionAttributes fpa3 = FixedPartitionAttributes.createFixedPartition("Q3", false, 6);
+
+            PartitionAttributesFactory paf = new PartitionAttributesFactory()
+                 .setPartitionResolver(new QuarterFixedPartitionResolver())
+                 .setTotalNumBuckets(12)
+                 .setRedundantCopies(2)
+                 .addFixedPartitionAttributes(fpa1)
+                 .addFixedPartitionAttributes(fpa3);
+
+            Cache c = new CacheFactory().create();
+
+            Region r = c.createRegionFactory()
+                .setPartitionAttributes(paf.create())
+                .create("Trades");
+            ```
+        -   gfsh:
+
+            You cannot specify a partition resolver using gfsh.
+
+    2.  Program the `FixedPartitionResolver` `getPartitionName` method to return the name of the partition for each entry, based on where you want the entries to reside. Geode uses `getPartitionName` and `getRoutingObject` to determine where an entry is placed.
+
+        **Note:**
+        To group entries, assign every entry in the group the same routing object and the same partition name.
+
+        This example places the data based on date, with a different partition name for each quarter-year and a different routing object for each month.
+
+        ``` pre
+        /**
+         * Returns one of four different partition names
+         * (Q1, Q2, Q3, Q4) depending on the entry's date
+         */
+        class QuarterFixedPartitionResolver implements
+            FixedPartitionResolver<Date, String> {
+
+          @Override
+          public String getPartitionName(EntryOperation<Date, String> opDetails,
+              Set<String> targetPartitions) {
+
+             Date date = opDetails.getKey();
+             Calendar cal = Calendar.getInstance();
+             cal.setTime(date);
+             int month = cal.get(Calendar.MONTH);
+             if (month >= 0 && month < 3) {
+                if (targetPartitions.contains("Q1")) return "Q1";
+             }
+             else if (month >= 3 && month < 6) {
+                if (targetPartitions.contains("Q2")) return "Q2";
+             }
+             else if (month >= 6 && month < 9) {
+                if (targetPartitions.contains("Q3")) return "Q3";
+             }
+             else if (month >= 9 && month < 12) {
+                if (targetPartitions.contains("Q4")) return "Q4";
+             }
+             return "Invalid Quarter";
+          }
+
+          @Override
+          public String getName() {
+             return "QuarterFixedPartitionResolver";
+          }
+
+          @Override
+          public Serializable getRoutingObject(EntryOperation<Date, String> opDetails) {
+             Date date = opDetails.getKey();
+             Calendar cal = Calendar.getInstance();
+             cal.setTime(date);
+             int month = cal.get(Calendar.MONTH);
+             return month;
+          }
+
+          @Override
+          public void close() {
+          }
+        }
+        ```
+
+5.  Configure or program the region so Geode finds your resolver for every operation that you perform on the region's entries. How you do this depends on where you chose to program your custom partitioning implementation (step 1).
+    1.  **Custom class**. Define the class for the region at creation. The resolver will be used for every entry operation. Use one of these methods:
+        -   XML:
+
+            ``` pre
+            <region name="trades">
+                <region-attributes>
+                    <partition-attributes>
+                        <partition-resolver name="TradesPartitionResolver"> 
+                            <class-name>myPackage.TradesPartitionResolver
+                            </class-name>
+                        </partition-resolver>
+                    </partition-attributes>
+                </region-attributes>
+            </region>
+            ```
+        -   Java:
+
+
+            ``` pre
+            PartitionResolver resolver = new TradesPartitionResolver();
+            PartitionAttributes attrs = 
+                new PartitionAttributesFactory()
+                .setPartitionResolver(resolver).create();
+
+            Cache c = new CacheFactory().create();
+
+            Region r = c.createRegionFactory()
+                .setPartitionAttributes(attrs)
+                .create("trades");
+            ```
+        -   gfsh:
+
+            You cannot specify a partition resolver using gfsh.
+
+    2.  **Entry key**. Use the key object with the resolver implementation for every entry operation.
+    3.  **Cache callback argument**. Provide the argument to every call that accesses an entry. This restricts you to calls that take a callback argument.
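+
+        For example, a minimal sketch of the callback-argument approach, assuming the `trades` region and `TradesPartitionResolver` class shown above (`tradeId` and `trade` are placeholder key and value objects):
+
+        ``` pre
+        // The callback argument's class implements PartitionResolver,
+        // so Geode uses it to route this operation to a bucket.
+        Object resolverArg = new TradesPartitionResolver();
+        trades.put(tradeId, trade, resolverArg);
+        ```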
+
+6.  If your colocated data is in a server system, add the `PartitionResolver` implementation class to the `CLASSPATH` of your Java clients. The resolver is used for single-hop access to partitioned region data in the servers.
+
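+For example, a sketch of launching a client JVM with a hypothetical jar that contains the resolver on its classpath:
+
+``` pre
+java -cp /path/to/myResolvers.jar:$CLASSPATH my.client.ClientApp
+```
+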

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/query_additional/advanced_querying.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_additional/advanced_querying.html.md.erb b/geode-docs/developing/query_additional/advanced_querying.html.md.erb
new file mode 100644
index 0000000..d94da69
--- /dev/null
+++ b/geode-docs/developing/query_additional/advanced_querying.html.md.erb
@@ -0,0 +1,31 @@
+---
+title:  Advanced Querying
+---
+
+This section covers advanced querying topics, such as using query indexes, using query bind parameters, querying partitioned regions, and query debugging.
+
+-   **[Performance Considerations](../../developing/querying_basics/performance_considerations.html)**
+
+    This topic covers considerations for improving query performance.
+
+-   **[Monitoring Queries for Low Memory](../../developing/querying_basics/monitor_queries_for_low_memory.html)**
+
+    The query monitoring feature prevents out-of-memory exceptions from occurring when you execute queries or create indexes.
+
+-   **[Using Query Bind Parameters](../../developing/query_additional/using_query_bind_parameters.html)**
+
+    Using query bind parameters in Geode queries is similar to using prepared statements in SQL, where parameters can be set during query execution. This allows the user to build a query once and execute it multiple times by passing the query conditions at run time.
+
+-   **[Working with Indexes](../../developing/query_index/query_index.html)**
+
+    The Geode query engine supports indexing. An index can provide significant performance gains for query execution.
+
+-   **[Querying Partitioned Regions](../../developing/querying_basics/querying_partitioned_regions.html)**
+
+    Geode allows you to manage and store large amounts of data across distributed nodes using partitioned regions. The basic unit of storage for a partitioned region is a bucket, which resides on a Geode node and contains all the entries that map to a single hashcode. In a typical partitioned region query, the system distributes the query to all buckets across all nodes, then merges the result sets and sends back the query results.
+
+-   **[Query Debugging](../../developing/query_additional/query_debugging.html)**
+
+    You can debug a specific query at the query level by adding the `<trace>` keyword before the query string that you want to debug.
+
+