Posted to commits@geode.apache.org by db...@apache.org on 2016/10/05 00:10:25 UTC

[43/51] [partial] incubator-geode git commit: GEODE-1964: native client documentation (note: contains references to images in the geode-docs directories)

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/partitioned_regions/how_pr_ha_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/how_pr_ha_works.html.md.erb b/geode-docs/developing/partitioned_regions/how_pr_ha_works.html.md.erb
deleted file mode 100644
index 5082cc4..0000000
--- a/geode-docs/developing/partitioned_regions/how_pr_ha_works.html.md.erb
+++ /dev/null
@@ -1,44 +0,0 @@
----
-title:  Understanding High Availability for Partitioned Regions
----
-
-With high availability, each member that hosts data for the partitioned region gets some primary copies and some redundant (secondary) copies.
-
-<a id="how_pr_ha_works__section_04FDCC6C2130496F8B33B9DF5CDED362"></a>
-
-With redundancy, if one member fails, operations continue on the partitioned region with no interruption of service:
-
--   If the member hosting the primary copy is lost, Geode makes a secondary copy the primary. This might cause a temporary loss of redundancy, but not a loss of data.
--   Whenever there are not enough secondary copies to satisfy redundancy, the system works to recover redundancy by assigning another member as secondary and copying the data to it.
-
-**Note:**
-You can still lose cached data when you are using redundancy if enough members go down in a short enough time span.
-
-You can configure how the system works to recover redundancy when it is not satisfied. You can configure recovery to take place immediately or, if you want to give replacement members a chance to start up, you can configure a wait period. Redundancy recovery is also automatically attempted during any partitioned data rebalancing operation. Use the `gemfire.MAX_PARALLEL_BUCKET_RECOVERIES` system property to configure the maximum number of buckets that are recovered in parallel. By default, up to 8 buckets are recovered in parallel any time the system attempts to recover redundancy.
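-For example, you might raise the parallel recovery limit when starting a server. The following is a hedged sketch only; the server name and the value 16 are illustrative assumptions, and `--J` passes a JVM system property through gfsh:
-
-``` pre
-gfsh>start server --name=server1 --J=-Dgemfire.MAX_PARALLEL_BUCKET_RECOVERIES=16
-```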
-
-Without redundancy, the loss of any of the region's data stores causes the loss of some of the region's cached data. Generally, you should not use redundancy when your applications can directly read from another data source, or when write performance is more important than read performance.
-
-## <a id="how_pr_ha_works__section_7045530D601F4C65A062B5FDD0DD9206" class="no-quick-link"></a>Controlling Where Your Primaries and Secondaries Reside
-
-By default, Geode places your primary and secondary data copies for you, avoiding placement of two copies on the same physical machine. If there are not enough machines to keep different copies separate, Geode places copies on the same physical machine. You can change this behavior, so Geode only places copies on separate machines.
-
-You can also control which members store your primary and secondary data copies. Geode provides two options:
-
--   **Fixed custom partitioning**. This option is set for the region. Fixed partitioning gives you absolute control over where your region data is hosted. With fixed partitioning, you provide Geode with the code that specifies the bucket and data store for each data entry in the region. When you use this option with redundancy, you specify the primary and secondary data stores. Fixed partitioning does not participate in rebalancing because all bucket locations are fixed by you.
-   **Redundancy zones**. This option is set at the member level. Redundancy zones let you separate primary and secondary copies by member groups, or zones. You assign each data host to a zone. Then Geode places redundant copies in different redundancy zones, the same as it places redundant copies on different physical machines. You can use this to split data copies across different machine racks or networks. This option allows you to add members on the fly and use rebalancing to redistribute the data load, with redundant data maintained in separate zones. When you use redundancy zones, Geode will not place two copies of the data in the same zone, so make sure you have enough zones.
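-As a sketch of the redundancy zones option, assuming two racks, each data host could declare its zone through the `redundancy-zone` property in its `gemfire.properties` file (the zone names are illustrative assumptions):
-
-``` pre
-# gemfire.properties for members on the first rack
-redundancy-zone=rack1
-
-# gemfire.properties for members on the second rack
-redundancy-zone=rack2
-```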
-
-## <a id="how_pr_ha_works__section_87A2429B6277497184926E08E64B81C6" class="no-quick-link"></a>Running Processes in Virtual Machines
-
-By default, Geode stores redundant copies on different machines. When you run your processes in virtual machines, the normal view of the machine becomes the VM and not the physical machine. If you run multiple VMs on the same physical machine, you could end up storing partitioned region primary buckets in separate VMs, but on the same physical machine as your secondaries. If the physical machine fails, you can lose data. When you run in VMs, you can configure Geode to identify the physical machine and store redundant copies on different physical machines.
-
-## <a id="how_pr_ha_works__section_CAB9440BABD6484D99525766E937CB55" class="no-quick-link"></a>Reads and Writes in Highly-Available Partitioned Regions
-
-Geode treats reads and writes differently in highly-available partitioned regions than in other regions because the data is available in multiple members:
-
--   Write operations (like `put` and `create`) go to the primary for the data keys and then are distributed synchronously to the redundant copies. Events are sent to the members configured with `subscription-attributes` `interest-policy` set to `all`.
--   Read operations go to any member holding a copy of the data, with the local cache favored, so a read intensive system can scale much better and handle higher loads.
-
-In this figure, M1 is reading W, Y, and Z. It gets W directly from its local copy. Since it doesn't have a local copy of Y or Z, it goes to a cache that does, picking the source cache at random.
-
-<img src="../../images_svg/partitioned_data_HA.svg" id="how_pr_ha_works__image_574D1A1E641944D2A2DE68C4618D84B4" class="image" />
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/partitioned_regions/how_pr_single_hop_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/how_pr_single_hop_works.html.md.erb b/geode-docs/developing/partitioned_regions/how_pr_single_hop_works.html.md.erb
deleted file mode 100644
index 9002719..0000000
--- a/geode-docs/developing/partitioned_regions/how_pr_single_hop_works.html.md.erb
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title:  Understanding Client Single-Hop Access to Server-Partitioned Regions
----
-
-With single-hop access the client connects to every server, so more connections are generally used. This works fine for smaller installations, but is a barrier to scaling.
-
-If you have a large installation with many clients, you may want to disable single hop by setting the pool attribute `pr-single-hop-enabled` to false in your pool declarations.
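-For example, a client `cache.xml` pool declaration might disable single hop as follows (the pool name and locator address are illustrative assumptions):
-
-``` pre
-<pool name="clientPool" pr-single-hop-enabled="false">
-  <locator host="lucy" port="41111"/>
-</pool>
-```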
-
-Without single hop, the client uses whatever server connection is available, the same as with all other operations. The server that receives the request determines the data location and contacts the host, which might be a different server. So more multiple-hop requests are made to the server system.
-
-**Note:**
-Single hop is used for the following operations: `put`, `get`, `destroy`, `putAll`, `getAll`, `removeAll` and `onRegion` function execution.
-
-Even with single hop access enabled, you will occasionally see some multiple-hop behavior. To perform single-hop data access, clients automatically get metadata from the servers about where the entry buckets are hosted. The metadata is maintained lazily. It is only updated after a single-hop operation ends up needing multiple hops, an indicator of stale metadata in the client.
-
-## <a id="how_pr_single_hop_works__section_AE4A6DA0064C4D5280336DD65CB107CC" class="no-quick-link"></a>Single Hop and the Pool max-connections Setting
-
-Do not limit the pool's connections by setting `max-connections` when single hop is enabled. Limiting the pool's connections with single hop can cause connection thrashing, throughput loss, and server log bloat.
-
-If you need to limit the pool's connections, either disable single hop or keep a close watch on your system for these negative effects.
-
-Setting no limit on connections, however, can result in too many connections to your servers, possibly causing you to run up against your system's file handle limits. Review your anticipated connection use and make sure your servers are able to accommodate it.
-
-## <a id="how_pr_single_hop_works__section_99F27B724E5F4008BC8878D1CB4B9821" class="no-quick-link"></a>Balancing Single-Hop Server Connection Use
-
-Single-hop gives the biggest benefits when data access is well balanced across your servers. In particular, the loads for client/server connections can get out of balance if you have these in combination:
-
--   Servers that are empty data accessors or that do not host the data the clients access through single-key operations
--   Many single-key operations from the clients
-
-If data access is greatly out of balance, clients can thrash trying to get to the data servers. In this case, it might be faster to disable single hop and go through servers that do not host the data.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/partitioned_regions/join_query_partitioned_regions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/join_query_partitioned_regions.html.md.erb b/geode-docs/developing/partitioned_regions/join_query_partitioned_regions.html.md.erb
deleted file mode 100644
index b8399e2..0000000
--- a/geode-docs/developing/partitioned_regions/join_query_partitioned_regions.html.md.erb
+++ /dev/null
@@ -1,80 +0,0 @@
----
-title:  Performing an Equi-Join Query on Partitioned Regions
----
-
-To perform equi-join operations on partitioned regions, or between partitioned regions and replicated regions, you must use the `query.execute` method and supply it with a function execution context. Geode's FunctionService executor is required because join operations on partitioned regions are not yet directly supported without a function execution context.
-
-See [Partitioned Region Query Restrictions](../query_additional/partitioned_region_query_restrictions.html#concept_5353476380D44CC1A7F586E5AE1CE7E8) for more information on partitioned region query limitations.
-
-For example, let's say your equi-join query is the following:
-
-``` pre
-SELECT DISTINCT * FROM /QueryRegion1 r1,
-/QueryRegion2 r2 WHERE r1.ID = r2.ID
-```
-
-In this example, QueryRegion2 is colocated with QueryRegion1, and both regions have the same type of data objects.
-
-On the server side:
-
-``` pre
- Function prQueryFunction1 = new QueryFunction();
- FunctionService.registerFunction(prQueryFunction1);
-
- public class QueryFunction extends FunctionAdapter {
-    @Override
-    public void execute(FunctionContext context) {
-      Cache cache = CacheFactory.getAnyInstance();
-      QueryService queryService = cache.getQueryService();
-      ArrayList arguments = (ArrayList)(context.getArguments());
-      String qstr = (String)arguments.get(0);
-      try {
-        Query query = queryService.newQuery(qstr);
-        // Executing with the RegionFunctionContext scopes the query
-        // to the partitioned region data local to this member.
-        SelectResults result = (SelectResults)query
-          .execute((RegionFunctionContext)context);
-        context.getResultSender().sendResult((ArrayList)result.asList());
-        context.getResultSender().lastResult(null);
-      } catch (Exception e) {
-        // handle exception
-      }
-    }
- }
-```
-
-On the server side, `Query.execute()` operates on the local data of the partitioned region.
-
-On the client side:
-
-``` pre
-Function function = new QueryFunction();
-String queryString = "SELECT DISTINCT * FROM /QueryRegion1 r1, "
-    + "/QueryRegion2 r2 WHERE r1.ID = r2.ID";
-ArrayList argList = new ArrayList();
-argList.add(queryString);
-Object result = FunctionService.onRegion(CacheFactory.getAnyInstance()
-     .getRegion("QueryRegion1" ))
-     .withArgs(argList).execute(function).getResult();
-ArrayList resultList = (ArrayList)result;
-resultList.trimToSize();
-List queryResults = null;
-if (!resultList.isEmpty()) {
-   queryResults = new ArrayList();
-   for (Object obj : resultList) {
-      if (obj != null) {
-         queryResults.addAll((ArrayList)obj);
-      }
-   }
-}
-```
-
-On the client side, note that you can specify a bucket filter while invoking FunctionService.onRegion(). In this case, the query engine relies on FunctionService to direct the query to specific nodes.
-
-**Additional Notes on Using the Query.execute and RegionFunctionContext APIs**
-
-You can also pass multiple parameters (besides the query itself) to the query function by specifying the parameters in the client-side code (`FunctionService.onRegion(..).withArgs()`). Then you can handle the parameters inside the function on the server side using `context.getArguments`. Note that the order in which you specify the parameters does not matter, as long as the parameter handling order on the server matches the order specified on the client.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/partitioned_regions/managing_partitioned_regions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/managing_partitioned_regions.html.md.erb b/geode-docs/developing/partitioned_regions/managing_partitioned_regions.html.md.erb
deleted file mode 100644
index fd7494f..0000000
--- a/geode-docs/developing/partitioned_regions/managing_partitioned_regions.html.md.erb
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title:  Configuring Partitioned Regions
----
-
-Plan the configuration and ongoing management of your partitioned region for host and accessor members and configure the regions for startup.
-
-<a id="configure_partitioned_regions__section_241583D88E244AB6AB5CD05BF55F6A0A"></a>
-Before you begin, understand [Basic Configuration and Programming](../../basic_config/book_intro.html).
-
-1.  Start your region configuration using one of the `PARTITION` region shortcut settings. See [Region Shortcuts and Custom Named Region Attributes](../../basic_config/data_regions/region_shortcuts.html).
-2.  If you need high availability for your partitioned region, configure for that. See [Configure High Availability for a Partitioned Region](configuring_ha_for_pr.html).
-3.  Estimate the amount of space needed for the region. If you use redundancy, this is the maximum for all primary and secondary copies stored in the member. For example, with redundancy of one, each region data entry requires twice the space it would with no redundancy, because the entry is stored twice. See [Memory Requirements for Cached Data](../../reference/topics/memory_requirements_for_cache_data.html#calculating_memory_requirements).
-4.  Configure the total number of buckets for the region. This number must be the same for colocated regions. See [Configuring the Number of Buckets for a Partitioned Region](configuring_bucket_for_pr.html#configuring_total_buckets).
-5.  Configure your members' data storage and data loading for the region:
-    1.  You can have members with no local data storage and members with varying amounts of storage. Determine the max memory available in your different member types for this region. These will be set in the `partition-attributes` `local-max-memory`. This is the only setting in `partition-attributes` that can vary between members. Use these max values and your estimates for region memory requirements to help you figure how many members to start out with for the region.
-    2.  For members that store data for the region (`local-max-memory` greater than 0), define a data loader. See [Implement a Data Loader](../outside_data_sources/implementing_data_loaders.html#implementing_data_loaders).
-    3.  If you have members with no local data storage (`local-max-memory` set to 0), review your system startup/shutdown procedures. Make sure there is always at least one member with local data storage running when any members with no storage are running.
-
-6.  If you want to custom partition the data in your region or colocate data between multiple regions, code and configure accordingly. See [Understanding Custom Partitioning and Data Colocation](custom_partitioning_and_data_colocation.html#custom_partitioning_and_data_colocation).
-7.  Plan your partition rebalancing strategy and configure and program for that. See [Rebalancing Partitioned Region Data](rebalancing_pr_data.html#rebalancing_pr_data).
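-The steps above might be sketched in `cache.xml` like this (the region name, memory limit, and bucket count are illustrative assumptions, not recommended values):
-
-``` pre
-<region name="myPartitionedRegion">
-  <!-- The PARTITION_REDUNDANT shortcut configures high availability -->
-  <region-attributes refid="PARTITION_REDUNDANT">
-    <partition-attributes local-max-memory="512" total-num-buckets="113"/>
-  </region-attributes>
-</region>
-```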
-
-**Note:**
-To configure a partitioned region using gfsh, see [gfsh Command Help](../../tools_modules/gfsh/gfsh_command_index.html#concept_C291647179C5407A876CC7FCF91CF756).
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/partitioned_regions/moving_partitioned_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/moving_partitioned_data.html.md.erb b/geode-docs/developing/partitioned_regions/moving_partitioned_data.html.md.erb
deleted file mode 100644
index 054f7fe..0000000
--- a/geode-docs/developing/partitioned_regions/moving_partitioned_data.html.md.erb
+++ /dev/null
@@ -1,59 +0,0 @@
----
-title:  Moving Partitioned Region Data to Another Member
----
-
-You can use the `PartitionRegionHelper` `moveBucketByKey` and `moveData` methods to explicitly move partitioned region data from one member to another.
-
-The `moveBucketByKey` method moves the bucket that contains the specified key from a source member to a destination member. For example, you could use the method to move a popular product item to a new, empty member to reduce load on the source member.
-
-For example:
-
-``` pre
-Object product = ...
-Region r = ...
-DistributedSystem ds = ...
-String memberName = ...
-
-//Find the member that is currently hosting the product.
-Set<DistributedMember> sourceMembers =
-    PartitionRegionHelper.getAllMembersForKey(r, product);
-
-//Find the member to move the product to.
-DistributedMember destination = ds.findDistributedMember(memberName);
-
-//In this example we assume there is always at least one source.
-//In practice, you should check that at least one source
-//for the data is available.
-DistributedMember source = sourceMembers.iterator().next();
-
-//Move the bucket to the new node. The bucket will
-//be moved when this method completes. It throws an exception
-//if there is a problem or invalid arguments.
-PartitionRegionHelper.moveBucketByKey(r, source, destination, product);
-```
-
-See the Java API documentation for `org.apache.geode.cache.partition.PartitionRegionHelper.moveBucketByKey` for more details.
-
-The `moveData` method moves data up to a given percentage (measured in bytes) from a source member to a destination member. For example, you could use this method to move a specified percentage of data from an overloaded member to another member to improve distribution.
-
-For example:
-
-``` pre
-Region r = ...
-DistributedSystem ds = ...
-String sourceName = ...
-String destName = ...
-
-//Find the source member.
-DistributedMember source = ds.findDistributedMember(sourceName);
-DistributedMember destination = ds.findDistributedMember(destName);
-
-//Move up to 20% of the data from the source to the destination node.
-PartitionRegionHelper.moveData(r, source, destination, 20);
-```
-
-See the Java API documentation for `org.apache.geode.cache.partition.PartitionRegionHelper.moveData` for more details.
-
-For more information on partitioned regions and rebalancing, see [Partitioned Regions](chapter_overview.html).
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html.md.erb b/geode-docs/developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html.md.erb
deleted file mode 100644
index 3cf5c10..0000000
--- a/geode-docs/developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html.md.erb
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title:  Custom-Partitioning and Colocating Data
----
-
-You can customize how Apache Geode groups your partitioned region data with custom partitioning and data colocation.
-
--   **[Understanding Custom Partitioning and Data Colocation](../../developing/partitioned_regions/custom_partitioning_and_data_colocation.html)**
-
-    Custom partitioning and data colocation can be used separately or in conjunction with one another.
-
--   **[Custom-Partition Your Region Data](../../developing/partitioned_regions/using_custom_partition_resolvers.html)**
-
-    By default, Geode partitions each data entry into a bucket using a hashing policy on the key. Additionally, the physical location of the key-value pair is abstracted away from the application. You can change these policies for a partitioned region. You can provide your own data partitioning resolver and you can additionally specify which members host which data buckets.
-
--   **[Colocate Data from Different Partitioned Regions](../../developing/partitioned_regions/colocating_partitioned_region_data.html)**
-
-    By default, Geode allocates the data locations for a partitioned region independent of the data locations for any other partitioned region. You can change this policy for any group of partitioned regions, so that cross-region, related data is all hosted by the same member. This colocation speeds queries and other operations that access data from the regions.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/partitioned_regions/overview_how_pr_ha_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/overview_how_pr_ha_works.html.md.erb b/geode-docs/developing/partitioned_regions/overview_how_pr_ha_works.html.md.erb
deleted file mode 100644
index 7b182c5..0000000
--- a/geode-docs/developing/partitioned_regions/overview_how_pr_ha_works.html.md.erb
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title:  Configuring High Availability for Partitioned Regions
----
-
-By default, Apache Geode stores only a single copy of your partitioned region data among the region's data stores. You can configure Geode to maintain redundant copies of your partitioned region data for high availability.
-
--   **[Understanding High Availability for Partitioned Regions](../../developing/partitioned_regions/how_pr_ha_works.html)**
-
-    With high availability, each member that hosts data for the partitioned region gets some primary copies and some redundant (secondary) copies.
-
--   **[Configure High Availability for a Partitioned Region](../../developing/partitioned_regions/configuring_ha_for_pr.html)**
-
-    Configure in-memory high availability for your partitioned region. Set other high-availability options, like redundancy zones and redundancy recovery strategies.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/partitioned_regions/overview_how_pr_single_hop_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/overview_how_pr_single_hop_works.html.md.erb b/geode-docs/developing/partitioned_regions/overview_how_pr_single_hop_works.html.md.erb
deleted file mode 100644
index 651c851..0000000
--- a/geode-docs/developing/partitioned_regions/overview_how_pr_single_hop_works.html.md.erb
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title:  Configuring Single-Hop Client Access to Server-Partitioned Regions
----
-
-Single-hop data access enables the client pool to track where a partitioned region's data is hosted in the servers. To access a single entry, the client directly contacts the server that hosts the key, in a single hop.
-
--   **[Understanding Client Single-Hop Access to Server-Partitioned Regions](../../developing/partitioned_regions/how_pr_single_hop_works.html)**
-
-    With single-hop access the client connects to every server, so more connections are generally used. This works fine for smaller installations, but is a barrier to scaling.
-
--   **[Configure Client Single-Hop Access to Server-Partitioned Regions](../../developing/partitioned_regions/configure_pr_single_hop.html)**
-
-    Configure your client/server system for direct, single-hop access to partitioned region data in the servers.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/partitioned_regions/rebalancing_pr_data.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/rebalancing_pr_data.html.md.erb b/geode-docs/developing/partitioned_regions/rebalancing_pr_data.html.md.erb
deleted file mode 100644
index 93c31e7..0000000
--- a/geode-docs/developing/partitioned_regions/rebalancing_pr_data.html.md.erb
+++ /dev/null
@@ -1,89 +0,0 @@
----
-title:  Rebalancing Partitioned Region Data
----
-
-In a distributed system with minimal contention among the concurrent threads reading from or updating the members, you can use rebalancing to dynamically increase or decrease your data and processing capacity.
-
-<a id="rebalancing_pr_data__section_D3649ADD28DB4FF78C47A3E428C80510"></a>
-Rebalancing is a member operation. It affects all partitioned regions defined by the member, regardless of whether the member hosts data for the regions. The rebalancing operation performs two tasks:
-
-1.  If the configured partition region redundancy is not satisfied, rebalancing does what it can to recover redundancy. See [Configure High Availability for a Partitioned Region](configuring_ha_for_pr.html).
-2.  Rebalancing moves the partitioned region data buckets between host members as needed to establish the most fair balance of data and behavior across the distributed system.
-
-For efficiency, when starting multiple members, trigger the rebalance a single time, after you have added all members.
-
-**Note:**
-If you have transactions running in your system, be careful in planning your rebalancing operations. Rebalancing may move data between members, which could cause a running transaction to fail with a `TransactionDataRebalancedException`. Fixed custom partitioning prevents rebalancing altogether. All other data partitioning strategies allow rebalancing and can result in this exception unless you run your transactions and your rebalancing operations at different times.
-
-Kick off a rebalance using one of the following:
-
-   `gfsh` command. First, start a `gfsh` prompt and connect to the Geode distributed system. Then type the following command:
-
-    ``` pre
-    gfsh>rebalance
-    ```
-
-    Optionally, you can specify regions to include or exclude from rebalancing, specify a time-out for the rebalance operation or just [simulate a rebalance operation](rebalancing_pr_data.html#rebalancing_pr_data__section_495FEE48ED60433BADB7D36C73279C89). Type `help rebalance` or see [rebalance](../../tools_modules/gfsh/command-pages/rebalance.html) for more information.
-
--   API call:
-
-    ``` pre
-    ResourceManager manager = cache.getResourceManager(); 
-    RebalanceOperation op = manager.createRebalanceFactory().start(); 
-    //Wait until the rebalance is complete and then get the results
-    RebalanceResults results = op.getResults(); 
-    //These are some of the details we can get about the run from the API
-    System.out.println("Took " + results.getTotalTime() + " milliseconds\n"); 
-    System.out.println("Transferred " + results.getTotalBucketTransferBytes() + " bytes\n");
-    ```
-
-You can also just simulate a rebalance through the API, to see if it's worth it to run:
-
-``` pre
-ResourceManager manager = cache.getResourceManager(); 
-RebalanceOperation op = manager.createRebalanceFactory().simulate(); 
-RebalanceResults results = op.getResults(); 
-System.out.println("Rebalance would transfer " + results.getTotalBucketTransferBytes() + " bytes "); 
-System.out.println(" and create " + results.getTotalBucketCreatesCompleted() + " buckets.\n");
-```
-
-## <a id="rebalancing_pr_data__section_1592413D533D454D9E5ACFCDC4685DD1" class="no-quick-link"></a>How Partitioned Region Rebalancing Works
-
-The rebalancing operation runs asynchronously.
-
-By default, rebalancing is performed on one partitioned region at a time. For regions that have colocated data, the rebalancing works on the regions as a group, maintaining the data colocation between the regions.
-
-You can optionally rebalance multiple regions in parallel by setting the `gemfire.resource.manager.threads` system property. Setting this property to a value greater than 1 enables Geode to rebalance multiple regions in parallel, any time a rebalance operation is initiated using the API.
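-For example, you might allow two regions to rebalance in parallel by setting the property on the member's JVM command line (the value 2 is an illustrative assumption):
-
-``` pre
--Dgemfire.resource.manager.threads=2
-```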
-
-You can continue to use your partitioned regions normally while rebalancing is in progress. Read operations, write operations, and function executions continue while data is moving. If a function is executing on a local data set, you may see a performance degradation if that data moves to another host during function execution. Future function invocations are routed to the correct member.
-
-Geode tries to ensure that each member has the same percentage of its available space used for each partitioned region. The percentage is configured in the `partition-attributes` `local-max-memory` setting.
-
-Partitioned region rebalancing:
-
--   Does not allow the `local-max-memory` setting to be exceeded unless LRU eviction is enabled with overflow to disk.
--   Places multiple copies of the same bucket on different host IP addresses whenever possible.
--   Resets entry time to live and idle time statistics during bucket migration.
--   Replaces offline members.
-
-## <a id="rebalancing_pr_data__section_BE71EE52DE1A4275BC7854CA597797F4" class="no-quick-link"></a>When to Rebalance a Partitioned Region
-
-You typically want to trigger rebalancing when capacity is increased or reduced through member startup, shutdown, or failure.
-
-You may also need to rebalance when:
-
--   You use redundancy for high availability and have configured your region to not automatically recover redundancy after a loss. In this case, Geode only restores redundancy when you invoke a rebalance. See [Configure High Availability for a Partitioned Region](configuring_ha_for_pr.html).
-   You have uneven hashing of data. Uneven hashing can occur if your keys do not have a hash code method that ensures uniform distribution, or if you use a `PartitionResolver` to colocate your partitioned region data (see [Colocate Data from Different Partitioned Regions](colocating_partitioned_region_data.html#colocating_partitioned_region_data)). In either case, some buckets may receive more data than others. Rebalancing can be used to even out the load between data stores by putting fewer buckets on members that are hosting large buckets.
-
-## <a id="rebalancing_pr_data__section_495FEE48ED60433BADB7D36C73279C89" class="no-quick-link"></a>How to Simulate Region Rebalancing
-
-You can simulate the rebalance operation before moving any actual data around by executing the `rebalance` command with the following option:
-
-``` pre
-gfsh>rebalance --simulate
-```
-
-**Note:**
-If you are using `heap_lru` for data eviction, you may notice a difference between your simulated results and your actual rebalancing results. This discrepancy can arise because the VM begins evicting entries after you execute the simulation; the actual rebalance operation then makes different decisions based on the newer heap size.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/partitioned_regions/set_crash_redundancy_recovery.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/set_crash_redundancy_recovery.html.md.erb b/geode-docs/developing/partitioned_regions/set_crash_redundancy_recovery.html.md.erb
deleted file mode 100644
index 4c7311e..0000000
--- a/geode-docs/developing/partitioned_regions/set_crash_redundancy_recovery.html.md.erb
+++ /dev/null
@@ -1,43 +0,0 @@
----
-title:  Configure Member Crash Redundancy Recovery for a Partitioned Region
----
-
-Configure whether and how redundancy is recovered in a partitioned region after a member crashes.
-
-<a id="set_crash_redundancy_recovery__section_86CF741758E54DA29519E9CDDF1BC393"></a>
-Use the partition attribute `recovery-delay` to specify member crash redundancy recovery.
-
-| recovery-delay partition attribute | Effect following a member failure                                                    |
-|------------------------------------|--------------------------------------------------------------------------------------|
-| -1                                 | No automatic recovery of redundancy following a member failure. This is the default. |
-| long greater than or equal to 0    | Number of milliseconds to wait after a member failure before recovering redundancy.  |
-
-By default, redundancy is not recovered after a member crashes. If you expect to quickly restart most crashed members, combining this default setting with member join redundancy recovery can help you avoid unnecessary data shuffling while members are down. By waiting for lost members to rejoin, redundancy recovery is done using the newly started members and partitioning is better balanced with less processing.
-
-Set crash redundancy recovery using one of the following:
-
--   XML:
-
-    ``` pre
-    // Give a crashed member 10 seconds to restart 
-    // before recovering redundancy
-    <region name="PR1"> 
-      <region-attributes refid="PARTITION"> 
-        <partition-attributes recovery-delay="10000"/> 
-      </region-attributes> 
-    </region> 
-    ```
-
--   Java:
-
-    ``` pre
-    PartitionAttributes pa = new PartitionAttributesFactory().setRecoveryDelay(10000).create(); 
-    ```
-
--   gfsh:
-
-    ``` pre
-    gfsh>create region --name="PR1" --type=PARTITION --recovery-delay=10000
-    ```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/partitioned_regions/set_enforce_unique_host.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/set_enforce_unique_host.html.md.erb b/geode-docs/developing/partitioned_regions/set_enforce_unique_host.html.md.erb
deleted file mode 100644
index 19cc1ec..0000000
--- a/geode-docs/developing/partitioned_regions/set_enforce_unique_host.html.md.erb
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title:  Set Enforce Unique Host
----
-
-Configure Geode to use only unique physical machines for redundant copies of partitioned region data.
-
-Understand how to set a member's `gemfire.properties` settings. See [Reference](../../reference/book_intro.html#reference).
-
-Configure your members so that Geode always uses different physical machines for redundant copies of partitioned region data by using the `gemfire.properties` setting `enforce-unique-host`. The default for this setting is false.
-
-Example:
-
-``` pre
-enforce-unique-host=true
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/partitioned_regions/set_join_redundancy_recovery.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/set_join_redundancy_recovery.html.md.erb b/geode-docs/developing/partitioned_regions/set_join_redundancy_recovery.html.md.erb
deleted file mode 100644
index 4fe790e..0000000
--- a/geode-docs/developing/partitioned_regions/set_join_redundancy_recovery.html.md.erb
+++ /dev/null
@@ -1,49 +0,0 @@
----
-title:  Configure Member Join Redundancy Recovery for a Partitioned Region
----
-
-Configure whether and how redundancy is recovered in a partitioned region after a member joins.
-
-<a id="set_join_redundancy_recovery__section_D6FB0D69CC454B53B9CF1E656A44465C"></a>
-Use the partition attribute `startup-recovery-delay` to specify member join redundancy recovery.
-
-| startup-recovery-delay partition attribute | Effect following a member join                                                                                                                                                                                               |
-|--------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| -1                                         | No automatic recovery of redundancy after a new member comes online. If you use this and the default `recovery-delay` setting, you can only recover redundancy by kicking off rebalancing through a cacheserver or API call. |
-| long greater than or equal to **0**        | Number of milliseconds to wait after a member joins before recovering redundancy. The default is 0 (zero), which causes immediate redundancy recovery whenever a new partitioned region host joins.                   |
-
-Setting this to a value higher than the default of 0 allows multiple new members to join before redundancy recovery kicks in. With the multiple members present during recovery, the system will spread redundancy recovery among them. With no delay, if multiple members are started in close succession, the system may choose only the first member started for most or all of the redundancy recovery.
-
-**Note:**
-Satisfying redundancy is not the same as adding capacity. If redundancy is satisfied, new members do not take buckets until you invoke a rebalance.
-
-**Note:**
-With parallel recovery introduced in version 8.2, redundancy may be recovered more quickly than in previous versions. For this reason, it is even more important to configure `startup-recovery-delay` to an appropriate value if you intend to restart multiple members at once. Set `startup-recovery-delay` to a value that ensures all members are up and available *before* redundancy recovery kicks in.
-
-Set join redundancy recovery using one of the following:
-
--   XML:
-
-    ``` pre
-    // Wait 5 seconds after a new member joins before  
-    // recovering redundancy
-    <region name="PR1"> 
-      <region-attributes refid="PARTITION"> 
-        <partition-attributes startup-recovery-delay="5000"/> 
-      </region-attributes> 
-    </region> 
-    ```
-
--   Java:
-
-    ``` pre
-    PartitionAttributes pa = new PartitionAttributesFactory().setStartupRecoveryDelay(5000).create(); 
-    ```
-
--   gfsh:
-
-    ``` pre
-    gfsh>create region --name="PR1" --type=PARTITION --startup-recovery-delay=5000
-    ```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/partitioned_regions/set_pr_redundancy.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/set_pr_redundancy.html.md.erb b/geode-docs/developing/partitioned_regions/set_pr_redundancy.html.md.erb
deleted file mode 100644
index ae12721..0000000
--- a/geode-docs/developing/partitioned_regions/set_pr_redundancy.html.md.erb
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title:  Set the Number of Redundant Copies
----
-
-Configure in-memory high availability for your partitioned region by specifying the number of secondary copies you want to maintain in the region's data stores.
-
-Specify the number of redundant copies you want for your partitioned region data in the partition attribute `redundant-copies` setting. The default setting is 0. 
-
-For example:
-
--   XML:
-
-    ``` pre
-    <region name="PR1"> 
-      <region-attributes refid="PARTITION"> 
-        <partition-attributes redundant-copies="1"/> 
-      </region-attributes> 
-    </region> 
-    ```
-
--   Java:
-
-    ``` pre
-    PartitionAttributes pa = 
-        new PartitionAttributesFactory().setRedundantCopies(1).create(); 
-    ```
-
--   gfsh:
-
-    ``` pre
-    gfsh>create region --name="PR1" --type=PARTITION --redundant-copies=1
-    ```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/partitioned_regions/set_redundancy_zones.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/set_redundancy_zones.html.md.erb b/geode-docs/developing/partitioned_regions/set_redundancy_zones.html.md.erb
deleted file mode 100644
index f5a0a10..0000000
--- a/geode-docs/developing/partitioned_regions/set_redundancy_zones.html.md.erb
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title:  Configure Redundancy Zones for Members
----
-
-Group members into redundancy zones so Geode will separate redundant data copies into different zones.
-
-Understand how to set a member's `gemfire.properties` settings. See [Reference](../../reference/book_intro.html#reference).
-
-Group your partition region hosts into redundancy zones with the `gemfire.properties` setting `redundancy-zone`. 
-
-For example, if you had redundancy set to 1, so you have one primary and one secondary copy of each data entry, you could split primary and secondary data copies between two machine racks by defining one redundancy zone for each rack. To do this, you set this zone in the `gemfire.properties` for all members that run on one rack:
-``` pre
-redundancy-zone=rack1
-```
-
-You would set this zone in the `gemfire.properties` for all members on the other rack:
-``` pre
-redundancy-zone=rack2
-```
-
-Each secondary copy would be hosted on the rack opposite the rack where its primary copy is hosted.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/partitioned_regions/using_custom_partition_resolvers.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/partitioned_regions/using_custom_partition_resolvers.html.md.erb b/geode-docs/developing/partitioned_regions/using_custom_partition_resolvers.html.md.erb
deleted file mode 100644
index df9e4b3..0000000
--- a/geode-docs/developing/partitioned_regions/using_custom_partition_resolvers.html.md.erb
+++ /dev/null
@@ -1,204 +0,0 @@
----
-title:  Custom-Partition Your Region Data
----
-
-By default, Geode partitions each data entry into a bucket using a hashing policy on the key. Additionally, the physical location of the key-value pair is abstracted away from the application. You can change these policies for a partitioned region. You can provide your own data partitioning resolver and you can additionally specify which members host which data buckets.
-
-<a id="custom_partition_region_data__section_CF05CE974C9C4AF78430DA55601D2158"></a>
-**Note:**
-If you are colocating data between regions and custom partitioning the data in the regions, all colocated regions must use the same custom partitioning mechanism. See [Colocate Data from Different Partitioned Regions](colocating_partitioned_region_data.html#colocating_partitioned_region_data).
-
-<a id="custom_partition_region_data__section_1D7043815DF24308ABE4C78BFDFEE686"></a>
-
-For standard partitioning, use `org.apache.geode.cache.PartitionResolver`. To implement fixed partitioning, use `org.apache.geode.cache.FixedPartitionResolver`.
-
-<a id="custom_partition_region_data__section_5A8D752F02834146A37D9430F1CA32DA"></a>
-
-**Prerequisites**
-
--   Create partitioned regions. See [Understanding Partitioning](how_partitioning_works.html) and [Configuring Partitioned Regions](managing_partitioned_regions.html#configure_partitioned_regions).
--   Decide whether to use standard custom partitioning or fixed custom partitioning. See [Understanding Custom Partitioning and Data Colocation](custom_partitioning_and_data_colocation.html#custom_partitioning_and_data_colocation).
--   If you also want to colocate data from multiple regions, understand how to colocate. See [Colocate Data from Different Partitioned Regions](colocating_partitioned_region_data.html#colocating_partitioned_region_data).
-
-**Procedure**
-
-1.  Using `org.apache.geode.cache.PartitionResolver` (standard partitioning) or `org.apache.geode.cache.FixedPartitionResolver` (fixed partitioning), implement the standard partitioning resolver or the fixed partitioning resolver in one of the following locations, listed here in the search order used by Geode:
-    -   **Custom class**. You provide this class as the partition resolver to the region creation.
-    -   **Entry key**. You use the implementing key object for every operation on the region entries.
-    -   **Cache callback argument**. This implementation restricts you to using methods that accept a cache callback argument to manage the region entries. For a full list of the methods that take a callback argument, see the `Region` Javadocs.
-
-2.  If you need the resolver's `getName` method, program that.
-3.  Program the resolver's `getRoutingObject` method to return the routing object for each entry, based on how you want to group the entries. Give the same routing object to entries you want to group together. Geode will place the entries in the same bucket.
-
-    **Note:**
-    Only fields on the key should be used when creating the routing object. Do not use the value or additional metadata for this purpose.
-
-    For example, here is an implementation on a region key object that groups the entries by month and year:
-
-    ``` pre
-    public class TradeKey implements PartitionResolver 
-    { 
-        private String tradeID; 
-        private Month month; 
-        private Year year; 
-        public TradeKey(){ } 
-        public TradeKey(Month month, Year year)
-        { 
-            this.month = month; 
-            this.year = year; 
-        } 
-        public Serializable getRoutingObject(EntryOperation opDetails)
-        { 
-            return this.month + "" + this.year; 
-        }
-        public String getName()
-        { 
-            return "TradeKeyResolver"; 
-        }
-        public void close() { }
-    }
-    ```
-
-4.  For fixed partitioning only, program and configure additional fixed partitioning pieces:
-    1.  Set the fixed partition attributes for each member.
-
-        These attributes define the data stored for the region by the member and must be different for different members. See `org.apache.geode.cache.FixedPartitionAttributes` for definitions of the attributes. Define each `partition-name` in your data host members for the region. For each partition name, in the member you want to host the primary copy, define it with `is-primary` set to `true`. In every member you want to host the secondary copy, define it with `is-primary` set to `false` (the default). The number of secondaries must match the number of redundant copies you have defined for the region. See [Configure High Availability for a Partitioned Region](configuring_ha_for_pr.html).
-
-        **Note:**
-        Buckets for a partition are hosted only by the members that have defined the partition name in their `FixedPartitionAttributes`.
-
-        These examples set the partition attributes for a member to be the primary host for the "Q1" partition data and a secondary host for "Q3" partition data.
-        -   XML:
-
-            ``` pre
-            <cache>
-               <region name="Trades">
-                  <region-attributes>
-                     <partition-attributes redundant-copies="1">
-                       <partition-resolver name="QuarterFixedPartitionResolver">
-                          <class-name>myPackage.QuarterFixedPartitionResolver</class-name>
-                       </partition-resolver>
-                       <fixed-partition-attributes partition-name="Q1" is-primary="true"/>
-                       <fixed-partition-attributes partition-name="Q3" is-primary="false" num-buckets="6"/>
-                     </partition-attributes> 
-                  </region-attributes>
-               </region>
-            </cache>
-            ```
-        -   Java:
-
-
-            ``` pre
-            FixedPartitionAttribute fpa1 = FixedPartitionAttributes.createFixedPartition("Q1", true);
-            FixedPartitionAttribute fpa3 = FixedPartitionAttributes.createFixedPartition("Q3", false, 6);
-
-            PartitionAttributesFactory paf = new PartitionAttributesFactory()
-                 .setPartitionResolver(new QuarterFixedPartitionResolver())
-                 .setTotalNumBuckets(12)
-                 .setRedundantCopies(2)
-                 .addFixedPartitionAttribute(fpa1)
-                 .addFixedPartitionAttribute(fpa3);
-
-            Cache c = new CacheFactory().create();
-
-            Region r = c.createRegionFactory()
-                .setPartitionAttributes(paf.create())
-                .create("Trades");
-            ```
-        -   gfsh:
-
-            You cannot specify a partition resolver using gfsh.
-
-    2.  Program the `FixedPartitionResolver` `getPartitionName` method to return the name of the partition for each entry, based on where you want the entries to reside. Geode uses `getPartitionName` and `getRoutingObject` to determine where an entry is placed.
-
-        **Note:**
-        To group entries, assign every entry in the group the same routing object and the same partition name.
-
-        This example places the data based on date, with a different partition name for each quarter-year and a different routing object for each month.
-
-        ``` pre
-        /**
-         * Returns one of four different partition names
-         * (Q1, Q2, Q3, Q4) depending on the entry's date
-         */
-        class QuarterFixedPartitionResolver implements
-            FixedPartitionResolver<Date, String> {
-
-          @Override
-          public String getPartitionName(EntryOperation<Date, String> opDetails,
-              Set<String> targetPartitions) {
-
-             Date date = opDetails.getKey();
-             Calendar cal = Calendar.getInstance();
-             cal.setTime(date);
-             int month = cal.get(Calendar.MONTH);
-             if (month >= 0 && month < 3) {
-                if (targetPartitions.contains("Q1")) return "Q1";
-             }
-             else if (month >= 3 && month < 6) {
-                if (targetPartitions.contains("Q2")) return "Q2";
-             }
-             else if (month >= 6 && month < 9) {
-                if (targetPartitions.contains("Q3")) return "Q3";
-             }
-             else if (month >= 9 && month < 12) {
-                if (targetPartitions.contains("Q4")) return "Q4";
-             }
-             return "Invalid Quarter";
-          }
-
-          @Override
-          public String getName() {
-             return "QuarterFixedPartitionResolver";
-          }
-
-          @Override
-          public Serializable getRoutingObject(EntryOperation<Date, String> opDetails) {
-             Date date = opDetails.getKey();
-             Calendar cal = Calendar.getInstance();
-             cal.setTime(date);
-             int month = cal.get(Calendar.MONTH);
-             return month;
-          }
-
-          @Override
-          public void close() {
-          }
-        }
-        ```
-
-5.  Configure or program the region so Geode finds your resolver for every operation that you perform on the region's entries. How you do this depends on where you chose to program your custom partitioning implementation (step 1).
-    1.  **Custom class**. Define the class for the region at creation. The resolver will be used for every entry operation. Use one of these methods:
-        -   XML:
-
-            ``` pre
-            <region name="trades">
-                <region-attributes>
-                    <partition-attributes>
-                        <partition-resolver name="TradesPartitionResolver"> 
-                            <class-name>myPackage.TradesPartitionResolver
-                            </class-name>
-                        </partition-resolver>
-                    </partition-attributes>
-                </region-attributes>
-            </region>
-            ```
-        -   Java:
-
-
-            ``` pre
-            PartitionResolver resolver = new TradesPartitionResolver();
-            PartitionAttributes attrs = 
-                new PartitionAttributesFactory()
-                .setPartitionResolver(resolver).create();
-
-            Cache c = new CacheFactory().create();
-
-            Region r = c.createRegionFactory()
-                .setPartitionAttributes(attrs)
-                .create("trades");
-            ```
-        -   gfsh:
-
-            You cannot specify a partition resolver using gfsh.
-
-    2.  **Entry key**. Use the key object with the resolver implementation for every entry operation.
-    3.  **Cache callback argument**. Provide the argument to every call that accesses an entry. This restricts you to calls that take a callback argument.
-
-6.  If your colocated data is in a server system, add the `PartitionResolver` implementation class to the `CLASSPATH` of your Java clients. The resolver is used for single hop access to partitioned region data in the servers.
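The single-hop mechanism works because a client can compute an entry's bucket from its routing object alone. Here is a minimal standalone sketch of that idea, assuming hashCode-modulo-bucket-count placement (illustrative only; `BucketSketch` and `bucketFor` are invented names, not the Geode API):

```java
// Hypothetical helper (not part of Geode): maps a routing object to a bucket id.
public class BucketSketch {
    static int bucketFor(Object routingObject, int totalNumBuckets) {
        // Math.abs guards against negative hash codes.
        return Math.abs(routingObject.hashCode() % totalNumBuckets);
    }

    public static void main(String[] args) {
        // Entries that share a routing object always share a bucket, so a
        // client holding the resolver can route an operation in one hop.
        System.out.println(
            bucketFor("2016-Q1", 113) == bucketFor("2016-Q1", 113)); // true
    }
}
```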
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/query_additional/advanced_querying.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_additional/advanced_querying.html.md.erb b/geode-docs/developing/query_additional/advanced_querying.html.md.erb
deleted file mode 100644
index d94da69..0000000
--- a/geode-docs/developing/query_additional/advanced_querying.html.md.erb
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title:  Advanced Querying
----
-
-This section includes advanced querying topics such as using query indexes, using query bind parameters, querying partitioned regions, and query debugging.
-
--   **[Performance Considerations](../../developing/querying_basics/performance_considerations.html)**
-
-    This topic covers considerations for improving query performance.
-
--   **[Monitoring Queries for Low Memory](../../developing/querying_basics/monitor_queries_for_low_memory.html)**
-
-    The query monitoring feature prevents out-of-memory exceptions from occurring when you execute queries or create indexes.
-
--   **[Using Query Bind Parameters](../../developing/query_additional/using_query_bind_parameters.html)**
-
-    Using query bind parameters in Geode queries is similar to using prepared statements in SQL, where parameters can be set during query execution. This allows the user to build a query once and execute it multiple times by passing the query conditions at run time.
-
--   **[Working with Indexes](../../developing/query_index/query_index.html)**
-
-    The Geode query engine supports indexing. An index can provide significant performance gains for query execution.
-
--   **[Querying Partitioned Regions](../../developing/querying_basics/querying_partitioned_regions.html)**
-
-    Geode allows you to manage and store large amounts of data across distributed nodes using partitioned regions. The basic unit of storage for a partitioned region is a bucket, which resides on a Geode node and contains all the entries that map to a single hashcode. In a typical partitioned region query, the system distributes the query to all buckets across all nodes, then merges the result sets and sends back the query results.
-
--   **[Query Debugging](../../developing/query_additional/query_debugging.html)**
-
-    You can debug a specific query at the query level by adding the `<trace>` keyword before the query string that you want to debug.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/query_additional/case_sensitivity.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_additional/case_sensitivity.html.md.erb b/geode-docs/developing/query_additional/case_sensitivity.html.md.erb
deleted file mode 100644
index 2d49259..0000000
--- a/geode-docs/developing/query_additional/case_sensitivity.html.md.erb
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title:  Case Sensitivity
----
-
-Query language keywords such as SELECT, NULL, DATE, and &lt;TRACE&gt; are case-insensitive. Identifiers such as attribute names, method names, and path expressions are case-sensitive.
-
-In terms of query string and region entry matching, if you want to perform a case-insensitive search on a particular field, you can use the Java String class `toUpperCase` and `toLowerCase` methods in your query. For example:
-
-``` pre
-SELECT entry.value FROM /exampleRegion.entries entry WHERE entry.value.toUpperCase LIKE '%BAR%'
-```
-
-or
-
-``` pre
-SELECT * FROM /exampleRegion WHERE foo.toLowerCase LIKE '%bar%'
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/query_additional/literals.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_additional/literals.html.md.erb b/geode-docs/developing/query_additional/literals.html.md.erb
deleted file mode 100644
index 37fcc0f..0000000
--- a/geode-docs/developing/query_additional/literals.html.md.erb
+++ /dev/null
@@ -1,65 +0,0 @@
----
-title:  Supported Literals
----
-
-## <a id="literals__section_BA2D0AC444EB45088F00D9E2C8A1DD06" class="no-quick-link"></a>Comparing Values With java.util.Date
-
-Geode supports the following literal types:
-
-<dt>**boolean**</dt>
-<dd>A `boolean` value, either TRUE or FALSE</dd>
-<dt>**int** and **long**</dt>
-<dd>An integer literal is of type `long` if it has a suffix of the ASCII letter L. Otherwise it is of type `int`.</dd>
-<dt>**floating point**</dt>
-<dd>A floating-point literal is of type `float` if it has a suffix of an ASCII letter `F`. Otherwise its type is `double`. Optionally, it can have a suffix of an ASCII letter `D`. A double or floating point literal can optionally include an exponent suffix of `E` or `e`, followed by a signed or unsigned number.</dd>
-
-<dt>**string**</dt>
-<dd>String literals are delimited by single quotation marks. Embedded single-quotation marks are doubled. For example, the character string `'Hello'` evaluates to the value `Hello`, while the character string `'He said, ''Hello'''` evaluates to `He said, 'Hello'`. Embedded newlines are kept as part of the string literal.</dd>
-<dt>**char**</dt>
-<dd>A literal is of type char if it is a string literal prefixed by the keyword `CHAR`, otherwise it is of type `string`. The `CHAR` literal for the single-quotation mark character is `CHAR` `''''` (four single quotation marks).</dd>
-<dt>**date**</dt>
-<dd>A `java.sql.Date` object that uses the JDBC format prefixed with the DATE keyword: `DATE yyyy-mm-dd`. In the `Date`, `yyyy` represents the year, `mm` represents the month, and `dd` represents the day. The year must be represented by four digits; a two-digit shorthand for the year is not allowed.</dd>
-<dt>**time**</dt>
-<dd>A `java.sql.Time` object that uses the JDBC format (based on a 24-hour clock) prefixed with the TIME keyword: `TIME hh:mm:ss`. In the `Time`, `hh` represents the hours, `mm` represents the minutes, and `ss` represents the seconds.</dd>
-<dt>**timestamp**</dt>
-<dd>A `java.sql.Timestamp` object that uses the JDBC format with a TIMESTAMP prefix: `TIMESTAMP yyyy-mm-dd hh:mm:ss.fffffffff` In the `Timestamp`, `yyyy-mm-dd` represents the `date`, `hh:mm:ss` represents the `time`, and `fffffffff` represents the fractional seconds (up to nine digits).</dd>
-<dt>**NIL**</dt>
-<dd>Equivalent alternative of `NULL`.</dd>
-<dt>**NULL**</dt>
-<dd>The same as `null` in Java.</dd>
-<dt>**UNDEFINED**</dt>
-<dd>A special literal that is a valid value for any data type. An `UNDEFINED` value is the result of accessing an attribute of a null-valued attribute. Note that if you access an attribute that has an explicit value of null, then it is not undefined. For example if a query accesses the attribute address.city and address is null, the result is undefined. If the query accesses address, then the result is not undefined, it is `NULL`.</dd>
-
-You can compare temporal literal values `DATE`, `TIME`, and `TIMESTAMP` with `java.util.Date` values. There is no literal for `java.util.Date` in the query language.
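-
-A few query strings illustrating these literal forms (the region and field names here are invented for illustration):
-
-``` pre
-SELECT * FROM /exampleRegion p WHERE p.count = 100L
-SELECT * FROM /exampleRegion p WHERE p.rate > 0.75F
-SELECT * FROM /exampleRegion p WHERE p.name = 'O''Hara'
-SELECT * FROM /exampleRegion p WHERE p.startDate >= DATE 2015-01-01
-SELECT * FROM /exampleRegion p WHERE p.created < TIMESTAMP 2015-06-01 12:00:00.000000000
-```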
-
-## <a id="literals__section_9EE6CFC410D2409188EDEAA43AC85851" class="no-quick-link"></a>Type Conversion
-
-The Geode query processor performs implicit type conversions and promotions under certain cases in order to evaluate expressions that contain different types. The query processor performs binary numeric promotion, method invocation conversion, and temporal type conversion.
-
-## <a id="literals__section_F5A3FC509FD04E09B5468BA94B814701" class="no-quick-link"></a>Binary Numeric Promotion
-
-The query processor performs binary numeric promotion on the operands of the following operators:
-
--   Comparison operators &lt;, &lt;=, &gt;, and &gt;=
--   Equality operators = and &lt;&gt;
-
-Binary numeric promotion widens the operands in a numeric expression to the widest representation used by any of the operands. In each expression, the query processor applies the following rules in the prescribed order until a conversion is made:
-
-1.  If either operand is of type double, the other is converted to double.
-2.  If either operand is of type float, the other is converted to float.
-3.  If either operand is of type long, the other is converted to long.
-4.  Otherwise, both operands are converted to type int.
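The same widening order can be seen in plain Java arithmetic comparisons, which the query processor's rules mirror (a standalone sketch, not query code):

```java
public class PromotionSketch {
    public static void main(String[] args) {
        int i = 7;
        long l = 7L;
        double d = 7.0;
        // Rule 1: the int operand is widened to double before comparison.
        System.out.println(i == d); // true
        // Rule 3: the int operand is widened to long before comparison.
        System.out.println(i == l); // true
    }
}
```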
-
-## <a id="literals__section_BA277AC4A9B34C93A5291ECC1FDC11C7" class="no-quick-link"></a>Method Invocation Conversion
-
-Method invocation conversion in the query language follows the same rules as Java method invocation conversion, except that the query language uses runtime types instead of compile-time types and handles null arguments differently than in Java. One aspect of using runtime types is that an argument with a null value has no typing information, and so can be matched with any type parameter. When a null argument is used, if the query processor cannot determine the proper method to invoke based on the non-null arguments, it throws an `AmbiguousNameException`.
-
-## <a id="literals__section_0A1A6EFE98A24538B651373B1C6ED8C0" class="no-quick-link"></a>Temporal Type Conversion
-
-The temporal types that the query language supports include the Java types `java.util.Date`, `java.sql.Date`, `java.sql.Time`, and `java.sql.Timestamp`, which are all treated the same and can be compared and used in indexes. When compared with each other, these types are all treated as nanosecond quantities.
-
-## <a id="literals__section_73255A4630C94D04B461B1480AAF2F66" class="no-quick-link"></a>Enum Conversion
-
-Enums are not automatically converted. To use Enum values in query, you must use the toString method of the enum object or use a query bind parameter. See [Enum Objects](../query_select/the_where_clause.html#the_where_clause__section_59E7D64746AE495D942F2F09EF7DB9B5) for more information.
-
-## <a id="literals__section_CB624C143A2743C5ADC6F95C962F176B" class="no-quick-link"></a>Query Evaluation of Float.NaN and Double.NaN
-
-Float.NaN and Double.NaN are not evaluated as primitives; instead, they are compared in the same manner as the JDK methods Float.compareTo and Double.compareTo. See [Double.NaN and Float.NaN Comparisons](../query_select/the_where_clause.html#the_where_clause__section_E7206D045BEC4F67A8D2B793922BF213) for more information.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/query_additional/operators.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_additional/operators.html.md.erb b/geode-docs/developing/query_additional/operators.html.md.erb
deleted file mode 100644
index b05d5d3..0000000
--- a/geode-docs/developing/query_additional/operators.html.md.erb
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title:  Operators
----
-
-Geode supports comparison, logical, unary, map, index, dot, and right arrow operators.
-
-## <a id="operators__section_A3FB372F85D840D7A49CB95BD7FCA7C6" class="no-quick-link"></a>Comparison Operators
-
-Comparison operators compare two values and return the result of the comparison, either TRUE or FALSE.
-
-The following are supported comparison operators:
-
-|                       |                                |
-|-----------------------|--------------------------------|
-| = equal to            | &lt; less than                 |
-| &lt;&gt; not equal to | &lt;= less than or equal to    |
-| != not equal to       | &gt; greater than              |
-|                       | &gt;= greater than or equal to |
-
-The equal and not equal operators have lower precedence than the other comparison operators. They can be used with null. To perform equality or inequality comparisons with UNDEFINED, use the IS\_DEFINED and IS\_UNDEFINED preset query functions instead of these comparison operators.
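-
-For example, assuming a region /exampleRegion whose entries may or may not define a status field:
-
-``` pre
-SELECT DISTINCT * FROM /exampleRegion e WHERE IS_DEFINED(e.status)
-SELECT DISTINCT * FROM /exampleRegion e WHERE IS_UNDEFINED(e.status)
-```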
-
-## <a id="operators__section_6A85A9DDA47E47009FDE1CC38D7BA66C" class="no-quick-link"></a>Logical Operators
-
-The logical operators AND and OR allow you to create more complex expressions by combining expressions to produce a boolean result. When you combine two conditional expressions using the AND operator, both conditions must evaluate to true for the entire expression to be true. When you combine two conditional expressions using the OR operator, the expression evaluates to true if either one or both of the conditions are true. You can create complex expressions by combining multiple simple conditional expressions with AND and OR operators. When expressions use AND and OR operators, AND has higher precedence than OR.
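-
-For example, because AND has higher precedence than OR, the following two queries are equivalent (the region and field names are illustrative):
-
-``` pre
-SELECT * FROM /exampleRegion p WHERE p.ID > 5 AND p.status = 'active' OR p.amount > 1000
-SELECT * FROM /exampleRegion p WHERE (p.ID > 5 AND p.status = 'active') OR p.amount > 1000
-```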
-
-## <a id="operators__section_A970AE75B0D24E0B9E1B61BE2D9842D8" class="no-quick-link"></a>Unary Operators
-
-Unary operators operate on a single value or expression, and have lower precedence than comparison operators in expressions. Geode supports the unary operator NOT. NOT is the negation operator, which changes the value of the operand to its opposite. So if an expression evaluates to TRUE, NOT changes it to FALSE. The operand must be a boolean.
-
-## <a id="operators__section_E78FB4FB3703471C8186A0E26D25F01F" class="no-quick-link"></a>Map and Index Operators
-
-Map and index operators access elements in key/value collections (such as maps and regions) and ordered collections (such as arrays, lists, and `String`s). The operator is represented by a set of square brackets (`[ ]`) immediately following the name of the collection. The mapping or indexing specification is provided inside these brackets.
-
-Array, list, and `String` elements are accessed using an index value. Indexing starts at 0 for the first element, 1 for the second element, and so on. If `myList` is an array, list, or String and `index` is an expression that evaluates to a non-negative integer, then `myList[index]` represents the (`index + 1`)th element of `myList`. The elements of a `String` are the list of characters that make up the string.
-
-Map and region values are accessed by key using the same syntax. The key can be any `Object`. For a `Region`, the map operator performs a non-distributed `get` in the local cache only - with no use of `netSearch`. So `myRegion[keyExpression]` is the equivalent of `myRegion.getEntry(keyExpression).getValue`.
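-
-For example, assuming entries with an array field names and a Map field positions:
-
-``` pre
-p.names[0]          // the first element of an array or list
-p.positions['SUN']  // the map value stored under the key 'SUN'
-```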
-
-## <a id="operators__section_6C0BB787B2324B85AA02AA19D4822A83" class="no-quick-link"></a>Dot, Right Arrow, and Forward Slash Operators
-
-The dot operator (`.`) separates attribute names in a path expression, and specifies the navigation through object attributes. An alternate equivalent to the dot is the right arrow, (`->`). The forward slash is used to separate region names when navigating into subregions.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/query_additional/order_by_on_partitioned_regions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_additional/order_by_on_partitioned_regions.html.md.erb b/geode-docs/developing/query_additional/order_by_on_partitioned_regions.html.md.erb
deleted file mode 100644
index 4d52a88..0000000
--- a/geode-docs/developing/query_additional/order_by_on_partitioned_regions.html.md.erb
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title:  Using ORDER BY on Partitioned Regions
----
-
-To execute a query with an ORDER BY clause on a partitioned region, the fields specified in the ORDER BY clause must be part of the projection list.
-
-When an ORDER BY clause is used with a partition region query, the query is executed separately on each region host, the local query coordinator, and all remote members. The results are all gathered by the query coordinator. The cumulative result set is built by applying ORDER BY on the gathered results. If the LIMIT clause is also used in the query, ORDER BY and LIMIT are applied on each node before each node's results are returned to the coordinator. Then the clauses are applied to the cumulative result set to get the final result set, which is returned to the calling application.
-
-**Example:**
-
-``` pre
-// This query works because p.status is part of projection list
-select distinct p.ID, p.status from /region p where p.ID > 5 order by p.status
-// This query works provided that status is part of the value indicated by *
-select distinct * from /region where ID > 5 order by status 
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/query_additional/partitioned_region_key_or_field_value.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_additional/partitioned_region_key_or_field_value.html.md.erb b/geode-docs/developing/query_additional/partitioned_region_key_or_field_value.html.md.erb
deleted file mode 100644
index f97c011..0000000
--- a/geode-docs/developing/query_additional/partitioned_region_key_or_field_value.html.md.erb
+++ /dev/null
@@ -1,66 +0,0 @@
----
-title:  Optimizing Queries on Data Partitioned by a Key or Field Value
----
-
-You can improve query performance on data that is partitioned by key or a field value by creating a key index and then executing the query using the `FunctionService` with the key or field value used as filter.
-
-The following is an example of how to optimize a query that runs on data partitioned by a region key value. In the following example, data is partitioned by the "orderId" field.
-
-1.  Create a key index on the orderId field. See [Creating Key Indexes](../query_index/creating_key_indexes.html#concept_09E29507AF0D42CF81D261B030D0B7C8) for more details.
-2.  Execute the query using the function service with orderId provided as the filter to the function context. For example:
-
-    ``` pre
-    /**
-     * Execute MyFunction for query on data partitioned by orderId key
-     *
-     */
-    public class TestFunctionQuery {
-
-      public static void main(String[] args) {
-
-        Set filter =  new HashSet();
-        ResultCollector rcollector = null;
-
-        //Filter data based on orderId = 12345
-        filter.add(12345);
-
-        //Query to get all orders that match orderId 12345 and amount > 1000
-        String qStr = "SELECT * FROM /Orders WHERE orderId = 12345 AND amount > 1000";
-
-        try {
-          Function func = new MyFunction("testFunction");
-
-          Region region = CacheFactory.getAnyInstance().getRegion("Orders");
-
-          //Function will be routed to the one node containing the bucket
-          //for orderId=12345, and the query will execute on that bucket.
-          rcollector = FunctionService
-              .onRegion(region)
-              .withArgs(qStr)
-              .withFilter(filter)
-              .execute(func);
-
-          Object result = rcollector.getResult();
-
-          //Results from one or multiple nodes.
-          ArrayList resultList = (ArrayList)result;
-
-          List queryResults = new ArrayList();
-
-          if (resultList.size()!=0) {
-            for (Object obj: resultList) {
-              if (obj != null) {
-                queryResults.addAll((ArrayList)obj);
-              }
-            }
-          }
-          printResults(queryResults);
-
-        } catch (FunctionException ex) {
-            getLogger().info(ex);
-        }
-      }
-    }
-    ```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/query_additional/partitioned_region_query_restrictions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_additional/partitioned_region_query_restrictions.html.md.erb b/geode-docs/developing/query_additional/partitioned_region_query_restrictions.html.md.erb
deleted file mode 100644
index 429081e..0000000
--- a/geode-docs/developing/query_additional/partitioned_region_query_restrictions.html.md.erb
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title:  Partitioned Region Query Restrictions
----
-
-## <a id="concept_5353476380D44CC1A7F586E5AE1CE7E8__section_16875A7EA07D42C08FB194F4A854360D" class="no-quick-link"></a>Query Restrictions in Partitioned Regions
-
-Partitioned region queries function the same as non-partitioned region queries, except for the restrictions listed in this section. Partitioned region queries that do not follow these guidelines generate an `UnsupportedOperationException`.
-
--   Join queries between partitioned regions, and between partitioned regions and replicated regions, are supported only through the function service. Join queries on partitioned regions are not supported through the client/server API.
--   You can run join queries between two partitioned regions, or between a partitioned region and a replicated region, only if the regions are co-located. Equi-join queries are supported only on partitioned regions that are co-located and where the co-located columns are indicated in the WHERE clause of the query. In the case of multi-column partitioning, there should also be an AND clause in the WHERE specification. See [Colocate Data from Different Partitioned Regions](../partitioned_regions/colocating_partitioned_region_data.html#colocating_partitioned_region_data) for more information on partitioned region co-location.
--   Equi-join queries are allowed between partitioned regions and between partitioned regions and local replicated regions as long as the local replicated region also exists on all partitioned region nodes. To perform a join query on a partitioned region and another region (partitioned or not), you need to use the `query.execute` method and supply it with a function execution context. See [Performing an Equi-Join Query on Partitioned Regions](../partitioned_regions/join_query_partitioned_regions.html#concept_B930D276F49541F282A2CFE639F107DD) for an example.
--   The query must be just a SELECT expression (as opposed to arbitrary OQL expressions), preceded by zero or more IMPORT statements. For example, this query is not allowed because it is not just a SELECT expression:
-
-    ``` pre
-    // NOT VALID for partitioned regions
-    (SELECT DISTINCT * FROM /prRgn WHERE attribute > 10).size
-    ```
-
-    This query is allowed:
-
-    ``` pre
-    // VALID for partitioned regions
-    SELECT DISTINCT * FROM /prRgn WHERE attribute > 10
-    ```
-
--   The SELECT expression itself can be arbitrarily complex, including nested SELECT expressions, as long as only one partitioned region is referenced.
--   The partitioned region reference can only be in the first FROM clause iterator. Additional FROM clause iterators are allowed if they do not reference any regions (such as drilling down into the values in the partitioned region).
--   The first FROM clause iterator must contain only one reference to the partitioned region (the reference can be a parameter, such as $1).
--   The first FROM clause iterator cannot contain a subquery, but subqueries are allowed in additional FROM clause iterators.
--   You can use ORDER BY on partitioned region queries, but the fields that are specified in the ORDER BY clause must be part of the projection list.
--   If a partitioned region (or a bucket) being queried has been destroyed, the query is reattempted on the new primary for the destroyed bucket (if one exists). After a certain number of attempts, a QueryException is thrown if not all of the buckets (calculated at the start of the query) can be queried.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/query_additional/query_debugging.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_additional/query_debugging.html.md.erb b/geode-docs/developing/query_additional/query_debugging.html.md.erb
deleted file mode 100644
index b39cf08..0000000
--- a/geode-docs/developing/query_additional/query_debugging.html.md.erb
+++ /dev/null
@@ -1,87 +0,0 @@
----
-title:  Query Debugging
----
-
-You can debug a specific query at the query level by adding the `<trace>` keyword before the query string that you want to debug.
-
-Here is an example:
-
-``` pre
-<trace> select * from /exampleRegion
-```
-
-You can also write:
-
-``` pre
-<TRACE> select * from /exampleRegion
-```
-
-When the query is executed, Geode will log a message in `$GEMFIRE_DIR/system.log` with the following information:
-
-``` pre
-[info 2011/08/29 11:24:35.472 PDT CqServer <main> tid=0x1] Query Executed in 9.619656 ms; rowCount = 99; indexesUsed(0) "select *  from /exampleRegion" 
-```
-
-If you want to enable debugging for all queries, you can enable query execution logging by setting a System property on the command line during start-up:
-
-``` pre
-gfsh>start server --name=server_name --J=-Dgemfire.Query.VERBOSE=true
-```
-
-Or you can set the property programmatically:
-
-``` pre
-System.setProperty("gemfire.Query.VERBOSE","true");
-```
-
-As an example, let us say you have an EmployeeRegion that contains Employee objects as values, and the objects have public fields in them such as ID and status.
-
-``` pre
-// Employee.java
-class Employee {
-  public int ID;
-  public String status;
-  // ...
-}
-```
-
-In addition, you have created the following indexes for the region:
-
-``` pre
-<index name="sampleIndex-1">
-<functional from-clause="/test " expression="ID"/>
-</index>
-<index name="sampleIndex-2">
-<functional from-clause="/test " expression="status"/>
-</index>
-```
-
-After you have set `gemfire.Query.VERBOSE` to "true", you could see the following debug messages in the logs after running queries on the EmployeeRegion or its indexes:
-
--   If indexes are not used in the query execution, you would see a debug message like this:
-
-    ``` pre
-    [info 2011/08/29 11:24:35.472 PDT CqServer <main> tid=0x1] Query Executed in 9.619656 ms; rowCount = 99; indexesUsed(0) "select * from /test k where ID > 0 and status='active'"
-    ```
-
--   When a single index is used in query execution, you might see a debug message like this:
-
-    ``` pre
-    [info 2011/08/29 11:24:35.472 PDT CqServer <main> tid=0x1] Query Executed in 101.43499 ms; rowCount = 199; indexesUsed(1):sampleIndex-1(Results: 199) "select count *   from /test k where ID > 0"
-    ```
-
--   When multiple indexes are used by a query, you might see a debug message like this:
-
-    ``` pre
-    [info 2011/08/29 11:24:35.472 PDT CqServer <main> tid=0x1] Query Executed in 79.43847 ms; rowCount = 199; indexesUsed(2):sampleIndex-2(Results: 100),sampleIndex-1(Results: 199) "select * from /test k where ID > 0 OR status='active'"
-    ```
-
-In the above log messages, the following information is provided:
-
--   "rowCount" represents the ResultSet size for the query.
--   "indexesUsed(n)" shows that n indexes were used for finding the results of the query.
--   Each index name and its corresponding result count are reported.
--   The log message can be identified by the original query string itself, which is appended at the end.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/query_additional/query_language_features.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_additional/query_language_features.html.md.erb b/geode-docs/developing/query_additional/query_language_features.html.md.erb
deleted file mode 100644
index eea7cc0..0000000
--- a/geode-docs/developing/query_additional/query_language_features.html.md.erb
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title:  OQL Syntax and Semantics
----
-
-<a id="concept_5B8BA904DF2A41BEAA057017777D4E90__section_33F0FD791A2448CB812E8397828B33C2"></a>
-This section covers the following querying language features:
-
--   **[Supported Character Sets](../../developing/querying_basics/supported_character_sets.html)**
-
--   **[Supported Keywords](../../developing/query_additional/supported_keywords.html)**
-
--   **[Case Sensitivity](../../developing/query_additional/case_sensitivity.html)**
-
--   **[Comments in Query Strings](../../developing/querying_basics/comments_in_query_strings.html)**
-
--   **[Query Language Grammar](../../developing/querying_basics/query_grammar_and_reserved_words.html)**
-
--   **[Operators](../../developing/query_additional/operators.html)**
-
--   **[Reserved Words](../../developing/querying_basics/reserved_words.html)**
-
--   **[Supported Literals](../../developing/query_additional/literals.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/query_additional/query_on_a_single_node.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_additional/query_on_a_single_node.html.md.erb b/geode-docs/developing/query_additional/query_on_a_single_node.html.md.erb
deleted file mode 100644
index 6b56cb0..0000000
--- a/geode-docs/developing/query_additional/query_on_a_single_node.html.md.erb
+++ /dev/null
@@ -1,155 +0,0 @@
----
-title:  Querying a Partitioned Region on a Single Node
----
-
-To direct a query to a specific partitioned region node, you can execute the query within a function. Use the following steps:
-
-1.  Implement a function which executes a query using RegionFunctionContext.
-
-    ``` pre
-    /**
-     * This function executes a query using its RegionFunctionContext
-     * which provides a filter on data which should be queried.
-     *
-     */
-    public class MyFunction extends FunctionAdapter {
-
-        private final String id;
-
-        @Override
-        public void execute(FunctionContext context) {
-
-          Cache cache = CacheFactory.getAnyInstance();
-          QueryService queryService = cache.getQueryService();
-
-          String qstr = (String) context.getArguments();
-
-          try {
-            Query query = queryService.newQuery(qstr);
-
-            //If function is executed on region, context is RegionFunctionContext
-            RegionFunctionContext rContext = (RegionFunctionContext)context;
-
-            SelectResults results = (SelectResults) query.execute(rContext);
-
-            //Send the results to function caller node.
-            context.getResultSender().sendResult((ArrayList) (results).asList());
-            context.getResultSender().lastResult(null);
-
-          } catch (Exception e) {
-            throw new FunctionException(e);
-          }
-        }
-
-        @Override
-        public boolean hasResult() {
-          return true;
-        }
-
-        @Override
-        public boolean isHA() {
-          return false;
-        }
-
-
-        public MyFunction(String id) {
-          super();
-          this.id = id;
-        }
-
-        @Override
-        public String getId() {
-          return this.id;
-        }
-      }
-    ```
-
-2.  Decide on the data you want to query. Based on this decision, you can use `PartitionResolver` to configure the organization of buckets to be queried in the Partitioned Region.
-
-    For example, let's say that you have defined the PortfolioKey class:
-
-    ``` pre
-    public class PortfolioKey implements DataSerializable {
-      private int id;
-      private long startValidTime;
-      private long endValidTime;
-      private long writtenTime;
-      
-      public int getId() {
-        return this.id;
-      }
-    ...
-    }
-    ```
-
-    You could use the `MyPartitionResolver` to store all keys with the same ID in the same bucket. This `PartitionResolver` must be configured at the time of partitioned region creation, either declaratively using XML or programmatically using the API. See [Configuring Partitioned Regions](../partitioned_regions/managing_partitioned_regions.html#configure_partitioned_regions) for more information.
-
-    ``` pre
-    /** This resolver returns the value of the ID field in the key. With this resolver, 
-     * all Portfolios using the same ID are colocated in the same bucket.
-     */
-    public class MyPartitionResolver implements PartitionResolver, Declarable {
-
-       public Serializable getRoutingObject(EntryOperation operation) {
-         return ((PortfolioKey) operation.getKey()).getId();
-       }
-    }
-    ```
-
-3.  Execute the function on a client or any other node by setting the filter in the function call.
-
-    ``` pre
-    /**
-     * Execute MyFunction for query on specified keys.
-     *
-     */
-    public class TestFunctionQuery {
-
-      public static void main(String[] args) {
-
-        ResultCollector rcollector = null;
-        PortfolioKey portfolioKey1 = ...;
-
-        //Filter data based on portfolioKey1 which is the key used in 
-        //region.put(portfolioKey1, portfolio1);
-        Set filter = Collections.singleton(portfolioKey1);
-
-        //Query to get all positions for portfolio ID = 1
-        String qStr = "SELECT positions FROM /myPartitionRegion WHERE ID = 1";
-
-        try {
-          Function func = new MyFunction("testFunction");
-
-          Region region = CacheFactory.getAnyInstance().getRegion("myPartitionRegion");
-
-          //Function will be routed to one node containing the bucket
-          //for ID=1 and query will execute on that bucket.
-          rcollector = FunctionService
-              .onRegion(region)
-              .withArgs(qStr)
-              .withFilter(filter)
-              .execute(func);
-
-          Object result = rcollector.getResult();
-
-          //Results from one or multiple nodes.
-          ArrayList resultList = (ArrayList)result;
-
-          List queryResults = new ArrayList();
-
-          if (resultList.size()!=0) {
-            for (Object obj: resultList) {
-              if (obj != null) {
-                queryResults.addAll((ArrayList)obj);
-              }
-            }
-          }
-          printResults(queryResults);
-
-        } catch (FunctionException ex) {
-            getLogger().info(ex);
-        }
-      }
-    }
-    ```
-
-