Posted to commits@geode.apache.org by km...@apache.org on 2016/10/14 22:17:35 UTC

[37/94] [abbrv] [partial] incubator-geode git commit: GEODE-1952 Consolidated docs under a single geode-docs directory

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/basic_config/the_cache/setting_cache_initializer.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/basic_config/the_cache/setting_cache_initializer.html.md.erb b/geode-docs/basic_config/the_cache/setting_cache_initializer.html.md.erb
new file mode 100644
index 0000000..20cc2c6
--- /dev/null
+++ b/geode-docs/basic_config/the_cache/setting_cache_initializer.html.md.erb
@@ -0,0 +1,59 @@
+---
+title:  Launching an Application after Initializing the Cache
+---
+
+You can specify a callback application that is launched after the cache initialization.
+
+By adding an `<initializer>` element to your `cache.xml` file, you can trigger a callback application that runs after the cache has been initialized. Applications that use the cacheserver script to start a server can also use this feature to hook in a callback application. To use this feature, specify the callback class within the `<initializer>` element, which should be added at the end of your `cache.xml` file.
+
+You can specify the `<initializer>` element for either server caches or client caches.
+
+The callback class must implement the `Declarable` interface. When the callback class is loaded, its `init` method is called, and any parameters defined in the `<initializer>` element are passed as properties.
+
+The following is an example specification.
+
+In cache.xml:
+
+``` pre
+<initializer>
+   <class-name>MyInitializer</class-name>
+   <parameter name="members">
+      <string>2</string>
+   </parameter>
+</initializer>
+```
+
+Here's the corresponding class definition:
+
+``` pre
+import java.util.Properties;
+
+import org.apache.geode.cache.Declarable;
+
+public class MyInitializer implements Declarable {
+   public void init(Properties properties) {
+      System.out.println(properties.getProperty("members"));
+   }
+}
+```
+
+The following are some additional real-world usage scenarios:
+
+1.  Start a SystemMembershipListener
+
+    ``` pre
+    <initializer>
+       <class-name>TestSystemMembershipListener</class-name>
+    </initializer>
+    ```
+
+2.  Write a custom tool that monitors cache resources
+
+    ``` pre
+    <initializer>
+       <class-name>ResourceMonitorCacheXmlLoader</class-name>
+    </initializer>
+    ```
+
+Any singleton, timer task, or thread can be instantiated and started using the `<initializer>` element.
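+As an illustrative sketch (the class name and timer interval are examples, not part of the Geode documentation), an initializer that starts a recurring monitoring task might look like this:
+
+``` pre
+import java.util.Properties;
+import java.util.Timer;
+import java.util.TimerTask;
+
+import org.apache.geode.cache.Declarable;
+
+public class MonitorInitializer implements Declarable {
+   public void init(Properties properties) {
+      // Start a daemon timer that fires once a minute (interval is illustrative)
+      Timer timer = new Timer("cache-monitor", true);
+      timer.schedule(new TimerTask() {
+         public void run() {
+            // poll cache resources here
+         }
+      }, 0L, 60000L);
+   }
+}
+```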
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/basic_config/the_cache/setting_cache_properties.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/basic_config/the_cache/setting_cache_properties.html.md.erb b/geode-docs/basic_config/the_cache/setting_cache_properties.html.md.erb
new file mode 100644
index 0000000..76d5066
--- /dev/null
+++ b/geode-docs/basic_config/the_cache/setting_cache_properties.html.md.erb
@@ -0,0 +1,22 @@
+---
+title:  Options for Configuring the Cache and Data Regions
+---
+
+To populate your Apache Geode cache and fine-tune its storage and distribution behavior, you need to define cached data regions and provide custom configuration for the cache and regions.
+
+<a id="setting_cache_properties__section_FB536C90C219432D93E872CBD49D66B1"></a>
+Cache configuration properties define:
+
+-   Cache-wide settings such as disk stores, communication timeouts, and settings designating the member as a server
+-   Cache data regions
+
+Configure the cache and its data regions through one or more of these methods:
+
+-   Through a persistent configuration that you define when issuing commands that use the gfsh command line utility. `gfsh` supports the administration, debugging, and deployment of Apache Geode processes and applications. You can use gfsh to configure regions, locators, servers, disk stores, event queues, and other objects.
+
+    As you issue commands, gfsh saves a set of configurations that apply to the entire cluster and also saves configurations that only apply to defined groups of members within the cluster. You can re-use these configurations to create a distributed system. See [Overview of the Cluster Configuration Service](../../configuring/cluster_config/gfsh_persist.html).
+
+-   Through declarations in the XML file named by the `cache-xml-file` setting in `gemfire.properties`. This file is generally referred to as the `cache.xml` file, but it can have any name. See [cache.xml](../../reference/topics/chapter_overview_cache_xml.html#cache_xml).
+-   Through application calls to the `org.apache.geode.cache.CacheFactory`, `org.apache.geode.cache.Cache` and `org.apache.geode.cache.Region` APIs.
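+
+As a brief illustrative sketch of the API approach (the region name, key/value types, and settings are examples only, not prescriptions):
+
+``` pre
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.RegionShortcut;
+
+public class ProgrammaticConfig {
+   public static void main(String[] args) {
+      // Create the cache; properties not set here fall back to gemfire.properties
+      Cache cache = new CacheFactory()
+            .set("log-level", "config")
+            .create();
+      // Define a replicated data region
+      Region<String, String> region = cache
+            .<String, String>createRegionFactory(RegionShortcut.REPLICATE)
+            .create("exampleRegion");
+      region.put("key1", "value1");
+      cache.close();
+   }
+}
+```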
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/chapter_overview.html.md.erb b/geode-docs/configuring/chapter_overview.html.md.erb
new file mode 100644
index 0000000..8026e72
--- /dev/null
+++ b/geode-docs/configuring/chapter_overview.html.md.erb
@@ -0,0 +1,67 @@
+---
+title:  Configuring and Running a Cluster
+---
+
+You use the `gfsh` command-line utility to configure your Apache Geode cluster (also called a "distributed system"). The cluster configuration service persists the cluster configurations and distributes the configurations to members of the cluster. There are also several additional ways to configure a cluster.
+
+You use `gfsh` to configure regions, disk stores, members, and other Geode objects. You also use `gfsh` to start and stop locators, servers, and Geode monitoring tools. As you execute these commands, the cluster configuration service persists the configuration. When new members join the cluster, the service distributes the configuration to the new members.
+
+`gfsh` is the recommended means of configuring and managing your Apache Geode cluster; however, you can still configure many aspects of a cluster using the older methods of the cache.xml and gemfire.properties files. See [cache.xml](../reference/topics/chapter_overview_cache_xml.html#cache_xml) and the [Reference](../reference/book_intro.html#reference) for configuration parameters. You can also configure some aspects of a cluster using a Java API. See [Managing Apache Geode](../managing/book_intro.html#managing_gemfire_intro).
+
+-   **[Overview of the Cluster Configuration Service](../configuring/cluster_config/gfsh_persist.html)**
+
+    The Apache Geode cluster configuration service persists cluster configurations created by `gfsh` commands to the locators in a cluster and distributes the configurations to members of the cluster.
+
+-   **[Tutorial—Creating and Using a Cluster Configuration](../configuring/cluster_config/persisting_configurations.html)**
+
+    A short walk-through that uses a single computer to demonstrate how to use `gfsh` to create a cluster configuration for a Geode cluster.
+
+-   **[Deploying Application JARs to Apache Geode Members](../configuring/cluster_config/deploying_application_jars.html)**
+
+    You can dynamically deploy your application JAR files to specific members or to all members in your distributed system. Geode automatically keeps track of JAR file versions; autoloads the deployed JAR files to the CLASSPATH; and auto-registers any functions that the JAR contains.
+
+-   **[Using Member Groups](../configuring/cluster_config/using_member_groups.html)**
+
+    Apache Geode allows you to organize your distributed system members into logical member groups.
+
+-   **[Exporting and Importing Cluster Configurations](../configuring/cluster_config/export-import.html)**
+
+    The cluster configuration service exports and imports configurations created using `gfsh` for an entire Apache Geode cluster.
+
+-   **[Cluster Configuration Files and Troubleshooting](../configuring/cluster_config/gfsh_config_troubleshooting.html)**
+
+    When you use the cluster configuration service in Geode, you can examine the generated configuration files in the `cluster_config` directory on the locator. `gfsh` saves configuration files at the cluster-level and at the individual group-level.
+
+-   **[Loading Existing Configuration Files into Cluster Configuration](../configuring/cluster_config/gfsh_load_from_shared_dir.html)**
+
+    To load an existing cache.xml or gemfire.properties configuration file into a new cluster, use the `--load-cluster-configuration-from-dir` parameter when starting up the locator.
+
+-   **[Using gfsh to Manage a Remote Cluster Over HTTP or HTTPS](../configuring/cluster_config/gfsh_remote.html)**
+
+    You can connect `gfsh` via HTTP or HTTPS to a remote cluster and manage the cluster using `gfsh` commands.
+
+-   **[Deploying Configuration Files without the Cluster Configuration Service](../configuring/running/deploying_config_files.html)**
+
+    You can deploy your Apache Geode configuration files in your system directory structure or in jar files. You determine how you want to deploy your configuration files and set them up accordingly.
+
+-   **[Starting Up and Shutting Down Your System](../configuring/running/starting_up_shutting_down.html)**
+
+    Determine the proper startup and shutdown procedures, and write your startup and shutdown scripts.
+
+-   **[Running Geode Locator Processes](../configuring/running/running_the_locator.html)**
+
+    The locator is a Geode process that tells new, connecting members where running members are located and provides load balancing for server use.
+
+-   **[Running Geode Server Processes](../configuring/running/running_the_cacheserver.html)**
+
+    A Geode server is a process that runs as a long-lived, configurable member of a client/server system.
+
+-   **[Managing System Output Files](../configuring/running/managing_output_files.html)**
+
+    Geode output files are optional and can become quite large. Work with your system administrator to determine where to place them to avoid interfering with other system activities.
+
+-   **[Firewall Considerations](../configuring/running/firewall_ports_config.html)**
+
+    You can configure and limit port usage for situations that involve firewalls, for example, between client-server or server-server connections.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/cluster_config/deploying_application_jars.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/cluster_config/deploying_application_jars.html.md.erb b/geode-docs/configuring/cluster_config/deploying_application_jars.html.md.erb
new file mode 100644
index 0000000..08eb1d5
--- /dev/null
+++ b/geode-docs/configuring/cluster_config/deploying_application_jars.html.md.erb
@@ -0,0 +1,114 @@
+---
+title:  Deploying Application JARs to Apache Geode Members
+---
+
+You can dynamically deploy your application JAR files to specific members or to all members in your distributed system. Geode automatically keeps track of JAR file versions; autoloads the deployed JAR files to the CLASSPATH; and auto-registers any functions that the JAR contains.
+
+To deploy and undeploy application JAR files in Apache Geode, use the `gfsh` `deploy` or `undeploy` command. You can deploy a single JAR or multiple JARs (by either specifying the JAR filenames or by specifying a directory that contains the JAR files), and you can also target the deployment to one or more member groups. For example, after connecting to the distributed system where you want to deploy the JARs, you could type at the `gfsh` prompt:
+
+``` pre
+gfsh> deploy --jar=group1_functions.jar
+```
+
+This command deploys the `group1_functions.jar` file to all members in the distributed system.
+
+To deploy the JAR file to a subset of members, use the `--group` argument. For example:
+
+``` pre
+gfsh> deploy --jar=group1_functions.jar --group=MemberGroup1
+```
+
+This example assumes that you have already defined the member group that you want to use when starting up your members. See [Configuring and Running a Cluster](../chapter_overview.html#concept_lrh_gyq_s4) for more information on how to define member groups and add a member to a group.
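+
+For reference, a member group is typically assigned when starting the member, for example (the member and group names are illustrative):
+
+``` pre
+gfsh> start server --name=server1 --group=MemberGroup1
+```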
+
+To deploy all the JAR files that are located in a specific directory to all members:
+
+``` pre
+gfsh> deploy --dir=libs/group1-libs
+```
+
+You can either provide a JAR file name or a directory of JARs for deployment, but you cannot specify both at once.
+
+To undeploy all previously deployed JAR files throughout the distributed system:
+
+``` pre
+gfsh> undeploy
+```
+
+To undeploy a specific JAR file:
+
+``` pre
+gfsh> undeploy --jar=group1_functions.jar
+```
+
+To target a specific member group when undeploying all JAR files:
+
+``` pre
+gfsh> undeploy --group=MemberGroup1
+```
+
+Only JAR files that have been previously deployed on members in the MemberGroup1 group will be undeployed.
+
+To see a list of all deployed JARs in your distributed system:
+
+``` pre
+gfsh> list deployed
+```
+
+To see a list of all deployed JARs in a specific member group:
+
+``` pre
+gfsh> list deployed --group=MemberGroup1
+```
+
+Sample output:
+
+``` pre
+ 
+ Member   |     Deployed JAR     |                JAR Location            
+--------- | -------------------- | ---------------------------------------------------
+datanode1 | group1_functions.jar | /usr/local/gemfire/deploy/vf.gf#group1_functions.jar#1
+datanode2 | group1_functions.jar | /usr/local/gemfire/deploy/vf.gf#group1_functions.jar#1
+```
+
+For more information on `gfsh` usage, see [gfsh (Geode SHell)](../../tools_modules/gfsh/chapter_overview.html).
+
+## <a id="concept_4436C021FB934EC4A330D27BD026602C__section_D36E345C6E254D27B0F4B0C8711F5E6A" class="no-quick-link"></a>Deployment Location for JAR Files
+
+The system location where JAR files are written on each member is determined by the `deploy-working-dir` Geode property configured for that member. For example, you could have the following configured in the `gemfire.properties` file for your member:
+
+``` pre
+#gemfire.properties
+deploy-working-dir=/usr/local/gemfire/deploy
+```
+
+This deployment location can be local or a shared network resource (such as a mount location) used by multiple members in order to reduce disk space usage. If you use a shared directory, you still need to deploy the JAR file on every member that you want to have access to the application, because deployment updates the CLASSPATH and auto-registers functions.
+
+## About Deploying JAR Files and the Cluster Configuration Service
+
+By default, the cluster configuration service distributes deployed JAR files to all locators in the distributed system. When you start a new server using `gfsh`, the locator supplies configuration files and deployed JAR files to the member and writes them to the server's directory.
+
+See [Overview of the Cluster Configuration Service](gfsh_persist.html).
+
+## <a id="concept_4436C021FB934EC4A330D27BD026602C__section_D9219C5EEED64672930200677C2118C9" class="no-quick-link"></a>Versioning of JAR Files
+
+When you deploy JAR files to a distributed system or member group, the JAR file is modified to indicate version information in its name. Each JAR filename is prefixed with `vf.gf#` and contains a version number at the end of the filename. For example, if you deploy `MyClasses.jar` five times, the filename is displayed as `vf.gf#MyClasses.jar#5` when you list all deployed jars.
+
+When you deploy a new JAR file, the member receiving the deployment checks whether the JAR file is a duplicate, either because the JAR file has already been deployed on that member or because the JAR file has already been deployed to a shared deployment working directory that other members are also using. If another member has already deployed this JAR file to the shared directory (determined by a byte-for-byte comparison against the latest version in its directory), the member receiving the latest deployment does not write the file to disk. Instead, the member updates the ClassPathLoader to use the already deployed JAR file. If a newer version of the JAR file is detected on disk and is already in use, the deployment is canceled.
+
+When a member begins using a JAR file, the member obtains a shared lock on the file. If the member receives a newer version by deployment, the member releases the shared lock and tries to delete the existing JAR file in favor of the newer version. If no other member has a shared lock on the existing JAR, the existing, older version JAR is deleted.
+
+## <a id="concept_4436C021FB934EC4A330D27BD026602C__section_F8AC59EEC8C5434FBC6F38A12A7371CE" class="no-quick-link"></a>Automatic Class Path Loading
+
+When a cache is started, the new cache requests that the latest versions of each JAR file in the current working directory be added to the ClassPathLoader. If a JAR file has already been deployed to the ClassPathLoader, the ClassPathLoader updates its loaded version if a newer version is found; otherwise, there is no change. If detected, older versions of the JAR files are deleted if no other member has a shared lock on them.
+
+Undeploying a JAR file does not automatically unload the classes that were loaded during deployment. You need to restart your members to unload those classes.
+
+When a cache is closed it requests that all currently deployed JAR files be removed from the ClassPathLoader.
+
+If you are using a shared deployment working directory, all members sharing the directory should belong to the same member group. Upon restart, all members that share the same deployment working directory will deploy and autoload their CLASSPATH with any JARs found in the current working directory. This means that some members may load the JARs even though they are not part of the member group that received the original deployment.
+
+## <a id="concept_4436C021FB934EC4A330D27BD026602C__section_C1ECA5A66C27403A9A18D0E04EFCC66D" class="no-quick-link"></a>Automatic Function Registration
+
+When you deploy a JAR file that contains a function (in other words, contains a class that implements the Function interface), the function is automatically registered through the `FunctionService.registerFunction` method. If another JAR file is deployed (either with the same JAR filename or another filename) with the same function, the new implementation of the function is registered, overwriting the old one. If a JAR file is undeployed, any functions that were auto-registered at the time of deployment are unregistered. Because deploying a JAR file that has the same name multiple times results in the JAR being un-deployed and re-deployed, functions in the JAR are unregistered and re-registered each time this occurs. If a function with the same ID is registered from multiple differently named JAR files, the function is unregistered if any of those JAR files are re-deployed or un-deployed.
+
+During `cache.xml` load, the parameters for any declarables are saved. If functions found in a JAR file are also declarable and have the same class name as the declarables whose parameters were saved when `cache.xml` was loaded, then function instances are created using those parameters and are also registered. Therefore, if the same function is declared multiple times in the `cache.xml` with different sets of parameters, a function is instantiated for each set of parameters when the JAR is deployed. If any functions are registered using parameters from a `cache.xml` load, the default, no-argument function is not registered.
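+
+To illustrate automatic registration, a minimal function that would be auto-registered on deployment might look like the following sketch (the class name, ID, and result value are illustrative):
+
+``` pre
+import org.apache.geode.cache.execute.Function;
+import org.apache.geode.cache.execute.FunctionContext;
+
+public class MyFunction implements Function {
+   public void execute(FunctionContext context) {
+      // Do the work on this member, then return a result
+      context.getResultSender().lastResult("done");
+   }
+
+   public String getId() {
+      return "MyFunction";
+   }
+}
+```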

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/cluster_config/export-import.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/cluster_config/export-import.html.md.erb b/geode-docs/configuring/cluster_config/export-import.html.md.erb
new file mode 100644
index 0000000..e730c5b
--- /dev/null
+++ b/geode-docs/configuring/cluster_config/export-import.html.md.erb
@@ -0,0 +1,39 @@
+---
+title:  Exporting and Importing Cluster Configurations
+---
+
+The cluster configuration service exports and imports configurations created using `gfsh` for an entire Apache Geode cluster.
+
+The cluster configuration service saves the cluster configuration as you create regions, disk stores, and other objects using `gfsh` commands. You can export this configuration, along with any JAR files that contain application code, to a zip file, and then import this configuration to create a new cluster.
+
+## Exporting a Cluster Configuration
+
+You issue the `gfsh` `export cluster-configuration` command to save the configuration data for your cluster in a zip file. This zip file contains subdirectories for cluster-level configurations and a directory for each group specified in the cluster. The contents of these directories are described in [Cluster Configuration Files and Troubleshooting](gfsh_config_troubleshooting.html#concept_ylt_2cb_y4).
+
+To export a cluster configuration, run the `gfsh` `export cluster-configuration` command while connected to a Geode cluster. For example:
+
+``` pre
+export cluster-configuration --zip-file-name=myClusterConfig.zip --dir=/home/username/configs
+```
+
+See [export cluster-configuration](../../tools_modules/gfsh/command-pages/export.html#topic_mdv_jgz_ck).
+
+**Note:**
+`gfsh` only saves cluster configuration values for configurations specified using `gfsh`. Configurations created by the management API are not saved with the cluster configurations.
+
+## Importing a Cluster Configuration
+
+You can import a cluster configuration to a running locator. After importing the configuration, any servers you start receive this cluster configuration.
+
+To import a cluster configuration, start one or more locators and then run the `gfsh` `import cluster-configuration` command. For example:
+
+``` pre
+import cluster-configuration --zip-file-name=/home/username/configs/myClusterConfig.zip
+```
+
+See [import cluster-configuration](../../tools_modules/gfsh/command-pages/import.html#topic_vnv_grz_ck).
+
+**Note:**
+You cannot import a cluster configuration to a cluster where cache servers are already running.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/cluster_config/gfsh_config_troubleshooting.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/cluster_config/gfsh_config_troubleshooting.html.md.erb b/geode-docs/configuring/cluster_config/gfsh_config_troubleshooting.html.md.erb
new file mode 100644
index 0000000..51f89b0
--- /dev/null
+++ b/geode-docs/configuring/cluster_config/gfsh_config_troubleshooting.html.md.erb
@@ -0,0 +1,58 @@
+---
+title:  Cluster Configuration Files and Troubleshooting
+---
+
+When you use the cluster configuration service in Geode, you can examine the generated configuration files in the `cluster_config` directory on the locator. `gfsh` saves configuration files at the cluster-level and at the individual group-level.
+
+The following directories and configuration files are available on the locator running the cluster configuration service:
+
+**Cluster-level configuration**  
+For configurations that apply to all members of a cluster, the locator creates a `cluster` subdirectory within the `cluster_config` directory (or within the directory specified by the `--cluster-config-dir=value` parameter when starting up the locator). All servers receive this configuration when they are started using `gfsh`. This directory contains:
+
+-   `cluster.xml` -- A Geode `cache.xml` file containing configuration common to all members
+-   `cluster.properties` -- A Geode `gemfire.properties` file containing properties common to all members
+-   Jar files that are intended for deployment to all members
+
+<!-- -->
+
+**Group-level configuration**  
+When you specify the `--group` parameter in a `gfsh` command (for example, `start server` or `create region`), the locator writes the configurations for each group in a subdirectory with the same name as the group. When you start a server that specifies one or more group names, the server receives both the cluster-level configurations and the configurations from all specified groups. This subdirectory contains:
+
+-   `<group-name>.xml` -- A Geode `cache.xml` file containing configurations common to all members of the group
+-   `<group-name>.properties` -- A Geode `gemfire.properties` file containing properties common to all members of the group
+-   Jar files that are intended for deployment to all members of the group
+
+<img src="../../images_svg/cluster-group-config.svg" id="concept_ylt_2cb_y4__image_bs1_mcb_y4" class="image" />
+
+You can export a zip file that contains all artifacts of a cluster configuration. The zip file contains all of the files in the `cluster_config` subdirectory (or the otherwise specified configuration directory) of a locator. You can import this configuration to a new cluster. See [Exporting and Importing Cluster Configurations](export-import.html#concept_wft_dkq_34).
+
+## Individual Configuration Files and Cluster Configuration Files
+
+Geode applies the cluster-wide configuration files first, then the group-level configurations. If a member has its own configuration files defined (cache.xml and gemfire.properties files), those configurations are applied last. Whenever possible, use the member group-level configuration files in the cluster configuration service to apply non-cluster-wide configurations to individual members.
+
+## Troubleshooting Tips
+
+-   When you start a locator using `gfsh`, you should see the following message:
+
+    ``` pre
+    Cluster configuration service is up and running.
+    ```
+
+    If you do not see this message, there may be a problem with the cluster configuration service. Use the `status cluster-configuration-service` command to check the status of the cluster configuration.
+
+    -   If the command returns RUNNING, the cluster configuration is running normally.
+    -   If the command returns WAITING, run the `status locator` command. The output of this command returns the cause of the WAITING status.
+-   If a server start fails with a `ClusterConfigurationNotAvailableException`, the cluster configuration service may not be in the RUNNING state. Because the server requests the cluster configuration from the locator, and that configuration is not available, the `start server` command fails.
+-   You can determine what configurations a server received from a locator by examining the server's log file. See [Logging](../../managing/logging/logging.html#concept_30DB86B12B454E168B80BB5A71268865).
+-   If a `start server` command specifies a cache.xml file that conflicts with the existing cluster configuration, the server startup may fail.
+-   If a `gfsh` command fails because the cluster configuration cannot be saved, the following message displays:
+
+    ``` pre
+    Failed to persist the configuration changes due to this command,
+    Revert the command to maintain consistency. Please use "status cluster-config-service"
+    to determine whether Cluster configuration service is RUNNING.
+    ```
+
+-   There are some types of configurations that cannot be made using `gfsh`. See [gfsh Limitations](gfsh_persist.html#concept_r22_hyw_bl__section_bn3_23p_y4).
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/cluster_config/gfsh_load_from_shared_dir.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/cluster_config/gfsh_load_from_shared_dir.html.md.erb b/geode-docs/configuring/cluster_config/gfsh_load_from_shared_dir.html.md.erb
new file mode 100644
index 0000000..b9e9a5d
--- /dev/null
+++ b/geode-docs/configuring/cluster_config/gfsh_load_from_shared_dir.html.md.erb
@@ -0,0 +1,27 @@
+---
+title:  Loading Existing Configuration Files into Cluster Configuration
+---
+
+To load an existing cache.xml or gemfire.properties configuration file into a new cluster, use the `--load-cluster-configuration-from-dir` parameter when starting up the locator.
+
+You can use this technique to migrate a single server's configuration into the cluster configuration service. To load an existing cache.xml file or cluster configuration into a cluster, perform the following steps:
+
+1.  Make sure the locator is not currently running.
+2.  Within the locator's working directory, create a `cluster_config/cluster` directory if the directory does not already exist.
+3.  Copy the desired configuration files (cache.xml or gemfire.properties, or both) into the `cluster_config/cluster` directory.
+4.  Rename the configuration files as follows:
+    -   Rename `cache.xml` to `cluster.xml`
+    -   Rename `gemfire.properties` to `cluster.properties`
+
+5.  Start the locator in `gfsh` as follows:
+
+    ``` pre
+    gfsh>start locator --name=<locator_name> --enable-cluster-configuration=true --load-cluster-configuration-from-dir=true
+    ```
+
+    After successful startup, the locator should report that the "Cluster configuration service is up and running." Any servers that join this cluster and have `--use-cluster-configuration` set to true will pick up these configuration files.
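+
+The directory preparation in steps 2 through 4 can be sketched as follows (the source paths are illustrative):
+
+``` pre
+# Run from the locator's working directory
+mkdir -p cluster_config/cluster
+cp /path/to/member/cache.xml cluster_config/cluster/cluster.xml
+cp /path/to/member/gemfire.properties cluster_config/cluster/cluster.properties
+```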
+
+**Note:**
+If you make any manual modifications to the cluster.xml or cluster.properties (or group\_name.xml or group\_name.properties) files, you must stop the locator and then restart it using the `--load-cluster-configuration-from-dir` parameter. Direct file modifications are not picked up by the cluster configuration service without a locator restart.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/cluster_config/gfsh_persist.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/cluster_config/gfsh_persist.html.md.erb b/geode-docs/configuring/cluster_config/gfsh_persist.html.md.erb
new file mode 100644
index 0000000..85be33c
--- /dev/null
+++ b/geode-docs/configuring/cluster_config/gfsh_persist.html.md.erb
@@ -0,0 +1,108 @@
+---
+title:  Overview of the Cluster Configuration Service
+---
+
+The Apache Geode cluster configuration service persists cluster configurations created by `gfsh` commands to the locators in a cluster and distributes the configurations to members of the cluster.
+
+## Why Use the Cluster Configuration Service
+
+We highly recommend that you use the `gfsh` command line and the cluster configuration service as the primary mechanism to manage your distributed system configuration. Using a common cluster configuration reduces the amount of time you spend configuring individual members and enforces consistent configurations when bringing up new members in your cluster. You no longer need to reconfigure each new member that you add to the cluster. You no longer need to worry about validating your cache.xml file. It also becomes easier to propagate configuration changes across your cluster and deploy your configuration changes to different environments.
+
+You can use the cluster configuration service to:
+
+-   Save the configuration for an entire Apache Geode cluster.
+-   Restart members using a previously-saved configuration.
+-   Export a configuration from a development environment and migrate that configuration to create a testing or production system.
+-   Start additional servers without having to configure each server separately.
+-   Configure some servers to host certain regions and other servers to host different regions, and configure all servers to host a set of common regions.
+
+## Using the Cluster Configuration Service
+
+To use the cluster configuration service in Geode, you must use dedicated, standalone locators in your deployment. You cannot use the cluster configuration service with co-located locators (locators running in another process such as a server) or in multicast environments.
+
+The standalone locators distribute configuration to all locators in a cluster. Every locator in the cluster with `--enable-cluster-configuration` set to true keeps a record of all cluster-level and group-level configuration settings.
+
+**Note:**
+The default behavior for `gfsh` is to create and save cluster configurations. You can disable the cluster configuration service by using the `--enable-cluster-configuration=false` option when starting locators.
+
+Subsequently, any servers that you start with `gfsh` that have `--use-cluster-configuration` set to `true` will pick up the cluster configuration from the locator, as well as any applicable group-level configurations (for the member groups they belong to). To disable the cluster configuration service on a server, start the server with the `--use-cluster-configuration` parameter set to `false`. By default, the parameter is set to `true`.
+
+You can also load existing configuration files into the cluster configuration service by starting up a standalone locator with the parameter `--load-cluster-configuration-from-dir` set to true. See [Loading Existing Configuration Files into Cluster Configuration](gfsh_load_from_shared_dir.html).
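+
+Putting these startup options together, a minimal sketch might look like the following (member names are illustrative, and both options are shown explicitly even though `true` is their default):
+
+``` pre
+gfsh>start locator --name=locator1 --enable-cluster-configuration=true
+gfsh>start server --name=server1 --use-cluster-configuration=true
+```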
+
+## How the Cluster Configuration Service Works
+
+When you use `gfsh` commands to create Apache Geode regions, disk-stores, and other objects, the cluster configuration service saves the configurations on each locator in the cluster (also called a Geode distributed system). If you specify a group when issuing these commands, a separate configuration is saved containing only configurations that apply to the group.
+
+When you use `gfsh` to start new Apache Geode servers, the locator distributes the persisted configurations to the new server. If you specify a group when starting the server, the server receives the group-level configuration in addition to the cluster-level configuration. Group-level configurations are applied after cluster-wide configurations; therefore you can use group-level to override cluster-level settings.
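+
+As a sketch of group-level settings overriding cluster-level settings, the following `alter runtime` commands first change a setting cluster-wide and then override it for a single group (the group name and log levels are illustrative):
+
+``` pre
+gfsh>alter runtime --log-level=config
+gfsh>alter runtime --group=group1 --log-level=fine
+```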
+
+<img src="../../images_svg/cluster_config_overview.svg" id="concept_r22_hyw_bl__image_jjc_vhb_y4" class="image" />
+
+## gfsh Commands that Create Cluster Configurations
+
+The following `gfsh` commands cause the configuration to be written to all locators in the cluster (the locators write the configuration to disk):
+
+-   `configure pdx`\*
+-   `create region`
+-   `alter region`
+-   `alter runtime`
+-   `destroy region`
+-   `create index`
+-   `destroy index`
+-   `create disk-store`
+-   `destroy disk-store`
+-   `create async-event-queue`
+-   `deploy jar`
+-   `undeploy jar`
+
+**\*** Note that the `configure pdx` command must be executed *before* starting your data members. This command does not affect any currently running members in the system. Data members (with cluster configuration enabled) that are started after running this command will pick up the new PDX configuration.
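+
+For example, a startup sequence that honors this ordering might look like the following sketch (member names are illustrative):
+
+``` pre
+gfsh>start locator --name=locator1
+gfsh>configure pdx --read-serialized=true
+gfsh>start server --name=server1
+```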
+
+The following gateway-related commands use the cluster configuration service, and their configuration is saved by locators:
+
+-   `create gateway-sender`
+-   `create gateway-receiver`
+
+## <a id="concept_r22_hyw_bl__section_bn3_23p_y4" class="no-quick-link"></a>gfsh Limitations
+
+There are some configurations that you cannot create using `gfsh`, and that you must configure using cache.xml or the API:
+
+-   Client cache configuration
+-   You cannot specify parameters and values for Java classes for the following objects:
+    -   `function`
+    -   `custom-load-probe`
+    -   `cache-listener`
+    -   `cache-loader`
+    -   `cache-writer`
+    -   `compressor`
+    -   `serializer`
+    -   `instantiator`
+    -   `pdx-serializer`
+    
+        **Note:**
+        The `configure pdx` command always specifies the `org.apache.geode.pdx.ReflectionBasedAutoSerializer` class. You cannot specify a custom PDX serializer in `gfsh`.
+
+    -   `custom-expiry`
+    -   `initializer`
+    -   `declarable`
+    -   `lru-heap-percentage`
+    -   `lru-memory-size`
+    -   `partition-resolver`
+    -   `partition-listener`
+    -   `transaction-listener`
+    -   `transaction-writer`
+-   Adding or removing a TransactionListener
+-   Adding JNDI bindings
+-   Deleting an AsyncEventQueue
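+
+For example, a cache listener that takes initialization parameters must be declared in cache.xml rather than created through `gfsh`; the class and parameter names in this sketch are hypothetical:
+
+``` pre
+<region name="region1">
+  <region-attributes>
+    <cache-listener>
+      <class-name>com.example.MyCacheListener</class-name>
+      <parameter name="logEvents">
+        <string>true</string>
+      </parameter>
+    </cache-listener>
+  </region-attributes>
+</region>
+```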
+
+In addition, there are some limitations on configuring gateways using `gfsh`. You must use cache.xml or the Java APIs to configure the following:
+
+-   Configuring a GatewayConflictResolver
+-   You cannot specify parameters and values for Java classes for the following:
+    -   `gateway-listener`
+    -   `gateway-conflict-resolver`
+    -   `gateway-event-filter`
+    -   `gateway-transport-filter`
+    -   `gateway-event-substitution-filter`
+
+## <a id="concept_r22_hyw_bl__section_fh1_c3p_y4" class="no-quick-link"></a>Disabling the Cluster Configuration Service
+
+If you do not want to use the cluster configuration service, start up your locator with the `--enable-cluster-configuration` parameter set to `false`, or do not use standalone locators. You will then need to configure the cache (via cache.xml or the API) separately on each of your distributed system members.
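+
+For example (the member name is illustrative):
+
+``` pre
+gfsh>start locator --name=locator1 --enable-cluster-configuration=false
+```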

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/cluster_config/gfsh_remote.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/cluster_config/gfsh_remote.html.md.erb b/geode-docs/configuring/cluster_config/gfsh_remote.html.md.erb
new file mode 100644
index 0000000..9132e44
--- /dev/null
+++ b/geode-docs/configuring/cluster_config/gfsh_remote.html.md.erb
@@ -0,0 +1,61 @@
+---
+title:  Using gfsh to Manage a Remote Cluster Over HTTP or HTTPS
+---
+
+You can connect `gfsh` via HTTP or HTTPS to a remote cluster and manage the cluster using `gfsh` commands.
+
+To connect `gfsh` using the HTTP protocol to a remote GemFire cluster:
+
+1.  Launch `gfsh`. See [Starting gfsh](../../tools_modules/gfsh/starting_gfsh.html#concept_DB959734350B488BBFF91A120890FE61).
+2.  When starting the remote cluster on the remote host, you can optionally specify `--http-bind-address` and `--http-service-port` as GemFire properties when starting up your JMX manager (server or locator). These properties can be then used in the URL used when connecting from your local system to the HTTP service in the remote cluster. For example:
+
+    ``` pre
+    gfsh>start server --name=server1 --J=-Dgemfire.jmx-manager=true \
+    --J=-Dgemfire.jmx-manager-start=true --J=-Dgemfire.http-service-port=8080 \
+    --J=-Dgemfire.http-service-bind-address=myremotecluster.example.com
+    ```
+
+    This command must be executed directly on the host machine that will ultimately act as the remote GemFire server that hosts the HTTP service for remote administration. (You cannot launch a GemFire server remotely.)
+
+3.  On your local system, run the `gfsh` `connect` command to connect to the remote system. Include the `--use-http` and `--url` parameters. For example:
+
+    ``` pre
+    gfsh>connect --use-http=true --url="http://myremotecluster.example.com:8080/gemfire/v1"
+
+    Successfully connected to: GemFire Manager's HTTP service @ http://myremotecluster.example.com:8080/gemfire/v1
+    ```
+
+    See [connect](../../tools_modules/gfsh/command-pages/connect.html).
+
+    `gfsh` is now connected to the remote system. Most `gfsh` commands will now execute on the remote system; however, there are exceptions. The following commands are executed on the local cluster:
+      -   `alter disk-store`
+      -   `compact offline-disk-store`
+      -   `describe offline-disk-store`
+      -   `help`
+      -   `hint`
+      -   `sh` (for executing OS commands)
+      -   `sleep`
+      -   `start jconsole` (however, you can connect JConsole to a remote cluster when gfsh is connected to the cluster via JMX)
+      -   `start jvisualvm`
+      -   `start locator`
+      -   `start server`
+      -   `start vsd`
+      -   `status locator`\*
+      -   `status server`\*
+      -   `stop locator`\*
+      -   `stop server`\*
+      -   `run` (for executing gfsh scripts)
+      -   `validate disk-store`
+      -   `version`
+
+    `*` You can stop and obtain the status of *remote locators and servers* when `gfsh` is connected to the cluster via JMX or HTTP/S by using the `--name` option for these `stop`/`status` commands. If you use the `--pid` or `--dir` option for these commands, the `stop`/`status` commands are executed only locally.
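+
+For example, while connected to the remote cluster over HTTP, you can check and stop a remote server by name (the member name is illustrative):
+
+``` pre
+gfsh>status server --name=server1
+gfsh>stop server --name=server1
+```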
+
+To configure SSL for the remote connection (HTTPS), enable SSL for the `http` component
+in <span class="ph filepath">gemfire.properties</span> or <span class="ph filepath">gfsecurity.properties</span> or upon server startup. See
+[SSL](../../managing/security/ssl_overview.html) for details on configuring SSL parameters. These
+SSL parameters also apply to all HTTP services hosted on the configured JMX Manager, which can
+include the following:
+
+-   Developer REST API service
+-   Pulse monitoring tool
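+
+As a sketch, a `gemfire.properties` fragment that enables SSL for the `http` component might look like the following; the keystore paths and passwords are placeholders, and the full set of SSL properties is described in the SSL documentation:
+
+``` pre
+ssl-enabled-components=http
+ssl-keystore=/path/to/keystore.jks
+ssl-keystore-password=changeit
+ssl-truststore=/path/to/truststore.jks
+ssl-truststore-password=changeit
+```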

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/cluster_config/persisting_configurations.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/cluster_config/persisting_configurations.html.md.erb b/geode-docs/configuring/cluster_config/persisting_configurations.html.md.erb
new file mode 100644
index 0000000..e18bb30
--- /dev/null
+++ b/geode-docs/configuring/cluster_config/persisting_configurations.html.md.erb
@@ -0,0 +1,320 @@
+---
+title:  "Tutorial: Creating and Using a Cluster Configuration"
+---
+
+A short walk-through that uses a single computer to demonstrate how to use `gfsh` to create a cluster configuration for a Geode cluster.
+
+The `gfsh` command-line tool allows you to configure and start a Geode cluster. The cluster configuration service uses Apache Geode locators to store the configuration at the group and cluster levels and serves these configurations to new members as they are started. The locators store the configurations in a hidden region that is available to all locators and also write the configuration data to disk as XML files. Configuration data is updated as `gfsh` commands are executed.
+
+This section provides a walk-through example of configuring a simple Apache Geode cluster and then re-using that configuration in a new context.
+
+1.  Create a working directory (for example: `/home/username/my_gemfire`) and switch to the new directory. This directory will contain the configurations for your cluster.
+
+2.  Start the `gfsh` command-line tool. For example:
+
+    ``` pre
+    $ gfsh
+    ```
+
+    The `gfsh` command prompt displays.
+
+    ``` pre
+        _________________________     __
+       / _____/ ______/ ______/ /____/ /
+      / /  __/ /___  /_____  / _____  /
+     / /__/ / ____/  _____/ / /    / /
+    /______/_/      /______/_/    /_/    1.0.0
+
+    Monitor and Manage Apache Geode
+    gfsh>
+
+    ```
+
+3.  Start a locator using the command in the following example:
+
+    ``` pre
+    gfsh>start locator --name=locator1
+    Starting a GemFire Locator in /Users/username/my_gemfire/locator1...
+    .............................
+    Locator in /Users/username/my_gemfire/locator1 on 192.0.2.0[10334] as locator1 is currently online.
+    Process ID: 5203
+    Uptime: 15 seconds
+    GemFire Version: 8.1.0
+    Java Version: 1.7.0_71
+    Log File: /Users/username/my_gemfire/locator1/locator1.log
+    JVM Arguments: -Dgemfire.enable-cluster-configuration=true
+    -Dgemfire.load-cluster-configuration-from-dir=false
+    -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true
+    -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
+    Class-Path: /Users/username/Pivotal_GemFire_810_b50582_Linux/lib/gemfire.jar
+    :/Users/username/Pivotal_GemFire_810_b50582_Linux/lib/locator-dependencies.jar
+
+    Successfully connected to: [host=192.0.2.0, port=1099]
+
+    Cluster configuration service is up and running.
+    ```
+
+    Note that `gfsh` responds with a message indicating that the cluster configuration service is up and running. If you see a message indicating a problem, review the locator log file for possible errors. The path to the log file is displayed in the output from `gfsh`.
+
+4.  Start Apache Geode servers using the commands in the following example:
+
+    ``` pre
+    gfsh>start server --name=server1 --group=group1
+    Starting a GemFire Server in /Users/username/my_gemfire/server1...
+    .....
+    Server in /Users/username/my_gemfire/server1 on 192.0.2.0[40404] as server1 is currently online.
+    Process ID: 5627
+    Uptime: 2 seconds
+    GemFire Version: 8.1.0
+    Java Version: 1.7.0_71
+    Log File: /Users/username/my_gemfire/server1/server1.log
+    JVM Arguments: -Dgemfire.default.locators=192.0.2.0[10334] -Dgemfire.groups=group1
+    -Dgemfire.use-cluster-configuration=true -XX:OnOutOfMemoryError=kill -KILL %p
+    -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true
+    -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
+    Class-Path: /Users/username/Pivotal_GemFire_810_b50582_Linux/lib/gemfire.jar
+    :/Users/username/Pivotal_GemFire_810_b50582_Linux/lib/server-dependencies.jar
+
+    gfsh>start server --name=server2 --group=group1 --server-port=40405
+    Starting a GemFire Server in /Users/username/my_gemfire/server2...
+    .....
+    Server in /Users/username/my_gemfire/server2 on 192.0.2.0[40405] as server2 is currently online.
+    Process ID: 5634
+    Uptime: 2 seconds
+    GemFire Version: 8.1.0
+    Java Version: 1.7.0_71
+    Log File: /Users/username/my_gemfire/server2/server2.log
+    JVM Arguments: -Dgemfire.default.locators=192.0.2.0[10334] -Dgemfire.groups=group1
+    -Dgemfire.use-cluster-configuration=true -XX:OnOutOfMemoryError=kill -KILL %p
+    -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true
+    -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
+    Class-Path: /Users/username/Pivotal_GemFire_810_b50582_Linux/lib/gemfire.jar
+    :/Users/username/Pivotal_GemFire_810_b50582_Linux/lib/server-dependencies.jar
+
+    gfsh>start server --name=server3 --server-port=40406
+    Starting a GemFire Server in /Users/username/my_gemfire/server3...
+    .....
+    Server in /Users/username/my_gemfire/server3 on 192.0.2.0[40406] as server3 is currently online.
+    Process ID: 5637
+    Uptime: 2 seconds
+    GemFire Version: 8.1.0
+    Java Version: 1.7.0_71
+    Log File: /Users/username/my_gemfire/server3/server3.log
+    JVM Arguments: -Dgemfire.default.locators=192.0.2.0[10334]
+    -Dgemfire.use-cluster-configuration=true -XX:OnOutOfMemoryError=kill -KILL %p
+    -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true
+    -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
+    Class-Path: /Users/username/Pivotal_GemFire_810_b50582_Linux/lib/gemfire.jar
+    :/Users/username/Pivotal_GemFire_810_b50582_Linux/lib/server-dependencies.jar
+    ```
+
+    Note that the `gfsh` commands you used to start `server1` and `server2` specify a group named `group1` while the command for `server3` did not specify a group name.
+
+5.  Create some regions using the commands in the following example:
+
+    ``` pre
+    gfsh>create region --name=region1 --group=group1 --type=REPLICATE
+    Member  | Status
+    ------- | --------------------------------------
+    server2 | Region "/region1" created on "server2"
+    server1 | Region "/region1" created on "server1"
+
+    gfsh>create region --name=region2 --type=REPLICATE
+    Member  | Status
+    ------- | --------------------------------------
+    server1 | Region "/region2" created on "server1"
+    server2 | Region "/region2" created on "server2"
+    server3 | Region "/region2" created on "server3"
+    ```
+
+    Note that `region1` is created on all cache servers that specified the group named `group1` when starting the cache server (`server1` and `server2`, in this example). `region2` is created on all members because no group was specified.
+
+6.  Deploy jar files. Use the `gfsh deploy` command to deploy application jar files to all members or to a specified group of members. The following example deploys the `mail.jar` and `mx4j.jar` files from the distribution. (Note: This is only an example; you do not need to deploy these files to use the cluster configuration service. Alternatively, you can use any two jar files for this demonstration.)
+
+    ``` pre
+    gfsh>deploy --group=group1 --jar=${SYS_GEMFIRE_DIR}/lib/mail.jar
+    Post substitution: deploy --group=group1 --jar=/Users/username/Pivotal_GemFire_810_b50582_Linux/lib/mail.jar
+    Member  | Deployed JAR | Deployed JAR Location
+    ------- | ------------ | -------------------------------------------------
+    server1 | mail.jar     | /Users/username/my_gemfire/server1/vf.gf#mail.jar#1
+    server2 | mail.jar     | /Users/username/my_gemfire/server2/vf.gf#mail.jar#1
+
+    gfsh>deploy --jar=${SYS_GEMFIRE_DIR}/lib/mx4j.jar
+    Post substitution: deploy --jar=/Users/username/Pivotal_GemFire_810_b50582_Linux/lib/mx4j.jar
+    Member  | Deployed JAR | Deployed JAR Location
+    ------- | ------------ | -------------------------------------------------
+    server1 | mx4j.jar     | /Users/username/my_gemfire/server1/vf.gf#mx4j.jar#1
+    server2 | mx4j.jar     | /Users/username/my_gemfire/server2/vf.gf#mx4j.jar#1
+    server3 | mx4j.jar     | /Users/username/my_gemfire/server3/vf.gf#mx4j.jar#1
+    ```
+
+    Note that the `mail.jar` file was deployed only to the members of `group1` and the `mx4j.jar` was deployed to all members.
+
+7.  Export the cluster configuration.
+    You can use the `gfsh export cluster-configuration` command to create a zip file that contains the cluster's persisted configuration. The zip file contains a copy of the contents of the `cluster_config` directory. For example:
+
+    ``` pre
+    gfsh>export cluster-configuration --zip-file-name=myClusterConfig.zip --dir=/Users/username
+    ```
+
+    Apache Geode writes the cluster configuration to the specified zip file.
+
+    ``` pre
+    Downloading cluster configuration : /Users/username/myClusterConfig.zip
+    ```
+
+    The remaining steps demonstrate how to use the cluster configuration you just created.
+
+8.  Shut down the cluster using the following commands:
+
+    ``` pre
+    gfsh>shutdown --include-locators=true
+    As a lot of data in memory will be lost, including possibly events in queues, do you
+    really want to shutdown the entire distributed system? (Y/n): Y
+    Shutdown is triggered
+
+    gfsh>
+    No longer connected to 192.0.2.0[1099].
+    gfsh>
+    ```
+
+9.  Exit the `gfsh` command shell:
+
+    ``` pre
+    gfsh>quit
+    Exiting...
+    ```
+
+10. Create a new working directory (for example: `new_gemfire`) and switch to the new directory.
+11. Start the `gfsh` command shell:
+
+    ``` pre
+    $ gfsh
+    ```
+
+12. Start a new locator. For example:
+
+    ``` pre
+    gfsh>start locator --name=locator2 --port=10335
+    Starting a GemFire Locator in /Users/username/new_gemfire/locator2...
+    .............................
+    Locator in /Users/username/new_gemfire/locator2 on 192.0.2.0[10335] as locator2 is currently online.
+    Process ID: 5749
+    Uptime: 15 seconds
+    GemFire Version: 8.1.0
+    Java Version: 1.7.0_71
+    Log File: /Users/username/new_gemfire/locator2/locator2.log
+    JVM Arguments: -Dgemfire.enable-cluster-configuration=true
+    -Dgemfire.load-cluster-configuration-from-dir=false
+    -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true
+    -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
+    Class-Path: /Users/username/Pivotal_GemFire_810_b50582_Linux/lib/gemfire.jar
+    :/Users/username/Pivotal_GemFire_810_b50582_Linux/lib/locator-dependencies.jar
+
+    Successfully connected to: [host=192.0.2.0, port=1099]
+
+    Cluster configuration service is up and running.
+    ```
+
+13. Import the cluster configuration using the `import cluster-configuration` command. For example:
+
+    ``` pre
+    gfsh>import cluster-configuration --zip-file-name=/Users/username/myClusterConfig.zip
+    Cluster configuration successfully imported
+    ```
+
+    Note that the `locator2` directory now contains a `cluster_config` subdirectory.
+
+14. Start a server that does not reference a group:
+
+    ``` pre
+    gfsh>start server --name=server4 --server-port=40414
+    Starting a GemFire Server in /Users/username/new_gemfire/server4...
+    ........
+    Server in /Users/username/new_gemfire/server4 on 192.0.2.0[40414] as server4 is currently online.
+    Process ID: 5813
+    Uptime: 4 seconds
+    GemFire Version: 8.1.0
+    Java Version: 1.7.0_71
+    Log File: /Users/username/new_gemfire/server4/server4.log
+    JVM Arguments: -Dgemfire.default.locators=192.0.2.0[10335]
+    -Dgemfire.use-cluster-configuration=true -XX:OnOutOfMemoryError=kill -KILL %p
+    -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true
+    -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
+    Class-Path: /Users/username/Pivotal_GemFire_810_b50582_Linux/lib/gemfire.jar
+    :/Users/username/Pivotal_GemFire_810_b50582_Linux/lib/server-dependencies.jar
+    ```
+
+15. Start another server that references `group1`:
+
+    ``` pre
+    gfsh>start server --name=server5 --group=group1 --server-port=40415
+    Starting a GemFire Server in /Users/username/new_gemfire/server5...
+    .....
+    Server in /Users/username/new_gemfire/server5 on 192.0.2.0[40415] as server5 is currently online.
+    Process ID: 5954
+    Uptime: 2 seconds
+    GemFire Version: 8.1.0
+    Java Version: 1.7.0_71
+    Log File: /Users/username/new_gemfire/server5/server5.log
+    JVM Arguments: -Dgemfire.default.locators=192.0.2.0[10335] -Dgemfire.groups=group1
+    -Dgemfire.use-cluster-configuration=true -XX:OnOutOfMemoryError=kill -KILL %p
+    -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true
+    -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
+    Class-Path: /Users/username/Pivotal_GemFire_810_b50582_Linux/lib/gemfire.jar
+    :/Users/username/Pivotal_GemFire_810_b50582_Linux/lib/server-dependencies.jar
+    ```
+
+16. Use the `list regions` command to display the configured regions. Note that `region1` and `region2`, which were configured in the original cluster, are available.
+
+    ``` pre
+    gfsh>list regions
+    List of regions
+    ---------------
+    region1
+    region2
+    ```
+
+17. Use the `describe region` command to see which members host each region. Note that `region1` is hosted only by `server5`, because `server5` was started using the `group1` configuration. `region2` is hosted by both `server4` and `server5`, because `region2` was created without a specified group.
+
+    ``` pre
+    gfsh>describe region --name=region1
+    ..........................................................
+    Name            : region1
+    Data Policy     : replicate
+    Hosting Members : server5
+
+    Non-Default Attributes Shared By Hosting Members
+
+     Type  | Name | Value
+    ------ | ---- | -----
+    Region | size | 0
+
+
+    gfsh>describe region --name=region2
+    ..........................................................
+    Name            : region2
+    Data Policy     : replicate
+    Hosting Members : server5
+                      server4
+
+    Non-Default Attributes Shared By Hosting Members
+
+     Type  | Name | Value
+    ------ | ---- | -----
+    Region | size | 0
+    ```
+
+    This new cluster uses the same configuration as the original system. You can start any number of servers using this cluster configuration. All servers will receive the cluster-level configuration. Servers that specify `group1` also receive the `group1` configuration.
+
+18. Shut down your cluster using the following commands:
+
+    ``` pre
+    gfsh>shutdown --include-locators=true
+    As a lot of data in memory will be lost, including possibly events in queues,
+      do you really want to shutdown the entire distributed system? (Y/n): Y
+    Shutdown is triggered
+
+    gfsh>
+    No longer connected to 192.0.2.0[1099].
+    ```

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/cluster_config/using_member_groups.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/cluster_config/using_member_groups.html.md.erb b/geode-docs/configuring/cluster_config/using_member_groups.html.md.erb
new file mode 100644
index 0000000..524d787
--- /dev/null
+++ b/geode-docs/configuring/cluster_config/using_member_groups.html.md.erb
@@ -0,0 +1,27 @@
+---
+title:  Using Member Groups
+---
+
+Apache Geode allows you to organize your distributed system members into logical member groups.
+
+The use of member groups in Apache Geode is optional. The benefit of using member groups is the ability to coordinate certain operations on members based on logical group membership. For example, by defining and using member groups you can:
+
+-   Alter a subset of configuration properties for a specific member or members. See [alter runtime](../../tools_modules/gfsh/command-pages/alter.html#topic_7E6B7E1B972D4F418CB45354D1089C2B) in `gfsh`.
+-   Perform certain disk operations like disk-store compaction across a member group. See [Disk Store Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_1ACC91B493EE446E89EC7DBFBBAE00EA) for a list of commands.
+-   Manage specific indexes or regions across all members of a group.
+-   Start and stop multi-site (WAN) services such as gateway senders and gateway receivers across a member group.
+-   Deploy or undeploy JAR applications on all members in a group.
+-   Execute functions on all members of a specific group.
+
+You define group names in the `groups` property of your member's `gemfire.properties` file or upon member startup in `gfsh`.
+
+**Note:**
+Any roles defined in the currently existing `roles` property will now be considered a group. If you wish to add membership roles to your distributed system, you should add them as member groups in the `groups` property. The `roles` property has been deprecated in favor of using the `groups` property.
+
+To add a member to a group, add the name of a member group to the member's `gemfire.properties` file prior to startup, or start the member in `gfsh` and pass the `--group` argument at startup time.
+
+A single member can belong to more than one group.
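+
+For example, group membership can be declared in either place (the group names are illustrative):
+
+``` pre
+# In gemfire.properties:
+groups=group1,group2
+
+# Or at member startup in gfsh:
+gfsh>start server --name=server1 --group=group1,group2
+```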
+
+Member groups can also be used to organize members from either a client's perspective or from a peer member's perspective. See [Organizing Peers into Logical Member Groups](../../topologies_and_comm/p2p_configuration/configuring_peer_member_groups.html) and [Organizing Servers Into Logical Member Groups](../../topologies_and_comm/cs_configuration/configure_servers_into_logical_groups.html) for more information. On the client side, you can supply the member group name when configuring a client's connection pool: use the `server-group` attribute of the `<pool>` element in the client's cache.xml.
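+
+As a sketch, a client pool bound to a server group might be declared in the client's cache.xml as follows (the pool name, group name, and locator address are illustrative):
+
+``` pre
+<pool name="serverPool" server-group="group1">
+  <locator host="localhost" port="10334"/>
+</pool>
+```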
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/running/change_file_spec.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/running/change_file_spec.html.md.erb b/geode-docs/configuring/running/change_file_spec.html.md.erb
new file mode 100644
index 0000000..8edb68b
--- /dev/null
+++ b/geode-docs/configuring/running/change_file_spec.html.md.erb
@@ -0,0 +1,40 @@
+---
+title:  Changing the File Specifications
+---
+
+You can change all file specifications in the `gemfire.properties` file and at the command line.
+
+**Note:**
+Geode applications can use the API to pass `java.lang.System` properties to the distributed system connection. These override file specifications made at the command line and in the `gemfire.properties` file. You can verify an application's property settings in the configuration information logged at application startup. The configuration is listed when the `gemfire.properties` `log-level` is set to `config` or lower.
+
+This invocation of the application, `testApplication.TestApp1`, provides non-default specifications for both the `cache.xml` and `gemfire.properties`:
+
+``` pre
+java -Dgemfire.cache-xml-file=
+/gemfireSamples/examples/dist/cacheRunner/queryPortfolios.xml
+-DgemfirePropertyFile=defaultConfigs/gemfire.properties
+testApplication.TestApp1
+```
+
+The `gfsh start server` command can use the same specifications:
+
+``` pre
+gfsh>start server
+-J-Dgemfire.cache-xml-file=/gemfireSamples/examples/dist/cacheRunner/queryPortfolios.xml
+-J-DgemfirePropertyFile=defaultConfigs/gemfire.properties
+```
+
+You can also change the specifications for the `cache.xml` file inside the `gemfire.properties` file.
+
+**Note:**
+Specifications in `gemfire.properties` files cannot use environment variables.
+
+Example `gemfire.properties` file with non-default `cache.xml` specification:
+
+``` pre
+#Tue May 09 17:53:54 PDT 2006
+mcast-address=192.0.2.0
+mcast-port=10333
+locators=
+cache-xml-file=/gemfireSamples/examples/dist/cacheRunner/queryPortfolios.xml
+```

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/running/default_file_specs.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/running/default_file_specs.html.md.erb b/geode-docs/configuring/running/default_file_specs.html.md.erb
new file mode 100644
index 0000000..37f9ee3
--- /dev/null
+++ b/geode-docs/configuring/running/default_file_specs.html.md.erb
@@ -0,0 +1,59 @@
+---
+title:  Default File Specifications and Search Locations
+---
+
+Each file has a default name, a set of file search locations, and a system property you can use to override the defaults.
+
+To use the default specifications, place the file at the top level of its directory or jar file. The system properties are standard file specifications that can have absolute or relative pathnames and filenames.
+
+**Note:**
+If you do not specify an absolute file path and name, the search examines all search locations for the file.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Default File Specification</th>
+<th>Search Locations for Relative File Specifications</th>
+<th>Available Property for File Specification</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><code class="ph codeph">gemfire.properties</code></td>
+<td><ol>
+<li>current directory</li>
+<li>home directory</li>
+<li>CLASSPATH</li>
+</ol></td>
+<td>As a Java system property, use <code class="ph codeph">gemfirePropertyFile</code></td>
+</tr>
+<tr class="even">
+<td><code class="ph codeph">cache.xml</code></td>
+<td><ol>
+<li>current directory</li>
+<li>CLASSPATH</li>
+</ol></td>
+<td>In <code class="ph codeph">gemfire.properties</code>, use the <code class="ph codeph">cache-xml-file</code> property</td>
+</tr>
+</tbody>
+</table>
+
+Examples of valid `gemfirePropertyFile` specifications:
+
+-   `/zippy/users/jpearson/gemfiretest/gemfire.properties`
+-   `c:\gemfiretest\gemfire.prp`
+-   `myGF.properties`
+-   `test1/gfprops`
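+
+For example, an application could be launched with an explicit `gemfirePropertyFile` setting like this (the class name is illustrative):
+
+``` pre
+java -DgemfirePropertyFile=test1/gfprops myApplication.MyApp
+```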
+
+For the `test1/gfprops` specification, if you launch your Geode system member from `/testDir` in a Unix file system, Geode looks for the file in this order until it finds the file or exhausts all locations:
+
+1.  `/testDir/test1/gfprops`
+2.  `<yourHomeDir>/test1/gfprops`
+3.  under every location in your `CLASSPATH` for `test1/gfprops`
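The search order above can be sketched as a small function. This is an illustrative model only, not Geode code; the function name and arguments are invented for the example:

```python
import os

def resolve_relative_spec(spec, current_dir, home_dir, classpath_dirs):
    """Return the first existing copy of a relative file spec,
    following the documented search order: current directory,
    home directory, then every CLASSPATH location."""
    for base in [current_dir, home_dir, *classpath_dirs]:
        candidate = os.path.join(base, spec)
        if os.path.isfile(candidate):
            return candidate
    return None  # all search locations exhausted
```

An absolute specification, by contrast, is used as-is with no search.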
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/running/deploy_config_files_intro.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/running/deploy_config_files_intro.html.md.erb b/geode-docs/configuring/running/deploy_config_files_intro.html.md.erb
new file mode 100644
index 0000000..758b25a
--- /dev/null
+++ b/geode-docs/configuring/running/deploy_config_files_intro.html.md.erb
@@ -0,0 +1,17 @@
+---
+title:  Main Steps to Deploying Configuration Files
+---
+
+These are the basic steps for deploying configuration files, with related detail in sections that follow.
+
+1.  Determine which configuration files you need for your installation.
+2.  Place the files in your directories or jar files.
+3.  For any file with a non-default name or location, provide the file specification in the system properties file and/or in the member `CLASSPATH`.
+
+## <a id="concept_337B365782E44951B73F33E1E17AB07B__section_53C98F9DB1584E3BABFA315CDF254A92" class="no-quick-link"></a>Geode Configuration Files
+
+-   `gemfire.properties`. Contains the settings required by members of a distributed system. These settings include licensing, system member discovery, communication parameters, logging, and statistics. See the [Reference](../../reference/book_intro.html#reference).
+-   **`gfsecurity.properties`**. An optional separate file that contains security-related (`security-*`) settings that are otherwise defined in `gemfire.properties`. Placing these member properties into a separate file allows you to restrict user access to those specific settings. See the [Reference](../../reference/book_intro.html#reference).
+-   `cache.xml`. Declarative cache configuration file. This file contains XML declarations for cache, region, and region entry configuration. You also use it to configure disk stores, database login credentials, server and remote site location information, and socket information. See [cache.xml](../../reference/topics/chapter_overview_cache_xml.html#cache_xml).
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/running/deploying_config_files.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/running/deploying_config_files.html.md.erb b/geode-docs/configuring/running/deploying_config_files.html.md.erb
new file mode 100644
index 0000000..76c036a
--- /dev/null
+++ b/geode-docs/configuring/running/deploying_config_files.html.md.erb
@@ -0,0 +1,28 @@
+---
+title:  Deploying Configuration Files without the Cluster Configuration Service
+---
+
+You can deploy your Apache Geode configuration files in your system directory structure or in jar files. You determine how you want to deploy your configuration files and set them up accordingly.
+
+**Note:**
+If you use the cluster configuration service to create and manage your Apache Geode cluster configuration, the procedures described in this section are not needed because Geode automatically manages the distribution of the configuration files and jar files to members of the cluster. See [Overview of the Cluster Configuration Service](../cluster_config/gfsh_persist.html).
+
+You can use the procedures described in this section to distribute configurations that are member-specific, or for situations where you do not want to use the cluster configuration service.
+
+-   **[Main Steps to Deploying Configuration Files](../../configuring/running/deploy_config_files_intro.html)**
+
+    These are the basic steps for deploying configuration files, with related detail in sections that follow.
+
+-   **[Default File Specifications and Search Locations](../../configuring/running/default_file_specs.html)**
+
+    Each file has a default name, a set of file search locations, and a system property you can use to override the defaults.
+
+-   **[Changing the File Specifications](../../configuring/running/change_file_spec.html)**
+
+    You can change all file specifications in the `gemfire.properties` file and at the command line.
+
+-   **[Deploying Configuration Files in JAR Files](../../configuring/running/deploying_config_jar_files.html)**
+
+    This section provides a procedure and an example for deploying configuration files in JAR files.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/running/deploying_config_jar_files.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/running/deploying_config_jar_files.html.md.erb b/geode-docs/configuring/running/deploying_config_jar_files.html.md.erb
new file mode 100644
index 0000000..bf855c6
--- /dev/null
+++ b/geode-docs/configuring/running/deploying_config_jar_files.html.md.erb
@@ -0,0 +1,35 @@
+---
+title:  Deploying Configuration Files in JAR Files
+---
+
+This section provides a procedure and an example for deploying configuration files in JAR files.
+
+**Procedure**
+
+1.  Jar the files.
+2.  Set the Apache Geode system properties to point to the files as they reside in the jar file.
+3.  Include the jar file in your `CLASSPATH`.
+4.  Verify that the copies in the jar file are the only ones visible to the application at runtime. Geode searches the `CLASSPATH` after searching the other locations, so the files must not be present in any of the earlier search locations.
+5.  Start your application. The configuration file is loaded from the jar file.
+
+**Example of Deploying a Configuration JAR**
+
+The following example deploys the cache configuration file, `myCache.xml`, in `my.jar`. The following displays the contents of `my.jar`:
+
+``` pre
+% jar -tf my.jar 
+META-INF 
+META-INF/MANIFEST.MF 
+myConfig/ 
+myConfig/myCache.xml
+```
+
+In this example, you would perform the following steps to deploy the configuration jar file:
+
+1.  Set the system property `gemfire.cache-xml-file` to `myConfig/myCache.xml`.
+2.  Set your `CLASSPATH` to include `my.jar`.
+3.  Verify that no file exists at `./myConfig/myCache.xml`, the location in the current directory that would otherwise be found first.
+
+When you start your application, the configuration file is loaded from the jar file.
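A jar file is simply a zip archive, and the configuration file is read from it as a classpath resource. The following sketch (using Python's `zipfile` as a stand-in for the Java resource lookup; the function name is invented for the example) shows why the path inside the jar, `myConfig/myCache.xml`, is exactly what the system property must name:

```python
import zipfile

def read_config_from_jar(jar_path, resource_path):
    """Read a resource from inside a jar: a jar is a zip archive,
    and resource paths are relative to the root of the archive."""
    with zipfile.ZipFile(jar_path) as jar:
        with jar.open(resource_path) as fh:
            return fh.read().decode("utf-8")
```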
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/running/firewall_ports_config.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/running/firewall_ports_config.html.md.erb b/geode-docs/configuring/running/firewall_ports_config.html.md.erb
new file mode 100644
index 0000000..4f90602
--- /dev/null
+++ b/geode-docs/configuring/running/firewall_ports_config.html.md.erb
@@ -0,0 +1,15 @@
+---
+title:  Firewall Considerations
+---
+
+You can configure and limit port usage for situations that involve firewalls, for example, between client-server or server-server connections.
+
+-   **[Firewalls and Connections](../../configuring/running/firewalls_connections.html)**
+
+    Be aware of possible connection problems that can result from running a firewall on your machine.
+
+-   **[Firewalls and Ports](../../configuring/running/firewalls_ports.html)**
+
+    Make sure your port settings are configured correctly for firewalls.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/running/firewalls_connections.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/running/firewalls_connections.html.md.erb b/geode-docs/configuring/running/firewalls_connections.html.md.erb
new file mode 100644
index 0000000..3ae6bb1
--- /dev/null
+++ b/geode-docs/configuring/running/firewalls_connections.html.md.erb
@@ -0,0 +1,18 @@
+---
+title:  Firewalls and Connections
+---
+
+Be aware of possible connection problems that can result from running a firewall on your machine.
+
+Apache Geode is a network-centric distributed system, so if you have a firewall running on your machine it could cause connection problems. For example, your connections may fail if your firewall places restrictions on inbound or outbound permissions for Java-based sockets. You may need to modify your firewall configuration to permit traffic to Java applications running on your machine. The specific configuration depends on the firewall you are using.
+
+As one example, firewalls may close connections to Geode due to timeout settings. If a firewall senses no activity in a certain time period, it may close a connection and open a new connection when activity resumes, which can cause some confusion about which connections you have.
+
+For more information on how Geode client and servers connect, see the following topics:
+
+-   [How Client/Server Connections Work](../../topologies_and_comm/topology_concepts/how_the_pool_manages_connections.html#how_the_pool_manages_connections)
+-   [Socket Communication](../../managing/monitor_tune/socket_communication.html)
+-   [Controlling Socket Use](../../managing/monitor_tune/performance_controls_controlling_socket_use.html#perf)
+-   [Setting Socket Buffer Sizes](../../managing/monitor_tune/socket_communication_setting_socket_buffer_sizes.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/running/firewalls_multisite.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/running/firewalls_multisite.html.md.erb b/geode-docs/configuring/running/firewalls_multisite.html.md.erb
new file mode 100644
index 0000000..ace18d7
--- /dev/null
+++ b/geode-docs/configuring/running/firewalls_multisite.html.md.erb
@@ -0,0 +1,70 @@
+---
+title:  Firewalls and Ports in Multi-Site (WAN) Configurations
+---
+
+Make sure your port settings are configured correctly for firewalls.
+
+<a id="concept_pfs_sf4_ft__section_alm_2g4_ft"></a>
+Each gateway receiver uses a port to listen for incoming communication from one or more gateway senders in remote GemFire sites. The full range of port values for gateway receivers must be made accessible within the firewall from across the WAN.
+
+## **Properties for Firewall and Port Configuration in Multi-Site (WAN) Configurations**
+
+This table contains properties potentially involved in firewall behavior, with a brief description of each property. Click on a property name for a link to the [gemfire.properties and gfsecurity.properties (GemFire Properties)](../../reference/topics/gemfire_properties.html#gemfire_properties) reference topic.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Configuration Area</th>
+<th><strong>Property or Setting</strong></th>
+<th><strong>Definition</strong></th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>multi-site (WAN) config</td>
+<td><p>[hostname-for-senders](../../reference/topics/gfe_cache_xml.html#gateway-receiver)</p></td>
+<td><p>Hostname or IP address of the gateway receiver used by gateway senders to connect.</p></td>
+</tr>
+<tr class="even">
+<td>multi-site (WAN) config</td>
+<td>[remote-locators](../../reference/topics/gemfire_properties.html#gemfire_properties)</td>
+<td><p>List of locators (and their ports) that are available on the remote WAN site.</p></td>
+</tr>
+<tr class="odd">
+<td>multi-site (WAN) config</td>
+<td><p>[start-port](../../reference/topics/gfe_cache_xml.html#gateway-receiver) and [end-port](../../reference/topics/gfe_cache_xml.html#gateway-receiver) (cache.xml) or <code class="ph codeph">--start-port</code> and <code class="ph codeph">--end-port</code> parameters to the gfsh start gateway receiver command</p></td>
+<td><p>Port range that the gateway receiver can use to listen for gateway sender communication.</p></td>
+</tr>
+</tbody>
+</table>
+
+## Default Port Configuration
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th><p><strong>Port Name</strong></p></th>
+<th>Related Configuration Setting</th>
+<th><p><strong>Default Port</strong></p></th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><p>Gateway Receiver</p></td>
+<td><p>[start-port](../../reference/topics/gfe_cache_xml.html#gateway-receiver) and [end-port](../../reference/topics/gfe_cache_xml.html#gateway-receiver) (cache.xml) or <code class="ph codeph">--start-port</code> and <code class="ph codeph">--end-port</code> parameters to the <code class="ph codeph">gfsh start gateway receiver</code> command</p></td>
+<td><em>not set</em>. Each gateway receiver uses a single port to accept connections from gateway senders in other systems. However, the configuration of a gateway receiver specifies a range of possible port values to use. GemFire selects an available port from the specified range when the gateway receiver starts. Configure your firewall so that the full range of possible port values is accessible by gateway senders from across the WAN.</td>
+</tr>
+</tbody>
+</table>
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/running/firewalls_ports.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/running/firewalls_ports.html.md.erb b/geode-docs/configuring/running/firewalls_ports.html.md.erb
new file mode 100644
index 0000000..e278e5c
--- /dev/null
+++ b/geode-docs/configuring/running/firewalls_ports.html.md.erb
@@ -0,0 +1,229 @@
+---
+title:  Firewalls and Ports
+---
+
+Make sure your port settings are configured correctly for firewalls.
+
+<a id="concept_5ED182BDBFFA4FAB89E3B81366EBC58E__section_F9C1D7419F954DC1A305C34714C8615C"></a>
+There are several different port settings that need to be considered when using firewalls:
+
+-   Port that the cache server listens on. This is configurable using the `cache-server` element in cache.xml, via the `CacheServer` class in the Java API, and as a command-line option to the `gfsh start server` command.
+
+    By default, if not otherwise specified, Geode clients and servers discover each other on a predefined port (**40404**) on localhost.
+
+-   Locator port. Geode clients can use the locator to automatically discover cache servers. The locator port is configurable as a command-line option to the `gfsh start locator` command. Locators are used in peer-to-peer deployments to discover other processes, and clients can use them to locate servers as an alternative to configuring clients with a collection of server addresses and ports.
+
+    By default, if not otherwise specified, Geode locators use the default multicast port **10334**.
+
+-   Because locators start up the distributed system, they must also have their ephemeral port range and TCP port accessible to other members through the firewall.
+-   For clients, you configure the client to connect to servers using the client's pool configuration. The client's pool configuration has two options: you can create a pool with either a list of server elements or a list of locator elements. For each element, you specify the host and port. The ports specified must be made accessible through your firewall.
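For example, a client `cache.xml` might declare a pool by locator; the host and port values below are placeholders, and these are the addresses your firewall must allow:

``` pre
<pool name="serverPool" subscription-enabled="true">
   <locator host="locator-host" port="10334"/>
</pool>
```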
+
+## **Limiting Ephemeral Ports for Peer-to-Peer Membership**
+
+By default, Geode assigns *ephemeral* ports, that is, temporary ports assigned from a designated range, which can encompass a large number of possible ports. When a firewall is present, the ephemeral port range usually must be limited to a much smaller number, for example, six ports. If you are configuring peer-to-peer communications through a firewall, you must also set the TCP port for each process and ensure that UDP traffic is allowed through the firewall.
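A minimal `gemfire.properties` fragment for such a setup might look like this (the port numbers are illustrative; choose values your firewall permits):

``` pre
# Fix the TCP port used for cache communications
tcp-port=40001
# Restrict the ephemeral range to six ports
membership-port-range=51000-51005
```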
+
+## **Properties for Firewall and Port Configuration**
+
+This table contains properties potentially involved in firewall behavior, with a brief description of each property. Click on a property name for a link to the reference topic.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="34%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th><strong>Configuration area</strong></th>
+<th><strong>Property or Setting</strong></th>
+<th><strong>Definition</strong></th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>peer-to-peer config</td>
+<td><p><code class="ph codeph">conserve-sockets</code></p></td>
+<td><p>Specifies whether sockets are shared by the system member's threads.</p></td>
+</tr>
+<tr class="even">
+<td>peer-to-peer config</td>
+<td><p><code class="ph codeph">locators</code></p></td>
+<td><p>The list of locators used by system members. The list must be configured consistently for every member of the distributed system.</p></td>
+</tr>
+<tr class="odd">
+<td>peer-to-peer config</td>
+<td><p><code class="ph codeph">mcast-address</code></p></td>
+<td><p>Address used to discover other members of the distributed system. Only used if mcast-port is non-zero. This attribute must be consistent across the distributed system.</p></td>
+</tr>
+<tr class="even">
+<td>peer-to-peer config</td>
+<td><p><code class="ph codeph">mcast-port</code></p></td>
+<td><p>Port used, along with the mcast-address, for multicast communication with other members of the distributed system. If zero, multicast is disabled for data distribution.</p></td>
+</tr>
+<tr class="odd">
+<td>peer-to-peer config</td>
+<td><p><code class="ph codeph">membership-port-range</code></p></td>
+<td><p>The range of ephemeral ports available for unicast UDP messaging and for TCP failure detection in the peer-to-peer distributed system.</p></td>
+</tr>
+<tr class="even">
+<td>peer-to-peer config</td>
+<td><p><code class="ph codeph">tcp-port</code></p></td>
+<td><p>The TCP port to listen on for cache communications.</p></td>
+</tr>
+</tbody>
+</table>
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Configuration Area</th>
+<th><strong>Property or Setting</strong></th>
+<th><strong>Definition</strong></th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>cache server config</td>
+<td><p><code class="ph codeph">hostname-for-clients</code></p></td>
+<td><p>Hostname or IP address to pass to the client as the location where the server is listening.</p></td>
+</tr>
+<tr class="even">
+<td>cache server config</td>
+<td><p><code class="ph codeph">max-connections</code></p></td>
+<td><p>Maximum number of client connections for the server. When the maximum is reached, the server refuses additional client connections.</p></td>
+</tr>
+<tr class="odd">
+<td>cache server config</td>
+<td><p><code class="ph codeph">port</code> (cache.xml) or <code class="ph codeph">--port</code> parameter to the <code class="ph codeph">gfsh start server</code> command</p></td>
+<td><p>Port that the server listens on for client communication.</p></td>
+</tr>
+</tbody>
+</table>
+
+## Default Port Configurations
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th><p><strong>Port Name</strong></p></th>
+<th>Related Configuration Setting</th>
+<th><p><strong>Default Port</strong></p></th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><p>Cache Server</p></td>
+<td><p><code class="ph codeph">port</code> (cache.xml)</p></td>
+<td>40404</td>
+</tr>
+<tr class="even">
+<td><p>HTTP</p></td>
+<td><code class="ph codeph">http-service-port</code></td>
+<td>7070</td>
+</tr>
+<tr class="odd">
+<td><p>Locator</p></td>
+<td><code class="ph codeph">start-locator</code> (for embedded locators) or <code class="ph codeph">--port</code> parameter to the <code class="ph codeph">gfsh start locator</code> command.</td>
+<td><em>if not specified upon startup or in the start-locator property, uses default multicast port 10334</em></td>
+</tr>
+<tr class="even">
+<td><p>Membership Port Range</p></td>
+<td><code class="ph codeph">membership-port-range</code></td>
+<td>1024 to 65535</td>
+</tr>
+<tr class="odd">
+<td><p>Memcached Port</p></td>
+<td><code class="ph codeph">memcached-port</code></td>
+<td><em>not set</em></td>
+</tr>
+<tr class="even">
+<td><p>Multicast</p></td>
+<td><code class="ph codeph">mcast-port</code></td>
+<td>10334</td>
+</tr>
+<tr class="odd">
+<td><p>RMI</p></td>
+<td><code class="ph codeph">jmx-manager-port</code></td>
+<td>1099</td>
+</tr>
+<tr class="even">
+<td><p>TCP</p></td>
+<td><code class="ph codeph">tcp-port</code></td>
+<td>ephemeral port</td>
+</tr>
+</tbody>
+</table>
+
+## **Properties for Firewall and Port Configuration in Multi-Site (WAN) Configurations**
+
+Each gateway receiver uses a port to listen for incoming communication from one or more gateway senders in remote Geode sites. The full range of port values for gateway receivers must be made accessible within the firewall from across the WAN.
+
+This table contains properties potentially involved in firewall behavior, with a brief description of each property. Click on a property name for a link to the [gemfire.properties and gfsecurity.properties (Geode Properties)](../../reference/topics/gemfire_properties.html#gemfire_properties) reference topic.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Configuration Area</th>
+<th><strong>Property or Setting</strong></th>
+<th><strong>Definition</strong></th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>multi-site (WAN) config</td>
+<td><p>[hostname-for-senders](../../reference/topics/gfe_cache_xml.html#gateway-receiver)</p></td>
+<td><p>Hostname or IP address of the gateway receiver used by gateway senders to connect.</p></td>
+</tr>
+<tr class="even">
+<td>multi-site (WAN) config</td>
+<td>[remote-locators](../../reference/topics/gemfire_properties.html#gemfire_properties)</td>
+<td><p>List of locators (and their ports) that are available on the remote WAN site.</p></td>
+</tr>
+<tr class="odd">
+<td>multi-site (WAN) config</td>
+<td><p>[start-port](../../reference/topics/gfe_cache_xml.html#gateway-receiver) and [end-port](../../reference/topics/gfe_cache_xml.html#gateway-receiver) (cache.xml) or <code class="ph codeph">--start-port</code> and <code class="ph codeph">--end-port</code> parameters to the <code class=" ph codeph">gfsh start gateway receiver</code> command</p></td>
+<td><p>Port range that the gateway receiver can use to listen for gateway sender communication.</p></td>
+</tr>
+</tbody>
+</table>
+
+## Default Port Configuration
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th><p><strong>Port Name</strong></p></th>
+<th>Related Configuration Setting</th>
+<th><p><strong>Default Port</strong></p></th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><p>Gateway Receiver</p></td>
+<td><p>[start-port](../../reference/topics/gfe_cache_xml.html#gateway-receiver) and [end-port](../../reference/topics/gfe_cache_xml.html#gateway-receiver) (cache.xml) or <code class="ph codeph">--start-port</code> and <code class="ph codeph">--end-port</code> parameters to the <code class="ph codeph">gfsh start gateway receiver</code> command</p></td>
+<td><em>not set</em>. Each gateway receiver uses a single port to accept connections from gateway senders in other systems. However, the configuration of a gateway receiver specifies a range of possible port values to use. Geode selects an available port from the specified range when the gateway receiver starts. Configure your firewall so that the full range of possible port values is accessible by gateway senders from across the WAN.</td>
+</tr>
+</tbody>
+</table>
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/running/managing_output_files.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/running/managing_output_files.html.md.erb b/geode-docs/configuring/running/managing_output_files.html.md.erb
new file mode 100644
index 0000000..59f48aa
--- /dev/null
+++ b/geode-docs/configuring/running/managing_output_files.html.md.erb
@@ -0,0 +1,16 @@
+---
+title:  Managing System Output Files
+---
+
+Geode output files are optional and can become quite large. Work with your system administrator to determine where to place them to avoid interfering with other system activities.
+
+<a id="managing_output_files__section_F0CEA4299D274801B9AB700C074F178F"></a>
+Geode includes several types of optional output files as described below.
+
+-   **Log Files**. Comprehensive logging messages to help you confirm system configuration and to debug problems in configuration and code. Configure log file behavior in the `gemfire.properties` file. See [Logging](../../managing/logging/logging.html#concept_30DB86B12B454E168B80BB5A71268865).
+
+-   **Statistics Archive Files**. Standard statistics for caching and distribution activities, which you can archive on disk. Configure statistics collection and archiving in `gemfire.properties`, using properties such as `archive-disk-space-limit` and `archive-file-size-limit`. See the [Reference](../../reference/book_intro.html#reference).
+
+-   **Disk Store Files**. Hold persistent and overflow data from the cache. You can configure regions to persist data to disk for backup purposes or to overflow to disk to control memory use. The subscription queues that servers use to send events to clients can also overflow to disk. Gateway sender queues overflow to disk automatically and can be persisted for high availability. Configure these through `cache.xml`. See [Disk Storage](../../managing/disk_storage/chapter_overview.html).
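The first two file types above are controlled through `gemfire.properties`. A representative fragment follows; the file names are illustrative, and the two limit properties are in megabytes:

``` pre
log-file=system.log
statistic-sampling-enabled=true
statistic-archive-file=statArchive.gfs
archive-disk-space-limit=1000
archive-file-size-limit=100
```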
+
+