Posted to commits@geode.apache.org by km...@apache.org on 2016/10/14 22:17:03 UTC

[05/94] [abbrv] [partial] incubator-geode git commit: GEODE-1952 Consolidated docs under a single geode-docs directory

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/udp_communication.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/monitor_tune/udp_communication.html.md.erb b/geode-docs/managing/monitor_tune/udp_communication.html.md.erb
new file mode 100644
index 0000000..2f85709
--- /dev/null
+++ b/geode-docs/managing/monitor_tune/udp_communication.html.md.erb
@@ -0,0 +1,33 @@
+---
+title:  UDP Communication
+---
+
+You can make configuration adjustments to improve multicast and unicast UDP performance of peer-to-peer communication.
+
+You can tune your Geode UDP messaging to maximize throughput. There are two main tuning goals: to use the largest reasonable datagram packet sizes and to reduce retransmission rates. These actions reduce messaging overhead and overall traffic on your network while still getting your data where it needs to go. Geode also provides statistics to help you decide when to change your UDP messaging settings.
+
+Before you begin, you should understand Geode [Basic Configuration and Programming](../../basic_config/book_intro.html). See also the general communication tuning and multicast-specific tuning covered in [Socket Communication](socket_communication.html) and [Multicast Communication](multicast_communication.html#multicast).
+
+## <a id="udp_comm__section_4089ACC33AF34FA888BAE3CA3602A730" class="no-quick-link"></a>UDP Datagram Size
+
+You can change the UDP datagram size with the Geode property `udp-fragment-size`. This is the maximum packet size for transmission over UDP unicast or multicast sockets. When possible, smaller messages are combined into batches up to the size of this setting.
+
+Most operating systems set a maximum transmission size of 64k for UDP datagrams, so this setting should be kept under 60k to allow for communication headers. Setting the fragment size too high can result in extra network traffic if your network is subject to packet loss, as more data must be resent for each retransmission. If many UDP retransmissions appear in `DistributionStats`, you may achieve better throughput by lowering the fragment size.
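As a sketch, lowering the fragment size on a loss-prone network might look like this in `gemfire.properties` (the 30000 value is illustrative, not a recommendation; the default fragment size is 60000 bytes):

``` pre
# Reduce UDP datagram size to cut the cost of each retransmission
udp-fragment-size=30000
```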
+
+## <a id="udp_comm__section_B9882A4EBA004599B2207B9CB1D3ADC9" class="no-quick-link"></a>UDP Flow Control
+
+UDP protocols typically have a flow control protocol built into them to keep processes from being overrun by incoming no-ack messages. The Geode UDP flow control protocol is a credit-based system in which the sender has a maximum number of bytes it can send before getting its byte credit count replenished, or recharged, by its receivers. While its byte credits are too low, the sender waits. The receivers do their best to anticipate the sender's recharge requirements and provide recharges before they are needed. If the sender's credits run too low, it explicitly requests a recharge from its receivers.
+
+This flow control protocol, which is used for all multicast and unicast no-ack messaging, is configured using a three-part Geode property `mcast-flow-control`. This property is composed of:
+
+-   `byteAllowance`: Determines how many bytes (also referred to as credits) can be sent before receiving a recharge from the receiving processes.
+-   `rechargeThreshold`: Sets a lower limit on the ratio of the sender's remaining credit to its `byteAllowance`. When the ratio goes below this limit, the receiver automatically sends a recharge. This reduces recharge request messaging from the sender and helps keep the sender from blocking while waiting for recharges.
+-   `rechargeBlockMs`: Tells the sender how long to wait while needing a recharge before explicitly requesting one.
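The three parts are specified as a single comma-separated property value. The values shown below are believed to be the defaults (a `byteAllowance` of 1048576 bytes, a `rechargeThreshold` of 0.25, and a `rechargeBlockMs` of 5000 milliseconds), shown here only for illustration:

``` pre
mcast-flow-control=1048576, 0.25, 5000
```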
+
+In a well-tuned system, where consumers of cache events are keeping up with producers, the `byteAllowance` can be set high to limit flow control messaging and pauses. JVM bloat or frequent message retransmissions are an indication that cache events from producers are overrunning consumers.
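The credit-based scheme described above can be sketched in a few lines. This is a conceptual model for illustration only, not Geode's implementation; all class and method names are hypothetical:

```python
# Conceptual sketch of credit-based flow control, as described above.
# Not Geode code; names and structure are illustrative only.

class Sender:
    def __init__(self, byte_allowance, recharge_threshold):
        self.byte_allowance = byte_allowance   # credits granted per recharge
        self.credits = byte_allowance          # credits currently available
        self.recharge_threshold = recharge_threshold

    def can_send(self, nbytes):
        return self.credits >= nbytes

    def send(self, nbytes):
        # The sender spends credits as it transmits; with too few credits
        # it waits (and eventually explicitly requests a recharge).
        if not self.can_send(nbytes):
            raise BlockingIOError("waiting for recharge")
        self.credits -= nbytes

    def needs_recharge(self):
        # Receivers watch this ratio and proactively recharge the sender
        # before it blocks.
        return self.credits / self.byte_allowance < self.recharge_threshold

    def recharge(self):
        self.credits = self.byte_allowance


sender = Sender(byte_allowance=1000, recharge_threshold=0.25)
sender.send(800)
print(sender.needs_recharge())  # True: 200/1000 is below the 0.25 threshold
sender.recharge()
print(sender.can_send(800))     # True again after the recharge
```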
+
+## <a id="udp_comm__section_FB1F54A41D2643A29DB416D309ED4C56" class="no-quick-link"></a>UDP Retransmission Statistics
+
+Geode stores retransmission statistics for its senders and receivers. You can use these statistics to help determine whether your flow control and fragment size settings are appropriate for your system.
+
+The retransmission rates are stored in the `DistributionStats` `ucastRetransmits` and `mcastRetransmits` statistics. For multicast, there is also a receiver-side statistic, `mcastRetransmitRequests`, that can be used to see which processes aren't keeping up and are requesting retransmissions. There is no comparable way to tell which receivers are having trouble receiving unicast UDP messages.
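As a rough illustration, a monitoring script might compare two samples of these statistics to estimate a retransmission rate. The stat names and sample values below are hypothetical; in practice these numbers come from Geode's statistics archives, not from application code:

```python
# Hypothetical DistributionStats samples taken some interval apart.
sample_1 = {"mcastWrites": 120_000, "mcastRetransmits": 300}
sample_2 = {"mcastWrites": 180_000, "mcastRetransmits": 1_500}

writes = sample_2["mcastWrites"] - sample_1["mcastWrites"]
retransmits = sample_2["mcastRetransmits"] - sample_1["mcastRetransmits"]

# A high retransmit-to-write ratio suggests lowering udp-fragment-size
# or tuning mcast-flow-control.
retransmit_ratio = retransmits / writes
print(f"{retransmit_ratio:.1%} of multicast writes were retransmitted")
```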

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/network_partitioning/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/network_partitioning/chapter_overview.html.md.erb b/geode-docs/managing/network_partitioning/chapter_overview.html.md.erb
new file mode 100644
index 0000000..62a10bf
--- /dev/null
+++ b/geode-docs/managing/network_partitioning/chapter_overview.html.md.erb
@@ -0,0 +1,31 @@
+---
+title:  Network Partitioning
+---
+
+Apache Geode architecture and management features help detect and resolve network partition problems.
+
+-   **[How Network Partitioning Management Works](../../managing/network_partitioning/how_network_partitioning_management_works.html)**
+
+    Geode handles network outages by using a weighting system to determine whether the remaining available members have a sufficient quorum to continue as a distributed system.
+
+-   **[Failure Detection and Membership Views](../../managing/network_partitioning/failure_detection.html)**
+
+    Geode uses failure detection to remove unresponsive members from membership views.
+
+-   **[Membership Coordinators, Lead Members and Member Weighting](../../managing/network_partitioning/membership_coordinators_lead_members_and_weighting.html)**
+
+    Network partition detection uses a designated membership coordinator and a weighting system that accounts for a lead member to determine whether a network partition has occurred.
+
+-   **[Network Partitioning Scenarios](../../managing/network_partitioning/network_partitioning_scenarios.html)**
+
+    This topic describes network partitioning scenarios and what happens to the partitioned sides of the distributed system.
+
+-   **[Configure Apache Geode to Handle Network Partitioning](../../managing/network_partitioning/handling_network_partitioning.html)**
+
+    This section lists the configuration steps for network partition detection.
+
+-   **[Preventing Network Partitions](../../managing/network_partitioning/preventing_network_partitions.html)**
+
+    This section provides a short list of things you can do to prevent a network partition from occurring.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/network_partitioning/failure_detection.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/network_partitioning/failure_detection.html.md.erb b/geode-docs/managing/network_partitioning/failure_detection.html.md.erb
new file mode 100644
index 0000000..055cc42
--- /dev/null
+++ b/geode-docs/managing/network_partitioning/failure_detection.html.md.erb
@@ -0,0 +1,45 @@
+---
+title:  Failure Detection and Membership Views
+---
+
+Geode uses failure detection to remove unresponsive members from membership views.
+
+## <a id="concept_CFD13177F78C456095622151D6EE10EB__section_1AAE6C92FED249EFBA476D8A480B8E51" class="no-quick-link"></a>Failure Detection
+
+Network partitioning has a failure detection protocol that is not subject to hanging when NICs or machines fail. Failure detection has each member observe messages from the peer to its right within the membership view (see "Membership Views" below for the view layout). A member that suspects the failure of its peer to the right sends a datagram heartbeat request to the suspect member. With no response from the suspect member, the suspicious member broadcasts a `SuspectMembersMessage` datagram message to all other members. The coordinator attempts to connect to the suspect member. If the connection attempt is unsuccessful, the suspect member is removed from the membership view. The suspect member is sent a message to disconnect from the distributed system and close the cache. In parallel to the receipt of the `SuspectMembersMessage`, a distributed algorithm promotes the leftmost member within the view to act as the coordinator, if the coordinator is the suspect member.
+
+Failure detection processing is also initiated on a member if the `gemfire.properties` `ack-wait-threshold` elapses before receiving a response to a message, or if a TCP/IP connection cannot be made to the member for peer-to-peer (P2P) messaging and no other traffic is detected from the member.
+
+**Note:**
+The TCP connection ping is not used for connection keep-alive purposes; it is only used to detect failed members. See [TCP/IP KeepAlive Configuration](../monitor_tune/socket_tcp_keepalive.html#topic_jvc_pw3_34) for TCP keep-alive configuration.
+
+If a new membership view is sent out that includes one or more failed members, the coordinator will log new quorum weight calculations. At any point, if quorum loss is detected due to unresponsive processes, the coordinator will also log a severe level message to identify the failed members:
+``` pre
+Possible loss of quorum detected due to loss of {0} cache processes: {1}
+```
+
+in which {0} is the number of processes that failed and {1} lists the members (cache processes).
+
+## <a id="concept_CFD13177F78C456095622151D6EE10EB__section_1170FBBD6B7A483AB2C2A837F1B8876D" class="no-quick-link"></a>Membership Views
+
+The following is a sample membership view:
+
+``` pre
+[info 2012/01/06 11:44:08.164 PST bridgegemfire1 <UDP Incoming Message Handler> tid=0x1f] 
+Membership: received new view  [ent(5767)<v0>:8700|16] [ent(5767)<v0>:8700/44876, 
+ent(5829)<v1>:48034/55334, ent(5875)<v2>:4738/54595, ent(5822)<v5>:49380/39564, 
+ent(8788)<v7>:24136/53525]
+```
+
+The components of the membership view are as follows:
+
+-   The first part of the view (`[ent(5767)<v0>:8700|16]` in the example above) corresponds to the view ID. It identifies:
+    -   the address and processId of the membership coordinator (`ent(5767)` in the example above).
+    -   the view-number (`<vXX>`) of the membership view that the member first appeared in (`<v0>` in the example above).
+    -   the membership-port of the membership coordinator (`8700` in the example above).
+    -   the view-number (`16` in the example above).
+-   The second part of the view lists all of the member processes in the current view: `[ent(5767)<v0>:8700/44876, ent(5829)<v1>:48034/55334, ent(5875)<v2>:4738/54595, ent(5822)<v5>:49380/39564, ent(8788)<v7>:24136/53525]` in the example above.
+-   The overall format of each listed member is `Address(processId)<vXX>:membership-port/distribution-port`. The membership coordinator is almost always the first member in the view, and the rest are ordered by age.
+-   The membership-port is the UDP port that the member uses to send datagrams. The distribution-port is the TCP/IP port that is used for cache messaging.
+-   Each member watches the member to its right for failure detection purposes.
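The member format above can be illustrated with a short parsing sketch. This is a hypothetical helper, not part of Geode; it simply splits a logged member string into the fields described above:

```python
import re

# Matches the logged format: Address(processId)<vXX>:membership-port/distribution-port
MEMBER_RE = re.compile(
    r"(?P<address>\w+)\((?P<process_id>\d+)\)"
    r"<v(?P<view_number>\d+)>"
    r":(?P<membership_port>\d+)/(?P<distribution_port>\d+)"
)

def parse_member(text):
    m = MEMBER_RE.fullmatch(text)
    if m is None:
        raise ValueError(f"not a membership view entry: {text!r}")
    # Convert the numeric fields to ints, leave the address as a string.
    return {k: int(v) if v.isdigit() else v for k, v in m.groupdict().items()}

# One entry from the sample view above
print(parse_member("ent(5767)<v0>:8700/44876"))
```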
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/network_partitioning/handling_network_partitioning.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/network_partitioning/handling_network_partitioning.html.md.erb b/geode-docs/managing/network_partitioning/handling_network_partitioning.html.md.erb
new file mode 100644
index 0000000..70e9668
--- /dev/null
+++ b/geode-docs/managing/network_partitioning/handling_network_partitioning.html.md.erb
@@ -0,0 +1,46 @@
+---
+title:  Configure Apache Geode to Handle Network Partitioning
+---
+
+This section lists the configuration steps for network partition detection.
+
+<a id="handling_network_partitioning__section_EAF1957B6446491A938DEFB06481740F"></a>
+The system uses a combination of membership coordinators and system members, designated as lead members, to detect and resolve network partitioning problems.
+
+1.  Network partition detection works in all environments. Using multiple locators mitigates the effect of network partitioning. See [Configuring Peer-to-Peer Discovery](../../topologies_and_comm/p2p_configuration/setting_up_a_p2p_system.html).
+2.  Enable partition detection consistently in all system members by setting this in their `gemfire.properties` file:
+
+    ``` pre
+    enable-network-partition-detection=true
+    ```
+
+    Enable network partition detection in all locators and in any other process that should be sensitive to network partitioning. Processes that do not have network partition detection enabled are not eligible to be the lead member, so their failure will not trigger declaration of a network partition.
+
+    All system members should have the same setting for `enable-network-partition-detection`. If they don't, the system throws a `GemFireConfigException` upon startup.
+
+3.  You **must** set `enable-network-partition-detection` to true if you are using persistent regions (partitioned or replicated). If you create a persistent region and `enable-network-partition-detection` is set to false, you will receive the following warning message:
+
+    ``` pre
+    Creating persistent region {0}, but enable-network-partition-detection is set to false.
+          Running with network partition detection disabled can lead to an unrecoverable system in the
+          event of a network split.
+    ```
+
+4.  Configure regions you want to protect from network partitioning with `DISTRIBUTED_ACK` or `GLOBAL` `scope`. Do not use `DISTRIBUTED_NO_ACK` scope. The region configurations provided in the region shortcut settings use `DISTRIBUTED_ACK` scope. This setting prevents operations from being performed throughout the distributed system before a network partition is detected.
+    **Note:**
+    GemFire issues an alert if it detects distributed-no-ack regions when network partition detection is enabled:
+
+    ``` pre
+    Region {0} is being created with scope {1} but enable-network-partition-detection is enabled in the distributed system. 
+    This can lead to cache inconsistencies if there is a network failure.
+    ```
+
+5.  These other configuration parameters affect or interact with network partitioning detection. Check whether they are appropriate for your installation and modify as needed.
+    -   If you have network partition detection enabled, the threshold percentage value for allowed membership weight loss is automatically configured to 51. You cannot modify this value. (**Note:** The weight loss calculation uses standard rounding. Therefore, a value of 50.51 is rounded to 51 and will cause a network partition.)
+    -   Failure detection is initiated if a member's `gemfire.properties` `ack-wait-threshold` (default is 15 seconds) and `ack-severe-alert-threshold` (15 seconds) elapse before receiving a response to a message. If you modify the `ack-wait-threshold` configuration value, you should modify `ack-severe-alert-threshold` to match.
+    -   If the system has clients connecting to it, the clients' `cache.xml` `<cache> <pool> read-timeout` should be set to at least three times the `member-timeout` setting in the server's `gemfire.properties`. The default `<cache> <pool> read-timeout` setting is 10000 milliseconds.
+    -   You can adjust the default weights of members by specifying the system property `gemfire.member-weight` upon startup. For example, if you have some VMs that host a needed service, you could assign them a higher weight upon startup.
+    -   By default, members that are forced out of the distributed system by a network partition event will automatically restart and attempt to reconnect. Data members will attempt to reinitialize the cache. See [Handling Forced Cache Disconnection Using Autoreconnect](../autoreconnect/member-reconnect.html).
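Putting the steps above together, a member's `gemfire.properties` might include settings like these (the threshold values shown are the defaults mentioned above, included only for illustration):

``` pre
# Step 2: required, and must be consistent across all members
enable-network-partition-detection=true

# Step 5: keep these two thresholds in sync if you change them
ack-wait-threshold=15
ack-severe-alert-threshold=15
```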
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/network_partitioning/how_network_partitioning_management_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/network_partitioning/how_network_partitioning_management_works.html.md.erb b/geode-docs/managing/network_partitioning/how_network_partitioning_management_works.html.md.erb
new file mode 100644
index 0000000..157864d
--- /dev/null
+++ b/geode-docs/managing/network_partitioning/how_network_partitioning_management_works.html.md.erb
@@ -0,0 +1,42 @@
+---
+title:  How Network Partitioning Management Works
+---
+
+Geode handles network outages by using a weighting system to determine whether the remaining available members have a sufficient quorum to continue as a distributed system.
+
+<a id="how_network_partitioning_management_works__section_548146BB8C24412CB7B43E6640272882"></a>
+Individual members are each assigned a weight, and the quorum is determined by comparing the total weight of currently responsive members to the previous total weight of responsive members.
+
+Your distributed system can split into separate running systems when members lose the ability to see each other. The typical cause of this problem is a failure in the network. When a partitioned system is detected, Apache Geode keeps only one side of the system running and the other side automatically shuts down.
+
+**Note:**
+The network partitioning detection feature is only enabled when `enable-network-partition-detection` is set to true in `gemfire.properties`. By default, this property is set to false. See [Configure Apache Geode to Handle Network Partitioning](handling_network_partitioning.html#handling_network_partitioning) for details. Quorum weight calculations are always performed and logged regardless of this configuration setting.
+
+The overall process for detecting a network partition is as follows:
+
+1.  The distributed system starts up. When you start up a distributed system, start the locators first, start the cache servers second, and then start other members such as applications or processes that access distributed system data.
+2.  After the members start up, the oldest member, typically a locator, assumes the role of the membership coordinator. Peer discovery occurs as members come up and members generate a membership discovery list for the distributed system. Locators hand out the membership discovery list as each member process starts up. This list typically contains a hint on who the current membership coordinator is.
+3.  Members join and if necessary, depart the distributed system:
+    -   Member processes make a request to the coordinator to join the distributed system. If authenticated, the coordinator creates a new membership view, hands the new membership view to the new member, and begins the process of sending the new membership view (to add the new member or members) by sending out a view preparation message to existing members in the view.
+    -   While members are joining the system, it is possible that members are also leaving or being removed through the normal failure detection process. Failure detection removes unresponsive or slow members. See [Managing Slow Receivers](../monitor_tune/slow_receivers_managing.html) and [Failure Detection and Membership Views](failure_detection.html#concept_CFD13177F78C456095622151D6EE10EB) for descriptions of the failure detection process. If a new membership view is sent out that includes one or more failed processes, the coordinator will log the new weight calculations. At any point, if quorum loss is detected due to unresponsive processes, the coordinator will also log a severe level message to identify the failed processes:
+
+        ``` pre
+        Possible loss of quorum detected due to loss of {0} cache processes: {1}
+        ```
+
+        where {0} is the number of processes that failed and {1} lists the processes.
+
+4.  Whenever the coordinator is alerted of a membership change (a member either joins or leaves the distributed system), the coordinator generates a new membership view. The membership view is generated by a two-phase protocol:
+    1.  In the first phase, the membership coordinator sends out a view preparation message to all members and waits 12 seconds for a view preparation ack return message from each member. If the coordinator does not receive an ack message from a member within 12 seconds, the coordinator attempts to connect to the member's failure-detection socket. If the coordinator cannot connect to the member's failure-detection socket, the coordinator declares the member dead and starts the membership view protocol again from the beginning.
+    2.  In the second phase, the coordinator sends out the new membership view to all members that acknowledged the view preparation message or passed the connection test.
+
+5.  Each time the membership coordinator sends a view, each member calculates the total weight of members in the current membership view and compares it to the total weight of the previous membership view. Some conditions to note:
+    -   When the first membership view is sent out, there are no accumulated losses. The first view only has additions.
+    -   A new coordinator may have a stale view of membership if it did not see the last membership view sent by the previous (failed) coordinator. If new members were added during that failure, then the new members may be ignored when the first new view is sent out.
+    -   If members were removed during the failover to the new coordinator, then the new coordinator will have to determine these losses during the view preparation step.
+
+6.  With `enable-network-partition-detection` set to true, any member that detects that the total membership weight has dropped by 51% or more within a single membership view change (loss of quorum) declares a network partition event. The coordinator sends a network-partitioned-detected UDP message to all members (even to the non-responsive ones) and then closes the distributed system with a `ForcedDisconnectException`. If a member fails to receive the message before the coordinator closes the system, the member is responsible for detecting the event on its own.
+
+The presumption is that when a network partition is declared, the members that comprise a quorum will continue operations. The surviving members elect a new coordinator, designate a lead member, and so on.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/network_partitioning/membership_coordinators_lead_members_and_weighting.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/network_partitioning/membership_coordinators_lead_members_and_weighting.html.md.erb b/geode-docs/managing/network_partitioning/membership_coordinators_lead_members_and_weighting.html.md.erb
new file mode 100644
index 0000000..cfade68
--- /dev/null
+++ b/geode-docs/managing/network_partitioning/membership_coordinators_lead_members_and_weighting.html.md.erb
@@ -0,0 +1,62 @@
+---
+title:  Membership Coordinators, Lead Members and Member Weighting
+---
+
+Network partition detection uses a designated membership coordinator and a weighting system that accounts for a lead member to determine whether a network partition has occurred.
+
+## <a id="concept_23C2606D59754106AFBFE17515DF4330__section_7C67F1D30C1645CC8489E481873691D9" class="no-quick-link"></a>Membership Coordinators and Lead Members
+
+The membership coordinator is a member that manages entry and exit of other members of the distributed system. With network partition detection enabled, the coordinator can be any Geode member but locators are preferred. In a locator-based system, if all locators are in the reconnecting state, the system continues to function, but new members are not able to join until a locator has successfully reconnected. After a locator has reconnected, the reconnected locator will take over the role of coordinator.
+
+When a coordinator is shutting down, it sends out a view that removes itself from the list and the other members must determine who the new coordinator is.
+
+The lead member is determined by the coordinator. Any member that has enabled network partition detection, is not hosting a locator, and is not an administrator interface-only member is eligible to be designated as the lead member by the coordinator. The coordinator chooses the longest-lived member that fits the criteria.
+
+The purpose of the lead member role is to provide extra weight. It does not perform any specific functionality.
+
+## <a id="concept_23C2606D59754106AFBFE17515DF4330__section_D819DE21928F4D658C132981307447E3" class="no-quick-link"></a>Member Weighting System
+
+By default, individual members are assigned the following weights:
+
+-   Each member has a weight of 10 except the lead member.
+-   The lead member is assigned a weight of 15.
+-   Locators have a weight of 3.
+
+You can modify the default weights for specific members by defining the `gemfire.member-weight` system property upon startup.
+
+The weights of members prior to the view change are added together and compared to the weight of lost members. Lost members are considered members that were removed between the last view and the completed send of the view preparation message. If membership is reduced by a certain percentage within a single membership view change, a network partition is declared.
+
+The loss percentage threshold is 51 (meaning 51%). Note that the percentage calculation uses standard rounding. Therefore, a value of 50.51 is rounded to 51. If the rounded loss percentage is equal to or greater than 51%, the membership coordinator initiates shut down.
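The threshold arithmetic can be sketched as follows. This is a simplified model for illustration, not Geode code; it reproduces the rounding behavior described above:

```python
def partition_declared(previous_view_weight, lost_weight, threshold=51):
    """Return True if the rounded percentage of lost weight meets the threshold."""
    # Rounding applies before the comparison: a 50.51% loss rounds to 51%
    # and triggers a partition event.
    loss_percent = round(lost_weight / previous_view_weight * 100)
    return loss_percent >= threshold

# View total weight 111, as in Example 1 below
print(partition_declared(111, 56))  # False: a 56-point loss rounds to 50%
print(partition_declared(111, 58))  # True: a 58-point loss rounds to 52%
```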
+
+## <a id="concept_23C2606D59754106AFBFE17515DF4330__section_53C963D1B2DF417C973A60981E52CDCF" class="no-quick-link"></a>Sample Member Weight Calculations
+
+This section provides some example calculations.
+
+**Example 1:** Distributed system with 12 members: 2 locators and 10 cache servers (one cache server is designated as lead member). View total weight equals 111.
+
+-   4 cache servers become unreachable. Total membership weight loss is 40 (36%). Since 36% is under the 51% threshold for loss, the distributed system stays up.
+-   1 locator and 4 cache servers (including the lead member) become unreachable. Membership weight loss equals 48 (43%). Since 43% is under the 51% threshold for loss, the distributed system stays up.
+-   5 cache servers (not including the lead member) and both locators become unreachable. Membership weight loss equals 56 (50%). Since 50% is under the 51% threshold for loss, the distributed system stays up.
+-   5 cache servers (including the lead member) and 1 locator become unreachable. Membership weight loss equals 58 (52%). Since 52% is greater than the 51% threshold, the coordinator initiates shutdown.
+-   6 cache servers (not including the lead member) and both locators become unreachable. Membership weight loss equals 66 (59%). Since 59% is greater than the 51% threshold, the newly elected coordinator (a cache server since no locators remain) will initiate shutdown.
+
+**Example 2:** Distributed system with 4 members. 2 cache servers (1 cache server is designated lead member), 2 locators. View total weight is 31.
+
+-   Cache server designated as lead member becomes unreachable. Membership weight loss equals 15 or 48%. Distributed system stays up.
+-   Cache server designated as lead member and 1 locator become unreachable. Member weight loss equals 18 or 58%. Membership coordinator initiates shutdown. If the locator that became unreachable was the membership coordinator, the other locator is elected coordinator and then initiates shutdown.
+
+Even if network partition detection is not enabled, if quorum loss is detected due to unresponsive processes, the locator will also log a severe level message to identify the failed processes:
+``` pre
+Possible loss of quorum detected due to loss of {0} cache processes: {1}
+```
+
+where {0} is the number of processes that failed and {1} lists the processes.
+
+Enabling network partition detection allows only one subgroup to survive a split. The rest of the system is disconnected and the caches are closed.
+
+When a shutdown occurs, the members that are shut down will log the following alert message:
+``` pre
+Exiting due to possible network partition event due to loss of {0} cache processes: {1}
+```
+
+where `{0}` is the count of lost members and `{1}` is the list of lost member IDs.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/network_partitioning/network_partitioning_scenarios.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/network_partitioning/network_partitioning_scenarios.html.md.erb b/geode-docs/managing/network_partitioning/network_partitioning_scenarios.html.md.erb
new file mode 100644
index 0000000..4009792
--- /dev/null
+++ b/geode-docs/managing/network_partitioning/network_partitioning_scenarios.html.md.erb
@@ -0,0 +1,36 @@
+---
+title:  Network Partitioning Scenarios
+---
+
+This topic describes network partitioning scenarios and what happens to the partitioned sides of the distributed system.
+
+<img src="../../images_svg/network_partition_scenario.svg" id="concept_357ABE91AAA042D2A20328BD01FEB882__image_6ED88C6911EE4C68A19353ABD7B1552A" class="image" />
+
+## <a id="concept_357ABE91AAA042D2A20328BD01FEB882__section_DAFBCB8BB421453EB6C5B4A348640762" class="no-quick-link"></a>What the Losing Side Does
+
+In a network partitioning scenario, the "losing side" constitutes the cluster partition where the membership coordinator has detected that there is an insufficient quorum of members to continue.
+
+The membership coordinator calculates membership weight change after sending out its view preparation message. If a quorum of members does not remain after the view preparation phase, the coordinator on the "losing side" declares a network partition event and sends a network-partition-detected UDP message to the members. The coordinator then closes its distributed system with a `ForcedDisconnectException`. If a member fails to receive the message before the coordinator closes the connection, it is responsible for detecting the event on its own.
+
+When the losing side discovers that a network partition event has occurred, all peer members receive a `RegionDestroyedException` with `Operation`: `FORCED_DISCONNECT`.
+
+If a `CacheListener` is installed, the `afterRegionDestroy` callback is invoked with a `RegionDestroyedEvent`, as shown in this example logged by the losing side's callback. The peer member process IDs are 14291 (lead member) and 14296, and the locator is 14289.
+
+``` pre
+[info 2008/05/01 11:14:51.853 PDT <CloserThread> tid=0x4a] 
+Invoked splitBrain.SBListener: afterRegionDestroy in client1 whereIWasRegistered: 14291 
+event.isReinitializing(): false 
+event.getDistributedMember(): thor(14291):40440/34132 
+event.getCallbackArgument(): null 
+event.getRegion(): /TestRegion 
+event.isOriginRemote(): false 
+Operation: FORCED_DISCONNECT 
+Operation.isDistributed(): false 
+Operation.isExpiration(): false 
+```
+
+Peers still actively performing operations on the cache may see `ShutdownException`s or `CacheClosedException`s with `Caused by: ForcedDisconnectException`.
+
+## <a id="concept_357ABE91AAA042D2A20328BD01FEB882__section_E6E914107FE64C0F9D8F7DA142D00AD7" class="no-quick-link"></a>What Isolated Members Do
+
+When a member is isolated from all locators, it is unable to receive membership view changes. It can't know if the current coordinator is present or, if it has left, whether there are other members available to take over that role. In this condition, a member will eventually detect the loss of all other members and will use the loss threshold to determine whether it should shut itself down. In the case of a distributed system with 2 locators and 2 cache servers, the loss of communication with the non-lead cache server plus both locators would result in this situation and the remaining cache server would eventually shut itself down.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/network_partitioning/preventing_network_partitions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/network_partitioning/preventing_network_partitions.html.md.erb b/geode-docs/managing/network_partitioning/preventing_network_partitions.html.md.erb
new file mode 100644
index 0000000..b18b600
--- /dev/null
+++ b/geode-docs/managing/network_partitioning/preventing_network_partitions.html.md.erb
@@ -0,0 +1,11 @@
+---
+title:  Preventing Network Partitions
+---
+
+This section provides a short list of things you can do to prevent a network partition from occurring.
+
+To avoid a network partition:
+
+-   Use NIC teaming for redundant connectivity. See [http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/vmware/VMware.html#wp696452](http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/vmware/VMware.html#wp696452) for more information.
+-   It is best if all servers share a common network switch. Having multiple network switches increases the possibility of a network partition occurring. If multiple switches must be used, redundant routing paths should be available, if possible. The weight of members sharing a switch in a multi-switch configuration will determine which partition survives if there is an inter-switch failure.
+-   In terms of Geode configuration, consider the weighting of members. For example, you could assign important processes a higher weight.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/region_compression/region_compression.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/region_compression/region_compression.html.md.erb b/geode-docs/managing/region_compression/region_compression.html.md.erb
new file mode 100644
index 0000000..d664a7b
--- /dev/null
+++ b/geode-docs/managing/region_compression/region_compression.html.md.erb
@@ -0,0 +1,209 @@
+---
+title: Region Compression
+---
+<a id="topic_r43_wgc_gl"></a>
+
+
+This section describes region compression, its benefits and usage.
+
+One way to reduce memory consumption by Geode is to enable compression in your regions. Geode allows you to compress in-memory region values using pluggable compressors (compression codecs). Geode includes the [Snappy](http://google.github.io/snappy/) compressor as the built-in compression codec; however, you can implement and specify a different compressor for each compressed region.
+
+## What Gets Compressed
+
+When you enable compression in a region, all values stored in the region are compressed while in memory. Keys and indexes are not compressed. New values are compressed when put into the in-memory cache and all values are decompressed when being read from the cache. Values are not compressed when persisted to disk. Values are decompressed before being sent over the wire to other peer members or clients.
+
+When compression is enabled, each value in the region is compressed, and each region entry is compressed as a single unit. It is not possible to compress individual fields of an entry.
+
+You can have a mix of compressed and non-compressed regions in the same cache.
+
+-   **[Guidelines on Using Compression](#concept_a2c_rhc_gl)**
+
+    This topic describes factors to consider when deciding on whether to use compression.
+
+-   **[How to Enable Compression in a Region](#topic_inm_whc_gl)**
+
+    This topic describes how to enable compression on your region.
+
+-   **[Working with Compressors](#topic_hqf_syj_g4)**
+
+    When using region compression, you can use the default Snappy compressor included with Geode or you can specify your own compressor.
+
+-   **[Comparing Performance of Compressed and Non-Compressed Regions](#topic_omw_j3c_gl)**
+
+    The comparative performance of compressed regions versus non-compressed regions can vary depending on how the region is being used and whether the region is hosted in a memory-bound JVM.
+
+## <a id="concept_a2c_rhc_gl" class="no-quick-link"></a>Guidelines on Using Compression
+
+This topic describes factors to consider when deciding on whether to use compression.
+
+Review the following guidelines when deciding on whether or not to enable compression in your region:
+
+-   **Use compression when JVM memory usage is too high.** Compression allows you to store more region data in memory and to reduce the number of the expensive garbage collection cycles that keep JVMs from running out of memory when memory usage is high.
+
+    To determine if JVM memory usage is high, examine the following statistics:
+
+    -   vmStats->freeMemory
+    -   vmStats->maxMemory
+    -   ConcurrentMarkSweep->collectionTime
+
+    If the amount of free memory regularly drops below 20% - 25% or the duration of the garbage collection cycles is generally on the high side, then the regions hosted on that JVM are good candidates for having compression enabled.
+
+-   **Consider the types and lengths of the fields in the region's entries.** Since compression is performed on each entry separately (and not on the region as a whole), consider the potential for duplicate data across a single entry. Duplicate bytes are compressed more easily. Also, since region entries are first serialized into a byte area before being compressed, how well the data might compress is determined by the number and length of duplicate bytes across the entire entry and not just a single field. Finally, the larger the entry the more likely compression will achieve good results as the potential for duplicate bytes, and a series of duplicate bytes, increases.
+-   **Consider the type of data you wish to compress.** The type of data stored has a significant impact on how well the data may compress. String data will generally compress better than numeric data simply because string bytes are far more likely to repeat; however, that may not always be the case. For example, a region entry that holds a couple of short, unique strings may not provide as much memory savings when compressed as another region entry that holds a large number of integer values. In short, when evaluating the potential gains of compressing a region, consider the likelihood of having duplicate bytes, and more importantly the length of a series of duplicate bytes, for a single, serialized region entry. In addition, data that has already been compressed, such as JPEG format files, can actually cause more memory to be used.
+-   **Compress if you are storing large text values.** Compression is beneficial if you are storing large text values (such as JSON or XML) or blobs in Geode that would benefit from compression.
+-   **Consider whether fields being queried against are indexed.** You can query against compressed regions; however, if the fields you are querying against have not been indexed, then the fields must be decompressed before they can be used for comparison. In short, you may incur some query performance costs when querying against non-indexed fields.
+-   **Objects stored in the compression region must be serializable.** Compression only operates on byte arrays, therefore objects being stored in a compressed region must be serializable and deserializable. The objects can either implement the Serializable interface or use one of the other Geode serialization mechanisms (such as PdxSerializable). Implementers should always be aware that when compression is enabled the instance of an object put into a region will not be the same instance when taken out. Therefore, transient attributes will lose their value when the containing object is put into and then taken out of a region.
+
+-   **Compressed regions will enable cloning by default.** Setting a compressor and then disabling cloning results in an exception. The options are incompatible because the process of compressing/serializing and then decompressing/deserializing will result in a different instance of the object being created and that may be interpreted as cloning the object.
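The compressibility guidelines above can be checked empirically. The sketch below uses the JDK's `Deflater` as a stand-in codec (Geode's default codec is Snappy; the class and sample values here are illustrative only) to compare a repetitive serialized entry against incompressible random bytes:

```java
import java.util.Random;
import java.util.zip.Deflater;

public class CompressibilityDemo {
  // Returns the number of bytes Deflater produces for the given input.
  static int compressedSize(byte[] input) {
    Deflater deflater = new Deflater();
    deflater.setInput(input);
    deflater.finish();
    byte[] buffer = new byte[input.length * 2 + 64];
    int total = 0;
    while (!deflater.finished()) {
      total += deflater.deflate(buffer);
    }
    deflater.end();
    return total;
  }

  public static void main(String[] args) {
    // Repetitive, string-heavy data: long runs of duplicate bytes.
    byte[] text = "name=John Doe;city=Portland;".repeat(50).getBytes();
    // Random data: effectively incompressible, may even grow slightly.
    byte[] random = new byte[text.length];
    new Random(42).nextBytes(random);

    System.out.println("text:   " + text.length + " -> " + compressedSize(text));
    System.out.println("random: " + random.length + " -> " + compressedSize(random));
  }
}
```

The repetitive entry shrinks dramatically while the random one does not, which is the behavior the guidelines predict for string-heavy versus already-compressed or numeric data.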
+
+## <a id="topic_inm_whc_gl" class="no-quick-link"></a>How to Enable Compression in a Region
+
+This topic describes how to enable compression on your region.
+
+To enable compression on your region, set the following region attribute in your cache.xml:
+
+``` pre
+<?xml version="1.0" encoding= "UTF-8"?>
+<cache xmlns="http://geode.incubator.apache.org/schema/cache"
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+    xsi:schemaLocation="http://geode.incubator.apache.org/schema/cache http://geode.incubator.apache.org/schema/cache/cache-1.0.xsd"
+    version="1.0" lock-lease="120" lock-timeout="60" search-timeout="300" is-server="true" copy-on-read="false">
+   <region name="compressedRegion" > 
+      <region-attributes data-policy="replicate" ... >
+         <Compressor>
+             <class-name>org.apache.geode.compression.SnappyCompressor</class-name>
+         </Compressor>
+        ...
+      </region-attributes>
+   </region> 
+</cache>
+```
+
+In the Compressor element, specify the class-name for your compressor implementation. This example specifies the Snappy compressor, which is bundled with Geode. You can also specify a custom compressor. See [Working with Compressors](#topic_hqf_syj_g4) for an example.
+
+Compression can be enabled during region creation using gfsh or programmatically as well.
+
+Using gfsh:
+
+``` pre
+gfsh>create region --name="CompressedRegion" --compressor="org.apache.geode.compression.SnappyCompressor"
+```
+
+API:
+
+``` pre
+regionFactory.setCompressor(new SnappyCompressor());
+```
+
+or
+
+``` pre
+regionFactory.setCompressor(SnappyCompressor.getDefaultInstance());
+```
+
+## How to Check Whether Compression is Enabled
+
+You can also check whether a region has compression enabled by querying which codec is being used. A null codec indicates that no compression is enabled for the region.
+
+``` pre
+Region myRegion = cache.getRegion("myRegion");
+Compressor compressor = myRegion.getAttributes().getCompressor();
+```
+
+## <a id="topic_hqf_syj_g4" class="no-quick-link"></a>Working with Compressors
+
+When using region compression, you can use the default Snappy compressor included with Geode or you can specify your own compressor.
+
+The compression API consists of a single interface that compression providers must implement. The default compressor (SnappyCompressor) is the single compression implementation that comes bundled with the product. Note that since the Compressor is stateless, there only needs to be a single instance in any JVM; however, multiple instances may be used without issue. The single, default instance of the SnappyCompressor may be retrieved with the `SnappyCompressor.getDefaultInstance()` static method.
+
+**Note:**
+The Snappy codec included with Geode cannot be used with Solaris deployments. Snappy is only supported on Linux, Windows, and OSX deployments of Geode.
+
+This example provides a custom Compressor implementation:
+
+``` pre
+package com.mybiz.myproduct.compression;
+
+import org.apache.geode.compression.Compressor;
+
+public class LZWCompressor implements Compressor {
+  private final LZWCodec lzwCodec = new LZWCodec(); 
+  
+  @Override
+  public byte[] compress(byte[] input) {
+         return lzwCodec.compress(input);
+  }
+
+  @Override
+  public byte[] decompress(byte[] input) {
+         return lzwCodec.decompress(input);
+  }
+}
+```
+
+To use the new custom compressor on a region:
+
+1.  Make sure that the new compressor package is available in the classpath of all JVMs that will host the region.
+2.  Configure the custom compressor for the region using any of the following mechanisms:
+
+    Using gfsh:
+
+    ``` pre
+    gfsh>create region --name="CompressedRegion" \
+    --compressor="com.mybiz.myproduct.compression.LZWCompressor"
+    ```
+
+    Using the API:
+
+    ``` pre
+    regionFactory.setCompressor(new LZWCompressor());
+    ```
+
+    cache.xml:
+
+    ``` pre
+    <region-attributes>
+     <Compressor>
+         <class-name>com.mybiz.myproduct.compression.LZWCompressor</class-name>
+      </Compressor>
+    </region-attributes>
+    ```
+
+## Changing the Compressor for an Already Compressed Region
+
+You typically enable compression on a region at the time of region creation. You cannot modify the Compressor or disable compression for the region while the region is online.
+
+However, if you need to change the compressor or disable compression, you can do so by performing the following steps:
+
+1.  Shut down the members hosting the region you wish to modify.
+2.  Modify the cache.xml file for the member, either specifying a new compressor or removing the compressor attribute from the region.
+3.  Restart the member.
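For example, step 2 with a hypothetical replacement codec would change the region declaration to:

``` pre
<region-attributes>
   <Compressor>
      <class-name>com.example.compression.MyNewCompressor</class-name>
   </Compressor>
</region-attributes>
```

To disable compression instead, remove the `Compressor` element entirely before restarting.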
+
+## <a id="topic_omw_j3c_gl" class="no-quick-link"></a>Comparing Performance of Compressed and Non-Compressed Regions
+
+The comparative performance of compressed regions versus non-compressed regions can vary depending on how the region is being used and whether the region is hosted in a memory-bound JVM.
+
+When considering the cost of enabling compression, you should consider the relative cost of reading and writing compressed data as well as the cost of compression as a percentage of the total time spent managing entries in a region. As a general rule, enabling compression on a region will add 30% - 60% more overhead for region create and update operations than for region get operations. Because of this, enabling compression will create more overhead on regions that are write heavy than on regions that are read heavy.
+
+However, when attempting to evaluate the performance cost of enabling compression you should also consider the cost of compression relative to the overall cost of managing entries in a region. A region may be tuned in such a way that it is highly optimized for read and/or write performance. For example, a replicated region that does not save to disk will have much better read and write performance than a partitioned region that does save to disk. Enabling compression on a region that has been optimized for read and write performance will provide more noticeable results than using compression on regions that have not been optimized this way. More concretely, performance may degrade by several hundred percent on a read/write optimized region whereas it may only degrade by 5 to 10 percent on a non-optimized region.
+
+A final note on performance relates to the cost of enabling compression on regions in a memory-bound JVM. Enabling compression generally assumes that the enclosing JVM is memory bound and therefore spends a lot of time on garbage collection. In that case, performance may improve by as much as several hundred percent, as the JVM will run far fewer garbage collection cycles and spend less time within each cycle.
+
+## Monitoring Compression Performance
+
+The following statistics provide monitoring for cache compression:
+
+-   `compressTime`
+-   `decompressTime`
+-   `compressions`
+-   `decompressions`
+-   `preCompressedBytes`
+-   `postCompressedBytes`
+
+See [Cache Performance (CachePerfStats)](../../reference/statistics/statistics_list.html#section_DEF8D3644D3246AB8F06FE09A37DC5C8) for statistic descriptions.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/security/authentication_examples.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/security/authentication_examples.html.md.erb b/geode-docs/managing/security/authentication_examples.html.md.erb
new file mode 100644
index 0000000..6e0d050
--- /dev/null
+++ b/geode-docs/managing/security/authentication_examples.html.md.erb
@@ -0,0 +1,53 @@
+---
+title:  Authentication Example
+---
+
+This example demonstrates the basics of an implementation of the
+`SecurityManager.authenticate` method.
+The remainder of the example may be found within the Apache Geode
+source code within the
+`geode-core/src/main/java/org/apache/geode/security/templates` directory.
+
+Of course, the security implementation of every installation is unique,
+so this example cannot be used in a production environment.
+Its use of the user name as a returned principal upon successful
+authentication is a particularly poor design choice,
+as any attacker that discovers the implementation can potentially
+spoof the system.
+
+This example assumes that a set of user name and password pairs
+representing users that may be successfully authenticated 
+has been read into a data structure upon initialization.
+Any component that presents the correct password for a user name
+successfully authenticates,
+and its identity is verified as that user.
+Therefore, the implementation of the `authenticate` method
+checks that the user name provided within the `credentials` parameter
+is in its data structure.
+If the user name is present,
+then the password provided within the `credentials` parameter 
+is compared to the data structure's known password for that user name.
+Upon a match, the authentication is successful.
+
+``` pre
+public Object authenticate(final Properties credentials)
+         throws AuthenticationFailedException {
+    String user = credentials.getProperty(ResourceConstants.USER_NAME);
+    String password = credentials.getProperty(ResourceConstants.PASSWORD);
+
+    User userObj = this.userNameToUser.get(user);
+    if (userObj == null) {
+        throw new AuthenticationFailedException(
+                      "SampleSecurityManager: wrong username/password");
+    }
+
+    if (user != null 
+        && !userObj.password.equals(password) 
+        && !"".equals(user)) {
+        throw new AuthenticationFailedException(
+                      "SampleSecurityManager: wrong username/password");
+    }
+
+    return user;
+}
+```
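The credentials that reach `authenticate` originate as properties supplied by the connecting component. This self-contained sketch re-creates the lookup with stand-in types so it can be run outside Geode; the property names `security-username` and `security-password` (the values conventionally behind `ResourceConstants.USER_NAME` and `ResourceConstants.PASSWORD`), the class names, and the sample user are all assumptions for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class AuthSketch {
  // Stand-ins for the Geode template's types.
  static class AuthenticationFailedException extends RuntimeException {
    AuthenticationFailedException(String msg) { super(msg); }
  }
  static class User {
    final String name; final String password;
    User(String name, String password) { this.name = name; this.password = password; }
  }

  private final Map<String, User> userNameToUser = new HashMap<>();

  public AuthSketch() {
    // In the real template, this map is loaded during initialization.
    userNameToUser.put("alice", new User("alice", "alice-password"));
  }

  public Object authenticate(Properties credentials) {
    String user = credentials.getProperty("security-username");
    String password = credentials.getProperty("security-password");

    User userObj = userNameToUser.get(user);
    if (userObj == null || !userObj.password.equals(password)) {
      throw new AuthenticationFailedException("wrong username/password");
    }
    return user;  // the principal -- a weak choice, as noted above
  }
}
```

A caller would build the `Properties` with those two keys and pass them in; a wrong password raises the exception rather than returning a principal.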

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/security/authentication_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/security/authentication_overview.html.md.erb b/geode-docs/managing/security/authentication_overview.html.md.erb
new file mode 100644
index 0000000..fe38e10
--- /dev/null
+++ b/geode-docs/managing/security/authentication_overview.html.md.erb
@@ -0,0 +1,26 @@
+---
+title:  Authentication
+---
+
+Authentication verifies the identities of components within the distributed
+system such as peers, clients, and those connecting to a JMX manager.
+
+-   **[Implementing Authentication](../../managing/security/implementing_authentication.html)**
+
+    All components of the distributed system authenticate the same way,
+    through a custom-written method.
+
+-   **[Encrypting Passwords for Use in cache.xml](../../managing/security/encrypting_passwords.html)**
+
+    Apache Geode provides a gfsh utility to generate encrypted passwords.
+
+-   **[Encrypt Credentials with Diffie-Hellman](../../managing/security/encrypting_with_diffie_helman.html)**
+
+    For secure transmission of sensitive information, like passwords, you can encrypt credentials using the Diffie-Hellman key exchange algorithm.
+
+-   **[Authentication Example](../../managing/security/authentication_examples.html)**
+
+    The example demonstrates the basics of an implementation of the
+    `SecurityManager.authenticate` method.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/security/authorization_example.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/security/authorization_example.html.md.erb b/geode-docs/managing/security/authorization_example.html.md.erb
new file mode 100644
index 0000000..9d03f65
--- /dev/null
+++ b/geode-docs/managing/security/authorization_example.html.md.erb
@@ -0,0 +1,53 @@
+---
+title:  Authorization Example
+---
+
+This example demonstrates the basics of an implementation of the
+`SecurityManager.authorize` method.
+The remainder of the example may be found within the Apache Geode
+source code within the
+`geode-core/src/main/java/org/apache/geode/security/templates` directory.
+
+Of course, the security implementation of every installation is unique,
+so this example cannot be used in a production environment,
+as the roles and permissions will not match the needs of any
+real distributed system. 
+
+This example assumes that a set of users, a set of roles
+that a user might take on within the system,
+and a mapping of users to their roles are described
+in a JSON format file.
+The roles define a set of authorized resource permissions granted
+for users in those roles.
+Code not shown here parses the file to compose a data structure
+with the information on roles and users.
+The `authorize` callback denies permission for any operation
+that does not have a principal representing the identity of the
+operation's requester.
+Given the principal, 
+the method iterates through the data structure searching for the 
+necessary permissions for the principal.
+When the necessary permission is found, 
+authorization is granted by returning the value `true`.
+If the permission is not found in the data structure,
+then the method returns `false`, denying authorization of the operation.
+
+``` pre
+public boolean authorize(final Object principal, final ResourcePermission context) {
+    if (principal == null) return false;
+
+    User user = this.userNameToUser.get(principal.toString());
+    if (user == null) return false; // this user is not authorized to do anything
+
+    // check if the user has this permission defined in the context
+    for (Role role : this.userNameToUser.get(user.name).roles) {
+        for (Permission permitted : role.permissions) {
+            if (permitted.implies(context)) {
+                return true;
+            }
+        }
+    }
+
+    return false;
+}
+```
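The role and permission structures that the method iterates can be sketched with minimal stand-ins. Everything here is hypothetical scaffolding (the real template builds richer types from its JSON file); the `implies` rule shown is a simple resource/operation match with a wildcard, not Geode's actual `ResourcePermission` semantics:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AuthzSketch {
  static class Permission {
    final String resource; final String operation;   // e.g. "DATA", "READ"
    Permission(String resource, String operation) {
      this.resource = resource; this.operation = operation;
    }
    // A granted permission implies a requested one when each field
    // matches or the granted field is the wildcard "*".
    boolean implies(Permission context) {
      return (resource.equals("*") || resource.equals(context.resource))
          && (operation.equals("*") || operation.equals(context.operation));
    }
  }
  static class Role { final List<Permission> permissions = new ArrayList<>(); }
  static class User { final List<Role> roles = new ArrayList<>(); }

  final Map<String, User> userNameToUser = new HashMap<>();

  public boolean authorize(Object principal, Permission context) {
    if (principal == null) return false;
    User user = userNameToUser.get(principal.toString());
    if (user == null) return false;  // unknown principals get nothing
    for (Role role : user.roles) {
      for (Permission permitted : role.permissions) {
        if (permitted.implies(context)) return true;
      }
    }
    return false;
  }
}
```

Granting a reader role only `DATA`/`READ` means a `DATA`/`WRITE` request falls through both loops and is denied.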

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/security/authorization_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/security/authorization_overview.html.md.erb b/geode-docs/managing/security/authorization_overview.html.md.erb
new file mode 100644
index 0000000..6ca014a
--- /dev/null
+++ b/geode-docs/managing/security/authorization_overview.html.md.erb
@@ -0,0 +1,17 @@
+---
+title:  Authorization
+---
+
+Distributed system and cache operations can be restricted, intercepted and
+modified, or completely blocked based on configured access rights set for
+the various distributed system entities.
+
+-   **[Implementing Authorization](../../managing/security/implementing_authorization.html)**
+
+    To use authorization for client/server systems, your client connections must be authenticated by their servers.
+
+-   **[Authorization Example](../../managing/security/authorization_example.html)**
+
+    This example demonstrates the basics of an implementation of the `SecurityManager.authorize` method.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/security/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/security/chapter_overview.html.md.erb b/geode-docs/managing/security/chapter_overview.html.md.erb
new file mode 100644
index 0000000..88df5d8
--- /dev/null
+++ b/geode-docs/managing/security/chapter_overview.html.md.erb
@@ -0,0 +1,30 @@
+---
+title:  Security
+---
+
+The security framework permits authentication of connecting components and authorization of operations for all communicating components of the distributed system.
+
+-   **[Security Implementation Introduction and Overview](../../managing/security/implementing_security.html)**
+
+    Encryption, SSL secure communication, authentication, and authorization help to secure the distributed system.
+
+-   **[Security Detail Considerations](../../managing/security/security_audit_overview.html)**
+
+    This section gathers discrete details in one convenient location to better help you assess and configure the security of your environment.
+
+-   **[Enable Security with Property Definitions](../../managing/security/enable_security.html)**
+
+-   **[Authentication](../../managing/security/authentication_overview.html)**
+
+    A distributed system using authentication bars malicious peers or clients, and deters inadvertent access to its cache.
+
+-   **[Authorization](../../managing/security/authorization_overview.html)**
+
+    Client operations on a cache server can be restricted or completely blocked based on the roles and permissions assigned to the credentials submitted by the client.
+
+-   **[Post Processing of Region Data](../../managing/security/post_processing.html)**
+
+-   **[SSL](../../managing/security/ssl_overview.html)**
+
+    SSL protects your data in transit between applications.
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/security/enable_security.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/security/enable_security.html.md.erb b/geode-docs/managing/security/enable_security.html.md.erb
new file mode 100644
index 0000000..c6b4e33
--- /dev/null
+++ b/geode-docs/managing/security/enable_security.html.md.erb
@@ -0,0 +1,56 @@
+---
+title:  Enable Security with Property Definitions
+---
+
+
+## security-manager Property
+
+The authentication callback and the authorization callback that implement 
+the `SecurityManager` interface
+are specified with the `security-manager` property.
+When this property is defined, authentication and authorization are enabled.
+The definition of the `security-manager` property is the
+fully qualified name of the class that implements the `SecurityManager` interface.
+For example:
+
+``` pre
+security-manager = com.example.security.MySecurityManager
+```
+
+All components of the system invoke the same callbacks.
+Here are descriptions of the components and the connections that they
+make with the system.
+
+- A client connects with a server and makes operation requests 
+of that server.  The callbacks invoked are those defined by the
+`SecurityManager` interface for that server.
+- A server connects with a locator, invoking the `authenticate` callback
+defined for that locator.
+- Components communicating with a locator's JMX manager connect and make
+operation requests of the locator.
+The callbacks invoked are those defined by the
+`SecurityManager` interface for that locator.
+Both `gfsh` and `Pulse` use this form of communication.
+- Applications communicating via the REST API make requests of a server;
+these invoke security callbacks upon connection and operation requests.
+- Requests that a gateway sender makes of a locator
+invoke security callbacks defined for that locator.
+
+## security-post-processor Property
+
+The `PostProcessor` interface allows the definition of a set of callbacks
+that are invoked after operations that get data,
+but before the data is returned.
+This permits the callback to intervene and modify the data
+that is to be returned.
+The callbacks do not modify the region data,
+only the data to be returned.
+
+Enable the post processing of data by defining the
+`security-post-processor` property
+with the fully qualified name of the class that implements the interface.
+For example,
+
+``` pre
+security-post-processor = com.example.security.MySecurityPostProcessing
+```
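A post-processing callback might look like the following sketch, which masks digits in string values before they are returned. The interface is stubbed locally so the example is self-contained and runnable; in a real deployment you would implement `org.apache.geode.security.PostProcessor` instead, and the exact method signature should be checked against your Geode version:

```java
// Local stand-in for org.apache.geode.security.PostProcessor.
interface PostProcessor {
  Object processRegionValue(Object principal, String regionName, Object key, Object value);
}

public class RedactingPostProcessor implements PostProcessor {
  @Override
  public Object processRegionValue(Object principal, String regionName,
                                   Object key, Object value) {
    // The region's stored data is untouched; only the returned copy changes.
    if (value instanceof String) {
      return ((String) value).replaceAll("\\d", "*");
    }
    return value;
  }
}
```

The principal and region name are available so the callback can vary the masking per requester or per region if needed.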

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/security/encrypting_passwords.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/security/encrypting_passwords.html.md.erb b/geode-docs/managing/security/encrypting_passwords.html.md.erb
new file mode 100644
index 0000000..8104c29
--- /dev/null
+++ b/geode-docs/managing/security/encrypting_passwords.html.md.erb
@@ -0,0 +1,32 @@
+---
+title: Encrypting Passwords for Use in cache.xml
+---
+<a id="topic_730CC61BA84F421494956E2B98BDE2A1"></a>
+
+
+Apache Geode provides a gfsh utility to generate encrypted passwords.
+
+You may need to specify an encrypted password in `cache.xml` when configuring JNDI connections to external JDBC data sources. See [Configuring Database Connections Using JNDI](../../developing/transactions/configuring_db_connections_using_JNDI.html#topic_A5E3A67C808D48C08E1F0DC167C5C494) for configuration examples.
+
+The `cache.xml` file accepts passwords in clear text or encrypted text.
+
+To generate an encrypted password, use the [encrypt password](../../tools_modules/gfsh/command-pages/encrypt.html#concept_2B834B0AC8EE44C6A7F85CC66B1D6E18__section_F3D0959AF6264A3CB1821383B2AE4407) command in `gfsh`. The following example shows a sample command invocation and output (assuming `my_password` is the actual password for the data source). After you [start gfsh](../../tools_modules/gfsh/starting_gfsh.html#concept_DB959734350B488BBFF91A120890FE61), enter the following command:
+
+``` pre
+gfsh>encrypt password --password=my_password
+AB80B8E1EE8BB5701D0366E2BA3C3754
+```
+
+Copy the output from the `gfsh` command to the `cache.xml` file as the value of the password attribute of the `jndi-binding` tag embedded in `encrypted()`, just like a method parameter. Enter it as encrypted, in this format:
+
+``` pre
+password="encrypted(83f0069202c571faf1ae6c42b4ad46030e4e31c17409e19a)"
+```
+
+To use a non-encrypted (clear text) password, put the actual password as the value of the password attribute of the `jndi-binding` tag, like this:
+
+``` pre
+password="password"
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/security/encrypting_with_diffie_helman.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/security/encrypting_with_diffie_helman.html.md.erb b/geode-docs/managing/security/encrypting_with_diffie_helman.html.md.erb
new file mode 100644
index 0000000..bc44fd8
--- /dev/null
+++ b/geode-docs/managing/security/encrypting_with_diffie_helman.html.md.erb
@@ -0,0 +1,49 @@
+---
+title:  Encrypt Credentials with Diffie-Hellman
+---
+
+For secure transmission of sensitive information, like passwords, you can encrypt credentials using the Diffie-Hellman key exchange algorithm.
+
+This encryption applies only to client/server authentication - not peer-to-peer authentication.
+
+You need to specify the name of a valid symmetric key cipher supported by the JDK. Valid key names, like DES, DESede, AES, and Blowfish, enable the Diffie-Hellman algorithm with the specified cipher to encrypt the credentials. For valid JDK names, see [http://download.oracle.com/javase/1.5.0/docs/guide/security/CryptoSpec.html#AppA](http://download.oracle.com/javase/1.5.0/docs/guide/security/CryptoSpec.html#AppA).
+
+Before you begin, you need to understand how to use your security algorithm.
+
+## <a id="using_diffie_helman__section_45A9502BDF8E42E1970CEFB132F7424D" class="no-quick-link"></a>Enable Server Authentication of Client with Diffie-Hellman
+
+Set this property in the client's `gemfire.properties` file (or `gfsecurity.properties` file if you are creating a special restricted access file for security configuration):
+
+-   `security-client-dhalgo`. Name of a valid symmetric key cipher supported by the JDK, possibly followed by a key size specification.
+
+This causes the server to authenticate the client using the Diffie-Hellman algorithm.
+
+## <a id="using_diffie_helman__section_D07F68BE8D3140E99244895F4AF2CC80" class="no-quick-link"></a>Enable Client Authentication of Server
+
+Client authentication of the server requires that server authentication of the client with Diffie-Hellman be enabled. To have your client authenticate its servers, in addition to being authenticated itself:
+
+1.  In server `gemfire.properties` (or `gfsecurity.properties` file if you are creating a special restricted access file for security configuration), set:
+    1.  `security-server-kspath`. Path of the PKCS\#12 keystore containing the private key for the server.
+    2.  `security-server-ksalias`. Alias name for the private key in the keystore.
+    3.  `security-server-kspasswd`. Keystore and private key password, which should match.
+
+2.  In client `gemfire.properties` (or `gfsecurity.properties` file if you are creating a special restricted access file for security configuration), set:
+    1.  `security-client-kspasswd`. Password for the public key file store on the client.
+    2.  `security-client-kspath`. Path to the client public key truststore, the JKS keystore of public keys for all servers the client can use. This keystore should not be password-protected.
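Putting the two sides together, the property files might look like the following sketch. The paths, alias, and passwords are illustrative assumptions, not values the product requires:

``` pre
# server gfsecurity.properties
security-client-dhalgo=AES:128
security-server-kspath=/path/to/server-keystore.p12
security-server-ksalias=serverkey
security-server-kspasswd=keystorepass

# client gfsecurity.properties
security-client-dhalgo=AES:128
security-client-kspath=/path/to/client-truststore.jks
security-client-kspasswd=truststorepass
```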
+
+## <a id="using_diffie_helman__section_5FB4437072AC4B4E93210BEA60B67A27" class="no-quick-link"></a>Set the Key Size for AES and Blowfish Encryption Keys
+
+For algorithms like AES, especially if large key sizes are used, you may need Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files from Sun or equivalent for your JDK. This enables encryption of client credentials in combination with challenge-response from server to client to prevent replay and other types of attacks. It also enables challenge-response from client to server to avoid server-side replay attacks.
+
+For the AES and Blowfish algorithms, you can specify the key size for the `security-client-dhalgo` property by adding a colon and the size after the algorithm specification, like this:
+
+``` pre
+security-client-dhalgo=AES:192
+```
+
+-   For AES, valid key size settings are:
+    -   AES:128
+    -   AES:192
+    -   AES:256
+-   For Blowfish, set the key size between 128 and 448 bits, inclusive.
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/security/implementing_authentication.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/security/implementing_authentication.html.md.erb b/geode-docs/managing/security/implementing_authentication.html.md.erb
new file mode 100644
index 0000000..a605e1b
--- /dev/null
+++ b/geode-docs/managing/security/implementing_authentication.html.md.erb
@@ -0,0 +1,125 @@
+---
+title:  Implementing Authentication
+---
+
+Authentication lends a measure of security to a distributed system
+by verifying the identity of components as they connect to the system.
+All components use the same authentication mechanism.
+
+## How Authentication Works
+
+When a component initiates a connection to the distributed system,
+the `SecurityManager.authenticate` method is invoked.
+The component provides its credentials in the form of properties
+as a parameter to the `authenticate` method.
+The credentials are presumed to be the two properties
+`security-username` and `security-password`.
+The `authenticate` method is expected to either return an object
+representing a principal or throw an `AuthenticationFailedException`.
+
+A well-designed `authenticate` method will have, or will have a way of
+obtaining, a set of known user name and password pairs that can be compared
+with the credentials presented.
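The comparison logic can be sketched in plain Java. This is a minimal, self-contained illustration: a real implementation would implement `org.apache.geode.security.SecurityManager` and throw Geode's `AuthenticationFailedException`; the stand-in exception class and the hard-coded user map below are assumptions made so the sketch stands alone.

``` pre
class SimpleAuthSketch {
    // Stand-in for org.apache.geode.security.AuthenticationFailedException.
    static class AuthenticationFailedException extends RuntimeException {
        AuthenticationFailedException(String message) { super(message); }
    }

    // Known user name and password pairs; a real system might load
    // these from a database or another external resource.
    private final java.util.Map<String, String> knownUsers = new java.util.HashMap<>();

    SimpleAuthSketch() {
        knownUsers.put("admin", "xyz1234");
    }

    // Mirrors the shape of SecurityManager.authenticate: returns an object
    // representing the principal on success, throws otherwise.
    Object authenticate(java.util.Properties credentials) {
        String user = credentials.getProperty("security-username");
        String password = credentials.getProperty("security-password");
        if (user != null && password != null
                && password.equals(knownUsers.get(user))) {
            return user;
        }
        throw new AuthenticationFailedException("Authentication failed for " + user);
    }
}
```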
+
+## How a Server Sets Its Credential
+
+In order to connect with a locator that does authentication,
+a server will need to set its credential, composed of the two properties
+`security-username` and `security-password`.
+There are two ways of accomplishing this:
+
+- Set the `security-username` and `security-password` in the server's
+`gfsecurity.properties` file that will be read upon server start up,
+as in the example
+
+     ``` pre
+     security-username=admin
+     security-password=xyz1234
+     ```
+The user name and password are stored in the clear, so the
+`gfsecurity.properties` file must be protected by restricting access with
+file system permissions.
+
+- Implement the `getCredentials` method of the `AuthInitialize` interface
+for the server.
+This callback's location is defined in the property `security-peer-auth-init`,
+as in the example
+
+     ``` pre
+     security-peer-auth-init=com.example.security.MyAuthInitialize
+     ```
+The implementation of `getCredentials` may then acquire values for
+the properties `security-username` and `security-password` in whatever way
+it wishes.
+It might look up values in a database or another external resource.
+
+Gateway senders and receivers communicate as a component of their
+server member.
+Therefore, the credentials of the server become those of the gateway
+sender or receiver.
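The shape of the `getCredentials` callback described above can be sketched as follows. This is a simplified, self-contained illustration: the real interface is `org.apache.geode.security.AuthInitialize`, whose `getCredentials` method takes additional parameters, and the hard-coded values below are the sample user name and password from the example; a real implementation might look them up in a database or another external resource.

``` pre
class MyAuthInitializeSketch {
    // Given the member's security properties, return the credential
    // properties the member presents when it connects.
    java.util.Properties getCredentials(java.util.Properties securityProps) {
        java.util.Properties credentials = new java.util.Properties();
        credentials.setProperty("security-username", "admin");
        credentials.setProperty("security-password", "xyz1234");
        return credentials;
    }
}
```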
+
+## How a Cache Client Sets Its Credential
+
+In order to connect with a locator or a server that does authentication,
+a client will need to set its credential, composed of the two properties
+`security-username` and `security-password`.
+There are two ways of accomplishing this:
+
+- Set the `security-username` and `security-password` in the client's
+`gfsecurity.properties` file that will be read upon client start up,
+as in the example
+
+     ``` pre
+     security-username=clientapp
+     security-password=xyz1234
+     ```
+The user name and password are stored in the clear, so the
+`gfsecurity.properties` file must be protected by restricting access with
+file system permissions.
+
+- Implement the `getCredentials` method of the `AuthInitialize` interface
+for the client.
+This callback's location is defined in the property `security-client-auth-init`,
+as in the example
+
+     ``` pre
+     security-client-auth-init=com.example.security.ClientAuthInitialize
+     ```
+The implementation of `getCredentials` may then acquire values for
+the properties `security-username` and `security-password` in whatever way
+it wishes.
+It might look up values in a database or another external resource,
+or it might prompt for values.
+
+## How Other Components Set Their Credentials
+
+`gfsh` prompts for the user name and password upon invocation of
+a `gfsh connect` command.
+
+Pulse prompts for the user name and password upon start up.
+
+Due to the stateless nature of the REST API,
+a web application or other component that speaks to a server or locator
+via the REST API goes through authentication on each request.
+The header of the request needs to include attributes that define values for
+`security-username` and `security-password`.
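As an illustration, a request might supply the credentials as headers named after the two properties. The host, port, and endpoint below are illustrative assumptions, not values from this documentation:

``` pre
curl -H "security-username: admin" -H "security-password: xyz1234" \
  http://localhost:7070/geode/v1/servers
```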
+
+## Implement SecurityManager Interface
+
+Complete these items to implement the authentication performed by either a
+locator or a server.
+
+- Decide upon an authentication algorithm.
+The [Authentication Example](authentication_examples.html)
+stores a set of user name and
+password pairs that represent the identities of components
+that will connect to the system.
+This simplistic algorithm returns the user name as a principal
+if the user name and password passed to the `authenticate` method
+are a match for one of the stored pairs.
+- Define the `security-manager` property.
+See [Enable Security with Property Definitions](enable_security.html)
+for details about this property.
+- Implement the `authenticate` method of the `SecurityManager` interface.
+- Define any extra resources that the implemented authentication algorithm
+needs in order to make a decision.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/security/implementing_authorization.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/security/implementing_authorization.html.md.erb b/geode-docs/managing/security/implementing_authorization.html.md.erb
new file mode 100644
index 0000000..3fd4816
--- /dev/null
+++ b/geode-docs/managing/security/implementing_authorization.html.md.erb
@@ -0,0 +1,248 @@
+---
+title:  Implementing Authorization
+---
+
+## How Authorization Works
+
+When a component requests an operation,
+the `SecurityManager.authorize` method is invoked.
+It is passed the principal of the operation's requester
+and a `ResourcePermission`, which describes the operation requested.
+
+The implementation of the `SecurityManager.authorize` method
+makes a decision as to whether or not the principal will be granted permission
+to carry out the operation.
+It returns a boolean in which a return value of `true` permits
+the operation,
+and a return value of `false` prevents the operation.
+
+A well-designed `authorize` method will have, or will have a way of obtaining,
+a mapping of principals to the operations (in the form of resource permissions)
+that they are permitted to perform.
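Such a mapping can be sketched in plain Java. This is a minimal, self-contained illustration: a real implementation would implement the `authorize` method of `org.apache.geode.security.SecurityManager`, and the string encoding of permissions here ("DATA:READ") is a stand-in for Geode's `ResourcePermission` class; the "reader" principal and its grants are assumptions for the sketch.

``` pre
class SimpleAuthzSketch {
    // Maps each principal to the operations it may perform.
    private final java.util.Map<String, java.util.Set<String>> grants =
            new java.util.HashMap<>();

    SimpleAuthzSketch() {
        grants.put("reader", new java.util.HashSet<>(
                java.util.Arrays.asList("DATA:READ")));
    }

    // Mirrors the shape of SecurityManager.authorize: returning true
    // permits the operation, false prevents it.
    boolean authorize(Object principal, String permission) {
        java.util.Set<String> allowed = grants.get(String.valueOf(principal));
        return allowed != null && allowed.contains(permission);
    }
}
```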
+
+## Resource Permissions
+
+All operations are described by an instance of the `ResourcePermission` class.
+A permission contains the `Resource` data member,
+which classifies the operation as working on
+
+- cache data; value is `DATA`
+- the distributed system; value is `CLUSTER`
+
+A permission also contains the `Operation` data member,
+which classifies the operation as
+
+- reading; value is `READ`
+- changing information; value is `WRITE`
+- making administrative changes; value is `MANAGE`
+
+The operations are not hierarchical;
+`MANAGE` does not imply `WRITE`, and `WRITE` does not imply `READ`.
+
+Some operations further specify a region name in the permission.
+This permits restricting operations on that region to only those
+authorized principals.
+And within a region, some operations may specify a key.
+This permits restricting operations on that key within that region to 
+only those authorized principals.
+
+This table classifies the permissions assigned for operations common to
+a Client-Server interaction.
+
+| Client Operation                   | Assigned `ResourcePermission`
+|------------------------------------|-------------------------------------|
+| get function attribute             | CLUSTER:READ                        |
+| create region                      | DATA:MANAGE                         |
+| destroy region                     | DATA:MANAGE                         |
+| Region.Keyset                      | DATA:READ:RegionName                |
+| Region.query                       | DATA:READ:RegionName                |
+| Region.getAll                      | DATA:READ:RegionName                |
+| Region.getAll with a list of keys  | DATA:READ:RegionName:Key            |
+| Region.getEntry                    | DATA:READ:RegionName                |
+| Region.containsKeyOnServer(key)    | DATA:READ:RegionName:Key            |
+| Region.get(key)                    | DATA:READ:RegionName:Key            |
+| Region.registerInterest(key)       | DATA:READ:RegionName:Key            |
+| Region.registerInterest(regex)     | DATA:READ:RegionName                |
+| Region.unregisterInterest(key)     | DATA:READ:RegionName:Key            |
+| Region.unregisterInterest(regex)   | DATA:READ:RegionName                |
+| execute function                   | DATA:WRITE                          |
+| clear region                       | DATA:WRITE:RegionName               |
+| Region.putAll                      | DATA:WRITE:RegionName               |
+| Region.clear                       | DATA:WRITE:RegionName               |
+| Region.removeAll                   | DATA:WRITE:RegionName               |
+| Region.destroy(key)                | DATA:WRITE:RegionName:Key           |
+| Region.invalidate(key)             | DATA:WRITE:RegionName:Key           |
+| Region.put(key)                    | DATA:WRITE:RegionName:Key           |
+| Region.replace                     | DATA:WRITE:RegionName:Key           |
+
+
+This table classifies the permissions assigned for `gfsh` operations.
+
+| `gfsh` Command                         | Assigned `ResourcePermission`
+|----------------------------------------|----------------------------------|
+| alter disk-store                       | DATA:MANAGE                      |
+| alter region                           | DATA:MANAGE:RegionName           |
+| alter runtime                          | CLUSTER:MANAGE                   |
+| backup disk-store                      | DATA:READ                        |
+| change loglevel                        | CLUSTER:WRITE                    |
+| clear defined indexes                  | DATA:MANAGE                      |
+| close durable-client                   | DATA:MANAGE                      |
+| close durable-cq                       | DATA:MANAGE                      |
+| compact disk-store                     | DATA:MANAGE                      |
+| compact offline-disk-store             | DATA:MANAGE                      |
+| configure pdx                          | DATA:MANAGE                      |
+| create async-event-queue               | DATA:MANAGE                      |
+| create defined indexes                 | DATA:MANAGE                      |
+| create disk-store                      | DATA:MANAGE                      |
+| create gateway-receiver                | DATA:MANAGE                      |
+| create gateway-sender                  | DATA:MANAGE                      |
+| create index                           | DATA:MANAGE:RegionName           |
+| create region                          | DATA:MANAGE                      |
+| define index                           | DATA:MANAGE:RegionName           |
+| deploy                                 | DATA:MANAGE                      |
+| describe client                        | CLUSTER:READ                     |
+| describe config                        | CLUSTER:READ                     |
+| describe disk-store                    | CLUSTER:READ                     |
+| describe member                        | CLUSTER:READ                     |
+| describe offline-disk-store            | CLUSTER:READ                     |
+| describe region                        | CLUSTER:READ                     |
+| destroy disk-store                     | DATA:MANAGE                      |
+| destroy function                       | DATA:MANAGE                      |
+| destroy index                          | DATA:MANAGE or DATA:MANAGE:RegionName |
+| destroy region                         | DATA:MANAGE                      |
+| disconnect                             | DATA:MANAGE                      |
+| echo                                   | DATA:MANAGE                      |
+| encrypt password                       | DATA:MANAGE                      |
+| execute function                       | DATA:MANAGE                      |
+| export cluster-configuration           | CLUSTER:READ                     |
+| export config                          | CLUSTER:READ                     |
+| export data                            | CLUSTER:READ                     |
+| export logs                            | CLUSTER:READ                     |
+| export offline-disk-store              | CLUSTER:READ                     |
+| export stack-traces                    | CLUSTER:READ                     |
+| gc                                     | CLUSTER:MANAGE                   |
+| get --key=key1 --region=region1        | DATA:READ:RegionName:Key         |
+| import data                            | DATA:WRITE:RegionName            |
+| import cluster-configuration           | DATA:MANAGE                      |
+| list async-event-queues                | CLUSTER:READ                     |
+| list clients                           | CLUSTER:READ                     |
+| list deployed                          | CLUSTER:READ                     |
+| list disk-stores                       | CLUSTER:READ                     |
+| list durable-cqs                       | CLUSTER:READ                     |
+| list functions                         | CLUSTER:READ                     |
+| list gateways                          | CLUSTER:READ                     |
+| list indexes                           | CLUSTER:READ                     |
+| list members                           | CLUSTER:READ                     |
+| list regions                           | DATA:READ                        |
+| load-balance gateway-sender            | DATA:MANAGE                      |
+| locate entry                           | DATA:READ:RegionName:Key         |
+| netstat                                | CLUSTER:READ                     |
+| pause gateway-sender                   | DATA:MANAGE                      |
+| pdx rename                             | DATA:MANAGE                      |
+| put --key=key1 --region=region1        | DATA:WRITE:RegionName:Key        |
+| query                                  | DATA:READ:RegionName             |
+| rebalance                              | DATA:MANAGE                      |
+| remove                                 | DATA:WRITE:RegionName or DATA:WRITE:RegionName:Key |
+| resume gateway-sender                  | DATA:MANAGE                      |
+| revoke missing-disk-store              | DATA:MANAGE                      |
+| show dead-locks                        | CLUSTER:READ                     |
+| show log                               | CLUSTER:READ                     |
+| show metrics                           | CLUSTER:READ                     |
+| show missing-disk-stores               | CLUSTER:READ                     |
+| show subscription-queue-size           | CLUSTER:READ                     |
+| shutdown                               | CLUSTER:MANAGE                   |
+| start gateway-receiver                 | DATA:MANAGE                      |
+| start gateway-sender                   | DATA:MANAGE                      |
+| start server                           | CLUSTER:MANAGE                   |
+| status cluster-config-service          | CLUSTER:READ                     |
+| status gateway-receiver                | CLUSTER:READ                     |
+| status gateway-sender                  | CLUSTER:READ                     |
+| status locator                         | CLUSTER:READ                     |
+| status server                          | CLUSTER:READ                     |
+| stop gateway-receiver                  | DATA:MANAGE                      |
+| stop gateway-sender                    | DATA:MANAGE                      |
+| stop locator                           | CLUSTER:MANAGE                   |
+| stop server                            | CLUSTER:MANAGE                   |
+| undeploy                               | DATA:MANAGE                      |
+
+The `gfsh connect` command does not have an assigned permission,
+as it is the operation that invokes authentication.
+These `gfsh` commands do not have a permission defined,
+as they do not interact with the distributed system:
+
+-  `gfsh describe connection`, which describes the `gfsh` end of the connection
+-  `gfsh debug`, which toggles the mode within `gfsh`
+-  `gfsh exit`
+-  `gfsh help`
+-  `gfsh hint`
+-  `gfsh history`
+-  `gfsh run`, although individual commands within the script
+will go through authorization
+-  `gfsh set variable`
+-  `gfsh sh`
+-  `gfsh sleep`
+-  `gfsh validate offline-disk-store`
+-  `gfsh version`
+
+This table classifies the permissions assigned for JMX operations.
+
+| JMX Operation                                | Assigned `ResourcePermission`
+|----------------------------------------------|-----------------------------|
+| DistributedSystemMXBean.shutdownAllMembers     | CLUSTER:MANAGE            |
+| ManagerMXBean.start                            | CLUSTER:MANAGE            |
+| ManagerMXBean.stop                             | CLUSTER:MANAGE            |
+| ManagerMXBean.createManager                    | CLUSTER:MANAGE            |
+| ManagerMXBean.shutDownMember                   | CLUSTER:MANAGE            |
+| MBeans get attributes                          | CLUSTER:READ              |
+| MemberMXBean.showLog                           | CLUSTER:READ              |
+| DistributedSystemMXBean.changeAlertLevel       | CLUSTER:WRITE             |
+| ManagerMXBean.setPulseURL                      | CLUSTER:WRITE             |
+| ManagerMXBean.setStatusMessage                 | CLUSTER:WRITE             |
+| CacheServerMXBean.closeAllContinuousQuery      | DATA:MANAGE               |
+| CacheServerMXBean.closeContinuousQuery         | DATA:MANAGE               |
+| CacheServerMXBean.executeContinuousQuery       | DATA:READ                 |
+| DiskStoreMXBean.flush                          | DATA:MANAGE               |
+| DiskStoreMXBean.forceCompaction                | DATA:MANAGE               |
+| DiskStoreMXBean.forceRoll                      | DATA:MANAGE               |
+| DiskStoreMXBean.setDiskUsageCriticalPercentage | DATA:MANAGE               |
+| DiskStoreMXBean.setDiskUsageWarningPercentage  | DATA:MANAGE               |
+| DistributedSystemMXBean.revokeMissingDiskStores| DATA:MANAGE               |
+| DistributedSystemMXBean.setQueryCollectionsDepth| DATA:MANAGE              |
+| DistributedSystemMXBean.setQueryResultSetLimit | DATA:MANAGE               |
+| DistributedSystemMXBean.backupAllMembers       | DATA:READ                 |
+| DistributedSystemMXBean.queryData              | DATA:READ                 |
+| DistributedSystemMXBean.queryDataForCompressedResult | DATA:READ           |
+| GatewayReceiverMXBean.pause                    | DATA:MANAGE               |
+| GatewayReceiverMXBean.rebalance                | DATA:MANAGE               |
+| GatewayReceiverMXBean.resume                   | DATA:MANAGE               |
+| GatewayReceiverMXBean.start                    | DATA:MANAGE               |
+| GatewayReceiverMXBean.stop                     | DATA:MANAGE               |
+| GatewaySenderMXBean.pause                      | DATA:MANAGE               |
+| GatewaySenderMXBean.rebalance                  | DATA:MANAGE               |
+| GatewaySenderMXBean.resume                     | DATA:MANAGE               |
+| GatewaySenderMXBean.start                      | DATA:MANAGE               |
+| GatewaySenderMXBean.stop                       | DATA:MANAGE               |
+| LockServiceMXBean.becomeLockGrantor            | DATA:MANAGE               |
+| MemberMXBean.compactAllDiskStores              | DATA:MANAGE               |
+
+## Implement Authorization
+
+Complete these items to implement authorization.
+
+- Decide upon an authorization algorithm.
+The [Authorization Example](authorization_example.html)
+stores a mapping of which principals (users) are permitted to do
+which operations.
+The algorithm bases its decision
+on a look up of the permissions granted to the principal attempting
+the operation.
+- Define the `security-manager` property.
+See [Enable Security with Property Definitions](enable_security.html)
+for details about this property.
+- Implement the `authorize` method of the `SecurityManager` interface.
+- Define any extra resources that the implemented authorization algorithm
+needs in order to make a decision.
+