Posted to commits@geode.apache.org by km...@apache.org on 2016/10/12 17:11:48 UTC

[28/76] [abbrv] [partial] incubator-geode git commit: GEODE-1952 Consolidated docs under a single geode-docs directory

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/region_options/region_types.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/region_options/region_types.html.md.erb b/geode-docs/developing/region_options/region_types.html.md.erb
new file mode 100644
index 0000000..45908dd
--- /dev/null
+++ b/geode-docs/developing/region_options/region_types.html.md.erb
@@ -0,0 +1,129 @@
+---
+title:  Region Types
+---
+
+Region types define region behavior within a single distributed system. You have various options for region data storage and distribution.
+
+<a id="region_types__section_E3435ED1D0D142538B99FA69A9E449EF"></a>
+Within a Geode distributed system, you can define distributed regions and non-distributed regions, and you can define regions whose data is spread across the distributed system, and regions whose data is entirely contained in a single member.
+
+Your choice of region type is governed in part by the type of application you are running. In particular, you need to use specific region types for your servers and clients for effective communication between the two tiers:
+
+-   Server regions are created inside a `Cache` by servers and are accessed by clients that connect to the servers from outside the server's distributed system. Server regions must have region type partitioned or replicated. Server region configuration uses the `RegionShortcut` enum settings.
+-   Client regions are created inside a `ClientCache` by clients and are configured to distribute data and events between the client and the server tier. Client regions must have region type `local`. Client region configuration uses the `ClientRegionShortcut` enum settings.
+-   Peer regions are created inside a `Cache`. Peer regions may be server regions, or they may be regions that are not accessed by clients. Peer regions can have any region type. Peer region configuration uses the `RegionShortcut` enum settings.
+
+When you configure a server or peer region using `gfsh` or with the `cache.xml` file, you can use *region shortcuts* to define the basic configuration of your region. A region shortcut provides a set of default configuration attributes that are designed for various types of caching architectures. You can then add additional configuration attributes as needed to customize your application. For more information and a complete reference of these region shortcuts, see [Region Shortcuts Reference](../../reference/topics/region_shortcuts_reference.html#reference_lt4_54c_lk).
+
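+For illustration, the following is a minimal Java sketch of these configurations; the region name `exampleRegion` and the locator address are placeholders, and server and peer regions are more commonly defined through `gfsh` or `cache.xml` as described above.
+
+``` pre
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.RegionShortcut;
+import org.apache.geode.cache.client.ClientCache;
+import org.apache.geode.cache.client.ClientCacheFactory;
+import org.apache.geode.cache.client.ClientRegionShortcut;
+
+public class RegionTypeSketch {
+
+  // Runs in a server or peer member: server and peer regions use the RegionShortcut settings
+  static Region<String, String> createServerRegion() {
+    Cache cache = new CacheFactory().create();
+    return cache.<String, String>createRegionFactory(RegionShortcut.PARTITION)
+                .create("exampleRegion");
+  }
+
+  // Runs in a client process: client regions are local and use the ClientRegionShortcut settings
+  static Region<String, String> createClientRegion() {
+    ClientCache clientCache = new ClientCacheFactory()
+        .addPoolLocator("localhost", 10334)   // placeholder locator address
+        .create();
+    return clientCache.<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
+                      .create("exampleRegion");
+  }
+}
+```
+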
+<a id="region_types__section_A3449B07598C47A881D9219574DE46C5"></a>
+
+These are the primary configuration choices for each data region.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="34%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Region Type</th>
+<th>Description</th>
+<th>Best suited for...</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>Partitioned</td>
+<td>System-wide setting for the data set. Data is divided into buckets across the members that define the region. For high availability, configure redundant copies so each bucket is stored in multiple members with one member holding the primary.</td>
+<td>Server regions and peer regions
+<ul>
+<li>Very large data sets</li>
+<li>High availability</li>
+<li>Write performance</li>
+<li>Partitioned event listeners and data loaders</li>
+</ul></td>
+</tr>
+<tr class="even">
+<td>Replicated (distributed)</td>
+<td>Holds all data from the distributed region. The data from the distributed region is copied into the member replica region. Can be mixed with non-replication, with some members holding replicas and some holding non-replicas.</td>
+<td>Server regions and peer regions
+<ul>
+<li>Read heavy, small datasets</li>
+<li>Asynchronous distribution</li>
+<li>Query performance</li>
+</ul></td>
+</tr>
+<tr class="odd">
+<td>Distributed non-replicated</td>
+<td>Data is spread across the members that define the region. Each member holds only the data it has expressed interest in. Can be mixed with replication, with some members holding replicas and some holding non-replicas.</td>
+<td>Peer regions, but not server regions and not client regions
+<ul>
+<li>Asynchronous distribution</li>
+<li>Query performance</li>
+</ul></td>
+</tr>
+<tr class="even">
+<td>Non-distributed (local)</td>
+<td>The region is visible only to the defining member.</td>
+<td>Client regions and peer regions
+<ul>
+<li>Data that is not shared between applications</li>
+</ul></td>
+</tr>
+</tbody>
+</table>
+
+## <a id="region_types__section_C92C7DBD8EF44F1789FCB36281D3F8BF" class="no-quick-link"></a>Partitioned Regions
+
+Partitioning is a good choice for very large server regions. Partitioned regions are ideal for data sets in the hundreds of gigabytes and beyond.
+
+**Note:**
+Partitioned regions generally require more JDBC connections than other region types because each member that hosts data must have a connection.
+
+Partitioned regions group your data into buckets, each of which is stored on a subset of all of the system members. Data location in the buckets does not affect the logical view - all members see the same logical data set.
+
+Use partitioning for:
+
+-   **Large data sets**. Store data sets that are too large to fit into a single member, and all members will see the same logical data set. Partitioned regions divide the data into units of storage called buckets that are split across the members hosting the partitioned region data, so no member needs to host all of the region's data. Geode provides dynamic redundancy recovery and rebalancing of partitioned regions, making them the choice for large-scale data containers. More members in the system can accommodate more uniform balancing of the data across all host members, allowing system throughput (both gets and puts) to scale as new members are added.
+-   **High availability**. Partitioned regions allow you to configure the number of copies of your data that Geode should make (see the sketch at the end of this section). If a member fails, your data will be available without interruption from the remaining members. Partitioned regions can also be persisted to disk for additional high availability.
+-   **Scalability**. Partitioned regions can scale to large amounts of data because the data is divided between the members available to host the region. Increase your data capacity dynamically by simply adding new members. Partitioned regions also allow you to scale your processing capacity. Because your entries are spread out across the members hosting the region, reads and writes to those entries are also spread out across those members.
+-   **Good write performance**. You can configure the number of copies of your data. The amount of data transmitted per write does not increase with the number of members. By contrast, with replicated regions, each write must be sent to every member that has the region replicated, so the amount of data transmitted per write increases with the number of members.
+
+In partitioned regions, you can colocate keys within buckets and across multiple partitioned regions. You can also control which members store which data buckets.
+
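+The following is a minimal Java sketch of a partitioned region that keeps one redundant copy of each bucket for high availability; the region names are placeholders, and the same configuration can be made with `gfsh` or `cache.xml`.
+
+``` pre
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.cache.PartitionAttributesFactory;
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.RegionShortcut;
+
+public class PartitionedRegionSketch {
+  public static void main(String[] args) {
+    Cache cache = new CacheFactory().create();
+
+    // PARTITION_REDUNDANT keeps one redundant copy of each bucket for high availability
+    Region<Long, String> orders =
+        cache.<Long, String>createRegionFactory(RegionShortcut.PARTITION_REDUNDANT)
+             .create("orders");
+
+    // The same effect, configured explicitly through PartitionAttributesFactory
+    Region<Long, String> customers =
+        cache.<Long, String>createRegionFactory(RegionShortcut.PARTITION)
+             .setPartitionAttributes(new PartitionAttributesFactory<Long, String>()
+                 .setRedundantCopies(1)
+                 .create())
+             .create("customers");
+  }
+}
+```
+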
+## <a id="region_types__section_iwt_dnj_bm" class="no-quick-link"></a>Replicated Regions
+
+
+Replicated regions provide the highest performance in terms of throughput and latency.
+Replication is a good choice for small to medium size server regions.
+
+Use replicated regions for:
+
+-   **Small amounts of data required by all members of the distributed system**. For example, currency rate information and mortgage rates.
+-   **Data sets that can be contained entirely in a single member**. Each replicated region holds the complete data set for the region.
+-   **High performance data access**. Replication guarantees local access from the heap for application threads, providing the lowest possible latency for data access.
+-   **Asynchronous distribution**. All distributed regions, replicated and non-replicated, provide the fastest distribution speeds.
+
+## <a id="region_types__section_2232BEC969F74CDB91B1BB74FEF67EE1" class="no-quick-link"></a>Distributed, Non-Replicated Regions
+
+Distributed regions provide the same performance as replicated regions, but each member stores only data in which it has expressed an interest, either by subscribing to events from other members or by defining the data entries in its cache.
+
+Use distributed, non-replicated regions for:
+
+-   **Peer regions, but not server regions or client regions**. Server regions must be either replicated or partitioned. Client regions must be local.
+-   **Data sets where individual members need only notification and updates for changes to a subset of the data**. In non-replicated regions, each member receives only update events for the data entries it has defined in the local cache.
+-   **Asynchronous distribution**. All distributed regions, replicated and non-replicated, provide the fastest distribution speeds.
+
+## <a id="region_types__section_A8150BDBC74E4019B1942481877A4370" class="no-quick-link"></a>Local Regions
+
+**Note:**
+When created using the `ClientRegionShortcut` settings, client regions are automatically defined as local, since all client distribution activities go to and come from the server tier.
+
+The local region has no peer-to-peer distribution activity.
+
+Use local regions for:
+
+-   **Client regions**. Distribution is only between the client and server tier.
+-   **Private data sets for the defining member**. The local region is not visible to peer members.
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/region_options/storage_distribution_options.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/region_options/storage_distribution_options.html.md.erb b/geode-docs/developing/region_options/storage_distribution_options.html.md.erb
new file mode 100644
index 0000000..7ed2732
--- /dev/null
+++ b/geode-docs/developing/region_options/storage_distribution_options.html.md.erb
@@ -0,0 +1,23 @@
+---
+title:  Storage and Distribution Options
+---
+
+Geode provides several models for data storage and distribution, including partitioned or replicated regions as well as distributed or non-distributed regions (local cache storage).
+
+## <a id="concept_B18B7754E7C7485BA6D66F2DDB7A11FB__section_787D674A64244871AE49CBB58475088E" class="no-quick-link"></a>Peer-to-Peer Region Storage and Distribution
+
+At its most general, data management means having current data available when and where your applications need it. In a properly configured Geode installation, you store your data in your local members and Geode automatically distributes it to the other members that need it according to your cache configuration settings. You may be storing very large data objects that require special consideration, or you may have a high volume of data requiring careful configuration to safeguard your application's performance or memory use. You may need to be able to explicitly lock some data during particular operations. Most data management features are available as configuration options, which you can specify using the `gfsh` cluster configuration service, the `cache.xml` file, or the API. Once configured, Geode manages the data automatically. For example, this is how you manage data distribution, disk storage, data expiration activities, and data partitioning. A few features are managed at run-time through the API.
+
+At the architectural level, data distribution runs between peers in a single system and between clients and servers.
+
+-   Peer-to-peer provides the core distribution and storage models, which are specified as attributes on the data regions.
+
+-   For client/server, you choose which data regions to share between the client and server tiers. Then, within each region, you can fine-tune the data that the server automatically sends to the client by subscribing to subsets.
+
+Data storage in any type of installation is based on the peer-to-peer configuration for each individual distributed system. Data and event distribution is based on a combination of the peer-to-peer and system-to-system configurations.
+
+Storage and distribution models are configured through cache and region attributes. The main choices are partitioned, replicated, or just distributed. All server regions must be partitioned or replicated. Each region's `data-policy` and `subscription-attributes`, and its `scope` if it is not a partitioned region, interact for finer control of data distribution.
+
+## <a id="concept_B18B7754E7C7485BA6D66F2DDB7A11FB__section_A364D16DFADA49D1A838A7EAF8E4251C" class="no-quick-link"></a>Storing Data in the Local Cache
+
+To store data in your local cache, use a region `refid` with a `RegionShortcut` or `ClientRegionShortcut` that has local state. These automatically set the region `data-policy` to a non-empty policy. Regions without storage can send and receive event distributions without storing anything in your application heap. With the other settings, all entry operations received are stored locally.
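+
+As a client-side illustration of the same distinction, the following Java sketch contrasts a shortcut without local state (`PROXY`) with one that stores entries locally (`CACHING_PROXY`); the region names and locator address are placeholders.
+
+``` pre
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.client.ClientCache;
+import org.apache.geode.cache.client.ClientCacheFactory;
+import org.apache.geode.cache.client.ClientRegionShortcut;
+
+public class ClientStorageSketch {
+  public static void main(String[] args) {
+    ClientCache cache = new ClientCacheFactory()
+        .addPoolLocator("localhost", 10334)   // placeholder locator address
+        .create();
+
+    // PROXY: empty data-policy; nothing is stored locally, every operation goes to the servers
+    Region<String, String> proxyRegion =
+        cache.<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
+             .create("quotes");
+
+    // CACHING_PROXY: non-empty data-policy; entries the client receives are stored in local heap
+    Region<String, String> cachingRegion =
+        cache.<String, String>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
+             .create("rates");
+  }
+}
+```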

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/storing_data_on_disk/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/storing_data_on_disk/chapter_overview.html.md.erb b/geode-docs/developing/storing_data_on_disk/chapter_overview.html.md.erb
new file mode 100644
index 0000000..96c6a3d
--- /dev/null
+++ b/geode-docs/developing/storing_data_on_disk/chapter_overview.html.md.erb
@@ -0,0 +1,24 @@
+---
+title:  Persistence and Overflow
+---
+
+You can persist data on disk for backup purposes and overflow it to disk to free up memory without completely removing the data from your cache.
+
+**Note:**
+This supplements the general steps for managing data regions provided in [Basic Configuration and Programming](../../basic_config/book_intro.html).
+
+All disk storage uses Apache Geode [Disk Storage](../../managing/disk_storage/chapter_overview.html).
+
+-   **[How Persistence and Overflow Work](../../developing/storing_data_on_disk/how_persist_overflow_work.html)**
+
+    To use Geode persistence and overflow, you should understand how they work with your data.
+
+-   **[Configure Region Persistence and Overflow](../../developing/storing_data_on_disk/storing_data_on_disk.html)**
+
+    Plan persistence and overflow for your data regions and configure them accordingly.
+
+-   **[Overflow Configuration Examples](../../developing/storing_data_on_disk/overflow_config_examples.html)**
+
+    The `cache.xml` examples show configuration of region and server subscription queue overflows.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/storing_data_on_disk/how_persist_overflow_work.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/storing_data_on_disk/how_persist_overflow_work.html.md.erb b/geode-docs/developing/storing_data_on_disk/how_persist_overflow_work.html.md.erb
new file mode 100644
index 0000000..2c08c33
--- /dev/null
+++ b/geode-docs/developing/storing_data_on_disk/how_persist_overflow_work.html.md.erb
@@ -0,0 +1,47 @@
+---
+title:  How Persistence and Overflow Work
+---
+
+To use Geode persistence and overflow, you should understand how they work with your data.
+
+<a id="how_persist_overflow_work__section_jzl_wwb_pr"></a>
+Geode persists and overflows several types of data. You can persist or overflow the application data in your regions. In addition, Geode persists and overflows messaging queues between distributed systems, to manage memory consumption and provide high availability.
+
+Persistent data outlives the member where the region resides and can be used to initialize the region at creation. Overflow acts only as an extension of the region in memory.
+
+The data is written to disk according to the configuration of Geode disk stores. For any disk option, you can specify the name of the disk store to use or use the Geode default disk store. See [Disk Storage](../../managing/disk_storage/chapter_overview.html).
+
+## <a id="how_persist_overflow_work__section_78F2D1820B6C48859A0E5411CE360105" class="no-quick-link"></a>How Data Is Persisted and Overflowed
+
+For persistence, the entry keys and values are copied to disk. For overflow, only the entry values are copied. Other data, such as statistics and user attributes, are retained in memory only.
+
+-   Data regions are overflowed to disk by least recently used (LRU) entries because those entries are deemed of least interest to the application and therefore less likely to be accessed.
+-   Server subscription queues overflow most recently used (MRU) entries. These are the messages that are at the end of the queue and so are last in line to be sent to the client.
+
+## <a id="how_persist_overflow_work__section_1A3AE288145749058880D98C699FE124" class="no-quick-link"></a>Persistence
+
+Persistence provides a disk backup of region entry data. The keys and values of all entries are saved to disk, like having a replica of the region on disk. Region entry operations such as put and destroy are carried out in memory and on disk.
+
+<img src="../../images_svg/developing_persistence.svg" id="how_persist_overflow_work__image_B53E1A5A568D437692247A2FD99348A6" class="image" />
+
+When the member stops for any reason, the region data on disk remains. In partitioned regions, where data buckets are divided among members, this can result in some entries residing only on disk and others residing both on disk and in memory. The disk data can be used at member startup to populate the same region.
+
+## <a id="how_persist_overflow_work__section_55A7BBEB48574F649C40EB5D3E9CD0AC" class="no-quick-link"></a>Overflow
+
+Overflow limits region size in memory by moving the values of least recently used (LRU) entries to disk. Overflow basically uses disk as a swap space for entry values. If an entry is requested whose value is only on disk, the value is copied back up into memory, possibly causing the value of a different LRU entry to be moved to disk. As with persisted entries, overflowed entries are maintained on disk just as they are in memory.
+
+In this figure, the value of entry X has been moved to disk to make space in memory. The key for X remains in memory. From the distributed system perspective, the value on disk is as much a part of the region as the data in memory.
+
+<img src="../../images_svg/developing_overflow.svg" id="how_persist_overflow_work__image_1F89C9FBACB54EDA844778EC60F61B8D" class="image" />
+
+## <a id="how_persist_overflow_work__section_9CBEBC0B59554DB49CE4941435793C51" class="no-quick-link"></a>Persistence and Overflow Together
+
+Used together, persistence and overflow keep all entry keys and values on disk and only the most active entry values in memory. The removal of an entry value from memory due to overflow has no effect on the disk copy as all entries are already on disk.
+
+<img src="../../images_svg/developing_persistence_and_overflow.svg" id="how_persist_overflow_work__image_E40D9C2EA238406A991E954477C7EB78" class="image" />
+
+## Persistence and Multi-Site Configurations
+
+Multi-site gateway sender queues overflow most recently used (MRU) entries. These are the messages that are at the end of the queue and so are last in line to be sent to the remote site. You can also configure gateway sender queues to persist for high availability.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/storing_data_on_disk/overflow_config_examples.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/storing_data_on_disk/overflow_config_examples.html.md.erb b/geode-docs/developing/storing_data_on_disk/overflow_config_examples.html.md.erb
new file mode 100644
index 0000000..ca9d7cd
--- /dev/null
+++ b/geode-docs/developing/storing_data_on_disk/overflow_config_examples.html.md.erb
@@ -0,0 +1,36 @@
+---
+title:  Overflow Configuration Examples
+---
+
+The `cache.xml` examples show configuration of region and server subscription queue overflows.
+
+<a id="overflow_config_examples__section_FD38DA72706245C996ACB7B23927F6AF"></a>
+Configure overflow criteria based on one of these factors:
+
+-   Entry count
+-   Absolute memory consumption
+-   Memory consumption as a percentage of the application heap (not available for server subscription queues)
+
+Configuration of region overflow:
+
+``` pre
+<!-- Overflow when the region goes over 10000 entries -->
+<region-attributes>
+  <eviction-attributes>
+    <lru-entry-count maximum="10000" action="overflow-to-disk"/>
+  </eviction-attributes>
+</region-attributes>
+```
+
+Configuration of server's client subscription queue overflow:
+
+``` pre
+<!-- Overflow the server's subscription queues when the queues reach 1 MB of memory -->
+<cache> 
+  <cache-server> 
+    <client-subscription eviction-policy="mem" capacity="1"/> 
+  </cache-server> 
+</cache>
+```
+
+
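+The same overflow criteria can also be set through the API. The following Java sketch uses hypothetical region names; each call names the eviction criterion and the `OVERFLOW_TO_DISK` action, and evicted values go to the default disk store unless a disk store name is set on the region.
+
+``` pre
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.cache.EvictionAction;
+import org.apache.geode.cache.EvictionAttributes;
+import org.apache.geode.cache.RegionShortcut;
+
+public class OverflowCriteriaSketch {
+  public static void main(String[] args) {
+    Cache cache = new CacheFactory().create();
+
+    // Entry count: overflow to disk once the region holds more than 10000 entries
+    cache.createRegionFactory(RegionShortcut.REPLICATE)
+         .setEvictionAttributes(
+             EvictionAttributes.createLRUEntryAttributes(10000, EvictionAction.OVERFLOW_TO_DISK))
+         .create("entryCountRegion");
+
+    // Absolute memory: overflow once the region holds roughly 100 megabytes of entry values
+    cache.createRegionFactory(RegionShortcut.REPLICATE)
+         .setEvictionAttributes(
+             EvictionAttributes.createLRUMemoryAttributes(100, EvictionAction.OVERFLOW_TO_DISK))
+         .create("memorySizeRegion");
+
+    // Heap-percentage eviction is configured with EvictionAttributes.createLRUHeapAttributes
+    // together with the cache resource manager's eviction heap percentage setting.
+  }
+}
+```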

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/storing_data_on_disk/storing_data_on_disk.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/storing_data_on_disk/storing_data_on_disk.html.md.erb b/geode-docs/developing/storing_data_on_disk/storing_data_on_disk.html.md.erb
new file mode 100644
index 0000000..9aefd7c
--- /dev/null
+++ b/geode-docs/developing/storing_data_on_disk/storing_data_on_disk.html.md.erb
@@ -0,0 +1,62 @@
+---
+title:  Configure Region Persistence and Overflow
+---
+
+Plan persistence and overflow for your data regions and configure them accordingly.
+
+<a id="storing_data_on_disk__section_E253562A46114CF0A4E47048D8143999"></a>
+Use the following steps to configure your data regions for persistence and overflow:
+
+1.  Configure your disk stores as needed. See [Designing and Configuring Disk Stores](../../managing/disk_storage/using_disk_stores.html#defining_disk_stores). The cache disk store defines where and how the data is written to disk.
+
+    ``` pre
+    <disk-store name="myPersistentStore" . . . >
+    <disk-store name="myOverflowStore" . . . >
+    ```
+
+2.  Specify the persistence and overflow criteria for the region. If you are not using the default disk store, provide the disk store name in your region attributes configuration. To write asynchronously to disk, specify `disk-synchronous="false"`.
+    -   For overflow, specify the overflow criteria in the region's `eviction-attributes` and name the disk store to use.
+
+        Example:
+
+        ``` pre
+        <region name="overflowRegion" . . . >
+          <region-attributes disk-store-name="myOverflowStore" disk-synchronous="true">
+            <eviction-attributes>
+              <!-- Overflow to disk when 100 megabytes of data reside in the
+                   region -->
+              <lru-memory-size maximum="100" action="overflow-to-disk"/>
+            </eviction-attributes>
+          </region-attributes>
+        </region>
+        ```
+
+        gfsh:
+
+        You cannot configure `lru-memory-size` using gfsh.
+    -   For persistence, set the `data-policy` to `persistent-replicate` and name the disk store to use.
+
+        Example:
+
+        ``` pre
+        <region name="partitioned_region" refid="PARTITION_PERSISTENT">
+          <region-attributes disk-store-name="myPersistentStore">
+            . . . 
+          </region-attributes>
+        </region> 
+        ```
+
+When you start your members, overflow and persistence are carried out automatically, using the disk stores and disk write behaviors you have configured.
+
+**Note:**
+You can also configure Regions and Disk Stores using the gfsh command-line interface. See [Region Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_EF03119A40EE492984F3B6248596E1DD) and [Disk Store Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_1ACC91B493EE446E89EC7DBFBBAE00EA).
+
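+The same region and disk store configuration can also be expressed through the Java API. The following is a sketch only; the store names match the examples above, while the directory paths are placeholders and are assumed to exist before the disk stores are created.
+
+``` pre
+import java.io.File;
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.cache.EvictionAction;
+import org.apache.geode.cache.EvictionAttributes;
+import org.apache.geode.cache.RegionShortcut;
+
+public class PersistOverflowSketch {
+  public static void main(String[] args) {
+    Cache cache = new CacheFactory().create();
+
+    // Disk stores, analogous to the <disk-store> elements above; the directories must exist
+    cache.createDiskStoreFactory()
+         .setDiskDirs(new File[] { new File("persistData") })
+         .create("myPersistentStore");
+    cache.createDiskStoreFactory()
+         .setDiskDirs(new File[] { new File("overflowData") })
+         .create("myOverflowStore");
+
+    // Overflow region: move LRU entry values to disk once the region holds about 100 MB
+    cache.createRegionFactory(RegionShortcut.REPLICATE)
+         .setDiskStoreName("myOverflowStore")
+         .setDiskSynchronous(true)
+         .setEvictionAttributes(
+             EvictionAttributes.createLRUMemoryAttributes(100, EvictionAction.OVERFLOW_TO_DISK))
+         .create("overflowRegion");
+
+    // Persistent partitioned region, analogous to refid="PARTITION_PERSISTENT" above
+    cache.createRegionFactory(RegionShortcut.PARTITION_PERSISTENT)
+         .setDiskStoreName("myPersistentStore")
+         .create("partitioned_region");
+  }
+}
+```
+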
+<a id="storing_data_on_disk__section_0D825566F508444C98DFE57527962FED"></a>
+
+| Related Topics                                                                        |
+|---------------------------------------------------------------------------------------|
+| `org.apache.geode.cache.RegionAttributes` for data region persistence information |
+| `org.apache.geode.cache.EvictionAttributes` for data region overflow information  |
+| `org.apache.geode.cache.server.ClientSubscriptionConfig`                          |
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/transactions/JTA_transactions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/JTA_transactions.html.md.erb b/geode-docs/developing/transactions/JTA_transactions.html.md.erb
new file mode 100644
index 0000000..31d0cbb
--- /dev/null
+++ b/geode-docs/developing/transactions/JTA_transactions.html.md.erb
@@ -0,0 +1,226 @@
+---
+title: JTA Global Transactions with Geode
+---
+
+
+Use JTA global transactions to coordinate Geode cache transactions and JDBC transactions.
+
+JTA is a standard Java interface you can use to coordinate Geode cache transactions and JDBC transactions globally under one umbrella. JTA provides direct coordination between the Geode cache and another transactional resource, such as a database. The parties involved in a JTA transaction include:
+
+-   The Java application, responsible for starting the global transaction
+-   The JTA transaction manager, responsible for opening, committing, and rolling back transactions
+-   The transaction resource managers, including the Geode cache transaction manager and the JDBC resource manager, responsible for managing operations in the cache and database respectively
+
+Using JTA, your application controls all transactions in the same standard way, whether the transactions act on the Geode cache, a JDBC resource, or both together. When a JTA global transaction is done, the Geode transaction and the database transaction are both complete.
+
+When using JTA global transactions with Geode, you have three options:
+
+1.  Coordinate with an external JTA transaction manager in a container (such as WebLogic or JBoss)
+2.  Set Geode as the "last resource" while using a container (such as WebLogic or JBoss) as the JTA transaction manager
+3.  Have Geode act as the JTA transaction manager
+
+An application creates a global transaction by using `javax.transaction.UserTransaction` bound to the JNDI context `java:/UserTransaction` to start and terminate transactions. During the transaction, cache operations are done through Geode as usual as described in [Geode Cache Transactions](cache_transactions.html#topic_e15_mr3_5k).
+
+**Note:**
+See the Sun documentation for more information on topics such as JTA, `javax.transaction`, committing and rolling back global transactions, and the related exceptions.
+
+-   **[Coordinating with External JTA Transaction Managers](#concept_cp1_zx1_wk)**
+
+    Geode can work with the JTA transaction managers of several containers, such as JBoss, WebLogic, and GlassFish.
+
+-   **[Using Geode as the "Last Resource" in a Container-Managed JTA Transaction](#concept_csy_vfb_wk)**
+
+    The "last resource" feature in certain third-party containers, such as WebLogic, allows the use of one non-XAResource (such as Geode) in a transaction with multiple XAResources while ensuring consistency.
+
+-   **[Using Geode as the JTA Transaction Manager](#concept_8567sdkbigige)**
+
+    You can also use Geode as the JTA transaction manager.
+
+-   **[Behavior of Geode Cache Writers and Loaders Under JTA](cache_plugins_with_jta.html)**
+
+    When Geode participates in a global transaction, you can still have Geode cache writers and cache loaders operating in the usual way.
+
+-   **[Turning Off JTA Transactions](turning_off_jta.html)**
+
+    You can configure regions to not participate in any JTA global transaction.
+
+<a id="concept_cp1_zx1_wk"></a>
+
+# Coordinating with External JTA Transaction Managers
+
+Geode can work with the JTA transaction managers of several containers, such as JBoss, WebLogic, and GlassFish.
+
+At startup Geode looks for a TransactionManager (`javax.transaction.TransactionManager`) that has been bound to its JNDI context. When Geode finds such an external transaction manager, all Geode region operations (such as get and put) will participate in global transactions hosted by this external JTA transaction manager.
+
+This figure shows the high-level operation of a JTA global transaction whose resources include a Geode cache and a database.
+
+<img src="../../images/transactions_jta_app_server.png" id="concept_cp1_zx1_wk__image_C2935E48415349659FC39BF5C7E75579" class="image" />
+
+An externally coordinated JTA global transaction is run in the following manner:
+
+1.  Each region operation checks for the presence of a global transaction. If one is detected, a Geode transaction is started automatically, and Geode registers a `javax.transaction.Synchronization` callback with the external JTA transaction manager.
+2.  At transaction commit, Geode gets a `beforeCommit()` callback from the external JTA transaction manager. Geode does all locking and conflict detection at this time. If this fails, an exception is thrown back to the JTA transaction manager, which then aborts the transaction.
+3.  After a successful `beforeCommit()` callback, the JTA transaction manager asks the other data sources to commit their transactions.
+4.  Geode then gets an `afterCommit()` callback in which changes are applied to the cache and distributed to other members.
+
+You can disable JTA in any region that should not participate in JTA transactions. See [Turning Off JTA Transactions](turning_off_jta.html#concept_nw2_5gs_xk).
+
+## <a id="task_j3g_3mn_1l" class="no-quick-link"></a>How to Run a JTA Transaction Coordinated by an External Transaction Manager
+
+Use the following procedure to run a Geode global JTA transaction coordinated by an external JTA transaction manager.
+
+1.  **Configure the external data sources in the external container.** Do not configure the data sources in `cache.xml`. They are not guaranteed to get bound to the JNDI tree.
+2.  Configure Geode for any necessary transactional behavior in the `cache.xml` file. For example, enable `copy-on-read` and specify a transaction listener, as needed. See [Setting Global Copy on Read](working_with_transactions.html#concept_vx2_gs4_5k) and [Configuring Transaction Plug-In Event Handlers](working_with_transactions.html#concept_ocw_vf1_wk) for details.
+3.  Make sure that JTA transactions are enabled for the regions that will participate in the transaction. See [Turning Off JTA Transactions](turning_off_jta.html#concept_nw2_5gs_xk) for details.
+4.  Start the transaction through the external container.
+5.  Initialize the Geode cache. Geode will automatically join the transaction.
+6.  Execute operations in the cache and the database as usual.
+7.  Commit the transaction through the external container.
+
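+The following Java sketch illustrates the pattern described in these steps. The JNDI names, data source, and region are placeholders, and the `UserTransaction` binding in particular varies by container.
+
+``` pre
+import java.sql.Connection;
+import javax.naming.Context;
+import javax.naming.InitialContext;
+import javax.sql.DataSource;
+import javax.transaction.UserTransaction;
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.Region;
+
+public class ExternalJtaSketch {
+  // cache is assumed to be an already-initialized Geode cache running in the container
+  public void updateBalance(Cache cache) throws Exception {
+    Context ctx = new InitialContext();
+
+    // The container binds UserTransaction; the exact JNDI name varies by container
+    UserTransaction tx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
+    DataSource ds = (DataSource) ctx.lookup("jdbc/exampleDataSource"); // container-defined
+
+    tx.begin();
+    Region<String, Integer> balances = cache.getRegion("balances"); // placeholder region
+    balances.put("account-1", 100);                                 // joins the global transaction
+    try (Connection conn = ds.getConnection()) {                    // JDBC work, same transaction
+      conn.createStatement().executeUpdate(
+          "UPDATE balances SET amount = 100 WHERE id = 'account-1'");
+    }
+    tx.commit(); // the container commits the database; Geode is synchronized through callbacks
+  }
+}
+```
+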
+<a id="concept_csy_vfb_wk"></a>
+
+# Using Geode as the "Last Resource" in a Container-Managed JTA Transaction
+
+The "last resource" feature in certain third-party containers, such as WebLogic, allows the use of one non-XAResource (such as Geode) in a transaction with multiple XAResources while ensuring consistency.
+
+In the previous two JTA transaction use cases, if the Geode member fails after the other data sources commit but before Geode receives the `afterCommit` callback, Geode and the other data sources may become inconsistent. To prevent this from occurring, you can use the container's "last resource optimization" feature, with Geode set as the "last resource". Using Geode as the last resource ensures that in the event of failure, Geode remains consistent with the other XAResources involved in the transaction.
+
+To accomplish this, the application server container must use a JCA Resource Adapter to accommodate Geode as the transaction's last resource. The transaction manager of the container first issues a "prepare" message to the participating XAResources. If the XAResources all accept the transaction, then the manager issues a "commit" instruction to the non-XAResource (in this case, Geode). The non-XAResource participates as a local transaction resource. If the non-XAResource fails, then the transaction manager can roll back the XAResources.
+
+<img src="../../images/transactions_jca_adapter.png" id="concept_csy_vfb_wk__image_opb_sgb_wk" class="image" />
+
+<a id="task_sln_x3b_wk"></a>
+
+# How to Run JTA Transactions with Geode as a "Last Resource"
+
+1.  Locate the `$GEMFIRE/lib/gemfire-jca.rar` file in your Geode installation. 
+2.  Add your container-specific XML file to the `gemfire-jca.rar` file. 
+<ol>
+<li>Create a container-specific resource adapter XML file named &lt;container&gt;-ra.xml. For example, a WebLogic resource adapter XML file might look something like this:
+
+    ``` pre
+    <?xml version="1.0"?>
+    <!DOCTYPE weblogic-connection-factory-dd PUBLIC '-//BEA Systems, Inc.//DTD WebLogic 9.0.0 Connector//EN' 
+    'http://www.bea.com/servers/wls810/dtd/weblogic810-ra.dtd'>
+
+    <weblogic-connection-factory-dd>
+       <connection-factory-name>GFE JCA</connection-factory-name>
+       <jndi-name>gfe/jca</jndi-name>
+    </weblogic-connection-factory-dd>
+    ```
+</li>
+<li>Create a folder named `META-INF`, and place the container-specific XML file inside the directory. For example, the folder structure would look like this:
+
+    ``` pre
+    META-INF/weblogic-ra.xml
+    ```
+</li>
+<li>Navigate to the directory above the `META-INF` folder and execute the following command:
+
+    ``` pre
+    $ jar -uf <GEMFIRE_INSTALL_DIR>/lib/gemfire-jca.rar META-INF/weblogic-ra.xml
+    ```
+</li>
+</ol>
+3.  Make sure that `$GEMFIRE/lib/gemfire.jar` is accessible in the CLASSPATH of the JTA transaction coordinator container.
+4.  Deploy the `gemfire-jca.rar` file on the JTA transaction coordinator container. When deploying the file, specify the JNDI name and other required properties.
+5.  Configure Geode for any necessary transactional behavior. Enable `copy-on-read` and specify a transaction listener, if you need one. See [Setting Global Copy on Read](working_with_transactions.html#concept_vx2_gs4_5k) and [Configuring Transaction Plug-In Event Handlers](working_with_transactions.html#concept_ocw_vf1_wk) for details.
+6.  Get an initial context through `org.apache.geode.cache.Cache.getJNDIContext`. For example:
+
+    ``` pre
+    Context ctx = cache.getJNDIContext();
+    ```
+
+    This returns `javax.naming.Context` and gives you the JNDI associated with the cache. The context contains the `TransactionManager`, `UserTransaction`, and any configured JDBC resource manager.
+
+7.  Start and commit the global transaction using the `UserTransaction` object rather than with Geode's `CacheTransactionManager`. 
+
+    ``` pre
+    UserTransaction txManager = (UserTransaction)ctx.lookup("java:/UserTransaction");
+    ```
+
+8.  Obtain a Geode connection.
+
+    ``` pre
+    GFConnectionFactory cf = (GFConnectionFactory) ctx.lookup("gfe/jca");
+
+    //This step of obtaining connection is what begins the
+    //LocalTransaction.
+    //If this is absent, GFE operations will not be part of any
+    //transaction
+    GFConnection gemfireConn = (GFConnection)cf.getConnection();
+    ```
+
+See [JCA Resource Adapter Example](jca_adapter_example.html#concept_swv_z2p_wk) for an example of how to set up a transaction using the JCA Resource Adapter.
+
+## <a id="concept_8567sdkbigige" class="no-quick-link"></a>Using Geode as the JTA Transaction Manager
+
+You can also use Geode as the JTA transaction manager.
+
+Geode ships with its own implementation of a JTA transaction manager. However, note that this implementation is not XA-compliant; therefore, it does not persist any state, which could lead to an inconsistent state after recovering a crashed member.
+
+<img src="../../images/transactions_jta.png" id="concept_8567sdkbigige__image_C8D94070E55F4BCC8B5FF3D5BEBA99ED" class="image" />
+
+The Geode JTA transaction manager is initialized when the Geode cache is initialized. Until then, JTA is not available for use. The application starts a JTA transaction by using the `UserTransaction.begin` method. The `UserTransaction` object is the application's handle to instruct the JTA transaction manager on what to do.
+
+The Geode JTA implementation also supports the J2EE Connector Architecture (JCA) `ManagedConnectionFactory`.
+
+The Geode implementation of JTA has the following limitations:
+
+-   Only one JDBC database instance per transaction is allowed, although you can have multiple connections to that database.
+-   Multiple threads cannot participate in a transaction.
+-   Transaction recovery after a crash is not supported.
+
+In addition, JTA transactions are subject to the limitations of Geode cache transactions such as not being supported on regions with global scope. When a global transaction needs to access the Geode cache, JTA silently starts a Geode cache transaction.
+
+<a id="task_qjv_khb_wk"></a>
+
+# How to Run a JTA Global Transaction Using Geode as the JTA Transaction Manager
+
+This topic describes how to run a JTA global transaction in Geode.
+
+To run a global transaction, perform the following steps:
+
+1. Configure the external data sources in the `cache.xml` file. See [Configuring Database Connections Using JNDI](configuring_db_connections_using_JNDI.html#topic_A5E3A67C808D48C08E1F0DC167C5C494) for examples. 
+2. Include the JAR file for any data sources in your CLASSPATH. 
+3.  Configure Geode for any necessary transactional behavior. Enable `copy-on-read` for your cache and specify a transaction listener, if you need one. See [Setting Global Copy on Read](working_with_transactions.html#concept_vx2_gs4_5k) and [Configuring Transaction Plug-In Event Handlers](working_with_transactions.html#concept_ocw_vf1_wk) for details. 
+4.  Make sure that JTA transactions are not disabled in the `cache.xml` file or the application code. 
+5.  Initialize the Geode cache. 
+6.  Get an initial context through `org.apache.geode.cache.Cache.getJNDIContext`. For example: 
+
+    ``` pre
+    Context ctx = cache.getJNDIContext();
+    ```
+
+    This returns `javax.naming.Context` and gives you the JNDI associated with the cache. The context contains the `TransactionManager`, `UserTransaction`, and any configured JDBC resource manager.
+
+7.  Look up the `UserTransaction` context: 
+
+    ``` pre
+    UserTransaction txManager = (UserTransaction) ctx.lookup("java:/UserTransaction");
+    ```
+
+    With `UserTransaction`, you can begin, commit, and rollback transactions.
+    If a global transaction exists when you use the cache, it automatically joins the transaction. Operations on a region automatically detect and become associated with the existing global transaction through JTA synchronization. If the global transaction has been marked for rollback, however, the Geode cache is not allowed to enlist with that transaction. Any cache operation that causes an attempt to enlist throws a `FailedSynchronizationException`.
+
+    The Geode cache transaction's commit or rollback is triggered when the global transaction commits or rolls back. When the global transaction is committed using the `UserTransaction` interface, the transactions of any registered JTA resources are committed, including the Geode cache transaction. If the cache or database transaction fails to commit, the `UserTransaction` call throws a `TransactionRolledBackException`. If a commit or rollback is attempted directly on a Geode transaction that is registered with JTA, that action throws an `IllegalStateException`.
+
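+The following sketch pulls these steps together. The region, the data-source binding, and the SQL statement are placeholders for whatever is configured in your `cache.xml`.
+
+``` pre
+import java.sql.Connection;
+import javax.naming.Context;
+import javax.sql.DataSource;
+import javax.transaction.UserTransaction;
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.cache.Region;
+
+public class GeodeJtaSketch {
+  public static void main(String[] args) throws Exception {
+    // Initializing the cache also initializes the Geode JTA transaction manager
+    Cache cache = new CacheFactory().create();
+    Region<String, String> region = cache.getRegion("exampleRegion"); // defined in cache.xml
+
+    Context ctx = cache.getJNDIContext();
+    UserTransaction tx = (UserTransaction) ctx.lookup("java:/UserTransaction");
+    DataSource ds = (DataSource) ctx.lookup("java:/exampleDataSource"); // placeholder binding
+
+    tx.begin();
+    region.put("key", "value");                  // the cache operation joins the global transaction
+    try (Connection conn = ds.getConnection()) { // JDBC operation in the same global transaction
+      conn.createStatement().executeUpdate("UPDATE example_table SET val = 'value'");
+    }
+    tx.commit();                                 // commits both the cache and the JDBC work
+  }
+}
+```
+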
+See [Geode JTA Transaction Example](transaction_jta_gemfire_example.html#concept_ffg_sj5_1l).
+
+-   **[Configuring Database Connections Using JNDI](configuring_db_connections_using_JNDI.html)**
+
+-   **[Example DataSource Configurations in cache.xml](configuring_db_connections_using_JNDI.html#topic_F67EC20067124A618A8099AB4CBF634C)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/transactions/about_transactions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/about_transactions.html.md.erb b/geode-docs/developing/transactions/about_transactions.html.md.erb
new file mode 100644
index 0000000..158f5f8
--- /dev/null
+++ b/geode-docs/developing/transactions/about_transactions.html.md.erb
@@ -0,0 +1,30 @@
+---
+title: About Transactions
+---
+
+<a id="topic_jbt_2y4_wk"></a>
+
+
+This section covers the features of Geode transactions.
+
+Geode transactions provide the following features:
+
+-   Basic transaction properties: atomicity, consistency, isolation, and durability
+-   Rollback and commit operations along with standard Geode cache operations
+-   Ability to suspend and resume transactions (see the sketch following this list)
+-   High concurrency and high performance
+-   Transaction statistics gathering and archiving
+-   Compatibility with Java Transaction API (JTA) transactions, using either Geode JTA or a third-party implementation
+-   Ability to use Geode as a "last resource" in JTA transactions with multiple data sources to guarantee transactional consistency
+
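+A minimal sketch of suspending and resuming a cache transaction follows; the region name is a placeholder, and the region is assumed to already exist.
+
+``` pre
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.cache.CacheTransactionManager;
+import org.apache.geode.cache.TransactionId;
+
+public class SuspendResumeSketch {
+  public static void main(String[] args) {
+    Cache cache = new CacheFactory().create();
+    CacheTransactionManager txMgr = cache.getCacheTransactionManager();
+
+    txMgr.begin();
+    cache.getRegion("exampleRegion").put("key", "value"); // region assumed to be defined already
+
+    // Suspend frees this thread; the returned id lets a thread resume the transaction later
+    TransactionId txId = txMgr.suspend();
+
+    // ... other, non-transactional work ...
+
+    txMgr.resume(txId);
+    txMgr.commit();
+  }
+}
+```
+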
+## Types of Transactions
+
+Geode supports two kinds of transactions: **Geode cache transactions** and **JTA global transactions**.
+
+Geode cache transactions are used to group the execution of cache operations and to gain the control offered by transactional commit and rollback. Applications create cache transactions by using an instance of the Geode `CacheTransactionManager`. During a transaction, cache operations are performed and distributed through Geode as usual. See [Geode Cache Transactions](cache_transactions.html#topic_e15_mr3_5k) for details on Geode cache transactions and how these transactions work.
+
+JTA global transactions allow you to use the standard JTA interface to coordinate Geode transactions with JDBC transactions. When performing JTA global transactions, you have the option of using Geode's own implementation of JTA or a third party's implementation (typically application servers such as WebLogic or JBoss) of JTA. In addition, some third party JTA implementations allow you to set Geode as a "last resource" to ensure transactional consistency across data sources in the event that Geode or another data source becomes unavailable. For global transactions, applications use `java:/UserTransaction` to start and terminate transactions while Geode cache operations are performed in the same manner as regular Geode cache transactions. See [JTA Global Transactions with Geode](JTA_transactions.html) for details on JTA Global transactions.
+
+You can also coordinate a Geode cache transaction with an external database by specifying database operations within cache and transaction application plug-ins (CacheWriters/CacheListeners and TransactionWriters/TransactionListeners). This is an alternative to using JTA transactions. See [How to Run a Geode Cache Transaction that Coordinates with an External Database](run_a_cache_transaction_with_external_db.html#task_sdn_2qk_2l).
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/transactions/cache_plugins_with_jta.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/cache_plugins_with_jta.html.md.erb b/geode-docs/developing/transactions/cache_plugins_with_jta.html.md.erb
new file mode 100644
index 0000000..d0199f5
--- /dev/null
+++ b/geode-docs/developing/transactions/cache_plugins_with_jta.html.md.erb
@@ -0,0 +1,11 @@
+---
+title:  Behavior of Geode Cache Writers and Loaders Under JTA
+---
+
+When Geode participates in a global transaction, you can still have Geode cache writers and cache loaders operating in the usual way.
+
+For example, in addition to the transactional connection to the database, the region could also have a cache writer and cache loader configured to exchange data with that same database. As long as the data source is transactional, which means that it can detect the transaction manager, the cache writer and cache loader participate in the transaction. If the JTA rolls back its transaction, the changes made by the cache loader and the cache writer are rolled back. For more on transactional data sources, see the discussion of XAPooledDataSource and ManagedDataSource in [Configuring Database Connections Using JNDI](configuring_db_connections_using_JNDI.html#topic_A5E3A67C808D48C08E1F0DC167C5C494).
+
+If you are using a Geode cache or transaction listener with global transactions, be aware that the EntryEvent returned by a transaction has the Geode transaction ID, not the JTA transaction ID.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/transactions/cache_transaction_performance.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/cache_transaction_performance.html.md.erb b/geode-docs/developing/transactions/cache_transaction_performance.html.md.erb
new file mode 100644
index 0000000..f78e21b
--- /dev/null
+++ b/geode-docs/developing/transactions/cache_transaction_performance.html.md.erb
@@ -0,0 +1,12 @@
+---
+title:  Cache Transaction Performance
+---
+
+Cache transaction performance can vary depending on the type of regions you are using.
+
+The most common region configurations for use with transactions are distributed replicated and partitioned:
+
+-   Replicated regions are better suited for running transactions on small to mid-size data sets. To ensure all-or-nothing behavior, at commit time, distributed transactions use the global reservation system of the Geode distributed lock service. This works well as long as the data set is reasonably small.
+-   Partitioned regions are the right choice for highly performant, scalable operations. Transactions on partitioned regions use only local locking, and only send messages to the redundant data stores at commit time. Because of this, these transactions perform much better than distributed transactions. There are no global locks, so partitioned transactions are extremely scalable as well.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/transactions/cache_transactions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/cache_transactions.html.md.erb b/geode-docs/developing/transactions/cache_transactions.html.md.erb
new file mode 100644
index 0000000..01359cd
--- /dev/null
+++ b/geode-docs/developing/transactions/cache_transactions.html.md.erb
@@ -0,0 +1,34 @@
+---
+title: Geode Cache Transactions
+---
+
+<a id="topic_e15_mr3_5k"></a>
+
+
+Use Geode cache transactions to group the execution of cache operations and to gain the control offered by transactional commit and rollback. Geode cache transactions control operations within the Geode cache while the Geode distributed system handles data distribution in the usual way.
+
+-   **[Cache Transaction Performance](../../developing/transactions/cache_transaction_performance.html)**
+
+    Cache transaction performance can vary depending on the type of regions you are using.
+
+-   **[Data Location for Cache Transactions](../../developing/transactions/data_location_cache_transactions.html)**
+
+    The location where you can run your transaction depends on where you are storing your data.
+
+-   **[How to Run a Geode Cache Transaction](../../developing/transactions/run_a_cache_transaction.html)**
+
+    This topic describes how to run a Geode cache transaction.
+
+-   **[How to Run a Geode Cache Transaction that Coordinates with an External Database](../../developing/transactions/run_a_cache_transaction_with_external_db.html)**
+
+    Coordinate a Geode cache transaction with an external database by using CacheWriter/CacheListener and TransactionWriter/TransactionListener plug-ins, **to provide an alternative to using JTA transactions**.
+
+-   **[Working with Geode Cache Transactions](../../developing/transactions/working_with_transactions.html)**
+
+    This section contains guidelines and additional information on working with Geode and its cache transactions.
+
+-   **[How Geode Cache Transactions Work](../../developing/transactions/how_cache_transactions_work.html#topic_fls_1j1_wk)**
+
+    This section provides an explanation of how transactions work on Geode caches.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/transactions/cache_transactions_by_region_type.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/cache_transactions_by_region_type.html.md.erb b/geode-docs/developing/transactions/cache_transactions_by_region_type.html.md.erb
new file mode 100644
index 0000000..a576ded
--- /dev/null
+++ b/geode-docs/developing/transactions/cache_transactions_by_region_type.html.md.erb
@@ -0,0 +1,139 @@
+---
+title: Transactions by Region Type
+---
+<a id="topic_nlq_sk1_wk"></a>
+
+
+A transaction is managed on a per-cache basis, so multiple regions in the cache can participate in a single transaction. The data scope of a Geode cache transaction is the cache that hosts the transactional data. For partitioned regions, this may be a remote host to the one running the transaction application. Any transaction that includes one or more partitioned regions is run on the member storing the primary copy of the partitioned region data. Otherwise, the transaction host is the same one running the application.
+
+-   The client executing the transaction code is called the transaction initiator.
+
+-   The member contacted by the transaction initiator is called the transaction delegate.
+
+-   The member that hosts the data (and the transaction) is called the transaction host.
+
+The transaction host may be the same member or different member from the transaction initiator. In either case, when the transaction commits, data distribution is done from the transaction host in the same way.
+
+**Note:**
+If you have consistency checking enabled in your region, the transaction will generate all necessary version information for the region update when the transaction commits. See [Transactions and Consistent Regions](working_with_transactions.html#transactions_and_consistency) for more details.
+
+-   **[Transactions and Partitioned Regions](../../developing/transactions/cache_transactions_by_region_type.html#concept_ysk_xj1_wk)**
+
+-   **[Transactions and Replicated Regions](../../developing/transactions/cache_transactions_by_region_type.html#concept_nl5_pk1_wk)**
+
+-   **[Transactions and Persistent Regions](../../developing/transactions/cache_transactions_by_region_type.html#concept_omy_341_wk)**
+
+## Transactions and Partitioned Regions
+<a id="concept_ysk_xj1_wk"></a>
+
+In partitioned regions, transaction operations are done first on the primary data store then distributed to other members from there, regardless of which member initiates the cache operation. This is the same as is done for normal cache operations on partitioned regions.
+
+In this figure, M1 runs two transactions.
+
+-   The first transaction, T1, works on data whose primary buckets are stored in M1, so M1 is the transaction host.
+-   The second transaction, T2, works on data whose primary buckets are stored in M2, so M1 is the transaction delegate and M2 is the transaction host.
+
+*Transaction on a Partitioned Region:*
+
+<img src="../../images_svg/transactions_partitioned_1.svg" id="concept_ysk_xj1_wk__image_9BF680072A674BCF9F01958753F02952" class="image imageleft" />
+
+The transaction is managed on the transaction host. This includes the transactional view, all operations, and all local cache event handling. In this example, when T2 is committed, the data on M2 is updated and the transaction events are distributed throughout the system, exactly as if the transaction had originated on M2.
+
+The first region operation within the transaction determines the transaction host. All other operations must also work with that as their transaction host:
+
+-   All partitioned region data managed inside the transaction must use the transaction host as their primary data store. In the example, if transaction T2 tried to work on entry W in addition to entries Y and Z, the `TransactionDataNotColocatedException` would be thrown. For information on partitioning data so it is properly colocated for transactions, see [Understanding Custom Partitioning and Data Colocation](../partitioned_regions/custom_partitioning_and_data_colocation.html#custom_partitioning_and_data_colocation). In addition, the data must not be moved during the transaction. Design partitioned region rebalancing to avoid rebalancing while transactions are running. See [Rebalancing Partitioned Region Data](../partitioned_regions/rebalancing_pr_data.html#rebalancing_pr_data).
+-   All non-partitioned region data managed inside the transaction must be available on the transaction host and must be distributed. Operations on regions with local scope are not allowed in transactions with partitioned regions.
+
+The next figure shows a transaction that operates on two partitioned regions and one replicated region. As with the single region example, all local event handling is done on the transaction host.
+
+For a transaction to work, the first operation must be on one of the partitioned regions, to establish M2 as the transaction host. Running the first operation on a key in the replicated region would set M1 as the transaction host, and subsequent operations on the partitioned region data would fail with a `TransactionDataNotColocatedException` exception.
+
+*Transaction on a Partitioned Region with Other Regions:*
+
+<img src="../../images_svg/transactions_partitioned_2.svg" id="concept_ysk_xj1_wk__image_34496249618F46F8B8F7E2D4F342E1E6" class="image" />
+
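+The following is a minimal Java sketch of a cache transaction on a partitioned region; the region name and keys are placeholders. All entries touched inside the transaction must be colocated on the transaction host, otherwise operations throw `TransactionDataNotColocatedException`.
+
+``` pre
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.cache.CacheTransactionManager;
+import org.apache.geode.cache.CommitConflictException;
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.RegionShortcut;
+
+public class PartitionedTransactionSketch {
+  public static void main(String[] args) {
+    Cache cache = new CacheFactory().create();
+    Region<String, Integer> region =
+        cache.<String, Integer>createRegionFactory(RegionShortcut.PARTITION)
+             .create("exampleRegion");
+
+    CacheTransactionManager txMgr = cache.getCacheTransactionManager();
+    txMgr.begin();
+    try {
+      region.put("Y", 1);  // the first operation establishes the transaction host
+      region.put("Z", 2);  // must be colocated with "Y" on the same member
+      txMgr.commit();
+    } catch (CommitConflictException e) {
+      // another transaction committed a change to the same entries first; retry if appropriate
+    }
+  }
+}
+```
+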
+## Transactions and Replicated Regions
+<a id="concept_nl5_pk1_wk"></a>
+
+<a id="concept_nl5_pk1_wk__section_C55E80C7136D4A9A8327563E4B89356D"></a>
+For replicated regions, the transaction and its operations are applied to the local member and the resulting transaction state is distributed to other members according to the attributes of each region.
+
+**Note:**
+If possible, use `distributed-ack` scope for your regions where you will run transactions. The `REPLICATE` region shortcuts use `distributed-ack` scope.
+
+The region's scope affects how data is distributed during the commit phase. Transactions are supported for these region scopes:
+
+-   `distributed-ack`. Handles transactional conflicts both locally and between members. The `distributed-ack` scope is designed to protect data consistency. This scope provides the highest level of coordination among transactions in different members. When the commit call returns for a transaction run on all distributed-ack regions, you can be sure that the transaction's changes have already been sent and processed. In addition, any callbacks in the remote member have been invoked.
+-   `distributed-no-ack`. Handles transactional conflicts locally, with less coordination between members. This provides the fastest transactions with distributed regions, but it does not work for all situations. This scope is appropriate for:
+    -   Applications with only one writer
+    -   Applications with multiple writers that write to nonoverlapping data sets
+-   `local`. No distribution, handles transactional conflicts locally. Transactions on regions with local scope have no distribution, but they perform conflict checks in the local member. You can have a conflict between two threads when their transactions change the same entry.
+
+Transactions on non-replicated regions (regions that use the old API with DataPolicy EMPTY, NORMAL and PRELOADED) are always transaction initiators, and the transaction data host is always a member with a replicated region. This is similar to the way transactions using the PARTITION\_PROXY shortcut are forwarded to members with primary bucket.
+
+**Note:**
+When you have transactions operating on EMPTY, NORMAL or PARTITION regions, make sure that the Geode property `conserve-sockets` is set to false to avoid distributed deadlocks. An empty region is a region created with the API `RegionShortcut.REPLICATE_PROXY` or a region that uses the old `DataPolicy` API with the policy set to `EMPTY`.
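+
+One way to satisfy this requirement is to set the property programmatically when the cache is created; it can equally be set in `gemfire.properties` or on the server command line. The following is a minimal sketch, not a required pattern.
+
+``` pre
+import java.util.Properties;
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+
+public class ConserveSocketsSketch {
+  public static Cache createCache() {
+    Properties props = new Properties();
+    // Avoid distributed deadlocks when transactions operate on
+    // EMPTY, NORMAL, or PARTITION regions.
+    props.setProperty("conserve-sockets", "false");
+    return new CacheFactory(props).create();
+  }
+}
+```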
+
+## Conflicting Transactions in Distributed-Ack Regions
+
+In this series of figures, even after the commit operation is launched, the transaction continues to exist during the data distribution (step 3). The commit does not complete until the changes are made in the remote caches and M1 receives the acknowledgement that verifies that the tasks are complete.
+
+**Step 1:** Before commit, Transactions T1 and T2 each change the same entry in Region B within their local cache. T1 also makes a change to Region A.
+
+<img src="../../images_svg/transactions_replicate_1.svg" id="concept_nl5_pk1_wk__image_cj1_zzj_54" class="image" />
+
+**Step 2:** Conflict detected and eliminated. The distributed system recognizes the potential conflict from Transactions T1 and T2 using the same entry. T1 started to commit first, so it is allowed to continue. T2's commit fails with a conflict.
+
+<img src="../../images_svg/transactions_replicate_2.svg" id="concept_nl5_pk1_wk__image_sbh_21k_54" class="image" />
+
+**Step 3:** Changes are in transit. T1 commits and its changes are merged into the local cache. The commit does not complete until Geode distributes the changes to the remote regions and acknowledgment is received.
+
+<img src="../../images_svg/transactions_replicate_3.svg" id="concept_nl5_pk1_wk__image_qgl_k1k_54" class="image" />
+
+**Step 4:** After commit. Region A in M2 and Region B in M3 reflect the changes from transaction T1 and M1 has received acknowledgment. Results may not be identical in different members if their region attributes (such as expiration) are different.
+
+<img src="../../images_svg/transactions_replicate_4.svg" id="concept_nl5_pk1_wk__image_mkm_q1k_54" class="image" />
+
+## Conflicting Transactions in Distributed-No-Ack Regions
+
+These figures show how using the no-ack scope can produce unexpected results. These two transactions are operating on the same region B entry. Since they use no-ack scope, the conflicting changes cross paths and leave the data in an inconsistent state.
+
+**Step 1:** As in the previous example, Transactions T1 and T2 each change the same entry in Region B within their local cache. T1 also makes a change to Region A. Neither commit fails, and the data becomes inconsistent.
+
+<img src="../../images_svg/transactions_replicate_1.svg" id="concept_nl5_pk1_wk__image_jn2_cbk_54" class="image" />
+
+**Step 2:** Changes are in transit. Transactions T1 and T2 commit and merge their changes into the local cache. Geode then distributes changes to the remote regions.
+
+<img src="../../images_svg/transactions_replicate_no_ack_1.svg" id="concept_nl5_pk1_wk__image_fk1_hbk_54" class="image" />
+
+**Step 3:** Distribution is complete. The non-conflicting changes in Region A have been distributed to M2 as expected. For Region B however, T1 and T2 have traded changes, which is not the intended result.
+
+<img src="../../images_svg/transactions_replicate_no_ack_2.svg" id="concept_nl5_pk1_wk__image_ijc_4bk_54" class="image" />
+
+## <a id="concept_nl5_pk1_wk__section_760DE9F2226B46AD8A025F562CEA4D40" class="no-quick-link"></a>Conflicting Transactions with Local Scope
+
+When encountering conflicts with local scope, the first transaction to start the commit process completes, and the other transaction's commit fails with a conflict. In the diagram below, the resulting value for entry Y depends on which transaction commits first. A sketch of handling such a conflict in application code follows the figure.
+<img src="../../images_svg/transactions_replicate_local_1.svg" id="concept_nl5_pk1_wk__image_A37172C328404796AE1F318068C18F43" class="image" />
+
+## Transactions and Persistent Regions
+<a id="concept_omy_341_wk">
+
+By default, Geode does not allow transactions on persistent regions. You can enable the use of transactions on persistent regions by setting the property `gemfire.ALLOW_PERSISTENT_TRANSACTIONS` to true. This may also be accomplished at server startup using gfsh:
+
+``` pre
+gfsh start server --name=server1 --dir=server1_dir \
+--J=-Dgemfire.ALLOW_PERSISTENT_TRANSACTIONS=true 
+```
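+
+If you prefer to enable the property in application code rather than on the command line, a minimal sketch (assuming the property is set before the cache is created) looks like this:
+
+``` pre
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+
+public class PersistentTransactionsSetup {
+  public static Cache createCache() {
+    // Equivalent to -Dgemfire.ALLOW_PERSISTENT_TRANSACTIONS=true on the
+    // JVM command line; must be set before the cache is created.
+    System.setProperty("gemfire.ALLOW_PERSISTENT_TRANSACTIONS", "true");
+    return new CacheFactory().create();
+  }
+}
+```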
+
+Since Geode does not provide atomic disk persistence guarantees, the default behavior is to disallow disk-persistent regions from participating in transactions. However, when choosing to enable transactions on persistent regions, consider the following:
+
+-   Geode does ensure atomicity for in-memory updates.
+-   When a failed member is unable to complete the logic triggered by a transaction (including subsequent disk writes), that member is removed from the distributed system and, if restarted, must rebuild its state from surviving nodes that successfully completed the updates.
+-   The chance that multiple nodes fail to complete the disk writes resulting from a transaction commit, because they crash for unrelated reasons, is small. The real risk is that the file system buffers holding the persistent updates are not written to disk in the case of operating system or hardware failure. If only the Geode process crashes, atomicity still exists. The overall risk of losing disk updates can also be mitigated by enabling synchronized disk file mode for the disk stores, but this incurs a high performance penalty.
+
+To mitigate the risk of data not being fully written to disk on all copies of the participating persistent disk stores:
+
+-   Make sure you have enough redundant copies of the data. The guarantee that each distributed in-memory copy is atomically updated as part of the transaction commit sequence helps guard against data corruption.
+-   When executing transactions on persistent regions, we recommend using a `TransactionWriter` to log all transactions along with a time stamp, as in the sketch that follows this list. This allows you to recover the data manually from the log if all nodes fail simultaneously while a transaction is being committed.
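+
+The following is one possible sketch of such a `TransactionWriter`. The logging destination and message format are placeholders; a production implementation would write to durable storage.
+
+``` pre
+import java.time.Instant;
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.TransactionEvent;
+import org.apache.geode.cache.TransactionWriter;
+import org.apache.geode.cache.TransactionWriterException;
+
+public class LoggingTransactionWriter implements TransactionWriter {
+
+  @Override
+  public void beforeCommit(TransactionEvent event) throws TransactionWriterException {
+    // Record the transaction and a timestamp before the commit is applied.
+    System.out.println(Instant.now() + " committing " + event.getTransactionId()
+        + " with " + event.getOperations().size() + " operations");
+  }
+
+  @Override
+  public void close() {
+    // No resources to release in this sketch.
+  }
+
+  // Registration, typically done once at startup:
+  public static void register(Cache cache) {
+    cache.getCacheTransactionManager().setWriter(new LoggingTransactionWriter());
+  }
+}
+```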
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/transactions/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/chapter_overview.html.md.erb b/geode-docs/developing/transactions/chapter_overview.html.md.erb
new file mode 100644
index 0000000..4dd8c5b
--- /dev/null
+++ b/geode-docs/developing/transactions/chapter_overview.html.md.erb
@@ -0,0 +1,31 @@
+---
+title:  Transactions
+---
+
+Geode provides a transactions API, with `begin`, `commit`, and `rollback` methods. These methods are much the same as the familiar relational database transaction methods.
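+
+As a quick orientation, the following minimal sketch (with a hypothetical region name and values) shows the basic `begin`/`commit`/`rollback` flow:
+
+``` pre
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.cache.CacheTransactionManager;
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.RegionShortcut;
+
+public class BasicTransactionSketch {
+  public static void main(String[] args) {
+    Cache cache = new CacheFactory().create();
+    Region<String, Integer> region = cache
+        .<String, Integer>createRegionFactory(RegionShortcut.REPLICATE)
+        .create("exampleRegion");
+
+    CacheTransactionManager txManager = cache.getCacheTransactionManager();
+
+    txManager.begin();
+    region.put("key1", 1);
+    region.put("key2", 2);
+    txManager.commit();      // Both puts become visible together.
+
+    txManager.begin();
+    region.put("key1", 100);
+    txManager.rollback();    // The change to key1 is discarded.
+
+    cache.close();
+  }
+}
+```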
+
+-   **[About Transactions](../../developing/transactions/about_transactions.html)**
+
+    This section covers the features of Geode transactions.
+
+-   **[Types of Transactions](../../developing/transactions/about_transactions.html#concept_w3b_wh3_5k)**
+
+    Geode supports two kinds of transactions: **Geode cache transactions** and **JTA global transactions**.
+
+-   **[Geode Cache Transactions](../../developing/transactions/cache_transactions.html)**
+
+    Use Geode cache transactions to group the execution of cache operations and to gain the control offered by transactional commit and rollback. Geode cache transactions control operations within the Geode cache while the Geode distributed system handles data distribution in the usual way.
+
+-   **[JTA Global Transactions with Geode](../../developing/transactions/JTA_transactions.html)**
+
+    Use JTA global transactions to coordinate Geode cache transactions and JDBC transactions.
+
+-   **[Monitoring and Troubleshooting Transactions](../../developing/transactions/monitor_troubleshoot_transactions.html)**
+
+    This topic covers errors that may occur when running transactions in Geode.
+
+-   **[Transaction Coding Examples](../../developing/transactions/transaction_coding_examples.html)**
+
+    This section provides several code examples for writing and executing transactions.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/transactions/client_server_transactions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/client_server_transactions.html.md.erb b/geode-docs/developing/transactions/client_server_transactions.html.md.erb
new file mode 100644
index 0000000..e819c68
--- /dev/null
+++ b/geode-docs/developing/transactions/client_server_transactions.html.md.erb
@@ -0,0 +1,38 @@
+---
+title: Client Transactions
+---
+
+
+The syntax for writing client transactions is the same on the Java client as with any other Geode member, but the underlying behavior in a client-run transaction is different from general transaction behavior.
+
+For general information about running a transaction, refer to [How to Run a Geode Cache Transaction](run_a_cache_transaction.html#task_f15_mr3_5k).
+
+-   **[How Geode Runs Client Transactions](../../developing/transactions/client_server_transactions.html#how_gemfire_runs_clients)**
+
+-   **[Client Cache Access During a Transaction](../../developing/transactions/client_server_transactions.html#client_cache_access)**
+
+-   **[Client Transactions and Client Application Plug-Ins](../../developing/transactions/client_server_transactions.html#client_app_plugins)**
+
+-   **[Client Transaction Failures](../../developing/transactions/client_server_transactions.html#client_transaction_failures)**
+
+## <a id="how_gemfire_runs_clients" class="no-quick-link"></a>How Geode Runs Client Transactions
+
+When a client performs a transaction, the transaction is delegated to a server that acts as the transaction initiator in the server system. As with regular, non-client transactions, this server delegate may or may not be the transaction host.
+
+In this figure, the application code on the client makes changes to data entries Y and Z within a transaction. The delegate performing the transaction (M1) does not host the primary copy of the data being modified. The transaction takes place on the server containing this data (M2).
+
+<img src="../../images/transactions-client-1.png" id="how_gemfire_runs_clients__image_5DCA65F2B88F450299EFD19DAAA93D4F" class="image" />
+
+## <a id="client_cache_access" class="no-quick-link"></a>Client Cache Access During a Transaction
+
+To maintain cache consistency, Geode blocks access to the local client cache during a transaction. The local client cache may reflect information inconsistent with the transaction in progress. When the transaction completes, the local cache is accessible again.
+
+## <a id="client_app_plugins" class="no-quick-link"></a>Client Transactions and Client Application Plug-Ins
+
+Any plug-ins installed in the client are not invoked by the client-run transaction. The client that initiates the transaction receives the resulting changes from its server in the same way as any other client: through mechanisms such as subscriptions and continuous query results. The transaction itself is performed by the server delegate, where application plug-ins operate just as they would if the server were the sole initiator of the transaction.
+
+## <a id="client_transaction_failures" class="no-quick-link"></a>Client Transaction Failures
+
+In addition to the failure conditions common to all transactions, client transactions can fail if the transaction delegate fails. If the delegate performing the transaction fails, the transaction code throws a transaction exception. See [Transaction Exceptions](monitor_troubleshoot_transactions.html#monitor_troubleshoot_transactions__section_8942ABA6F23C4ED58877C894B13F4F21).
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/transactions/configuring_db_connections_using_JNDI.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/configuring_db_connections_using_JNDI.html.md.erb b/geode-docs/developing/transactions/configuring_db_connections_using_JNDI.html.md.erb
new file mode 100644
index 0000000..c497608
--- /dev/null
+++ b/geode-docs/developing/transactions/configuring_db_connections_using_JNDI.html.md.erb
@@ -0,0 +1,313 @@
+---
+title: Configuring Database Connections Using JNDI
+---
+
+<a id="topic_A5E3A67C808D48C08E1F0DC167C5C494"></a>
+
+
+When using JTA transactions, you can configure database JNDI data sources in `cache.xml`. The `DataSource` object points to either a JDBC connection or, more commonly, a JDBC connection pool. The connection pool is usually preferred, because a program can use and reuse a connection as long as necessary and then free it for another thread to use.
+
+The following `DataSource` connection types are used in JTA transactions.
+
+-   **XAPooledDataSource**. Pooled SQL connections.
+-   **ManagedDataSource**. JNDI binding type for the J2EE Connector Architecture (JCA) ManagedConnectionFactory.
+-   **PooledDataSource**. Pooled SQL connections.
+-   **SimpleDataSource**. Single SQL connection. No pooling of SQL connections is done. Connections are generated on the fly and cannot be reused.
+
+The `jndi-name` attribute of the `jndi-binding` element is the key binding parameter. If the value of `jndi-name` is a DataSource, it is bound as `java:/`*myDatabase*, where *myDatabase* is the name you assign to your data source. If the data source cannot be bound to JNDI at runtime, Geode logs a warning. For information on the `DataSource` interface, see: [http://docs.oracle.com/javase/8/docs/api/javax/sql/DataSource.html](http://docs.oracle.com/javase/8/docs/api/javax/sql/DataSource.html)
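+
+As an illustration, an application could look up a configured data source through the cache's JNDI context. This is a sketch only; the binding name `newDB2trans` is taken from the XAPooledDataSource example later in this section.
+
+``` pre
+import javax.naming.Context;
+import javax.naming.NamingException;
+import javax.sql.DataSource;
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+
+public class JndiLookupSketch {
+  public static DataSource lookupDataSource() throws NamingException {
+    Cache cache = CacheFactory.getAnyInstance();
+    // The binding name is the jndi-name attribute, prefixed with java:/.
+    Context jndiContext = cache.getJNDIContext();
+    return (DataSource) jndiContext.lookup("java:/newDB2trans");
+  }
+}
+```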
+
+Geode supports JDBC 2.0 and 3.0.
+
+**Note:**
+Include any data source JAR files in your CLASSPATH.
+
+## <a id="topic_F67EC20067124A618A8099AB4CBF634C" class="no-quick-link"></a>Example DataSource Configurations in cache.xml
+
+The following sections show example `cache.xml` files configured for each of the `DataSource` connection types.
+
+## XAPooledDataSource cache.xml Example (Derby)
+
+The example shows a `cache.xml` file configured for a pool of `XAPooledDataSource` connections connected to the data resource `newDB`.
+
+The log-in and blocking timeouts are set lower than the defaults. The connection information, including `user-name` and `password`, is set in the `cache.xml` file, instead of waiting until connection time. The password is encrypted; for details, see [Encrypting Passwords for Use in cache.xml](../../managing/security/encrypting_passwords.html#topic_730CC61BA84F421494956E2B98BDE2A1).
+
+When specifying the configuration properties for JCA-implemented database drivers that support XA transactions (in other words, **XAPooledDataSource**), you must use configuration properties to define the datasource connection instead of the `connection-url` attribute of the `<jndi-binding>` element. Configuration properties differ depending on your database vendor. Specify JNDI binding properties through the `config-property` tag, as shown in this example. You can add as many `config-property` tags as required.
+
+``` pre
+<?xml version="1.0" encoding="UTF-8"?>
+<cache
+    xmlns="http://geode.incubator.apache.org/schema/cache"
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+    xsi:schemaLocation="http://geode.incubator.apache.org/schema/cache http://geode.incubator.apache.org/schema/cache/cache-1.0.xsd"
+    version="1.0"
+    lock-lease="120" lock-timeout="60" search-timeout="300"> 
+   <region name="root">
+      <region-attributes scope="distributed-no-ack" data-policy="cached" initial-capacity="16"
+load-factor="0.75" concurrency-level="16" statistics-enabled="true">
+    . . .
+   </region>
+   <jndi-bindings>
+      <jndi-binding type="XAPooledDataSource" 
+    jndi-name="newDB2trans" 
+    init-pool-size="20" 
+    max-pool-size="100"
+    idle-timeout-seconds="20"
+    blocking-timeout-seconds="5" 
+    login-timeout-seconds="10"
+    xa-datasource-class="org.apache.derby.jdbc.EmbeddedXADataSource"
+    user-name="mitul" 
+    password="encrypted(83f0069202c571faf1ae6c42b4ad46030e4e31c17409e19a)">
+         <config-property>
+          <config-property-name>Description</config-property-name>
+          <config-property-type>java.lang.String</config-property-type>
+          <config-property-value>pooled_transact</config-property-value>
+       </config-property>
+          <config-property>
+             <config-property-name>DatabaseName</config-property-name>
+             <config-property-type>java.lang.String</config-property-type>
+             <config-property-value>newDB</config-property-value>
+          </config-property>
+          <config-property>
+             <config-property-name>CreateDatabase</config-property-name>
+             <config-property-type>java.lang.String</config-property-type>
+             <config-property-value>create</config-property-value>
+          </config-property>       
+       . . .
+      </jndi-binding>
+   </jndi-bindings>
+</cache>
+```
+
+## JNDI Binding Configuration Properties for Different XAPooledDataSource Connections
+
+The following are some example data source configurations for different databases. Consult your database vendor's documentation for additional details.
+
+**MySQL**
+
+``` pre
+...
+<jndi-bindings>
+   <jndi-binding type="XAPooledDataSource" 
+    ...
+    xa-datasource-class="com.mysql.jdbc.jdbc2.optional.MysqlXADataSource">
+    <config-property>
+    <config-property-name>URL</config-property-name>
+    <config-property-type>java.lang.String</config-property-type>
+    <config-property-value>"jdbc:mysql://mysql-servername:3306/databasename"</config-property-value>
+    </config-property>
+    ...
+    </jndi-binding>
+    ...
+</jndi-bindings>
+```
+
+**PostgreSQL**
+
+``` pre
+...
+<jndi-bindings>
+   <jndi-binding type="XAPooledDataSource" 
+    ...
+    xa-datasource-class="org.postgresql.xa.PGXADataSource">
+    <config-property>
+    <config-property-name>ServerName</config-property-name>
+    <config-property-type>java.lang.String</config-property-type>
+    <config-property-value>postgresql-hostname</config-property-value>
+    </config-property>
+    <config-property>
+    <config-property-name>DatabaseName</config-property-name>
+    <config-property-type>java.lang.String</config-property-type>
+    <config-property-value>postgresqldbname</config-property-value>
+    </config-property>
+    ...
+   </jndi-binding>
+    ...
+</jndi-bindings>
+```
+
+**Oracle**
+
+``` pre
+...
+<jndi-bindings>
+   <jndi-binding type="XAPooledDataSource" 
+    ...
+    xa-datasource-class="oracle.jdbc.xa.client.OracleXADataSource">
+    <config-property>
+    <config-property-name>URL</config-property-name>
+    <config-property-type>java.lang.String</config-property-type>
+    <config-property-value>jdbc:oracle:oci8:@tc</config-property-value>
+    </config-property>
+    ...
+    </jndi-binding>
+    ...
+</jndi-bindings>
+```
+
+**Microsoft SQL Server**
+
+``` pre
+...
+<jndi-bindings>
+   <jndi-binding type="XAPooledDataSource" 
+      ...
+    xa-datasource-class="com.microsoft.sqlserver.jdbc.SQLServerXADataSource">
+    <config-property>
+    <config-property-name>ServerName</config-property-name>
+    <config-property-type>java.lang.String</config-property-type>
+    <config-property-value>mysqlserver</config-property-value>
+    </config-property>
+    <config-property>
+    <config-property-name>DatabaseName</config-property-name>
+    <config-property-type>java.lang.String</config-property-type>
+    <config-property-value>databasename</config-property-value>
+    </config-property>
+    <config-property>
+    <config-property-name>SelectMethod</config-property-name>
+    <config-property-type>java.lang.String</config-property-type>
+    <config-property-value>cursor</config-property-value>
+    </config-property>
+    ...
+    </jndi-binding>
+    ...
+</jndi-bindings>
+```
+
+## ManagedDataSource Connection Example (Derby)
+
+`ManagedDataSource` connections for the JCA `ManagedConnectionFactory` are configured as shown in the example. This configuration is similar to `XAPooledDataSource` connections, except the type is `ManagedDataSource`, and you specify a `managed-conn-factory-class` instead of an `xa-datasource-class`.
+
+``` pre
+<?xml version="1.0"?>
+<cache xmlns="http://geode.incubator.apache.org/schema/cache"
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+    xsi:schemaLocation="http://geode.incubator.apache.org/schema/cache http://geode.incubator.apache.org/schema/cache/cache-1.0.xsd"
+    version="1.0"
+    lock-lease="120" 
+    lock-timeout="60"
+    search-timeout="300">
+   <region name="root">
+      <region-attributes scope="distributed-no-ack" data-policy="cached" initial-capacity="16"
+load-factor="0.75" concurrency-level="16" statistics-enabled="true">
+      . . .
+    </region>
+    <jndi-bindings>
+      <jndi-binding type="ManagedDataSource" 
+    jndi-name="DB3managed" 
+    init-pool-size="20" 
+    max-pool-size="100" 
+    idle-timeout-seconds="20" 
+    blocking-timeout-seconds="5" 
+    login-timeout-seconds="10"
+    managed-conn-factory-class="com.myvendor.connection.ConnFactory"
+    user-name="mitul"  
+    password="encrypted(83f0069202c571faf1ae6c42b4ad46030e4e31c17409e19a)">
+          <config-property>
+             <config-property-name>Description</config-property-name>
+             <config-property-type>java.lang.String</config-property-type>
+             <config-property-value>pooled_transact</config-property-value>
+          </config-property>  
+          <config-property>
+             <config-property-name>DatabaseName</config-property-name>
+             <config-property-type>java.lang.String</config-property-type>
+             <config-property-value>newDB</config-property-value>
+          </config-property>
+          <config-property>
+             <config-property-name>CreateDatabase</config-property-name>
+             <config-property-type>java.lang.String</config-property-type>
+             <config-property-value>create</config-property-value>
+          </config-property>           
+           . . .
+     </jndi-binding>
+   </jndi-bindings>
+ </cache>
+ 
+```
+
+## PooledDataSource Example (Derby)
+
+Use the `PooledDataSource` and `SimpleDataSource` connections for operations executed outside of any transaction. This example shows a `cache.xml` file configured for a pool of `PooledDataSource` connections to the data resource `newDB`. For this non-transactional connection pool, the log-in and blocking timeouts are set higher than for the transactional connection pools in the two previous examples. The connection information, including `user-name` and `password`, is set in the `cache.xml` file, instead of waiting until connection time. The password is encrypted; for details, see [Encrypting Passwords for Use in cache.xml](../../managing/security/encrypting_passwords.html#topic_730CC61BA84F421494956E2B98BDE2A1).
+
+``` pre
+<?xml version="1.0"?>
+<cache xmlns="http://geode.incubator.apache.org/schema/cache"
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+    xsi:schemaLocation="http://geode.incubator.apache.org/schema/cache http://geode.incubator.apache.org/schema/cache/cache-1.0.xsd"
+    version="1.0"
+    lock-lease="120"
+    lock-timeout="60"
+    search-timeout="300">
+    <region name="root">
+         <region-attributes scope="distributed-no-ack" data-policy="cached" 
+initial-capacity="16" load-factor="0.75" concurrency-level="16" statistics-enabled="true">
+            . . .
+    </region>
+    <jndi-bindings>
+      <jndi-binding
+    type="PooledDataSource"
+    jndi-name="newDB1" 
+    init-pool-size="2" 
+    max-pool-size="7" 
+    idle-timeout-seconds="20" 
+    blocking-timeout-seconds="20"
+    login-timeout-seconds="30" 
+    conn-pooled-datasource-class="org.apache.derby.jdbc.EmbeddedConnectionPoolDataSource"
+    user-name="mitul"
+    password="encrypted(83f0069202c571faf1ae6c42b4ad46030e4e31c17409e19a)">
+       <config-property>
+          <config-property-name>Description</config-property-name>
+          <config-property-type>java.lang.String</config-property-type>
+          <config-property-value>pooled_transact</config-property-value>
+       </config-property> 
+       <config-property>
+         <config-property-name>DatabaseName</config-property-name>
+         <config-property-type>java.lang.String</config-property-type>
+         <config-property-value>newDB</config-property-value>
+       </config-property>
+       <config-property>
+         <config-property-name>CreateDatabase</config-property-name>
+         <config-property-type>java.lang.String</config-property-type>
+         <config-property-value>create</config-property-value>
+       </config-property>              
+       . . .
+      </jndi-binding>
+   </jndi-bindings>
+</cache>
+      
+```
+
+## SimpleDataSource Connection Example (Derby)
+
+The example below shows a very basic configuration in the `cache.xml` file for a `SimpleDataSource` connection to the data resource `oldDB`. You need to configure only a few properties, such as the `jndi-name` for this connection pool (`oldDB1`) and the database name (`oldDB`). This password is in clear text.
+
+A simple data source connection does not generally require vendor-specific property settings. If you need them, add `config-property` tags as shown in the earlier examples.
+
+``` pre
+<?xml version="1.0"?>
+<cache xmlns="http://geode.incubator.apache.org/schema/cache"
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+    xsi:schemaLocation="http://geode.incubator.apache.org/schema/cache http://geode.incubator.apache.org/schema/cache/cache-1.0.xsd"
+    version="1.0"
+    lock-lease="120" 
+    lock-timeout="60" 
+    search-timeout="300">
+   <region name="root">
+      <region-attributes scope="distributed-no-ack" data-policy="cached" initial-capacity="16"
+load-factor="0.75" concurrency-level="16" statistics-enabled="true">
+        . . .
+      </region-attributes>   
+    </region>
+    <jndi-bindings>
+      <jndi-binding type="SimpleDataSource"
+    jndi-name="oldDB1" 
+    jdbc-driver-class="org.apache.derby.jdbc.EmbeddedDriver"
+    user-name="mitul" 
+    password="password" 
+    connection-url="jdbc:derby:newDB;create=true">
+        . . .
+       </jndi-binding>
+   </jndi-bindings>
+</cache>
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/transactions/data_location_cache_transactions.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/data_location_cache_transactions.html.md.erb b/geode-docs/developing/transactions/data_location_cache_transactions.html.md.erb
new file mode 100644
index 0000000..de2b149
--- /dev/null
+++ b/geode-docs/developing/transactions/data_location_cache_transactions.html.md.erb
@@ -0,0 +1,15 @@
+---
+title:  Data Location for Cache Transactions
+---
+
+The location where you can run your transaction depends on where you are storing your data.
+
+Transactions must operate on a data set that is hosted entirely by one member.
+
+-   For replicated or other distributed regions, the transaction uses only the data set in the member where the transaction is run.
+-   For partitioned regions, you must colocate all your transactional data in a single member. See [Colocate Data from Different Partitioned Regions](../partitioned_regions/colocating_partitioned_region_data.html).
+-   For transactions that run on a mix of partitioned and distributed regions, you must colocate the partitioned region data and make sure the distributed region data is available in any member hosting the partitioned region data.
+
+For transactions involving partitioned regions, any member with the regions defined can orchestrate the transactional operations, regardless of whether that member hosts data for the regions. If the transactional data resides on a remote member, the transaction is carried out by proxy in the member hosting the data. The member hosting the data is referred to as the transaction host.
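+
+As an illustration, the following sketch (with hypothetical region names) creates two colocated partitioned regions so that entries with the same keys land on the same member and can participate in a single transaction.
+
+``` pre
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.PartitionAttributesFactory;
+import org.apache.geode.cache.Region;
+import org.apache.geode.cache.RegionShortcut;
+
+public class ColocationSketch {
+  public static void createRegions(Cache cache) {
+    // "customers" is the anchor region for colocation.
+    Region<String, Object> customers =
+        cache.<String, Object>createRegionFactory(RegionShortcut.PARTITION)
+            .setPartitionAttributes(
+                new PartitionAttributesFactory<String, Object>()
+                    .setRedundantCopies(1)
+                    .create())
+            .create("customers");
+
+    // Buckets of "orders" are placed on the same members as the matching
+    // buckets of "customers", so a transaction can update both regions.
+    Region<String, Object> orders =
+        cache.<String, Object>createRegionFactory(RegionShortcut.PARTITION)
+            .setPartitionAttributes(
+                new PartitionAttributesFactory<String, Object>()
+                    .setColocatedWith("customers")
+                    .setRedundantCopies(1)
+                    .create())
+            .create("orders");
+  }
+}
+```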
+
+