Posted to commits@ignite.apache.org by ds...@apache.org on 2017/10/07 02:11:04 UTC

svn commit: r1811379 - /ignite/site/trunk/features/persistence.html

Author: dsetrakyan
Date: Sat Oct  7 02:11:04 2017
New Revision: 1811379

URL: http://svn.apache.org/viewvc?rev=1811379&view=rev
Log:
Minor.

Modified:
    ignite/site/trunk/features/persistence.html

Modified: ignite/site/trunk/features/persistence.html
URL: http://svn.apache.org/viewvc/ignite/site/trunk/features/persistence.html?rev=1811379&r1=1811378&r2=1811379&view=diff
==============================================================================
--- ignite/site/trunk/features/persistence.html (original)
+++ ignite/site/trunk/features/persistence.html Sat Oct  7 02:11:04 2017
@@ -62,7 +62,7 @@ under the License.
                         </p>
                         <p>
                             With the native persistence enabled, Ignite always stores a superset of data on disk,
-                            and as much as it can in RAM based on the capacity of the latter. For example, if there are
+                            and as much as possible in RAM. For example, if there are
                             100 entries and RAM has the capacity to store only 20, then all 100 will be stored on disk
                             and only 20 will be cached in RAM for better performance.
                         </p>
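The superset-on-disk / hot-subset-in-RAM behavior described in the paragraph above (100 entries on disk, only 20 cached in RAM) can be sketched in plain Java. This is an illustrative model only, not Ignite's actual page-memory implementation; the class and method names are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative sketch (not Ignite's implementation): every entry is written to a
 *  disk-backed superset, while RAM holds only a bounded hot subset (LRU). */
public class SupersetStore {
    private final Map<Integer, String> disk = new LinkedHashMap<>(); // stands in for partition files
    private final LinkedHashMap<Integer, String> ram;                // bounded LRU cache

    public SupersetStore(final int ramCapacity) {
        // accessOrder = true makes iteration order follow recency of access (LRU)
        this.ram = new LinkedHashMap<Integer, String>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<Integer, String> e) {
                return size() > ramCapacity; // evict the coldest entry when over capacity
            }
        };
    }

    public void put(int key, String val) {
        disk.put(key, val); // superset: all data always goes to disk
        ram.put(key, val);  // hot subset cached in RAM
    }

    public String get(int key) {
        String v = ram.get(key);
        if (v == null) {            // RAM miss: lazily load from disk
            v = disk.get(key);
            if (v != null) ram.put(key, v);
        }
        return v;
    }

    public int diskSize() { return disk.size(); }
    public int ramSize()  { return ram.size(); }
}
```

With a RAM capacity of 20 and 100 entries inserted, all 100 end up on disk while only the 20 most recently used remain cached, matching the example in the text.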
@@ -72,8 +72,7 @@ under the License.
                     </div>
                 </div>
                 <p>
-                    The native persistence has the following characteristics making it different from 3rd party
-                    databases that can be used as an alternative persistence layer in Ignite:
+                    The native persistence has the following important characteristics:
                 </p>
                 <ul class="page-list" style="margin-bottom: 20px;">
                     <li>
@@ -81,16 +80,16 @@ under the License.
                         Ignite can be used as a memory-centric distributed SQL database.
                     </li>
                     <li>
-                        No need to have all the data and indexes in memory. Ignite persistence allows storing a superset
+                        No need to have all the data in memory. Ignite persistence allows storing a superset
                         of data on disk and only most frequently used subsets in memory.
                     </li>
                     <li>
-                        Instantaneous cluster restarts. If the whole cluster goes down there is no need to warm up the
-                        memory by preloading data from the Ignite Persistence. The cluster becomes fully operational
-                        once all the cluster nodes are interconnected with each other.
+                        Instantaneous cluster restarts. Ignite becomes fully operational from disk immediately
+                        upon cluster startup or restart. There is no need to preload or warm up the in-memory caches.
+                        The data will be loaded into memory lazily, as it is accessed.
                     </li>
                     <li>
-                        Data and indexes are stored in a similar format both in memory and on disk which helps avoid
+                        Data and indexes are stored in a similar format both in memory and on disk, which helps avoid
                         expensive transformations when moving data between memory and disk.
                     </li>
                     <li>
@@ -101,49 +100,37 @@ under the License.
                 <div class="page-heading">Write-Ahead Log</div>
                 <p>
                     Every time the data is updated in memory, the update will be appended to the tail of
-                    a write-ahead log (WAL). The purpose of the WAL is to propagate updates to disk in
-                    the fastest way possible and provide a recovery mechanism for scenarios where a single node or the
-                    whole cluster goes down.
+                    the write-ahead log (WAL). The purpose of the WAL is to propagate updates to disk in
+                    the fastest way possible and provide a consistent recovery mechanism, even in the event of a full cluster failure.
                 </p>
                 <p>
                     The whole WAL is split into several files, called segments, that are filled out sequentially.
-                    Once a segment is full, its content will be copied to the WAL archive and kept there for the
-                    time defined by several configuration parameters. While the segment is being copied, another segment
-                    will be treated as an active WAL file and will accept all the updates coming from the application side.
+                    Once a segment is full, its content will be copied to the <i>WAL archive</i> where it will be preserved
+                    for a configurable amount of time. While the segment is being copied, another segment
+                    will be treated as an active WAL file.
                 </p>
                 <p>
-                    It is worth mentioning that a cluster can always be recovered to the latest successfully committed
-                    transaction in case of a crash or restart relying on the content of the WAL.
+                    The cluster can always be recovered up to the latest successfully committed transaction.
                 </p>
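The segment rotation described above can be sketched in plain Java. This is a simplified illustration under stated assumptions (in-memory lists stand in for segment files; the class name is hypothetical), not Ignite's on-disk WAL format.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of WAL segment rotation (illustrative only): updates are appended to
 *  the active segment; a full segment moves to the archive and a fresh segment
 *  becomes the active WAL file. */
public class WalSketch {
    private final int segmentCapacity;                     // records per segment
    private final List<List<String>> archive = new ArrayList<>();
    private List<String> activeSegment = new ArrayList<>();

    public WalSketch(int segmentCapacity) { this.segmentCapacity = segmentCapacity; }

    /** Append an update record to the tail of the WAL. */
    public void append(String record) {
        if (activeSegment.size() == segmentCapacity) {
            archive.add(activeSegment);            // full segment goes to the WAL archive
            activeSegment = new ArrayList<>();     // new active segment accepts further updates
        }
        activeSegment.add(record);
    }

    public int archivedSegments() { return archive.size(); }
    public int activeSize()       { return activeSegment.size(); }
}
```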
 
                 <div class="page-heading">Checkpointing</div>
                 <p>
-                    Due to the nature of the WAL, it constantly grows, and it would take a significant amount of time to
-                    recover the cluster by going over the WAL from its head to tail. To mitigate this, Ignite introduces
-                    a checkpointing process.
-                </p>
-                <p>
-                    The checkpointing is the process of copying dirty pages from the memory to the partition files on disk.
-                    A dirty page is a page that was updated in the memory but was not written to a respective partition
-                    file (an update was just appended to the WAL).
-                </p>
-                <p>
-                    This process helps to utilize disk space frugally by keeping pages in the most up-to-date state on
-                    disk and allowing to remove outdated WAL segments from the WAL archive.
+                    As the WAL grows, it is periodically <i>checkpointed</i> to the main storage.
+                    Checkpointing is the process of copying <i>dirty pages</i> from memory to the partition files on disk.
+                    A dirty page is a page that was updated in memory and appended to the WAL, but not yet written to its
+                    respective partition file on disk.
                 </p>
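The checkpointing cycle above (update dirties a page and is logged; a checkpoint copies dirty pages to the partition file, after which the logged records are reclaimable) can be sketched as follows. This is an illustrative model with hypothetical names, not Ignite's actual checkpointing algorithm.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Sketch of checkpointing (illustrative only): an update dirties a page in
 *  memory and appends a WAL record; a checkpoint copies all dirty pages to the
 *  partition file and clears the dirty set, making old WAL records reclaimable. */
public class CheckpointSketch {
    private final Map<Integer, String> memoryPages = new HashMap<>();
    private final Map<Integer, String> partitionFile = new HashMap<>();
    private final Set<Integer> dirtyPages = new HashSet<>();
    private int walRecords;

    public void update(int pageId, String contents) {
        memoryPages.put(pageId, contents); // page updated in memory...
        dirtyPages.add(pageId);            // ...marked dirty...
        walRecords++;                      // ...and the update appended to the WAL
    }

    /** Copy dirty pages to the partition file; checkpointed WAL records can be truncated. */
    public void checkpoint() {
        for (int pageId : dirtyPages)
            partitionFile.put(pageId, memoryPages.get(pageId));
        dirtyPages.clear();
        walRecords = 0;
    }

    public int dirtyCount() { return dirtyPages.size(); }
    public int walSize()    { return walRecords; }
    public String onDisk(int pageId) { return partitionFile.get(pageId); }
}
```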
                 <div class="page-heading">Durability</div>
                 <p>
-                    The native persistence fully complies with the ACID's durability property guaranteeing that:
+                    Ignite native persistence provides ACID durability guarantees for the data:
 
                     <ul class="page-list" style="margin-bottom: 20px;">
-                        <li>Committed transactions will survive permanently.</li>
+                        <li>Committed transactions will survive any failure.</li>
                         <li>
-                            The cluster can always be recovered to the latest successfully committed transaction
-                            in the event of a crash or restart.
+                            The cluster can always be recovered to the latest successfully committed transaction.
                         </li>
                         <li>
-                            The cluster becomes fully operational once all the cluster nodes are interconnected with each other.
-                            There is no need to warm up the memory by preloading data from the disk (<i>instantaneous restarts</i>).
+                            Cluster restarts are very fast, as there is no need to warm up the memory by preloading data from disk (<i>instantaneous restarts</i>).
                         </li>
                     </ul>
                 </p>
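The recovery guarantee above (the cluster always recovers to the latest successfully committed transaction) can be sketched as a WAL replay that applies a transaction's updates only when its COMMIT record reached the log. This is an illustrative simplification with a hypothetical record format, not Ignite's actual recovery protocol.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch of WAL replay (illustrative only): updates are buffered per
 *  transaction and applied only if the COMMIT record made it to the log,
 *  so the store recovers exactly to the latest committed transaction. */
public class RecoverySketch {
    /** WAL records (hypothetical format): "txId PUT key value" or "txId COMMIT". */
    public static Map<String, String> replay(List<String> wal) {
        Map<String, Map<String, String>> pending = new HashMap<>();
        Map<String, String> store = new HashMap<>();
        for (String rec : wal) {
            String[] p = rec.split(" ");
            if ("COMMIT".equals(p[1]))
                store.putAll(pending.getOrDefault(p[0], new HashMap<>())); // tx is durable
            else
                pending.computeIfAbsent(p[0], k -> new HashMap<>()).put(p[2], p[3]);
        }
        return store; // updates of uncommitted transactions are discarded
    }
}
```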