Posted to commits@jackrabbit.apache.org by mr...@apache.org on 2018/07/09 08:53:19 UTC

svn commit: r1835390 [6/23] - in /jackrabbit/site/live/oak/docs: ./ architecture/ coldstandby/ features/ nodestore/ nodestore/document/ nodestore/segment/ oak-mongo-js/ oak_api/ plugins/ query/ security/ security/accesscontrol/ security/authentication/...

Modified: jackrabbit/site/live/oak/docs/nodestore/segment/overview.html
URL: http://svn.apache.org/viewvc/jackrabbit/site/live/oak/docs/nodestore/segment/overview.html?rev=1835390&r1=1835389&r2=1835390&view=diff
==============================================================================
--- jackrabbit/site/live/oak/docs/nodestore/segment/overview.html (original)
+++ jackrabbit/site/live/oak/docs/nodestore/segment/overview.html Mon Jul  9 08:53:17 2018
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia Site Renderer 1.7.4 at 2018-05-24 
+ | Generated by Apache Maven Doxia Site Renderer 1.8.1 at 2018-07-09 
  | Rendered using Apache Maven Fluido Skin 1.6
 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20180524" />
+    <meta name="Date-Revision-yyyymmdd" content="20180709" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Jackrabbit Oak &#x2013; Oak Segment Tar</title>
     <link rel="stylesheet" href="../../css/apache-maven-fluido-1.6.min.css" />
@@ -136,7 +136,7 @@
 
       <div id="breadcrumbs">
         <ul class="breadcrumb">
-        <li id="publishDate">Last Published: 2018-05-24<span class="divider">|</span>
+        <li id="publishDate">Last Published: 2018-07-09<span class="divider">|</span>
 </li>
           <li id="projectVersion">Version: 1.10-SNAPSHOT</li>
         </ul>
@@ -241,68 +241,58 @@
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
---><h1>Oak Segment Tar</h1>
-
+-->
+<h1>Oak Segment Tar</h1>
 <ul>
-  
+
 <li><a href="#overview">Overview</a></li>
-  
 <li><a href="#garbage-collection">Garbage Collection</a>
-  
 <ul>
-    
+
 <li><a href="#generational-garbage-collection">Generational Garbage Collection</a></li>
-    
 <li><a href="#estimation-compaction-cleanup">Estimation, Compaction and Cleanup</a></li>
-    
 <li><a href="#offline-garbage-collection">Offline Garbage Collection</a></li>
-    
 <li><a href="#online-garbage-collection">Online Garbage Collection</a></li>
-  </ul></li>
-  
+</ul>
+</li>
 <li><a href="#monitoring">Monitoring</a></li>
-  
 <li><a href="#tools">Tools</a>
-  
 <ul>
-    
+
 <li><a href="#backup">Backup</a></li>
-    
 <li><a href="#restore">Restore</a></li>
-    
 <li><a href="#check">Check</a></li>
-    
 <li><a href="#compact">Compact</a></li>
-    
 <li><a href="#debug">Debug</a></li>
-    
 <li><a href="#iotrace">IOTrace</a></li>
-    
 <li><a href="#diff">Diff</a></li>
-    
 <li><a href="#history">History</a></li>
-  </ul></li>
+</ul>
+</li>
 </ul>
 <div class="section">
 <h2><a name="Overview"></a><a name="overview"></a> Overview</h2>
 <p>Oak Segment Tar is an Oak storage backend that stores content as various types of <i>records</i> within larger <i>segments</i>. Segments themselves are collected within <i>tar files</i> along with further auxiliary information. A <i>journal</i> is used to track the latest state of the repository. It is based on the following key principles:</p>
-
 <ul>
-  
+
 <li>
-<p><i>Immutability</i>. Segments are immutable, which makes is easy to cache frequently accessed segments. This also makes it less likely for programming or system errors to cause repository inconsistencies, and simplifies features like backups or master-slave clustering.</p></li>
-  
+
+<p><i>Immutability</i>. Segments are immutable, which makes it easy to cache frequently accessed segments. This also makes it less likely for programming or system errors to cause repository inconsistencies, and simplifies features like backups or master-slave clustering.</p>
+</li>
 <li>
-<p><i>Compactness</i>. The formatting of records is optimized for size to reduce IO costs and to fit as much content in caches as possible.</p></li>
-  
+
+<p><i>Compactness</i>. The formatting of records is optimized for size to reduce IO costs and to fit as much content in caches as possible.</p>
+</li>
 <li>
-<p><i>Locality</i>. Segments are written so that related records, like a node and its immediate children, usually end up stored in the same segment. This makes tree traversals very fast and avoids most cache misses for typical clients that access more than one related node per session.</p></li>
+
+<p><i>Locality</i>. Segments are written so that related records, like a node and its immediate children, usually end up stored in the same segment. This makes tree traversals very fast and avoids most cache misses for typical clients that access more than one related node per session.</p>
+</li>
 </ul>
-<p>The content tree and all its revisions are stored in a collection of immutable <i>records</i> within <i>segments</i>. Each segment is identified by a UUID and typically contains a continuous subset of the content tree, for example a node with its properties and closest child nodes. Some segments might also be used to store commonly occurring property values or other shared data. Segments can be to up to 256KiB in size. See <a href="records.html">Segments and records</a> for a detailed description of the segments and records. </p>
+<p>The content tree and all its revisions are stored in a collection of immutable <i>records</i> within <i>segments</i>. Each segment is identified by a UUID and typically contains a continuous subset of the content tree, for example a node with its properties and closest child nodes. Some segments might also be used to store commonly occurring property values or other shared data. Segments can be up to 256 KiB in size. See <a href="records.html">Segments and records</a> for a detailed description of the segments and records.</p>
 <p>Segments are collectively stored in <i>tar files</i> and check-summed to ensure their integrity. Tar files also contain an index of the tar segments, the graph of segment references of all segments it contains and an index of all external binaries referenced from the segments in the tar file. See <a href="tar.html">Structure of TAR files</a> for details.</p>
-<p>The <i>journal</i> is a special, atomically updated file that records the state of the repository as a sequence of references to successive root node records. For crash resiliency the journal is always only updated with a new reference once the referenced record has been flushed to disk. The most recent root node reference stored in the journal is used as the starting point for garbage collection. All content currently visible to clients must be accessible through that reference. </p>
-<p>Oak Segment Tar is an evolution of a <a href="../segmentmk.html">previous implementation</a>. Upgrading requires <a href="../../migration.html">migrating</a> to the <a href="changes.html">new storage format</a>. </p>
-<p>See <a href="classes.html">Design of Oak Segment Tar</a> for a high level design overview of Oak Segment Tar. </p></div>
+<p>The <i>journal</i> is a special, atomically updated file that records the state of the repository as a sequence of references to successive root node records. For crash resiliency, the journal is only ever updated with a new reference once the referenced record has been flushed to disk. The most recent root node reference stored in the journal is used as the starting point for garbage collection. All content currently visible to clients must be accessible through that reference.</p>
+<p>Oak Segment Tar is an evolution of a <a href="../segmentmk.html">previous implementation</a>. Upgrading requires <a href="../../migration.html">migrating</a> to the <a href="changes.html">new storage format</a>.</p>
+<p>See <a href="classes.html">Design of Oak Segment Tar</a> for a high level design overview of Oak Segment Tar.</p></div>
 <div class="section">
 <h2><a name="Garbage_Collection"></a><a name="garbage-collection"></a> Garbage Collection</h2>
 <p>Garbage Collection is the set of processes and techniques employed by Oak Segment Tar to eliminate unused persisted data, thus limiting the memory and disk footprint of the system. Most of the operations on repository data generate a certain amount of garbage. This garbage is a byproduct of the repository operations and consists of leftover data that is not usable by the user. If left unchecked, this garbage would just pile up, consume disk space and pollute in-memory data structures. To avoid this, Oak Segment Tar defines garbage collection procedures to eliminate unnecessary data.</p>
@@ -316,12 +306,12 @@
 <p>While the previous section describes the idea behind garbage collection, this section introduces the building blocks on top of which garbage collection is implemented. Oak Segment Tar splits the garbage collection process into three phases: estimation, compaction and cleanup.</p>
 <p>Estimation is the first phase of garbage collection. In this phase, the system estimates how much garbage is actually present in the system. If there is not enough garbage to justify the creation of a new generation, the rest of the garbage collection process is skipped. If the output of this phase reports that the amount of garbage is beyond a certain threshold, the system creates a new generation and goes on with the next phase.</p>
 <p>Compaction executes after a new generation is created. The purpose of compaction is to create a compact representation of the current generation. For this the current generation is copied to the new generation leaving out anything from the current generation that is not reachable anymore. Starting with Oak 1.8 compaction can operate in either of two modes: full compaction and tail compaction. Full compaction copies all revisions pertaining to the current generation to the new generation. In contrast tail compaction only copies the most recent ones. The two compaction modes differ in usage of system resources and how much time they consume. While full compaction is more thorough overall, it usually requires much more time, disk space and disk IO than tail compaction.</p>
-<p>Cleanup is the last phase of garbage collection and kicks in as soon as compaction is done. Once relevant data is safe in the new generation, old and unused data from a previous generation can be removed. This phase locates outdated pieces of data from one of the oldest generations and removes it from the system. This is the only phase where data is actually deleted and disk space is finally freed. The amount of freed disk space depends on the preceding compaction operation. In general cleanup can free less space after a tail compaction than after a full compaction. However, this only becomes effective a further garbage collection cycle due to the system always retaining a total of two generations. </p></div>
+<p>Cleanup is the last phase of garbage collection and kicks in as soon as compaction is done. Once relevant data is safe in the new generation, old and unused data from a previous generation can be removed. This phase locates outdated pieces of data from one of the oldest generations and removes it from the system. This is the only phase where data is actually deleted and disk space is finally freed. The amount of freed disk space depends on the preceding compaction operation. In general cleanup can free less space after a tail compaction than after a full compaction. However, this only becomes effective after a further garbage collection cycle, due to the system always retaining a total of two generations.</p></div>
 <div class="section">
 <h3><a name="Offline_Garbage_Collection"></a><a name="offline-garbage-collection"></a> Offline Garbage Collection</h3>
 <p>Offline garbage collection is the procedure followed by Oak Segment Tar to execute garbage collection by taking exclusive control of the repository.</p>
 <p>Offline garbage collection runs as a standalone Java tool manually or semi-automatically started from the command line. The way offline garbage collection works is simpler than the online version. It is assumed that a human operator is in charge of deciding when offline compaction is needed. In such a case, the human operator has to take the system using the repository offline (hence the name) and start the compaction utility from the command line.</p>
-<p>Since offline garbage collection requires human intervention to run, the estimation phase is not executed at all. The human operator who decides to run offline garbage collection does so because he or she decided that the garbage in the repository is exceeding some arbitrary threshold. Since the decision comes from a human operator, offline garbage collection is not in charge of implementing heuristics to decide if and when garbage collection should be run. The offline garbage collection process consist of the compaction and cleanup phases only. It always employs full compaction with the subsequent cleanup retaining a single generation. </p>
+<p>Since offline garbage collection requires human intervention to run, the estimation phase is not executed at all. The human operator who decides to run offline garbage collection does so because he or she has decided that the garbage in the repository exceeds some arbitrary threshold. Since the decision comes from a human operator, offline garbage collection is not in charge of implementing heuristics to decide if and when garbage collection should be run. The offline garbage collection process consists of the compaction and cleanup phases only. It always employs full compaction with the subsequent cleanup retaining a single generation.</p>
 <p>The main drawback of offline garbage collection is that the process has to take exclusive control of the repository. Nevertheless, this is also a strength. Having exclusive access to the repository, offline garbage collection is usually faster and more effective than its online counterpart. Because of this, offline garbage collection is (and will always be) an important tool in repository management.</p>
 <div class="section">
 <h3><a name="Online_Garbage_Collection"></a><a name="online-garbage-collection"></a> Online Garbage Collection</h3>
@@ -332,443 +322,524 @@
 <p>Please note that the following messages are to be used as an example only. To make the examples clear, some information like the date and time, the name of the thread, and the name of the logger is removed. This information depends on the configuration of your logging framework. Moreover, some of those messages contain data that can and will change from one execution to the next.</p>
 <p>Every log message generated during the garbage collection process includes a sequence number indicating how many times garbage collection ran since the system started. The sequence number is always printed at the beginning of the message like in the following example.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: ...
+<div>
+<div>
+<pre class="source">TarMK GC #2: ...
 </pre></div></div>
+
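The sequence number makes it possible to correlate all log messages belonging to one garbage collection run. As a rough sketch (not part of Oak; the regular expression is an assumption based solely on the message format shown above), such lines could be picked out of a log like this:

```python
import re

# Hypothetical helper: extract the GC run sequence number and the message
# body from a "TarMK GC #N: ..." log line.
GC_LINE = re.compile(r"TarMK GC #(\d+): (.*)")

def parse_gc_line(line):
    """Return (sequence_number, message), or None for non-GC lines."""
    m = GC_LINE.match(line)
    if m is None:
        return None
    return int(m.group(1)), m.group(2)

print(parse_gc_line("TarMK GC #2: cleanup started."))
```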
 <div class="section">
 <h5><a name="When_did_garbage_collection_start"></a><a name="when-did-garbage-collection-start"></a> When did garbage collection start?</h5>
 <p>As soon as garbage collection is triggered, the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: started
-</pre></div></div></div>
+<div>
+<div>
+<pre class="source">TarMK GC #2: started
+</pre></div></div>
+</div>
 <div class="section">
 <h5><a name="When_did_estimation_start"></a><a name="when-did-estimation-start"></a> When did estimation start?</h5>
 <p>As soon as the estimation phase of garbage collection starts, the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: estimation started
-</pre></div></div></div>
+<div>
+<div>
+<pre class="source">TarMK GC #2: estimation started
+</pre></div></div>
+</div>
 <div class="section">
 <h5><a name="Is_estimation_disabled"></a><a name="is-estimation-disabled"></a> Is estimation disabled?</h5>
 <p>The estimation phase can be disabled by configuration. If this is the case, the system prints the following message.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: estimation skipped because it was explicitly disabled
+<div>
+<div>
+<pre class="source">TarMK GC #2: estimation skipped because it was explicitly disabled
 </pre></div></div>
+
 <p>Estimation is also skipped when compaction is disabled on the system. In this case, the following message is printed instead.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: estimation skipped because compaction is paused
-</pre></div></div></div>
+<div>
+<div>
+<pre class="source">TarMK GC #2: estimation skipped because compaction is paused
+</pre></div></div>
+</div>
 <div class="section">
 <h5><a name="Was_estimation_cancelled"></a><a name="was-estimation-cancelled"></a> Was estimation cancelled?</h5>
 <p>The execution of the estimation phase can be cancelled manually by the user or automatically if certain events occur. If estimation is cancelled, the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: estimation interrupted: ${REASON}. Skipping compaction.
+<div>
+<div>
+<pre class="source">TarMK GC #2: estimation interrupted: ${REASON}. Skipping compaction.
 </pre></div></div>
+
 <p>The placeholder <tt>${REASON}</tt> is not actually printed in the message, but will be substituted by a more specific description of the reason that brought estimation to a premature halt. As stated before, some external events can terminate estimation, e.g. not enough memory or disk space on the host system. Moreover, estimation can also be cancelled by shutting down the system or by explicitly cancelling it via administrative interfaces. In each of these cases, the reason why estimation is cancelled will be printed in the log.</p></div>
 <div class="section">
 <h5><a name="When_did_estimation_complete"></a><a name="when-did-estimation-complete"></a> When did estimation complete?</h5>
 <p>When estimation terminates, either because of external cancellation or after a successful execution, the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: estimation completed in 961.8 &#x3bc;s (0 ms). ${RESULT}
+<div>
+<div>
+<pre class="source">TarMK GC #2: estimation completed in 961.8 &#x3bc;s (0 ms). ${RESULT}
 </pre></div></div>
+
 <p>Moreover, the duration of the estimation phase is printed both in a readable format and in milliseconds. The placeholder <tt>${RESULT}</tt> stands for a message that depends on the estimation strategy.</p></div>
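Since completion messages print the duration both in a human-readable form and in milliseconds, the parenthesised millisecond value is the easier one to extract for monitoring. A minimal sketch (an assumption based on the message formats shown in this document, not an Oak utility):

```python
import re

# Hypothetical helper: pull the "(N ms)" duration out of a completion message.
DURATION_MS = re.compile(r"\((\d+) ms\)")

def duration_ms(message):
    """Return the duration in milliseconds, or None if none is present."""
    m = DURATION_MS.search(message)
    return int(m.group(1)) if m else None

print(duration_ms("TarMK GC #2: compaction succeeded in 6.580 min (394828 ms), after 2 cycles"))
```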
 <div class="section">
 <h5><a name="When_did_compaction_start"></a><a name="when-did-compaction-start"></a> When did compaction start?</h5>
 <p>When the compaction phase of the garbage collection process starts, the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: compaction started, gc options=SegmentGCOptions{paused=false, estimationDisabled=false, gcSizeDeltaEstimation=1, retryCount=5, forceTimeout=3600, retainedGenerations=2, gcSizeDeltaEstimation=1}
+<div>
+<div>
+<pre class="source">TarMK GC #2: compaction started, gc options=SegmentGCOptions{paused=false, estimationDisabled=false, gcSizeDeltaEstimation=1, retryCount=5, forceTimeout=3600, retainedGenerations=2, gcSizeDeltaEstimation=1}
 </pre></div></div>
+
 <p>The message includes a dump of the garbage collection options that are in effect during the compaction phase.</p></div>
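The options dump follows a simple key-value layout, so it can be turned into a lookup table for monitoring purposes. A sketch, assuming the (abridged) message format from the example above; the parsing code is not part of Oak:

```python
import re

# Example message, abridged from the compaction-started log line above.
line = ("TarMK GC #2: compaction started, gc options=SegmentGCOptions{"
        "paused=false, estimationDisabled=false, gcSizeDeltaEstimation=1, "
        "retryCount=5, forceTimeout=3600, retainedGenerations=2}")

# Grab the contents of the braces and split them into key=value pairs.
m = re.search(r"SegmentGCOptions\{(.*)\}", line)
options = dict(pair.split("=") for pair in m.group(1).split(", "))
print(options["retryCount"], options["retainedGenerations"])
```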
 <div class="section">
 <h5><a name="What_is_the_compaction_type"></a><a name="what-is-the-compaction-type"></a> What is the compaction type?</h5>
 <p>The type of the compaction phase is determined by the configuration. A log message indicates which compaction type is used.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: running ${MODE} compaction
+<div>
+<div>
+<pre class="source">TarMK GC #2: running ${MODE} compaction
 </pre></div></div>
+
 <p>Here <tt>${MODE}</tt> is either <tt>full</tt> or <tt>tail</tt>. Under some circumstances (e.g. on the very first garbage collection run), when a tail compaction is scheduled to run, the system needs to fall back to a full compaction. This is indicated in the log via the following message:</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: no base state available, running full compaction instead
-</pre></div></div></div>
+<div>
+<div>
+<pre class="source">TarMK GC #2: no base state available, running full compaction instead
+</pre></div></div>
+</div>
 <div class="section">
 <h5><a name="Is_compaction_disabled"></a><a name="is-compaction-disabled"></a> Is compaction disabled?</h5>
 <p>The compaction phase can be skipped by pausing the garbage collection process. If compaction is paused, the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: compaction paused
+<div>
+<div>
+<pre class="source">TarMK GC #2: compaction paused
 </pre></div></div>
+
 <p>As long as compaction is paused, neither the estimation phase nor the compaction phase will be executed.</p></div>
 <div class="section">
 <h5><a name="Was_compaction_cancelled"></a><a name="was-compaction-cancelled"></a> Was compaction cancelled?</h5>
 <p>The compaction phase can be cancelled manually by the user or automatically because of external events. If compaction is cancelled, the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: compaction cancelled: ${REASON}.
+<div>
+<div>
+<pre class="source">TarMK GC #2: compaction cancelled: ${REASON}.
 </pre></div></div>
+
 <p>The placeholder <tt>${REASON}</tt> is not actually printed in the message, but will be substituted by a more specific description of the reason that brought compaction to a premature halt. As stated before, some external events can terminate compaction, e.g. not enough memory or disk space on the host system. Moreover, compaction can also be cancelled by shutting down the system or by explicitly cancelling it via administrative interfaces. In each of these cases, the reason why compaction is cancelled will be printed in the log.</p></div>
 <div class="section">
 <h5><a name="When_did_compaction_complete"></a><a name="when-did-compaction-complete"></a> When did compaction complete?</h5>
 <p>When compaction completes successfully, the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: compaction succeeded in 6.580 min (394828 ms), after 2 cycles
+<div>
+<div>
+<pre class="source">TarMK GC #2: compaction succeeded in 6.580 min (394828 ms), after 2 cycles
 </pre></div></div>
+
 <p>The time shown in the log message is relative to the compaction phase only. The reference to the number of cycles spent in the compaction phase is explained in more detail below. If compaction did not complete successfully, the following message is printed instead.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: compaction failed in 32.902 min (1974140 ms), after 5 cycles
+<div>
+<div>
+<pre class="source">TarMK GC #2: compaction failed in 32.902 min (1974140 ms), after 5 cycles
 </pre></div></div>
+
 <p>This message doesn&#x2019;t mean that there was an unrecoverable error, but only that compaction gave up after a certain number of attempts. In case an error occurs, the following message is printed instead.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: compaction encountered an error
+<div>
+<div>
+<pre class="source">TarMK GC #2: compaction encountered an error
 </pre></div></div>
+
 <p>This message is followed by the stack trace of the exception that was caught during the compaction phase. There is also a special message that is printed if the thread running the compaction phase is interrupted.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: compaction interrupted
-</pre></div></div></div>
+<div>
+<div>
+<pre class="source">TarMK GC #2: compaction interrupted
+</pre></div></div>
+</div>
 <div class="section">
 <h5><a name="How_does_compaction_deal_with_checkpoints"></a><a name="how-does-compaction-deal-with-checkpoints"></a> How does compaction deal with checkpoints?</h5>
 <p>Since checkpoints share a lot of common data among themselves and with the actual content, compaction handles them individually, deduplicating as much content as possible. The following messages will be printed to the log during the process.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: Found checkpoint 4b2ee46a-d7cf-45e7-93c3-799d538f85e6 created at Wed Nov 29 15:31:43 CET 2017.
+<div>
+<div>
+<pre class="source">TarMK GC #2: Found checkpoint 4b2ee46a-d7cf-45e7-93c3-799d538f85e6 created at Wed Nov 29 15:31:43 CET 2017.
 TarMK GC #2: Found checkpoint 5c45ca7b-5863-4679-a7c5-6056a999a6cd created at Wed Nov 29 15:31:43 CET 2017.
 TarMK GC #2: compacting checkpoints/4b2ee46a-d7cf-45e7-93c3-799d538f85e6/root.
 TarMK GC #2: compacting checkpoints/5c45ca7b-5863-4679-a7c5-6056a999a6cd/root.
 TarMK GC #2: compacting root.
-</pre></div></div></div>
+</pre></div></div>
+</div>
 <div class="section">
 <h5><a name="How_does_compaction_work_with_concurrent_writes"></a><a name="how-does-compaction-works-with-concurrent-writes"></a> How does compaction work with concurrent writes?</h5>
 <p>When compaction runs as part of online garbage collection, it has to work concurrently with the rest of the system. This means that, while compaction tries to copy useful data to the new generation, concurrent commits to the repository are writing data to the old generation. To cope with this, compaction tries to catch up with concurrent writes by incorporating their changes into the new generation.</p>
 <p>When compaction first tries to setup the new generation, the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: compaction cycle 0 completed in 6.580 min (394828 ms). Compacted 3e3b35d3-2a15-43bc-a422-7bd4741d97a5.0000002a to 348b9500-0d67-46c5-a683-3ea8b0e6c21c.000012c0
+<div>
+<div>
+<pre class="source">TarMK GC #2: compaction cycle 0 completed in 6.580 min (394828 ms). Compacted 3e3b35d3-2a15-43bc-a422-7bd4741d97a5.0000002a to 348b9500-0d67-46c5-a683-3ea8b0e6c21c.000012c0
 </pre></div></div>
+
 <p>The message shows how long it took to compact the data to the new generation. It also prints the record identifiers of the two head states. The head state on the left belongs to the previous generation, the one on the right to the new.</p>
 <p>If concurrent commits are detected, compaction tries to incorporate those changes in the new generation. In this case, the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: compaction detected concurrent commits while compacting. Compacting these commits. Cycle 1 of 5
+<div>
+<div>
+<pre class="source">TarMK GC #2: compaction detected concurrent commits while compacting. Compacting these commits. Cycle 1 of 5
 </pre></div></div>
+
 <p>This message means that a new compaction cycle is automatically started. Compaction will try to incorporate new changes for a certain number of cycles, where the exact number is a configuration option. After every compaction cycle, the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: compaction cycle 1 completed in 6.580 min (394828 ms). Compacted 4d22b170-f8b7-406b-a2fc-45bf782440ac.00000065 against 3e3b35d3-2a15-43bc-a422-7bd4741d97a5.0000002a to 72e60037-f917-499b-a476-607ea6f2735c.00000d0d
+<div>
+<div>
+<pre class="source">TarMK GC #2: compaction cycle 1 completed in 6.580 min (394828 ms). Compacted 4d22b170-f8b7-406b-a2fc-45bf782440ac.00000065 against 3e3b35d3-2a15-43bc-a422-7bd4741d97a5.0000002a to 72e60037-f917-499b-a476-607ea6f2735c.00000d0d
 </pre></div></div>
+
 <p>This message contains three record identifiers instead of two. This is because the initial state that was being compacted evolved into a different one due to the concurrent commits. The message makes clear that the concurrent changes referenced from the first record identifier, up to the changes referenced from the second identifier, were moved to the new generation and are now referenced from the third identifier.</p>
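The two- and three-identifier forms of the cycle message can be handled with one pattern, since the middle ("against") identifier only appears from cycle 1 onwards. A sketch, assuming only the message formats shown above (the regular expression is not part of Oak):

```python
import re

# Hypothetical helper: parse "compaction cycle N completed" messages.
# Group 3 (the "against" identifier) is absent for the initial cycle 0.
CYCLE = re.compile(
    r"compaction cycle (\d+) completed .*? "
    r"Compacted ([0-9a-f-]+\.[0-9a-f]+)"
    r"(?: against ([0-9a-f-]+\.[0-9a-f]+))?"
    r" to ([0-9a-f-]+\.[0-9a-f]+)")

msg = ("TarMK GC #2: compaction cycle 1 completed in 6.580 min (394828 ms). "
       "Compacted 4d22b170-f8b7-406b-a2fc-45bf782440ac.00000065 "
       "against 3e3b35d3-2a15-43bc-a422-7bd4741d97a5.0000002a "
       "to 72e60037-f917-499b-a476-607ea6f2735c.00000d0d")

cycle, head, base, compacted = CYCLE.search(msg).groups()
print(cycle, base is not None)
```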
 <p>If the system is under heavy load and too many concurrent commits are generated, compaction might fail to catch up. In this case, a message like the following is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: compaction gave up compacting concurrent commits after 5 cycles.
+<div>
+<div>
+<pre class="source">TarMK GC #2: compaction gave up compacting concurrent commits after 5 cycles.
 </pre></div></div>
+
 <p>The message means that compaction tried to compact the repository data to the new generation five times, but every time there were concurrent changes that prevented compaction from completing. To prevent the system from being too overloaded with background activity, compaction stopped itself after the configured number of cycles.</p>
 <p>At this point the system can be configured to give compaction exclusive access to the repository and force it to complete. This means that if compaction gave up after the configured number of cycles, it would take full control over the repository and block concurrent writes. If the system is configured to behave this way, the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: trying to force compact remaining commits for 60 seconds. Concurrent commits to the store will be blocked.
+<div>
+<div>
+<pre class="source">TarMK GC #2: trying to force compact remaining commits for 60 seconds. Concurrent commits to the store will be blocked.
 </pre></div></div>
+
 <p>If, after taking exclusive control of the repository for the specified amount of time, compaction completes successfully, the following message will be printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: compaction succeeded to force compact remaining commits after 56.7 s (56722 ms).
+<div>
+<div>
+<pre class="source">TarMK GC #2: compaction succeeded to force compact remaining commits after 56.7 s (56722 ms).
 </pre></div></div>
+
 <p>Sometimes the amount of time allocated to the compaction phase in exclusive mode is not enough. It might happen that compaction is not able to complete its work in the allocated time. If this happens, the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: compaction failed to force compact remaining commits after 6.580 min (394828 ms). Most likely compaction didn't get exclusive access to the store.
+<div>
+<div>
+<pre class="source">TarMK GC #2: compaction failed to force compact remaining commits after 6.580 min (394828 ms). Most likely compaction didn't get exclusive access to the store.
 </pre></div></div>
+
 <p>Even if compaction takes exclusive access to the repository, it can still be interrupted. In this case, the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: compaction failed to force compact remaining commits after 6.580 min (394828 ms). Compaction was cancelled: ${REASON}.
+<div>
+<div>
+<pre class="source">TarMK GC #2: compaction failed to force compact remaining commits after 6.580 min (394828 ms). Compaction was cancelled: ${REASON}.
 </pre></div></div>
+
 <p>The placeholder <tt>${REASON}</tt> will be substituted with a more detailed description of the reason why compaction was stopped.</p></div>
 <div class="section">
 <h5><a name="When_did_clean-up_start"></a><a name="when-did-cleanup-start"></a> When did clean-up start?</h5>
 <p>When the cleanup phase of the garbage collection process starts, the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: cleanup started.
-</pre></div></div></div>
+<div>
+<div>
+<pre class="source">TarMK GC #2: cleanup started.
+</pre></div></div>
+</div>
 <div class="section">
 <h5><a name="Was_cleanup_cancelled"></a><a name="was-cleanup-cancelled"></a> Was cleanup cancelled?</h5>
 <p>If cleanup is cancelled, the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: cleanup interrupted
+<div>
+<div>
+<pre class="source">TarMK GC #2: cleanup interrupted
 </pre></div></div>
+
 <p>There is no way to cancel cleanup manually. The only time cleanup can be cancelled is when the repository is shut down.</p></div>
 <div class="section">
 <h5><a name="When_did_cleanup_complete"></a><a name="when-did-cleanup-complete"></a> When did cleanup complete?</h5>
 <p>When cleanup completes, the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: cleanup completed in 16.23 min (974079 ms). Post cleanup size is 10.4 GB (10392082944 bytes) and space reclaimed 84.5 GB (84457663488 bytes).
+<div>
+<div>
+<pre class="source">TarMK GC #2: cleanup completed in 16.23 min (974079 ms). Post cleanup size is 10.4 GB (10392082944 bytes) and space reclaimed 84.5 GB (84457663488 bytes).
 </pre></div></div>
+
 <p>The message includes the time the cleanup phase took to complete, both in a human readable format and in milliseconds. Next the final size of the repository is shown, followed by the amount of space that was reclaimed during the cleanup phase. Both the final size and the reclaimed space are shown in human readable form and in bytes.</p></div>
 <div class="section">
 <h5><a name="What_happened_during_cleanup"></a><a name="what-happened-during-cleanup"></a> What happened during cleanup?</h5>
 <p>The first thing cleanup does is print out the current size of the repository, with a message similar to the following.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #1: current repository size is 89.3 GB (89260786688 bytes)
+<div>
+<div>
+<pre class="source">TarMK GC #1: current repository size is 89.3 GB (89260786688 bytes)
 </pre></div></div>
+
 <p>After that, the cleanup phase will iterate through every TAR file and figure out which segments are still in use and which ones can be reclaimed. After the cleanup phase has scanned the repository, TAR files are purged of unused segments. In some cases, a TAR file may end up containing no segments at all. In this case, the TAR file is marked for deletion and the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">TarMK GC #2: cleanup marking files for deletion: data00000a.tar
+<div>
+<div>
+<pre class="source">TarMK GC #2: cleanup marking files for deletion: data00000a.tar
 </pre></div></div>
+
 <p>Please note that this message doesn&#x2019;t mean that cleanup will physically remove the file right now. The file is only being marked as deletable. Another background task will periodically kick in and remove unused files from disk. When this happens, the following message is printed.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">Removed files data00000a.tar,data00001a.tar,data00002a.tar
+<div>
+<div>
+<pre class="source">Removed files data00000a.tar,data00001a.tar,data00002a.tar
 </pre></div></div>
+
 <p>The output of this message can vary. It depends on the number of segments that were cleaned up, on how many TAR files were emptied and on how often the background activity removes unused files.</p></div></div>
 <div class="section">
 <h4><a name="Monitoring"></a><a name="monitoring"></a> Monitoring</h4>
 <p>The Segment Store exposes certain pieces of information via JMX. This allows clients to easily access some statistics about the Segment Store, and connect the Segment Store to whatever monitoring infrastructure is in place. Moreover, JMX can be useful to execute some low-level operations in a manual fashion.</p>
-
 <ul>
-  
+
 <li>Each session exposes an <a href="#SessionMBean">SessionMBean</a> instance, which contains counters like the number and rate of reads and writes to the session.</li>
-  
 <li>The <a href="#RepositoryStatsMBean">RepositoryStatsMBean</a> exposes endpoints to monitor the number of open sessions, the session login rate, the overall read and write load across all sessions, the overall read and write timings across all sessions and overall load and timings for queries and observation.</li>
-  
 <li>The <a href="#SegmentNodeStoreStatsMBean">SegmentNodeStoreStatsMBean</a> exposes endpoints to monitor commits: number and rate, number of queued commits and queuing times.</li>
-  
 <li>The <a href="#FileStoreStatsMBean">FileStoreStatsMBean</a> exposes endpoints reflecting the amount of data written to disk, the number of tar files on disk and the total footprint on disk.</li>
-  
 <li>The <a href="#SegmentRevisionGarbageCollection">SegmentRevisionGarbageCollection</a> MBean tracks statistics about garbage collection.</li>
 </ul>
 <div class="section">
 <h5><a name="SessionMBean"></a> SessionMBean</h5>
 <p>Each session exposes an <tt>SessionMBean</tt> instance, which contains counters like the number and rate of reads and writes to the session:</p>
-
 <ul>
-  
+
 <li>
-<p><b>getInitStackTrace (string)</b> A stack trace from where the session was acquired.</p></li>
-  
+
+<p><b>getInitStackTrace (string)</b> A stack trace from where the session was acquired.</p>
+</li>
 <li>
-<p><b>AuthInfo (AuthInfo)</b> The <tt>AuthInfo</tt> instance for the user associated with the session.</p></li>
-  
+
+<p><b>AuthInfo (AuthInfo)</b> The <tt>AuthInfo</tt> instance for the user associated with the session.</p>
+</li>
 <li>
-<p><b>LoginTimeStamp (string)</b> The time stamp from when the session was acquired.</p></li>
-  
+
+<p><b>LoginTimeStamp (string)</b> The time stamp from when the session was acquired.</p>
+</li>
 <li>
-<p><b>LastReadAccess (string)</b> The time stamp from the last read access</p></li>
-  
+
+<p><b>LastReadAccess (string)</b> The time stamp from the last read access</p>
+</li>
 <li>
-<p><b>ReadCount (long)</b> The number of read accesses on this session</p></li>
-  
+
+<p><b>ReadCount (long)</b> The number of read accesses on this session</p>
+</li>
 <li>
-<p><b>ReadRate (double)</b> The read rate in number of reads per second on this session</p></li>
-  
+
+<p><b>ReadRate (double)</b> The read rate in number of reads per second on this session</p>
+</li>
 <li>
-<p><b>LastWriteAccess (string)</b> The time stamp from the last write access</p></li>
-  
+
+<p><b>LastWriteAccess (string)</b> The time stamp from the last write access</p>
+</li>
 <li>
-<p><b>WriteCount (long)</b> The number of write accesses on this session</p></li>
-  
+
+<p><b>WriteCount (long)</b> The number of write accesses on this session</p>
+</li>
 <li>
-<p><b>WriteRate (double)</b> The write rate in number of writes per second on this session</p></li>
-  
+
+<p><b>WriteRate (double)</b> The write rate in number of writes per second on this session</p>
+</li>
 <li>
-<p><b>LastRefresh (string)</b> The time stamp from the last refresh on this session</p></li>
-  
+
+<p><b>LastRefresh (string)</b> The time stamp from the last refresh on this session</p>
+</li>
 <li>
-<p><b>RefreshStrategy (string)</b> The refresh strategy of the session</p></li>
-  
+
+<p><b>RefreshStrategy (string)</b> The refresh strategy of the session</p>
+</li>
 <li>
-<p><b>RefreshPending (boolean)</b> A boolean indicating whether the session will be refreshed on next access.</p></li>
-  
+
+<p><b>RefreshPending (boolean)</b> A boolean indicating whether the session will be refreshed on next access.</p>
+</li>
 <li>
-<p><b>RefreshCount (long)</b> The number of refresh operations on this session</p></li>
-  
+
+<p><b>RefreshCount (long)</b> The number of refresh operations on this session</p>
+</li>
 <li>
-<p><b>RefreshRate (double)</b> The refresh rate in number of refreshes per second on this session</p></li>
-  
+
+<p><b>RefreshRate (double)</b> The refresh rate in number of refreshes per second on this session</p>
+</li>
 <li>
-<p><b>LastSave (string)</b> The time stamp from the last save on this session</p></li>
-  
+
+<p><b>LastSave (string)</b> The time stamp from the last save on this session</p>
+</li>
 <li>
-<p><b>SaveCount (long)</b> The number of save operations on this session</p></li>
-  
+
+<p><b>SaveCount (long)</b> The number of save operations on this session</p>
+</li>
 <li>
-<p><b>SaveRate (double)</b> The save rate in number of saves per second on this session</p></li>
-  
+
+<p><b>SaveRate (double)</b> The save rate in number of saves per second on this session</p>
+</li>
 <li>
-<p><b>SessionAttributes (string[])</b> The attributes associated with the session</p></li>
-  
+
+<p><b>SessionAttributes (string[])</b> The attributes associated with the session</p>
+</li>
 <li>
-<p><b>LastFailedSave (string)</b> The stack trace of the last exception that occurred during a save operation</p></li>
-  
+
+<p><b>LastFailedSave (string)</b> The stack trace of the last exception that occurred during a save operation</p>
+</li>
 <li>
-<p><b>refresh</b> Refresh this session.</p></li>
+
+<p><b>refresh</b> Refresh this session.</p>
+</li>
 </ul></div>
 <div class="section">
 <h5><a name="RepositoryStatsMBean"></a> RepositoryStatsMBean</h5>
 <p>The <tt>RepositoryStatsMBean</tt> exposes endpoints to monitor the number of open sessions, the session login rate, the overall read and write load across all sessions, the overall read and write timings across all sessions and overall load and timings for queries and observation.</p>
-
 <ul>
-  
+
 <li>
-<p><b>SessionCount (CompositeData)</b> Number of currently logged in sessions.</p></li>
-  
+
+<p><b>SessionCount (CompositeData)</b> Number of currently logged in sessions.</p>
+</li>
 <li>
-<p><b>SessionLogin (CompositeData)</b> Number of calls sessions that have been logged in.</p></li>
-  
+
+<p><b>SessionLogin (CompositeData)</b> Number of sessions that have been logged in.</p>
+</li>
 <li>
-<p><b>SessionReadCount (CompositeData)</b> Number of read accesses through any session.</p></li>
-  
+
+<p><b>SessionReadCount (CompositeData)</b> Number of read accesses through any session.</p>
+</li>
 <li>
-<p><b>SessionReadDuration (CompositeData)</b> Total time spent reading from sessions in nano seconds.</p></li>
-  
+
+<p><b>SessionReadDuration (CompositeData)</b> Total time spent reading from sessions in nano seconds.</p>
+</li>
 <li>
-<p><b>SessionReadAverage (CompositeData)</b> Average time spent reading from sessions in nano seconds. This is the sum of all read durations divided by the number of reads in the respective time period.</p></li>
-  
+
+<p><b>SessionReadAverage (CompositeData)</b> Average time spent reading from sessions in nano seconds. This is the sum of all read durations divided by the number of reads in the respective time period.</p>
+</li>
 <li>
-<p><b>SessionWriteCount (CompositeData)</b> Number of write accesses through any session.</p></li>
-  
+
+<p><b>SessionWriteCount (CompositeData)</b> Number of write accesses through any session.</p>
+</li>
 <li>
-<p><b>SessionWriteDuration (CompositeData)</b> Total time spent writing to sessions in nano seconds.</p></li>
-  
+
+<p><b>SessionWriteDuration (CompositeData)</b> Total time spent writing to sessions in nano seconds.</p>
+</li>
 <li>
-<p><b>SessionWriteAverage (CompositeData)</b> Average time spent writing to sessions in nano seconds. This is the sum of all write durations divided by the number of writes in the respective time period.</p></li>
-  
+
+<p><b>SessionWriteAverage (CompositeData)</b> Average time spent writing to sessions in nano seconds. This is the sum of all write durations divided by the number of writes in the respective time period.</p>
+</li>
 <li>
-<p><b>QueryCount()</b> Number of queries executed.</p></li>
-  
+
+<p><b>QueryCount()</b> Number of queries executed.</p>
+</li>
 <li>
-<p><b>QueryDuration (CompositeData)</b> Total time spent evaluating queries in milli seconds.</p></li>
-  
+
+<p><b>QueryDuration (CompositeData)</b> Total time spent evaluating queries in milli seconds.</p>
+</li>
 <li>
-<p><b>QueryAverage (CompositeData)</b> Average time spent evaluating queries in milli seconds. This is the sum of all query durations divided by the number of queries in the respective time period.</p></li>
-  
+
+<p><b>QueryAverage (CompositeData)</b> Average time spent evaluating queries in milli seconds. This is the sum of all query durations divided by the number of queries in the respective time period.</p>
+</li>
 <li>
-<p><b>ObservationEventCount (CompositeData)</b> Total number of observation {@code Event} instances delivered to all observation listeners.</p></li>
-  
+
+<p><b>ObservationEventCount (CompositeData)</b> Total number of observation {@code Event} instances delivered to all observation listeners.</p>
+</li>
 <li>
-<p><b>ObservationEventDuration (CompositeData)</b> Total time spent processing observation events by all observation listeners in nano seconds.</p></li>
-  
+
+<p><b>ObservationEventDuration (CompositeData)</b> Total time spent processing observation events by all observation listeners in nano seconds.</p>
+</li>
 <li>
-<p><b>ObservationEventAverage</b> Average time spent processing observation events by all observation listeners in nano seconds. This is the sum of all observation durations divided by the number of observation events in the respective time period.</p></li>
-  
+
+<p><b>ObservationEventAverage</b> Average time spent processing observation events by all observation listeners in nano seconds. This is the sum of all observation durations divided by the number of observation events in the respective time period.</p>
+</li>
 <li>
-<p><b>ObservationQueueMaxLength (CompositeData)</b> Maximum length of observation queue in the respective time period.</p></li>
+
+<p><b>ObservationQueueMaxLength (CompositeData)</b> Maximum length of observation queue in the respective time period.</p>
+</li>
 </ul></div>
 <div class="section">
 <h5><a name="SegmentNodeStoreStatsMBean"></a> SegmentNodeStoreStatsMBean</h5>
 <p>The <tt>SegmentNodeStoreStatsMBean</tt> exposes endpoints to monitor commits: number and rate, number of queued commits and queuing times.</p>
-
 <ul>
-  
+
 <li>
-<p><b>CommitsCount (CompositeData)</b> Time series of the number of commits</p></li>
-  
+
+<p><b>CommitsCount (CompositeData)</b> Time series of the number of commits</p>
+</li>
 <li>
-<p><b>QueuingCommitsCount (CompositeData)</b> Time series of the number of commits queuing</p></li>
-  
+
+<p><b>QueuingCommitsCount (CompositeData)</b> Time series of the number of commits queuing</p>
+</li>
 <li>
-<p><b>CommitTimes (CompositeData)</b> Time series of the commit times</p></li>
-  
+
+<p><b>CommitTimes (CompositeData)</b> Time series of the commit times</p>
+</li>
 <li>
-<p><b>QueuingTimes (CompositeData)</b> Time series of the commit queuing times</p></li>
+
+<p><b>QueuingTimes (CompositeData)</b> Time series of the commit queuing times</p>
+</li>
 </ul></div>
 <div class="section">
 <h5><a name="FileStoreStatsMBean"></a> FileStoreStatsMBean</h5>
 <p>The <tt>FileStoreStatsMBean</tt> exposes endpoints reflecting the amount of data written to disk, the number of tar files on disk and the total footprint on disk.</p>
-
 <ul>
-  
+
 <li>
-<p><b>ApproximateSize (long)</b> An approximate disk footprint of the Segment Store.</p></li>
-  
+
+<p><b>ApproximateSize (long)</b> An approximate disk footprint of the Segment Store.</p>
+</li>
 <li>
-<p><b>TarFileCount (int)</b> The number of tar files of the Segment Store.</p></li>
-  
+
+<p><b>TarFileCount (int)</b> The number of tar files of the Segment Store.</p>
+</li>
 <li>
-<p><b>WriteStats (CompositeData)</b> Time series of the writes to repository</p></li>
-  
+
+<p><b>WriteStats (CompositeData)</b> Time series of the writes to repository</p>
+</li>
 <li>
-<p><b>RepositorySize (CompositeData)</b> Time series of the writes to repository</p></li>
-  
+
+<p><b>RepositorySize (CompositeData)</b> Time series of the writes to repository</p>
+</li>
 <li>
-<p><b>StoreInfoAsString (string)</b> A human readable descriptive representation of the values exposed by this MBean.</p></li>
-  
+
+<p><b>StoreInfoAsString (string)</b> A human readable descriptive representation of the values exposed by this MBean.</p>
+</li>
 <li>
-<p><b>JournalWriteStatsAsCount (long)</b> Number of writes to the journal of this Segment Store.</p></li>
-  
+
+<p><b>JournalWriteStatsAsCount (long)</b> Number of writes to the journal of this Segment Store.</p>
+</li>
 <li>
-<p><b>JournalWriteStatsAsCompositeData (CompositeData)</b> Time series of the writes to the journal of this Segment Store.</p></li>
+
+<p><b>JournalWriteStatsAsCompositeData (CompositeData)</b> Time series of the writes to the journal of this Segment Store.</p>
+</li>
 </ul></div>
 <div class="section">
 <h5><a name="SegmentRevisionGarbageCollection_MBean"></a><a name="SegmentRevisionGarbageCollection"></a> SegmentRevisionGarbageCollection MBean</h5>
 <p>The <tt>SegmentRevisionGarbageCollection</tt> MBean tracks statistics about garbage collection. Some of the statistics are specific to specific phases of the garbage collection process, others are more widely applicable. This MBean also exposes management operations to start and cancel garbage collection and options that can influence the outcome of garbage collection. You should use this MBean with great care.</p>
 <p>The following options are collectively called &#x201c;garbage collection options&#x201d;, since they are used to tweak the behaviour of the garbage collection process. These options are readable and writable, but they take effect only at the start of the next garbage collection process.</p>
-
 <ul>
-  
+
 <li><b>PausedCompaction (boolean)</b> Determines if garbage collection is paused. If this value is set to <tt>true</tt>, garbage collection will not be performed. Compaction will be effectively skipped even if invoked manually or by scheduled maintenance tasks.</li>
-  
 <li><b>RetryCount (int)</b> Determines how many completion attempts the compaction phase should try before giving up. This parameter influences the behaviour of the compaction phase when concurrent writes are detected.</li>
-  
 <li><b>ForceTimeout (int)</b> The amount of time (in seconds) the compaction phase can take exclusive control of the repository. This parameter is used only if compaction is configured to take exclusive control of the repository instead of giving up after too many concurrent writes.</li>
-  
 <li><b>RetainedGenerations (int)</b> How many generations should be preserved when cleaning up the Segment Store. When the cleanup phase runs, only the latest <tt>RetainedGenerations</tt> generations are kept intact. Older generations will be deleted. <i>Deprecated</i>: as of Oak 1.8 this value is fixed to 2 generations and cannot be modified.</li>
-  
 <li><b>GcSizeDeltaEstimation (long)</b> The size (in bytes) of new content added to the repository since the end of the last garbage collection that would trigger another garbage collection run. This parameter influences the behaviour of the estimation phase.</li>
-  
 <li><b>EstimationDisabled (boolean)</b> Determines if the estimation phase is disabled. If this parameter is set to <tt>true</tt>, the estimation phase will be skipped and compaction will run unconditionally.</li>
-  
 <li><b>GCType (&#x201c;FULL&#x201d; or &#x201c;TAIL&#x201d;)</b> Determines the type of the garbage collection that should run when invoking the <tt>startRevisionGC</tt> operation.</li>
-  
 <li><b>RevisionGCProgressLog (long)</b> The number of processed nodes after which a progress message is logged. <tt>-1</tt> indicates no logging.</li>
-  
 <li><b>MemoryThreshold (int)</b> A number between <tt>0</tt> and <tt>100</tt> that represents the percentage of heap memory that should always be free during compaction. If the amount of free memory falls below the provided percentage, compaction will be interrupted.</li>
 </ul>
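Since these options are writable JMX attributes, they can be tuned at runtime with the standard <tt>javax.management</tt> API, keeping in mind that changes only take effect at the start of the next garbage collection run. This is a sketch under an assumed object name; the real name depends on how the MBean is registered in your deployment.

```java
import java.lang.management.ManagementFactory;
import javax.management.Attribute;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class TuneGcOptions {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Hypothetical object name; look it up with queryNames in a real deployment.
        ObjectName gc = new ObjectName(
                "org.apache.jackrabbit.oak:name=SegmentRevisionGarbageCollection");
        if (server.isRegistered(gc)) {
            // Allow compaction 120 seconds of exclusive access (ForceTimeout is in seconds).
            server.setAttribute(gc, new Attribute("ForceTimeout", 120));
            // Trigger the next GC only after roughly 2 GB of new content.
            server.setAttribute(gc, new Attribute("GcSizeDeltaEstimation", 2L * 1024 * 1024 * 1024));
        } else {
            System.out.println("SegmentRevisionGarbageCollection MBean not registered in this JVM");
        }
    }
}
```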
 <p>The following options are read-only and expose runtime statistics about the garbage collection process.</p>
-
 <ul>
-  
+
 <li><b>LastCompaction (string)</b> The formatted timestamp of the end of the last successful compaction phase.</li>
-  
 <li><b>LastCleanup (string)</b> The formatted timestamp of the end of the last cleanup phase.</li>
-  
 <li><b>LastRepositorySize (long)</b> The size of the repository (in bytes) after the last cleanup phase.</li>
-  
 <li><b>LastReclaimedSize (long)</b> The amount of data (in bytes) that was reclaimed during the last cleanup phase.</li>
-  
 <li><b>LastError (string)</b> The last error encountered during compaction, in a human readable form.</li>
-  
 <li><b>LastLogMessage (string)</b> The last log message produced during garbage collection.</li>
-  
 <li><b>Status (string)</b> The current status of the garbage collection process. This property can assume the values <tt>idle</tt>, <tt>estimation</tt>, <tt>compaction</tt>, <tt>compaction-retry-N</tt> (where <tt>N</tt> is the number of the current retry iteration), <tt>compaction-force-compact</tt> and <tt>cleanup</tt>.</li>
-  
 <li><b>RevisionGCRunning (boolean)</b> Indicates whether online revision garbage collection is currently running.</li>
-  
 <li><b>CompactedNodes (long)</b> The number of compacted nodes during the previous garbage collection</li>
-  
 <li><b>EstimatedCompactableNodes (long)</b> The estimated number of nodes to compact during the next garbage collection. <tt>-1</tt> indicates an estimated value is not available.</li>
-  
 <li><b>EstimatedRevisionGCCompletion (int)</b> Estimated percentage completed for the current garbage collection run. <tt>-1</tt> indicates an estimated percentage is not available.</li>
 </ul>
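These read-only attributes can be polled with a plain JMX client. The sketch below reads a few of them from the platform MBean server, guarding against the MBean not being registered; the object name is again an assumption for illustration.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ReadGcStats {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Hypothetical object name; the actual name depends on the deployment.
        ObjectName gc = new ObjectName(
                "org.apache.jackrabbit.oak:name=SegmentRevisionGarbageCollection");
        if (server.isRegistered(gc)) {
            // Read-only runtime statistics exposed by the MBean.
            System.out.println("Status: " + server.getAttribute(gc, "Status"));
            System.out.println("RevisionGCRunning: " + server.getAttribute(gc, "RevisionGCRunning"));
            System.out.println("LastRepositorySize: " + server.getAttribute(gc, "LastRepositorySize"));
        } else {
            System.out.println("SegmentRevisionGarbageCollection MBean not registered in this JVM");
        }
    }
}
```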
 <p>The <tt>SegmentRevisionGarbageCollection</tt> MBean also exposes the following management operations.</p>
-
 <ul>
-  
+
 <li><b>cancelRevisionGC</b> If garbage collection is currently running, schedule its cancellation. The garbage collection process will be interrupted as soon as it&#x2019;s safe to do so without losing data or corrupting the system. If garbage collection is not running, this operation has no effect.</li>
-  
 <li><b>startRevisionGC</b> Start garbage collection. If garbage collection is already running, this operation has no effect.</li>
 </ul></div></div></div></div>
 <div class="section">
@@ -778,47 +849,57 @@ TarMK GC #2: compacting root.
 <div class="section">
 <h3><a name="Backup"></a><a name="backup"></a> Backup</h3>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">java -jar oak-run.jar backup ORIGINAL BACKUP 
+<div>
+<div>
+<pre class="source">java -jar oak-run.jar backup ORIGINAL BACKUP 
 </pre></div></div>
+
 <p>The <tt>backup</tt> tool performs a backup of a Segment Store <tt>ORIGINAL</tt> and saves it to the folder <tt>BACKUP</tt>. <tt>ORIGINAL</tt> must be the path to an existing, valid Segment Store. <tt>BACKUP</tt> must be a valid path to a folder on the file system. If <tt>BACKUP</tt> doesn&#x2019;t exist, it will be created. If <tt>BACKUP</tt> exists, it must be a path to an existing, valid Segment Store.</p>
 <p>The tool assumes that the <tt>ORIGINAL</tt> Segment Store doesn&#x2019;t use an external Blob Store. If an external Blob Store is used, it&#x2019;s necessary to set the <tt>oak.backup.UseFakeBlobStore</tt> system property to <tt>true</tt> on the command line as shown below.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">java -Doak.backup.UseFakeBlobStore=true -jar oak-run.jar backup ...
+<div>
+<div>
+<pre class="source">java -Doak.backup.UseFakeBlobStore=true -jar oak-run.jar backup ...
 </pre></div></div>
+
 <p>When a backup is performed, if <tt>BACKUP</tt> points to an existing Segment Store, only the content that is different from <tt>ORIGINAL</tt> is copied. This is similar to an incremental backup performed at the level of the content. When an incremental backup is performed, the tool will automatically try to clean up any garbage from the <tt>BACKUP</tt> Segment Store.</p></div>
 <div class="section">
 <h3><a name="Restore"></a><a name="restore"></a> Restore</h3>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">java -jar oak-run.jar restore ORIGINAL BACKUP
+<div>
+<div>
+<pre class="source">java -jar oak-run.jar restore ORIGINAL BACKUP
 </pre></div></div>
+
 <p>The <tt>restore</tt> tool restores the state of the <tt>ORIGINAL</tt> Node Store from a previous backup <tt>BACKUP</tt>. This tool is the counterpart of <tt>backup</tt>.</p></div>
 <div class="section">
 <h3><a name="Check"></a><a name="check"></a> Check</h3>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">java -jar oak-run.jar check PATH [--journal JOURNAL] [--notify SECS] [--bin] [--head] [--checkpoints all | cp1[,cp2,..,cpn]]  [--filter PATH1[,PATH2,..,PATHn]] [--io-stats]
+<div>
+<div>
+<pre class="source">java -jar oak-run.jar check PATH [--journal JOURNAL] [--notify SECS] [--bin] [--head] [--checkpoints all | cp1[,cp2,..,cpn]]  [--filter PATH1[,PATH2,..,PATHn]] [--io-stats]
 </pre></div></div>
+
 <p>The <tt>check</tt> tool inspects an existing Segment Store at <tt>PATH</tt> for potential inconsistencies. The algorithm implemented by this tool traverses every revision in the journal, from the most recent to the oldest. For every revision, the actual nodes and properties are traversed, verifying that every piece of data is reachable and undamaged. Moreover, if the <tt>--head</tt> and <tt>--checkpoints</tt> options are used, the scope of the traversal can be limited to head state and/or a subset of checkpoints. A deep scan of the content tree, traversing every node and every property, will be performed by default. The default scope includes head state and all checkpoints.</p>
-<p>If the <tt>--journal</tt> option is specified, the tool will use the journal file at <tt>JOURNAL</tt> instead of picking up the one contained in <tt>PATH</tt>. <tt>JOURNAL</tt> must be a path to a valid journal file for the Segment Store. </p>
+<p>If the <tt>--journal</tt> option is specified, the tool will use the journal file at <tt>JOURNAL</tt> instead of picking up the one contained in <tt>PATH</tt>. <tt>JOURNAL</tt> must be a path to a valid journal file for the Segment Store.</p>
 <p>If the <tt>--notify</tt> option is specified, the tool will print progress information messages every <tt>SECS</tt> seconds. If not specified, progress information messages will be disabled. If <tt>SECS</tt> equals <tt>0</tt>, every progress information message is printed.</p>
 <p>If the <tt>--bin</tt> option is specified, the tool will scan the full content of binary properties. If not specified, the binary properties will not be traversed. The <tt>--bin</tt> option has no effect on binary properties stored in an external Blob Store.</p>
 <p>If the <tt>--head</tt> option is specified, the tool will scan <b>only</b> the head state, ignoring any available checkpoints.</p>
 <p>If the <tt>--checkpoints</tt> option is specified, the tool will scan <b>only</b> the specified checkpoints, ignoring the head state. At least one argument is expected with this option; multiple arguments need to be comma-separated. The checkpoints will be traversed in the same order as they were specified. In order to scan all checkpoints, the correct argument for this option is <tt>all</tt> (i.e. <tt>--checkpoints all</tt>).</p>
 <p>As mentioned in the paragraph above, by default, both head state and all checkpoints will be checked. In other words, this is equivalent to having both options, <tt>--head</tt> and <tt>--checkpoints all</tt>, specified.</p>
-<p>If the <tt>--filter</tt> option is specified, the tool will traverse only the absolute paths specified as arguments. At least one argument is expected with this option; multiple arguments need to be comma-separated. The paths will be traversed in the same order as they were specified. </p>
+<p>If the <tt>--filter</tt> option is specified, the tool will traverse only the absolute paths specified as arguments. At least one argument is expected with this option; multiple arguments need to be comma-separated. The paths will be traversed in the same order as they were specified.</p>
 <p>The filtering applies to both head state and/or checkpoints, depending on the scope of the scan. For example, <tt>--head --filter PATH1</tt> will limit the traversal to <tt>PATH1</tt> under head state, <tt>--checkpoints cp1 --filter PATH2</tt> will limit the traversal to <tt>PATH2</tt> under <tt>cp1</tt>, while <tt>--filter PATH3</tt> will limit it to <tt>PATH3</tt>, <b>for both head state and all checkpoints</b>. If the option is not specified, the full traversal of the repository (rooted at <tt>/</tt>) will be performed.</p>
 <p>If the <tt>--io-stats</tt> option is specified, the tool will print some statistics about the I/O operations performed during the execution of the check command. This option is disabled by default.</p></div>
 <div class="section">
 <h3><a name="Compact"></a><a name="compact"></a> Compact</h3>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">java -jar oak-run.jar compact [--force] [--mmap] PATH
+<div>
+<div>
+<pre class="source">java -jar oak-run.jar compact [--force] [--mmap] PATH
 </pre></div></div>
-<p>The <tt>compact</tt> command performs offline compaction of the Segment Store at <tt>PATH</tt>. <tt>PATH</tt> must be a valid path to an existing Segment Store. </p>
-<p>If the optional <tt>--force [Boolean]</tt> argument is set to <tt>true</tt> the tool ignores a non matching Segment Store version. <i>CAUTION</i>: this will upgrade the Segment Store to the latest version, which is incompatible with older versions. <i>There is no way to downgrade an accidentally upgraded Segment Store</i>. </p>
+
+<p>The <tt>compact</tt> command performs offline compaction of the Segment Store at <tt>PATH</tt>. <tt>PATH</tt> must be a valid path to an existing Segment Store.</p>
+<p>If the optional <tt>--force [Boolean]</tt> argument is set to <tt>true</tt>, the tool ignores a non-matching Segment Store version. <i>CAUTION</i>: this will upgrade the Segment Store to the latest version, which is incompatible with older versions. <i>There is no way to downgrade an accidentally upgraded Segment Store</i>.</p>
 <p>The optional <tt>--mmap [Boolean]</tt> argument can be used to control the file access mode. Set to <tt>true</tt> for memory mapped access and <tt>false</tt> for file access. If not specified, memory mapped access is used on 64 bit systems and file access is used on 32 bit systems. On Windows, regular file access is always enforced and this option is ignored.</p>
 <p>To enable logging during offline compaction, a Logback configuration file has to be injected via the <tt>logback.configurationFile</tt> property. In addition, the <tt>compaction-progress-log</tt> property controls the number of compacted nodes that will be logged. The default value is 150000.</p>
 <div class="section">
@@ -826,13 +907,16 @@ TarMK GC #2: compacting root.
 <h5><a name="Example"></a>Example</h5>
 <p>The following command uses <tt>logback-compaction.xml</tt> to configure Logback logging compaction progress every 1000 nodes to the console.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">java -Dlogback.configurationFile=logback-compaction.xml -Dcompaction-progress-log=1000 -jar oak-run.jar compact /path/to/segmenstore
+<div>
+<div>
+<pre class="source">java -Dlogback.configurationFile=logback-compaction.xml -Dcompaction-progress-log=1000 -jar oak-run.jar compact /path/to/segmentstore
 </pre></div></div>
+
 <p>logback-compaction.xml:</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
+<div>
+<div>
+<pre class="source">&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
 &lt;configuration scan=&quot;true&quot;&gt;
   
   &lt;appender name=&quot;console&quot; class=&quot;ch.qos.logback.core.ConsoleAppender&quot;&gt;
@@ -847,14 +931,17 @@ TarMK GC #2: compacting root.
     &lt;appender-ref ref=&quot;console&quot; /&gt;
   &lt;/root&gt;
 &lt;/configuration&gt; 
-</pre></div></div></div></div></div>
+</pre></div></div>
+</div></div></div>
 <div class="section">
 <h3><a name="Debug"></a><a name="debug"></a> Debug</h3>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">java -jar oak-run.jar debug PATH
+<div>
+<div>
+<pre class="source">java -jar oak-run.jar debug PATH
 java -jar oak-run.jar debug PATH ITEMS...
 </pre></div></div>
+
 <p>The <tt>debug</tt> command prints diagnostic information about a Segment Store or individual Segment Store items.</p>
 <p><tt>PATH</tt> is mandatory and must be a valid path to an existing Segment Store. If only the path is specified - as in the first example above - only general debugging information about the Segment Store is printed.</p>
 <p><tt>ITEMS</tt> is a sequence of one or more TAR file names, segment IDs, node record IDs or ranges of node record IDs. If one or more items are specified - as in the second example above - general debugging information about the segment store is not printed. Instead, detailed information about the specified items is shown.</p>
@@ -865,8 +952,9 @@ java -jar oak-run.jar debug PATH ITEMS..
 <div class="section">
 <h3><a name="IOTrace"></a><a name="iotrace"></a> IOTrace</h3>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">java -jar oak-run.jar iotrace PATH --trace DEPTH|BREADTH [--depth DEPTH] [--mmap MMAP] [--output OUTPUT] [--path PATH] [--segment-cache SEGMENT_CACHE] 
+<div>
+<div>
+<pre class="source">java -jar oak-run.jar iotrace PATH --trace DEPTH|BREADTH [--depth DEPTH] [--mmap MMAP] [--output OUTPUT] [--path PATH] [--segment-cache SEGMENT_CACHE] 
 
 usage: iotrace path/to/segmentstore &lt;options&gt;
 Option (* = required)      Description
@@ -881,35 +969,43 @@ Option (* = required)      Description
 --segment-cache &lt;Integer&gt;  size of the segment cache in MB (default: 256)
 * --trace &lt;Traces&gt;         type of the traversal. Either of [DEPTH, BREADTH, RANDOM]
 </pre></div></div>
-<p>The <tt>iotrace</tt> command collects IO traces of read accesses to the segment store&#x2019;s back-end (e.g. disk). Traffic patterns can be specified via the <tt>--trace</tt> option. Permissible values are <tt>DEPTH</tt> for depth first traversal, <tt>BREADTH</tt> for breadth first traversal and <tt>RANDOM</tt> for random access. The <tt>--depth</tt> option limits the maximum number of levels traversed. The <tt>--path</tt> option specifies the node where traversal starts (from the super root). The <tt>--mmap</tt> and <tt>--segment-cache</tt> options configure memory mapping and segment cache size of the segment store, respectively. The <tt>--paths</tt> option specifies the list of paths to access. The file must contain a single path per line. The <tt>--seed</tt> option specifies the seed to used when randomly choosing a paths.<br />The <tt>--output</tt> options specifies the file where the IO trace is stored. IO traces are stored in CSV format of the following form:</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">timestamp,file,segmentId,length,elapsed
+<p>The <tt>iotrace</tt> command collects IO traces of read accesses to the segment store&#x2019;s back-end (e.g. disk). Traffic patterns can be specified via the <tt>--trace</tt> option. Permissible values are <tt>DEPTH</tt> for depth first traversal, <tt>BREADTH</tt> for breadth first traversal and <tt>RANDOM</tt> for random access. The <tt>--depth</tt> option limits the maximum number of levels traversed. The <tt>--path</tt> option specifies the node where traversal starts (from the super root). The <tt>--mmap</tt> and <tt>--segment-cache</tt> options configure memory mapping and segment cache size of the segment store, respectively. The <tt>--paths</tt> option specifies the list of paths to access. The file must contain a single path per line. The <tt>--seed</tt> option specifies the seed to be used when randomly choosing paths.<br />
+The <tt>--output</tt> option specifies the file where the IO trace is stored. IO traces are stored in CSV format of the following form:</p>
+
+<div>
+<div>
+<pre class="source">timestamp,file,segmentId,length,elapsed
 1522147945084,data01415a.tar,f81378df-b3f8-4b25-0000-00000002c450,181328,171849
 1522147945096,data01415a.tar,f81378df-b3f8-4b25-0000-00000002c450,181328,131272
 1522147945097,data01415a.tar,f81378df-b3f8-4b25-0000-00000002c450,181328,142766
-</pre></div></div></div>
+</pre></div></div>
+</div>
 <div class="section">
 <h3><a name="Diff"></a><a name="diff"></a> Diff</h3>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">java -jar oak-run.jar tarmkdiff [--output OUTPUT] --list PATH
+<div>
+<div>
+<pre class="source">java -jar oak-run.jar tarmkdiff [--output OUTPUT] --list PATH
 java -jar oak-run.jar tarmkdiff [--output OUTPUT] [--incremental] [--path NODE] [--ignore-snfes] --diff REVS PATH
 </pre></div></div>
+
 <p>The <tt>diff</tt> command prints content diffs between revisions in the Segment Store at <tt>PATH</tt>.</p>
 <p>The <tt>--output</tt> option instructs the command to print its output to the file <tt>OUTPUT</tt>. If this option is not specified, the tool will print to a <tt>.log</tt> file augmented with the current timestamp. The default file will be saved in the current directory.</p>
 <p>If the <tt>--list</tt> option is specified, the command just prints a list of revisions available in the Segment Store. This is equivalent to the first command line specification in the example above.</p>
 <p>If the <tt>--list</tt> option is not specified, <tt>tarmkdiff</tt> prints one or more content diff between a pair of revisions. In this case, the command line specification is the second in the example above.</p>
 <p>The <tt>--diff</tt> option specifies an interval of revisions <tt>REVS</tt>. The interval is specified by a pair of revisions separated by two dots, e.g. <tt>333dc24d-438f-4cca-8b21-3ebf67c05856:12345..46116fda-7a72-4dbc-af88-a09322a7753a:67890</tt>. In place of either of the two revisions, the placeholder <tt>head</tt> can be used. The <tt>head</tt> placeholder is substituted (in a case-insensitive way) with the most recent revision in the Segment Store.</p>
 <p>The <tt>--path</tt> option can be used to restrict the diff to a portion of the content tree. The value <tt>NODE</tt> must be a valid path in the content tree.</p>
-<p>If the flag <tt>--incremental</tt> is specified, the output will contain an incremental diff between every pair of successive revisions occurring in the interval specified with <tt>--diff</tt>. This parameter is useful if you are interested in every change in content between every commit that happened in a specified range. </p>
+<p>If the flag <tt>--incremental</tt> is specified, the output will contain an incremental diff between every pair of successive revisions occurring in the interval specified with <tt>--diff</tt>. This parameter is useful if you are interested in every change in content between every commit that happened in a specified range.</p>
 <p>The <tt>--ignore-snfes</tt> flag can be used in combination with <tt>--incremental</tt> to ignore errors that might occur while generating the incremental diff because of damaged or too old content. If this flag is not specified and an error occurs while generating the incremental diff, the tool stops immediately and reports the error.</p></div>
 <div class="section">
 <h3><a name="History"></a><a name="history"></a> History</h3>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">java -jar oak-run.jar history [--journal JOURNAL] [--path NODE] [--depth DEPTH] PATH
+<div>
+<div>
+<pre class="source">java -jar oak-run.jar history [--journal JOURNAL] [--path NODE] [--depth DEPTH] PATH
 </pre></div></div>
+
 <p>The <tt>history</tt> command shows how the content of a node or of a sub-tree changed over time in the Segment Store at <tt>PATH</tt>.</p>
 <p>The history of the node is computed based on the revisions reported by the journal in the Segment Store. If a different set of revisions needs to be used, it is possible to specify a custom journal file by using the <tt>--journal</tt> option. If this option is used, <tt>JOURNAL</tt> must be a path to a valid journal file.</p>
 <p>The <tt>--path</tt> parameter specifies the node whose history will be printed. If not specified, the history of the root node will be printed. <tt>NODE</tt> must be a valid path to a node in the Segment Store.</p>

Modified: jackrabbit/site/live/oak/docs/nodestore/segment/records.html
URL: http://svn.apache.org/viewvc/jackrabbit/site/live/oak/docs/nodestore/segment/records.html?rev=1835390&r1=1835389&r2=1835390&view=diff
==============================================================================
--- jackrabbit/site/live/oak/docs/nodestore/segment/records.html (original)
+++ jackrabbit/site/live/oak/docs/nodestore/segment/records.html Mon Jul  9 08:53:17 2018
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia Site Renderer 1.7.4 at 2018-05-24 
+ | Generated by Apache Maven Doxia Site Renderer 1.8.1 at 2018-07-09 
  | Rendered using Apache Maven Fluido Skin 1.6
 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20180524" />
+    <meta name="Date-Revision-yyyymmdd" content="20180709" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Jackrabbit Oak &#x2013; Segments and records</title>
     <link rel="stylesheet" href="../../css/apache-maven-fluido-1.6.min.css" />
@@ -136,7 +136,7 @@
 
       <div id="breadcrumbs">
         <ul class="breadcrumb">
-        <li id="publishDate">Last Published: 2018-05-24<span class="divider">|</span>
+        <li id="publishDate">Last Published: 2018-07-09<span class="divider">|</span>
 </li>
           <li id="projectVersion">Version: 1.10-SNAPSHOT</li>
         </ul>
@@ -240,17 +240,16 @@
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
---><h1>Segments and records</h1>
+-->
+<h1>Segments and records</h1>
 <p>While <a href="tar.html">TAR files</a> and segments are a coarse-grained mechanism to divide the repository content in more manageable pieces, the real information is stored inside the segments as finer-grained records. This page details the structure of segments and shows the binary representation of data stored by Oak.</p>
 <div class="section">
 <h2><a name="Segments"></a>Segments</h2>
 <p>Segments are not created equal. Oak, in fact, distinguishes data and bulk segments, where the former is used to store structured data (e.g. information about node and properties), while the latter contains unstructured data (e.g. the value of binary properties or of very long strings).</p>
 <p>It is possible to tell apart a bulk segment from a data segment by just looking at its identifier. A segment identifier is a randomly generated UUID. Segment identifiers are 16 bytes long, but Oak uses 4 bits to set apart bulk segments from data segments. The following bit patterns are used (each <tt>x</tt> represents four random bits):</p>
-
 <ul>
-  
+
 <li><tt>xxxxxxxx-xxxx-4xxx-axxx-xxxxxxxxxxxx</tt> data segment UUID</li>
-  
 <li><tt>xxxxxxxx-xxxx-4xxx-bxxx-xxxxxxxxxxxx</tt> bulk segment UUID</li>
 </ul>
 <p>(This encoding makes segment UUIDs appear as syntactically valid version 4 random UUIDs specified in RFC 4122.)</p></div>
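 The type bits described above make it easy to classify a segment ID programmatically. A minimal Python sketch (an illustration only, not part of Oak's API):

```python
import uuid

def is_bulk_segment(segment_id):
    """Return True for a bulk segment UUID, False for a data segment.

    The segment type sits in the four bits that RFC 4122 reserves for
    the UUID variant: 0xa marks a data segment, 0xb a bulk segment. In
    the canonical string form this is the first character of the fourth
    group, i.e. hex digit 16 of the 32-digit representation.
    """
    return uuid.UUID(segment_id).hex[16] == 'b'

# 'xxxxxxxx-xxxx-4xxx-bxxx-xxxxxxxxxxxx' is a bulk segment,
# 'xxxxxxxx-xxxx-4xxx-axxx-xxxxxxxxxxxx' a data segment.
```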
@@ -258,9 +257,11 @@
 <h2><a name="Bulk_segments"></a>Bulk segments</h2>
 <p>Bulk segments contain raw binary data, interpreted simply as a sequence of block records with no headers or other extra metadata:</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">[block 1] [block 2] ... [block N]
+<div>
+<div>
+<pre class="source">[block 1] [block 2] ... [block N]
 </pre></div></div>
+
 <p>A bulk segment whose length is <tt>n</tt> bytes consists of <tt>n div 4096</tt> block records of 4KiB each, possibly followed by a block record of <tt>n mod 4096</tt> bytes if there are remaining bytes in the segment. The structure of a bulk segment can thus be determined based only on the segment length.</p></div>
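 The layout rule above amounts to simple integer arithmetic. As a sketch (a hand-written illustration, not Oak code):

```python
def bulk_blocks(length, block_size=4096):
    """Split a bulk segment of `length` bytes into its block records:
    length div 4096 full 4 KiB blocks, plus one trailing block of
    length mod 4096 bytes if any bytes remain."""
    blocks = [block_size] * (length // block_size)
    remainder = length % block_size
    if remainder:
        blocks.append(remainder)
    return blocks

# A 10000-byte bulk segment yields two 4 KiB blocks and a 1808-byte tail.
```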
 <div class="section">
 <h2><a name="Data_segments"></a>Data segments</h2>
@@ -270,14 +271,17 @@
 <p>The segment header also maintains a set of references to <i>root records</i>: those records that are not referenced from any other records in the segment.</p>
 <p>The overall structure of a data segment is:</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">[segment header] [record 1] [record 2] ... [record N]
+<div>
+<div>
+<pre class="source">[segment header] [record 1] [record 2] ... [record N]
 </pre></div></div>
+
 <p>The segment header and each record are zero-padded to make their sizes a multiple of four bytes and to align the next record at a four-byte boundary.</p>
 <p>The segment header consists of the following fields. All integers are stored in big endian format.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">+---------+---------+---------+---------+---------+---------+---------+---------+
+<div>
+<div>
+<pre class="source">+---------+---------+---------+---------+---------+---------+---------+---------+
 | magic bytes: &quot;0aK&quot;          | version | reserved                               
 +---------+---------+---------+---------+---------+---------+---------+---------+
   reserved          | generation                            | segrefcount            
@@ -297,6 +301,7 @@
 |                                                 | padding (set to 0)          |
 +---------+---------+---------+---------+---------+---------+---------+---------+
 </pre></div></div>
+
 <p>The first three bytes of a segment always contain the ASCII string &#x201c;0aK&#x201d;, which is intended to make the binary segment data format easily detectable. The next byte indicates the version of the segment format and is currently set to 12.</p>
 <p>The <tt>generation</tt> field indicates the segment&#x2019;s generation with respect to garbage collection. This field is used by the garbage collector to determine whether a segment needs to be retained or can be collected.</p>
 <p>The <tt>segrefcount</tt> field indicates how many other segments are referenced by records within this segment. The identifiers of those segments are listed starting at offset 32 of the segment header. This lookup table is used to optimize garbage collection and to avoid having to repeat the 16-byte UUIDs whenever references to records in other segments are made.</p>
@@ -310,9 +315,11 @@
 <p>The record number field is a logical identifier for the record. The logical identifier is used as a lookup key in the record references table in the segment identified by the segment field. Once the correct row in the record references table is found, the record offset can be used to locate the position of the record in the segment.</p>
 <p>The offset is relative to the beginning of a theoretical segment which is defined to be 256 KiB. Since records are added from the bottom of a segment to the top (i.e. from higher to lower offsets), and since segments could be shrunk down to be smaller than 256 KiB, the offset has to be normalized with to the following formula.</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">SIZE - 256 KiB + OFFSET
+<div>
+<div>
+<pre class="source">SIZE - 256 KiB + OFFSET
 </pre></div></div>
+
 <p><tt>SIZE</tt> is the actual size of the segment under inspection, and <tt>OFFSET</tt> is the offset looked up from the record references table. The normalized offset can be used to locate the position of the record in the current segment.</p></div>
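 For illustration, the normalization can be expressed as a small helper (hypothetical function name, assuming sizes in bytes):

```python
THEORETICAL_SEGMENT_SIZE = 256 * 1024  # 256 KiB

def normalize_offset(segment_size, offset):
    """Translate an offset from the record references table, which is
    relative to the theoretical 256 KiB segment, into a position inside
    the actual, possibly smaller segment: SIZE - 256 KiB + OFFSET."""
    return segment_size - THEORETICAL_SEGMENT_SIZE + offset
```

For a full-sized segment the offset is unchanged; for a shrunk segment it is shifted down by the amount the segment was shortened.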
 <div class="section">
 <h2><a name="Records"></a>Records</h2>
@@ -330,54 +337,44 @@
 <p>Value records are used for storing names and values of the content tree. Since item names can be thought of as name values and since all JCR and Oak values can be expressed in binary form (strings encoded in UTF-8), it is easiest to simply use that form for storing all values. The size overhead of such a form for small value types like booleans or dates is amortized by the facts that those types are used only for a minority of values in typical content trees and that repeating copies of a value can be stored just once.</p>
 <p>There are four types of value records: small, medium, long and external. The small- and medium-sized values are stored in inline form, prepended by one or two bytes that indicate the length of the value. Long values of up to two exabytes (2^61) are stored as a list of block records. Finally an external value record contains the length of the value and a string reference (up to 4kB in length) to some external storage location.</p>
 <p>The type of a value record is encoded in the high-order bits of the first byte of the record. These bit patterns are:</p>
-
 <ul>
-  
+
 <li><tt>0xxxxxxx</tt>: small value, length (0 - 127 bytes) encoded in 7 bits</li>
-  
 <li><tt>10xxxxxx</tt>: medium value length (128 - 16511 bytes) encoded in 6 + 8 bits</li>
-  
 <li><tt>110xxxxx</tt>: long value, length (up to 2^61 bytes) encoded in 5 + 7*8 bits</li>
-  
 <li><tt>1110xxxx</tt>: external value, reference string length encoded in 4 + 8 bits</li>
 </ul></div>
 <div class="section">
 <h3><a name="List_records"></a>List records</h3>
 <p>List records represent a general-purpose list of record identifiers. They are used as building blocks for other types of records, as we saw for value records and as we will see for template records and node records.</p>
 <p>The list record is a logical record using two different types of physical records to represent itself:</p>
-
 <ul>
-  
+
 <li>
-<p>bucket record: this is a recursive record representing a list of at most 255  references. A bucket record can reference other bucket records,  hierarchically, or the record identifiers of the elements to be stored in the  list. A bucket record doesn&#x2019;t maintain any other information exception record  identifiers.</p></li>
-  
+
+<p>bucket record: this is a recursive record representing a list of at most 255 references. A bucket record can reference other bucket records, hierarchically, or the record identifiers of the elements to be stored in the list. A bucket record doesn&#x2019;t maintain any other information except record identifiers.</p>
+</li>
 <li>
-<p>list record: this is a top-level record that maintains the size of the list in  an integer field and a record identifier pointing to a bucket.</p></li>
-</ul>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">+--------+--------+--------+-----+
-| sub-list ID 1            | ... |
-+--------+--------+--------+-----+
-  |
-  v
-+--------+--------+--------+-----+--------+--------+--------+
-| record ID 1              | ... | record ID 255            |
-+--------+--------+--------+-----+--------+--------+--------+
-</pre></div></div>
+<p>list record: this is a top-level record that maintains the size of the list in an integer field and a record identifier pointing to a bucket.</p>
+<div>
+<div>
+<pre class="source">+--------+--------+--------+-----+
+| sub-list ID 1            | ... |
++--------+--------+--------+-----+
+  |
+  v
++--------+--------+--------+-----+--------+--------+--------+
+| record ID 1              | ... | record ID 255            |
++--------+--------+--------+-----+--------+--------+--------+
+</pre></div></div>
+</li>
+</ul>
 <p>The result is a hierarchically stored immutable list where each element can be accessed in O(log N) time and the size overhead of updating or appending list elements (and thus creating a new immutable list) is also O(log N).</p>
-<p>List records are useful to store a list of references to other records. If the list is too big, it is split into different bucket records that may be stored in the same segment or across segments. This guarantees good performance for small lists, without loosing the capability to store lists with a big number of elements.</p></div>
+<p>List records are useful to store a list of references to other records. If the list is too big, it is split into different bucket records that may be stored in the same segment or across segments. This guarantees good performance for small lists, without losing the capability to store lists with a big number of elements.</p></div>
 <div class="section">
 <h3><a name="Map_records"></a>Map records</h3>
 <p>Map records implement a general-purpose unordered map of strings to record identifiers. They are used for nodes with a large number of properties or child nodes. As lists they are represented using two types of physical record:</p>
-
 <ul>
-  
+
 <li>
-<p>leaf record: if the number of elements in the map is small, they are all  stored in a leaf record. This covers the simplest case for small maps.</p></li>
-  
+
+<p>leaf record: if the number of elements in the map is small, they are all stored in a leaf record. This covers the simplest case for small maps.</p>
+</li>
 <li>
-<p>branch record: if the number of elements in the map is too big, the original  map is split into smaller maps based on a hash function applied to the keys of  the map. A branch record is recursive, because it can reference other branch  records if the sub-maps are too big and need to be split again.</p></li>
+
+<p>branch record: if the number of elements in the map is too big, the original map is split into smaller maps based on a hash function applied to the keys of the map. A branch record is recursive, because it can reference other branch records if the sub-maps are too big and need to be split again.</p>
+</li>
 </ul>
 <p>Maps are stored using the hash array mapped trie (HAMT) data structure. The hash code of each key is split into pieces of 5 bits each and the keys are sorted into 32 (2^5) buckets based on the first 5 bits. If a bucket contains less than 32 entries, then it is stored directly as a list of key-value pairs. Otherwise the keys are split into sub-buckets based on the next 5 bits of their hash codes. When all buckets are stored, the list of top-level bucket references gets stored along with the total number of entries in the map.</p>
 <p>The result is a hierarchically stored immutable map where each element can be accessed in O(log N) time and the size overhead of updating or inserting list elements is also O(log N).</p>
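 The 5-bit splitting of hash codes can be sketched as follows (a simplified illustration of the bucket selection, not Oak's actual HAMT implementation):

```python
def bucket_path(hash_code):
    """Split a 32-bit hash code into 5-bit pieces, high bits first.
    Each piece selects one of the 32 (2^5) sub-buckets at the
    corresponding level of the map. The two lowest-order bits are
    ignored in this sketch."""
    return [(hash_code >> shift) & 0x1F for shift in range(27, -1, -5)]
```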
@@ -389,17 +386,21 @@
 <p>The template record allows Oak to handle simple modifications to nodes in the most efficient way possible.</p>
 <p>As such a template record describes the common structure of a family of related nodes. Since the structures of most nodes in a typical content tree fall into a small set of common templates, it makes sense to store such templates separately instead of repeating that information separately for each node. For example, the property names and types as well as child node names of all nt:file nodes are typically the same. The presence of mixins and different subtypes increases the number of different templates, but they&#x2019;re typically still far fewer than nodes in the repository.</p>
 <p>A template record consists of a set of up to N (exact size TBD, N ~ 256) property name and type pairs. Additionally, since nodes that are empty or contain just a single child node are most common, a template record also contains information whether the node has zero, one or many child nodes. In case of a single child node, the template also contains the name of that node. For example, the template for typical mix:versionable nt:file nodes would be (using CND-like notation):</p>
 
-<div class="source">
-<div class="source"><pre class="prettyprint">- jcr:primaryType (NAME)
-- jcr:mixinTypes (NAME) multiple
-- jcr:created (DATE)
-- jcr:uuid (STRING)
-- jcr:versionHistory (REFERENCE)
-- jcr:predecessors (REFERENCE) multiple
-- jcr:baseVersion (REFERENCE)
-+ jcr:content
-</pre></div></div>
+<div>
+<div>
+<pre class="source">- jcr:primaryType (NAME)
+- jcr:mixinTypes (NAME) multiple
+- jcr:created (DATE)
+- jcr:uuid (STRING)
+- jcr:versionHistory (REFERENCE)
+- jcr:predecessors (REFERENCE) multiple
+- jcr:baseVersion (REFERENCE)
++ jcr:content
+</pre></div></div>
 <p>The names used in a template are stored as separate value records and included by reference. This way multiple templates that for example all contain the &#x201c;jcr:primaryType&#x201d; property name don&#x2019;t need to repeatedly store it.</p></div>
 <div class="section">
 <h3><a name="Node_records"></a>Node records</h3>