Posted to commits@jena.apache.org by bu...@apache.org on 2015/07/26 12:50:20 UTC

svn commit: r959629 [3/10] - in /websites/staging/jena/trunk/content: ./ about_jena/ documentation/ documentation/assembler/ documentation/csv/ documentation/extras/ documentation/extras/querybuilder/ documentation/fuseki2/ documentation/hadoop/ docume...

Modified: websites/staging/jena/trunk/content/documentation/hadoop/io.html
==============================================================================
--- websites/staging/jena/trunk/content/documentation/hadoop/io.html (original)
+++ websites/staging/jena/trunk/content/documentation/hadoop/io.html Sun Jul 26 10:50:18 2015
@@ -83,8 +83,8 @@
                   <li><a href="/documentation/tdb/index.html">TDB</a></li>
                   <li><a href="/documentation/sdb/index.html">SDB</a></li>
                   <li><a href="/documentation/jdbc/index.html">SPARQL over JDBC</a></li>
-                  <li><a href="/documentation/security/index.html">Security</a></li>
                   <li><a href="/documentation/fuseki2/index.html">Fuseki</a></li>
+                  <li><a href="/documentation/permissions/index.html">Permissions</a></li>
                   <li><a href="/documentation/assembler/index.html">Assembler</a></li>
                   <li><a href="/documentation/ontology/">Ontology API</a></li>
                   <li><a href="/documentation/inference/index.html">Inference API</a></li>
@@ -144,7 +144,18 @@
     <div class="col-md-12">
     <div id="breadcrumbs"></div>
     <h1 class="title">Apache Jena Elephas - IO API</h1>
-  <p>The IO API provides support for reading and writing RDF within Apache Hadoop applications.  This is done by providing <code>InputFormat</code> and <code>OutputFormat</code> implementations that cover all the RDF serialisations that Jena supports.</p>
+  <style type="text/css">
+/* The following code is added by mdx_elementid.py
+   It was originally lifted from http://subversion.apache.org/style/site.css */
+/*
+ * Hide class="elementid-permalink", except when an enclosing heading
+ * has the :hover property.
+ */
+.headerlink, .elementid-permalink {
+  visibility: hidden;
+}
+h2:hover > .headerlink, h3:hover > .headerlink, h1:hover > .headerlink, h6:hover > .headerlink, h4:hover > .headerlink, h5:hover > .headerlink, dt:hover > .elementid-permalink { visibility: visible }</style>
+<p>The IO API provides support for reading and writing RDF within Apache Hadoop applications.  This is done by providing <code>InputFormat</code> and <code>OutputFormat</code> implementations that cover all the RDF serialisations that Jena supports.</p>
 <div class="toc">
 <ul>
 <li><a href="#background-on-hadoop-io">Background on Hadoop IO</a><ul>
@@ -185,12 +196,12 @@
 </li>
 </ul>
 </div>
-<h1 id="background-on-hadoop-io">Background on Hadoop IO</h1>
+<h1 id="background-on-hadoop-io">Background on Hadoop IO<a class="headerlink" href="#background-on-hadoop-io" title="Permanent link">&para;</a></h1>
 <p>If you are already familiar with the Hadoop IO paradigm then please skip this section; if not, please read on, as otherwise some of the later information will not make much sense.</p>
 <p>Hadoop applications, and Map/Reduce in particular, exploit horizontal scalability by dividing input data up into <em>splits</em>, where each <em>split</em> represents a portion of the input data that can be read in <em>isolation</em> from the other pieces.  This <em>isolation</em> property is very important to understand: if a file format requires that the entire file be read sequentially in order to properly interpret it then it cannot be split and must be read as a whole.</p>
 <p>Therefore, depending on the file formats used for your input data, you may not get as much parallel performance because Hadoop's ability to <em>split</em> the input data may be limited.</p>
 <p>In some cases a file format may be processed in multiple ways, i.e. you can <em>split</em> it into pieces or process it as a whole.  Which approach you wish to use will depend on whether you have a single file to process or many files.  In the case of many files, processing files as a whole may provide better overall throughput than processing them as chunks.  However your mileage may vary, especially if your input data has many files of uneven size.</p>
-<h2 id="compressed-io">Compressed IO</h2>
+<h2 id="compressed-io">Compressed IO<a class="headerlink" href="#compressed-io" title="Permanent link">&para;</a></h2>
 <p>Hadoop natively provides support for compressed input and output, provided your Hadoop cluster is appropriately configured.  The advantage of compressing the input/output data is that there is less IO workload on the cluster; however, this comes with the disadvantage that most compression formats block Hadoop's ability to <em>split</em> up the input.</p>
 <p>Hadoop generally handles compression automatically and all our input and output formats are capable of handling compressed input and output as necessary.  However, in order to use this, your Hadoop cluster/job configuration must be appropriately configured to inform Hadoop about which compression codecs are in use.</p>
 <p>For example to enable BZip2 compression (assuming your cluster doesn't enable this by default):</p>
@@ -202,9 +213,9 @@
 
 
 <p>See the Javadocs for the Hadoop <a href="https://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/compress/CompressionCodec.html">CompressionCodec</a> API to see the available out of the box implementations.  Note that some clusters may provide additional compression codecs beyond those built directly into Hadoop.</p>
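As an illustration, registering the BZip2 codec on a job configuration might look like the following minimal sketch (the <code>io.compression.codecs</code> property and the codec classes are standard Hadoop built-ins; your cluster's defaults may already cover them):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Minimal sketch: make the BZip2 codec available to a job.
// Property name and codec classes are standard Hadoop built-ins,
// but clusters may already configure these by default.
Configuration config = new Configuration(true);
config.set("io.compression.codecs",
    "org.apache.hadoop.io.compress.DefaultCodec,"
    + "org.apache.hadoop.io.compress.BZip2Codec");
Job job = Job.getInstance(config);
```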
-<h1 id="rdf-io-in-hadoop">RDF IO in Hadoop</h1>
+<h1 id="rdf-io-in-hadoop">RDF IO in Hadoop<a class="headerlink" href="#rdf-io-in-hadoop" title="Permanent link">&para;</a></h1>
 <p>There are a wide range of RDF serialisations supported by ARQ; please see the <a href="../io/">RDF IO</a> page for an overview of the formats that Jena supports.  In this section we go into much more depth on how exactly we support RDF IO in Hadoop.</p>
-<h2 id="input">Input</h2>
+<h2 id="input">Input<a class="headerlink" href="#input" title="Permanent link">&para;</a></h2>
 <p>One of the difficulties posed when wrapping these for Hadoop IO is that the formats have very different properties in terms of our ability to <em>split</em> them into distinct chunks for Hadoop to process.  So we categorise the possible ways to process RDF inputs as follows:</p>
 <ol>
 <li>Line Based - Each line of the input is processed as a single record</li>
@@ -212,15 +223,15 @@
 <li>Whole File - The input is processed as a whole</li>
 </ol>
 <p>There is then also the question of whether a serialisation encodes triples, quads or can encode both.  Where a serialisation encodes both we provide two variants of it so you can choose whether you want to process it as triples/quads.</p>
-<h3 id="blank-nodes-in-input">Blank Nodes in Input</h3>
+<h3 id="blank-nodes-in-input">Blank Nodes in Input<a class="headerlink" href="#blank-nodes-in-input" title="Permanent link">&para;</a></h3>
 <p>Readers familiar with RDF may be wondering how we cope with blank nodes when splitting input; this is an important issue to address.</p>
 <p>Essentially Jena contains functionality that allows it to predictably generate identifiers from the original identifier present in the file e.g. <code>_:blank</code>.  This means that wherever <code>_:blank</code> appears in the original file we are guaranteed to assign it the same internal identifier.  Note that this functionality uses a seed value to ensure that blank nodes coming from different input files are not assigned the same identifier.</p>
 <p>When used with Hadoop this seed is chosen based on a combination of the Job ID and the input file path.  This means that the same file processed by different jobs will produce different blank node identifiers each time.  However within a job every read of the file will predictably generate blank node identifiers so splitting does not prevent correct blank node identification.</p>
 <p>Additionally, the binary serialisation we use for our RDF primitives (described on the <a href="common.html">Common API</a> page) guarantees that internal identifiers are preserved as-is when communicating values across the cluster.</p>
-<h3 id="mixed-inputs">Mixed Inputs</h3>
+<h3 id="mixed-inputs">Mixed Inputs<a class="headerlink" href="#mixed-inputs" title="Permanent link">&para;</a></h3>
 <p>In many cases your input data may be in a variety of different RDF formats, in which case we have you covered.  The <code>TriplesInputFormat</code>, <code>QuadsInputFormat</code> and <code>TriplesOrQuadsInputFormat</code> can handle a mixture of triples, quads, or both as desired.  Note that in the case of <code>TriplesOrQuadsInputFormat</code> any triples are up-cast into quads in the default graph.</p>
 <p>With mixed inputs the specific input format to use for each file is determined from its file extension; unrecognised extensions will result in an <code>IOException</code>.  Compression is handled automatically; you simply need to name your files appropriately to indicate the type of compression used e.g. <code>example.ttl.gz</code> would be treated as GZipped Turtle (if you've used a decent compression tool it should have done this for you).  The downside of mixed inputs is that the format is decided quite late, which means inputs are always processed as whole files: the format cannot be chosen until after Hadoop has asked to split the inputs.</p>
-<h2 id="output">Output</h2>
+<h2 id="output">Output<a class="headerlink" href="#output" title="Permanent link">&para;</a></h2>
 <p>As with input we also need to be careful about how we output RDF data.  Similar to input some serialisations can be output in a streaming fashion while other serialisations require us to store up all the data and then write it out in one go at the end.  We use the same categorisations for output though the meanings are slightly different:</p>
 <ol>
 <li>Line Based - Each record is written as soon as it is received</li>
@@ -228,19 +239,19 @@
 <li>Whole File - Records are cached until the end of output and then the entire output is written in one go</li>
 </ol>
 <p>However both the batch based and whole file approaches have the downside that it is possible to exhaust memory if you have large amounts of output to process (or set the batch size too high for batch based output).</p>
-<h3 id="blank-nodes-in-output">Blank Nodes in Output</h3>
+<h3 id="blank-nodes-in-output">Blank Nodes in Output<a class="headerlink" href="#blank-nodes-in-output" title="Permanent link">&para;</a></h3>
 <p>As with input, blank nodes are a complicating factor in producing RDF output.  For whole file output formats this is not an issue but it does need to be considered for line and batch based formats.</p>
 <p>However, what we have found in practice is that the Jena writers will predictably map internal identifiers to the blank node identifiers in the output serialisations.  What this means is that even when processing output in batches we've found that using the line/batch based formats correctly preserves blank node identity.</p>
 <p>If you are concerned about potential data corruption as a result of this then you should make sure to always choose a whole file output format, but be aware that this can exhaust memory if your output is large.</p>
-<h4 id="blank-node-divergence-in-multi-stage-pipelines">Blank Node Divergence in multi-stage pipelines</h4>
+<h4 id="blank-node-divergence-in-multi-stage-pipelines">Blank Node Divergence in multi-stage pipelines<a class="headerlink" href="#blank-node-divergence-in-multi-stage-pipelines" title="Permanent link">&para;</a></h4>
 <p>The other thing to consider with regard to blank nodes in output is that Hadoop will by default create multiple output files (one for each reducer), so even if consistent and valid blank nodes are output they may be spread over multiple files.</p>
 <p>In multi-stage pipelines you may need to manually concatenate these files back together (assuming they are in a format that allows this e.g. NTriples) as otherwise when you pass them as input to the next job the blank node identifiers will diverge from each other.  <a href="https://issues.apache.org/jira/browse/JENA-820">JENA-820</a> discusses this problem and introduces a special configuration setting that can be used to resolve this.  Note that even with this setting enabled some formats are not capable of respecting it, see the later section on <a href="#job-configuration-options">Job Configuration Options</a> for more details.</p>
 <p>An alternative workaround is to always use RDF Thrift as the intermediate output format since it preserves blank node identifiers precisely as they are seen.  This also has the advantage that RDF Thrift is extremely fast to read and write which can speed up multi-stage pipelines considerably.</p>
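A hedged sketch of this workaround follows; the <code>ThriftTripleOutputFormat</code> and <code>ThriftTripleInputFormat</code> class names are assumed from Elephas naming conventions, so verify them against the Javadocs before use:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Sketch: write RDF Thrift between pipeline stages so that blank node
// identifiers are preserved exactly on both write and read.
// Format class names are assumed from Elephas conventions - verify
// against the Elephas Javadocs.
stage1.setOutputFormatClass(ThriftTripleOutputFormat.class);
FileOutputFormat.setOutputPath(stage1, new Path("/users/example/intermediate"));

stage2.setInputFormatClass(ThriftTripleInputFormat.class);
FileInputFormat.setInputPaths(stage2, new Path("/users/example/intermediate"));
```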
-<h3 id="node-output-format">Node Output Format</h3>
+<h3 id="node-output-format">Node Output Format<a class="headerlink" href="#node-output-format" title="Permanent link">&para;</a></h3>
 <p>We also include a special <code>NTriplesNodeOutputFormat</code> which is capable of outputting pairs composed of a <code>NodeWritable</code> key and any value type.  Think of this as being similar to the standard Hadoop <code>TextOutputFormat</code> except that it understands how to format nodes as valid NTriples serialisation.  This format is useful when performing simple statistical analysis such as node usage counts or other calculations over nodes.</p>
 <p>In the case where the value of the key value pair is also an RDF primitive, proper NTriples formatting is also applied to each of the nodes in the value.</p>
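For example, a job intending to emit per-node counts could be configured along these lines (a sketch; <code>NodeWritable</code> is from the Common API and <code>LongWritable</code> is standard Hadoop):

```java
// Sketch: write (node, count) pairs as NTriples-formatted text output.
// The value type chosen here (LongWritable) is illustrative.
job.setOutputFormatClass(NTriplesNodeOutputFormat.class);
job.setOutputKeyClass(NodeWritable.class);
job.setOutputValueClass(LongWritable.class);
```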
-<h2 id="rdf-serialisation-support">RDF Serialisation Support</h2>
-<h3 id="input_1">Input</h3>
+<h2 id="rdf-serialisation-support">RDF Serialisation Support<a class="headerlink" href="#rdf-serialisation-support" title="Permanent link">&para;</a></h2>
+<h3 id="input_1">Input<a class="headerlink" href="#input_1" title="Permanent link">&para;</a></h3>
 <p>The following table categorises how each supported RDF serialisation is processed for input.  Note that in some cases we offer multiple ways to process a serialisation.</p>
 <table>
   <tr>
@@ -269,7 +280,7 @@
   <tr><td>RDF Thrift</td><td>No</td><td>No</td><td>Yes</td></tr>
 </table>
 
-<h3 id="output_1">Output</h3>
+<h3 id="output_1">Output<a class="headerlink" href="#output_1" title="Permanent link">&para;</a></h3>
 <p>The following table categorises how each supported RDF serialisation can be processed for output.  As with input some serialisations may be processed in multiple ways.</p>
 <table>
   <tr>
@@ -298,7 +309,7 @@
   <tr><td>RDF Thrift</td><td>Yes</td><td>No</td><td>No</td></tr>
 </table>
 
-<h2 id="job-setup">Job Setup</h2>
+<h2 id="job-setup">Job Setup<a class="headerlink" href="#job-setup" title="Permanent link">&para;</a></h2>
 <p>To use RDF as an input and/or output format you will need to configure your Job appropriately; this requires setting the input/output format and setting the data paths:</p>
 <div class="codehilite"><pre><span class="c1">// Create a job using default configuration</span>
 <span class="n">Job</span> <span class="n">job</span> <span class="o">=</span> <span class="n">Job</span><span class="p">.</span><span class="n">getInstance</span><span class="p">(</span><span class="k">new</span> <span class="n">Configuration</span><span class="p">(</span><span class="n">true</span><span class="p">));</span>
@@ -317,9 +328,9 @@
 
 <p>This example takes input in Turtle format from the directory <code>/users/example/input</code> and outputs the end results in NTriples in the directory <code>/users/example/output</code>.</p>
 <p>Take a look at the <a href="../javadoc/hadoop/io/">Javadocs</a> to find the actual available input and output format implementations.</p>
-<h3 id="job-configuration-options">Job Configuration Options</h3>
+<h3 id="job-configuration-options">Job Configuration Options<a class="headerlink" href="#job-configuration-options" title="Permanent link">&para;</a></h3>
 <p>There are several useful configuration options that can be used to tweak the behaviour of the RDF IO functionality if desired.</p>
-<h4 id="input-lines-per-batch">Input Lines per Batch</h4>
+<h4 id="input-lines-per-batch">Input Lines per Batch<a class="headerlink" href="#input-lines-per-batch" title="Permanent link">&para;</a></h4>
 <p>Since our line based input formats use the standard Hadoop <code>NLineInputFormat</code> to decide how to split up inputs we support the standard <code>mapreduce.input.lineinputformat.linespermap</code> configuration setting for changing the number of lines processed per map.</p>
 <p>You can set this directly in your configuration:</p>
 <div class="codehilite"><pre><span class="n">job</span><span class="p">.</span><span class="n">getConfiguration</span><span class="p">().</span><span class="n">setInt</span><span class="p">(</span><span class="n">NLineInputFormat</span><span class="p">.</span><span class="n">LINES_PER_MAP</span><span class="p">,</span> 100<span class="p">);</span>
@@ -331,19 +342,19 @@
 </pre></div>
 
 
-<h4 id="max-line-length">Max Line Length</h4>
+<h4 id="max-line-length">Max Line Length<a class="headerlink" href="#max-line-length" title="Permanent link">&para;</a></h4>
 <p>When using line based inputs it may be desirable to ignore lines that exceed a certain length (for example if you are not interested in really long literals).  Again we use the standard Hadoop configuration setting <code>mapreduce.input.linerecordreader.line.maxlength</code> to control this behaviour:</p>
 <div class="codehilite"><pre><span class="n">job</span><span class="p">.</span><span class="n">getConfiguration</span><span class="p">().</span><span class="n">setInt</span><span class="p">(</span><span class="n">HadoopIOConstants</span><span class="p">.</span><span class="n">MAX_LINE_LENGTH</span><span class="p">,</span> 8192<span class="p">);</span>
 </pre></div>
 
 
-<h4 id="ignoring-bad-tuples">Ignoring Bad Tuples</h4>
+<h4 id="ignoring-bad-tuples">Ignoring Bad Tuples<a class="headerlink" href="#ignoring-bad-tuples" title="Permanent link">&para;</a></h4>
 <p>In many cases you may have data that you know contains invalid tuples; in such cases it can be useful to just ignore the bad tuples and continue.  By default we enable this behaviour and will skip over bad tuples, though they will be logged as an error.  If you want you can disable this behaviour by setting the <code>rdf.io.input.ignore-bad-tuples</code> configuration setting:</p>
 <div class="codehilite"><pre><span class="n">job</span><span class="p">.</span><span class="n">getConfiguration</span><span class="p">().</span><span class="n">setBoolean</span><span class="p">(</span><span class="n">RdfIOConstants</span><span class="p">.</span><span class="n">INPUT_IGNORE_BAD_TUPLES</span><span class="p">,</span> <span class="n">false</span><span class="p">);</span>
 </pre></div>
 
 
-<h4 id="global-blank-node-identity">Global Blank Node Identity</h4>
+<h4 id="global-blank-node-identity">Global Blank Node Identity<a class="headerlink" href="#global-blank-node-identity" title="Permanent link">&para;</a></h4>
 <p>The default behaviour of this library is to allocate file scoped blank node identifiers in such a way that the same syntactic identifier read from the same file is allocated the same blank node ID even across input splits within a job.  Conversely the same syntactic identifier in different input files will result in different blank nodes within a job.</p>
 <p>However as discussed earlier in the case of multi-stage jobs the intermediate outputs may be split over several files which can cause the blank node identifiers to diverge from each other when they are read back in by subsequent jobs.  For multi-stage jobs this is often (but not always) incorrect and undesirable behaviour in which case you will need to set the <code>rdf.io.input.bnodes.global-identity</code> property to true for the subsequent jobs:</p>
 <div class="codehilite"><pre><span class="n">job</span><span class="p">.</span><span class="n">getConfiguration</span><span class="p">().</span><span class="n">setBoolean</span><span class="p">(</span><span class="n">RdfIOConstants</span><span class="p">.</span><span class="n">GLOBAL_BNODE_IDENTITY</span><span class="p">,</span> <span class="n">true</span><span class="p">);</span>
@@ -353,7 +364,7 @@
 <p><strong>Important</strong> - This should only be set for the later jobs in a multi-stage pipeline and should rarely (if ever) be set for single jobs or the first job of a pipeline.</p>
 <p>Even with this setting enabled not all formats are capable of honouring this option, RDF/XML and JSON-LD will ignore this option and should be avoided as intermediate output formats.</p>
 <p>As noted earlier an alternative workaround to enabling this setting is to instead use RDF Thrift as the intermediate output format since it guarantees to preserve blank node identifiers as-is on both reads and writes.</p>
-<h4 id="output-batch-size">Output Batch Size</h4>
+<h4 id="output-batch-size">Output Batch Size<a class="headerlink" href="#output-batch-size" title="Permanent link">&para;</a></h4>
 <p>The batch size for batched output formats can be controlled by setting the <code>rdf.io.output.batch-size</code> property as desired.  The default value for this if not explicitly configured is 10,000:</p>
 <div class="codehilite"><pre><span class="n">job</span><span class="p">.</span><span class="n">getConfiguration</span><span class="p">().</span><span class="n">setInt</span><span class="p">(</span><span class="n">RdfIOConstants</span><span class="p">.</span><span class="n">OUTPUT_BATCH_SIZE</span><span class="p">,</span> 25000<span class="p">);</span>
 </pre></div>

Modified: websites/staging/jena/trunk/content/documentation/hadoop/mapred.html
==============================================================================
--- websites/staging/jena/trunk/content/documentation/hadoop/mapred.html (original)
+++ websites/staging/jena/trunk/content/documentation/hadoop/mapred.html Sun Jul 26 10:50:18 2015
@@ -83,8 +83,8 @@
                   <li><a href="/documentation/tdb/index.html">TDB</a></li>
                   <li><a href="/documentation/sdb/index.html">SDB</a></li>
                   <li><a href="/documentation/jdbc/index.html">SPARQL over JDBC</a></li>
-                  <li><a href="/documentation/security/index.html">Security</a></li>
                   <li><a href="/documentation/fuseki2/index.html">Fuseki</a></li>
+                  <li><a href="/documentation/permissions/index.html">Permissions</a></li>
                   <li><a href="/documentation/assembler/index.html">Assembler</a></li>
                   <li><a href="/documentation/ontology/">Ontology API</a></li>
                   <li><a href="/documentation/inference/index.html">Inference API</a></li>
@@ -144,7 +144,18 @@
     <div class="col-md-12">
     <div id="breadcrumbs"></div>
     <h1 class="title">Apache Jena Elephas - Map/Reduce API</h1>
-  <p>The Map/Reduce API provides a range of building block <code>Mapper</code> and <code>Reducer</code> implementations that can be used as a starting point for building Map/Reduce applications that process RDF.  Typically more complex applications will need to implement their own variants but these basic ones may still prove useful as part of a larger pipeline.</p>
+  <style type="text/css">
+/* The following code is added by mdx_elementid.py
+   It was originally lifted from http://subversion.apache.org/style/site.css */
+/*
+ * Hide class="elementid-permalink", except when an enclosing heading
+ * has the :hover property.
+ */
+.headerlink, .elementid-permalink {
+  visibility: hidden;
+}
+h2:hover > .headerlink, h3:hover > .headerlink, h1:hover > .headerlink, h6:hover > .headerlink, h4:hover > .headerlink, h5:hover > .headerlink, dt:hover > .elementid-permalink { visibility: visible }</style>
+<p>The Map/Reduce API provides a range of building block <code>Mapper</code> and <code>Reducer</code> implementations that can be used as a starting point for building Map/Reduce applications that process RDF.  Typically more complex applications will need to implement their own variants but these basic ones may still prove useful as part of a larger pipeline.</p>
 <div class="toc">
 <ul>
 <li><a href="#tasks">Tasks</a><ul>
@@ -171,7 +182,7 @@
 </li>
 </ul>
 </div>
-<h1 id="tasks">Tasks</h1>
+<h1 id="tasks">Tasks<a class="headerlink" href="#tasks" title="Permanent link">&para;</a></h1>
 <p>The API is divided based upon implementations that support various common Hadoop tasks, with appropriate <code>Mapper</code> and <code>Reducer</code> implementations provided for each.  In most cases these are at least partially abstract to make it easy to implement customised versions.</p>
 <p>The following common tasks are supported:</p>
 <ul>
@@ -182,22 +193,22 @@
 <li>Transforming</li>
 </ul>
 <p>Note that standard Map/Reduce programming rules apply as normal.  For example if a mapper/reducer transforms between data types then you need to make <code>setMapOutputKeyClass()</code>, <code>setMapOutputValueClass()</code>, <code>setOutputKeyClass()</code> and <code>setOutputValueClass()</code> calls on your Job configuration as necessary.</p>
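For instance, a sketch of declaring the types for a job whose mapper emits different types than it consumes (the type choices here are illustrative, not prescribed by the API):

```java
// Sketch: declare intermediate and final key/value types explicitly
// when the mapper transforms between data types.
job.setMapOutputKeyClass(NodeWritable.class);
job.setMapOutputValueClass(LongWritable.class);
job.setOutputKeyClass(NodeWritable.class);
job.setOutputValueClass(LongWritable.class);
```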
-<h2 id="counting">Counting</h2>
+<h2 id="counting">Counting<a class="headerlink" href="#counting" title="Permanent link">&para;</a></h2>
 <p>Counting is one of the classic Map/Reduce tasks and features as the official Map/Reduce example for both Hadoop itself and Elephas.  Implementations cover a number of different counting tasks that you might want to carry out upon RDF data; in most cases you will use the desired <code>Mapper</code> implementation in conjunction with the <code>NodeCountReducer</code>.</p>
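A minimal sketch of wiring up such a counting job, using the class names given in the text:

```java
// Sketch: count node usages in triple data by pairing a counting
// mapper with the NodeCountReducer.
job.setMapperClass(TripleNodeCountMapper.class);
job.setReducerClass(NodeCountReducer.class);
```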
-<h3 id="node-usage">Node Usage</h3>
+<h3 id="node-usage">Node Usage<a class="headerlink" href="#node-usage" title="Permanent link">&para;</a></h3>
 <p>The simplest type of counting supported is to count the usages of individual RDF nodes within the triples/quads.  Depending on whether your data is triples/quads you can use either the <code>TripleNodeCountMapper</code> or the <code>QuadNodeCountMapper</code>.</p>
 <p>If you want to count only usages of RDF nodes in a specific position then we also provide variants for that, for example <code>TripleSubjectCountMapper</code> counts only RDF nodes present in the subject position.  You can substitute <code>Predicate</code> or <code>Object</code> into the class name in place of <code>Subject</code> if you prefer to count just RDF nodes in the predicate/object position instead.  Similarly, replace <code>Triple</code> with <code>Quad</code> if you wish to count usage of RDF nodes in specific positions of quads; there is an additional <code>QuadGraphCountMapper</code> if you want to calculate the size of graphs.</p>
-<h3 id="literal-data-types">Literal Data Types</h3>
+<h3 id="literal-data-types">Literal Data Types<a class="headerlink" href="#literal-data-types" title="Permanent link">&para;</a></h3>
 <p>Another interesting variant of counting is to count the usage of literal data types; you can use the <code>TripleDataTypeCountMapper</code> or <code>QuadDataTypeCountMapper</code> if you want to do this.</p>
-<h3 id="namespaces">Namespaces</h3>
+<h3 id="namespaces">Namespaces<a class="headerlink" href="#namespaces" title="Permanent link">&para;</a></h3>
 <p>Finally, you may be interested in the usage of namespaces within your data; in this case the <code>TripleNamespaceCountMapper</code> or <code>QuadNamespaceCountMapper</code> can be used.  For this use case you should use the <code>TextCountReducer</code> to total up the counts for each namespace.  Note that the mappers determine the namespace for a URI simply by splitting after the last <code>#</code> or <code>/</code> in the URI; if no such character exists then the full URI is considered to be the namespace.</p>
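The namespace rule described here can be sketched as plain Java (the helper class is illustrative only, not part of the Elephas API):

```java
// Illustrative helper (not an Elephas class): derive a namespace by
// splitting after the last '#' or '/'; if neither occurs, the whole
// URI is treated as the namespace.
public class NamespaceUtil {
    public static String namespace(String uri) {
        int idx = Math.max(uri.lastIndexOf('#'), uri.lastIndexOf('/'));
        return idx >= 0 ? uri.substring(0, idx + 1) : uri;
    }
}
```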
-<h2 id="filtering">Filtering</h2>
+<h2 id="filtering">Filtering<a class="headerlink" href="#filtering" title="Permanent link">&para;</a></h2>
 <p>Filtering is another classic Map/Reduce use case: here you want to take the data and extract only the portions that you are interested in based on some criteria.  All our filter <code>Mapper</code> implementations also support a Job configuration option named <code>rdf.mapreduce.filter.invert</code> allowing their effects to be inverted if desired e.g.</p>
 <div class="codehilite"><pre><span class="n">config</span><span class="p">.</span><span class="n">setBoolean</span><span class="p">(</span><span class="n">RdfMapReduceConstants</span><span class="p">.</span><span class="n">FILTER_INVERT</span><span class="p">,</span> <span class="n">true</span><span class="p">);</span>
 </pre></div>
 
 
-<h3 id="valid-data">Valid Data</h3>
+<h3 id="valid-data">Valid Data<a class="headerlink" href="#valid-data" title="Permanent link">&para;</a></h3>
 <p>One type of filter that may be useful, particularly if you are generating RDF data that may not be strict RDF, is provided by the <code>ValidTripleFilterMapper</code> and the <code>ValidQuadFilterMapper</code>.  These filters only keep triples/quads that are valid according to strict RDF semantics i.e.</p>
 <ul>
 <li>Subject can only be URI/Blank Node</li>
@@ -206,9 +217,9 @@
 <li>Graph can only be a URI or Blank Node</li>
 </ul>
 <p>If you wanted to extract only the bad data e.g. for debugging then you can of course invert these filters by setting <code>rdf.mapreduce.filter.invert</code> to <code>true</code> as shown above.</p>
-<h3 id="ground-data">Ground Data</h3>
+<h3 id="ground-data">Ground Data<a class="headerlink" href="#ground-data" title="Permanent link">&para;</a></h3>
 <p>In some cases you may only be interested in triples/quads that are grounded, i.e. don't contain blank nodes, in which case the <code>GroundTripleFilterMapper</code> and <code>GroundQuadFilterMapper</code> can be used.</p>
-<h3 id="data-with-a-specific-uri">Data with a specific URI</h3>
+<h3 id="data-with-a-specific-uri">Data with a specific URI<a class="headerlink" href="#data-with-a-specific-uri" title="Permanent link">&para;</a></h3>
 <p>In many cases you may want to extract only data where a specific URI occurs in a specific position; for example, if you wanted to extract all the <code>rdf:type</code> declarations then you might use the <code>TripleFilterByPredicateUriMapper</code> or <code>QuadFilterByPredicateUriMapper</code> as appropriate.  The job configuration option <code>rdf.mapreduce.filter.predicate.uris</code> is used to provide a comma separated list of the full URIs you want the filter to accept e.g.</p>
 <div class="codehilite"><pre><span class="n">config</span><span class="p">.</span><span class="n">set</span><span class="p">(</span><span class="n">RdfMapReduceConstants</span><span class="p">.</span><span class="n">FILTER_PREDICATE_URIS</span><span class="p">,</span> &quot;<span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">example</span><span class="p">.</span><span class="n">org</span><span class="o">/</span><span class="n">predicate</span><span class="p">,</span><span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">another</span><span class="p">.</span><span class="n">org</span><span class="o">/</span><span class="n">predicate</span>&quot;<span class="p">);</span>
 </pre></div>
@@ -219,19 +230,19 @@
 </pre></div>
 
 
-<h2 id="grouping">Grouping</h2>
+<h2 id="grouping">Grouping<a class="headerlink" href="#grouping" title="Permanent link">&para;</a></h2>
 <p>Grouping is another frequent Map/Reduce use case; here we provide implementations that allow you to group triples or quads by a specific RDF node within the triples/quads, e.g. by subject.  For example, to group quads by predicate use the <code>QuadGroupByPredicateMapper</code>; as with filtering and counting, you can substitute <code>Subject</code>, <code>Object</code> or <code>Graph</code> for <code>Predicate</code> if you wish to group by another node of the triple/quad.</p>
-<h2 id="splitting">Splitting</h2>
+<h2 id="splitting">Splitting<a class="headerlink" href="#splitting" title="Permanent link">&para;</a></h2>
 <p>Splitting allows you to split triples/quads into their constituent RDF nodes; we provide two kinds of splitting:</p>
 <ul>
 <li>To Nodes - Splits pairs of arbitrary keys with triple/quad values into several pairs of the key with the nodes as the values</li>
 <li>With Nodes - Splits pairs of arbitrary keys with triple/quad values, keeping the triple/quad as the key and the nodes as the values.</li>
 </ul>
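The "With Nodes" variant above can be sketched in plain Java as follows; each triple yields one pair per constituent node, with the triple itself as the key. The `Triple` record and helper method are hypothetical illustrations, not the Elephas API.

```java
import java.util.*;

public class SplitWithNodesSketch {
    // Minimal stand-in for Jena's Triple type.
    record Triple(String s, String p, String o) { }

    // Emulates the "With Nodes" splitter: one (triple, node) pair per
    // constituent node of the triple, keeping the triple as the key.
    static List<Map.Entry<Triple, String>> splitWithNodes(Triple t) {
        return List.of(
            Map.entry(t, t.s()),
            Map.entry(t, t.p()),
            Map.entry(t, t.o()));
    }

    public static void main(String[] args) {
        Triple t = new Triple("ex:s", "ex:p", "ex:o");
        // A triple splits into three (triple, node) pairs.
        System.out.println(splitWithNodes(t).size()); // 3
    }
}
```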
-<h2 id="transforming">Transforming</h2>
+<h2 id="transforming">Transforming<a class="headerlink" href="#transforming" title="Permanent link">&para;</a></h2>
 <p>Transforming provides some very simple implementations that allow you to convert between triples and quads.  For the lossy case of going from quads to triples simply use the <code>QuadsToTriplesMapper</code>.</p>
 <p>If you want to go the other way - triples to quads - this requires adding a graph field to each triple, and we provide two implementations that do that.  Firstly there is <code>TriplesToQuadsBySubjectMapper</code>, which puts each triple into a graph based on its subject i.e. all triples with a common subject go into a graph named for the subject.  Secondly there is <code>TriplesToQuadsConstantGraphMapper</code>, which simply puts all triples into the default graph; if you wish to change the target graph you should extend this class.  If you want to select the graph based on some arbitrary criteria you should look at extending <code>AbstractTriplesToQuadsMapper</code> instead.</p>
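The two triples-to-quads strategies above can be sketched in plain Java. This is an illustration of the mapping logic only; the record types and helper methods are hypothetical stand-ins for Jena's <code>Triple</code>/<code>Quad</code> and the Elephas mapper classes.

```java
public class TriplesToQuadsSketch {
    // Minimal stand-ins for Jena's Triple and Quad types.
    record Triple(String s, String p, String o) { }
    record Quad(String g, String s, String p, String o) { }

    // Emulates TriplesToQuadsBySubjectMapper: every triple is placed in a
    // graph named for its subject, so triples sharing a subject share a graph.
    static Quad bySubject(Triple t) {
        return new Quad(t.s(), t.s(), t.p(), t.o());
    }

    // Emulates TriplesToQuadsConstantGraphMapper: all triples go into one
    // constant graph; subclasses of the real mapper change the graph used.
    static Quad constantGraph(Triple t, String graph) {
        return new Quad(graph, t.s(), t.p(), t.o());
    }

    public static void main(String[] args) {
        Triple t = new Triple("ex:s", "ex:p", "ex:o");
        System.out.println(bySubject(t).g()); // ex:s
    }
}
```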
-<h1 id="example-jobs">Example Jobs</h1>
-<h2 id="node-count">Node Count</h2>
+<h1 id="example-jobs">Example Jobs<a class="headerlink" href="#example-jobs" title="Permanent link">&para;</a></h1>
+<h2 id="node-count">Node Count<a class="headerlink" href="#node-count" title="Permanent link">&para;</a></h2>
 <p>The following example shows how to configure a job which performs a node count i.e. counts the usages of RDF terms (aka nodes in Jena parlance) within the data:</p>
 <div class="codehilite"><pre><span class="c1">// Assumes we have already created a Hadoop Configuration </span>
 <span class="c1">// and stored it in the variable config</span>

Modified: websites/staging/jena/trunk/content/documentation/index.html
==============================================================================
--- websites/staging/jena/trunk/content/documentation/index.html (original)
+++ websites/staging/jena/trunk/content/documentation/index.html Sun Jul 26 10:50:18 2015
@@ -83,8 +83,8 @@
                   <li><a href="/documentation/tdb/index.html">TDB</a></li>
                   <li><a href="/documentation/sdb/index.html">SDB</a></li>
                   <li><a href="/documentation/jdbc/index.html">SPARQL over JDBC</a></li>
-                  <li><a href="/documentation/security/index.html">Security</a></li>
                   <li><a href="/documentation/fuseki2/index.html">Fuseki</a></li>
+                  <li><a href="/documentation/permissions/index.html">Permissions</a></li>
                   <li><a href="/documentation/assembler/index.html">Assembler</a></li>
                   <li><a href="/documentation/ontology/">Ontology API</a></li>
                   <li><a href="/documentation/inference/index.html">Inference API</a></li>
@@ -144,11 +144,22 @@
     <div class="col-md-12">
     <div id="breadcrumbs"></div>
     <h1 class="title">Jena documentation overview</h1>
-  <p>This section contains detailed information about the various Jena
+  <style type="text/css">
+/* The following code is added by mdx_elementid.py
+   It was originally lifted from http://subversion.apache.org/style/site.css */
+/*
+ * Hide class="elementid-permalink", except when an enclosing heading
+ * has the :hover property.
+ */
+.headerlink, .elementid-permalink {
+  visibility: hidden;
+}
+h2:hover > .headerlink, h3:hover > .headerlink, h1:hover > .headerlink, h6:hover > .headerlink, h4:hover > .headerlink, h5:hover > .headerlink, dt:hover > .elementid-permalink { visibility: visible }</style>
+<p>This section contains detailed information about the various Jena
 sub-systems, aimed at developers using Jena. For more general introductions,
 please refer to the <a href="/getting_started/">Getting started</a> and <a href="/tutorials/">Tutorial</a>
 sections.</p>
-<h2 id="documentation-index">Documentation index</h2>
+<h2 id="documentation-index">Documentation index<a class="headerlink" href="#documentation-index" title="Permanent link">&para;</a></h2>
 <ul>
 <li><a href="./rdf/">The RDF API</a> - the core RDF API in Jena</li>
 <li><a href="./query/">SPARQL</a> - querying and updating RDF models using the SPARQL standards</li>

Modified: websites/staging/jena/trunk/content/documentation/inference/index.html
==============================================================================
--- websites/staging/jena/trunk/content/documentation/inference/index.html (original)
+++ websites/staging/jena/trunk/content/documentation/inference/index.html Sun Jul 26 10:50:18 2015
@@ -83,8 +83,8 @@
                   <li><a href="/documentation/tdb/index.html">TDB</a></li>
                   <li><a href="/documentation/sdb/index.html">SDB</a></li>
                   <li><a href="/documentation/jdbc/index.html">SPARQL over JDBC</a></li>
-                  <li><a href="/documentation/security/index.html">Security</a></li>
                   <li><a href="/documentation/fuseki2/index.html">Fuseki</a></li>
+                  <li><a href="/documentation/permissions/index.html">Permissions</a></li>
                   <li><a href="/documentation/assembler/index.html">Assembler</a></li>
                   <li><a href="/documentation/ontology/">Ontology API</a></li>
                   <li><a href="/documentation/inference/index.html">Inference API</a></li>

Modified: websites/staging/jena/trunk/content/documentation/io/arp.html
==============================================================================
--- websites/staging/jena/trunk/content/documentation/io/arp.html (original)
+++ websites/staging/jena/trunk/content/documentation/io/arp.html Sun Jul 26 10:50:18 2015
@@ -83,8 +83,8 @@
                   <li><a href="/documentation/tdb/index.html">TDB</a></li>
                   <li><a href="/documentation/sdb/index.html">SDB</a></li>
                   <li><a href="/documentation/jdbc/index.html">SPARQL over JDBC</a></li>
-                  <li><a href="/documentation/security/index.html">Security</a></li>
                   <li><a href="/documentation/fuseki2/index.html">Fuseki</a></li>
+                  <li><a href="/documentation/permissions/index.html">Permissions</a></li>
                   <li><a href="/documentation/assembler/index.html">Assembler</a></li>
                   <li><a href="/documentation/ontology/">Ontology API</a></li>
                   <li><a href="/documentation/inference/index.html">Inference API</a></li>
@@ -144,14 +144,25 @@
     <div class="col-md-12">
     <div id="breadcrumbs"></div>
     <h1 class="title">RDF/XML Handling in Jena</h1>
-  <p>This section details the Jena RDF/XML parser.
+  <style type="text/css">
+/* The following code is added by mdx_elementid.py
+   It was originally lifted from http://subversion.apache.org/style/site.css */
+/*
+ * Hide class="elementid-permalink", except when an enclosing heading
+ * has the :hover property.
+ */
+.headerlink, .elementid-permalink {
+  visibility: hidden;
+}
+h2:hover > .headerlink, h3:hover > .headerlink, h1:hover > .headerlink, h6:hover > .headerlink, h4:hover > .headerlink, h5:hover > .headerlink, dt:hover > .elementid-permalink { visibility: visible }</style>
+<p>This section details the Jena RDF/XML parser.
 ARP is the parsing subsystem in Jena for handling the RDF/XML syntax.</p>
 <ul>
 <li><a href="#arp-features">ARP Features</a></li>
 <li><a href="arp_standalone.html">Using ARP without Jena</a></li>
 <li><a href="arp_sax.html">Using other SAX and DOM XML sources</a></li>
 </ul>
-<h2 id="arp-features">ARP Features</h2>
+<h2 id="arp-features">ARP Features<a class="headerlink" href="#arp-features" title="Permanent link">&para;</a></h2>
 <ul>
 <li>Java based RDF parser.</li>
 <li>Compliant with

Modified: websites/staging/jena/trunk/content/documentation/io/arp_sax.html
==============================================================================
--- websites/staging/jena/trunk/content/documentation/io/arp_sax.html (original)
+++ websites/staging/jena/trunk/content/documentation/io/arp_sax.html Sun Jul 26 10:50:18 2015
@@ -83,8 +83,8 @@
                   <li><a href="/documentation/tdb/index.html">TDB</a></li>
                   <li><a href="/documentation/sdb/index.html">SDB</a></li>
                   <li><a href="/documentation/jdbc/index.html">SPARQL over JDBC</a></li>
-                  <li><a href="/documentation/security/index.html">Security</a></li>
                   <li><a href="/documentation/fuseki2/index.html">Fuseki</a></li>
+                  <li><a href="/documentation/permissions/index.html">Permissions</a></li>
                   <li><a href="/documentation/assembler/index.html">Assembler</a></li>
                   <li><a href="/documentation/ontology/">Ontology API</a></li>
                   <li><a href="/documentation/inference/index.html">Inference API</a></li>

Modified: websites/staging/jena/trunk/content/documentation/io/arp_standalone.html
==============================================================================
--- websites/staging/jena/trunk/content/documentation/io/arp_standalone.html (original)
+++ websites/staging/jena/trunk/content/documentation/io/arp_standalone.html Sun Jul 26 10:50:18 2015
@@ -83,8 +83,8 @@
                   <li><a href="/documentation/tdb/index.html">TDB</a></li>
                   <li><a href="/documentation/sdb/index.html">SDB</a></li>
                   <li><a href="/documentation/jdbc/index.html">SPARQL over JDBC</a></li>
-                  <li><a href="/documentation/security/index.html">Security</a></li>
                   <li><a href="/documentation/fuseki2/index.html">Fuseki</a></li>
+                  <li><a href="/documentation/permissions/index.html">Permissions</a></li>
                   <li><a href="/documentation/assembler/index.html">Assembler</a></li>
                   <li><a href="/documentation/ontology/">Ontology API</a></li>
                   <li><a href="/documentation/inference/index.html">Inference API</a></li>

Modified: websites/staging/jena/trunk/content/documentation/io/index.html
==============================================================================
--- websites/staging/jena/trunk/content/documentation/io/index.html (original)
+++ websites/staging/jena/trunk/content/documentation/io/index.html Sun Jul 26 10:50:18 2015
@@ -83,8 +83,8 @@
                   <li><a href="/documentation/tdb/index.html">TDB</a></li>
                   <li><a href="/documentation/sdb/index.html">SDB</a></li>
                   <li><a href="/documentation/jdbc/index.html">SPARQL over JDBC</a></li>
-                  <li><a href="/documentation/security/index.html">Security</a></li>
                   <li><a href="/documentation/fuseki2/index.html">Fuseki</a></li>
+                  <li><a href="/documentation/permissions/index.html">Permissions</a></li>
                   <li><a href="/documentation/assembler/index.html">Assembler</a></li>
                   <li><a href="/documentation/ontology/">Ontology API</a></li>
                   <li><a href="/documentation/inference/index.html">Inference API</a></li>
@@ -144,7 +144,18 @@
     <div class="col-md-12">
     <div id="breadcrumbs"></div>
     <h1 class="title">Reading and Writing RDF in Apache Jena</h1>
-  <p>This page details the setup of RDF I/O technology (RIOT).</p>
+  <style type="text/css">
+/* The following code is added by mdx_elementid.py
+   It was originally lifted from http://subversion.apache.org/style/site.css */
+/*
+ * Hide class="elementid-permalink", except when an enclosing heading
+ * has the :hover property.
+ */
+.headerlink, .elementid-permalink {
+  visibility: hidden;
+}
+h2:hover > .headerlink, h3:hover > .headerlink, h1:hover > .headerlink, h6:hover > .headerlink, h4:hover > .headerlink, h5:hover > .headerlink, dt:hover > .elementid-permalink { visibility: visible }</style>
+<p>This page details the setup of RDF I/O technology (RIOT).</p>
 <ul>
 <li><a href="#formats">Formats</a></li>
 <li><a href="#command-line-tools">Commands</a></li>
@@ -153,7 +164,7 @@
 <li><a href="streaming-io.html">Working with RDF Streams</a></li>
 <li><a href="rdfxml_howto.html">Additional details on working with RDF/XML</a></li>
 </ul>
-<h2 id="formats">Formats</h2>
+<h2 id="formats">Formats<a class="headerlink" href="#formats" title="Permanent link">&para;</a></h2>
 <p>The following RDF formats are supported by Jena. In addition, other syntaxes
 can be integrated into both the parser and writer registries.</p>
 <ul>
@@ -172,7 +183,7 @@ See the <a href="rdf-json.html">descript
 <p>RDF Thrift is a binary encoding of RDF (graphs and datasets) that can be useful
 for fast parsing.  See the 
 <a href="http://afs.github.io/rdf-thrift">description of RDF Thrift</a>.</p>
-<h2 id="command-line-tools">Command line tools</h2>
+<h2 id="command-line-tools">Command line tools<a class="headerlink" href="#command-line-tools" title="Permanent link">&para;</a></h2>
 <p>There are scripts in the Jena download to run these commands.</p>
 <ul>
 <li><code>riot</code> - parse, guessing the syntax from the file extension.
@@ -181,7 +192,7 @@ for fast parsing.  See the
 </ul>
 <p>These can be called directly as Java programs:</p>
 <p>The file extensions understood are:</p>
-<table>
+<table class="table">
 <thead>
 <tr>
 <th>&nbsp;Extension&nbsp;</th>
@@ -263,7 +274,7 @@ utility which reads a file of bytes as U
 <ul>
 <li><code>utf8</code> -- read bytes as UTF8</li>
 </ul>
-<h2 id="inference">Inference</h2>
+<h2 id="inference">Inference<a class="headerlink" href="#inference" title="Permanent link">&para;</a></h2>
 <p>RIOT supports the creation of inferred triples during the parsing
 process:</p>
 <div class="codehilite"><pre><span class="n">riotcmd</span><span class="p">.</span><span class="n">infer</span> <span class="o">--</span><span class="n">rdfs</span> <span class="n">VOCAB</span> <span class="n">FILE</span> <span class="n">FILE</span> <span class="p">...</span>

Modified: websites/staging/jena/trunk/content/documentation/io/rdf-input.html
==============================================================================
--- websites/staging/jena/trunk/content/documentation/io/rdf-input.html (original)
+++ websites/staging/jena/trunk/content/documentation/io/rdf-input.html Sun Jul 26 10:50:18 2015
@@ -83,8 +83,8 @@
                   <li><a href="/documentation/tdb/index.html">TDB</a></li>
                   <li><a href="/documentation/sdb/index.html">SDB</a></li>
                   <li><a href="/documentation/jdbc/index.html">SPARQL over JDBC</a></li>
-                  <li><a href="/documentation/security/index.html">Security</a></li>
                   <li><a href="/documentation/fuseki2/index.html">Fuseki</a></li>
+                  <li><a href="/documentation/permissions/index.html">Permissions</a></li>
                   <li><a href="/documentation/assembler/index.html">Assembler</a></li>
                   <li><a href="/documentation/ontology/">Ontology API</a></li>
                   <li><a href="/documentation/inference/index.html">Inference API</a></li>
@@ -144,7 +144,18 @@
     <div class="col-md-12">
     <div id="breadcrumbs"></div>
     <h1 class="title">Reading RDF in Apache Jena</h1>
-  <p>This page details the setup of RDF I/O technology (RIOT) for input 
+  <style type="text/css">
+/* The following code is added by mdx_elementid.py
+   It was originally lifted from http://subversion.apache.org/style/site.css */
+/*
+ * Hide class="elementid-permalink", except when an enclosing heading
+ * has the :hover property.
+ */
+.headerlink, .elementid-permalink {
+  visibility: hidden;
+}
+h2:hover > .headerlink, h3:hover > .headerlink, h1:hover > .headerlink, h6:hover > .headerlink, h4:hover > .headerlink, h5:hover > .headerlink, dt:hover > .elementid-permalink { visibility: visible }</style>
+<p>This page details the setup of RDF I/O technology (RIOT) for input 
 introduced in Jena 2.10.</p>
 <p>See <a href="rdf-output.html">Writing RDF</a> for details of the RIOT Writer system.</p>
 <ul>
@@ -168,13 +179,13 @@ introduced in Jena 2.10.</p>
 </li>
 </ul>
 <p>Full details of operations are given in the javadoc.</p>
-<h2 id="api">API</h2>
+<h2 id="api">API<a class="headerlink" href="#api" title="Permanent link">&para;</a></h2>
 <p>Much of the functionality is accessed via the Jena Model API; direct
 calling of the RIOT subsystem isn't needed.  A resource name
 with no URI scheme is assumed to be a local file name.</p>
 <p>Applications typically use at most <code>RDFDataMgr</code> to read RDF datasets.</p>
 <p>The major classes in the RIOT API are:</p>
-<table>
+<table class="table">
 <thead>
 <tr>
 <th>Class</th>
@@ -204,14 +215,14 @@ with no URI scheme is assumed to be a lo
 </tr>
 </tbody>
 </table>
-<h3 id="determining-the-rdf-syntax">Determining the RDF syntax</h3>
+<h3 id="determining-the-rdf-syntax">Determining the RDF syntax<a class="headerlink" href="#determining-the-rdf-syntax" title="Permanent link">&para;</a></h3>
 <p>The syntax of the RDF file is determined by the content type (if an HTTP
 request), then the file extension if there is no content type. Content type 
 <code>text/plain</code> is ignored; it is assumed to be the type returned for an unconfigured
 HTTP server. The application can also pass in a declared language hint.</p>
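The precedence just described can be sketched in plain Java. This helper is purely hypothetical - the real logic lives inside RIOT - and it assumes the hint acts as a fallback when neither content type nor extension decides; consult the <code>RDFDataMgr</code> javadoc for the exact behaviour.

```java
public class SyntaxSelectionSketch {
    // Sketch of the documented order for choosing the RDF syntax:
    // HTTP content type first (ignoring text/plain), then the file
    // extension, then any language hint supplied by the application.
    static String chooseSyntax(String contentType, String fileExtension, String hint) {
        if (contentType != null && !contentType.equals("text/plain"))
            return "by content type: " + contentType;
        if (fileExtension != null)
            return "by extension: " + fileExtension;
        return hint != null ? "by hint: " + hint : "unknown";
    }

    public static void main(String[] args) {
        // text/plain is ignored, so the extension decides here.
        System.out.println(chooseSyntax("text/plain", "ttl", null)); // by extension: ttl
        System.out.println(chooseSyntax("text/turtle", "nt", null)); // by content type: text/turtle
    }
}
```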
 <p>The string name traditionally used in <code>model.read</code> is mapped to RIOT <code>Lang</code>
 as:</p>
-<table>
+<table class="table">
 <thead>
 <tr>
 <th>Jena reader</th>
@@ -281,7 +292,7 @@ as:</p>
 </pre></div>
 
 
-<h3 id="example-1-common-usage">Example 1 : Common usage</h3>
+<h3 id="example-1-common-usage">Example 1 : Common usage<a class="headerlink" href="#example-1-common-usage" title="Permanent link">&para;</a></h3>
 <p>In this example, a file in the current directory is read as Turtle.</p>
 <div class="codehilite"><pre>Model model = ModelFactory.createDefaultModel() ;
   model.read("data.ttl") ;</pre></div>
@@ -290,7 +301,7 @@ as:</p>
 </pre></div>
 
 
-<h3 id="example-2-using-the-rdfdatamgr">Example 2 : Using the RDFDataMgr</h3>
+<h3 id="example-2-using-the-rdfdatamgr">Example 2 : Using the RDFDataMgr<a class="headerlink" href="#example-2-using-the-rdfdatamgr" title="Permanent link">&para;</a></h3>
 <p>In versions of Jena prior to 2.10.0, the <code>FileManager</code> provided some of
 this functionality. It was more basic, and not properly web enabled.  The
 <code>RDFDataMgr</code> supersedes the <code>FileManager</code>.  "load*" operations create an
@@ -309,10 +320,10 @@ add data into an existing model or datas
 </pre></div>
 
 
-<h2 id="logging">Logging</h2>
+<h2 id="logging">Logging<a class="headerlink" href="#logging" title="Permanent link">&para;</a></h2>
 <p>The parsers log to a logger called <code>org.apache.jena.riot</code>.  To avoid <code>WARN</code>
 messages, set this in log4j.properties to <code>ERROR</code>.</p>
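Concretely, the logging advice above corresponds to a one-line entry in a <code>log4j.properties</code> file on the classpath (file name and location are the usual log4j 1.x convention, not something mandated by Jena):

```properties
# Suppress RIOT parser WARN messages
log4j.logger.org.apache.jena.riot=ERROR
```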
-<h2 id="streammanager-and-locationmapper">StreamManager and LocationMapper</h2>
+<h2 id="streammanager-and-locationmapper">StreamManager and LocationMapper<a class="headerlink" href="#streammanager-and-locationmapper" title="Permanent link">&para;</a></h2>
 <p>By default, the <code>RDFDataMgr</code> uses the global <code>StreamManager</code> to open typed
 InputStreams.  This is available to applications via <code>RDFDataMgr.open</code> as well as directly
 using a <code>StreamManager</code>.</p>
@@ -328,7 +339,7 @@ data:</p>
 <li>Class loader locator</li>
 <li>Zip file locator</li>
 </ul>
-<h3 id="configuring-a-streammanager">Configuring a <code>StreamManager</code></h3>
+<h3 id="configuring-a-streammanager">Configuring a <code>StreamManager</code><a class="headerlink" href="#configuring-a-streammanager" title="Permanent link">&para;</a></h3>
 <p>The <code>StreamManager</code> can be reconfigured with different places to look for
 files.  The default configuration used for the global <code>StreamManager</code> is
 a file access class, where the current directory is that of the java
@@ -338,7 +349,7 @@ either as the global set up, </p>
 <p>There is also a <code>LocationMapper</code> for rewriting file names and URLs before
 use, to allow placing known names in different places (e.g. having local
 copies of imported HTTP resources).</p>
-<h3 id="configuring-a-locationmapper">Configuring a <code>LocationMapper</code></h3>
+<h3 id="configuring-a-locationmapper">Configuring a <code>LocationMapper</code><a class="headerlink" href="#configuring-a-locationmapper" title="Permanent link">&para;</a></h3>
 <p>Location mapping files are RDF, usually written in Turtle although
 any RDF syntax can be used.</p>
 <div class="codehilite"><pre><span class="p">@</span><span class="n">prefix</span> <span class="n">lm</span><span class="p">:</span> <span class="o">&lt;</span><span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">jena</span><span class="p">.</span><span class="n">hpl</span><span class="p">.</span><span class="n">hp</span><span class="p">.</span><span class="n">com</span><span class="o">/</span>2004<span class="o">/</span>08<span class="o">/</span><span class="n">location</span><span class="o">-</span><span class="n">mapping</span>#<span class="o">&gt;</span>
@@ -387,15 +398,15 @@ contain ':'.</p>
 <p>Applications can also set mappings programmatically. No
 configuration file is necessary.</p>
 <p>The base URI for reading models will be the original URI, not the alternative location.</p>
-<h3 id="debugging">Debugging</h3>
+<h3 id="debugging">Debugging<a class="headerlink" href="#debugging" title="Permanent link">&para;</a></h3>
 <p>Using log4j, set the logging level of the classes:</p>
 <ul>
 <li>org.apache.jena.riot.stream.StreamManager</li>
 <li>org.apache.jena.riot.stream.LocationMapper</li>
 </ul>
-<h2 id="advanced-examples">Advanced examples</h2>
+<h2 id="advanced-examples">Advanced examples<a class="headerlink" href="#advanced-examples" title="Permanent link">&para;</a></h2>
 <p>Example code may be found in <a href="https://github.com/apache/jena/tree/master/jena-arq/src-examples/arq/examples/riot/">jena-arq/src-examples</a>.</p>
-<h3 id="iterating-over-parser-output">Iterating over parser output</h3>
+<h3 id="iterating-over-parser-output">Iterating over parser output<a class="headerlink" href="#iterating-over-parser-output" title="Permanent link">&para;</a></h3>
 <p>One of the capabilities of the RIOT API is the ability to treat parser output as an iterator; 
 this is useful when you don't want to go to the trouble of writing a full sink implementation and can easily express your
 logic in normal iterator style.</p>
@@ -409,12 +420,12 @@ production of data to run ahead of your
 as otherwise you can run into a deadlock situation where one is waiting on data from the other which is never started.</p>
 <p>See <a href="https://github.com/apache/jena/tree/master/jena-arq/src-examples/arq/examples/riot/ExRIOT_6.java">RIOT example 6</a> 
 which shows an example usage including a simple way to push the parser onto a different thread to avoid the possible deadlock.</p>
-<h3 id="filter-the-output-of-parsing">Filter the output of parsing</h3>
+<h3 id="filter-the-output-of-parsing">Filter the output of parsing<a class="headerlink" href="#filter-the-output-of-parsing" title="Permanent link">&para;</a></h3>
 <p>When working with very large files, it can be useful to 
 process the stream of triples or quads produced
 by the parser so as to work in a streaming fashion.</p>
 <p>See <a href="https://github.com/apache/jena/tree/master/jena-arq/src-examples/arq/examples/riot/ExRIOT_4.java">RIOT example 4</a></p>
-<h3 id="add-a-new-language">Add a new language</h3>
+<h3 id="add-a-new-language">Add a new language<a class="headerlink" href="#add-a-new-language" title="Permanent link">&para;</a></h3>
 <p>The set of languages is not fixed. A new language, 
 together with a parser, can be added to RIOT as shown in
 <a href="https://github.com/apache/jena/tree/master/jena-arq/src-examples/arq/examples/riot/ExRIOT_5.java">RIOT example 5</a></p>

Modified: websites/staging/jena/trunk/content/documentation/io/rdf-output.html
==============================================================================
--- websites/staging/jena/trunk/content/documentation/io/rdf-output.html (original)
+++ websites/staging/jena/trunk/content/documentation/io/rdf-output.html Sun Jul 26 10:50:18 2015
@@ -83,8 +83,8 @@
                   <li><a href="/documentation/tdb/index.html">TDB</a></li>
                   <li><a href="/documentation/sdb/index.html">SDB</a></li>
                   <li><a href="/documentation/jdbc/index.html">SPARQL over JDBC</a></li>
-                  <li><a href="/documentation/security/index.html">Security</a></li>
                   <li><a href="/documentation/fuseki2/index.html">Fuseki</a></li>
+                  <li><a href="/documentation/permissions/index.html">Permissions</a></li>
                   <li><a href="/documentation/assembler/index.html">Assembler</a></li>
                   <li><a href="/documentation/ontology/">Ontology API</a></li>
                   <li><a href="/documentation/inference/index.html">Inference API</a></li>
@@ -144,7 +144,18 @@
     <div class="col-md-12">
     <div id="breadcrumbs"></div>
     <h1 class="title">Writing RDF in Apache Jena</h1>
-  <p>This page describes the RIOT (RDF I/O technology) output capabilities
+  <style type="text/css">
+/* The following code is added by mdx_elementid.py
+   It was originally lifted from http://subversion.apache.org/style/site.css */
+/*
+ * Hide class="elementid-permalink", except when an enclosing heading
+ * has the :hover property.
+ */
+.headerlink, .elementid-permalink {
+  visibility: hidden;
+}
+h2:hover > .headerlink, h3:hover > .headerlink, h1:hover > .headerlink, h6:hover > .headerlink, h4:hover > .headerlink, h5:hover > .headerlink, dt:hover > .elementid-permalink { visibility: visible }</style>
+<p>This page describes the RIOT (RDF I/O technology) output capabilities
 introduced in Jena 2.10.1.</p>
 <p>See <a href="rdf-input.html">Reading RDF</a> for details of the RIOT Reader system.</p>
 <ul>
@@ -164,7 +175,7 @@ introduced in Jena 2.10.1.</p>
 </ul>
 <p>See <a href="rdfxml_howto.html#advanced-rdfxml-output">Advanced RDF/XML Output</a> 
 for details of the Jena RDF/XML writer.</p>
-<h2 id="api">API</h2>
+<h2 id="api">API<a class="headerlink" href="#api" title="Permanent link">&para;</a></h2>
 <p>There are two ways to write RDF data using Apache Jena RIOT, 
 either via the <code>RDFDataMgr</code> </p>
 <div class="codehilite"><pre><span class="n">RDFDataMgr</span><span class="p">.</span><span class="n">write</span><span class="p">(</span><span class="n">OutputStream</span><span class="p">,</span> <span class="n">Model</span><span class="p">,</span> <span class="n">Lang</span><span class="p">)</span> <span class="p">;</span>
@@ -182,7 +193,7 @@ either via the <code>RDFDataMgr</code> <
 <p>The <em><code>format</code></em> names are <a href="#jena_model_write_formats">described below</a>; they include the
 names Jena has supported before RIOT.</p>
 <p>Many variations of these methods exist.  See the full javadoc for details.</p>
-<h2 id="rdfformat"><code>RDFFormat</code></h2>
+<h2 id="rdfformat"><code>RDFFormat</code><a class="headerlink" href="#rdfformat" title="Permanent link">&para;</a></h2>
 <p>Output using RIOT depends on the format, which involves both the language (syntax)
 being written and the variant of that syntax. </p>
 <p>The RIOT writer architecture is extensible.  The following languages
@@ -208,10 +219,10 @@ for the standard supported formats.</p>
 <li>RDF/JSON is not JSON-LD. See the <a href="rdf-json.html">description of RDF/JSON</a>.</li>
 <li>N3 is treated as Turtle for output.</li>
 </ul>
-<h2 id="jena_model_write_formats"><code>RDFFormat</code>s and Jena syntax names</h2>
+<h2 id="jena_model_write_formats"><code>RDFFormat</code>s and Jena syntax names<a class="headerlink" href="#jena_model_write_formats" title="Permanent link">&para;</a></h2>
 <p>The string name traditionally used in <code>model.write</code> is mapped to RIOT <code>RDFFormat</code>
 as follows:</p>
-<table>
+<table class="table">
 <thead>
 <tr>
 <th>Jena writer name</th>
@@ -269,11 +280,11 @@ as follows:</p>
 </tr>
 </tbody>
 </table>
-<h2 id="formats">Formats</h2>
-<h3 id="normal-printing">Normal Printing</h3>
+<h2 id="formats">Formats<a class="headerlink" href="#formats" title="Permanent link">&para;</a></h2>
+<h3 id="normal-printing">Normal Printing<a class="headerlink" href="#normal-printing" title="Permanent link">&para;</a></h3>
 <p>A <code>Lang</code> can be used for the writer format, in which case it is mapped to
 an <code>RDFFormat</code> internally.  The normal writers are:</p>
-<table>
+<table class="table">
 <thead>
 <tr>
 <th>RDFFormat or Lang</th>
@@ -324,7 +335,7 @@ an <code>RDFFormat</code> internally.  T
 </tbody>
 </table>
 <p>Pretty printed RDF/XML is also known as RDF/XML-ABBREV.</p>
-<h3 id="pretty-printed-languages">Pretty Printed Languages</h3>
+<h3 id="pretty-printed-languages">Pretty Printed Languages<a class="headerlink" href="#pretty-printed-languages" title="Permanent link">&para;</a></h3>
 <p>All Turtle and TriG formats use
 prefix names, and short forms for literals.</p>
 <p>The pretty printed versions of Turtle and TriG print 
@@ -355,7 +366,7 @@ or write N-triples/N-Quads.</p>
 
 
 <p>Pretty printed formats:</p>
-<table>
+<table class="table">
 <thead>
 <tr>
 <th>RDFFormat</th>
@@ -377,7 +388,7 @@ or write N-triples/N-Quads.</p>
 </tr>
 </tbody>
 </table>
-<h3 id="streamed-block-formats">Streamed Block Formats</h3>
+<h3 id="streamed-block-formats">Streamed Block Formats<a class="headerlink" href="#streamed-block-formats" title="Permanent link">&para;</a></h3>
 <p>Fully pretty printed formats cannot be streamed.  They require analysis
 of all of the data to be written in order to choose the short forms.  This limits
 their use in fully scalable applications.</p>
@@ -416,7 +427,7 @@ to use the short label form.</p>
 
 
 <p>Formats:</p>
-<table>
+<table class="table">
 <thead>
 <tr>
 <th>RDFFormat</th>
@@ -431,7 +442,7 @@ to use the short label form.</p>
 </tr>
 </tbody>
 </table>
-<h3 id="line-printed-formats">Line printed formats</h3>
+<h3 id="line-printed-formats">Line printed formats<a class="headerlink" href="#line-printed-formats" title="Permanent link">&para;</a></h3>
 <p>There are writers for Turtle and Trig that use the abbreviated formats for
 prefix names and short forms for literals. They write each triple or quad
 on a single line.</p>
@@ -459,7 +470,7 @@ but always writes one complete triple on
 
 <p>&nbsp;</p>
 
-<table>
+<table class="table">
 <thead>
 <tr>
 <th>RDFFormat</th>
@@ -474,7 +485,7 @@ but always writes one complete triple on
 </tr>
 </tbody>
 </table>
-<h3 id="n-triples-and-n-quads">N-Triples and N-Quads</h3>
+<h3 id="n-triples-and-n-quads">N-Triples and N-Quads<a class="headerlink" href="#n-triples-and-n-quads" title="Permanent link">&para;</a></h3>
 <p>These provide the formats that are fastest to write, 
 and data of any size can be output.  They do not use any
 internal state and formats always stream without limitation.</p>
@@ -503,7 +514,7 @@ needing any writer state.</p>
 
 <p>&nbsp;</p>
 
-<table>
+<table class="table">
 <thead>
 <tr>
 <th>RDFFormat</th>
@@ -528,7 +539,7 @@ needing any writer state.</p>
 <p>The main N-Triples and N-Quads writers follow RDF 1.1 and output using UTF-8.<br />
 For compatibility with old software, writers are provided that output
 in ASCII (using <code>\u</code> escape sequences for non-ASCII characters where necessary).</p>
-<table>
+<table class="table">
 <thead>
 <tr>
 <th>RDFFormat</th>
@@ -543,11 +554,11 @@ in ASCII (using <code>\u</code> escape s
 </tr>
 </tbody>
 </table>
-<h3 id="rdf-thrift">RDF Thrift</h3>
+<h3 id="rdf-thrift">RDF Thrift<a class="headerlink" href="#rdf-thrift" title="Permanent link">&para;</a></h3>
 <p>The <a href="http://afs.github.io/rdf-thrift">RDF Thrift</a> format is a binary encoding of RDF Graphs
 and RDF Datasets, as well as SPARQL Result Sets, that can provide efficient parsing
 compared to the text-based standardised syntaxes such as N-Triples, Turtle or RDF/XML.</p>
-<table>
+<table class="table">
 <thead>
 <tr>
 <th>RDFFormat</th>
@@ -566,10 +577,10 @@ compared to the text-based standardised
 not as lexical format and datatype.  See the 
 <a href="http://afs.github.io/rdf-thrift">description of RDF Thrift</a>
 for details.</p>
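A write/read round trip through the binary encoding can be sketched as below. This is an illustrative example only (hypothetical class name and `example.org` URIs), assuming a Jena 3.x classpath.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.riot.Lang;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.riot.RDFFormat;

public class ThriftRoundTrip {
    // Write a model in the RDF Thrift binary encoding, then parse
    // it back and check the copy is isomorphic with the original.
    static boolean roundTrip() {
        Model model = ModelFactory.createDefaultModel();
        model.add(model.createResource("http://example.org/s"),
                  model.createProperty("http://example.org/p"),
                  model.createTypedLiteral(42));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        RDFDataMgr.write(out, model, RDFFormat.RDF_THRIFT);

        Model copy = ModelFactory.createDefaultModel();
        RDFDataMgr.read(copy, new ByteArrayInputStream(out.toByteArray()),
                        Lang.RDFTHRIFT);
        return copy.isIsomorphicWith(model);
    }

    public static void main(String[] args) {
        System.out.println(roundTrip());
    }
}
```

The value-based variant mentioned above would be selected by passing `RDFFormat.RDF_THRIFT_VALUES` instead.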
-<h3 id="rdfxml">RDF/XML</h3>
+<h3 id="rdfxml">RDF/XML<a class="headerlink" href="#rdfxml" title="Permanent link">&para;</a></h3>
 <p>RIOT supports output in RDF/XML. In RIOT, the <code>RDFFormat</code> defaults to pretty printed RDF/XML,
 while the Jena writer name defaults to streaming plain output.</p>
-<table>
+<table class="table">
 <thead>
 <tr>
 <th>RDFFormat</th>
@@ -590,9 +601,9 @@ while the jena writer writer name defaul
 </tr>
 </tbody>
 </table>
-<h2 id="examples">Examples</h2>
+<h2 id="examples">Examples<a class="headerlink" href="#examples" title="Permanent link">&para;</a></h2>
 <p>Example code may be found in <a href="https://github.com/apache/jena/tree/master/jena-arq/src-examples/arq/examples/riot/">jena-arq/src-examples</a>.</p>
-<h3 id="ways-to-write-a-model">Ways to write a model</h3>
+<h3 id="ways-to-write-a-model">Ways to write a model<a class="headerlink" href="#ways-to-write-a-model" title="Permanent link">&para;</a></h3>
 <p>The following are different ways to write a model in Turtle:</p>
 <div class="codehilite"><pre>    <span class="n">Model</span> <span class="n">model</span> <span class="p">=</span>  <span class="p">...</span> <span class="p">;</span>
 
@@ -607,7 +618,7 @@ while the jena writer writer name defaul
 </pre></div>
 
 
-<h3 id="ways-to-write-a-dataset">Ways to write a dataset</h3>
+<h3 id="ways-to-write-a-dataset">Ways to write a dataset<a class="headerlink" href="#ways-to-write-a-dataset" title="Permanent link">&para;</a></h3>
 <p>The preferred style is to use <code>RDFDataMgr</code>:</p>
 <div class="codehilite"><pre><span class="n">Dataset</span> <span class="n">ds</span> <span class="p">=</span> <span class="p">....</span> <span class="p">;</span>
 <span class="o">//</span> <span class="n">Write</span> <span class="n">as</span> <span class="n">TriG</span>
@@ -643,10 +654,10 @@ the default graph of the dataset.</p>
 </pre></div>
 
 
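The `RDFDataMgr` style of dataset writing can be shown end-to-end. This is a self-contained sketch, not code from the page: the class name, graph URI and literal are made up, and a Jena 3.x classpath is assumed.

```java
import java.io.ByteArrayOutputStream;

import org.apache.jena.query.Dataset;
import org.apache.jena.query.DatasetFactory;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.riot.RDFFormat;

public class WriteDatasetTriG {
    // Build an in-memory dataset with one named graph and write the
    // whole dataset (default graph plus named graphs) as TriG.
    static String writeTrig() {
        Model m = ModelFactory.createDefaultModel();
        m.add(m.createResource("http://example.org/s"),
              m.createProperty("http://example.org/p"),
              "object value");
        Dataset ds = DatasetFactory.create();
        ds.addNamedModel("http://example.org/graph", m);

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        RDFDataMgr.write(out, ds, RDFFormat.TRIG);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.print(writeTrig());
    }
}
```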
-<h3 id="adding-a-new-output-format">Adding a new output format</h3>
+<h3 id="adding-a-new-output-format">Adding a new output format<a class="headerlink" href="#adding-a-new-output-format" title="Permanent link">&para;</a></h3>
 <p>A complete example of adding a new output format is given in the example file: 
 <a href="https://github.com/apache/jena/tree/master/jena-arq/src-examples/arq/examples/riot/ExRIOT_out3.java">RIOT Output example 3</a></p>
-<h2 id="notes">Notes</h2>
+<h2 id="notes">Notes<a class="headerlink" href="#notes" title="Permanent link">&para;</a></h2>
 <p>Using <code>OutputStream</code>s is strongly encouraged.  This allows the writers
 to manage the character encoding using UTF-8.  Using <code>java.io.Writer</code> 
 does not allow this; on platforms such as MS Windows, the default

Modified: websites/staging/jena/trunk/content/documentation/io/rdfxml_howto.html
==============================================================================
--- websites/staging/jena/trunk/content/documentation/io/rdfxml_howto.html (original)
+++ websites/staging/jena/trunk/content/documentation/io/rdfxml_howto.html Sun Jul 26 10:50:18 2015
@@ -83,8 +83,8 @@
                   <li><a href="/documentation/tdb/index.html">TDB</a></li>
                   <li><a href="/documentation/sdb/index.html">SDB</a></li>
                   <li><a href="/documentation/jdbc/index.html">SPARQL over JDBC</a></li>
-                  <li><a href="/documentation/security/index.html">Security</a></li>
                   <li><a href="/documentation/fuseki2/index.html">Fuseki</a></li>
+                  <li><a href="/documentation/permissions/index.html">Permissions</a></li>
                   <li><a href="/documentation/assembler/index.html">Assembler</a></li>
                   <li><a href="/documentation/ontology/">Ontology API</a></li>
                   <li><a href="/documentation/inference/index.html">Inference API</a></li>

Modified: websites/staging/jena/trunk/content/documentation/io/streaming-io.html
==============================================================================
--- websites/staging/jena/trunk/content/documentation/io/streaming-io.html (original)
+++ websites/staging/jena/trunk/content/documentation/io/streaming-io.html Sun Jul 26 10:50:18 2015
@@ -83,8 +83,8 @@
                   <li><a href="/documentation/tdb/index.html">TDB</a></li>
                   <li><a href="/documentation/sdb/index.html">SDB</a></li>
                   <li><a href="/documentation/jdbc/index.html">SPARQL over JDBC</a></li>
-                  <li><a href="/documentation/security/index.html">Security</a></li>
                   <li><a href="/documentation/fuseki2/index.html">Fuseki</a></li>
+                  <li><a href="/documentation/permissions/index.html">Permissions</a></li>
                   <li><a href="/documentation/assembler/index.html">Assembler</a></li>
                   <li><a href="/documentation/ontology/">Ontology API</a></li>
                   <li><a href="/documentation/inference/index.html">Inference API</a></li>
@@ -144,7 +144,18 @@
     <div class="col-md-12">
     <div id="breadcrumbs"></div>
     <h1 class="title">Working with RDF Streams in Apache Jena</h1>
-  <p>Jena has operations useful in processing RDF in a streaming
+  <style type="text/css">
+/* The following code is added by mdx_elementid.py
+   It was originally lifted from http://subversion.apache.org/style/site.css */
+/*
+ * Hide class="elementid-permalink", except when an enclosing heading
+ * has the :hover property.
+ */
+.headerlink, .elementid-permalink {
+  visibility: hidden;
+}
+h2:hover > .headerlink, h3:hover > .headerlink, h1:hover > .headerlink, h6:hover > .headerlink, h4:hover > .headerlink, h5:hover > .headerlink, dt:hover > .elementid-permalink { visibility: visible }</style>
+<p>Jena has operations useful in processing RDF in a streaming
 fashion. Streaming can be used for manipulating RDF at scale.  Jena
 provides high performance readers and writers for all standard RDF formats,
 and it can be extended with custom formats.</p>
@@ -154,7 +165,7 @@ input parsing performance using W3C Stan
 <p>Files ending in <code>.gz</code> are assumed to be gzip-compressed. Input and output
 to such files take this into account, including looking for the other file
 extension.  <code>data.nt.gz</code> is parsed as a gzip-compressed N-Triples file.</p>
-<h2 id="streamrdf">StreamRDF</h2>
+<h2 id="streamrdf">StreamRDF<a class="headerlink" href="#streamrdf" title="Permanent link">&para;</a></h2>
 <p>The central abstraction is 
 <a href="/documentation/javadoc/arq/org/apache/jena/riot/system/StreamRDF.html"><code>StreamRDF</code></a>
 which is an interface for streamed RDF data.  It covers triples and quads, 
@@ -186,7 +197,7 @@ and also parser events for prefix settin
 <li><a href="/documentation/javadoc/arq/org/apache/jena/riot/system/StreamRDFLib.html"><code>StreamRDFLib</code></a> &ndash; create <code>StreamRDF</code> objects</li>
 <li><a href="/documentation/javadoc/arq/org/apache/jena/riot/system/StreamOps.html"><code>StreamOps</code></a> &ndash; helpers for sending RDF data to <code>StreamRDF</code> objects</li>
 </ul>
-<h2 id="reading-data">Reading data</h2>
+<h2 id="reading-data">Reading data<a class="headerlink" href="#reading-data" title="Permanent link">&para;</a></h2>
 <p>All parsers of RDF syntaxes provided by RIOT are streaming with the
 exception of JSON-LD.  A JSON object can have members in any order so the
 parser may need the whole top-level object in order to have the information
@@ -201,7 +212,7 @@ directs the output of the parser to a <c
 
 <p>The above code reads the remote URL, with content negotiation, and sends the
 triples to the <code>destination</code>.</p>
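A custom `StreamRDF` destination can process triples as they arrive without ever materialising a model. The sketch below counts triples from an in-memory N-Triples source; the class name and data are illustrative, and a Jena 3.x classpath is assumed.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.jena.graph.Triple;
import org.apache.jena.riot.Lang;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.riot.system.StreamRDFBase;

public class TripleCounter {
    // Count triples as they stream past, without building a Model.
    static long countTriples(String ntriples) {
        final AtomicLong count = new AtomicLong();
        // StreamRDFBase is a no-op StreamRDF; override only the
        // events of interest.
        StreamRDFBase counter = new StreamRDFBase() {
            @Override
            public void triple(Triple triple) {
                count.incrementAndGet();
            }
        };
        InputStream in = new ByteArrayInputStream(
                ntriples.getBytes(StandardCharsets.UTF_8));
        RDFDataMgr.parse(counter, in, Lang.NTRIPLES);
        return count.get();
    }

    public static void main(String[] args) {
        String data =
            "<http://example.org/s> <http://example.org/p> <http://example.org/o1> .\n"
          + "<http://example.org/s> <http://example.org/p> <http://example.org/o2> .\n";
        System.out.println(countTriples(data));
    }
}
```

The same destination works unchanged whether the source is a local file, an `InputStream`, or a remote URL.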
-<h2 id="writing-data">Writing data</h2>
+<h2 id="writing-data">Writing data<a class="headerlink" href="#writing-data" title="Permanent link">&para;</a></h2>
 <p>Not all RDF formats are suitable for writing as a stream.  Formats that
 provide pretty printing (for example the default <code>RDFFormat</code> for each of
 Turtle, TriG and RDF/XML) require analysis of the whole of a model in order
@@ -226,8 +237,8 @@ an <code>StreamRDF</code> backed by a st
 
 
 <p>N-Triples and N-Quads are always written as a stream.</p>
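Streamed writing can be driven directly from `StreamRDF` events. The following sketch (illustrative class name and URIs; Jena 3.x classpath assumed) obtains a streaming writer and emits a single triple as N-Triples:

```java
import java.io.ByteArrayOutputStream;

import org.apache.jena.graph.NodeFactory;
import org.apache.jena.graph.Triple;
import org.apache.jena.riot.Lang;
import org.apache.jena.riot.system.StreamRDF;
import org.apache.jena.riot.system.StreamRDFWriter;

public class StreamedWrite {
    // Obtain a StreamRDF that serialises each triple event as it
    // arrives; nothing is buffered for pretty printing.
    static String writeOne() {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        StreamRDF writer = StreamRDFWriter.getWriterStream(out, Lang.NTRIPLES);
        writer.start();
        writer.triple(Triple.create(
                NodeFactory.createURI("http://example.org/s"),
                NodeFactory.createURI("http://example.org/p"),
                NodeFactory.createURI("http://example.org/o")));
        writer.finish();
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.print(writeOne());
    }
}
```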
-<h2 id="rdfformat-and-lang">RDFFormat and Lang</h2>
-<table>
+<h2 id="rdfformat-and-lang">RDFFormat and Lang<a class="headerlink" href="#rdfformat-and-lang" title="Permanent link">&para;</a></h2>
+<table class="table">
 <thead>
 <tr>
 <th><a href="/documentation/javadoc/arq/org/apache/jena/riot/RDFFormat.html">RDFFormat</a></th>

Modified: websites/staging/jena/trunk/content/documentation/javadoc/elephas/index.html
==============================================================================
--- websites/staging/jena/trunk/content/documentation/javadoc/elephas/index.html (original)
+++ websites/staging/jena/trunk/content/documentation/javadoc/elephas/index.html Sun Jul 26 10:50:18 2015
@@ -83,8 +83,8 @@
                   <li><a href="/documentation/tdb/index.html">TDB</a></li>
                   <li><a href="/documentation/sdb/index.html">SDB</a></li>
                   <li><a href="/documentation/jdbc/index.html">SPARQL over JDBC</a></li>
-                  <li><a href="/documentation/security/index.html">Security</a></li>
                   <li><a href="/documentation/fuseki2/index.html">Fuseki</a></li>
+                  <li><a href="/documentation/permissions/index.html">Permissions</a></li>
                   <li><a href="/documentation/assembler/index.html">Assembler</a></li>
                   <li><a href="/documentation/ontology/">Ontology API</a></li>
                   <li><a href="/documentation/inference/index.html">Inference API</a></li>
@@ -144,7 +144,18 @@
     <div class="col-md-12">
     <div id="breadcrumbs"></div>
     <h1 class="title">Jena Elephas JavaDoc</h1>
-  <p>JavaDoc automatically generates detailed class and method documentation from the Jena source code.</p>
+  <style type="text/css">
+/* The following code is added by mdx_elementid.py
+   It was originally lifted from http://subversion.apache.org/style/site.css */
+/*
+ * Hide class="elementid-permalink", except when an enclosing heading
+ * has the :hover property.
+ */
+.headerlink, .elementid-permalink {
+  visibility: hidden;
+}
+h2:hover > .headerlink, h3:hover > .headerlink, h1:hover > .headerlink, h6:hover > .headerlink, h4:hover > .headerlink, h5:hover > .headerlink, dt:hover > .elementid-permalink { visibility: visible }</style>
+<p>JavaDoc automatically generates detailed class and method documentation from the Jena source code.</p>
 <ul>
 <li><a href="common/index.html">Elephas Common API JavaDoc</a></li>
 <li><a href="io/index.html">Elephas IO API JavaDoc</a></li>

Modified: websites/staging/jena/trunk/content/documentation/javadoc/extras/index.html
==============================================================================
--- websites/staging/jena/trunk/content/documentation/javadoc/extras/index.html (original)
+++ websites/staging/jena/trunk/content/documentation/javadoc/extras/index.html Sun Jul 26 10:50:18 2015
@@ -83,8 +83,8 @@
                   <li><a href="/documentation/tdb/index.html">TDB</a></li>
                   <li><a href="/documentation/sdb/index.html">SDB</a></li>
                   <li><a href="/documentation/jdbc/index.html">SPARQL over JDBC</a></li>
-                  <li><a href="/documentation/security/index.html">Security</a></li>
                   <li><a href="/documentation/fuseki2/index.html">Fuseki</a></li>
+                  <li><a href="/documentation/permissions/index.html">Permissions</a></li>
                   <li><a href="/documentation/assembler/index.html">Assembler</a></li>
                   <li><a href="/documentation/ontology/">Ontology API</a></li>
                   <li><a href="/documentation/inference/index.html">Inference API</a></li>
@@ -144,7 +144,18 @@
     <div class="col-md-12">
     <div id="breadcrumbs"></div>
     <h1 class="title">Jena Extras JavaDoc</h1>
-  <p>JavaDoc automatically generates detailed class and method documentation from the Jena source code.</p>
+  <style type="text/css">
+/* The following code is added by mdx_elementid.py
+   It was originally lifted from http://subversion.apache.org/style/site.css */
+/*
+ * Hide class="elementid-permalink", except when an enclosing heading
+ * has the :hover property.
+ */
+.headerlink, .elementid-permalink {
+  visibility: hidden;
+}
+h2:hover > .headerlink, h3:hover > .headerlink, h1:hover > .headerlink, h6:hover > .headerlink, h4:hover > .headerlink, h5:hover > .headerlink, dt:hover > .elementid-permalink { visibility: visible }</style>
+<p>JavaDoc automatically generates detailed class and method documentation from the Jena source code.</p>
 <ul>
 <li><a href="querybuilder/index.html">Extras - QueryBuilder</a></li>
 </ul>

Modified: websites/staging/jena/trunk/content/documentation/javadoc/index.html
==============================================================================
--- websites/staging/jena/trunk/content/documentation/javadoc/index.html (original)
+++ websites/staging/jena/trunk/content/documentation/javadoc/index.html Sun Jul 26 10:50:18 2015
@@ -83,8 +83,8 @@
                   <li><a href="/documentation/tdb/index.html">TDB</a></li>
                   <li><a href="/documentation/sdb/index.html">SDB</a></li>
                   <li><a href="/documentation/jdbc/index.html">SPARQL over JDBC</a></li>
-                  <li><a href="/documentation/security/index.html">Security</a></li>
                   <li><a href="/documentation/fuseki2/index.html">Fuseki</a></li>
+                  <li><a href="/documentation/permissions/index.html">Permissions</a></li>
                   <li><a href="/documentation/assembler/index.html">Assembler</a></li>
                   <li><a href="/documentation/ontology/">Ontology API</a></li>
                   <li><a href="/documentation/inference/index.html">Inference API</a></li>
@@ -144,7 +144,18 @@
     <div class="col-md-12">
     <div id="breadcrumbs"></div>
     <h1 class="title">Jena JavaDoc</h1>
-  <p>JavaDoc automatically generates detailed class and method documentation from the Jena source code.</p>
+  <style type="text/css">
+/* The following code is added by mdx_elementid.py
+   It was originally lifted from http://subversion.apache.org/style/site.css */
+/*
+ * Hide class="elementid-permalink", except when an enclosing heading
+ * has the :hover property.
+ */
+.headerlink, .elementid-permalink {
+  visibility: hidden;
+}
+h2:hover > .headerlink, h3:hover > .headerlink, h1:hover > .headerlink, h6:hover > .headerlink, h4:hover > .headerlink, h5:hover > .headerlink, dt:hover > .elementid-permalink { visibility: visible }</style>
+<p>JavaDoc automatically generates detailed class and method documentation from the Jena source code.</p>
 <ul>
 <li><a href="jena/index.html">Jena JavaDoc</a></li>
 <li><a href="arq/index.html">ARQ JavaDoc</a></li>

Modified: websites/staging/jena/trunk/content/documentation/jdbc/artifacts.html
==============================================================================
--- websites/staging/jena/trunk/content/documentation/jdbc/artifacts.html (original)
+++ websites/staging/jena/trunk/content/documentation/jdbc/artifacts.html Sun Jul 26 10:50:18 2015
@@ -83,8 +83,8 @@
                   <li><a href="/documentation/tdb/index.html">TDB</a></li>
                   <li><a href="/documentation/sdb/index.html">SDB</a></li>
                   <li><a href="/documentation/jdbc/index.html">SPARQL over JDBC</a></li>
-                  <li><a href="/documentation/security/index.html">Security</a></li>
                   <li><a href="/documentation/fuseki2/index.html">Fuseki</a></li>
+                  <li><a href="/documentation/permissions/index.html">Permissions</a></li>
                   <li><a href="/documentation/assembler/index.html">Assembler</a></li>
                   <li><a href="/documentation/ontology/">Ontology API</a></li>
                   <li><a href="/documentation/inference/index.html">Inference API</a></li>
@@ -144,10 +144,21 @@
     <div class="col-md-12">
     <div id="breadcrumbs"></div>
     <h1 class="title">Maven Artifacts for Jena JDBC</h1>
-  <p>The Jena JDBC libraries are a collection of maven artifacts which can be used individually
+  <style type="text/css">
+/* The following code is added by mdx_elementid.py
+   It was originally lifted from http://subversion.apache.org/style/site.css */
+/*
+ * Hide class="elementid-permalink", except when an enclosing heading
+ * has the :hover property.
+ */
+.headerlink, .elementid-permalink {
+  visibility: hidden;
+}
+h2:hover > .headerlink, h3:hover > .headerlink, h1:hover > .headerlink, h6:hover > .headerlink, h4:hover > .headerlink, h5:hover > .headerlink, dt:hover > .elementid-permalink { visibility: visible }</style>
+<p>The Jena JDBC libraries are a collection of Maven artifacts which can be used individually
 or together as desired.  These are available from the same locations as any other Jena
 artifact, see <a href="/download/maven.html">Using Jena with Maven</a> for more information.</p>
-<h2 id="core-library">Core Library</h2>
+<h2 id="core-library">Core Library<a class="headerlink" href="#core-library" title="Permanent link">&para;</a></h2>
 <p>The <code>jena-jdbc-core</code> artifact is the core library that contains much of the common implementation
 for the drivers.  This is a dependency of the other artifacts and will typically only be required
 as a direct dependency if you are implementing a <a href="custom_driver.html">custom driver</a>.</p>
@@ -159,7 +170,7 @@ as a direct dependency if you are implem
 </pre></div>
 
 
-<h2 id="in-memory-driver">In-Memory Driver</h2>
+<h2 id="in-memory-driver">In-Memory Driver<a class="headerlink" href="#in-memory-driver" title="Permanent link">&para;</a></h2>
 <p>The <a href="drivers.html#in-memory">in-memory driver</a> artifact provides the JDBC driver for non-persistent
 in-memory datasets.</p>
 <div class="codehilite"><pre><span class="nt">&lt;dependency&gt;</span>
@@ -170,7 +181,7 @@ in-memory datasets.</p>
 </pre></div>
 
 
-<h2 id="tdb-driver">TDB Driver</h2>
+<h2 id="tdb-driver">TDB Driver<a class="headerlink" href="#tdb-driver" title="Permanent link">&para;</a></h2>
 <p>The <a href="drivers.html#tdb">TDB driver</a> artifact provides the JDBC driver for <a href="/documentation/tdb/">TDB</a>
 datasets.</p>
 <div class="codehilite"><pre><span class="nt">&lt;dependency&gt;</span>
@@ -181,7 +192,7 @@ datasets.</p>
 </pre></div>
 
 
-<h2 id="remote-endpoint-driver">Remote Endpoint Driver</h2>
+<h2 id="remote-endpoint-driver">Remote Endpoint Driver<a class="headerlink" href="#remote-endpoint-driver" title="Permanent link">&para;</a></h2>
 <p>The <a href="drivers.html#remote-endpoint">Remote Endpoint driver</a> artifact provides the JDBC driver for accessing
 arbitrary remote SPARQL compliant stores.</p>
 <div class="codehilite"><pre><span class="nt">&lt;dependency&gt;</span>
@@ -192,7 +203,7 @@ arbitrary remote SPARQL compliant stores
 </pre></div>
 
 
-<h2 id="driver-bundle">Driver Bundle</h2>
+<h2 id="driver-bundle">Driver Bundle<a class="headerlink" href="#driver-bundle" title="Permanent link">&para;</a></h2>
 <p>The driver bundle artifact is a shaded JAR (i.e. with dependencies included) suitable for dropping into tools
 to easily make Jena JDBC drivers available without having to do complex class path setups.</p>
 <p>This artifact depends on all the other artifacts.</p>

Modified: websites/staging/jena/trunk/content/documentation/jdbc/custom_driver.html
==============================================================================
--- websites/staging/jena/trunk/content/documentation/jdbc/custom_driver.html (original)
+++ websites/staging/jena/trunk/content/documentation/jdbc/custom_driver.html Sun Jul 26 10:50:18 2015
@@ -83,8 +83,8 @@
                   <li><a href="/documentation/tdb/index.html">TDB</a></li>
                   <li><a href="/documentation/sdb/index.html">SDB</a></li>
                   <li><a href="/documentation/jdbc/index.html">SPARQL over JDBC</a></li>
-                  <li><a href="/documentation/security/index.html">Security</a></li>
                   <li><a href="/documentation/fuseki2/index.html">Fuseki</a></li>
+                  <li><a href="/documentation/permissions/index.html">Permissions</a></li>
                   <li><a href="/documentation/assembler/index.html">Assembler</a></li>
                   <li><a href="/documentation/ontology/">Ontology API</a></li>
                   <li><a href="/documentation/inference/index.html">Inference API</a></li>
@@ -144,11 +144,22 @@
     <div class="col-md-12">
     <div id="breadcrumbs"></div>
     <h1 class="title">Creating a Custom Jena JDBC Driver</h1>
-  <p>As noted in the <a href="index.html#overview">overview</a> Jena JDBC drivers are built around a core
+  <style type="text/css">
+/* The following code is added by mdx_elementid.py
+   It was originally lifted from http://subversion.apache.org/style/site.css */
+/*
+ * Hide class="elementid-permalink", except when an enclosing heading
+ * has the :hover property.
+ */
+.headerlink, .elementid-permalink {
+  visibility: hidden;
+}
+h2:hover > .headerlink, h3:hover > .headerlink, h1:hover > .headerlink, h6:hover > .headerlink, h4:hover > .headerlink, h5:hover > .headerlink, dt:hover > .elementid-permalink { visibility: visible }</style>
+<p>As noted in the <a href="index.html#overview">overview</a> Jena JDBC drivers are built around a core
 library which implements much of the common functionality required in an abstract way.  This
 means that it is relatively easy to build a custom driver just by relying on the core library
 and implementing a minimum of one class.</p>
-<h2 id="custom-driver-class">Custom Driver class</h2>
+<h2 id="custom-driver-class">Custom Driver class<a class="headerlink" href="#custom-driver-class" title="Permanent link">&para;</a></h2>
 <p>The one and only thing that you are required to do to create a custom driver is to implement
 a class that extends <code>JenaDriver</code>.  This requires you to implement a constructor which simply
 needs to call the parent constructor with the relevant inputs; one of these is your driver-specific
@@ -164,7 +175,7 @@ perfectly acceptable for your <code>conn
 from the built-in drivers.  This may be useful if you are writing a driver for a specific store and
 wish to provide simplified connection URL parameters and create the appropriate connection instance
 programmatically.</p>
-<h2 id="custom-connection-class">Custom Connection class</h2>
+<h2 id="custom-connection-class">Custom Connection class<a class="headerlink" href="#custom-connection-class" title="Permanent link">&para;</a></h2>
 <p>The next stage in creating a custom driver (where necessary) is to create a class derived from
 <code>JenaConnection</code>.  This has a somewhat broader set of abstract methods which you will need to implement
 such as <code>createStatementInternal()</code> and various methods which you may optionally override if you
@@ -174,7 +185,7 @@ to guide you in this.  It may be easier
 an entire custom implementation yourself.</p>
 <p>Note that custom implementations may also require you to implement custom <code>JenaStatement</code> and <code>JenaPreparedStatement</code>
 implementations.</p>
-<h2 id="testing-your-driver">Testing your Driver</h2>
+<h2 id="testing-your-driver">Testing your Driver<a class="headerlink" href="#testing-your-driver" title="Permanent link">&para;</a></h2>
 <p>To aid testing your custom driver the <code>jena-jdbc-core</code> module provides a number of abstract test classes which
 can be derived from in order to provide a wide variety of tests for your driver implementation.  This is how
 all the built in drivers are tested so you can check out their test sources for examples of this.</p>