Posted to commits@jena.apache.org by gi...@apache.org on 2023/01/03 10:25:50 UTC

[jena-site] branch asf-site updated: Updated site from main (dd1cd48453fc605bc09153ab9bf42aee3b680686)

This is an automated email from the ASF dual-hosted git repository.

git-site-role pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/jena-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new d7e9c03f2 Updated site from main (dd1cd48453fc605bc09153ab9bf42aee3b680686)
d7e9c03f2 is described below

commit d7e9c03f2c5533eba4daa33c1fa319c7fb069618
Author: jenkins <bu...@apache.org>
AuthorDate: Tue Jan 3 10:25:47 2023 +0000

    Updated site from main (dd1cd48453fc605bc09153ab9bf42aee3b680686)
---
 .../fuseki2/fuseki-server-protocol.html            |  7 ++++
 content/documentation/index.xml                    |  2 +-
 content/documentation/io/rdf-input.html            | 30 +++++++++++++++--
 content/documentation/query/service_enhancer.html  | 38 ++++++++++++++--------
 content/index.xml                                  |  2 +-
 content/sitemap.xml                                |  6 ++--
 6 files changed, 64 insertions(+), 21 deletions(-)

diff --git a/content/documentation/fuseki2/fuseki-server-protocol.html b/content/documentation/fuseki2/fuseki-server-protocol.html
index 54c1aaf2c..535c4aa2e 100644
--- a/content/documentation/fuseki2/fuseki-server-protocol.html
+++ b/content/documentation/fuseki2/fuseki-server-protocol.html
@@ -433,6 +433,13 @@ See <a href="fuseki-server-info.html">Fuseki Server Information</a> for details
 useful for managing the files externally.</p>
 <p>The returned JSON object will have the form <code>{ backups: [ ... ] }</code> where the <code>[]</code> array is
 a list of file names.</p>
+<p>Since 4.7.0, backups are written to a temporary file in the same directory and renamed on completion.
+If the server crashes during a backup, the temporary file is not renamed,
+so any file carrying the final backup name is guaranteed to be a complete backup.
+Users can clean up incomplete backups on application / container start by removing the leftover temporary files.</p>
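+<p>A minimal sketch of such a cleanup step, assuming backups live in <code>run/backups</code> and that in-progress backups carry a <code>.tmp</code> suffix (both the directory and the suffix are assumptions for this example; check your deployment for the actual names):</p>
+<pre><code>import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.util.stream.Stream;
+
+// Run once on application / container start, before the server begins serving requests.
+Path backupDir = Path.of(&quot;run/backups&quot;);   // assumed backup directory
+try (Stream&lt;Path&gt; files = Files.list(backupDir)) {
+    files.filter(p -&gt; p.getFileName().toString().endsWith(&quot;.tmp&quot;))  // assumed suffix of incomplete backups
+         .forEach(p -&gt; {
+             try {
+                 Files.delete(p);           // discard the incomplete backup
+             } catch (IOException e) {
+                 System.err.println(&quot;Could not delete &quot; + p + &quot;: &quot; + e);
+             }
+         });
+} catch (IOException e) {
+    System.err.println(&quot;Could not scan &quot; + backupDir + &quot;: &quot; + e);
+}
+</code></pre>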
+<h3 id="backup-policies">Backup policies</h3>
+<p>Users can build backup policies on top of the <a href="/documentation/fuseki2/fuseki-server-protocol.html#backup">backup operation of the Fuseki HTTP Administration Protocol</a>.
+See <a href="https://github.com/apache/jena/issues/1500">https://github.com/apache/jena/issues/1500</a> for more information.</p>
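+<p>For example, a scheduled job (cron, CI, etc.) could implement a simple nightly policy by triggering a backup through the administration protocol. Below is a minimal Java sketch using the JDK HTTP client; the server URL and dataset name are placeholders, retention handling is not shown, and any authentication your deployment requires is omitted:</p>
+<pre><code>import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+
+// Placeholders: adjust the server URL and dataset name to your installation.
+String server  = &quot;http://localhost:3030&quot;;
+String dataset = &quot;ds&quot;;
+
+HttpClient client = HttpClient.newHttpClient();
+HttpRequest request = HttpRequest.newBuilder()
+        .uri(URI.create(server + &quot;/$/backup/&quot; + dataset))
+        .POST(HttpRequest.BodyPublishers.noBody())
+        .build();
+
+// The server starts an asynchronous backup task; the JSON response carries the task id.
+HttpResponse&lt;String&gt; response = client.send(request, HttpResponse.BodyHandlers.ofString());
+System.out.println(response.statusCode() + &quot; &quot; + response.body());
+</code></pre>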
 <h3 id="compact">Compact</h3>
 <p>Pattern: <code>/$/compact/{name}</code></p>
 <p>This operation initiates a database compaction task and returns a JSON object with the task Id in it.</p>
diff --git a/content/documentation/index.xml b/content/documentation/index.xml
index 2ec4c80a6..a42400a98 100644
--- a/content/documentation/index.xml
+++ b/content/documentation/index.xml
@@ -1587,7 +1587,7 @@ Serving RDF For any use of users-password information, and especially HTTP basic
       
       <guid>https://jena.apache.org/documentation/query/service_enhancer.html</guid>
       <description>The service enhancer (SE) plugin extends the functionality of the SERVICE clause with:
- Bulk requests Correlated joins also known as lateral joins A streaming cache for SERVICE requests results which can also cope with bulk requests and correlated joins. Furthermore, queries that only differ in limit and offset will result in cache hits for overlapping ranges. At present, the plugin only ships with an in-memory caching provider.  As a fundamental principle, a request making use of cache and bulk should return the exact same result as if those settings were omitted.</description>
+ Bulk requests Correlated joins also known as lateral joins A streaming cache for SERVICE requests results which can also cope with bulk requests and correlated joins. Furthermore, queries that only differ in limit and offset will result in cache hits for overlapping ranges. At present, the plugin only ships with an in-memory cache provider.  As a fundamental principle, a request making use of cache and bulk should return the exact same result as if those settings were omitted.</description>
     </item>
     
     <item>
diff --git a/content/documentation/io/rdf-input.html b/content/documentation/io/rdf-input.html
index c18d03324..a0176db05 100644
--- a/content/documentation/io/rdf-input.html
+++ b/content/documentation/io/rdf-input.html
@@ -460,15 +460,41 @@ this is useful when you don&rsquo;t want to go to the trouble of writing a full
 logic in normal iterator style.</p>
 <p>To do this you use <code>AsyncParser.asyncParseTriples</code> which parses the input on
 another thread:</p>
-<pre><code>    Iterator&lt;Triple&gt; iter = AsyncParser.asyncParseTriples(filename);
+<pre><code>    IteratorCloseable&lt;Triple&gt; iter = AsyncParser.asyncParseTriples(filename);
     iter.forEachRemaining(triple-&gt;{
         // Do something with triple
     });
 </code></pre>
-<p>For N-Triples and N-Quads, you can use
+<p>Calling the iterator&rsquo;s close method stops parsing and closes the involved resources.
+For N-Triples and N-Quads, you can use
 <code>RiotParsers.createIteratorNTriples(input)</code> which parses the input on the
 calling thread.</p>
 <p><a href="https://github.com/apache/jena/blob/main/jena-examples/src/main/java/arq/examples/riot/ExRIOT9_AsyncParser.java">RIOT example 9</a>.</p>
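+<p>For illustration, a minimal sketch using <code>RiotParsers.createIteratorNTriples</code> on the calling thread (the file name is a placeholder for this example):</p>
+<pre><code>    import java.io.FileInputStream;
+    import java.io.InputStream;
+    import java.util.Iterator;
+    import org.apache.jena.graph.Triple;
+    import org.apache.jena.riot.lang.RiotParsers;
+
+    // &quot;data.nt&quot; is a placeholder file name.
+    try (InputStream input = new FileInputStream(&quot;data.nt&quot;)) {
+        Iterator&lt;Triple&gt; iter = RiotParsers.createIteratorNTriples(input);
+        iter.forEachRemaining(triple -&gt; {
+            // Do something with triple
+        });
+    }
+</code></pre>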
+<p>Additional control over parsing is provided by the <code>AsyncParser.of(...)</code> methods which return <code>AsyncParserBuilder</code> instances.
+The builder features a fluent API that allows for fine-tuning internal buffer sizes as well as eventually obtaining
+a standard Java <code>Stream</code>. Calling the stream&rsquo;s close method stops parsing and closes the involved resources.
+Therefore, these streams are best used in conjunction with try-with-resources blocks:</p>
+<pre><code>    try (Stream&lt;Triple&gt; stream = AsyncParser.of(filename)
+            .setQueueSize(2).setChunkSize(100).streamTriples().limit(1000)) {
+        // Do something with the stream
+    }
+</code></pre>
+<p><code>AsyncParser</code> also supports parsing RDF into a stream of <code>EltStreamRDF</code> elements. Each element can hold a triple, quad, prefix, base IRI or exception.
+For all <code>Stream</code>-based methods there also exist <code>Iterator</code>-based versions:</p>
+<pre><code>    IteratorCloseable&lt;EltStreamRDF&gt; it = AsyncParser.of(filename).asyncParseElements();
+    try {
+        while (it.hasNext()) {
+            EltStreamRDF elt = it.next();
+            if (elt.isTriple()) {
+               // Do something with elt.getTriple();
+            } else if (elt.isPrefix()) {
+               // Do something with elt.getPrefix() and elt.getIri();
+            }
+        }
+    } finally {
+        Iter.close(it);
+    }
+</code></pre>
 <h3 id="filter-the-output-of-parsing">Filter the output of parsing</h3>
 <p>When working with very large files, it can be useful to
 process the stream of triples or quads produced
diff --git a/content/documentation/query/service_enhancer.html b/content/documentation/query/service_enhancer.html
index 349665347..c84a84ecb 100644
--- a/content/documentation/query/service_enhancer.html
+++ b/content/documentation/query/service_enhancer.html
@@ -187,11 +187,11 @@
 <li>Bulk requests</li>
 <li>Correlated joins also known as lateral joins</li>
 <li>A streaming cache for <code>SERVICE</code> request results which can also cope with bulk requests and correlated joins. Furthermore, queries that only differ in limit and offset will result
-in cache hits for overlapping ranges. At present, the plugin only ships with an in-memory caching provider.</li>
+in cache hits for overlapping ranges. At present, the plugin only ships with an in-memory cache provider.</li>
 </ul>
 <p>As a fundamental principle, a request making use of <code>cache</code> and <code>bulk</code> should return the exact same result as if
-those settings were omitted. As a consequence runtime result set size recognition (RRR) is employed to reveal hidden
-result set limits and ensure that always only the appropriate amount of data is returned from the caches.</p>
+those settings were omitted. As a consequence, runtime result set size recognition (RRR) is employed to reveal hidden
+result set limits. This is used to ensure that only the appropriate amount of data is ever returned from the caches.</p>
 <p>A correlated join using this plugin is syntactically expressed with <code>SERVICE &lt;loop:&gt; {}</code>.
 It is a binary operation on two graph patterns:
 The operation &ldquo;loops&rdquo; over every binding obtained from evaluation of the left-hand-side (lhs) and uses it as an input to substitute the variables of the right-hand-side (rhs).
@@ -213,7 +213,7 @@ It executes as a single remote request to Wikidata:</p>
   }
 }
 </code></pre></div><details>
-  <summary markdown="span">Click here to view the rewritten Query</summary>
+  <summary>Click here to view the rewritten query</summary>
 <div class="highlight"><pre style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sparql" data-lang="sparql"><span style="color:#a2f;font-weight:bold">SELECT</span>  <span style="color:#666">*</span>
 <span style="color:#a2f;font-weight:bold">WHERE</span>
   {   {   { { <span style="color:#a2f;font-weight:bold">SELECT</span>  <span style="color:#666">*</span>
@@ -308,7 +308,7 @@ Every obtained binding&rsquo;s <code>?__idx__</code>  value determines the input
 A special value for <code>?__idx__</code> is the end marker. It is a number higher than any input binding ID and it is used to detect result set size limits: Its absence in a result set
 means that it was cut off. This information is used to ensure that a request using a certain service IRI does not yield more results than the limit.</p>
 </details>
-<p>Note that a repeated execution of a query (possibly with different limits/offsets) will serve the data from cache rather than making another remote request.
+<p>Note that a repeated execution of a query (possibly with different limits/offsets) will serve the data from cache rather than making another remote request.
 The cache operates on a per-input-binding basis: For instance, in the example above it means that when removing bindings from the <code>VALUES</code> block data will
 still be served from the cache. Conversely, adding additional bindings to the <code>VALUES</code> block will only send a (bulk) remote request for those
 that lack cache entries.</p>
@@ -331,10 +331,10 @@ For more details about the transformation see <a href="#programmatic-algebra-tra
 <div class="highlight"><pre style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-java" data-lang="java"><span style="color:#a2f;font-weight:bold">import</span> <span style="color:#00f;font-weight:bold">org.apache.jena.sparql.service.enhancer.init.ServiceEnhancerInit</span><span style="color:#666">;</span>
 
 ServiceEnhancerInit<span style="color:#666">.</span><span style="color:#b44">wrapOptimizer</span><span style="color:#666">(</span>ARQ<span style="color:#666">.</span><span style="color:#b44">getContext</span><span style="color:#666">());</span>
-</code></pre></div><p>As usual, in order to avoid a global setup, the the context of a dataset or statement execution (i.e. query / update) can be used instead:</p>
+</code></pre></div><p>As usual, in order to avoid a global setup, the context of a dataset or statement execution (i.e. query / update) can be used instead:</p>
 <div class="highlight"><pre style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-java" data-lang="java">DatasetFactory dataset <span style="color:#666">=</span> DatasetFactory<span style="color:#666">.</span><span style="color:#b44">create</span><span style="color:#666">();</span>
 ServiceEnhancerInit<span style="color:#666">.</span><span style="color:#b44">wrapOptimizer</span><span style="color:#666">(</span>dataset<span style="color:#666">.</span><span style="color:#b44">getContext</span><span style="color:#666">());</span>
-</code></pre></div><p>The lookup proceduce for which optimizer to wrap first consults the given context and then the global one.
+</code></pre></div><p>The lookup procedure for which optimizer to wrap first consults the given context and then the global one.
 If neither has an optimizer configured then Jena&rsquo;s default one will be used.</p>
 <p>Service requests that do not make use of this plugin&rsquo;s options will not be affected even if the plugin is loaded.
 The plugin registration makes use of the <a href="/documentation/query/custom_service_executors.html">custom service executor extension system</a>.</p>
@@ -342,7 +342,7 @@ The plugin registration makes use of the <a href="/documentation/query/custom_se
 <p>The <code>se:DatasetServiceEnhancer</code> assembler can be used to enable the SE plugin on a dataset.
 This procedure also automatically enables correlated joins using the dataset&rsquo;s context as described in <a href="#programmatic-setup">Programmatic Setup</a>.
 By default, the SE assembler alters the base dataset&rsquo;s context and returns the base dataset again.
-There is one important exception: If <code>se:enableMgmt</code> is true then the assembler&rsquo;s final step it to create a wrapped dataset with a copy of the original dataset&rsquo;s context where <code>enableMgmt</code> is true.
+There is one important exception: If <code>se:enableMgmt</code> is true then the assembler&rsquo;s final step is to create a wrapped dataset with a copy of the original dataset&rsquo;s context where <code>enableMgmt</code> is true.
 This way, management functions are not available in the base dataset.</p>
 <div class="highlight"><pre style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-ttl" data-lang="ttl"><span style="color:#080;font-style:italic"># assembler.ttl</span><span style="color:#bbb">
 </span><span style="color:#bbb"></span><span style="color:#a2f;font-weight:bold">PREFIX</span><span style="color:#bbb"> </span><span style="color:#00f;font-weight:bold">ja:</span><span style="color:#bbb"> </span><span style="color:#b8860b">&lt;http://jena.hpl.hp.com/2005/11/Assembler#&gt;</span><span style="color:#bbb">
@@ -355,12 +355,16 @@ This way, management functions are not available in the base dataset.</p>
 </span><span style="color:#bbb">                                          </span><span style="color:#080;font-style:italic"># identified by the tuple (service IRI, query, input binding)</span><span style="color:#bbb">
 </span><span style="color:#bbb">  </span><span style="color:#00f;font-weight:bold">se:</span><span style="color:#008000;font-weight:bold">cacheMaxPageCount</span><span style="color:#bbb"> </span><span style="color:#666">15</span><span style="color:#bbb"> </span>;<span style="color:#bbb">               </span><span style="color:#080;font-style:italic"># Maximum number of pages per cache entry</span><span style="color:#bbb">
 </span><span style="color:#bbb">  </span><span style="color:#00f;font-weight:bold">se:</span><span style="color:#008000;font-weight:bold">cachePageSize</span><span style="color:#bbb"> </span><span style="color:#666">10000</span><span style="color:#bbb"> </span>;<span style="color:#bbb">                </span><span style="color:#080;font-style:italic"># Number of bindings per page</span><span style="color:#bbb">
+</span><span style="color:#bbb">  </span><span style="color:#00f;font-weight:bold">se:</span><span style="color:#008000;font-weight:bold">bulkMaxSize</span><span style="color:#bbb"> </span><span style="color:#666">100</span><span style="color:#bbb"> </span>;<span style="color:#bbb">                    </span><span style="color:#080;font-style:italic"># Maximum number of input bindings to group into a bulk request</span><span style="color:#bbb">
+</span><span style="color:#bbb">  </span><span style="color:#00f;font-weight:bold">se:</span><span style="color:#008000;font-weight:bold">bulkSize</span><span style="color:#bbb"> </span><span style="color:#666">30</span><span style="color:#bbb"> </span>;<span style="color:#bbb">                        </span><span style="color:#080;font-style:italic"># Default bulk size when not specifying a size</span><span style="color:#bbb">
+</span><span style="color:#bbb">  </span><span style="color:#00f;font-weight:bold">se:</span><span style="color:#008000;font-weight:bold">bulkMaxOutOfBandSize</span><span style="color:#bbb"> </span><span style="color:#666">30</span><span style="color:#bbb"> </span>;<span style="color:#bbb">            </span><span style="color:#080;font-style:italic"># Dispatch non-full batches as soon as this number of non-fitting</span><span style="color:#bbb">
+</span><span style="color:#bbb">                                          </span><span style="color:#080;font-style:italic"># input bindings have been encountered</span><span style="color:#bbb">
 </span><span style="color:#bbb">  </span><span style="color:#00f;font-weight:bold">se:</span><span style="color:#008000;font-weight:bold">enableMgmt</span><span style="color:#bbb"> </span>false<span style="color:#bbb">                     </span><span style="color:#080;font-style:italic"># Enables management functions;</span><span style="color:#bbb">
 </span><span style="color:#bbb">                                          </span><span style="color:#080;font-style:italic"># wraps the base dataset with an independent context</span><span style="color:#bbb">
 </span><span style="color:#bbb">  </span>.<span style="color:#bbb">
 </span><span style="color:#bbb">
 </span><span style="color:#bbb"></span><span style="color:#b8860b">&lt;urn:example:base&gt;</span><span style="color:#bbb"> </span><span style="color:#0b0;font-weight:bold">a</span><span style="color:#bbb"> </span><span style="color:#00f;font-weight:bold">ja:</span><span style="color:#008000;font-weight:bold">MemoryDataset</span><span style="color:#bbb"> </span>.<span style="color:#bbb">
-</span></code></pre></div><p>In the example above, the shown values for <code>se:cacheMaxEntryCount</code>, <code>se:cacheMaxPageCount</code> and <code>se:cachePageSize</code> are the defaults which are used if those options are left unspecified.
+</span></code></pre></div><p>In the example above, the shown values for <code>se:cacheMaxEntryCount</code>, <code>se:cacheMaxPageCount</code>, <code>se:cachePageSize</code>, <code>se:bulkMaxSize</code>, <code>se:bulkSize</code> and <code>se:bulkMaxOutOfBandSize</code> are the defaults which are used if those options are left unspecified.
 They allow for caching up to 45 million bindings (300 x 15 x 10000).
 There is one caveat though: Specifying the cache options puts a new cache instance in the dataset&rsquo;s context. Without these options, the global cache instance that is registered in the ARQ context by the SE plugin during service loading is used.
 Presently, the global instance cannot be configured via the assembler.</p>
@@ -373,7 +377,7 @@ Dataset dataset <span style="color:#666">=</span> DatasetFactory<span style="col
 <p>This section assumes that one of the distributions of <code>apache-jena-fuseki</code> has been downloaded from <a href="https://jena.apache.org/download/">https://jena.apache.org/download/</a>.
 The extracted folder should contain the <code>./fuseki-server</code> executable start script which automatically loads all jars (relative to <code>$PWD</code>) under <code>run/extra</code>.
 These folders need to be created e.g. using <code>mkdir -p run/extra</code>. The SE plugin can be manually built or downloaded from maven central (it is self-contained without transitive dependencies).
-Placing it into the <code>run/extra</code> folder makes it available for use with Fuseki. The plugin and Fuseki version should match.</p>
+Placing it into the <code>run/extra</code> folder makes it available for use with Fuseki. The plugin and Fuseki versions should match.</p>
 <h4 id="fuseki-assembler-configuration">Fuseki Assembler Configuration</h4>
 <p>The snippet below shows a simple setup of enabling the SE plugin for a given base dataset.
 Cache management can be performed via SPARQL extension functions. However, usually not every user should be allowed to invalidate caches as this
@@ -419,6 +423,12 @@ The context symbols are in the namespace <code>http://jena.apache.org/service-en
 <td>Maximum number of input bindings to group into a single bulk request; restricts <code>serviceBulkRequestItemCount</code>. When using <code>bulk+n</code> then <code>n</code> will be capped to the configured value.</td>
 </tr>
 <tr>
+<td><code>serviceBulkMaxOutOfBandBindingCount</code></td>
+<td>int</td>
+<td>30</td>
+<td>Dispatch non-full batches as soon as this number of non-fitting bindings has been read from the input iterator</td>
+</tr>
+<tr>
 <td><code>datasetId</code></td>
 <td>String</td>
 <td>null</td>
@@ -558,7 +568,7 @@ and derive a backend request that only fetches the needed parts.</p>
 </code></pre></div><p>Note that in pathological cases this can require a bulk request to be repeatedly re-executed with disabled caches for each input binding.
 For example, assume that the largest result set seen so far for a service is 1000 and the system is about to serve the 1001st binding from cache for a specific input binding.
 The question is whether this would exceed the service&rsquo;s as-yet-unknown result set size limit. Therefore, in order to answer that question, a remote request that bypasses the cache is performed.
-Furthermore, let&rsquo;s assume that request produces 2000 results. Then for the problem repeats once another input binding&rsquo;s 2001st result was about to be served.</p>
+Furthermore, let&rsquo;s assume that this request produces 2000 results. Then the problem repeats once another input binding&rsquo;s 2001st result is about to be served.</p>
 <h3 id="sparql-functions">SPARQL Functions</h3>
 <p>The service enhancer plugin introduces functions and property functions for listing cache content and removing cache entries.
 The namespace is</p>
@@ -614,14 +624,14 @@ The namespace is</p>
 | 2  | &quot;urn:x-arq:self@dataset813601419&quot; | &quot;SELECT  (&lt;urn:x-arq:DefaultGraph&gt; AS ?g) ?p (count(*) AS ?c)\nWHERE\n  { ?s  a  ?o }\nGROUP BY ?p\n&quot; | &quot;( ?p = rdf:type )&quot; |
 | 3  | &quot;urn:x-arq:self@dataset813601419&quot; | &quot;SELECT  ?g ?p (count(*) AS ?c)\nWHERE\n  { GRAPH ?g\n      { ?s  a  ?o }\n  }\nGROUP BY ?g ?p\n&quot;     | &quot;( ?p = rdf:type )&quot; |
 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-</code></pre><h4 id="example-invaliding-all-cache-entries">Example: Invaliding all cache entries</h4>
+</code></pre><h4 id="example-invaliding-all-cache-entries">Example: Invalidating All Cache Entries</h4>
 <div class="highlight"><pre style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sparql" data-lang="sparql"><span style="color:#a2f;font-weight:bold">PREFIX</span> <span style="color:#00f;font-weight:bold">se</span>: <span style="color:#a0a000">&lt;http://jena.apache.org/service-enhancer#&gt;</span>
 <span style="color:#a2f;font-weight:bold">SELECT</span> (<span style="color:#00f;font-weight:bold">se</span>:<span style="color:#008000;font-weight:bold">cacheRm</span>() <span style="color:#a2f;font-weight:bold">AS</span> <span style="color:#b8860b">?count</span>) { }
-</code></pre></div><h4 id="example-invalidating-specific-cache-entries">Example: Invalidating specific cache entries</h4>
+</code></pre></div><h4 id="example-invalidating-specific-cache-entries">Example: Invalidating Specific Cache Entries</h4>
 <div class="highlight"><pre style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sparql" data-lang="sparql"><span style="color:#a2f;font-weight:bold">PREFIX</span> <span style="color:#00f;font-weight:bold">se</span>: <span style="color:#a0a000">&lt;http://jena.apache.org/service-enhancer#&gt;</span>
 
 <span style="color:#a2f;font-weight:bold">SELECT</span> <span style="color:#00a000">SUM</span>(<span style="color:#00f;font-weight:bold">se</span>:<span style="color:#008000;font-weight:bold">cacheRm</span>(<span style="color:#b8860b">?id</span>) <span style="color:#a2f;font-weight:bold">AS</span> <span style="color:#b8860b">?count</span>) {
-  <span style="color:#b8860b">?id</span> <span style="color:#00f;font-weight:bold">se</span>:<span style="color:#008000;font-weight:bold">cacheList</span> (<span style="color:#a0a000">&lt;http://dbpedia.org/sparql&gt;</span>)
+  <span style="color:#b8860b">?id</span> <span style="color:#00f;font-weight:bold">se</span>:<span style="color:#008000;font-weight:bold">cacheLs</span> (<span style="color:#a0a000">&lt;http://dbpedia.org/sparql&gt;</span>)
 }
 </code></pre></div><p>For completeness, the functions can be addressed via their fully qualified Java class names:</p>
 <pre><code>&lt;java:org.apache.jena.sparql.service.enhancer.pfunction.cacheLs&gt;
diff --git a/content/index.xml b/content/index.xml
index 5b7a73b98..3f91493e4 100644
--- a/content/index.xml
+++ b/content/index.xml
@@ -1777,7 +1777,7 @@ Serving RDF For any use of users-password information, and especially HTTP basic
       
       <guid>https://jena.apache.org/documentation/query/service_enhancer.html</guid>
       <description>The service enhancer (SE) plugin extends the functionality of the SERVICE clause with:
- Bulk requests Correlated joins also known as lateral joins A streaming cache for SERVICE requests results which can also cope with bulk requests and correlated joins. Furthermore, queries that only differ in limit and offset will result in cache hits for overlapping ranges. At present, the plugin only ships with an in-memory caching provider.  As a fundamental principle, a request making use of cache and bulk should return the exact same result as if those settings were omitted.</description>
+ Bulk requests Correlated joins also known as lateral joins A streaming cache for SERVICE requests results which can also cope with bulk requests and correlated joins. Furthermore, queries that only differ in limit and offset will result in cache hits for overlapping ranges. At present, the plugin only ships with an in-memory cache provider.  As a fundamental principle, a request making use of cache and bulk should return the exact same result as if those settings were omitted.</description>
     </item>
     
     <item>
diff --git a/content/sitemap.xml b/content/sitemap.xml
index 9535c8d13..9b1610739 100644
--- a/content/sitemap.xml
+++ b/content/sitemap.xml
@@ -409,7 +409,7 @@
   
   <url>
     <loc>https://jena.apache.org/documentation/fuseki2/fuseki-server-protocol.html</loc>
-    <lastmod>2022-07-27T16:28:14+02:00</lastmod>
+    <lastmod>2022-09-13T12:11:16+03:00</lastmod>
   </url>
   
   <url>
@@ -714,7 +714,7 @@
   
   <url>
     <loc>https://jena.apache.org/documentation/io/rdf-input.html</loc>
-    <lastmod>2022-08-26T15:49:37+01:00</lastmod>
+    <lastmod>2022-08-24T16:11:32+02:00</lastmod>
   </url>
   
   <url>
@@ -849,7 +849,7 @@
   
   <url>
     <loc>https://jena.apache.org/documentation/query/service_enhancer.html</loc>
-    <lastmod>2022-08-25T18:30:14+01:00</lastmod>
+    <lastmod>2022-09-07T22:46:36+02:00</lastmod>
   </url>
   
   <url>