Posted to users@jena.apache.org by Pierre Grenon <pg...@horizon-asset.co.uk> on 2019/01/31 15:00:33 UTC

Fuseki2 configuration, TDB2 data, Inferencing with ontologies, Persisting named graphs upon server restart

Hello,

I am trying to:

Set up Fuseki2 with inference and a TDB2 dataset in which named graphs created with SPARQL Update can be persisted.

This is in order to:
- maintain a set of ontologies in a named graph
- maintain datasets in a number of named graphs
- perform reasoning over the union of these graphs
The assumption is that all data is persisted in a given TDB2 database.

The higher purpose is to use reasoning over ontologies when querying over instance data located in named graphs. I think this is conceptually what is discussed here:
Subject Re: Ontologies and model data
Date      Mon, 03 Jun 2013 20:26:18 GMT
http://mail-archives.apache.org/mod_mbox/jena-users/201306.mbox/%3C51ACFBEA.2010206@apache.org%3E

The setup above is what comes across as a way of achieving this higher goal, but I am not sure it is the best setup either. Everything I have tried allows me either to perform inferences over <urn:x-arq:UnionGraph> or to persist triples in named graphs in a TDB2 database, but not both.


My problem is:

I cannot find a correct configuration that allows me to persist named graphs added to a TDB2 dataset and have inference at the same time.

Some attempts are documented below.

I have done most of my tests in apache-jena-fuseki-3.8.0; my last tries were in apache-jena-fuseki-3.10.0.
Would somebody be in a position to advise or provide a minimal working example?

With many thanks and best regards,
Pierre


###
Attempt 1:

# Example of a data service with SPARQL query and update on an
# inference model.  Data is taken from TDB.
https://github.com/apache/jena/blob/master/jena-fuseki2/examples/service-inference-2.ttl
which I adapted to use TDB2 (essentially, updating the namespace and the class references).

Outcome: This allows me to load data into named graphs and to perform inference. However, the data does not persist when the server is restarted.
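
For reference, the TDB2 adaptation looked roughly like this (a sketch rather than the exact file; the endpoint name and database location are placeholders):

```turtle
@prefix :       <#> .
@prefix fuseki: <http://jena.apache.org/fuseki#> .
@prefix ja:     <http://jena.hpl.hp.com/2005/11/Assembler#> .
@prefix tdb2:   <http://jena.apache.org/2016/tdb#> .

# Sketch of service-inference-2.ttl moved from TDB to TDB2:
# the tdb: prefix becomes tdb2:, and the dataset/graph classes
# become their TDB2 counterparts.
:service a fuseki:Service ;
    fuseki:name          "inf" ;            # placeholder endpoint name
    fuseki:serviceQuery  "query" ;
    fuseki:serviceUpdate "update" ;
    fuseki:dataset       :dataset .

:dataset a ja:RDFDataset ;
    ja:defaultGraph :modelInf .

:modelInf a ja:InfModel ;
    ja:baseModel :graphTDB2 ;
    ja:reasoner [ ja:reasonerURL <http://jena.hpl.hp.com/2003/OWLFBRuleReasoner> ] .

:graphTDB2 a tdb2:GraphTDB ;                # default graph of the TDB2 dataset
    tdb2:dataset :tdbDataset .

:tdbDataset a tdb2:DatasetTDB2 ;
    tdb2:location "databases/inf" .         # placeholder location
```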


###
Attempt 2:

Define two services pointing to different graphs as advised in
Subject Re: Persisting named graphs in TDB with jena-fuseki
Date      Thu, 10 Mar 2016 14:47:10 GMT
http://mail-archives.apache.org/mod_mbox/jena-users/201603.mbox/%3CD30738C0.64A6F%25rvesse@dotnetrdf.org%3E

Outcome: I could only manage to define two independent services on two independent datasets and couldn't figure out how to link the TDB2 and inference graphs.

A reference to https://issues.apache.org/jira/browse/JENA-1122
is made in
http://mail-archives.apache.org/mod_mbox/jena-users/201603.mbox/%3C56EC23E3.9010000@apache.org%3E
But I do not understand what is said here.


###
Attempt 3:

I found a config that seemed to make the link between graphs that Attempt 2 needed, in:
https://stackoverflow.com/questions/47568703/named-graphs-v-default-graph-behaviour-in-apache-jena-fuseki
which I have adapted to TDB2.

However, this gives me:
ERROR Exception in initialization: caught: Not in a transaction

This also seems to have been the point at which the thread mentioned above was cut off:
Subject Re: Configuring fuseki with TDB2 and OWL reasoning
Date      Tue, 20 Feb 2018 10:55:13 GMT
http://mail-archives.apache.org/mod_mbox/jena-users/201802.mbox/%3C6d37a8c7-aca1-4c1c-0cd3-fa041ecc07eb%40apache.org%3E

This message refers to https://issues.apache.org/jira/browse/JENA-1492
which I do not understand, but which comes across as having been resolved. Indeed, I tested the config attached to it (probably a minimal example for that issue) and it worked, but I don't think it is the config I need.



#### ATTEMPT 3 Config

@prefix :      <http://base/#> .
@prefix tdb2:  <http://jena.apache.org/2016/tdb#> .
@prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ja:    <http://jena.hpl.hp.com/2005/11/Assembler#> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
@prefix fuseki: <http://jena.apache.org/fuseki#> .

# TDB2
#tdb2:DatasetTDB2 rdfs:subClassOf  ja:RDFDataset .
#tdb2:GraphTDB    rdfs:subClassOf  ja:Model .

# Service 1: Dataset endpoint (no reasoning)
:dataService a fuseki:Service ;
  fuseki:name           "tdbEnpoint" ;
  fuseki:serviceQuery   "sparql", "query" ;
  fuseki:serviceUpdate  "update" ;
  fuseki:dataset        :tdbDataset ;
.

# Service 2: Reasoning endpoint
:reasoningService a fuseki:Service ;
  fuseki:dataset                 :infDataset ;
  fuseki:name                    "reasoningEndpoint" ;
  fuseki:serviceQuery            "query", "sparql" ;
  fuseki:serviceReadGraphStore   "get" ;
.

# Inference dataset
:infDataset rdf:type ja:RDFDataset ;
            ja:defaultGraph :infModel ;
.

# Inference model
:infModel a ja:InfModel ;
           ja:baseModel :g ;

           ja:reasoner [
              ja:reasonerURL <http://jena.hpl.hp.com/2003/OWLFBRuleReasoner> ;
           ] ;
.

# Intermediate graph referencing the default union graph
:g rdf:type tdb2:GraphTDB ;
   tdb2:dataset :tdbDataset ;
   tdb2:graphName <urn:x-arq:UnionGraph> ;
.

# The location of the TDB dataset
:tdbDataset rdf:type tdb2:DatasetTDB2 ;
   tdb2:location  "C:\\dev\\apache-jena-fuseki-3.8.0\\run/databases/weird7" ;
   tdb2:unionDefaultGraph true ;
.


#### SERVER
...
[2019-01-31 13:54:58] Config     INFO  Load configuration: file:///C:/dev/apache-jena-fuseki-3.10.0/run/configuration/weird7.ttl
[2019-01-31 13:54:58] Server     ERROR Exception in initialization: caught: Not in a transaction
[2019-01-31 13:54:58] WebAppContext WARN  Failed startup of context o.e.j.w.WebAppContext@6edc4161{Apache Jena Fuseki Server,/,file:///C:/dev/apache-jena-fuseki-3.10.0/webapp/,UNAVAILABLE}
org.apache.jena.assembler.exceptions.AssemblerException: caught: Not in a transaction
  doing:
    root: file:///C:/dev/apache-jena-fuseki-3.10.0/run/configuration/weird7.ttl#model_inf with type: http://jena.hpl.hp.com/2005/11/Assembler#InfModel assembler class: class org.apache.jena.assembler.assemblers.InfModelAssembler
    root: http://base/#dataset with type: http://jena.hpl.hp.com/2005/11/Assembler#RDFDataset assembler class: class org.apache.jena.sparql.core.assembler.DatasetAssembler

        at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.openBySpecificType(AssemblerGroup.java:165)
        at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.open(AssemblerGroup.java:144)
        at org.apache.jena.assembler.assemblers.AssemblerGroup$ExpandingAssemblerGroup.open(AssemblerGroup.java:93)
        at org.apache.jena.assembler.assemblers.AssemblerBase.open(AssemblerBase.java:39)
        at org.apache.jena.assembler.assemblers.AssemblerBase.open(AssemblerBase.java:35)
        at org.apache.jena.assembler.assemblers.AssemblerGroup.openModel(AssemblerGroup.java:47)
        at org.apache.jena.sparql.core.assembler.DatasetAssembler.createDataset(DatasetAssembler.java:56)
        at org.apache.jena.sparql.core.assembler.DatasetAssembler.open(DatasetAssembler.java:43)
        at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.openBySpecificType(AssemblerGroup.java:157)
        at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.open(AssemblerGroup.java:144)
        at org.apache.jena.assembler.assemblers.AssemblerGroup$ExpandingAssemblerGroup.open(AssemblerGroup.java:93)
        at org.apache.jena.assembler.assemblers.AssemblerBase.open(AssemblerBase.java:39)
        at org.apache.jena.assembler.assemblers.AssemblerBase.open(AssemblerBase.java:35)
        at org.apache.jena.fuseki.build.FusekiConfig.getDataset(FusekiConfig.java:345)
        at org.apache.jena.fuseki.build.FusekiConfig.buildDataService(FusekiConfig.java:299)
        at org.apache.jena.fuseki.build.FusekiConfig.buildDataAccessPoint(FusekiConfig.java:289)
        at org.apache.jena.fuseki.build.FusekiConfig.readConfiguration(FusekiConfig.java:272)
        at org.apache.jena.fuseki.build.FusekiConfig.readConfigurationDirectory(FusekiConfig.java:251)
        at org.apache.jena.fuseki.webapp.FusekiWebapp.initializeDataAccessPoints(FusekiWebapp.java:226)
        at org.apache.jena.fuseki.webapp.FusekiServerListener.serverInitialization(FusekiServerListener.java:98)
        at org.apache.jena.fuseki.webapp.FusekiServerListener.contextInitialized(FusekiServerListener.java:56)
        at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:952)
        at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:558)
        at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:917)
        at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:370)
        at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1497)
        at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1459)
        at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:847)
        at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:287)
        at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:545)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at org.eclipse.jetty.server.handler.gzip.GzipHandler.doStart(GzipHandler.java:410)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
        at org.eclipse.jetty.server.Server.start(Server.java:416)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at org.eclipse.jetty.server.Server.doStart(Server.java:383)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.apache.jena.fuseki.cmd.JettyFusekiWebapp.start(JettyFusekiWebapp.java:138)
        at org.apache.jena.fuseki.cmd.FusekiCmd.runFuseki(FusekiCmd.java:372)
        at org.apache.jena.fuseki.cmd.FusekiCmd$FusekiCmdInner.exec(FusekiCmd.java:356)
        at jena.cmd.CmdMain.mainMethod(CmdMain.java:93)
        at jena.cmd.CmdMain.mainRun(CmdMain.java:58)
        at jena.cmd.CmdMain.mainRun(CmdMain.java:45)
        at org.apache.jena.fuseki.cmd.FusekiCmd$FusekiCmdInner.innerMain(FusekiCmd.java:104)
        at org.apache.jena.fuseki.cmd.FusekiCmd.main(FusekiCmd.java:67)
Caused by: org.apache.jena.dboe.transaction.txn.TransactionException: Not in a transaction
        at org.apache.jena.dboe.transaction.txn.TransactionalComponentLifecycle.checkTxn(TransactionalComponentLifecycle.java:417)
        at org.apache.jena.dboe.trans.bplustree.BPlusTree.getRootRead(BPlusTree.java:159)
        at org.apache.jena.dboe.trans.bplustree.BPlusTree.find(BPlusTree.java:239)
        at org.apache.jena.tdb2.store.nodetable.NodeTableNative.accessIndex(NodeTableNative.java:133)
        at org.apache.jena.tdb2.store.nodetable.NodeTableNative._idForNode(NodeTableNative.java:118)
        at org.apache.jena.tdb2.store.nodetable.NodeTableNative.getNodeIdForNode(NodeTableNative.java:57)
        at org.apache.jena.tdb2.store.nodetable.NodeTableCache._idForNode(NodeTableCache.java:222)
        at org.apache.jena.tdb2.store.nodetable.NodeTableCache.getNodeIdForNode(NodeTableCache.java:114)
        at org.apache.jena.tdb2.store.nodetable.NodeTableWrapper.getNodeIdForNode(NodeTableWrapper.java:47)
        at org.apache.jena.tdb2.store.nodetable.NodeTableInline.getNodeIdForNode(NodeTableInline.java:58)
        at org.apache.jena.tdb2.store.nodetupletable.NodeTupleTableConcrete.idForNode(NodeTupleTableConcrete.java:182)
        at org.apache.jena.tdb2.store.nodetupletable.NodeTupleTableConcrete.findAsNodeIds(NodeTupleTableConcrete.java:136)
        at org.apache.jena.tdb2.store.nodetupletable.NodeTupleTableConcrete.find(NodeTupleTableConcrete.java:114)
        at org.apache.jena.tdb2.store.DatasetPrefixesTDB.readPrefixMap(DatasetPrefixesTDB.java:111)
        at org.apache.jena.sparql.graph.GraphPrefixesProjection.getNsPrefixMap(GraphPrefixesProjection.java:94)
        at org.apache.jena.tdb2.store.GraphViewSwitchable$PrefixMappingImplTDB2.getNsPrefixMap(GraphViewSwitchable.java:159)
        at org.apache.jena.shared.impl.PrefixMappingImpl.setNsPrefixes(PrefixMappingImpl.java:138)
        at org.apache.jena.tdb2.store.GraphViewSwitchable.createPrefixMapping(GraphViewSwitchable.java:68)
        at org.apache.jena.graph.impl.GraphBase.getPrefixMapping(GraphBase.java:165)
        at org.apache.jena.reasoner.BaseInfGraph.getPrefixMapping(BaseInfGraph.java:55)
        at org.apache.jena.rdf.model.impl.ModelCom.getPrefixMapping(ModelCom.java:1018)
        at org.apache.jena.rdf.model.impl.ModelCom.setNsPrefixes(ModelCom.java:1055)
        at org.apache.jena.assembler.assemblers.ModelAssembler.open(ModelAssembler.java:45)
        at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.openBySpecificType(AssemblerGroup.java:157)
        ... 49 more
[2019-01-31 13:54:58] Server     INFO  Started 2019/01/31 13:54:58 GMT on port 3030

THIS E-MAIL MAY CONTAIN CONFIDENTIAL AND/OR PRIVILEGED INFORMATION. 
IF YOU ARE NOT THE INTENDED RECIPIENT (OR HAVE RECEIVED THIS E-MAIL 
IN ERROR) PLEASE NOTIFY THE SENDER IMMEDIATELY AND DESTROY THIS 
E-MAIL. ANY UNAUTHORISED COPYING, DISCLOSURE OR DISTRIBUTION OF THE 
MATERIAL IN THIS E-MAIL IS STRICTLY FORBIDDEN. 

IN ACCORDANCE WITH MIFID II RULES ON INDUCEMENTS, THE FIRM'S EMPLOYEES 
MAY ATTEND CORPORATE ACCESS EVENTS (DEFINED IN THE FCA HANDBOOK AS 
"THE SERVICE OF ARRANGING OR BRINGING ABOUT CONTACT BETWEEN AN INVESTMENT 
MANAGER AND AN ISSUER OR POTENTIAL ISSUER"). DURING SUCH MEETINGS, THE 
FIRM'S EMPLOYEES MAY ON NO ACCOUNT BE IN RECEIPT OF INSIDE INFORMATION 
(AS DESCRIBED IN ARTICLE 7 OF THE MARKET ABUSE REGULATION (EU) NO 596/2014). 
(https://www.handbook.fca.org.uk/handbook/glossary/G3532m.html)
COMPANIES WHO DISCLOSE INSIDE INFORMATION ARE IN BREACH OF REGULATION 
AND MUST IMMEDIATELY AND CLEARLY NOTIFY ALL ATTENDEES. FOR INFORMATION 
ON THE FIRM'S POLICY IN RELATION TO ITS PARTICIPATION IN MARKET SOUNDINGS, 
PLEASE SEE https://www.horizon-asset.co.uk/market-soundings/. 

HORIZON ASSET LLP IS AUTHORISED AND REGULATED 
BY THE FINANCIAL CONDUCT AUTHORITY.



Re: Fuseki2 configuration, TDB2 data, Inferencing with ontologies, Persisting named graphs upon server restart

Posted by Andy Seaborne <an...@apache.org>.

On 19/02/2019 13:30, Pierre Grenon wrote:
> Hey Andy,
> 
> Sorry I don’t mean to be agonisingly thick but I’m not sure I follow the conclusion and I don’t get how to modify the config file that I had attached for a TDB config.
> 
> I didn't modify the on-disc data model. I added a SPARQL update method to the inference model and I removed the explicit link to the union graph. I loaded data into both the data and inference models. I lost the ability to query without named graphs (so I have to use GRAPH), and the inference model wasn't loaded with disc-saved data when restarting.
> 
> To the question:
> 
>>> What is the prescribed way of keeping disc data and inference datasets in synch?
> 
> Your answer is two parts:
> 
>> Update via the inference model.
> 
> This means I keep two separate models, right? One in memory where I inference. One on disk where I just store data. But then they remain disconnected and I can’t initialise the inference model with the disk data in any case. Sorry I am a bit confused.

Inference is executed at start-up and as the data changes.  Only some of it
is calculated at query time, and even then some results are cached.

If you directly change the base graph while the system is running, the 
inference graph won't notice the changes.

So either you have to stop the system, update the base data, and restart
(which rebuilds the in-memory structures with the new data), or you update
the inference graph while the system is running, which updates both the
base graph and the in-memory structures.
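
Read against the earlier config, that advice (this is my reading of it, sketched rather than a verified configuration) would mean exposing update on the reasoning service and basing the inference model on a concrete graph:

```turtle
# Prefixes as in the earlier config. Sketch only: updates go through
# the inference model, whose base is a concrete TDB2 graph rather than
# the union graph.
:reasoningService a fuseki:Service ;
    fuseki:name          "reasoningEndpoint" ;
    fuseki:serviceQuery  "query" ;
    fuseki:serviceUpdate "update" ;        # updates via the inference model
    fuseki:dataset       :infDataset .

:infDataset a ja:RDFDataset ;
    ja:defaultGraph :infModel .

:infModel a ja:InfModel ;
    ja:baseModel :baseGraph ;
    ja:reasoner [ ja:reasonerURL <http://jena.hpl.hp.com/2003/OWLFBRuleReasoner> ] .

# Concrete base graph: no tdb2:graphName, so this is the dataset's real
# default graph, not <urn:x-arq:UnionGraph>.
:baseGraph a tdb2:GraphTDB ;
    tdb2:dataset :tdbDataset .
```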

> 
>> Don't wire it to the union graph.
          ("wire" should read "write")
> 
> What does this mean? Will the default graph give me access to the union of graphs?

The union graph is an artifact of reading or querying the data. It can't
be updated (where would the triples go?)
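
For example, the union view can be read but not written (a sketch, using the <urn:x-arq:UnionGraph> name that appears elsewhere in this thread):

```sparql
# Works: reading across all named graphs through the union view.
SELECT ?s ?p ?o
WHERE { GRAPH <urn:x-arq:UnionGraph> { ?s ?p ?o } }

# Does not make sense: there is no concrete graph for the triple to land in.
# INSERT DATA { GRAPH <urn:x-arq:UnionGraph> { <urn:a> <urn:b> <urn:c> } }
```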

> 
> In the config file, do I entirely get rid of this or just the last clause?

See above about the update approach.

> 
>>> # Intermediate graph referencing the default union graph
>>> :g rdf:type tdb:GraphTDB ;
>>> tdb:dataset :tdbDataset ;
>>> tdb:graphName <urn:x-arq:UnionGraph> ;
>>> .
> 
> Thank you,
> Pierre
> 
> 
> 
> From: Andy Seaborne [mailto:andy@apache.org]
> Sent: 09 February 2019 17:53
> To: users@jena.apache.org
> Subject: Re: Fuseki2 configuration, TDB2 data, Inferencing with ontologies, Persisting named graphs upon server restart
> 
> 
> 
> On 04/02/2019 12:31, Pierre Grenon wrote:
>> Hi,
>>
>> following up after going through my attempts more systematically again. I'm trying to be as specific and clear as I can. Any feedback most appreciated.
>>
>> Many thanks,
>> Pierre
>>
>> 1. It is possible to have a configuration file in which data is loaded into a TDB and inferences are run over this data. In this case:
>>
>> 1.a Data in named graphs created using SPARQL Update into a TDB dataset persists upon restart.
> 
> Data must be loaded through the inference graph for the inferencer to
> notice the change.
> 
> So the SPARQL updates can't create a new graph. Assemblers have a fixed
> configuration.
> 
> (You could have one graph per database and upload new assemblers while
> Fuseki is running..)
> 
>>
>> 1.b Assertional data in these named graphs is immediately available to the reasoning endpoint without server restart.
>>
>> 1.c Inference on data loaded using SPARQL Update requires restart of the server after upload.
>>
>> 1.d CLEAR ALL in the TDB dataset endpoint requires server restart to have the inference dataset emptied. (Queries to the reasoning endpoint for either assertional or inferred data both return the same results as prior to clearing the TDB dataset.)
> 
> Same general point - if you manipulate the database directly, the
> inference code doesn't know a change has happened or what has changed.
> 
>> 2. TDB2 does not allow this --- or is that only for the moment? As per the OP in this thread, the configuration adapted to TDB2 breaks. Based on Andy's response, this may be caused by Bug Jena-1633. Would fixing the bug be enough to allow for the configuration using TDB2?
> 
> JENA-1663.
> 
>> 3. Inference datasets do not synch with the underlying TDB(2) datasets (1.b and 1.c, in virtue of the in-memory nature of inference models and the way configuration files are handled, as per Andy's and ajs6f's responses).
>>
>> In view of this, however, 1.b is really weird.
>>
>> 4. Adding a service update method to the reasoning service does not seem to allow updating the inference dataset. Sending SPARQL Update to the inference endpoint does not result in either additional assertional or inferred data. (Although, per 1.b, asserted data is returned when the SPARQL Update is sent to the TDB endpoint.)
> 
> The base graph of the inference model is updated.
> 
> But you have that set to <urn:x-arq:UnionGraph>.
> 
> That applies to SPARQL query - the updates will have gone to the real
> default graph but that is hidden by your setup.
> 
>>
>>
>> Question:
>>
>> What is the prescribed way of keeping disc data and inference datasets in synch?
> 
> Update via the inference model.
> Don't wire it to the union graph.
> 
> 
> 
>>
>> Is it:
>>
>> P1 - upon SPARQL Update to disc data, restart server (and reinitialise inference dataset)?
>> This makes it difficult to manage successive updates, especially when there are dependencies between them, e.g., if in order to make update 2 I need to have done update 1, I need to restart after update 1.
>>
>> Given that TDB only works at the moment, what is the 'transactional' meaning of having to do this?
>>
>> P2 - upon SPARQL Update to disc data, SPARQL Update inference dataset. Is it possible to update the inference dataset? In that case, is it possible to guarantee that the two datasets are in synch? Does TDB versus TDB2 matter?
>>
>> 5. Note to self: property chains are not supported by the OWLFBRuleReasoner.
>>
>>
>> ##### TDB Configuration
>> ##### From:
>>
>> https://stackoverflow.com/questions/47568703/named-graphs-v-default-graph-behaviour-in-apache-jena-fuseki
>> @prefix : <http://base/#> .
>> @prefix tdb: <http://jena.hpl.hp.com/2008/tdb#> .
>> @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
>> @prefix ja: <http://jena.hpl.hp.com/2005/11/Assembler#> .
>> @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
>> @prefix fuseki: <http://jena.apache.org/fuseki#> .
>>
>> # TDB
>> tdb:DatasetTDB rdfs:subClassOf ja:RDFDataset .
>> tdb:GraphTDB rdfs:subClassOf ja:Model .
>>
>>
>> # Service 1: Dataset endpoint (no reasoning)
>> :dataService a fuseki:Service ;
>> fuseki:name "tdbEnpointTDBB" ;
>> fuseki:serviceQuery "sparql", "query" ;
>> fuseki:serviceUpdate "update" ;
>> fuseki:dataset :tdbDataset ;
>> .
>>
>> # Service 2: Reasoning endpoint
>> :reasoningService a fuseki:Service ;
>> fuseki:dataset :infDataset ;
>> fuseki:name "reasoningEndpointTDBB" ;
>> fuseki:serviceQuery "query", "sparql" ;
>> fuseki:serviceReadGraphStore "get" ;
>> .
>>
>> # Inference dataset
>> :infDataset rdf:type ja:RDFDataset ;
>> ja:defaultGraph :infModel ;
>> .
>>
>> # Inference model
>> :infModel a ja:InfModel ;
>> ja:baseModel :g ;
>>
>> ja:reasoner [
>> ja:reasonerURL <http://jena.hpl.hp.com/2003/OWLFBRuleReasoner> ;
>> ] ;
>> .
>>
>> # Intermediate graph referencing the default union graph
>> :g rdf:type tdb:GraphTDB ;
>> tdb:dataset :tdbDataset ;
>> tdb:graphName <urn:x-arq:UnionGraph> ;
>> .
>>
>> # The location of the TDB dataset
>> :tdbDataset rdf:type tdb:DatasetTDB ;
>> tdb:location "C:\\dev\\apache-jena-fuseki-3.8.0\\run/databases/tdbB" ;
>> tdb:unionDefaultGraph true ;
>> .
>>
>> From: Pierre Grenon
>> Sent: 01 February 2019 15:07
>> To: 'users@jena.apache.org'
>> Subject: RE: Fuseki2 configuration, TDB2 data, Inferencing with ontologies, Persisting named graphs upon server restart
>>
>>
>> I'll address you two, fine gentlemen, at once if that's OK.
>>
>>> On 31/01/2019 17:57, ajs6f wrote:
>>>>> 2/ It is not possible in an assembler/Fuseki configuration file, to create a new named graph and have a another inference graph put around that new graph at runtime.
>>>>
>>>> Just to pull on one of these threads, my understanding is that this is essentially because the assembler system works only by names. IOW, there's no such thing as a "variable", and a blank node doesn't function as a slot (as it might in a SPARQL query), just as a nameless node. So you have to know the specific name of any specific named graph to which you want to refer. A named graph that doesn't yet exist, and may have any name at all when it does, obviously doesn't fit into that.
>>>>
>>
>> I find this difficult to follow. By name, do you mean a value of ja:graphName, so something like <urn:my:beautiful:graph>?
>>
>> I have tried a configuration in which I was defining graphs.
>>
>> <#graph_umb> rdf:type tdb2:GraphTDB ;
>> tdb2:dataset :datasetTDB2 ;
>> ja:graphName <urn:mad:bro> .
>>
>> Then I'd load into that graph.
>>
>> Again, I haven't found a configuration that allowed me to also define an inference engine and keep the content of these graphs.
>>
>> I will retry and try to post files for comments, unless you can come up with a minimal example that would both save time and help preserve sanity.
>>
>>>> Andy and other more knowledgeable people: is that correct?
>>>
>>> The issue is that the assembler runs once at the start, builds some Java
>>> structures based on that and does not get invoked when the new graph is
>>> created later.
>>
>> To some extent, it would be possible to live with predefined graphs in the config file. This would work for ontologies and reference data that doesn't change.
>>
>> For data, in particular the type of data with lots of numbers that corresponds to daily operational data, it might be infeasible to predefine graph names, unless you can declare some sort of template graph names (e.g., <urn:data:icecream:[FLAVOUR]:[YYYYMMDD]>), which sounds like a stretch. Alternatively, we could use a rolling predefined graph and save it under a specific name as an archive, then clear and load new data on a daily basis. I think this is a different issue though.
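
If the rolling-graph workaround were used, the daily rotation could be a SPARQL Update request along these lines (the graph names are hypothetical, following the example above):

```sparql
# Archive today's rolling graph under a dated name, then clear it for
# the next day's load.
COPY GRAPH <urn:data:icecream:vanilla:rolling>
  TO GRAPH <urn:data:icecream:vanilla:20190131> ;
CLEAR GRAPH <urn:data:icecream:vanilla:rolling>
```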
>>
>>> The issue is also that the union graph is partitioned - if a single
>>> concrete graph were used, it might well work.
>>
>> I'm not sure I follow this. Can you show an example of a config file that makes that partitioning?
>>
>>> I haven't worked out the other details like why persistence isn't
>>> happening. Might be related to a union graph. Might be update
>>> happening going around the inference graph.
>>
>> Hope the previous message helped clarify the issue.
>>
>> As a follow-up too, I'm asked if it is possible to save to disc any named graph created in memory before shutting down the server, and whether that would be a workaround.
>>
>> with many thanks and kind regards,
>> Pierre
>>
>>
>>

RE: Fuskei2 configuration, TDB2 data, Inferencing with ontologies, Persisting named graphs upon server restart

Posted by Pierre Grenon <pg...@horizon-asset.co.uk>.
Hey Andy,

Sorry I don’t mean to be agonisingly thick but I’m not sure I follow the conclusion and I don’t get how to modify the config file that I had attached for a TDB config.

I didn’t modify the data on disc model. I added a sparql update method to the inference model and I removed the explicit link to the union graph. I loaded in both data and inference models. I lost the ability to query without named graphs (so I have to use GRAPH) and the inference model wasn’t loaded with disc saved data when restarting.

To the question:

>> What is the prescribed way of keeping disc data and inference datasets in synch?

Your answer is two parts:

> Update via the inference model.

This means I keep two separate models, right? One in memory where I inference. One on disk where I just store data. But then they remain disconnected and I can’t initialise the inference model with the disk data in any case. Sorry I am a bit confused.

> Don't wire it to the union graph.

What does this mean? Will the default graph give me access to the union of graphs?

In the config file, do I entirely get rid of this or just the last clause?

>> # Intermediate graph referencing the default union graph
>> :g rdf:type tdb:GraphTDB ;
>> tdb:dataset :tdbDataset ;
>> tdb:graphName <urn:x-arq:UnionGraph> ;
>> .

Thank you,
Pierre



From: Andy Seaborne [mailto:andy@apache.org]
Sent: 09 February 2019 17:53
To: users@jena.apache.org
Subject: Re: Fuskei2 configuration, TDB2 data, Inferencing with ontologies, Persisting named graphs upon server restart




Re: Fuskei2 configuration, TDB2 data, Inferencing with ontologies, Persisting named graphs upon server restart

Posted by Andy Seaborne <an...@apache.org>.

On 04/02/2019 12:31, Pierre Grenon wrote:
> Hi,
> 
> following up after going through my attempts more systematically again. I'm trying to be as specific and clear as I can. Any feedback most appreciated.
> 
> Many thanks,
> Pierre
> 
> 1. It is possible to have a configuration file in which data is loaded into a TDB and inferences are run over this data. In this case:
> 
> 1.a Data in named graphs created using SPARQL Update into a TDB dataset persists upon restart.

Data must be loaded through the inference graph for the inferencer to 
notice the change.

So the SPARQL updates can't create a new graph. Assemblers have a fixed 
configuration.

(You could have one graph per database and upload new assemblers while 
Fuseki is running..)

> 
> 1.b Assertional data in these named graphs is immediately available to the reasoning endpoint without server restart.
> 
> 1.c Inference on data loaded using SPARQL Update requires restart of the server after upload.
> 
> 1.d CLEAR ALL in the TDB dataset endpoint requires server restart to have the inference dataset emptied. (Queries to the reasoning endpoint for either assertional or inferred data both return the same results as prior to clearing the TDB dataset.)

Same general point - if you manipulate the database directly, the 
inference code doesn't know a change has happened or what has changed.

> 2. TDB2 does not allow this --- or is that only for the moment? As per the OP in this thread, the configuration adapted to TDB2 breaks. Based on Andy's response, this may be caused by bug Jena-1633. Would fixing that bug be enough to allow the configuration using TDB2?

JENA-1663.

> 3. Inference datasets do not synch with the underlying TDB(2) datasets (1.b and 1.c, by virtue of the in-memory nature of inference models and the way configuration files are handled, as per Andy's and ajs6f's responses).
> 
> In view of this, however, 1.b is really weird.
> 
> 4. Adding a service update method to the reasoning service does not seem to allow updating the inference dataset. Sending SPARQL Update to the inference endpoint does not result in either additional assertional or inferred data. (Although, per 1.b, asserted data is returned when the SPARQL Update is sent to the TDB endpoint.)

The base graph of the inference model is updated.

But you have that set to <urn:x-arq:UnionGraph>.

That applies to SPARQL query - the updates will have gone to the real 
default graph but that is hidden by your setup.

> 
> 
> Question:
> 
> What is the prescribed way of keeping disc data and inference datasets in synch?

Update via the inference model.
Don't wire it to the union graph.
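
A minimal sketch of that shape, reusing the prefixes and the OWLFBRuleReasoner
setup from the configuration elsewhere in this thread; the graph name
<urn:example:data> and the service name are invented for illustration, and
this sketch is untested:

```turtle
# Reasoning service that also accepts updates, so changes go
# through the inference model rather than around it
:reasoningService a fuseki:Service ;
   fuseki:name           "reasoningEndpoint" ;
   fuseki:serviceQuery   "query", "sparql" ;
   fuseki:serviceUpdate  "update" ;
   fuseki:dataset        :infDataset ;
.

:infDataset rdf:type ja:RDFDataset ;
            ja:defaultGraph :infModel ;
.

:infModel a ja:InfModel ;
           ja:baseModel :baseGraph ;
           ja:reasoner [
              ja:reasonerURL <http://jena.hpl.hp.com/2003/OWLFBRuleReasoner> ;
           ] ;
.

# A single concrete graph as the base model, not <urn:x-arq:UnionGraph>
:baseGraph rdf:type tdb:GraphTDB ;
   tdb:dataset :tdbDataset ;
   tdb:graphName <urn:example:data> ;
.
```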


RE: Fuskei2 configuration, TDB2 data, Inferencing with ontologies, Persisting named graphs upon server restart

Posted by Pierre Grenon <pg...@horizon-asset.co.uk>.
Hi,

following up after going through my attempts more systematically again. I'm trying to be as specific and clear as I can. Any feedback most appreciated.

Many thanks,
Pierre

1. It is possible to have a configuration file in which data is loaded into a TDB and inferences are run over this data. In this case:

1.a Data in named graphs created using SPARQL Update into a TDB dataset persists upon restart.

1.b Assertional data in these named graphs is immediately available to the reasoning endpoint without server restart.

1.c Inference on data loaded using SPARQL Update requires restart of the server after upload.

1.d CLEAR ALL in the TDB dataset endpoint requires server restart to have the inference dataset emptied. (Queries to the reasoning endpoint for either assertional or inferred data both return the same results as prior to clearing the TDB dataset.)

2. TDB2 does not allow this --- or is that only for the moment? As per the OP in this thread, the configuration adapted to TDB2 breaks. Based on Andy's response, this may be caused by bug Jena-1633. Would fixing that bug be enough to allow the configuration using TDB2?

3. Inference datasets do not synch with the underlying TDB(2) datasets (1.b and 1.c, by virtue of the in-memory nature of inference models and the way configuration files are handled, as per Andy's and ajs6f's responses).

In view of this, however, 1.b is really weird.

4. Adding a service update method to the reasoning service does not seem to allow updating the inference dataset. Sending SPARQL Update to the inference endpoint does not result in either additional assertional or inferred data. (Although, per 1.b, asserted data is returned when the SPARQL Update is sent to the TDB endpoint.)


Question:

What is the prescribed way of keeping disc data and inference datasets in synch?

Is it:

P1 - upon SPARQL Update to disc data, restart server (and reinitialise inference dataset)?
This makes it difficult to manage successive updates, especially when there may be dependencies between them, e.g., if in order to make update 2 I need to have done update 1, I need to restart after update 1.

Given that only TDB works at the moment, what is the 'transactional' meaning of having to do this?

P2 - upon SPARQL Update to disc data, SPARQL Update inference dataset. Is it possible to update the inference dataset? In that case, is it possible to guarantee that the two datasets are in synch? Does TDB versus TDB2 matter?

5. Note to self: property chains are not supported by the OWLFBRuleReasoner.


##### TDB Configuration
##### From:

https://stackoverflow.com/questions/47568703/named-graphs-v-default-graph-behaviour-in-apache-jena-fuseki
@prefix :      <http://base/#> .
@prefix tdb:   <http://jena.hpl.hp.com/2008/tdb#> .
@prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ja:    <http://jena.hpl.hp.com/2005/11/Assembler#> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
@prefix fuseki: <http://jena.apache.org/fuseki#> .

# TDB
tdb:DatasetTDB  rdfs:subClassOf  ja:RDFDataset .
tdb:GraphTDB    rdfs:subClassOf  ja:Model .


# Service 1: Dataset endpoint (no reasoning)
:dataService a fuseki:Service ;
  fuseki:name           "tdbEnpointTDBB" ;
  fuseki:serviceQuery   "sparql", "query" ;
  fuseki:serviceUpdate  "update" ;
  fuseki:dataset        :tdbDataset ;
.

# Service 2: Reasoning endpoint
:reasoningService a fuseki:Service ;
  fuseki:dataset                 :infDataset ;
  fuseki:name                    "reasoningEndpointTDBB" ;
  fuseki:serviceQuery            "query", "sparql" ;
  fuseki:serviceReadGraphStore   "get" ;
.

# Inference dataset
:infDataset rdf:type ja:RDFDataset ;
            ja:defaultGraph :infModel ;
.

# Inference model
:infModel a ja:InfModel ;
           ja:baseModel :g ;

           ja:reasoner [
              ja:reasonerURL <http://jena.hpl.hp.com/2003/OWLFBRuleReasoner> ;
           ] ;
.

# Intermediate graph referencing the default union graph
:g rdf:type tdb:GraphTDB ;
   tdb:dataset :tdbDataset ;
   tdb:graphName <urn:x-arq:UnionGraph> ;
.

# The location of the TDB dataset
:tdbDataset rdf:type tdb:DatasetTDB ;
        tdb:location  "C:\\dev\\apache-jena-fuseki-3.8.0\\run/databases/tdbB" ;
            tdb:unionDefaultGraph true ;
.
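
For concreteness, the kind of SPARQL Update behind 1.a, sent to the plain
TDB endpoint's update service, would look along these lines (the graph name
and triple are invented for illustration):

```sparql
# Sent to the /tdbEnpointTDBB/update endpoint;
# TDB creates the named graph implicitly on first insert
INSERT DATA {
  GRAPH <urn:example:data:20190204> {
    <urn:example:subject> <urn:example:predicate> "value" .
  }
}
```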

From: Pierre Grenon
Sent: 01 February 2019 15:07
To: 'users@jena.apache.org'
Subject: RE: Fuskei2 configuration, TDB2 data, Inferencing with ontologies, Persisting named graphs upon server restart


THIS E-MAIL MAY CONTAIN CONFIDENTIAL AND/OR PRIVILEGED INFORMATION. 
IF YOU ARE NOT THE INTENDED RECIPIENT (OR HAVE RECEIVED THIS E-MAIL 
IN ERROR) PLEASE NOTIFY THE SENDER IMMEDIATELY AND DESTROY THIS 
E-MAIL. ANY UNAUTHORISED COPYING, DISCLOSURE OR DISTRIBUTION OF THE 
MATERIAL IN THIS E-MAIL IS STRICTLY FORBIDDEN. 

IN ACCORDANCE WITH MIFID II RULES ON INDUCEMENTS, THE FIRM'S EMPLOYEES 
MAY ATTEND CORPORATE ACCESS EVENTS (DEFINED IN THE FCA HANDBOOK AS 
"THE SERVICE OF ARRANGING OR BRINGING ABOUT CONTACT BETWEEN AN INVESTMENT 
MANAGER AND AN ISSUER OR POTENTIAL ISSUER"). DURING SUCH MEETINGS, THE 
FIRM'S EMPLOYEES MAY ON NO ACCOUNT BE IN RECEIPT OF INSIDE INFORMATION 
(AS DESCRIBED IN ARTICLE 7 OF THE MARKET ABUSE REGULATION (EU) NO 596/2014). 
(https://www.handbook.fca.org.uk/handbook/glossary/G3532m.html)
COMPANIES WHO DISCLOSE INSIDE INFORMATION ARE IN BREACH OF REGULATION 
AND MUST IMMEDIATELY AND CLEARLY NOTIFY ALL ATTENDEES. FOR INFORMATION 
ON THE FIRM'S POLICY IN RELATION TO ITS PARTICIPATION IN MARKET SOUNDINGS, 
PLEASE SEE https://www.horizon-asset.co.uk/market-soundings/. 

HORIZON ASSET LLP IS AUTHORISED AND REGULATED 
BY THE FINANCIAL CONDUCT AUTHORITY.



Re: Fuskei2 configuration, TDB2 data, Inferencing with ontologies, Persisting named graphs upon server restart

Posted by Andy Seaborne <an...@apache.org>.

On 31/01/2019 18:27, ajs6f wrote:
> 
>> On Jan 31, 2019, at 1:23 PM, Andy Seaborne <an...@apache.org> wrote:
>>
>> On 31/01/2019 17:57, ajs6f wrote:
>>>> 2/ It is not possible in an assembler/Fuseki configuration file, to create a new named graph and have another inference graph put around that new graph at runtime.
>>> Just to pull on one of these threads, my understanding is that this is essentially because the assembler system works only by names. IOW, there's no such thing as a "variable", and a blank node doesn't function as a slot (as it might in a SPARQL query), just as a nameless node. So you have to know the specific name of any specific named graph to which you want to refer. A named graph that doesn't yet exist and may have any name at all when it does obviously doesn't fit into that.
>>> Andy and other more knowledgeable people: is that correct?
>>
>> The issue is that the assembler runs once at the start, builds some Java structures based on that and does not get invoked when the new graph is created later.
> 
> So a means by which the assembly process could be reinitiated would work, because you could name the graph then. But there are other problems with that-- you're going to reload _everything_.
> 
> Perhaps we could think about the functionality in the admin API for Fuseki that allows assemblers to be reloaded:
> 
> https://jena.apache.org/documentation/fuseki2/fuseki-server-protocol.html#adding-a-dataset-and-its-services
> 
> because apparently, we left ourselves a note there: "@@ May add server-managed templates". That's not too far from what I was thinking of, whether or not I expressed myself well.
> 
> ajs6f
> 

Being able to upload a particular assembler for a dataset, and if one is 
already present drop the old one, then build and install the new one, would 
be good.

     Andy


Re: Fuskei2 configuration, TDB2 data, Inferencing with ontologies, Persisting named graphs upon server restart

Posted by ajs6f <aj...@apache.org>.
> On Jan 31, 2019, at 1:23 PM, Andy Seaborne <an...@apache.org> wrote:
> 
> On 31/01/2019 17:57, ajs6f wrote:
>>> 2/ It is not possible in an assembler/Fuseki configuration file, to create a new named graph and have another inference graph put around that new graph at runtime.
>> Just to pull on one of these threads, my understanding is that this is essentially because the assembler system works only by names. IOW, there's no such thing as a "variable", and a blank node doesn't function as a slot (as it might in a SPARQL query), just as a nameless node. So you have to know the specific name of any specific named graph to which you want to refer. A named graph that doesn't yet exist and may have any name at all when it does obviously doesn't fit into that.
>> Andy and other more knowledgeable people: is that correct?
> 
> The issue is that the assembler runs once at the start, builds some Java structures based on that and does not get invoked when the new graph is created later.

So a means by which the assembly process could be reinitiated would work, because you could name the graph then. But there are other problems with that-- you're going to reload _everything_.

Perhaps we could think about the functionality in the admin API for Fuseki that allows assemblers to be reloaded:

https://jena.apache.org/documentation/fuseki2/fuseki-server-protocol.html#adding-a-dataset-and-its-services

because apparently, we left ourselves a note there: "@@ May add server-managed templates". That's not too far from what I was thinking of, whether or not I expressed myself well.

ajs6f

RE: Fuskei2 configuration, TDB2 data, Inferencing with ontologies, Persisting named graphs upon server restart

Posted by Pierre Grenon <pg...@horizon-asset.co.uk>.
I'll address you two, fine gentlemen, at once if that's OK.

> On 31/01/2019 17:57, ajs6f wrote:
>>> 2/ It is not possible in an assembler/Fuseki configuration file, to create a new named graph and have a another inference graph put around that new graph at runtime.
>>
>> Just to pull on one of these threads, my understanding is that this is essentially because the assembler system works only by names. IOW, there's no such thing as a "variable", and a blank node doesn't function as a slot (as it might in a SPARQL query), just as a nameless node. So you have to know the specific name of any specific named graph to which you want to refer. A named graph that doesn't yet exist and may have any name at all when it does obviously doesn't fit into that.
>>

I find this difficult to follow. By name, do you mean a value of ja:graphName, i.e. something like <urn:my:beautiful:graph>?

I have tried a configuration in which I was defining graphs.

<#graph_umb> rdf:type tdb2:GraphTDB ;
  tdb2:dataset :datasetTDB2 ;
  ja:graphName <urn:mad:bro> .

Then I'd load into that graph.
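(For concreteness, loading into that predeclared graph was just a plain SPARQL Update against the service's update endpoint; the triples here are illustrative.)

```sparql
# Insert data into the graph predeclared via ja:graphName above.
INSERT DATA {
  GRAPH <urn:mad:bro> {
    <urn:example:s> <urn:example:p> <urn:example:o> .
  }
}
```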

Again, I haven't found a configuration that allowed me to also define an inference engine and keep the content of these graphs.

I will retry and post files for comments, unless you can come up with a minimal example, which would both save time and help preserve sanity.

>> Andy and other more knowledgeable people: is that correct?
>
> The issue is that the assembler runs once at the start, builds some Java
> structures based on that and does not get invoked when the new graph is
> created later.

To some extent, it would be possible to live with predefined graphs in the config file. This would work for ontologies and reference data that doesn't change.

For data, in particular high-volume daily operational data, it might be infeasible to predefine graph names unless you can declare some sort of template graph names (e.g., <urn:data:icecream:[FLAVOUR]:[YYYYMMDD]>), which sounds like a stretch. Alternatively, we could use a rolling predefined graph, save it under a specific name as an archive, then clear it and load new data on a daily basis. I think this is a different issue though.
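The rolling-graph idea could be as simple as a nightly SPARQL Update, sketched here with made-up graph names:

```sparql
# Archive the rolling graph under a dated name, then clear it
# for the next day's load. Graph names are illustrative.
COPY <urn:data:icecream:current> TO <urn:data:icecream:archive:20190131> ;
CLEAR GRAPH <urn:data:icecream:current>
```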

> The issue is also that the union graph is partitioned - if a single
> concrete graph were used, it might well work.

I'm not sure I follow this. Can you show an example of a config file that makes that partitioning?

> I haven't worked out the other details, like why persistence isn't
> happening. Might be related to a union graph. Might be updates
> going around the inference graph.

Hope the previous message helped clarify the issue.

As a follow-up, I've also been asked whether it is possible to save to disk any named graph created in memory before shutting down the server, and whether that would be a workaround.
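One candidate workaround, assuming it is the base data (not the inferences) that needs preserving, would be to dump the whole TDB2 database to N-Quads with Jena's command-line tools, which keeps the named-graph structure. A sketch, with illustrative paths:

```shell
# With the server stopped, dump all quads (named graphs included)
# from the TDB2 database, then reload into a database later.
tdb2.tdbdump --loc=run/databases/weird7 > backup.nq
tdb2.tdbloader --loc=run/databases/weird7-restored backup.nq
```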

with many thanks and kind regards,
Pierre

THIS E-MAIL MAY CONTAIN CONFIDENTIAL AND/OR PRIVILEGED INFORMATION. 
IF YOU ARE NOT THE INTENDED RECIPIENT (OR HAVE RECEIVED THIS E-MAIL 
IN ERROR) PLEASE NOTIFY THE SENDER IMMEDIATELY AND DESTROY THIS 
E-MAIL. ANY UNAUTHORISED COPYING, DISCLOSURE OR DISTRIBUTION OF THE 
MATERIAL IN THIS E-MAIL IS STRICTLY FORBIDDEN. 

IN ACCORDANCE WITH MIFID II RULES ON INDUCEMENTS, THE FIRM'S EMPLOYEES 
MAY ATTEND CORPORATE ACCESS EVENTS (DEFINED IN THE FCA HANDBOOK AS 
"THE SERVICE OF ARRANGING OR BRINGING ABOUT CONTACT BETWEEN AN INVESTMENT 
MANAGER AND AN ISSUER OR POTENTIAL ISSUER"). DURING SUCH MEETINGS, THE 
FIRM'S EMPLOYEES MAY ON NO ACCOUNT BE IN RECEIPT OF INSIDE INFORMATION 
(AS DESCRIBED IN ARTICLE 7 OF THE MARKET ABUSE REGULATION (EU) NO 596/2014). 
(https://www.handbook.fca.org.uk/handbook/glossary/G3532m.html)
COMPANIES WHO DISCLOSE INSIDE INFORMATION ARE IN BREACH OF REGULATION 
AND MUST IMMEDIATELY AND CLEARLY NOTIFY ALL ATTENDEES. FOR INFORMATION 
ON THE FIRM'S POLICY IN RELATION TO ITS PARTICIPATION IN MARKET SOUNDINGS, 
PLEASE SEE https://www.horizon-asset.co.uk/market-soundings/. 

HORIZON ASSET LLP IS AUTHORISED AND REGULATED 
BY THE FINANCIAL CONDUCT AUTHORITY.



Re: Fuskei2 configuration, TDB2 data, Inferencing with ontologies, Persisting named graphs upon server restart

Posted by Andy Seaborne <an...@apache.org>.

On 31/01/2019 17:57, ajs6f wrote:
>> 2/ It is not possible, in an assembler/Fuseki configuration file, to create a new named graph and have another inference graph put around that new graph at runtime.
> 
> Just to pull on one of these threads, my understanding is that this is essentially because the assembler system works only by names. IOW, there's no such thing as a "variable", and a blank node doesn't function as a slot (as it might in a SPARQL query), just as a nameless node. So you have to know the specific name of any specific named graph to which you want to refer. A named graph that doesn't yet exist and may have any name at all when it does obviously doesn't fit into that.
> 
> Andy and other more knowledgeable people: is that correct?

The issue is that the assembler runs once at the start, builds some Java 
structures based on that and does not get invoked when the new graph is 
created later.

The issue is also that the union graph is partitioned - if a single 
concrete graph were used, it might well work.

I haven't worked out the other details, like why persistence isn't 
happening.  Might be related to a union graph.  Might be updates 
going around the inference graph.

> If so, we might want to think about changing that as a _very long-term_ goal. Having the ability to refer to resources in assembler RDF by triple pattern or property matching or by some other means like that could be astoundingly powerful for a lot of these use cases that we see involving complex setups with multiple sources of data and inference. Yes, I realize that it would also be pretty gosh dang difficult (how do you know when to apply or re-apply the filtering/matching?) and might not be worth doing on those grounds alone, but I might file a ticket just to not forget the possibility.
> 
> ajs6f
> 
>> On Jan 31, 2019, at 12:41 PM, Andy Seaborne <an...@apache.org> wrote:
>>
>> Hi Pierre,
>>
>> A few points to start with:
>>
>> 1/ For my general understanding of how we might take the inference provision further in Jena, what inferences are you particularly interested in?
>>
>> 2/ It is not possible, in an assembler/Fuseki configuration file, to create a new named graph and have another inference graph put around that new graph at runtime.
>>
>> 3/ The union graph can't be updated.  It's a read-only union.
>>
>> 4/ "ERROR Exception in initialization: caught: Not in a transaction"
>> Do you have the stacktrace for this?
>>
>> Inline questions about attempt 1.
>>
>>     Andy
>>
>>
>>
>> On 31/01/2019 15:00, Pierre Grenon wrote:
>>> Hello,
>>> I am trying to:
>>> Set up Fuseki2 with inference and a TDB2 dataset in which named graphs created with SPARQL Update can be persisted.
>>> This is in order to:
>>> - maintain a set of ontologies in a named graph
>>> - maintain datasets in a number of named graphs
>>> - perform reasoning in the union graphs
>>> The assumption is that all data is persisted in a given TDB2 database.
>>> The higher purpose is to use reasoning over ontologies when querying over instance data located in named graphs. I think this is conceptually what is discussed here:
>>> Subject Re: Ontologies and model data
>>> Date      Mon, 03 Jun 2013 20:26:18 GMT
>>> http://mail-archives.apache.org/mod_mbox/jena-users/201306.mbox/%3C51ACFBEA.2010206@apache.org%3E
>>> The setup above is what comes across as a way of achieving this higher goal, but I am not sure either that it is the best setup. Everything I have tried allows me either to perform inferences in <urn:x-arq:UnionGraph> or to persist triples in named graphs in a TDB2 database, but not both.
>>> My problem is:
>>> I cannot find a correct configuration that allows me to persist named graphs added to a TDB2 dataset and have inference at the same time.
>>> Some attempts are documented below.
>>> I have done most of my tests in apache-jena-fuseki-3.8.0; my last tries were in apache-jena-fuseki-3.10.0.
>>> Would somebody be in a position to advise or provide a minimal working example?
>>> With many thanks and best regards,
>>> Pierre
>>> ###
>>> Attempt 1:
>>> # Example of a data service with SPARQL query and update on an
>>> # inference model.  Data is taken from TDB.
>>> https://github.com/apache/jena/blob/master/jena-fuseki2/examples/service-inference-2.ttl
>>> which I have adapted to use TDB2 (which basically meant updating the namespace and the references to classes).
>>> Outcome: This allows me to load data into named graphs and to perform inference. However, it does not persist upon restarting the server.
>>
>> Did you enable tdb:unionDefaultGraph as well, load the named graph and then access the union?
>>
>> What isn't persisted? Inferences or base data?
>>> ###
>>> Attempt 2:
>>> Define two services pointing to different graphs as advised in
>>> Subject Re: Persisting named graphs in TDB with jena-fuseki
>>> Date      Thu, 10 Mar 2016 14:47:10 GMT
>>> http://mail-archives.apache.org/mod_mbox/jena-users/201603.mbox/%3CD30738C0.64A6F%25rvesse@dotnetrdf.org%3E
>>> Outcome: I could only manage defining two independent services on two independent datasets and couldn't figure out how to link the TDB2 and the inference graphs.
>>> A reference to https://issues.apache.org/jira/browse/JENA-1122
>>> is made in
>>> http://mail-archives.apache.org/mod_mbox/jena-users/201603.mbox/%3C56EC23E3.9010000@apache.org%3E
>>> But I do not understand what is said here.
>>> ###
>>> Attempt 3:
>>> I have found a config that seemed to make the link that was needed between graphs in Attempt 2 in:
>>> https://stackoverflow.com/questions/47568703/named-graphs-v-default-graph-behaviour-in-apache-jena-fuseki
>>> which I have adapted to TDB2.
>>> However, this gives me:
>>> ERROR Exception in initialization: caught: Not in a transaction
>>> This also seems to have been a cut-off point in the thread mentioned above
>>> Subject Re: Configuring fuseki with TDB2 and OWL reasoning
>>> Date      Tue, 20 Feb 2018 10:55:13 GMT
>>> http://mail-archives.apache.org/mod_mbox/jena-users/201802.mbox/%3C6d37a8c7-aca1-4c1c-0cd3-fa041ecc07eb%40apache.org%3E
>>> This message refers to https://issues.apache.org/jira/browse/JENA-1492
>>> which I do not understand but that comes across as having been resolved. Indeed, I tested the config attached to it (probably a minimal example for that issue) and it worked, but I don't think this is the config I need.
>>> #### ATTEMPT 3 Config
>>> @prefix :      <http://base/#> .
>>> @prefix tdb2:  <http://jena.apache.org/2016/tdb#> .
>>> @prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
>>> @prefix ja:    <http://jena.hpl.hp.com/2005/11/Assembler#> .
>>> @prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
>>> @prefix fuseki: <http://jena.apache.org/fuseki#> .
>>> # TDB2
>>> #tdb2:DatasetTDB2 rdfs:subClassOf  ja:RDFDataset .
>>> #tdb2:GraphTDB    rdfs:subClassOf  ja:Model .
>>> # Service 1: Dataset endpoint (no reasoning)
>>> :dataService a fuseki:Service ;
>>>    fuseki:name           "tdbEnpoint" ;
>>>    fuseki:serviceQuery   "sparql", "query" ;
>>>    fuseki:serviceUpdate  "update" ;
>>>    fuseki:dataset        :tdbDataset ;
>>> .
>>> # Service 2: Reasoning endpoint
>>> :reasoningService a fuseki:Service ;
>>>    fuseki:dataset                 :infDataset ;
>>>    fuseki:name                    "reasoningEndpoint" ;
>>>    fuseki:serviceQuery            "query", "sparql" ;
>>>    fuseki:serviceReadGraphStore   "get" ;
>>> .
>>> # Inference dataset
>>> :infDataset rdf:type ja:RDFDataset ;
>>>              ja:defaultGraph :infModel ;
>>> .
>>> # Inference model
>>> :infModel a ja:InfModel ;
>>>             ja:baseModel :g ;
>>>             ja:reasoner [
>>>                ja:reasonerURL <http://jena.hpl.hp.com/2003/OWLFBRuleReasoner> ;
>>>             ] ;
>>> .
>>> # Intermediate graph referencing the default union graph
>>> :g rdf:type tdb2:GraphTDB ;
>>>     tdb2:dataset :tdbDataset ;
>>>     tdb2:graphName <urn:x-arq:UnionGraph> ;
>>> .
>>> # The location of the TDB dataset
>>> :tdbDataset rdf:type tdb2:DatasetTDB2 ;
>>>     tdb2:location  "C:\\dev\\apache-jena-fuseki-3.8.0\\run/databases/weird7" ;
>>>     tdb2:unionDefaultGraph true ;
>>> .
>>> #### SERVER
>>> ...
>>> [2019-01-31 13:54:58] Config     INFO  Load configuration: file:///C:/dev/apache-jena-fuseki-3.10.0/run/configuration/weird7.ttl
>>> [2019-01-31 13:54:58] Server     ERROR Exception in initialization: caught: Not in a transaction
>>> [2019-01-31 13:54:58] WebAppContext WARN  Failed startup of context o.e.j.w.WebAppContext@6edc4161{Apache Jena Fuseki Server,/,file:///C:/dev/apache-jena-fuseki-3.10.0/webapp/,UNAVAILABLE}
>>> org.apache.jena.assembler.exceptions.AssemblerException: caught: Not in a transaction
>>>    doing:
>>>      root: file:///C:/dev/apache-jena-fuseki-3.10.0/run/configuration/weird7.ttl#model_inf with type: http://jena.hpl.hp.com/2005/11/Assembler#InfModel assembler class: class org.apache.jena.assembler.assemblers.InfModelAssembler
>>>      root: http://base/#dataset with type: http://jena.hpl.hp.com/2005/11/Assembler#RDFDataset assembler class: class org.apache.jena.sparql.core.assembler.DatasetAssembler
>>>          at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.openBySpecificType(AssemblerGroup.java:165)
>>>          at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.open(AssemblerGroup.java:144)
>>>         at org.apache.jena.assembler.assemblers.AssemblerGroup$ExpandingAssemblerGroup.open(AssemblerGroup.java:93)
>>>          at org.apache.jena.assembler.assemblers.AssemblerBase.open(AssemblerBase.java:39)
>>>          at org.apache.jena.assembler.assemblers.AssemblerBase.open(AssemblerBase.java:35)
>>>          at org.apache.jena.assembler.assemblers.AssemblerGroup.openModel(AssemblerGroup.java:47)
>>>          at org.apache.jena.sparql.core.assembler.DatasetAssembler.createDataset(DatasetAssembler.java:56)
>>>          at org.apache.jena.sparql.core.assembler.DatasetAssembler.open(DatasetAssembler.java:43)
>>>          at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.openBySpecificType(AssemblerGroup.java:157)
>>>          at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.open(AssemblerGroup.java:144)
>>>          at org.apache.jena.assembler.assemblers.AssemblerGroup$ExpandingAssemblerGroup.open(AssemblerGroup.java:93)
>>>          at org.apache.jena.assembler.assemblers.AssemblerBase.open(AssemblerBase.java:39)
>>>          at org.apache.jena.assembler.assemblers.AssemblerBase.open(AssemblerBase.java:35)
>>>          at org.apache.jena.fuseki.build.FusekiConfig.getDataset(FusekiConfig.java:345)
>>>          at org.apache.jena.fuseki.build.FusekiConfig.buildDataService(FusekiConfig.java:299)
>>>          at org.apache.jena.fuseki.build.FusekiConfig.buildDataAccessPoint(FusekiConfig.java:289)
>>>          at org.apache.jena.fuseki.build.FusekiConfig.readConfiguration(FusekiConfig.java:272)
>>>          at org.apache.jena.fuseki.build.FusekiConfig.readConfigurationDirectory(FusekiConfig.java:251)
>>>          at org.apache.jena.fuseki.webapp.FusekiWebapp.initializeDataAccessPoints(FusekiWebapp.java:226)
>>>          at org.apache.jena.fuseki.webapp.FusekiServerListener.serverInitialization(FusekiServerListener.java:98)
>>>          at org.apache.jena.fuseki.webapp.FusekiServerListener.contextInitialized(FusekiServerListener.java:56)
>>>          at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:952)
>>>          at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:558)
>>>          at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:917)
>>>          at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:370)
>>>          at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1497)
>>>          at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1459)
>>>          at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:847)
>>>          at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:287)
>>>          at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:545)
>>>          at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>>>          at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
>>>          at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
>>>          at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
>>>          at org.eclipse.jetty.server.handler.gzip.GzipHandler.doStart(GzipHandler.java:410)
>>>          at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>>>          at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
>>>          at org.eclipse.jetty.server.Server.start(Server.java:416)
>>>          at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
>>>          at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
>>>          at org.eclipse.jetty.server.Server.doStart(Server.java:383)
>>>          at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>>>          at org.apache.jena.fuseki.cmd.JettyFusekiWebapp.start(JettyFusekiWebapp.java:138)
>>>          at org.apache.jena.fuseki.cmd.FusekiCmd.runFuseki(FusekiCmd.java:372)
>>>          at org.apache.jena.fuseki.cmd.FusekiCmd$FusekiCmdInner.exec(FusekiCmd.java:356)
>>>          at jena.cmd.CmdMain.mainMethod(CmdMain.java:93)
>>>          at jena.cmd.CmdMain.mainRun(CmdMain.java:58)
>>>          at jena.cmd.CmdMain.mainRun(CmdMain.java:45)
>>>          at org.apache.jena.fuseki.cmd.FusekiCmd$FusekiCmdInner.innerMain(FusekiCmd.java:104)
>>>          at org.apache.jena.fuseki.cmd.FusekiCmd.main(FusekiCmd.java:67)
>>> Caused by: org.apache.jena.dboe.transaction.txn.TransactionException: Not in a transaction
>>>          at org.apache.jena.dboe.transaction.txn.TransactionalComponentLifecycle.checkTxn(TransactionalComponentLifecycle.java:417)
>>>          at org.apache.jena.dboe.trans.bplustree.BPlusTree.getRootRead(BPlusTree.java:159)
>>>          at org.apache.jena.dboe.trans.bplustree.BPlusTree.find(BPlusTree.java:239)
>>>          at org.apache.jena.tdb2.store.nodetable.NodeTableNative.accessIndex(NodeTableNative.java:133)
>>>          at org.apache.jena.tdb2.store.nodetable.NodeTableNative._idForNode(NodeTableNative.java:118)
>>>          at org.apache.jena.tdb2.store.nodetable.NodeTableNative.getNodeIdForNode(NodeTableNative.java:57)
>>>          at org.apache.jena.tdb2.store.nodetable.NodeTableCache._idForNode(NodeTableCache.java:222)
>>>          at org.apache.jena.tdb2.store.nodetable.NodeTableCache.getNodeIdForNode(NodeTableCache.java:114)
>>>          at org.apache.jena.tdb2.store.nodetable.NodeTableWrapper.getNodeIdForNode(NodeTableWrapper.java:47)
>>>          at org.apache.jena.tdb2.store.nodetable.NodeTableInline.getNodeIdForNode(NodeTableInline.java:58)
>>>          at org.apache.jena.tdb2.store.nodetupletable.NodeTupleTableConcrete.idForNode(NodeTupleTableConcrete.java:182)
>>>          at org.apache.jena.tdb2.store.nodetupletable.NodeTupleTableConcrete.findAsNodeIds(NodeTupleTableConcrete.java:136)
>>>          at org.apache.jena.tdb2.store.nodetupletable.NodeTupleTableConcrete.find(NodeTupleTableConcrete.java:114)
>>>          at org.apache.jena.tdb2.store.DatasetPrefixesTDB.readPrefixMap(DatasetPrefixesTDB.java:111)
>>>          at org.apache.jena.sparql.graph.GraphPrefixesProjection.getNsPrefixMap(GraphPrefixesProjection.java:94)
>>>          at org.apache.jena.tdb2.store.GraphViewSwitchable$PrefixMappingImplTDB2.getNsPrefixMap(GraphViewSwitchable.java:159)
>>>          at org.apache.jena.shared.impl.PrefixMappingImpl.setNsPrefixes(PrefixMappingImpl.java:138)
>>>          at org.apache.jena.tdb2.store.GraphViewSwitchable.createPrefixMapping(GraphViewSwitchable.java:68)
>>>          at org.apache.jena.graph.impl.GraphBase.getPrefixMapping(GraphBase.java:165)
>>>          at org.apache.jena.reasoner.BaseInfGraph.getPrefixMapping(BaseInfGraph.java:55)
>>>          at org.apache.jena.rdf.model.impl.ModelCom.getPrefixMapping(ModelCom.java:1018)
>>>          at org.apache.jena.rdf.model.impl.ModelCom.setNsPrefixes(ModelCom.java:1055)
>>>          at org.apache.jena.assembler.assemblers.ModelAssembler.open(ModelAssembler.java:45)
>>>          at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.openBySpecificType(AssemblerGroup.java:157)
>>>          ... 49 more
>>> [2019-01-31 13:54:58] Server     INFO  Started 2019/01/31 13:54:58 GMT on port 3030
> 

Re: Fuskei2 configuration, TDB2 data, Inferencing with ontologies, Persisting named graphs upon server restart

Posted by ajs6f <aj...@apache.org>.
> 2/ It is not possible, in an assembler/Fuseki configuration file, to create a new named graph and have another inference graph put around that new graph at runtime.

Just to pull on one of these threads, my understanding is that this is essentially because the assembler system works only by names. IOW, there's no such thing as a "variable", and a blank node doesn't function as a slot (as it might in a SPARQL query), just as a nameless node. So you have to know the specific name of any specific named graph to which you want to refer. A named graph that doesn't yet exist and may have any name at all when it does obviously doesn't fit into that.

Andy and other more knowledgeable people: is that correct?

If so, we might want to think about changing that as a _very long-term_ goal. Having the ability to refer to resources in assembler RDF by triple pattern or property matching or by some other means like that could be astoundingly powerful for a lot of these use cases that we see involving complex setups with multiple sources of data and inference. Yes, I realize that it would also be pretty gosh dang difficult (how do you know when to apply or re-apply the filtering/matching?) and might not be worth doing on those grounds alone, but I might file a ticket just to not forget the possibility.

ajs6f

> On Jan 31, 2019, at 12:41 PM, Andy Seaborne <an...@apache.org> wrote:
> 
> Hi Pierre,
> 
> A few points to start with:
> 
> 1/ For my general understanding of how we might take the inference provision further in Jena, what inferences are you particularly interested in?
> 
> 2/ It is not possible, in an assembler/Fuseki configuration file, to create a new named graph and have another inference graph put around that new graph at runtime.
> 
> 3/ The union graph can't be updated.  It's a read-only union.
> 
> 4/ "ERROR Exception in initialization: caught: Not in a transaction"
> Do you have the stacktrace for this?
> 
> Inline questions about attempt 1.
> 
>    Andy
> 
> 
> 
> On 31/01/2019 15:00, Pierre Grenon wrote:
>> Hello,
>> I am trying to:
>> Set up Fuseki2 with inference and a TDB2 dataset in which named graphs created with SPARQL Update can be persisted.
>> This is in order to:
>> - maintain a set of ontologies in a named graph
>> - maintain datasets in a number of named graphs
>> - perform reasoning in the union graphs
>> The assumption is that all data is persisted in a given TDB2 database.
>> The higher purpose is to use reasoning over ontologies when querying over instance data located in named graphs. I think this is conceptually what is discussed here:
>> Subject Re: Ontologies and model data
>> Date      Mon, 03 Jun 2013 20:26:18 GMT
>> http://mail-archives.apache.org/mod_mbox/jena-users/201306.mbox/%3C51ACFBEA.2010206@apache.org%3E
>> The setup above is what comes across as a way of achieving this higher goal, but I am not sure either that it is the best setup. Everything I have tried allows me either to perform inferences in <urn:x-arq:UnionGraph> or to persist triples in named graphs in a TDB2 database, but not both.
>> My problem is:
>> I cannot find a correct configuration that allows me to persist named graphs added to a TDB2 dataset and have inference at the same time.
>> Some attempts are documented below.
>> I have done most of my tests in apache-jena-fuseki-3.8.0; my last tries were in apache-jena-fuseki-3.10.0.
>> Would somebody be in a position to advise or provide a minimal working example?
>> With many thanks and best regards,
>> Pierre
>> ###
>> Attempt 1:
>> # Example of a data service with SPARQL query and update on an
>> # inference model.  Data is taken from TDB.
>> https://github.com/apache/jena/blob/master/jena-fuseki2/examples/service-inference-2.ttl
>> which I have adapted to use TDB2 (which basically meant updating the namespace and the references to classes).
>> Outcome: This allows me to load data into named graphs and to perform inference. However, it does not persist upon restarting the server.
> 
> Did you enable tdb:unionDefaultGraph as well, load the named graph and then access the union?
> 
> What isn't persisted? Inferences or base data?
>> ###
>> Attempt 2:
>> Define two services pointing to different graphs as advised in
>> Subject Re: Persisting named graphs in TDB with jena-fuseki
>> Date      Thu, 10 Mar 2016 14:47:10 GMT
>> http://mail-archives.apache.org/mod_mbox/jena-users/201603.mbox/%3CD30738C0.64A6F%25rvesse@dotnetrdf.org%3E
>> Outcome: I could only manage defining two independent services on two independent datasets and couldn't figure out how to link the TDB2 and the inference graphs.
>> A reference to https://issues.apache.org/jira/browse/JENA-1122
>> is made in
>> http://mail-archives.apache.org/mod_mbox/jena-users/201603.mbox/%3C56EC23E3.9010000@apache.org%3E
>> But I do not understand what is said here.
>> ###
>> Attempt 3:
>> I have found a config that seemed to make the link that was needed between graphs in Attempt 2 in:
>> https://stackoverflow.com/questions/47568703/named-graphs-v-default-graph-behaviour-in-apache-jena-fuseki
>> which I have adapted to TDB2.
>> However, this gives me:
>> ERROR Exception in initialization: caught: Not in a transaction
>> This also seems to have been a cut-off point in the thread mentioned above
>> Subject Re: Configuring fuseki with TDB2 and OWL reasoning
>> Date      Tue, 20 Feb 2018 10:55:13 GMT
>> http://mail-archives.apache.org/mod_mbox/jena-users/201802.mbox/%3C6d37a8c7-aca1-4c1c-0cd3-fa041ecc07eb%40apache.org%3E
>> This message refers to https://issues.apache.org/jira/browse/JENA-1492
>> which I do not understand but that comes across as having been resolved. Indeed, I tested the config attached to it (probably a minimal example for that issue) and it worked, but I don't think this is the config I need.
>> #### ATTEMPT 3 Config
>> @prefix :      <http://base/#> .
>> @prefix tdb2:  <http://jena.apache.org/2016/tdb#> .
>> @prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
>> @prefix ja:    <http://jena.hpl.hp.com/2005/11/Assembler#> .
>> @prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
>> @prefix fuseki: <http://jena.apache.org/fuseki#> .
>> # TDB2
>> #tdb2:DatasetTDB2 rdfs:subClassOf  ja:RDFDataset .
>> #tdb2:GraphTDB    rdfs:subClassOf  ja:Model .
>> # Service 1: Dataset endpoint (no reasoning)
>> :dataService a fuseki:Service ;
>>   fuseki:name           "tdbEnpoint" ;
>>   fuseki:serviceQuery   "sparql", "query" ;
>>   fuseki:serviceUpdate  "update" ;
>>   fuseki:dataset        :tdbDataset ;
>> .
>> # Service 2: Reasoning endpoint
>> :reasoningService a fuseki:Service ;
>>   fuseki:dataset                 :infDataset ;
>>   fuseki:name                    "reasoningEndpoint" ;
>>   fuseki:serviceQuery            "query", "sparql" ;
>>   fuseki:serviceReadGraphStore   "get" ;
>> .
>> # Inference dataset
>> :infDataset rdf:type ja:RDFDataset ;
>>             ja:defaultGraph :infModel ;
>> .
>> # Inference model
>> :infModel a ja:InfModel ;
>>            ja:baseModel :g ;
>>            ja:reasoner [
>>               ja:reasonerURL <http://jena.hpl.hp.com/2003/OWLFBRuleReasoner> ;
>>            ] ;
>> .
>> # Intermediate graph referencing the default union graph
>> :g rdf:type tdb2:GraphTDB ;
>>    tdb2:dataset :tdbDataset ;
>>    tdb2:graphName <urn:x-arq:UnionGraph> ;
>> .
>> # The location of the TDB dataset
>> :tdbDataset rdf:type tdb2:DatasetTDB2 ;
>>    tdb2:location  "C:\\dev\\apache-jena-fuseki-3.8.0\\run/databases/weird7" ;
>>    tdb2:unionDefaultGraph true ;
>> .
>> #### SERVER
>> ...
>> [2019-01-31 13:54:58] Config     INFO  Load configuration: file:///C:/dev/apache-jena-fuseki-3.10.0/run/configuration/weird7.ttl
>> [2019-01-31 13:54:58] Server     ERROR Exception in initialization: caught: Not in a transaction
>> [2019-01-31 13:54:58] WebAppContext WARN  Failed startup of context o.e.j.w.WebAppContext@6edc4161{Apache Jena Fuseki Server,/,file:///C:/dev/apache-jena-fuseki-3.10.0/webapp/,UNAVAILABLE}
>> org.apache.jena.assembler.exceptions.AssemblerException: caught: Not in a transaction
>>   doing:
>>     root: file:///C:/dev/apache-jena-fuseki-3.10.0/run/configuration/weird7.ttl#model_inf with type: http://jena.hpl.hp.com/2005/11/Assembler#InfModel assembler class: class org.apache.jena.assembler.assemblers.InfModelAssembler
>>     root: http://base/#dataset with type: http://jena.hpl.hp.com/2005/11/Assembler#RDFDataset assembler class: class org.apache.jena.sparql.core.assembler.DatasetAssembler
>>         at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.openBySpecificType(AssemblerGroup.java:165)
>>         at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.open(AssemblerGroup.java:144)
>>        at org.apache.jena.assembler.assemblers.AssemblerGroup$ExpandingAssemblerGroup.open(AssemblerGroup.java:93)
>>         at org.apache.jena.assembler.assemblers.AssemblerBase.open(AssemblerBase.java:39)
>>         at org.apache.jena.assembler.assemblers.AssemblerBase.open(AssemblerBase.java:35)
>>         at org.apache.jena.assembler.assemblers.AssemblerGroup.openModel(AssemblerGroup.java:47)
>>         at org.apache.jena.sparql.core.assembler.DatasetAssembler.createDataset(DatasetAssembler.java:56)
>>         at org.apache.jena.sparql.core.assembler.DatasetAssembler.open(DatasetAssembler.java:43)
>>         at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.openBySpecificType(AssemblerGroup.java:157)
>>         at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.open(AssemblerGroup.java:144)
>>         at org.apache.jena.assembler.assemblers.AssemblerGroup$ExpandingAssemblerGroup.open(AssemblerGroup.java:93)
>>         at org.apache.jena.assembler.assemblers.AssemblerBase.open(AssemblerBase.java:39)
>>         at org.apache.jena.assembler.assemblers.AssemblerBase.open(AssemblerBase.java:35)
>>         at org.apache.jena.fuseki.build.FusekiConfig.getDataset(FusekiConfig.java:345)
>>         at org.apache.jena.fuseki.build.FusekiConfig.buildDataService(FusekiConfig.java:299)
>>         at org.apache.jena.fuseki.build.FusekiConfig.buildDataAccessPoint(FusekiConfig.java:289)
>>         at org.apache.jena.fuseki.build.FusekiConfig.readConfiguration(FusekiConfig.java:272)
>>         at org.apache.jena.fuseki.build.FusekiConfig.readConfigurationDirectory(FusekiConfig.java:251)
>>         at org.apache.jena.fuseki.webapp.FusekiWebapp.initializeDataAccessPoints(FusekiWebapp.java:226)
>>         at org.apache.jena.fuseki.webapp.FusekiServerListener.serverInitialization(FusekiServerListener.java:98)
>>         at org.apache.jena.fuseki.webapp.FusekiServerListener.contextInitialized(FusekiServerListener.java:56)
>>         at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:952)
>>         at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:558)
>>         at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:917)
>>         at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:370)
>>         at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1497)
>>         at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1459)
>>         at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:847)
>>         at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:287)
>>         at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:545)
>>         at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>>         at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
>>         at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
>>         at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
>>         at org.eclipse.jetty.server.handler.gzip.GzipHandler.doStart(GzipHandler.java:410)
>>         at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>>         at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
>>         at org.eclipse.jetty.server.Server.start(Server.java:416)
>>         at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
>>         at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
>>         at org.eclipse.jetty.server.Server.doStart(Server.java:383)
>>         at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>>         at org.apache.jena.fuseki.cmd.JettyFusekiWebapp.start(JettyFusekiWebapp.java:138)
>>         at org.apache.jena.fuseki.cmd.FusekiCmd.runFuseki(FusekiCmd.java:372)
>>         at org.apache.jena.fuseki.cmd.FusekiCmd$FusekiCmdInner.exec(FusekiCmd.java:356)
>>         at jena.cmd.CmdMain.mainMethod(CmdMain.java:93)
>>         at jena.cmd.CmdMain.mainRun(CmdMain.java:58)
>>         at jena.cmd.CmdMain.mainRun(CmdMain.java:45)
>>         at org.apache.jena.fuseki.cmd.FusekiCmd$FusekiCmdInner.innerMain(FusekiCmd.java:104)
>>         at org.apache.jena.fuseki.cmd.FusekiCmd.main(FusekiCmd.java:67)
>> Caused by: org.apache.jena.dboe.transaction.txn.TransactionException: Not in a transaction
>>         at org.apache.jena.dboe.transaction.txn.TransactionalComponentLifecycle.checkTxn(TransactionalComponentLifecycle.java:417)
>>         at org.apache.jena.dboe.trans.bplustree.BPlusTree.getRootRead(BPlusTree.java:159)
>>         at org.apache.jena.dboe.trans.bplustree.BPlusTree.find(BPlusTree.java:239)
>>         at org.apache.jena.tdb2.store.nodetable.NodeTableNative.accessIndex(NodeTableNative.java:133)
>>         at org.apache.jena.tdb2.store.nodetable.NodeTableNative._idForNode(NodeTableNative.java:118)
>>         at org.apache.jena.tdb2.store.nodetable.NodeTableNative.getNodeIdForNode(NodeTableNative.java:57)
>>         at org.apache.jena.tdb2.store.nodetable.NodeTableCache._idForNode(NodeTableCache.java:222)
>>         at org.apache.jena.tdb2.store.nodetable.NodeTableCache.getNodeIdForNode(NodeTableCache.java:114)
>>         at org.apache.jena.tdb2.store.nodetable.NodeTableWrapper.getNodeIdForNode(NodeTableWrapper.java:47)
>>         at org.apache.jena.tdb2.store.nodetable.NodeTableInline.getNodeIdForNode(NodeTableInline.java:58)
>>         at org.apache.jena.tdb2.store.nodetupletable.NodeTupleTableConcrete.idForNode(NodeTupleTableConcrete.java:182)
>>         at org.apache.jena.tdb2.store.nodetupletable.NodeTupleTableConcrete.findAsNodeIds(NodeTupleTableConcrete.java:136)
>>         at org.apache.jena.tdb2.store.nodetupletable.NodeTupleTableConcrete.find(NodeTupleTableConcrete.java:114)
>>         at org.apache.jena.tdb2.store.DatasetPrefixesTDB.readPrefixMap(DatasetPrefixesTDB.java:111)
>>         at org.apache.jena.sparql.graph.GraphPrefixesProjection.getNsPrefixMap(GraphPrefixesProjection.java:94)
>>         at org.apache.jena.tdb2.store.GraphViewSwitchable$PrefixMappingImplTDB2.getNsPrefixMap(GraphViewSwitchable.java:159)
>>         at org.apache.jena.shared.impl.PrefixMappingImpl.setNsPrefixes(PrefixMappingImpl.java:138)
>>         at org.apache.jena.tdb2.store.GraphViewSwitchable.createPrefixMapping(GraphViewSwitchable.java:68)
>>         at org.apache.jena.graph.impl.GraphBase.getPrefixMapping(GraphBase.java:165)
>>         at org.apache.jena.reasoner.BaseInfGraph.getPrefixMapping(BaseInfGraph.java:55)
>>         at org.apache.jena.rdf.model.impl.ModelCom.getPrefixMapping(ModelCom.java:1018)
>>         at org.apache.jena.rdf.model.impl.ModelCom.setNsPrefixes(ModelCom.java:1055)
>>         at org.apache.jena.assembler.assemblers.ModelAssembler.open(ModelAssembler.java:45)
>>         at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.openBySpecificType(AssemblerGroup.java:157)
>>         ... 49 more
>> [2019-01-31 13:54:58] Server     INFO  Started 2019/01/31 13:54:58 GMT on port 3030
>> THIS E-MAIL MAY CONTAIN CONFIDENTIAL AND/OR PRIVILEGED INFORMATION.
>> IF YOU ARE NOT THE INTENDED RECIPIENT (OR HAVE RECEIVED THIS E-MAIL
>> IN ERROR) PLEASE NOTIFY THE SENDER IMMEDIATELY AND DESTROY THIS
>> E-MAIL. ANY UNAUTHORISED COPYING, DISCLOSURE OR DISTRIBUTION OF THE
>> MATERIAL IN THIS E-MAIL IS STRICTLY FORBIDDEN.
>> IN ACCORDANCE WITH MIFID II RULES ON INDUCEMENTS, THE FIRM'S EMPLOYEES
>> MAY ATTEND CORPORATE ACCESS EVENTS (DEFINED IN THE FCA HANDBOOK AS
>> "THE SERVICE OF ARRANGING OR BRINGING ABOUT CONTACT BETWEEN AN INVESTMENT
>> MANAGER AND AN ISSUER OR POTENTIAL ISSUER"). DURING SUCH MEETINGS, THE
>> FIRM'S EMPLOYEES MAY ON NO ACCOUNT BE IN RECEIPT OF INSIDE INFORMATION
>> (AS DESCRIBED IN ARTICLE 7 OF THE MARKET ABUSE REGULATION (EU) NO 596/2014).
>> (https://www.handbook.fca.org.uk/handbook/glossary/G3532m.html)
>> COMPANIES WHO DISCLOSE INSIDE INFORMATION ARE IN BREACH OF REGULATION
>> AND MUST IMMEDIATELY AND CLEARLY NOTIFY ALL ATTENDEES. FOR INFORMATION
>> ON THE FIRM'S POLICY IN RELATION TO ITS PARTICIPATION IN MARKET SOUNDINGS,
>> PLEASE SEE https://www.horizon-asset.co.uk/market-soundings/.
>> HORIZON ASSET LLP IS AUTHORISED AND REGULATED
>> BY THE FINANCIAL CONDUCT AUTHORITY.


Re: Fuskei2 configuration, TDB2 data, Inferencing with ontologies, Persisting named graphs upon server restart

Posted by Andy Seaborne <an...@apache.org>.
>> 4/ "ERROR Exception in initialization: caught: Not in tramsaction"

It's a bug : now recorded as JENA-1663.

     Andy

Re: Fuskei2 configuration, TDB2 data, Inferencing with ontologies, Persisting named graphs upon server restart

Posted by Andy Seaborne <an...@apache.org>.

On 31/01/2019 17:41, Andy Seaborne wrote:
> Hi Pierre,
> 
> A few points to start with:
> 
> 1/ For my general understanding of how we might take the inference 
> provision further in jena, what inferences are you particularly 
> interested in?
> 
> 2/ It is not possible in an assembler/Fuseki configuration file, to 
> create a new named graph and have another inference graph put around 
> that new graph at runtime.
> 
> 3/ The union graph can't be updated.  It's a read-only union.
> 
> 4/ "ERROR Exception in initialization: caught: Not in tramsaction"
> Do you have the stacktrace for this?

OK found it - different spelling.

> 
> Inline questions about attempt 1.
> 
>      Andy
> 
> 
> 
> On 31/01/2019 15:00, Pierre Grenon wrote:
>> <...>
>> ###
>> Attempt 1:
>>
>> # Example of a data service with SPARQL query and update on an
>> # inference model.  Data is taken from TDB.
>> https://github.com/apache/jena/blob/master/jena-fuseki2/examples/service-inference-2.ttl 
>>
>> which I have adapted to use TDB2 (which basically meant updating the 
>> namespace and the references to classes).
>>
>> Outcome: This allows me to load data into named graphs and to perform 
>> inference. However, it does not persist upon restarting the server.
> 
> Did you enable tdb:unionDefaultGraph as well, load the named graph and 
> then access the union?
> 
> What isn't persisted? Inferences or base data?
>>
>>
>> ###
>> Attempt 2:
>>
>> Define two services pointing to different graphs as advised in
>> Subject Re: Persisting named graphs in TDB with jena-fuseki
>> Date      Thu, 10 Mar 2016 14:47:10 GMT
>> http://mail-archives.apache.org/mod_mbox/jena-users/201603.mbox/%3CD30738C0.64A6F%25rvesse@dotnetrdf.org%3E 
>>
>>
>> Outcome: I could only manage defining two independent services on two 
>> independent datasets and couldn't figure out how to link the TDB2 and 
>> the Inference graphs.
>>
>> A reference to https://issues.apache.org/jira/browse/JENA-1122
>> is made in
>> http://mail-archives.apache.org/mod_mbox/jena-users/201603.mbox/%3C56EC23E3.9010000@apache.org%3E 
>>
>> But I do not understand what is said here.
>>
>>
>> ###
>> Attempt 3:
>>
>> I have found a config that seemed to make the link that was needed 
>> between graphs in Attempt 2 in:
>> https://stackoverflow.com/questions/47568703/named-graphs-v-default-graph-behaviour-in-apache-jena-fuseki 
>>
>> which I have adapted to TDB2.
>>
>> However, this gives me:
>> ERROR Exception in initialization: caught: Not in tramsaction
>>
>> This also seems to have been a cut off point in the thread mentioned 
>> above
>> Subject Re: Configuring fuseki with TDB2 and OWL reasoning
>> Date      Tue, 20 Feb 2018 10:55:13 GMT
>> http://mail-archives.apache.org/mod_mbox/jena-users/201802.mbox/%3C6d37a8c7-aca1-4c1c-0cd3-fa041ecc07eb%40apache.org%3E 
>>
>>
>> This message refers to https://issues.apache.org/jira/browse/JENA-1492
>> which I do not understand but that comes across as having been 
>> resolved. Indeed, I tested the config attached to it (probably as 
>> minimal example for that issue) and it worked, but I don't think this 
>> is the config I need.
>>
>>
>>
>> #### ATTEMPT 3 Config
>>
>> @prefix :      <http://base/#> .
>> @prefix tdb2:  <http://jena.apache.org/2016/tdb#> .
>> @prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
>> @prefix ja:    <http://jena.hpl.hp.com/2005/11/Assembler#> .
>> @prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
>> @prefix fuseki: <http://jena.apache.org/fuseki#> .
>>
>> # TDB2
>> #tdb2:DatasetTDB2 rdfs:subClassOf  ja:RDFDataset .
>> #tdb2:GraphTDB    rdfs:subClassOf  ja:Model .
>>
>> # Service 1: Dataset endpoint (no reasoning)
>> :dataService a fuseki:Service ;
>>    fuseki:name           "tdbEnpoint" ;
>>    fuseki:serviceQuery   "sparql", "query" ;
>>    fuseki:serviceUpdate  "update" ;
>>    fuseki:dataset        :tdbDataset ;
>> .
>>
>> # Service 2: Reasoning endpoint
>> :reasoningService a fuseki:Service ;
>>    fuseki:dataset                 :infDataset ;
>>    fuseki:name                    "reasoningEndpoint" ;
>>    fuseki:serviceQuery            "query", "sparql" ;
>>    fuseki:serviceReadGraphStore   "get" ;
>> .
>>
>> # Inference dataset
>> :infDataset rdf:type ja:RDFDataset ;
>>              ja:defaultGraph :infModel ;
>> .
>>
>> # Inference model
>> :infModel a ja:InfModel ;
>>             ja:baseModel :g ;
>>
>>             ja:reasoner [
>>                ja:reasonerURL 
>> <http://jena.hpl.hp.com/2003/OWLFBRuleReasoner> ;
>>             ] ;
>> .
>>
>> # Intermediate graph referencing the default union graph
>> :g rdf:type tdb2:GraphTDB ;
>>     tdb2:dataset :tdbDataset ;
>>     tdb2:graphName <urn:x-arq:UnionGraph> ;
>> .
>>
>> # The location of the TDB dataset
>> :tdbDataset rdf:type tdb2:DatasetTDB2 ;
>>     tdb2:location  
>> "C:\\dev\\apache-jena-fuseki-3.8.0\\run/databases/weird7" ;
>>     tdb2:unionDefaultGraph true ;
>> .
>>
>>
>> #### SERVER
>> ...
>> [2019-01-31 13:54:58] Config     INFO  Load configuration: 
>> file:///C:/dev/apache-jena-fuseki-3.10.0/run/configuration/weird7.ttl
>> [2019-01-31 13:54:58] Server     ERROR Exception in initialization: 
>> caught: Not in a transaction
>> [2019-01-31 13:54:58] WebAppContext WARN  Failed startup of context 
>> o.e.j.w.WebAppContext@6edc4161{Apache Jena Fuseki 
>> Server,/,file:///C:/dev/apache-jena-fuseki-3.10.0/webapp/,UNAVAILABLE}
>> org.apache.jena.assembler.exceptions.AssemblerException: caught: Not 
>> in a transaction
>>    doing:
>>      root: 
>> file:///C:/dev/apache-jena-fuseki-3.10.0/run/configuration/weird7.ttl#model_inf 
>> with type: http://jena.hpl.hp.com/2005/11/Assembler#InfModel assembler 
>> class: class org.apache.jena.assembler.assemblers.InfModelAssembler
>>      root: http://base/#dataset with type: 
>> http://jena.hpl.hp.com/2005/11/Assembler#RDFDataset assembler class: 
>> class org.apache.jena.sparql.core.assembler.DatasetAssembler
>> <...>
>> [2019-01-31 13:54:58] Server     INFO  Started 2019/01/31 13:54:58 GMT 
>> on port 3030
>>

RE: Fuskei2 configuration, TDB2 data, Inferencing with ontologies, Persisting named graphs upon server restart

Posted by Pierre Grenon <pg...@horizon-asset.co.uk>.
Hi Andy,

picking up the prefatory questions first, in line below.

> Hi Pierre,
>
> A few points to start with:
>
> 1/ For my general understanding of how we might take the inference
> provision further in jena, what inferences are you particularly
> interested in?

I noticed you like to ask :) I'm not sure what level of detail is useful, so it'd probably be more informative to have some sort of sticky thread about that, or a poll if you know how to categorise what would be useful to know.

In the present context, ANY inference is preferable to writing specific SPARQL lookup queries.

The present context is:
- a few small OWL ontologies where the most complicated axioms are various class restrictions and property chains
- some amount of rather qualitative data (pizzas and toppings kind of things) with a lot of n-ary relation reification, where basic ontology traversal combining predicate composition and transitivity is useful
- some large amount of quantitative data for which the main problems are i) navigating reified relations and ii) aggregating quantities but I'm not sure that second use case is inferencing rather than some form of computation
- some pervasive reliance on labels with queries where some variables are labels
- a lot of the data is temporal too (one of the reasons for reification and for named graphs)

So as a first answer, in this context, at this stage, transitivity and RDFS type of inference are a very minimum. (Although, rdfs:domain type of inference is not really desired because these are better interpreted as constraints and SHACL might be preferred for handling this.) This bare minimum is just to be able to query with a hierarchy of predicates, for example. Another typical use case is class instantiation and subsumption (so again, rdfs:subPropertyOf and rdfs:subClassOf being transitive...) This is usually not enough and OWL property chains are useful.
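
For concreteness, here is a hypothetical Turtle sketch of the kinds of axioms described above (a predicate hierarchy, transitivity, and an OWL property chain); all names are illustrative, not taken from the actual ontologies:

```turtle
@prefix ex:   <http://example.org/ns#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Predicate hierarchy: a query on ex:relatedTo should also match ex:partOf.
ex:partOf rdfs:subPropertyOf ex:relatedTo .

# Transitivity, for basic ontology traversal.
ex:partOf a owl:TransitiveProperty .

# Property chain: if x ex:hasPart y and y ex:hasTopping z, infer x ex:hasTopping z.
ex:hasTopping owl:propertyChainAxiom ( ex:hasPart ex:hasTopping ) .
```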

At another stage, backward and/or forward rules but I don't have clear use cases in mind at this point in this context.


> 2/ It is not possible in an assembler/Fuseki configuration file, to
> create a new named graph and have another inference graph put around
> that new graph at runtime.

I will follow up on this further down the threads.

> 3/ The union graph can't be updated. It's a read-only union.

I'm not sure what this means. Does it mean new graphs can't be added to the union?
Also more below re. Attempt 1 question.

> 4/ "ERROR Exception in initialization: caught: Not in tramsaction"
> Do you have the stacktrace for this?

I saw you found it -- yes, typo, sorry, I typed the error message before pasting the whole trace.

Noted the bug. Not sure that would make the config do its intended job though.


> Inline questions about attempt 1.
>
> Andy

<...>

> > Outcome: This allows me to load data into named graphs and to perform inference. However, it does not persist upon restarting the server.
>
> Did you enable tdb:unionDefaultGraph as well, load the named graph and
> then access the union?

Here I might be confused. The documentation says to use either the union graph or update, but not both; is that because of the static character of the union graph? I hadn't realised there was any issue having both, i.e., having an update service where the TDB2 dataset has unionDefaultGraph true.

I *think* I tried both. I'm not sure it changed anything about the behaviour I was trying to check (i.e., persistence of named graphs created with SPARQL Update).

My basic process is:

1. Terminal$ fuseki-server
2. Run a bunch of: LOAD <myfile> INTO GRAPH <myNamedGraph>
3. Query triples in the union graph or with {GRAPH <myNamedGraph> {?i ?like ?trains}}.
4. Terminal$ CTRL+C CTRL+C Y
5. Terminal$ fuseki-server
6. Query triples in the union graph or with {GRAPH <myNamedGraph> {?i ?like ?trains}}.
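
Steps 2, 3 and 6 above, written out in full SPARQL syntax (the file and graph names are placeholders):

```sparql
# Step 2 (SPARQL Update): load a file into a named graph.
LOAD <file:///data/myfile.ttl> INTO GRAPH <urn:example:myNamedGraph>

# Steps 3 and 6 (SPARQL Query): check the triples in the named graph...
SELECT * WHERE { GRAPH <urn:example:myNamedGraph> { ?s ?p ?o } }

# ...or, with tdb2:unionDefaultGraph true, in the union via the default graph.
SELECT * WHERE { ?s ?p ?o }
```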


> What isn't persisted? Inferences or base data?

This is about the data stored in the named graph during 2.

When I have a single TDB2 dataset with no inference engine in my configuration file:
3 succeeds and 6 succeeds

The data is persisted, including in named graphs.

When I also have a set up for an inference engine in my configuration file,
3 succeeds and 6 fails.

I'm not entirely sure I managed to explain myself very clearly. But hope this helps.

Also, I will probably try to redo everything I have done to better address your questions. I will follow up down the thread on 2/ as I think some clarity is needed on the named graphs. I have tried naming graphs in the configuration file but this didn't seem to help; however, I think I wasn't linking to the inference model. So I'll follow up on this and try to provide clear config examples for that.

With many thanks and kind regards,
Pierre




Re: Fuskei2 configuration, TDB2 data, Inferencing with ontologies, Persisting named graphs upon server restart

Posted by Andy Seaborne <an...@apache.org>.
Hi Pierre,

A few points to start with:

1/ For my general understanding of how we might take the inference 
provision further in jena, what inferences are you particularly 
interested in?

2/ It is not possible in an assembler/Fuseki configuration file, to 
create a new named graph and have another inference graph put around 
that new graph at runtime.

3/ The union graph can't be updated.  It's a read-only union.

4/ "ERROR Exception in initialization: caught: Not in tramsaction"
Do you have the stacktrace for this?

Inline questions about attempt 1.

     Andy



On 31/01/2019 15:00, Pierre Grenon wrote:
> <...>
> ###
> Attempt 1:
> 
> # Example of a data service with SPARQL query and update on an
> # inference model.  Data is taken from TDB.
> https://github.com/apache/jena/blob/master/jena-fuseki2/examples/service-inference-2.ttl
> which I have adapted to use TDB2 (which basically meant updating the namespace and the references to classes).
> 
> Outcome: This allows me to load data into named graphs and to perform inference. However, the data does not persist upon restarting the server.

Did you enable tdb:unionDefaultGraph as well, load the named graph and 
then access the union?

What isn't persisted? Inferences or base data?
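For reference, the union default graph is switched on in the TDB2 dataset assembler itself; a minimal fragment (the location value is illustrative) looks like:

```
@prefix tdb2: <http://jena.apache.org/2016/tdb#> .
@prefix :     <http://base/#> .

:tdbDataset a tdb2:DatasetTDB2 ;
    tdb2:location "DB2" ;
    # Queries of the default graph see the union of all named graphs
    tdb2:unionDefaultGraph true .
```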
> 
> 
> ###
> Attempt 2:
> 
> Define two services pointing to different graphs as advised in
> Subject Re: Persisting named graphs in TDB with jena-fuseki
> Date      Thu, 10 Mar 2016 14:47:10 GMT
> http://mail-archives.apache.org/mod_mbox/jena-users/201603.mbox/%3CD30738C0.64A6F%25rvesse@dotnetrdf.org%3E
> 
> Outcome: I could only manage defining two independent services on two independent datasets and couldn't figure out how to link the TDB2 and the inference graphs.
> 
> A reference to https://issues.apache.org/jira/browse/JENA-1122
> is made in
> http://mail-archives.apache.org/mod_mbox/jena-users/201603.mbox/%3C56EC23E3.9010000@apache.org%3E
> But I do not understand what is said here.
> 
> 
> ###
> Attempt 3:
> 
> I have found a config that seemed to make the link that was needed between graphs in Attempt 2 in:
> https://stackoverflow.com/questions/47568703/named-graphs-v-default-graph-behaviour-in-apache-jena-fuseki
> which I have adapted to TDB2.
> 
> However, this gives me:
> ERROR Exception in initialization: caught: Not in a transaction
> 
> This also seems to have been a cut off point in the thread mentioned above
> Subject Re: Configuring fuseki with TDB2 and OWL reasoning
> Date      Tue, 20 Feb 2018 10:55:13 GMT
> http://mail-archives.apache.org/mod_mbox/jena-users/201802.mbox/%3C6d37a8c7-aca1-4c1c-0cd3-fa041ecc07eb%40apache.org%3E
> 
> This message refers to https://issues.apache.org/jira/browse/JENA-1492
> which I do not understand, but which appears to have been resolved. Indeed, I tested the config attached to it (probably a minimal example for that issue) and it worked, but I don't think this is the config I need.
> 
> 
> 
> #### ATTEMPT 3 Config
> 
> @prefix :      <http://base/#> .
> @prefix tdb2:  <http://jena.apache.org/2016/tdb#> .
> @prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
> @prefix ja:    <http://jena.hpl.hp.com/2005/11/Assembler#> .
> @prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
> @prefix fuseki: <http://jena.apache.org/fuseki#> .
> 
> # TDB2
> #tdb2:DatasetTDB2 rdfs:subClassOf  ja:RDFDataset .
> #tdb2:GraphTDB    rdfs:subClassOf  ja:Model .
> 
> # Service 1: Dataset endpoint (no reasoning)
> :dataService a fuseki:Service ;
>    fuseki:name           "tdbEnpoint" ;
>    fuseki:serviceQuery   "sparql", "query" ;
>    fuseki:serviceUpdate  "update" ;
>    fuseki:dataset        :tdbDataset ;
> .
> 
> # Service 2: Reasoning endpoint
> :reasoningService a fuseki:Service ;
>    fuseki:dataset                 :infDataset ;
>    fuseki:name                    "reasoningEndpoint" ;
>    fuseki:serviceQuery            "query", "sparql" ;
>    fuseki:serviceReadGraphStore   "get" ;
> .
> 
> # Inference dataset
> :infDataset rdf:type ja:RDFDataset ;
>              ja:defaultGraph :infModel ;
> .
> 
> # Inference model
> :infModel a ja:InfModel ;
>             ja:baseModel :g ;
> 
>             ja:reasoner [
>                ja:reasonerURL <http://jena.hpl.hp.com/2003/OWLFBRuleReasoner> ;
>             ] ;
> .
> 
> # Intermediate graph referencing the default union graph
> :g rdf:type tdb2:GraphTDB ;
>     tdb2:dataset :tdbDataset ;
>     tdb2:graphName <urn:x-arq:UnionGraph> ;
> .
> 
> # The location of the TDB dataset
> :tdbDataset rdf:type tdb2:DatasetTDB2 ;
>     tdb2:location  "C:\\dev\\apache-jena-fuseki-3.8.0\\run/databases/weird7" ;
>     tdb2:unionDefaultGraph true ;
> .
> 
> 
> #### SERVER
> ...
> [2019-01-31 13:54:58] Config     INFO  Load configuration: file:///C:/dev/apache-jena-fuseki-3.10.0/run/configuration/weird7.ttl
> [2019-01-31 13:54:58] Server     ERROR Exception in initialization: caught: Not in a transaction
> [2019-01-31 13:54:58] WebAppContext WARN  Failed startup of context o.e.j.w.WebAppContext@6edc4161{Apache Jena Fuseki Server,/,file:///C:/dev/apache-jena-fuseki-3.10.0/webapp/,UNAVAILABLE}
> org.apache.jena.assembler.exceptions.AssemblerException: caught: Not in a transaction
>    doing:
>      root: file:///C:/dev/apache-jena-fuseki-3.10.0/run/configuration/weird7.ttl#model_inf with type: http://jena.hpl.hp.com/2005/11/Assembler#InfModel assembler class: class org.apache.jena.assembler.assemblers.InfModelAssembler
>      root: http://base/#dataset with type: http://jena.hpl.hp.com/2005/11/Assembler#RDFDataset assembler class: class org.apache.jena.sparql.core.assembler.DatasetAssembler
> 
>          at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.openBySpecificType(AssemblerGroup.java:165)
>          at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.open(AssemblerGroup.java:144)
>         at org.apache.jena.assembler.assemblers.AssemblerGroup$ExpandingAssemblerGroup.open(AssemblerGroup.java:93)
>          at org.apache.jena.assembler.assemblers.AssemblerBase.open(AssemblerBase.java:39)
>          at org.apache.jena.assembler.assemblers.AssemblerBase.open(AssemblerBase.java:35)
>          at org.apache.jena.assembler.assemblers.AssemblerGroup.openModel(AssemblerGroup.java:47)
>          at org.apache.jena.sparql.core.assembler.DatasetAssembler.createDataset(DatasetAssembler.java:56)
>          at org.apache.jena.sparql.core.assembler.DatasetAssembler.open(DatasetAssembler.java:43)
>          at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.openBySpecificType(AssemblerGroup.java:157)
>          at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.open(AssemblerGroup.java:144)
>          at org.apache.jena.assembler.assemblers.AssemblerGroup$ExpandingAssemblerGroup.open(AssemblerGroup.java:93)
>          at org.apache.jena.assembler.assemblers.AssemblerBase.open(AssemblerBase.java:39)
>          at org.apache.jena.assembler.assemblers.AssemblerBase.open(AssemblerBase.java:35)
>          at org.apache.jena.fuseki.build.FusekiConfig.getDataset(FusekiConfig.java:345)
>          at org.apache.jena.fuseki.build.FusekiConfig.buildDataService(FusekiConfig.java:299)
>          at org.apache.jena.fuseki.build.FusekiConfig.buildDataAccessPoint(FusekiConfig.java:289)
>          at org.apache.jena.fuseki.build.FusekiConfig.readConfiguration(FusekiConfig.java:272)
>          at org.apache.jena.fuseki.build.FusekiConfig.readConfigurationDirectory(FusekiConfig.java:251)
>          at org.apache.jena.fuseki.webapp.FusekiWebapp.initializeDataAccessPoints(FusekiWebapp.java:226)
>          at org.apache.jena.fuseki.webapp.FusekiServerListener.serverInitialization(FusekiServerListener.java:98)
>          at org.apache.jena.fuseki.webapp.FusekiServerListener.contextInitialized(FusekiServerListener.java:56)
>          at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:952)
>          at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:558)
>          at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:917)
>          at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:370)
>          at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1497)
>          at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1459)
>          at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:847)
>          at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:287)
>          at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:545)
>          at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>          at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
>          at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
>          at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
>          at org.eclipse.jetty.server.handler.gzip.GzipHandler.doStart(GzipHandler.java:410)
>          at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>          at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
>          at org.eclipse.jetty.server.Server.start(Server.java:416)
>          at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
>          at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
>          at org.eclipse.jetty.server.Server.doStart(Server.java:383)
>          at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>          at org.apache.jena.fuseki.cmd.JettyFusekiWebapp.start(JettyFusekiWebapp.java:138)
>          at org.apache.jena.fuseki.cmd.FusekiCmd.runFuseki(FusekiCmd.java:372)
>          at org.apache.jena.fuseki.cmd.FusekiCmd$FusekiCmdInner.exec(FusekiCmd.java:356)
>          at jena.cmd.CmdMain.mainMethod(CmdMain.java:93)
>          at jena.cmd.CmdMain.mainRun(CmdMain.java:58)
>          at jena.cmd.CmdMain.mainRun(CmdMain.java:45)
>          at org.apache.jena.fuseki.cmd.FusekiCmd$FusekiCmdInner.innerMain(FusekiCmd.java:104)
>          at org.apache.jena.fuseki.cmd.FusekiCmd.main(FusekiCmd.java:67)
> Caused by: org.apache.jena.dboe.transaction.txn.TransactionException: Not in a transaction
>          at org.apache.jena.dboe.transaction.txn.TransactionalComponentLifecycle.checkTxn(TransactionalComponentLifecycle.java:417)
>          at org.apache.jena.dboe.trans.bplustree.BPlusTree.getRootRead(BPlusTree.java:159)
>          at org.apache.jena.dboe.trans.bplustree.BPlusTree.find(BPlusTree.java:239)
>          at org.apache.jena.tdb2.store.nodetable.NodeTableNative.accessIndex(NodeTableNative.java:133)
>          at org.apache.jena.tdb2.store.nodetable.NodeTableNative._idForNode(NodeTableNative.java:118)
>          at org.apache.jena.tdb2.store.nodetable.NodeTableNative.getNodeIdForNode(NodeTableNative.java:57)
>          at org.apache.jena.tdb2.store.nodetable.NodeTableCache._idForNode(NodeTableCache.java:222)
>          at org.apache.jena.tdb2.store.nodetable.NodeTableCache.getNodeIdForNode(NodeTableCache.java:114)
>          at org.apache.jena.tdb2.store.nodetable.NodeTableWrapper.getNodeIdForNode(NodeTableWrapper.java:47)
>          at org.apache.jena.tdb2.store.nodetable.NodeTableInline.getNodeIdForNode(NodeTableInline.java:58)
>          at org.apache.jena.tdb2.store.nodetupletable.NodeTupleTableConcrete.idForNode(NodeTupleTableConcrete.java:182)
>          at org.apache.jena.tdb2.store.nodetupletable.NodeTupleTableConcrete.findAsNodeIds(NodeTupleTableConcrete.java:136)
>          at org.apache.jena.tdb2.store.nodetupletable.NodeTupleTableConcrete.find(NodeTupleTableConcrete.java:114)
>          at org.apache.jena.tdb2.store.DatasetPrefixesTDB.readPrefixMap(DatasetPrefixesTDB.java:111)
>          at org.apache.jena.sparql.graph.GraphPrefixesProjection.getNsPrefixMap(GraphPrefixesProjection.java:94)
>          at org.apache.jena.tdb2.store.GraphViewSwitchable$PrefixMappingImplTDB2.getNsPrefixMap(GraphViewSwitchable.java:159)
>          at org.apache.jena.shared.impl.PrefixMappingImpl.setNsPrefixes(PrefixMappingImpl.java:138)
>          at org.apache.jena.tdb2.store.GraphViewSwitchable.createPrefixMapping(GraphViewSwitchable.java:68)
>          at org.apache.jena.graph.impl.GraphBase.getPrefixMapping(GraphBase.java:165)
>          at org.apache.jena.reasoner.BaseInfGraph.getPrefixMapping(BaseInfGraph.java:55)
>          at org.apache.jena.rdf.model.impl.ModelCom.getPrefixMapping(ModelCom.java:1018)
>          at org.apache.jena.rdf.model.impl.ModelCom.setNsPrefixes(ModelCom.java:1055)
>          at org.apache.jena.assembler.assemblers.ModelAssembler.open(ModelAssembler.java:45)
>          at org.apache.jena.assembler.assemblers.AssemblerGroup$PlainAssemblerGroup.openBySpecificType(AssemblerGroup.java:157)
>          ... 49 more
> [2019-01-31 13:54:58] Server     INFO  Started 2019/01/31 13:54:58 GMT on port 3030
> 