Posted to users@jena.apache.org by Daniel Maatari Okouya <ok...@yahoo.fr> on 2013/12/10 22:58:36 UTC

Jena TDB and Jena Inference cooperation underlying mechanics

Dear All, 

I would like to understand a bit better the underlying mechanics of making Jena TDB and the Jena inference infrastructure work together, assuming they can.


Hence, I have the following question: 


1. Can one make Jena TDB act like Stardog, in the sense that a query would also return inferred triples?

If that is possible, can someone explain the underlying mechanics it would imply? Here is what I understood from the documentation:

Inferred triples live in an InfGraph/InfModel (a wrapper), which is obtained by binding a reasoner to a base Graph/Model (the one containing the asserted triples). One can query that InfGraph using the query engine (however, this happens on in-memory models).

My guess is that if a model is in TDB, then for a query to include the inferred triples, the query must actually run against the InfModel/InfGraph. However, I don't know whether that is possible or how exactly TDB would do it. It would mean choosing a reasoner, creating the InfModel, and exposing it in lieu of the base model.
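
Concretely, I imagine something like the following untested sketch (the store path and the choice of the built-in RDFS reasoner are just placeholders; the package names are those of current Apache Jena releases, older ones used com.hp.hpl.jena):

    import org.apache.jena.query.* ;
    import org.apache.jena.rdf.model.InfModel ;
    import org.apache.jena.rdf.model.Model ;
    import org.apache.jena.rdf.model.ModelFactory ;
    import org.apache.jena.reasoner.ReasonerRegistry ;
    import org.apache.jena.tdb.TDBFactory ;

    Dataset dataset = TDBFactory.createDataset("/path/to/tdb") ;  // placeholder path
    dataset.begin(ReadWrite.READ) ;
    try {
        Model base = dataset.getDefaultModel() ;     // a view onto TDB, not a copy
        InfModel inf = ModelFactory.createInfModel(
                ReasonerRegistry.getRDFSReasoner(), base) ;

        // Query the InfModel instead of the base model, so the answers
        // include inferred triples as well as asserted ones.
        String q = "SELECT ?s ?t WHERE { ?s a ?t }" ;
        try (QueryExecution qe = QueryExecutionFactory.create(q, inf)) {
            ResultSetFormatter.out(qe.execSelect()) ;
        }
    } finally { dataset.end() ; }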

Can someone explain those mechanics a bit, that is, how it works with Jena TDB if one wants some inference? For instance, if the actual model is RDFS or OWL, how does one query it and obtain answers that include the inferred knowledge? Many thanks.

Best, 

-M-

 

-- 
Daniel Maatari Okouya
Sent with Airmail

Re: Jena TDB and Jena Inference cooperation underlying mechanics

Posted by Maatari Okouya <ok...@yahoo.fr>.
Hi Dave,
Got it,
Many thanks for your support.
-M-

Re: Jena TDB and Jena Inference cooperation underlying mechanics

Posted by Daniel Maatari Okouya <ok...@yahoo.fr>.
Thanks for your help, Andy.
-- 
Daniel Maatari Okouya
Sent with Airmail

Re: Jena TDB and Jena Inference cooperation underlying mechanics

Posted by Dave Reynolds <da...@gmail.com>.
On 13/12/13 10:53, Andy Seaborne wrote:
> On 13/12/13 08:26, Dave Reynolds wrote:
>> If you want to save everything then do:
>>
>>      tdbmodel.add( infmodel );
>>
>> Or, more completely,
>>
>>      dataset.begin(ReadWrite.READ) ;
>
> ReadWrite.WRITE :-)

Oops, thanks for spotting that - I cut/pasted the wrong example!
Dave



Re: Jena TDB and Jena Inference cooperation underlying mechanics

Posted by Andy Seaborne <an...@apache.org>.
On 13/12/13 08:26, Dave Reynolds wrote:
> If you want to save everything then do:
>
>      tdbmodel.add( infmodel );
>
> Or, more completely,
>
>      dataset.begin(ReadWrite.READ) ;

ReadWrite.WRITE :-)

>      try {
>          tdbmodel.add( infmodel );
>      } finally { dataset.end() ; }

dataset.begin(ReadWrite.WRITE) ;
try {
     Model tdbmodel = dataset.get....
     tdbmodel.add( infmodel );
     dataset.commit() ;
} finally { dataset.end() ; }
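
For completeness, a minimal end-to-end sketch along those lines (untested; imports as in the earlier snippet, the store path is a placeholder, and the RDFS reasoner stands in for whatever reasoner is actually used):

    Dataset dataset = TDBFactory.createDataset("/path/to/tdb") ;

    // Build the inference model in memory first, outside the write transaction.
    Model data = ModelFactory.createDefaultModel() ;
    // ... read the ontology and instance data into 'data' ...
    InfModel infmodel = ModelFactory.createInfModel(
            ReasonerRegistry.getRDFSReasoner(), data) ;

    dataset.begin(ReadWrite.WRITE) ;
    try {
        Model tdbmodel = dataset.getDefaultModel() ;
        tdbmodel.add( infmodel ) ;    // copies asserted and inferred statements
        dataset.commit() ;
    } finally { dataset.end() ; }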



Re: Jena TDB and Jena Inference cooperation underlying mechanics

Posted by Dave Reynolds <da...@gmail.com>.
If you want to save everything then do:

     tdbmodel.add( infmodel );

Or, more completely,

     dataset.begin(ReadWrite.READ) ;
     try {
         tdbmodel.add( infmodel );
     } finally { dataset.end() ; }

Dave



Re: Jena TDB and Jena Inference cooperation underlying mechanics

Posted by Daniel Maatari Okouya <ok...@yahoo.fr>.
Hi, 

Sorry to bother you again on that, but I tried to read up online and still could not really figure out that last part: how to save the inferred triples back into TDB. After trying a few things, I realized that as soon as you create an InfGraph, many statements are already generated. (I use an InfGraph generated from a Pellet reasoner, by the way.) I just don't understand how to proceed from there. “Generating the query, saving back the result” does not speak to me. Could you please explain further?

Many thanks,

M
-- 
Daniel Maatari Okouya
Sent with Airmail
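
A possible sketch of that “save back” step, keeping only the deduced statements rather than the whole InfModel (untested; 'infmodel' is the Pellet-backed InfModel mentioned above and 'dataset' is the target TDB dataset):

    // Model.difference() returns a new in-memory model holding the statements
    // present in the InfModel but not in its raw (asserted) base model,
    // i.e. just the inferred ones.
    Model inferredOnly = infmodel.difference( infmodel.getRawModel() ) ;

    dataset.begin(ReadWrite.WRITE) ;
    try {
        dataset.getDefaultModel().add( inferredOnly ) ;
        dataset.commit() ;
    } finally { dataset.end() ; }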
> From: Dave Reynolds Dave Reynolds  
> Reply: users@jena.apache.org users@jena.apache.org  
> Date: December 11, 2013 at 9:47:51 AM  
> To: users@jena.apache.org users@jena.apache.org  
> Subject: Re: Jena TDB and Jena Inference cooperation underlying mechanics  
> There's no specific integration of TDB and inference.  
>  
> The rule-based inference engines themselves run in memory but can  
> operate over any model, however it is stored. So there are several options.  
>  
> 1. Construct an InfModel over a TDB based model. When you query the  
> InfModel you will see both the TDB model and any inferences.  
>  
> 2. Load the TDB model into memory then construct an InfModel over that.  
> Then query the InfModel.  
>  
> 3. Prepare an inference closure and store that. Load the data, e.g. into  
> memory, construct the InfModel, query the InfModel for the patterns you  
> are interested in (which might be every triple), and store all those results  
> in a TDB model. Then at run time open the closure TDB as a plain model  
> and query it.  
>  
> 4. As 3 but use TDB's high performance RDFS-subset closure.  
>  
> #1 is easy to do but the inference results are stored in memory so it  
> doesn't enable you to scale to models that wouldn't fit in memory anyway.  
>  
> #2 can be faster. Inference involves a lot of queries, and querying an  
> in-memory model is naturally faster than querying TDB. Whether the cost of  
> the initial load outweighs the speed of inference depends on how caching  
> works out, your data and your queries.  
>  
> #3 gives you good query performance at the cost of an expensive slow  
> cycle to prepare the data. It's not suited to mutating data and requires  
> the preparation phase to be run on a machine with enough memory to  
> compute the closure, or closure subset, that you want.  
>  
> #4 can cope with much larger data sets than #3 at the expense of a more  
> limited range of inference.  
>  
> Dave  
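
The runtime half of option 3 might look like this untested sketch (placeholder path; the stored closure is opened as a plain model, with no reasoner attached, so queries are ordinary TDB lookups):

    Dataset closure = TDBFactory.createDataset("/path/to/closure-tdb") ;
    closure.begin(ReadWrite.READ) ;
    try {
        Model plain = closure.getDefaultModel() ;   // no InfModel wrapper here
        String q = "PREFIX foaf: <http://xmlns.com/foaf/0.1/> "
                 + "SELECT ?p WHERE { ?p a foaf:Person }" ;
        try (QueryExecution qe = QueryExecutionFactory.create(q, plain)) {
            ResultSetFormatter.out(qe.execSelect()) ;
        }
    } finally { closure.end() ; }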


Re: Jena TDB and Jena Inference cooperation underlying mechanics

Posted by Daniel Maatari Okouya <ok...@yahoo.fr>.
I like solution 3; it is something I had in mind before as well. I get it now. 

1. However, how exactly can one add the information back into the ontology? Do you have some example code? From what I understood, you query the ontology, and then... that is where I get lost, because the result you get could very well contain both asserted and inferred triples. I come from the OWL API; there, there is clearly something called an axiom generator, along with a clear procedure for adding the inferred triples to the original ontology or to any other ontology. Is there something similar in Jena? Or, simply, how is it done; what is the best practice? 
From your explanation it seems that you only generate those triples that are inferred for the query patterns of interest; that's interesting. 
In any case, I would need some pointers for that procedure. Is there code or an example somewhere?

Many thanks, 
-M-

-- 
Daniel Maatari Okouya
Sent with Airmail


Re: Jena TDB and Jena Inference cooperation underlying mechanics

Posted by Dave Reynolds <da...@gmail.com>.
On 11/12/13 10:23, Daniel Maatari Okouya wrote:
> Many thanks for taking the time to provide such a precise answer.
>
> Meanwhile, I’m afraid I require some clarification.
>
>
> 1- Could you explain the difference between 1 and 2? To me the solutions are exactly the same. With my current knowledge of Jena, I don’t see two different pieces of code coming out of “Construct an InfModel over a TDB” and “Load the TDB model into memory”. In both cases I would go with dataset.getNamedModel or getDefaultModel, which to me would result in an in-memory model that will be a parameter of createInfModel…

No, a call like getNamedModel doesn't load the model into memory; it 
gives you an interface onto the TDB store. When you query that model, the 
queries are sent to TDB.

To create an in memory copy you would do something like:

    Model memCopy = ModelFactory.createDefaultModel();
    memCopy.add( tdbModel );
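
For concreteness, a minimal sketch of the two options side by side (the 
store directory is made up, and I'm using the plain RDFS reasoner just as 
an example; substitute whatever reasoner you actually want):

    import com.hp.hpl.jena.query.Dataset;
    import com.hp.hpl.jena.rdf.model.*;
    import com.hp.hpl.jena.tdb.TDBFactory;

    Dataset dataset = TDBFactory.createDataset("/path/to/tdb");

    // Option 1: wrap the TDB-backed model directly. Every lookup the
    // reasoner makes turns into a query against the store on disk.
    Model tdbModel = dataset.getDefaultModel();
    InfModel infOverTdb = ModelFactory.createRDFSModel(tdbModel);

    // Option 2: copy the data into memory first, then reason over the copy.
    Model memCopy = ModelFactory.createDefaultModel();
    memCopy.add(tdbModel);
    InfModel infOverMem = ModelFactory.createRDFSModel(memCopy);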

> 2- In (3) I’m not familiar with the term “inference closure”. My guess would be that you mean the ontology schema… is there another term for that? Could you please clarify in easier jargon.

Maybe a trivial example would help:

Suppose you have an ontology, in a TDB model or a file, which states:

      foaf:knows rdfs:domain foaf:Person; rdfs:range foaf:Person .

This is a small fragment of the FOAF vocabulary specification [1].

Now suppose you have some instance data, again a TDB model or file, 
which only states:

      :dave foaf:knows :bob .

Then a query to list the types of :dave and :bob against that instance 
data will, by itself, return nothing.

However, if you construct an RDFS inference model which combines the 
ontology and the instance data with knowledge of RDFS semantics, then a 
query to it for type statements would yield, among other things:

      :dave rdf:type foaf:Person .
      :bob rdf:type foaf:Person .

These plus the ontology plus the instance data are the inference 
closure. I.e. the closure is what you get by adding back in all the new 
statements you can derive as a result of inference [2].

The inference engine has had to do work to generate those additional 
inferred statements, and all the internal reasoner state for doing so is in 
memory. So if you kill your application and ask the same query again, the 
inference engine has to do that work all over again. For more complex 
examples that work can be expensive, particularly over a persistent store.

So in option 3 you take those inferred triples and store them, along 
with the original triples. If you list the rdf:type statements on that 
model now you see the same answers you would have seen if you queried 
via an inference engine but didn't have to do inference to get them (you 
did it ahead of time).
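
A minimal sketch of that materialisation step (the directory is 
illustrative, infModel is the inference model built as above, and the 
restriction to rdf:type statements is just an example — you could equally 
list every statement):

    import com.hp.hpl.jena.query.Dataset;
    import com.hp.hpl.jena.query.ReadWrite;
    import com.hp.hpl.jena.rdf.model.*;
    import com.hp.hpl.jena.tdb.TDBFactory;
    import com.hp.hpl.jena.vocabulary.RDF;

    Dataset closureDs = TDBFactory.createDataset("/path/to/closure-tdb");
    closureDs.begin(ReadWrite.WRITE);
    try {
        Model closure = closureDs.getDefaultModel();
        // Listing statements on the InfModel yields asserted and
        // inferred triples alike; here we keep just the type statements.
        StmtIterator types = infModel.listStatements(null, RDF.type, (RDFNode) null);
        while (types.hasNext()) {
            closure.add(types.next());
        }
        closureDs.commit();
    } finally {
        closureDs.end();
    }

    // Later, at run time, open "/path/to/closure-tdb" as a plain model
    // and query it - no reasoner involved any more.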

Dave

[1] http://xmlns.com/foaf/spec/#term_knows

[2] Sometimes that complete closure can be infinite, so you can't infer 
and store everything. But in practice the fragments implemented by the 
rule engine are generally finite.



Re: Jena TDB and Jena Inference cooperation underlying mechanics

Posted by Daniel Maatari Okouya <ok...@yahoo.fr>.
Many thanks for taking the time to provide such a precise answer. 

Meanwhile, I’m afraid I require some clarification. 


1- Could you explain the difference between 1 and 2? To me the solutions are exactly the same. With my current knowledge of Jena, I don’t see two different pieces of code coming out of “Construct an InfModel over a TDB” and “Load the TDB model into memory”. In both cases I would go with dataset.getNamedModel or getDefaultModel, which to me would result in an in-memory model that will be a parameter of createInfModel…

2- In (3) I’m not familiar with the term “inference closure”. My guess would be that you mean the ontology schema… is there another term for that? Could you please clarify in easier jargon. 
Moreover, I’m not sure I understand the full logic here. Why would I store the result of some query and then query it again? Is this the way to store all the inferred triples back in the store? Do you mean that you take every triple that is returned and assert them back into the InfModel or the base model? By the way, does that mean that, in general, in Jena there is no way to generate a model that is the combination of the asserted triples and all the inferred triples (or a selected set of inferred triples, e.g. all class axioms)? 

As you can see I’m quite confused ;). I would much appreciate it if you could clarify and detail (3) a bit further.

Also, if you have good pointers for me to read, please do not hesitate ;) 


Many thanks, 

-M-

-- 
Daniel Maatari Okouya
Sent with Airmail


Re: Jena TDB and Jena Inference cooperation underlying mechanics

Posted by Dave Reynolds <da...@gmail.com>.
There's no specific integration of TDB and inference.

The rule-based inference engines themselves run in memory but can 
operate over any model, however it is stored. So there are several options.

1. Construct an InfModel over a TDB-based model. When you query the 
InfModel you will see both the TDB model and any inferences.

2. Load the TDB model into memory, then construct an InfModel over that. 
Then query the InfModel.

3. Prepare an inference closure and store that. Load the data, e.g. into 
memory, construct the InfModel, query the InfModel for the patterns you 
are interested in (which might be every triple), and store all those 
results in a TDB model. Then at run time open the closure TDB store as a 
plain model and query it.

4. As 3, but use TDB's high-performance RDFS-subset closure.

#1 is easy to do, but the inference results are still held in memory, so it 
doesn't let you scale to models that wouldn't fit in memory anyway.
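
For example, #1 boils down to something like this (the store path and the 
query are illustrative, and I'm using the RDFS reasoner just as an example):

    import com.hp.hpl.jena.query.*;
    import com.hp.hpl.jena.rdf.model.*;
    import com.hp.hpl.jena.tdb.TDBFactory;

    Dataset dataset = TDBFactory.createDataset("/path/to/tdb");
    InfModel inf = ModelFactory.createRDFSModel(dataset.getDefaultModel());

    String q = "SELECT ?type WHERE { <http://example.org/dave> a ?type }";
    QueryExecution qe = QueryExecutionFactory.create(q, inf);
    try {
        // The result set reflects both asserted and inferred triples.
        ResultSetFormatter.out(qe.execSelect());
    } finally {
        qe.close();
    }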

#2 can be faster. Inference involves a lot of queries, and querying an 
in-memory model is naturally faster than querying TDB. Whether the cost of 
the initial load outweighs the faster inference depends on how caching 
works out, your data, and your queries.

#3 gives you good query performance at the cost of an expensive, slow 
cycle to prepare the data. It's not suited to mutating data, and it requires 
the preparation phase to be run on a machine with enough memory to 
compute the closure, or closure subset, that you want.

#4 can cope with much larger data sets than #3 at the expense of a more 
limited range of inference.

Dave


