Posted to users@jena.apache.org by Jacek Grzebyta <jg...@gmail.com> on 2014/03/22 11:29:59 UTC

OWL Reasoner and OutOfMemory

Dear All,

I have a problem with OWL reasoning. Even a quite simple SPARQL request
causes huge CPU consumption and finally produces: java.lang.OutOfMemoryError:
GC overhead limit exceeded

As a schema I use the Uniprot OWL file (ftp://ftp.uniprot.org/pub/databases/uniprot
/current_release/rdf/core.owl) loaded into TDB. There was no error
while loading that file. My config file is available here:
 config.ttl<https://docs.google.com/file/d/0B9vmnMuEyN3VajBJNTZ1UW03SWc/edit?usp=drive_web>
And an example log file here:
 error.log<https://docs.google.com/file/d/0B9vmnMuEyN3VNVIwY3VYM0V6NDg/edit?usp=drive_web>

Is there any way to optimise the configuration? I know I have a lot of files
for the last graph, but they must not be loaded into TDB because some of them
will be replaced by newer versions. Would my own Java program performing the
reasoning have better performance?

Thanks a lot,
Jacek

Re: OWL Reasoner and OutOfMemory

Posted by Jacek Grzebyta <jg...@gmail.com>.
I found that OWLMicro was enough.

Thank you very much.

Jacek
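
[Editor's note: for readers following along, selecting OWLMicro in a TDB-backed
assembler configuration can look roughly like the fragment below. This is a
sketch only, not the poster's actual config.ttl: the node <#tdbGraph> is a
placeholder for whatever base model the real configuration defines, and the
reasoner is identified by its factory URL as listed in the Jena inference
documentation.]

```turtle
@prefix ja:  <http://jena.hpl.hp.com/2005/11/Assembler#> .

# An inference model wrapping a base graph (e.g. one stored in TDB).
# <#tdbGraph> is a placeholder for the dataset's actual base model.
<#infModel> a ja:InfModel ;
    ja:baseModel <#tdbGraph> ;
    ja:reasoner [
        # The OWLMicro rule reasoner, identified by its factory URL.
        ja:reasonerURL <http://jena.hpl.hp.com/2003/OWLMicroFBRuleReasoner>
    ] .
```

Swapping the reasoner URL is usually the only change needed to trade
coverage for memory: OWLMicro implements a smaller subset of OWL than the
full OWL rule reasoner, which is why it stays within heap limits here.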


On 23 March 2014 09:42, Dave Reynolds <da...@gmail.com> wrote:

> [...]

Re: OWL Reasoner and OutOfMemory

Posted by Jacek Grzebyta <jg...@gmail.com>.
Thanks Dave. I will write later about the outcome.

Jacek



Re: OWL Reasoner and OutOfMemory

Posted by Dave Reynolds <da...@gmail.com>.
The out-of-the-box Jena reasoners are in-memory reasoners. Reasoning
over TDB just makes them slower; it doesn't allow them to scale better.

Things to try are:
   - use the OWLMicro reasoner (if that covers sufficient cases for your
purposes)
   - allocate more memory
   - use Pellet
   - handcrafted inferences using SPARQL Update or Java, if you only need
some specific simple inferences that you can handle that way
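
[Editor's note: as an illustration of the last option, a handcrafted
inference materialised with SPARQL Update might look like the sketch
below. It forward-materialises rdf:type triples along rdfs:subClassOf
chains; this is one common example pattern, not code from the thread.]

```sparql
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Materialise inherited types: if ?x is an instance of a subclass,
# assert its membership in every (transitive) superclass as well.
INSERT { ?x rdf:type ?super }
WHERE  {
  ?x rdf:type ?sub .
  ?sub rdfs:subClassOf+ ?super .
}
```

Running an update like this once after loading (and again when the data
changes) stores just the specific entailments you need in TDB, without
keeping a full reasoner and its deductions graph in memory.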

Dave

On 22/03/14 10:29, Jacek Grzebyta wrote:
> [...]