Posted to users@jena.apache.org by Piotr Nowara <pi...@gmail.com> on 2022/01/05 09:15:33 UTC

Fuseki 4.* increased RAM consumption

Hi,

We've recently upgraded Fuseki from 3.13.1 to 4.3.1 because of log4shell.
Our old Fuseki was super stable and reliable, but now, after the upgrade, we
are getting Service Unavailable errors during normal load.

I realized RAM consumption increased dramatically, from 1-1.5GB (3.13.1) to
more than 6GB now. We didn't change anything, just the version. After
downgrading to 3.13.1, RAM consumption is back to normal.

Can anyone explain that? Or maybe someone has experienced a similar
degradation?

Thanks.
Piotr

Re: Fuseki 4.* increased RAM consumption

Posted by jerven Bolleman <je...@sib.swiss>.
Hi All,

Debugging memory use can be really hard. These JVM options help
a lot:

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/

Now, on an out-of-memory error, the JVM will produce a heap dump that can
be analyzed with the Eclipse Memory Analyzer (MAT):

https://www.eclipse.org/mat/

which will be very helpful.
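
If Fuseki is started with the standalone fuseki-server script, these options
can, as far as I know, be passed through the JVM_ARGS environment variable.
A minimal sketch, assuming the default script and a config.ttl (the -Xmx
value is only an example, not a recommendation):

  # Hypothetical invocation: add the heap-dump options to the Fuseki JVM.
  export JVM_ARGS="-Xmx4G -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/"
  ./fuseki-server --config=config.ttl

In a container image the same variable can be set in the Dockerfile or the
pod spec instead of on the command line.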

Regards,
Jerven

On 05/01/2022 17:57, Andy Seaborne wrote:
> That isn't the server log, is it?
> 
> What does the server log have in it?
> 
>      Andy
> 
> On 05/01/2022 14:45, Piotr Nowara wrote:
>> Hi Andy,
>>
>> Thank you for your reply and those config file related suggestions.
>>
>> Regarding those issues the stack trace always looks like this:
>> o.a.j.s.e.h.QueryExceptionHTTP: Service Unavailable
>> at o.a.j.s.e.h.QueryExceptionHTTP.rewrap(QueryExceptionHTTP.java:49)
>> at o.a.j.s.e.h.QueryExecHTTP.executeQuery(QueryExecHTTP.java:493)
>> at o.a.j.s.e.h.QueryExecHTTP.query(QueryExecHTTP.java:483)
>> at o.a.j.s.e.h.QueryExecHTTP.execRowSet(QueryExecHTTP.java:164)
>> at o.a.j.s.e.h.QueryExecHTTP.select(QueryExecHTTP.java:156)
>> at
>> o.a.j.s.e.QueryExecutionAdapter.execSelect(QueryExecutionAdapter.java:117) 
>>
>> at 
>> o.a.j.s.e.QueryExecutionCompat.execSelect(QueryExecutionCompat.java:97)
>> at o.a.j.r.RDFConnection.lambda$querySelect$2(RDFConnection.java:222)
>> at o.a.j.r.RDFConnection$$Lambda$757/0000000000000000.run(Unknown Source)
>> at o.a.jena.system.Txn.exec(Txn.java:77)
>> at o.a.jena.system.Txn.executeRead(Txn.java:115)
>> at o.a.j.r.RDFConnection.querySelect(RDFConnection.java:220)
>> at c.c.m.k.r.f.FusekiWorker.executeQuery(FusekiWorker.java:44)
>>
>> Fuseki did not crash, just seemingly random queries failed to execute
>> because of this error.
> 
>> After switching back to 3.13.1 those errors are gone
>> and memory consumption is below 1GB RAM instead of 6GB. No ad hoc queries
>> are executed against our Fuseki, just the predefined ones, which made me
>> think something might have changed in Fuseki between 3.13.1 and those recent
>> versions (we observed these issues on 4.1.0 and 4.3.1). We use Fuseki on
>> prod and a lot of our API calls depend on it so I will investigate this
>> anomaly further and let you know if I find something new.
>>
>> Best,
>> Piotr
>>
>>
>> Wed, 5 Jan 2022 at 14:54 Andy Seaborne <an...@apache.org> wrote:
>>
>>> Hi Piotr,
>>>
>>> For that in-memory setup, I don't know of any changes that might lead to
>>> increased memory use. Might it be related to the queries received?
>>>
>>> Service Unavailable -->
>>>
>>> That looks like a reverse proxy can't contact the Fuseki server.  What's
>>> the Fuseki server log say around that point?
>>>
>>> The JVM will grow to 6G - or whatever you set the -Xmx to - before a
>>> full GC cuts in.
>>>
>>>       Andy
>>>
>>> Unrelated inline:
>>>
>>> On 05/01/2022 11:49, Piotr Nowara wrote:
>>>> Hi Andy,
>>>>
>>>> We are running Fuseki inside a Kubernetes pod using our own Docker
>>>> image on
>>>> Ubuntu 20.04 and eclipse-temurin:11-jre-focal.
>>>>
>>>> The old RAM limit set in Kubernetes was 6GB which was more than enough
>>>> until we upgraded to 4.3.1.
>>>>
>>>> Our Fuseki hosts four in-memory datasets. The biggest one has 1.6 
>>>> million
>>>> triples (200 MB big when exporting to RDF/TTL file). The three 
>>>> others are
>>>> significantly smaller (less than 50k triples). Our datasets are used as
>>>> read-only data repositories. They are restored from S3-stored TTL 
>>>> backup
>>>> files when Fuseki restarts (see below for config TTL). Longest query
>>> takes
>>>> ca. 8 seconds, 90% of them complete in less than 20ms.
>>>>
>>>> Thanks,
>>>> Piotr
>>>>
>>>> Our config:
>>>> @prefix :      <http://base/#> .
>>>> @prefix tdb:   <http://jena.hpl.hp.com/2008/tdb#> .
>>>> @prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
>>>> @prefix ja:    <http://jena.hpl.hp.com/2005/11/Assembler#> .
>>>> @prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
>>>> @prefix fuseki: <http://jena.apache.org/fuseki#> .
>>>>
>>> ...
>>> no need for the declarations
>>> ...
>>>
>>> Simpler and modernized:
>>>
>>> :service1  a                          fuseki:Service ;
>>>           fuseki:dataset                :dataset ;
>>>           fuseki:name                   "meta" ;
>>>           fuseki:serviceQuery           "query" , "sparql" ;
>>>           fuseki:serviceReadGraphStore  "get" ;
>>>           fuseki:serviceReadWriteGraphStore
>>>                   "data" ;
>>>           fuseki:serviceUpdate          "update" ;
>>>           fuseki:serviceUpload          "upload" .
>>>
>>> :dataset rdf:type ja:MemoryDataset ;
>>>       ja:data <file:./seed/meta-cdq-latest.ttl.gz>
>>>       .
>>>
>>>
>>> and using
>>>
>>>       fuseki:endpoint [
>>>           fuseki:operation fuseki:query ;
>>>           fuseki:name "sparql"
>>>       ];
>>>       fuseki:endpoint [
>>>           fuseki:operation fuseki:query ;
>>>           fuseki:name "query"
>>>       ];
>>>
>>>
>>> rather than
>>>
>>>      fuseki:serviceQuery       "query" , "sparql" ;
>>>
>>> or
>>>
>>> :service1  a          fuseki:Service ;
>>>       fuseki:endpoint [ fuseki:operation fuseki:query ] ;
>>>       fuseki:endpoint [ fuseki:operation fuseki:update ] ;
>>>       ...
>>>
>>> if you use the dataset as the operation endpoint. (service* adds both
>>> for compatibility reasons).
>>>
>>>       Andy
>>>
>>

-- 

	*Jerven Tjalling Bolleman*
Principal Software Developer
*SIB | Swiss Institute of Bioinformatics*
1, rue Michel Servet - CH 1211 Geneva 4 - Switzerland
t +41 22 379 58 85
Jerven.Bolleman@sib.swiss - www.sib.swiss


Re: Fuseki 4.* increased RAM consumption

Posted by Andy Seaborne <an...@apache.org>.
That isn't the server log, is it?

What does the server log have in it?

     Andy

On 05/01/2022 14:45, Piotr Nowara wrote:
> Hi Andy,
> 
> Thank you for your reply and those config file related suggestions.
> 
> Regarding those issues the stack trace always looks like this:
> o.a.j.s.e.h.QueryExceptionHTTP: Service Unavailable
> at o.a.j.s.e.h.QueryExceptionHTTP.rewrap(QueryExceptionHTTP.java:49)
> at o.a.j.s.e.h.QueryExecHTTP.executeQuery(QueryExecHTTP.java:493)
> at o.a.j.s.e.h.QueryExecHTTP.query(QueryExecHTTP.java:483)
> at o.a.j.s.e.h.QueryExecHTTP.execRowSet(QueryExecHTTP.java:164)
> at o.a.j.s.e.h.QueryExecHTTP.select(QueryExecHTTP.java:156)
> at
> o.a.j.s.e.QueryExecutionAdapter.execSelect(QueryExecutionAdapter.java:117)
> at o.a.j.s.e.QueryExecutionCompat.execSelect(QueryExecutionCompat.java:97)
> at o.a.j.r.RDFConnection.lambda$querySelect$2(RDFConnection.java:222)
> at o.a.j.r.RDFConnection$$Lambda$757/0000000000000000.run(Unknown Source)
> at o.a.jena.system.Txn.exec(Txn.java:77)
> at o.a.jena.system.Txn.executeRead(Txn.java:115)
> at o.a.j.r.RDFConnection.querySelect(RDFConnection.java:220)
> at c.c.m.k.r.f.FusekiWorker.executeQuery(FusekiWorker.java:44)
> 
> Fuseki did not crash, just seemingly random queries failed to execute
> because of this error.

> After switching back to 3.13.1 those errors are gone
> and memory consumption is below 1GB RAM instead of 6GB. No ad hoc queries
> are executed against our Fuseki, just the predefined ones, which made me
> think something might have changed in Fuseki between 3.13.1 and those recent
> versions (we observed these issues on 4.1.0 and 4.3.1). We use Fuseki on
> prod and a lot of our API calls depend on it so I will investigate this
> anomaly further and let you know if I find something new.
> 
> Best,
> Piotr
> 
> 
> Wed, 5 Jan 2022 at 14:54 Andy Seaborne <an...@apache.org> wrote:
> 
>> Hi Piotr,
>>
>> For that in-memory setup, I don't know of any changes that might lead to
>> increased memory use. Might it be related to the queries received?
>>
>> Service Unavailable -->
>>
>> That looks like a reverse proxy can't contact the Fuseki server.  What's
>> the Fuseki server log say around that point?
>>
>> The JVM will grow to 6G - or whatever you set the -Xmx to - before a
>> full GC cuts in.
>>
>>       Andy
>>
>> Unrelated inline:
>>
>> On 05/01/2022 11:49, Piotr Nowara wrote:
>>> Hi Andy,
>>>
>>> We are running Fuseki inside a Kubernetes pod using our own Docker image on
>>> Ubuntu 20.04 and eclipse-temurin:11-jre-focal.
>>>
>>> The old RAM limit set in Kubernetes was 6GB which was more than enough
>>> until we upgraded to 4.3.1.
>>>
>>> Our Fuseki hosts four in-memory datasets. The biggest one has 1.6 million
>>> triples (200 MB big when exporting to RDF/TTL file). The three others are
>>> significantly smaller (less than 50k triples). Our datasets are used as
>>> read-only data repositories. They are restored from S3-stored TTL backup
>>> files when Fuseki restarts (see below for config TTL). Longest query
>> takes
>>> ca. 8 seconds, 90% of them complete in less than 20ms.
>>>
>>> Thanks,
>>> Piotr
>>>
>>> Our config:
>>> @prefix :      <http://base/#> .
>>> @prefix tdb:   <http://jena.hpl.hp.com/2008/tdb#> .
>>> @prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
>>> @prefix ja:    <http://jena.hpl.hp.com/2005/11/Assembler#> .
>>> @prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
>>> @prefix fuseki: <http://jena.apache.org/fuseki#> .
>>>
>> ...
>> no need for the declarations
>> ...
>>
>> Simpler and modernized:
>>
>> :service1  a                          fuseki:Service ;
>>           fuseki:dataset                :dataset ;
>>           fuseki:name                   "meta" ;
>>           fuseki:serviceQuery           "query" , "sparql" ;
>>           fuseki:serviceReadGraphStore  "get" ;
>>           fuseki:serviceReadWriteGraphStore
>>                   "data" ;
>>           fuseki:serviceUpdate          "update" ;
>>           fuseki:serviceUpload          "upload" .
>>
>> :dataset rdf:type ja:MemoryDataset ;
>>       ja:data <file:./seed/meta-cdq-latest.ttl.gz>
>>       .
>>
>>
>> and using
>>
>>       fuseki:endpoint [
>>           fuseki:operation fuseki:query ;
>>           fuseki:name "sparql"
>>       ];
>>       fuseki:endpoint [
>>           fuseki:operation fuseki:query ;
>>           fuseki:name "query"
>>       ];
>>
>>
>> rather than
>>
>>      fuseki:serviceQuery       "query" , "sparql" ;
>>
>> or
>>
>> :service1  a          fuseki:Service ;
>>       fuseki:endpoint [ fuseki:operation fuseki:query ] ;
>>       fuseki:endpoint [ fuseki:operation fuseki:update ] ;
>>       ...
>>
>> if you use the dataset as the operation endpoint. (service* adds both
>> for compatibility reasons).
>>
>>       Andy
>>
> 

Re: Fuseki 4.* increased RAM consumption

Posted by Piotr Nowara <pi...@gmail.com>.
Hi Andy,

Thank you for your reply and those config file related suggestions.

Regarding those issues, the stack trace always looks like this:
o.a.j.s.e.h.QueryExceptionHTTP: Service Unavailable
at o.a.j.s.e.h.QueryExceptionHTTP.rewrap(QueryExceptionHTTP.java:49)
at o.a.j.s.e.h.QueryExecHTTP.executeQuery(QueryExecHTTP.java:493)
at o.a.j.s.e.h.QueryExecHTTP.query(QueryExecHTTP.java:483)
at o.a.j.s.e.h.QueryExecHTTP.execRowSet(QueryExecHTTP.java:164)
at o.a.j.s.e.h.QueryExecHTTP.select(QueryExecHTTP.java:156)
at
o.a.j.s.e.QueryExecutionAdapter.execSelect(QueryExecutionAdapter.java:117)
at o.a.j.s.e.QueryExecutionCompat.execSelect(QueryExecutionCompat.java:97)
at o.a.j.r.RDFConnection.lambda$querySelect$2(RDFConnection.java:222)
at o.a.j.r.RDFConnection$$Lambda$757/0000000000000000.run(Unknown Source)
at o.a.jena.system.Txn.exec(Txn.java:77)
at o.a.jena.system.Txn.executeRead(Txn.java:115)
at o.a.j.r.RDFConnection.querySelect(RDFConnection.java:220)
at c.c.m.k.r.f.FusekiWorker.executeQuery(FusekiWorker.java:44)
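
For reference, that call path corresponds to client-side RDFConnection usage
roughly like the sketch below (hypothetical code, not our actual FusekiWorker;
the endpoint URL and query are placeholders):

import org.apache.jena.query.QuerySolution;
import org.apache.jena.rdfconnection.RDFConnection;
import org.apache.jena.rdfconnection.RDFConnectionFactory;

public class FusekiClientSketch {
    public static void main(String[] args) {
        // Connect to the Fuseki SPARQL endpoint and run a SELECT inside a
        // read transaction; the "Service Unavailable" QueryExceptionHTTP is
        // an HTTP 503 surfacing at this point.
        try (RDFConnection conn = RDFConnectionFactory.connect("http://fuseki:3030/meta")) {
            conn.querySelect("SELECT * WHERE { ?s ?p ?o } LIMIT 10",
                    (QuerySolution row) -> System.out.println(row));
        }
    }
}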

Fuseki did not crash; just seemingly random queries failed to execute
because of this error. After switching back to 3.13.1 those errors are gone
and memory consumption is below 1GB of RAM instead of 6GB. No ad hoc queries
are executed against our Fuseki, just the predefined ones, which made me
think something might have changed in Fuseki between 3.13.1 and those recent
versions (we observed these issues on 4.1.0 and 4.3.1). We use Fuseki in
production and a lot of our API calls depend on it, so I will investigate
this anomaly further and let you know if I find something new.

Best,
Piotr


Wed, 5 Jan 2022 at 14:54 Andy Seaborne <an...@apache.org> wrote:

> Hi Piotr,
>
> For that in-memory setup, I don't know of any changes that might lead to
> increased memory use. Might it be related to the queries received?
>
> Service Unavailable -->
>
> That looks like a reverse proxy can't contact the Fuseki server.  What's
> the Fuseki server log say around that point?
>
> The JVM will grow to 6G - or whatever you set the -Xmx to - before a
> full GC cuts in.
>
>      Andy
>
> Unrelated inline:
>
> On 05/01/2022 11:49, Piotr Nowara wrote:
> > Hi Andy,
> >
> > We are running Fuseki inside a Kubernetes pod using our own Docker image on
> > Ubuntu 20.04 and eclipse-temurin:11-jre-focal.
> >
> > The old RAM limit set in Kubernetes was 6GB which was more than enough
> > until we upgraded to 4.3.1.
> >
> > Our Fuseki hosts four in-memory datasets. The biggest one has 1.6 million
> > triples (200 MB big when exporting to RDF/TTL file). The three others are
> > significantly smaller (less than 50k triples). Our datasets are used as
> > read-only data repositories. They are restored from S3-stored TTL backup
> > files when Fuseki restarts (see below for config TTL). Longest query
> takes
> > ca. 8 seconds, 90% of them complete in less than 20ms.
> >
> > Thanks,
> > Piotr
> >
> > Our config:
> > @prefix :      <http://base/#> .
> > @prefix tdb:   <http://jena.hpl.hp.com/2008/tdb#> .
> > @prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
> > @prefix ja:    <http://jena.hpl.hp.com/2005/11/Assembler#> .
> > @prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
> > @prefix fuseki: <http://jena.apache.org/fuseki#> .
> >
> ...
> no need for the declarations
> ...
>
> Simpler and modernized:
>
> :service1  a                          fuseki:Service ;
>          fuseki:dataset                :dataset ;
>          fuseki:name                   "meta" ;
>          fuseki:serviceQuery           "query" , "sparql" ;
>          fuseki:serviceReadGraphStore  "get" ;
>          fuseki:serviceReadWriteGraphStore
>                  "data" ;
>          fuseki:serviceUpdate          "update" ;
>          fuseki:serviceUpload          "upload" .
>
> :dataset rdf:type ja:MemoryDataset ;
>      ja:data <file:./seed/meta-cdq-latest.ttl.gz>
>      .
>
>
> and using
>
>      fuseki:endpoint [
>          fuseki:operation fuseki:query ;
>          fuseki:name "sparql"
>      ];
>      fuseki:endpoint [
>          fuseki:operation fuseki:query ;
>          fuseki:name "query"
>      ];
>
>
> rather than
>
>     fuseki:serviceQuery       "query" , "sparql" ;
>
> or
>
> :service1  a          fuseki:Service ;
>      fuseki:endpoint [ fuseki:operation fuseki:query ] ;
>      fuseki:endpoint [ fuseki:operation fuseki:update ] ;
>      ...
>
> if you use the dataset as the operation endpoint. (service* adds both
> for compatibility reasons).
>
>      Andy
>

Re: Fuseki 4.* increased RAM consumption

Posted by Andy Seaborne <an...@apache.org>.
Hi Piotr,

For that in-memory setup, I don't know of any changes that might lead to 
increased memory use. Might it be related to the queries received?

Service Unavailable -->

That looks like a reverse proxy can't contact the Fuseki server.  What's 
the Fuseki server log say around that point?

The JVM will grow to 6G - or whatever you set the -Xmx to - before a 
full GC cuts in.
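
One way to test that (a sketch only; the numbers are examples, not
recommendations, and this assumes the fuseki-server script reads JVM_ARGS)
is to cap the heap explicitly, so a full GC runs well before the pod memory
limit is reached, either with a fixed -Xmx or as a fraction of the container
limit (JDK 10+):

  # Fixed heap cap (example value).
  JVM_ARGS="-Xmx2G" ./fuseki-server --config=config.ttl

  # Or size the heap relative to the container memory limit.
  JVM_ARGS="-XX:MaxRAMPercentage=50.0" ./fuseki-server --config=config.ttl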

     Andy

Unrelated inline:

On 05/01/2022 11:49, Piotr Nowara wrote:
> Hi Andy,
> 
> We are running Fuseki inside a Kubernetes pod using our own Docker image on
> Ubuntu 20.04 and eclipse-temurin:11-jre-focal.
> 
> The old RAM limit set in Kubernetes was 6GB which was more than enough
> until we upgraded to 4.3.1.
> 
> Our Fuseki hosts four in-memory datasets. The biggest one has 1.6 million
> triples (200 MB big when exporting to RDF/TTL file). The three others are
> significantly smaller (less than 50k triples). Our datasets are used as
> read-only data repositories. They are restored from S3-stored TTL backup
> files when Fuseki restarts (see below for config TTL). Longest query takes
> ca. 8 seconds, 90% of them complete in less than 20ms.
> 
> Thanks,
> Piotr
> 
> Our config:
> @prefix :      <http://base/#> .
> @prefix tdb:   <http://jena.hpl.hp.com/2008/tdb#> .
> @prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
> @prefix ja:    <http://jena.hpl.hp.com/2005/11/Assembler#> .
> @prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
> @prefix fuseki: <http://jena.apache.org/fuseki#> .
> 
...
no need for the declarations
...

Simpler and modernized:

:service1  a                          fuseki:Service ;
         fuseki:dataset                :dataset ;
         fuseki:name                   "meta" ;
         fuseki:serviceQuery           "query" , "sparql" ;
         fuseki:serviceReadGraphStore  "get" ;
         fuseki:serviceReadWriteGraphStore
                 "data" ;
         fuseki:serviceUpdate          "update" ;
         fuseki:serviceUpload          "upload" .

:dataset rdf:type ja:MemoryDataset ;
     ja:data <file:./seed/meta-cdq-latest.ttl.gz>
     .


and using

     fuseki:endpoint [
         fuseki:operation fuseki:query ;
         fuseki:name "sparql"
     ];
     fuseki:endpoint [
         fuseki:operation fuseki:query ;
         fuseki:name "query"
     ];


rather than

    fuseki:serviceQuery       "query" , "sparql" ;

or

:service1  a          fuseki:Service ;
     fuseki:endpoint [ fuseki:operation fuseki:query ] ;
     fuseki:endpoint [ fuseki:operation fuseki:update ] ;
     ...

if you use the dataset as the operation endpoint. (service* adds both 
for compatibility reasons).
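
Put together (a sketch only, reusing the "meta" name and seed file from the
quoted config; check the operation names against the Fuseki documentation
for your version), the endpoint-style equivalent of the whole service would
be along the lines of:

:service1 a fuseki:Service ;
    fuseki:name     "meta" ;
    fuseki:dataset  :dataset ;
    fuseki:endpoint [ fuseki:operation fuseki:query  ; fuseki:name "sparql" ] ;
    fuseki:endpoint [ fuseki:operation fuseki:query  ; fuseki:name "query"  ] ;
    fuseki:endpoint [ fuseki:operation fuseki:gsp-r  ; fuseki:name "get"    ] ;
    fuseki:endpoint [ fuseki:operation fuseki:gsp-rw ; fuseki:name "data"   ] ;
    fuseki:endpoint [ fuseki:operation fuseki:update ; fuseki:name "update" ] ;
    fuseki:endpoint [ fuseki:operation fuseki:upload ; fuseki:name "upload" ] .

:dataset a ja:MemoryDataset ;
    ja:data <file:./seed/meta-cdq-latest.ttl.gz> .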

     Andy

Re: Fuseki 4.* increased RAM consumption

Posted by Piotr Nowara <pi...@gmail.com>.
Hi Andy,

We are running Fuseki inside a Kubernetes pod, using our own Docker image on
Ubuntu 20.04 and eclipse-temurin:11-jre-focal.

The old RAM limit set in Kubernetes was 6GB, which was more than enough
until we upgraded to 4.3.1.

Our Fuseki hosts four in-memory datasets. The biggest one has 1.6 million
triples (about 200 MB when exported to a Turtle file). The three others are
significantly smaller (less than 50k triples). Our datasets are used as
read-only data repositories. They are restored from S3-stored TTL backup
files when Fuseki restarts (see below for the config TTL). The longest query
takes ca. 8 seconds; 90% of them complete in less than 20 ms.

Thanks,
Piotr

Our config:
@prefix :      <http://base/#> .
@prefix tdb:   <http://jena.hpl.hp.com/2008/tdb#> .
@prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ja:    <http://jena.hpl.hp.com/2005/11/Assembler#> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
@prefix fuseki: <http://jena.apache.org/fuseki#> .

<http://jena.apache.org/2016/tdb#DatasetTDB>
        rdfs:subClassOf  ja:RDFDataset .

ja:DatasetTxnMem  rdfs:subClassOf  ja:RDFDataset .

tdb:DatasetTDB  rdfs:subClassOf  ja:RDFDataset .

:service1  a                          fuseki:Service ;
        fuseki:dataset                :dataset ;
        fuseki:name                   "meta" ;
        fuseki:serviceQuery           "query" , "sparql" ;
        fuseki:serviceReadGraphStore  "get" ;
        fuseki:serviceReadWriteGraphStore
                "data" ;
        fuseki:serviceUpdate          "update" ;
        fuseki:serviceUpload          "upload" .

tdb:GraphTDB  rdfs:subClassOf  ja:Model .

<http://jena.apache.org/2016/tdb#GraphTDB2>
        rdfs:subClassOf  ja:Model .

ja:MemoryDataset  rdfs:subClassOf  ja:RDFDataset .

ja:RDFDatasetZero  rdfs:subClassOf  ja:RDFDataset .

<http://jena.apache.org/text#TextDataset>
        rdfs:subClassOf  ja:RDFDataset .

<http://jena.apache.org/2016/tdb#GraphTDB>
        rdfs:subClassOf  ja:Model .

<http://jena.apache.org/spatial#SpatialDataset>
        rdfs:subClassOf  ja:RDFDataset .

ja:RDFDatasetOne  rdfs:subClassOf  ja:RDFDataset .

ja:RDFDatasetSink  rdfs:subClassOf  ja:RDFDataset .

:dataset  a     ja:RDFDataset ;
    ja:defaultGraph
      [ a ja:MemoryModel ;
        ja:content [ ja:externalContent <file:./seed/meta-cdq-latest.ttl.gz> ] ;
      ] .

<http://jena.apache.org/2016/tdb#DatasetTDB2>
        rdfs:subClassOf  ja:RDFDataset .





Wed, 5 Jan 2022 at 11:49 Andy Seaborne <an...@apache.org> wrote:

> Hi Piotr,
>
> Could you remind us what your setup is?
>
>      Andy
>
>
> On 05/01/2022 09:15, Piotr Nowara wrote:
> > Hi,
> >
> > we've recently upgraded Fuseki from 3.13.1 to 4.3.1 because of log4shell.
>
> 4.3.2 is available.
>
> > Our old Fuseki was super stable and reliable, but now after the upgrade
> we
> > are getting Service Unavailable during normal load.
> >
> > I realized RAM consumption increased dramatically from 1-1.5GB (3.13.1)
> to
> > more than 6GB now. We didn't change anything, just the version. After
> > downgrading to 3.13.1 RAM consumption is back to normal.
> >
> > Can anyone explain that? Or maybe someone has experienced a similar
> > degradation?
> >
> > Thanks.
> > Piotr
> >
>

Re: Fuseki 4.* increased RAM consumption

Posted by Andy Seaborne <an...@apache.org>.
Hi Piotr,

Could you remind us what your setup is?

     Andy


On 05/01/2022 09:15, Piotr Nowara wrote:
> Hi,
> 
> we've recently upgraded Fuseki from 3.13.1 to 4.3.1 because of log4shell.

4.3.2 is available.

> Our old Fuseki was super stable and reliable, but now after the upgrade we
> are getting Service Unavailable during normal load.
> 
> I realized RAM consumption increased dramatically from 1-1.5GB (3.13.1) to
> more than 6GB now. We didn't change anything, just the version. After
> downgrading to 3.13.1 RAM consumption is back to normal.
> 
> Can anyone explain that? Or maybe someone has experienced a similar
> degradation?
> 
> Thanks.
> Piotr
>