Posted to oak-dev@jackrabbit.apache.org by Tanvi Shah <ta...@springernature.com.INVALID> on 2020/11/03 05:44:50 UTC

Re: Error in S3 Garbage Collection

Hi,
I think this fetch size limit should also be configurable through the Jackrabbit API. I think the Jackrabbit team should take up the task of making it configurable.

I also need to understand whether there is another provision through which S3 garbage collection could be initiated for such a huge database.
________________________________________
From: Julian Reschke <ju...@gmx.de>
Sent: 23 October 2020 15:47
To: Tanvi Shah; oak-dev@jackrabbit.apache.org
Subject: Re: Error in S3 Garbage Collection


On 23.10.2020 at 11:51, Tanvi Shah wrote:
> How is it possible with Jackrabbit Oak API version 1.22.2?
> ...

You can "svn co
https://urldefense.proofpoint.com/v2/url?u=https-3A__svn.apache.org_repos_asf_jackrabbit_oak_tags_jackrabbit-2Doak-2D1.22.2_&d=DwIFaQ&c=vh6FgFnduejNhPPD0fl_yRaSfZy8CWbWnIf4XJhSqx8&r=efxn8UeXcSHRO_QY23J3UMMNiX9eCS4lSzRWxErP-mo&m=jv7jB5OSfg-HoPceELR3PyeSk4OwDMgBHXQwtR-J4wY&s=Ibzk4Oy9usiv-GnZN2IIjEyM_LWGiNV7Q9Zo_z_Lh98&e= ",
modify the code, then rebuild.

(that said, I'd update to the latest, that is 1.22.5 or the current
branch snapshot first).

Best regards, Julian


Re: Error in S3 Garbage Collection

Posted by Julian Reschke <ju...@gmx.de>.
On 03.11.2020 at 06:44, Tanvi Shah wrote:
> Hi,
> I think this fetch size limit should also be configurable through the Jackrabbit API. I think the Jackrabbit team should take up the task of making it configurable.

Optimally, things should work without configuration.

That said, this is open source. You have the source. You can very easily
modify the code to see whether setting the fetch limit actually helps in
your case.
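
To make the idea concrete, here is a generic JDBC sketch of what
"setting the fetch limit" amounts to, assuming the repository runs on an
RDB document store and the OOM comes from the driver buffering an entire
result set. The class, query and table name are made up for illustration
and are not Oak's actual code:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class FetchSizeSketch {
        // Hypothetical illustration: ask the driver to stream rows in
        // batches instead of materializing the whole result set in memory.
        static void scan(Connection con) throws Exception {
            // PostgreSQL, for example, only honors fetchSize inside a transaction.
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT ID, MODIFIED FROM NODES ORDER BY ID")) {
                ps.setFetchSize(1000); // fetch 1000 rows per round trip
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // process one row at a time; memory use stays bounded
                    }
                }
            }
        }
    }

Whether the equivalent change in the Oak sources avoids the OOM depends
on the driver and the query involved, which is exactly what such an
experiment would show.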

Alternatively (or additionally), it would be good to have a test case
that shows the problem and that can be used to verify that a change
actually helps.

> I also need to understand whether there is another provision through which S3 garbage collection could be initiated for such a huge database.

I don't think there's a way to get it running without first fixing the
OOM in the scan phase.
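
For reference, a sketch of the programmatic route into data store GC,
which is what "another provision" would have to look like. As far as I
know, the entry points (the JMX bean included) delegate to
MarkSweepGarbageCollector, so every route runs the same mark/scan and
hits the same OOM. The constructor arguments below are as I recall them
from the 1.22 sources; verify them against the tag you build:

    import java.util.concurrent.Executors;

    import org.apache.jackrabbit.oak.plugins.blob.MarkSweepGarbageCollector;
    import org.apache.jackrabbit.oak.plugins.document.DocumentBlobReferenceRetriever;
    import org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore;
    import org.apache.jackrabbit.oak.plugins.identifier.ClusterRepositoryInfo;
    import org.apache.jackrabbit.oak.spi.blob.GarbageCollectableBlobStore;

    public class DataStoreGcSketch {
        public static void collect(DocumentNodeStore nodeStore) throws Exception {
            // The blob store must be garbage collectable; an S3DataStore
            // wrapped in a DataStoreBlobStore qualifies.
            GarbageCollectableBlobStore blobStore =
                    (GarbageCollectableBlobStore) nodeStore.getBlobStore();

            MarkSweepGarbageCollector gc = new MarkSweepGarbageCollector(
                    new DocumentBlobReferenceRetriever(nodeStore), // mark: scan blob references
                    blobStore,
                    Executors.newSingleThreadExecutor(),
                    "/tmp/blobgc",          // working directory for the collected id files
                    1024,                   // batch count for id lookups
                    24L * 60 * 60 * 1000,   // skip blobs modified within the last 24 hours
                    ClusterRepositoryInfo.getId(nodeStore));

            // true = mark only (dry run); false = mark and sweep
            gc.collectGarbage(true);
        }
    }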

Best regards, Julian