Posted to oak-dev@jackrabbit.apache.org by Geoffroy Schneck <gs...@adobe.com> on 2015/10/09 16:01:18 UTC

[Oak] Lucene copyonread OOM

Hello Oak Experts,

On Oak 1.2.4, OOMs are thrown quite regularly by the copyonread feature; see below.

However, the system it runs on has 32 GB of RAM in total, and the JVM -Xmx setting is 12 GB. The JVM memory settings are the following:

-Xms12288m -Xmx12288m -XX:MaxMetaspaceSize=512m -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=96m

We have to assume the repository is huge (its exact size is unknown to me at the moment).


-        Where does the Lucene copyonread feature take its memory from? Off-heap memory, or the JVM's allocated memory?

-        Are there additional memory settings to increase for this specific feature? Or does one of the above seem insufficient?
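
For illustration, the copy-on-read mappings can be observed from the OS side (a sketch; the PID 12345 and the grep pattern are placeholders):

    # memory mappings of the JVM process -- mmap'ed Lucene index files live outside the Java heap
    pmap 12345 | grep repository/index
    # last line shows the total virtual size of the process
    pmap 12345 | tail -1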

Thanks,

09.10.2015 09:52:42.439 *ERROR* [pool-5-thread-28] org.apache.jackrabbit.oak.plugins.index.lucene.IndexTracker Failed to open Lucene index at /oak:index/lucene
java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:907)
at org.apache.lucene.store.MMapDirectory.map(MMapDirectory.java:283)
at org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:228)
at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:195)
at org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier$CopyOnReadDirectory$FileReference.openLocalInput(IndexCopier.java:382)
at org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier$CopyOnReadDirectory.openInput(IndexCopier.java:227)
at org.apache.lucene.codecs.lucene40.Lucene40StoredFieldsReader.<init>(Lucene40StoredFieldsReader.java:82)
at org.apache.lucene.codecs.lucene40.Lucene40StoredFieldsFormat.fieldsReader(Lucene40StoredFieldsFormat.java:91)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:129)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:96)
at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:62)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:843)
at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:66)
at org.apache.jackrabbit.oak.plugins.index.lucene.IndexNode.<init>(IndexNode.java:94)
at org.apache.jackrabbit.oak.plugins.index.lucene.IndexNode.open(IndexNode.java:62)
at org.apache.jackrabbit.oak.plugins.index.lucene.IndexTracker$1.leave(IndexTracker.java:98)
at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:153)
at org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:487)
at org.apache.jackrabbit.oak.plugins.segment.MapRecord.compareBranch(MapRecord.java:565)
at org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:470)
at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:583)
at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
at org.apache.jackrabbit.oak.plugins.segment.MapRecord$3.childNodeChanged(MapRecord.java:444)
at org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:487)
at org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:436)
at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:583)
at org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:52)
at org.apache.jackrabbit.oak.plugins.index.lucene.IndexTracker.update(IndexTracker.java:108)
at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexProvider.contentChanged(LuceneIndexProvider.java:69)
at org.apache.jackrabbit.oak.spi.commit.BackgroundObserver$1$1.call(BackgroundObserver.java:125)
at org.apache.jackrabbit.oak.spi.commit.BackgroundObserver$1$1.call(BackgroundObserver.java:119)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:904)
... 35 common frames omitted
09.10.2015 09:52:42.439 *WARN* [pool-5-thread-70] org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier Error occurred while copying file [segments_nw] from OakDirectory@5aaa07ea lockFactory=org.apache.lucene.store.NoLockFactory@b1401e5 to MMapDirectory@/srv/jas/data/mapcms/cma0/CQ/prod0/repository/index/e5a943cdec3000bd8ce54924fd2070ab5d1d35b9ecf530963a3583d43bf28293/1 lockFactory=NativeFSLockFactory@/srv/jas/data/mapcms/cma0/CQ/prod0/repository/index/e5a943cdec3000bd8ce54924fd2070ab5d1d35b9ecf530963a3583d43bf28293/1
java.io.FileNotFoundException: segments_nw
at org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory.openInput(OakDirectory.java:115)
at org.apache.lucene.store.Directory.copy(Directory.java:185)
at org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier$CopyOnReadDirectory$1.run(IndexCopier.java:249)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)



Geoffroy Schneck
Program manager - Team Lead
Marketing Cloud Customer Care

T: +41 61 226 55 70
M: +41 79 207 45 04
email: gschneck@adobe.com

Barfuesserplatz 6
CH-4001 Basel, Switzerland
www.adobe.com

For CQ support and tips, follow us on Twitter: @AdobeMktgCare<https://twitter.com/AdobeMktgCare>



Re: [Oak] Lucene copyonread OOM

Posted by Stephan Becker <st...@netcentric.biz>.
Hi Thomas, Thierry,

My bad, ulimit -v is what I meant. Set it in limits.conf via the "as" item; a minimal sketch follows below.

See https://plumbr.eu/outofmemoryerror/unable-to-create-new-native-thread
for a different OOM that is probably related.
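
A minimal sketch of the limits.conf entries, assuming the JVM runs as user "cq" (the user name is a placeholder):

    # /etc/security/limits.conf -- "as" is the address-space item behind `ulimit -v`
    cq  soft  as  unlimited
    cq  hard  as  unlimited
    # re-login (or restart the service) so the new limit applies, then verify:
    #   ulimit -v   ->  unlimited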

On Fri, Oct 9, 2015 at 4:25 PM, Thomas Mueller <mu...@adobe.com> wrote:

> Hi,
>
> Is this a 32-bit or 64-bit JVM?
>
> Could you try
>
>     ulimit -v unlimited
>
> See
> http://stackoverflow.com/questions/8892143/error-when-opening-a-lucene-index-map-failed
> and possibly
> http://stackoverflow.com/questions/11683850/how-much-memory-could-vm-use-in-linux
>
> Regards,
> Thomas



-- 
With kind regards

Mit freundlichen Grüßen

*Stephan Becker* | Senior System Engineer
Netcentric Deutschland GmbH
M D: +49 (0) 175 2238120
Skype: stephanhs.b

stephan.becker@netcentric.biz | www.netcentric.biz
Other disclosures according to §35a GmbHG, §161, 125a HGB:
www.netcentric.biz/imprint.html

Re: [Oak] Lucene copyonread OOM

Posted by Thomas Mueller <mu...@adobe.com>.
Hi,

Is this a 32-bit or 64-bit JVM?

Could you try

    ulimit -v unlimited

See http://stackoverflow.com/questions/8892143/error-when-opening-a-lucene-index-map-failed
and possibly http://stackoverflow.com/questions/11683850/how-much-memory-could-vm-use-in-linux
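
A quick check of both points (a sketch; the outputs are indicative):

    # 64-bit JVMs print "64-Bit Server VM" in the version banner
    java -version
    # per-process virtual address-space limit in KB; a low value here triggers "Map failed"
    ulimit -v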

Regards,
Thomas




Re: [Oak] Lucene copyonread OOM

Posted by Stephan Becker <st...@netcentric.biz>.
Hi Geoffroy,

What OS is used (I assume SLES)? What OOM is thrown exactly?

The JVM settings IMHO seem sufficient, but the OS limits may not be. Lucene
now uses memory-mapped virtual memory for some operations.

Can you check the ulimits

ulimit -n

If it is not set to unlimited, try setting it to unlimited in
/etc/security/limits.conf, where it is the "as" parameter; that should
resolve the issue. You will see the virtual memory usage increase
significantly when checking with top, but this is acceptable.
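
As a sketch (the JVM PID 12345 is a placeholder):

    # limits of the user that starts the JVM
    ulimit -n        # open files
    ulimit -v        # virtual address space, in KB -- the limit behind "Map failed"
    # after raising it, expect the VIRT column in top to grow as index files are mapped
    top -p 12345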




-- 
With kind regards

Mit freundlichen Grüßen

*Stephan Becker* | Senior System Engineer
Netcentric Deutschland GmbH
M D: +49 (0) 175 2238120
Skype: stephanhs.b

stephan.becker@netcentric.biz | www.netcentric.biz
Other disclosures according to §35a GmbHG, §161, 125a HGB:
www.netcentric.biz/imprint.html