Posted to solr-user@lucene.apache.org by Rich Cariens <ri...@gmail.com> on 2011/09/07 18:25:37 UTC

MMapDirectory failed to map a 23G compound index segment

Ahoy ahoy!

I've run into the dreaded OOM error with MMapDirectory on a 23G cfs compound
index segment file. The stack trace looks pretty much like every other trace
I've found when searching for OOM & "map failed"[1]. My configuration
follows:

Solr 1.4.1/Lucene 2.9.3 (plus
SOLR-1969<https://issues.apache.org/jira/browse/SOLR-1969>
)
CentOS 4.9 (Final)
Linux 2.6.9-100.ELsmp x86_64 yada yada yada
Java SE (build 1.6.0_21-b06)
Hotspot 64-bit Server VM (build 17.0-b16, mixed mode)
ulimits:
    core file size     (blocks, -c)     0
    data seg size    (kbytes, -d)     unlimited
    file size     (blocks, -f)     unlimited
    pending signals    (-i)     1024
    max locked memory     (kbytes, -l)     32
    max memory size     (kbytes, -m)     unlimited
    open files    (-n)     256000
    pipe size     (512 bytes, -p)     8
    POSIX message queues     (bytes, -q)     819200
    stack size    (kbytes, -s)     10240
    cpu time    (seconds, -t)     unlimited
    max user processes     (-u)     1064959
    virtual memory    (kbytes, -v)     unlimited
    file locks    (-x)     unlimited

Any suggestions?

Thanks in advance,
Rich

[1]
...
java.io.IOException: Map failed
 at sun.nio.ch.FileChannelImpl.map(Unknown Source)
 at org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(Unknown
Source)
 at org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(Unknown
Source)
 at org.apache.lucene.store.MMapDirectory.openInput(Unknown Source)
 at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(Unknown Source)

 at org.apache.lucene.index.SegmentReader.get(Unknown Source)
 at org.apache.lucene.index.SegmentReader.get(Unknown Source)
 at org.apache.lucene.index.DirectoryReader.<init>(Unknown Source)
 at org.apache.lucene.index.ReadOnlyDirectoryReader.<init>(Unknown Source)
 at org.apache.lucene.index.DirectoryReader$1.doBody(Unknown Source)
 at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(Unknown
Source)
 at org.apache.lucene.index.DirectoryReader.open(Unknown Source)
 at org.apache.lucene.index.IndexReader.open(Unknown Source)
...
Caused by: java.lang.OutOfMemoryError: Map failed
 at sun.nio.ch.FileChannelImpl.map0(Native Method)
...

Re: MMapDirectory failed to map a 23G compound index segment

Posted by Rich Cariens <ri...@gmail.com>.
My colleague and I thought the same thing - that this is an O/S
configuration issue.

/proc/sys/vm/max_map_count = 65536

I honestly don't know how many segments were in the index. Our merge factor
is 10 and there were around 4.4 million docs indexed. The OOME was raised
when the MMapDirectory was opened, so I don't think we were reopening the
reader several times. Our MMapDirectory is set to use the "unmapHack".

We've since switched back to non-compound index files and are having no
trouble at all.


Re: MMapDirectory failed to map a 23G compound index segment

Posted by Robert Muir <rc...@gmail.com>.
On Tue, Sep 20, 2011 at 12:32 PM, Michael McCandless
<lu...@mikemccandless.com> wrote:
>
> Or: is it possible you reopened the reader several times against the
> index (ie, after committing from Solr)?  If so, I think 2.9.x never
> unmaps the mapped areas, and so this would "accumulate" against the
> system limit.

In order to unmap in Lucene 2.9.x, you must explicitly turn unmapping on
with setUseUnmapHack(true).

-- 
lucidimagination.com

Re: FW: MMapDirectory failed to map a 23G compound index segment

Posted by Yongtao Liu <li...@gmail.com>.
I hit a similar issue recently, and I'm not sure MMapDirectory is the right
way to go.

When an index file is mapped into RAM, the JVM calls the OS file-mapping
function. The memory shows up as shared memory, so it may not be counted
against the JVM process space.

One problem I saw: if the index file is bigger than physical RAM and a heavy
query load touches the index widely, the machine ends up with no available
memory and becomes very slow.

What I did was change the Lucene code to disable MMapDirectory.
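
A rough shell analogy (an illustration, not something from the thread) for the
demand paging behind this: a freshly truncated sparse file has the full apparent
size but occupies almost no actual blocks, just as a fresh mmap has the full
length but few resident pages until queries touch them.

```shell
# Sparse 256 MB file: full apparent size, near-zero allocated blocks.
# Data materializes only where it is written -- like pages under mmap,
# which become resident only where they are read.
f=$(mktemp)
truncate -s 256M "$f"
stat -c 'apparent bytes: %s, allocated blocks: %b' "$f"
rm "$f"
```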


Re: MMapDirectory failed to map a 23G compound index segment

Posted by Michael McCandless <lu...@mikemccandless.com>.
Since you hit OOME during mmap, I think this is an OS issue not a JVM
issue.  Ie, the JVM isn't running out of memory.

How many segments were in the unoptimized index?  It's possible the OS
rejected the mmap because of process limits.  Run "cat
/proc/sys/vm/max_map_count" to see how many mmaps are allowed.

Or: is it possible you reopened the reader several times against the
index (ie, after committing from Solr)?  If so, I think 2.9.x never
unmaps the mapped areas, and so this would "accumulate" against the
system limit.
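
Both checks can be run from a shell. A small sketch (the 262144 value below is
an illustrative higher cap, not a recommendation from this thread; $$, the
current shell, stands in for the Solr JVM's pid, which you'd find via ps):

```shell
# Per-process mapping limit, and how many mappings a process currently holds.
cat /proc/sys/vm/max_map_count   # cap on mmap regions per process (65536 here)
wc -l < /proc/$$/maps            # regions currently mapped by this process
# If un-unmapped regions accumulate toward the cap, root can raise it:
# sysctl -w vm.max_map_count=262144
```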

> My memory of this is a little rusty but isn't mmap also limited by mem + swap on the box? What does 'free -g' report?

I don't think this should be the case; you are using a 64 bit OS/JVM
so in theory (except for OS system wide / per-process limits imposed)
you should be able to mmap up to the full 64 bit address space.

Your virtual memory is unlimited (from "ulimit" output), so that's good.

Mike McCandless

http://blog.mikemccandless.com


Re: MMapDirectory failed to map a 23G compound index segment

Posted by Rich Cariens <ri...@gmail.com>.
Thanks. It's definitely repeatable and I may spend some time plumbing this
further. I'll let the list know if I find anything.

The problem went away once I optimized the index down to a single segment
using a simple IndexWriter driver. This was a bit strange since the
resulting index contained similarly large (> 23G) files. The JVM didn't seem
to have any trouble MMap'ing those.

No, I don't need (or necessarily want) to use compound index file formats.
That was actually a goof on my part which I've since corrected :).


Re: MMapDirectory failed to map a 23G compound index segment

Posted by Lance Norskog <go...@gmail.com>.
I remember now: by memory-mapping one block of address space that big, the
garbage collector has problems working around it. If the OOM is repeatable,
you could try watching the app with jconsole and watch the memory spaces.

Lance



-- 
Lance Norskog
goksron@gmail.com

Re: MMapDirectory failed to map a 23G compound index segment

Posted by Lance Norskog <go...@gmail.com>.
Do you need to use the compound format?

On Thu, Sep 8, 2011 at 3:57 PM, Rich Cariens <ri...@gmail.com> wrote:

> I should add some more context:
>
>   1. the problem index included several cfs segment files that were around
>   4.7G, and
>   2. I'm running four SOLR instances on the same box, all of which have
>   similiar problem indeces.
>
> A colleague thought perhaps I was bumping up against my 256,000 open files
> ulimit. Do the MultiMMapIndexInput ByteBuffer arrays each consume a file
> handle/descriptor?
>
> On Thu, Sep 8, 2011 at 5:19 PM, Rich Cariens <ri...@gmail.com>
> wrote:
>
> > FWiW I optimized the index down to a single segment and now I have no
> > trouble opening an MMapDirectory on that index, even though the 23G cfx
> > segment file remains.
> >
> >
> > On Thu, Sep 8, 2011 at 4:27 PM, Rich Cariens <richcariens@gmail.com
> >wrote:
> >
> >> Thanks for the response. "free -g" reports:
> >>
> >>         total        used        free        shared        buffers
> >> cached
> >> Mem:      141          95          46             0
> >> 0            93
> >> -/+ buffers/cache:      2         139
> >> Swap:       3           0           3
> >>
> >> 2011/9/7 François Schiettecatte <fs...@gmail.com>
> >>
> >>> My memory of this is a little rusty but isn't mmap also limited by mem
> +
> >>> swap on the box? What does 'free -g' report?
> >>>
> >>> François
> >>>
> >>> On Sep 7, 2011, at 12:25 PM, Rich Cariens wrote:
> >>>
> >>> > Ahoy ahoy!
> >>> >
> >>> > I've run into the dreaded OOM error with MMapDirectory on a 23G cfs
> >>> compound
> >>> > index segment file. The stack trace looks pretty much like every
> other
> >>> trace
> >>> > I've found when searching for OOM & "map failed"[1]. My configuration
> >>> > follows:
> >>> >
> >>> > Solr 1.4.1/Lucene 2.9.3 (plus
> >>> > SOLR-1969<https://issues.apache.org/jira/browse/SOLR-1969>
> >>> > )
> >>> > CentOS 4.9 (Final)
> >>> > Linux 2.6.9-100.ELsmp x86_64 yada yada yada
> >>> > Java SE (build 1.6.0_21-b06)
> >>> > Hotspot 64-bit Server VM (build 17.0-b16, mixed mode)
> >>> > ulimits:
> >>> >    core file size     (blocks, -c)     0
> >>> >    data seg size    (kbytes, -d)     unlimited
> >>> >    file size     (blocks, -f)     unlimited
> >>> >    pending signals    (-i)     1024
> >>> >    max locked memory     (kbytes, -l)     32
> >>> >    max memory size     (kbytes, -m)     unlimited
> >>> >    open files    (-n)     256000
> >>> >    pipe size     (512 bytes, -p)     8
> >>> >    POSIX message queues     (bytes, -q)     819200
> >>> >    stack size    (kbytes, -s)     10240
> >>> >    cpu time    (seconds, -t)     unlimited
> >>> >    max user processes     (-u)     1064959
> >>> >    virtual memory    (kbytes, -v)     unlimited
> >>> >    file locks    (-x)     unlimited
> >>> >
> >>> > Any suggestions?
> >>> >
> >>> > Thanks in advance,
> >>> > Rich
> >>> >
> >>> > [1]
> >>> > ...
> >>> > java.io.IOException: Map failed
> >>> > at sun.nio.ch.FileChannelImpl.map(Unknown Source)
> >>> > at
> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(Unknown
> >>> > Source)
> >>> > at
> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(Unknown
> >>> > Source)
> >>> > at org.apache.lucene.store.MMapDirectory.openInput(Unknown Source)
> >>> > at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(Unknown
> >>> Source)
> >>> >
> >>> > at org.apache.lucene.index.SegmentReader.get(Unknown Source)
> >>> > at org.apache.lucene.index.SegmentReader.get(Unknown Source)
> >>> > at org.apache.lucene.index.DirectoryReader.<init>(Unknown Source)
> >>> > at org.apache.lucene.index.ReadOnlyDirectoryReader.<init>(Unknown
> >>> Source)
> >>> > at org.apache.lucene.index.DirectoryReader$1.doBody(Unknown Source)
> >>> > at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(Unknown
> >>> > Source)
> >>> > at org.apache.lucene.index.DirectoryReader.open(Unknown Source)
> >>> > at org.apache.lucene.index.IndexReader.open(Unknown Source)
> >>> > ...
> >>> > Caused by: java.lang.OutOfMemoryError: Map failed
> >>> > at sun.nio.ch.FileChannelImpl.map0(Native Method)
> >>> > ...
> >>>
> >>>
> >>
> >
>



-- 
Lance Norskog
goksron@gmail.com

Re: MMapDirectory failed to map a 23G compound index segment

Posted by Rich Cariens <ri...@gmail.com>.
I should add some more context:

   1. the problem index included several cfs segment files that were around
   4.7G, and
   2. I'm running four SOLR instances on the same box, all of which have
   similar problem indices.

A colleague thought perhaps I was bumping up against my 256,000 open files
ulimit. Do the MultiMMapIndexInput ByteBuffer arrays each consume a file
handle/descriptor?
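
One way to test that theory from the command line (a sketch using /proc, not
something from the thread): a mapped region survives close() of its backing
descriptor, so the ByteBuffer chunks most likely count against
vm.max_map_count rather than the open-files ulimit. The two counts can be
compared for any process ($$, this shell, stands in for a Solr JVM):

```shell
# Descriptors and mappings are separate, independently limited resources.
ls /proc/$$/fd | wc -l       # open file descriptors (bounded by ulimit -n)
grep -c '' /proc/$$/maps     # mapped regions (bounded by vm.max_map_count)
```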

On Thu, Sep 8, 2011 at 5:19 PM, Rich Cariens <ri...@gmail.com> wrote:

> FWiW I optimized the index down to a single segment and now I have no
> trouble opening an MMapDirectory on that index, even though the 23G cfx
> segment file remains.
>
>
> On Thu, Sep 8, 2011 at 4:27 PM, Rich Cariens <ri...@gmail.com>wrote:
>
>> Thanks for the response. "free -g" reports:
>>
>>                    total   used   free   shared   buffers   cached
>> Mem:                 141     95     46        0         0       93
>> -/+ buffers/cache:            2    139
>> Swap:                  3      0      3
>>
>> 2011/9/7 François Schiettecatte <fs...@gmail.com>
>>
>>> My memory of this is a little rusty but isn't mmap also limited by mem +
>>> swap on the box? What does 'free -g' report?
>>>
>>> François
>>>
>

Re: MMapDirectory failed to map a 23G compound index segment

Posted by Rich Cariens <ri...@gmail.com>.
FWIW, I optimized the index down to a single segment and now I have no
trouble opening an MMapDirectory on that index, even though the 23G cfx
segment file remains.

On Thu, Sep 8, 2011 at 4:27 PM, Rich Cariens <ri...@gmail.com> wrote:

> Thanks for the response. "free -g" reports:
>
>                    total       used       free     shared    buffers     cached
> Mem:                 141         95         46          0          0         93
> -/+ buffers/cache:     2        139
> Swap:                  3          0          3
>
> 2011/9/7 François Schiettecatte <fs...@gmail.com>
>
>> My memory of this is a little rusty but isn't mmap also limited by mem +
>> swap on the box? What does 'free -g' report?
>>
>> François
>>
>>
>

Re: MMapDirectory failed to map a 23G compound index segment

Posted by Rich Cariens <ri...@gmail.com>.
Thanks for the response. "free -g" reports:

                   total       used       free     shared    buffers     cached
Mem:                 141         95         46          0          0         93
-/+ buffers/cache:     2        139
Swap:                  3          0          3

2011/9/7 François Schiettecatte <fs...@gmail.com>

> My memory of this is a little rusty but isn't mmap also limited by mem +
> swap on the box? What does 'free -g' report?
>
> François
>

Re: MMapDirectory failed to map a 23G compound index segment

Posted by François Schiettecatte <fs...@gmail.com>.
My memory of this is a little rusty but isn't mmap also limited by mem + swap on the box? What does 'free -g' report?
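A couple of quick checks on the box might narrow it down. This is a sketch:
the /proc paths and sysctl names are the usual stock-Linux ones, which I'm
assuming (but haven't confirmed) are present on a CentOS 4 / 2.6.9 kernel:

```shell
# Overcommit policy: 0 = heuristic (default), 1 = always allow,
# 2 = strict accounting, where mem + swap really is a hard ceiling
cat /proc/sys/vm/overcommit_memory

# Per-process limit on distinct memory mappings; every mmap'ed chunk of
# every open index file counts against this (default is 65530)
cat /proc/sys/vm/max_map_count

# And the mem + swap totals themselves, in gigabytes (if free is installed)
command -v free >/dev/null && free -g || true
```

With four Solr instances each mapping multi-gigabyte segments, the mapping
count can climb surprisingly fast even when plenty of RAM is free.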

François
