Posted to solr-user@lucene.apache.org by Gopal Patwa <go...@gmail.com> on 2012/03/31 07:26:53 UTC

Large Index and OutOfMemoryError: Map failed

I need help!!

I am using a Solr 4.0 nightly build with NRT, and I often get this error
during auto commit: "java.lang.OutOfMemoryError: Map failed". I have
searched this forum, and what I found is that it is related to the OS
ulimit settings; please see my ulimit settings below. I am not sure what
ulimit settings I should have. We also get "java.net.SocketException:
Too many open files" and are not sure how many open files we need to
allow.


I have 3 cores with index sizes: core1 - 70GB, core2 - 50GB and core3 -
15GB, with a single shard.


We update the index every 5 seconds, soft commit every 1 second, and
hard commit every 15 minutes.


Environment: JBoss 4.2, JDK 1.6, CentOS, JVM heap size = 24GB


ulimit:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 401408
max locked memory       (kbytes, -l) 1024
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 401408
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
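
For reference, a quick way to see how close the running JBoss/Solr process
actually is to these limits (a rough sketch assuming a Linux host; <pid> is a
placeholder for the JBoss/Solr process id):

    # limits the live process really runs with (available on newer kernels)
    cat /proc/<pid>/limits

    # file descriptors currently open, to compare against the 1024 "open files" limit
    ls /proc/<pid>/fd | wc -l

    # memory-mapped regions currently held, which is what MMapDirectory consumes
    cat /proc/<pid>/maps | wc -l

If the descriptor count is anywhere near 1024, "Too many open files" is
expected; the usual remedy is a larger nofile entry for the service user in
/etc/security/limits.conf (for example "jboss - nofile 65536", where jboss
stands for whatever user runs the JVM), applied before the JVM is started.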



ERROR:


2012-03-29 15:14:08,560 [] priority=ERROR app_name= thread=pool-3-thread-1 location=CommitTracker line=93 auto commit error...:java.io.IOException: Map failed
	at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
	at org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
	at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
	at org.apache.lucene.codecs.lucene40.Lucene40PostingsReader.<init>(Lucene40PostingsReader.java:58)
	at org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsProducer(Lucene40PostingsFormat.java:80)
	at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.visitOneFormat(PerFieldPostingsFormat.java:189)
	at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:280)
	at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.<init>(PerFieldPostingsFormat.java:186)
	at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:186)
	at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:256)
	at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:108)
	at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:51)
	at org.apache.lucene.index.IndexWriter$ReadersAndLiveDocs.getReader(IndexWriter.java:494)
	at org.apache.lucene.index.BufferedDeletesStream.applyDeletes(BufferedDeletesStream.java:214)
	at org.apache.lucene.index.IndexWriter.applyAllDeletes(IndexWriter.java:2939)
	at org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:2930)
	at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:2681)
	at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2804)
	at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2786)
	at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:391)
	at org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.OutOfMemoryError: Map failed
	at sun.nio.ch.FileChannelImpl.map0(Native Method)
	at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)
	... 28 more


SolrConfig.xml:


	<indexDefaults>
		<useCompoundFile>false</useCompoundFile>
		<mergeFactor>10</mergeFactor>
		<maxMergeDocs>2147483647</maxMergeDocs>
		<maxFieldLength>10000</maxFieldLength>
		<ramBufferSizeMB>4096</ramBufferSizeMB>
		<maxThreadStates>10</maxThreadStates>
		<writeLockTimeout>1000</writeLockTimeout>
		<commitLockTimeout>10000</commitLockTimeout>
		<lockType>single</lockType>
		
	    <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
	      <double name="forceMergeDeletesPctAllowed">0.0</double>
	      <double name="reclaimDeletesWeight">10.0</double>
	    </mergePolicy>

	    <deletionPolicy class="solr.SolrDeletionPolicy">
	      <str name="keepOptimizedOnly">false</str>
	      <str name="maxCommitsToKeep">0</str>
	    </deletionPolicy>
		
	</indexDefaults>


	<updateHandler class="solr.DirectUpdateHandler2">
	    <maxPendingDeletes>1000</maxPendingDeletes>
	     <autoCommit>
	       <maxTime>900000</maxTime>
	       <openSearcher>false</openSearcher>
	     </autoCommit>
	     <autoSoftCommit>
	       <maxTime>${inventory.solr.softcommit.duration:1000}</maxTime>
	     </autoSoftCommit>
	
	</updateHandler>



Thanks
Gopal Patwa

Re: Large Index and OutOfMemoryError: Map failed

Posted by Mark Miller <ma...@gmail.com>.
On Apr 12, 2012, at 6:07 AM, Michael McCandless wrote:

> Your largest index has 66 segments (690 files) ... biggish but not
> insane.  With 64K maps you should be able to have ~47 searchers open
> on each core.
> 
> Enabling compound file format (not the opposite!) will mean fewer maps
> ... ie should improve this situation.
> 
> I don't understand why Solr defaults to compound file off... that
> seems dangerous.
> 
> Really we need a Solr dev here... to answer "how long is a stale
> searcher kept open".  Is it somehow possible 46 old searchers are
> being left open...?

Probably only if there is a bug. When a new Searcher is opened, any previous Searcher is closed as soon as there are no more references to it (e.g. all in-flight requests to that Searcher finish).

> 
> I don't see any other reason why you'd run out of maps.  Hmm, unless
> MMapDirectory didn't think it could safely invoke unmap in your JVM.
> Which exact JVM are you using?  If you can print the
> MMapDirectory.UNMAP_SUPPORTED constant, we'd know for sure.
> 
> Yes, switching away from MMapDir will sidestep the "too many maps"
> issue, however, 1) MMapDir has better perf than NIOFSDir, and 2) if
> there really is a leak here (Solr not closing the old searchers or a
> Lucene bug or something...) then you'll eventually run out of file
> descriptors (ie, same  problem, different manifestation).
> 
> Mike McCandless
> 
> http://blog.mikemccandless.com
> 
> 2012/4/11 Gopal Patwa <go...@gmail.com>:
>> 
>> I have not changed the mergeFactor; it is 10. The compound index file format is
>> disabled in my config, but I read in the post below that someone had a similar
>> issue and it was resolved by switching from the compound index file format to
>> non-compound index files.
>> 
>> Some folks also resolved it by "changing lucene code to disable MMapDirectory."
>> Is this a best practice, and if so, can it be done in configuration?
>> 
>> http://lucene.472066.n3.nabble.com/MMapDirectory-failed-to-map-a-23G-compound-index-segment-td3317208.html
>> 
>> I have index document counts of core1 = 5 million, core2 = 8 million and
>> core3 = 3 million, and all indexes are hosted in a single Solr instance.
>> 
>> I am going to use Solr for our site StubHub.com; see the attached "ls -l"
>> listing of index files for all cores.
>> 
>> 
>> Forwarded conversation
>> Subject: Large Index and OutOfMemoryError: Map failed
>> ------------------------
>> 
>> From: Gopal Patwa <go...@gmail.com>
>> Date: Fri, Mar 30, 2012 at 10:26 PM
>> To: solr-user@lucene.apache.org
>> 
>> ----------
>> From: Michael McCandless <lu...@mikemccandless.com>
>> Date: Sat, Mar 31, 2012 at 3:15 AM
>> To: solr-user@lucene.apache.org
>> 
>> 
>> It's the virtual memory limit that matters; yours says unlimited
>> (good!), but are you certain that's really the limit your Solr
>> process runs with?
>> 
>> On Linux, there is also a per-process map count:
>> 
>>    cat /proc/sys/vm/max_map_count
>> 
>> I think it typically defaults to 65,536 but you should check on your
>> env.  If a process tries to map more than this many regions, you'll
>> hit that exception.
>> 
>> I think you can:
>> 
>>  cat /proc/<pid>/maps | wc
>> 
>> to see how many maps your Solr process currently has... if that is
>> anywhere near the limit then it could be the cause.
>> 
>> Mike McCandless
>> 
>> http://blog.mikemccandless.com
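
A minimal sketch of acting on the advice above on Linux, assuming root access;
262144 is purely an illustrative value and <pid> is a placeholder for the
Solr/JBoss process id:

    # current limit and the process's current usage
    cat /proc/sys/vm/max_map_count
    cat /proc/<pid>/maps | wc -l

    # raise the limit until the next reboot
    sysctl -w vm.max_map_count=262144

    # make the change persistent across reboots
    echo 'vm.max_map_count=262144' >> /etc/sysctl.conf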
>> 
>> 
>> ----------
>> From: Gopal Patwa <go...@gmail.com>
>> Date: Tue, Apr 10, 2012 at 8:35 PM
>> To: solr-user@lucene.apache.org
>> 
>> 
>> Michael, thanks for the response.
>> 
>> It was 65K, the default value you mentioned for "cat
>> /proc/sys/vm/max_map_count". How do we determine what this value should be?
>> Is it the number of documents per hard commit (in my case every 15 minutes),
>> or the number of index files, or the number of documents we have in all cores?
>> 
>> I have raised the number to 140K, but the error still occurs when it reaches
>> 140K; we have to restart the JBoss server to free up the map count. Sometimes
>> the OOM error happens during "Error opening new searcher".
>> 
>> Is making this number unlimited the only solution?
>> 
>> 
>> Error log:
>> 
>> location=CommitTracker line=93 auto commit
>> error...:org.apache.solr.common.SolrException: Error opening new searcher
>> 	at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1138)
>> 	at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1251)
>> 	at
>> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:409)
>> 	at org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
>> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>> 	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>> 	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>> 	at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
>> 	at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
>> 	at
>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> 	at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> 	at java.lang.Thread.run(Thread.java:662)
>> Caused by: java.io.IOException: Map failed
>> 	at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
>> 	at
>> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
>> 	at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
>> 	at
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:262)
>> 	at
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$1.<init>(PerFieldPostingsFormat.java:316)
>> 	at
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.files(PerFieldPostingsFormat.java:316)
>> 	at org.apache.lucene.codecs.Codec.files(Codec.java:56)
>> 	at org.apache.lucene.index.SegmentInfo.files(SegmentInfo.java:423)
>> 	at org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:215)
>> 	at
>> org.apache.lucene.index.IndexWriter.prepareFlushedSegment(IndexWriter.java:2220)
>> 	at
>> org.apache.lucene.index.DocumentsWriter.publishFlushedSegment(DocumentsWriter.java:497)
>> 	at
>> org.apache.lucene.index.DocumentsWriter.finishFlush(DocumentsWriter.java:477)
>> 	at
>> org.apache.lucene.index.DocumentsWriterFlushQueue$SegmentFlushTicket.publish(DocumentsWriterFlushQueue.java:201)
>> 	at
>> org.apache.lucene.index.DocumentsWriterFlushQueue.innerPurge(DocumentsWriterFlushQueue.java:119)
>> 	at
>> org.apache.lucene.index.DocumentsWriterFlushQueue.tryPurge(DocumentsWriterFlushQueue.java:148)
>> 	at org.apache.lucene.index.
>> 
>> ...
>> 
>> [Message clipped]
>> ----------
>> From: Michael McCandless <lu...@mikemccandless.com>
>> Date: Wed, Apr 11, 2012 at 2:20 AM
>> To: solr-user@lucene.apache.org
>> 
>> 
>> Hi,
>> 
>> 65K is already a very large number and should have been sufficient...
>> 
>> However: have you increased the merge factor?  Doing so increases the
>> open files (maps) required.
>> 
>> Have you disabled compound file format?  (Hmmm: I think Solr does so
>> by default... which is dangerous).  Maybe try enabling compound file
>> format?
>> 
>> Can you "ls -l" your index dir and post the results?
>> 
>> It's also possible Solr isn't closing the old searchers quickly enough
>> ... I don't know the details on when Solr closes old searchers...
>> 
>> Mike McCandless
>> 
>> http://blog.mikemccandless.com
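
One way to gather what is being asked for here, sketched with made-up paths
(substitute each core's real dataDir):

    # listing plus a raw file count for each core's index directory
    for d in /path/to/core1/data/index /path/to/core2/data/index /path/to/core3/data/index; do
      echo "== $d =="
      ls -l "$d"
      ls "$d" | wc -l
    done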
>> 
>> 
>> 
>> On Tue, Apr 10, 2012 at 11:35 PM, Gopal Patwa <go...@gmail.com> wrote:
>>> Michael, Thanks for response
>>> 
>>> it was 65K as you mention the default value for "cat
>>> /proc/sys/vm/max_map_count" . How we determine what value this should be?
>>>  is it number of document during hard commit in my case it is 15 minutes?
>>> or it is number of  index file or number of documents we have in all
>>> cores.
>>> 
>>> I have raised the number to 140K but I still get when it reaches to 140K,
>>> we have to restart jboss server to free up the map count, sometime OOM
>>> error happen during "Error opening new searcher"
>>> 
>>> is making this number to unlimited is only solution?''
>>> 
>>> 
>>> Error log:
>>> 
>>> location=CommitTracker line=93 auto commit
>>> error...:org.apache.solr.common.SolrException: Error opening new
>>> searcher
>>>        at
>>> org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1138)
>>>        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1251)
>>>        at
>>> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:409)
>>>        at org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
>>>        at
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>>>        at
>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>>        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>>        at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
>>>        at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
>>>        at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.io.IOException: Map failed
>>>        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
>>>        at
>>> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
>>>        at
>>> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
>>>        at
>>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:262)
>>>        at
>>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$1.<init>(PerFieldPostingsFormat.java:316)
>>>        at
>>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.files(PerFieldPostingsFormat.java:316)
>>>        at org.apache.lucene.codecs.Codec.files(Codec.java:56)
>>>        at org.apache.lucene.index.SegmentInfo.files(SegmentInfo.java:423)
>>>        at
>>> org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:215)
>>>        at
>>> org.apache.lucene.index.IndexWriter.prepareFlushedSegment(IndexWriter.java:2220)
>>>        at
>>> org.apache.lucene.index.DocumentsWriter.publishFlushedSegment(DocumentsWriter.java:497)
>>>        at
>>> org.apache.lucene.index.DocumentsWriter.finishFlush(DocumentsWriter.java:477)
>>>        at
>>> org.apache.lucene.index.DocumentsWriterFlushQueue$SegmentFlushTicket.publish(DocumentsWriterFlushQueue.java:201)
>>>        at
>>> org.apache.lucene.index.DocumentsWriterFlushQueue.innerPurge(DocumentsWriterFlushQueue.java:119)
>>>        at
>>> org.apache.lucene.index.DocumentsWriterFlushQueue.tryPurge(DocumentsWriterFlushQueue.java:148)
>>>        at
>>> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:438)
>>>        at
>>> org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:553)
>>>        at
>>> org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:354)
>>>        at
>>> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:258)
>>>        at
>>> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:243)
>>>        at
>>> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:250)
>>>        at
>>> org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1091)
>>>        ... 11 more
>>> Caused by: java.lang.OutOfMemoryError: Map failed
>>>        at sun.nio.ch.FileChannelImpl.map0(Native Method)
>>>        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)
>>> 
>>> 
>>> 
>>> And one more issue we came across i.e
>> 
>> 

- Mark Miller
lucidimagination.com
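
If the stale-searcher question above is worth chasing, grouping the live
process's mappings by index file shows whether segment files that should have
been released are still mapped many times over; a rough sketch, where
/data/index/ and <pid> are placeholders for the real index path and the
Solr/JBoss process id:

    # count mapped regions per index file, most-mapped first
    grep '/data/index/' /proc/<pid>/maps | awk '{print $NF}' | sort | uniq -c | sort -rn | head -20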












Re: Large Index and OutOfMemoryError: Map failed

Posted by Michael McCandless <lu...@mikemccandless.com>.
Is it possible you are hitting this (just opened) Solr issue?:

    https://issues.apache.org/jira/browse/SOLR-3392

Mike McCandless

http://blog.mikemccandless.com

On Fri, Apr 20, 2012 at 9:33 AM, Gopal Patwa <go...@gmail.com> wrote:
> We cannot avoid auto soft commit, since we need the Lucene NRT feature, and I
> use StreamingUpdateSolrServer for adding/updating the index.
>
> On Thu, Apr 19, 2012 at 7:42 AM, Boon Low <bo...@brightsolid.com> wrote:
>
>> Hi,
>>
>> We also came across this error recently, while indexing with > 10 DIH
>> processes in parallel and the default index settings. The JVM grinds to a halt
>> and throws this error, and checking the index of a core reveals thousands of
>> files! Tuning the default autoCommit from 15000ms to 900000ms solved the
>> problem for us (no autoSoftCommit).
>>
>> Boon
>>
>> -----
>> Boon Low
>> Search UX and Engine Developer
>> brightsolid Online Publishing
>>
>> On 14 Apr 2012, at 17:40, Gopal Patwa wrote:
>>
>> > I checked, and "MMapDirectory.UNMAP_SUPPORTED=true"; my system data is
>> > below. Is there any existing test case to reproduce this issue? I am
>> > trying to understand how I can reproduce this issue with a
>> > unit/integration test.
>> >
>> > I will try a recent Solr trunk build too. If it is some bug in Solr or
>> > Lucene keeping an old searcher open, then how do I reproduce it?
>> >
>> > SYSTEM DATA
>> > ===========
>> > PROCESSOR: Intel(R) Xeon(R) CPU E5504 @ 2.00GHz
>> > SYSTEM ID: x86_64
>> > CURRENT CPU SPEED: 1600.000 MHz
>> > CPUS: 8 processor(s)
>> > MEMORY: 49449296 kB
>> > DISTRIBUTION: CentOS release 5.3 (Final)
>> > KERNEL NAME: 2.6.18-128.el5
>> > UPTIME: up 71 days
>> > LOAD AVERAGE: 1.42, 1.45, 1.53
>> > JBOSS Version: Implementation-Version: 4.2.2.GA (build:
>> > SVNTag=JBoss_4_2_2_GA date=20
>> > JAVA Version: java version "1.6.0_24"
>> >
>> >
>> > On Thu, Apr 12, 2012 at 3:07 AM, Michael McCandless <
>> > lucene@mikemccandless.com> wrote:
>> >
>> >> Your largest index has 66 segments (690 files) ... biggish but not
>> >> insane.  With 64K maps you should be able to have ~47 searchers open
>> >> on each core.
>> >>
>> >> Enabling compound file format (not the opposite!) will mean fewer maps
>> >> ... ie should improve this situation.
>> >>
>> >> I don't understand why Solr defaults to compound file off... that
>> >> seems dangerous.
>> >>
>> >> Really we need a Solr dev here... to answer "how long is a stale
>> >> searcher kept open".  Is it somehow possible 46 old searchers are
>> >> being left open...?
>> >>
>> >> I don't see any other reason why you'd run out of maps.  Hmm, unless
>> >> MMapDirectory didn't think it could safely invoke unmap in your JVM.
>> >> Which exact JVM are you using?  If you can print the
>> >> MMapDirectory.UNMAP_SUPPORTED constant, we'd know for sure.
>> >>
>> >> Yes, switching away from MMapDir will sidestep the "too many maps"
>> >> issue, however, 1) MMapDir has better perf than NIOFSDir, and 2) if
>> >> there really is a leak here (Solr not closing the old searchers or a
>> >> Lucene bug or something...) then you'll eventually run out of file
>> >> descriptors (ie, same  problem, different manifestation).
>> >>
>> >> Mike McCandless
>> >>
>> >> http://blog.mikemccandless.com
>> >>
>> >> 2012/4/11 Gopal Patwa <go...@gmail.com>:
>> >>>
>> >>> I have not change the mergefactor, it was 10. Compound index file is
>> >> disable
>> >>> in my config but I read from below post, that some one had similar
>> issue
>> >> and
>> >>> it was resolved by switching from compound index file format to
>> >> non-compound
>> >>> index file.
>> >>>
>> >>> and some folks resolved by "changing lucene code to disable
>> >> MMapDirectory."
>> >>> Is this best practice to do, if so is this can be done in
>> configuration?
>> >>>
>> >>>
>> >>
>> http://lucene.472066.n3.nabble.com/MMapDirectory-failed-to-map-a-23G-compound-index-segment-td3317208.html
>> >>>
>> >>> I have index document of core1 = 5 million, core2=8million and
>> >>> core3=3million and all index are hosted in single Solr instance
>> >>>
>> >>> I am going to use Solr for our site StubHub.com, see attached "ls -l"
>> >> list
>> >>> of index files for all core
>> >>>
>> >>> SolrConfig.xml:
>> >>>
>> >>>
>> >>>      <indexDefaults>
>> >>>              <useCompoundFile>false</useCompoundFile>
>> >>>              <mergeFactor>10</mergeFactor>
>> >>>              <maxMergeDocs>2147483647</maxMergeDocs>
>> >>>              <maxFieldLength>10000</maxFieldLength-->
>> >>>              <ramBufferSizeMB>4096</ramBufferSizeMB>
>> >>>              <maxThreadStates>10</maxThreadStates>
>> >>>              <writeLockTimeout>1000</writeLockTimeout>
>> >>>              <commitLockTimeout>10000</commitLockTimeout>
>> >>>              <lockType>single</lockType>
>> >>>
>> >>>          <mergePolicy
>> class="org.apache.lucene.index.TieredMergePolicy">
>> >>>            <double name="forceMergeDeletesPctAllowed">0.0</double>
>> >>>            <double name="reclaimDeletesWeight">10.0</double>
>> >>>          </mergePolicy>
>> >>>
>> >>>          <deletionPolicy class="solr.SolrDeletionPolicy">
>> >>>            <str name="keepOptimizedOnly">false</str>
>> >>>            <str name="maxCommitsToKeep">0</str>
>> >>>          </deletionPolicy>
>> >>>
>> >>>      </indexDefaults>
>> >>>
>> >>>
>> >>>      <updateHandler class="solr.DirectUpdateHandler2">
>> >>>          <maxPendingDeletes>1000</maxPendingDeletes>
>> >>>           <autoCommit>
>> >>>             <maxTime>900000</maxTime>
>> >>>             <openSearcher>false</openSearcher>
>> >>>           </autoCommit>
>> >>>           <autoSoftCommit>
>> >>>
>> >> <maxTime>${inventory.solr.softcommit.duration:1000}</maxTime>
>> >>>           </autoSoftCommit>
>> >>>
>> >>>      </updateHandler>
>> >>>
>> >>>
>> >>> Forwarded conversation
>> >>> Subject: Large Index and OutOfMemoryError: Map failed
>> >>> ------------------------
>> >>>
>> >>> From: Gopal Patwa <go...@gmail.com>
>> >>> Date: Fri, Mar 30, 2012 at 10:26 PM
>> >>> To: solr-user@lucene.apache.org
>> >>>
>> >>>
>> >>> I need help!!
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> I am using Solr 4.0 nightly build with NRT and I often get this error
>> >> during
>> >>> auto commit "java.lang.OutOfMemoryError: Map failed". I have search
>> this
>> >>> forum and what I found it is related to OS ulimit setting, please se
>> >> below
>> >>> my ulimit settings. I am not sure what ulimit setting I should have?
>> and
>> >> we
>> >>> also get "java.net.SocketException: Too many open files" NOT sure how
>> >> many
>> >>> open file we need to set?
>> >>>
>> >>>
>> >>> I have 3 core with index size : core1 - 70GB, Core2 - 50GB and Core3 -
>> >> 15GB,
>> >>> with Single shard
>> >>>
>> >>>
>> >>> We update the index every 5 seconds, soft commit every 1 second and
>> hard
>> >>> commit every 15 minutes
>> >>>
>> >>>
>> >>> Environment: Jboss 4.2, JDK 1.6 , CentOS, JVM Heap Size = 24GB
>> >>>
>> >>>
>> >>> ulimit:
>> >>>
>> >>> core file size          (blocks, -c) 0
>> >>> data seg size           (kbytes, -d) unlimited
>> >>> scheduling priority             (-e) 0
>> >>> file size               (blocks, -f) unlimited
>> >>> pending signals                 (-i) 401408
>> >>> max locked memory       (kbytes, -l) 1024
>> >>> max memory size         (kbytes, -m) unlimited
>> >>> open files                      (-n) 1024
>> >>> pipe size            (512 bytes, -p) 8
>> >>> POSIX message queues     (bytes, -q) 819200
>> >>> real-time priority              (-r) 0
>> >>> stack size              (kbytes, -s) 10240
>> >>> cpu time               (seconds, -t) unlimited
>> >>> max user processes              (-u) 401408
>> >>> virtual memory          (kbytes, -v) unlimited
>> >>> file locks                      (-x) unlimited
>> >>>
>> >>>
>> >>>
>> >>> ERROR:
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> 2012-03-29 15:14:08,560 [] priority=ERROR app_name=
>> >> thread=pool-3-thread-1
>> >>> location=CommitTracker line=93 auto commit
>> error...:java.io.IOException:
>> >> Map
>> >>> failed
>> >>>      at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
>> >>>      at
>> >> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.codecs.lucene40.Lucene40PostingsReader.<init>(Lucene40PostingsReader.java:58)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsProducer(Lucene40PostingsFormat.java:80)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.visitOneFormat(PerFieldPostingsFormat.java:189)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:280)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.<init>(PerFieldPostingsFormat.java:186)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:186)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:256)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:108)
>> >>>      at
>> >> org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:51)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.index.IndexWriter$ReadersAndLiveDocs.getReader(IndexWriter.java:494)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.index.BufferedDeletesStream.applyDeletes(BufferedDeletesStream.java:214)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.index.IndexWriter.applyAllDeletes(IndexWriter.java:2939)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:2930)
>> >>>      at
>> >> org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:2681)
>> >>>      at
>> >>>
>> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2804)
>> >>>      at
>> >> org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2786)
>> >>>      at
>> >>>
>> >>
>> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:391)
>> >>>      at
>> org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
>> >>>      at
>> >> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>> >>>      at
>> >>>
>> >>> ...
>> >>>
>> >>> [Message clipped]
>> >>> ----------
>> >>> From: Michael McCandless <lu...@mikemccandless.com>
>> >>> Date: Sat, Mar 31, 2012 at 3:15 AM
>> >>> To: solr-user@lucene.apache.org
>> >>>
>> >>>
>> >>> It's the virtual memory limit that matters; yours says unlimited below
>> >>> (good!), but, are you certain that's really the limit your Solr
>> >>> process runs with?
>> >>>
>> >>> On Linux, there is also a per-process map count:
>> >>>
>> >>>   cat /proc/sys/vm/max_map_count
>> >>>
>> >>> I think it typically defaults to 65,536 but you should check on your
>> >>> env.  If a process tries to map more than this many regions, you'll
>> >>> hit that exception.
>> >>>
>> >>> I think you can:
>> >>>
>> >>> cat /proc/<pid>/maps | wc
>> >>>
>> >>> to see how many maps your Solr process currently has... if that is
>> >>> anywhere near the limit then it could be the cause.
>> >>>
>> >>> Mike McCandless
>> >>>
>> >>> http://blog.mikemccandless.com
>> >>>
>> >>> On Sat, Mar 31, 2012 at 1:26 AM, Gopal Patwa <go...@gmail.com>
>> >> wrote:
>> >>>> *I need help!!*
>> >>>>
>> >>>> *
>> >>>> *
>> >>>>
>> >>>> *I am using Solr 4.0 nightly build with NRT and I often get this error
>> >>>> during auto commit "**java.lang.OutOfMemoryError:* *Map* *failed". I
>> >>>> have search this forum and what I found it is related to OS ulimit
>> >>>> setting, please se below my ulimit settings. I am not sure what ulimit
>> >>>> setting I should have? and we also get "**java.net.SocketException:*
>> >>>> *Too* *many* *open* *files" NOT sure how many open file we need to
>> >>>> set?*
>> >>>>
>> >>>>
>> >>>> I have 3 core with index size : core1 - 70GB, Core2 - 50GB and Core3 -
>> >>>> 15GB, with Single shard
>> >>>>
>> >>>> *
>> >>>> *
>> >>>>
>> >>>> *We update the index every 5 seconds, soft commit every 1 second and
>> >>>> hard commit every 15 minutes*
>> >>>>
>> >>>> *
>> >>>> *
>> >>>>
>> >>>> *Environment: Jboss 4.2, JDK 1.6 , CentOS, JVM Heap Size = 24GB*
>> >>>>
>> >>>> *
>> >>>> *
>> >>>>
>> >>>> ulimit:
>> >>>>
>> >>>> core file size          (blocks, -c) 0
>> >>>> data seg size           (kbytes, -d) unlimited
>> >>>> scheduling priority             (-e) 0
>> >>>> file size               (blocks, -f) unlimited
>> >>>> pending signals                 (-i) 401408
>> >>>> max locked memory       (kbytes, -l) 1024
>> >>>> max memory size         (kbytes, -m) unlimited
>> >>>> open files                      (-n) 1024
>> >>>> pipe size            (512 bytes, -p) 8
>> >>>> POSIX message queues     (bytes, -q) 819200
>> >>>> real-time priority              (-r) 0
>> >>>> stack size              (kbytes, -s) 10240
>> >>>> cpu time               (seconds, -t) unlimited
>> >>>> max user processes              (-u) 401408
>> >>>> virtual memory          (kbytes, -v) unlimited
>> >>>> file locks                      (-x) unlimited
>> >>>>
>> >>>>
>> >>>> *
>> >>>> *
>> >>>>
>> >>>> *ERROR:*
>> >>>>
>> >>>> *
>> >>>> *
>> >>>>
>> >>>> *2012-03-29* *15:14:08*,*560* [] *priority=ERROR* *app_name=*
>> >>>> *thread=pool-3-thread-1* *location=CommitTracker* *line=93* *auto*
>> >>>> *commit* *error...:java.io.IOException:* *Map* *failed*
>> >>>>       *at*
>> *sun.nio.ch.FileChannelImpl.map*(*FileChannelImpl.java:748*)
>> >>>>       *at*
>> >>>>
>> >>
>> *org.apache.lucene.store.MMapDirectory$MMapIndexInput.*<*init*>(*MMapDirectory.java:293*)
>> >>>>       *at*
>> >>>>
>> >>
>> *org.apache.lucene.store.MMapDirectory.openInput*(*MMapDirectory.java:221*)
>> >>>>       *at*
>> >>>>
>> >>
>> *org.apache.lucene.codecs.lucene40.Lucene40PostingsReader.*<*init*>(*Lucene40PostingsReader.java:58*)
>> >>>>       *at*
>> >>>>
>> >>
>> *org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsProducer*(*Lucene40PostingsFormat.java:80*)
>> >>>>       *at*
>> >>>>
>> >>
>> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.visitOneFormat*(*PerFieldPostingsFormat.java:189*)
>> >>>>       *at*
>> >>>>
>> >>
>> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.*<*init*>(*PerFieldPostingsFormat.java:280*)
>> >>>>       *at*
>> >>>>
>> >>
>> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.*<*init*>(*PerFieldPostingsFormat.java:186*)
>> >>>>       *at*
>> >>>>
>> >>
>> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.*<*init*>(*PerFieldPostingsFormat.java:186*)
>> >>>>       *at*
>> >>>>
>> >>
>> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer*(*PerFieldPostingsFormat.java:256*)
>> >>>>       *at*
>> >>>>
>> >>
>> *org.apache.lucene.index.SegmentCoreReaders.*<*init*>(*SegmentCoreReaders.java:108*)
>> >>>>       *at*
>> >>>>
>> >>
>> *org.apache.lucene.index.SegmentReader.*<*init*>(*SegmentReader.java:51*)
>> >>>>       *at*
>> >>>>
>> >>
>> *org.apache.lucene.index.IndexWriter$ReadersAndLiveDocs.getReader*(*IndexWriter.java:494*)
>> >>>>       *at*
>> >>>>
>> >>
>> *org.apache.lucene.index.BufferedDeletesStream.applyDeletes*(*BufferedDeletesStream.java:214*)
>> >>>>       *at*
>> >>>>
>> >>
>> *org.apache.lucene.index.IndexWriter.applyAllDeletes*(*IndexWriter.java:2939*)
>> >>>>       *at*
>> >>>>
>> >>
>> *org.apache.lucene.index.IndexWriter.maybeApplyDeletes*(*IndexWriter.java:2930*)
>> >>>>       *at*
>> >>>>
>> >>
>> *org.apache.lucene.index.IndexWriter.prepareCommit*(*IndexWriter.java:2681*)
>> >>>>       *at*
>> >>>>
>> >>
>> *org.apache.lucene.index.IndexWriter.commitInternal*(*IndexWriter.java:2804*)
>> >>>>       *at*
>> >>>> *org.apache.lucene.index.IndexWriter.commit*(*IndexWriter.java:2786*)
>> >>>>       *at*
>> >>>>
>> >>
>> *org.apache.solr.update.DirectUpdateHandler2.commit*(*DirectUpdateHandler2.java:391*)
>> >>>>       *at*
>> >>>> *org.apache.solr.update.CommitTracker.run*(*CommitTracker.java:197*)
>> >>>>       *at*
>> >>>>
>> >>
>> *java.util.concurrent.Executors$RunnableAdapter.call*(*Executors.java:441*)
>> >>>>       *at*
>> >>>> *java.util.concurrent.FutureTask$Sync.innerRun*(*FutureTask.java:303*)
>> >>>>       *at*
>> *java.util.concurrent.FutureTask.run*(*FutureTask.java:138*)
>> >>>>       *at*
>> >>>>
>> >>
>> *java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301*(*ScheduledThreadPoolExecutor.java:98*)
>> >>>>       *at*
>> >>>>
>> >>
>> *java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run*(*ScheduledThreadPoolExecutor.java:206*)
>> >>>>       *at*
>> >>>>
>> >>
>> *java.util.concurrent.ThreadPoolExecutor$Worker.runTask*(*ThreadPoolExecutor.java:886*)
>> >>>>       *at*
>> >>>>
>> >>
>> *java.util.concurrent.ThreadPoolExecutor$Worker.run*(*ThreadPoolExecutor.java:908*)
>> >>>>       *at* *java.lang.Thread.run*(*Thread.java:662*)*Caused* *by:*
>> >>>> *java.lang.OutOfMemoryError:* *Map* *failed*
>> >>>>       *at* *sun.nio.ch.FileChannelImpl.map0*(*Native* *Method*)
>> >>>>       *at*
>> *sun.nio.ch.FileChannelImpl.map*(*FileChannelImpl.java:745*)
>> >>>>       *...* *28* *more*
>> >>>>
>> >>>> *
>> >>>> *
>> >>>>
>> >>>> *
>> >>>> *
>> >>>>
>> >>>> *
>> >>>>
>> >>>>
>> >>>> SolrConfig.xml:
>> >>>>
>> >>>>
>> >>>>       <indexDefaults>
>> >>>>               <useCompoundFile>false</useCompoundFile>
>> >>>>               <mergeFactor>10</mergeFactor>
>> >>>>               <maxMergeDocs>2147483647</maxMergeDocs>
>> >>>>               <maxFieldLength>10000</maxFieldLength-->
>> >>>>               <ramBufferSizeMB>4096</ramBufferSizeMB>
>> >>>>               <maxThreadStates>10</maxThreadStates>
>> >>>>               <writeLockTimeout>1000</writeLockTimeout>
>> >>>>               <commitLockTimeout>10000</commitLockTimeout>
>> >>>>               <lockType>single</lockType>
>> >>>>
>> >>>>           <mergePolicy
>> >> class="org.apache.lucene.index.TieredMergePolicy">
>> >>>>             <double name="forceMergeDeletesPctAllowed">0.0</double>
>> >>>>             <double name="reclaimDeletesWeight">10.0</double>
>> >>>>           </mergePolicy>
>> >>>>
>> >>>>           <deletionPolicy class="solr.SolrDeletionPolicy">
>> >>>>             <str name="keepOptimizedOnly">false</str>
>> >>>>             <str name="maxCommitsToKeep">0</str>
>> >>>>           </deletionPolicy>
>> >>>>
>> >>>>       </indexDefaults>
>> >>>>
>> >>>>
>> >>>>       <updateHandler class="solr.DirectUpdateHandler2">
>> >>>>           <maxPendingDeletes>1000</maxPendingDeletes>
>> >>>>            <autoCommit>
>> >>>>              <maxTime>900000</maxTime>
>> >>>>              <openSearcher>false</openSearcher>
>> >>>>            </autoCommit>
>> >>>>            <autoSoftCommit>
>> >>>>
>> >>>> <maxTime>${inventory.solr.softcommit.duration:1000}</maxTime>
>> >>>>            </autoSoftCommit>
>> >>>>
>> >>>>       </updateHandler>
>> >>>>
>> >>>>
>> >>>>
>> >>>> Thanks
>> >>>> Gopal Patwa
>> >>>> *
>> >>>
>> >>> ----------
>> >>> From: Gopal Patwa <go...@gmail.com>
>> >>> Date: Tue, Apr 10, 2012 at 8:35 PM
>> >>> To: solr-user@lucene.apache.org
>> >>>
>> >>>
>> >>> Michael, Thanks for response
>> >>>
>> >>> it was 65K as you mention the default value for "cat
>> >>> /proc/sys/vm/max_map_count" . How we determine what value this should
>> be?
>> >>> is it number of document during hard commit in my case it is 15
>> >> minutes? or
>> >>> it is number of  index file or number of documents we have in all
>> cores.
>> >>>
>> >>> I have raised the number to 140K but I still get when it reaches to
>> >> 140K, we
>> >>> have to restart jboss server to free up the map count, sometime OOM
>> error
>> >>> happen during "Error opening new searcher"
>> >>>
>> >>> is making this number to unlimited is only solution?''
>> >>>
>> >>>
>> >>> Error log:
>> >>>
>> >>> location=CommitTracker line=93 auto commit
>> >>> error...:org.apache.solr.common.SolrException: Error opening new
>> searcher
>> >>>      at
>> >> org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1138)
>> >>>      at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1251)
>> >>>      at
>> >>>
>> >>
>> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:409)
>> >>>      at
>> org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
>> >>>      at
>> >> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>> >>>      at
>> >> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>> >>>      at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>> >>>      at
>> >>>
>> >>
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
>> >>>      at
>> >>>
>> >>
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
>> >>>      at
>> >>>
>> >>
>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> >>>      at
>> >>>
>> >>
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> >>>      at java.lang.Thread.run(Thread.java:662)
>> >>> Caused by: java.io.IOException: Map failed
>> >>>      at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
>> >>>      at
>> >> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:262)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$1.<init>(PerFieldPostingsFormat.java:316)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.files(PerFieldPostingsFormat.java:316)
>> >>>      at org.apache.lucene.codecs.Codec.files(Codec.java:56)
>> >>>      at org.apache.lucene.index.SegmentInfo.files(SegmentInfo.java:423)
>> >>>      at
>> >> org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:215)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.index.IndexWriter.prepareFlushedSegment(IndexWriter.java:2220)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.index.DocumentsWriter.publishFlushedSegment(DocumentsWriter.java:497)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.index.DocumentsWriter.finishFlush(DocumentsWriter.java:477)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.index.DocumentsWriterFlushQueue$SegmentFlushTicket.publish(DocumentsWriterFlushQueue.java:201)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.index.DocumentsWriterFlushQueue.innerPurge(DocumentsWriterFlushQueue.java:119)
>> >>>      at
>> >>>
>> >>
>> org.apache.lucene.index.DocumentsWriterFlushQueue.tryPurge(DocumentsWriterFlushQueue.java:148)
>> >>>      at org.apache.lucene.index.
>> >>>
>> >>> ...
>> >>>
>> >>> [Message clipped]
>> >>> ----------
>> >>> From: Michael McCandless <lu...@mikemccandless.com>
>> >>> Date: Wed, Apr 11, 2012 at 2:20 AM
>> >>> To: solr-user@lucene.apache.org
>> >>>
>> >>>
>> >>> Hi,
>> >>>
>> >>> 65K is already a very large number and should have been sufficient...
>> >>>
>> >>> However: have you increased the merge factor?  Doing so increases the
>> >>> open files (maps) required.
>> >>>
>> >>> Have you disabled compound file format?  (Hmmm: I think Solr does so
>> >>> by default... which is dangerous).  Maybe try enabling compound file
>> >>> format?
>> >>>
>> >>> Can you "ls -l" your index dir and post the results?
>> >>>
>> >>> It's also possible Solr isn't closing the old searchers quickly enough
>> >>> ... I don't know the details on when Solr closes old searchers...
>> >>>
>> >>> Mike McCandless
>> >>>
>> >>> http://blog.mikemccandless.com
>> >>>
>> >>>
>> >>>
>> >>> On Tue, Apr 10, 2012 at 11:35 PM, Gopal Patwa <go...@gmail.com>
>> >> wrote:
>> >>>> Michael, Thanks for response
>> >>>>
>> >>>> it was 65K as you mention the default value for "cat
>> >>>> /proc/sys/vm/max_map_count" . How we determine what value this should
>> >> be?
>> >>>> is it number of document during hard commit in my case it is 15
>> >> minutes?
>> >>>> or it is number of  index file or number of documents we have in all
>> >>>> cores.
>> >>>>
>> >>>> I have raised the number to 140K but I still get when it reaches to
>> >> 140K,
>> >>>> we have to restart jboss server to free up the map count, sometime OOM
>> >>>> error happen during "*Error opening new searcher"*
>> >>>>
>> >>>> is making this number to unlimited is only solution?''
>> >>>>
>> >>>>
>> >>>> Error log:
>> >>>>
>> >>>> *location=CommitTracker line=93 auto commit
>> >>>> error...:org.apache.solr.common.SolrException: Error opening new
>> >>>> searcher
>> >>>>       at
>> >>>> org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1138)
>> >>>>       at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1251)
>> >>>>       at
>> >>>>
>> >>
>> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:409)
>> >>>>       at
>> >> org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
>> >>>>       at
>> >>>>
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>> >>>>       at
>> >>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>> >>>>       at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>> >>>>       at
>> >>>>
>> >>
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
>> >>>>       at
>> >>>>
>> >>
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
>> >>>>       at
>> >>>>
>> >>
>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> >>>>       at
>> >>>>
>> >>
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> >>>>       at java.lang.Thread.run(Thread.java:662)Caused by:
>> >>>> java.io.IOException: Map failed
>> >>>>       at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
>> >>>>       at
>> >>>>
>> >>
>> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
>> >>>>       at
>> >>>>
>> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
>> >>>>       at
>> >>>>
>> >>
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:262)
>> >>>>       at
>> >>>>
>> >>
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$1.<init>(PerFieldPostingsFormat.java:316)
>> >>>>       at
>> >>>>
>> >>
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.files(PerFieldPostingsFormat.java:316)
>> >>>>       at org.apache.lucene.codecs.Codec.files(Codec.java:56)
>> >>>>       at
>> >> org.apache.lucene.index.SegmentInfo.files(SegmentInfo.java:423)
>> >>>>       at
>> >>>> org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:215)
>> >>>>       at
>> >>>>
>> >>
>> org.apache.lucene.index.IndexWriter.prepareFlushedSegment(IndexWriter.java:2220)
>> >>>>       at
>> >>>>
>> >>
>> org.apache.lucene.index.DocumentsWriter.publishFlushedSegment(DocumentsWriter.java:497)
>> >>>>       at
>> >>>>
>> >>
>> org.apache.lucene.index.DocumentsWriter.finishFlush(DocumentsWriter.java:477)
>> >>>>       at
>> >>>>
>> >>
>> org.apache.lucene.index.DocumentsWriterFlushQueue$SegmentFlushTicket.publish(DocumentsWriterFlushQueue.java:201)
>> >>>>       at
>> >>>>
>> >>
>> org.apache.lucene.index.DocumentsWriterFlushQueue.innerPurge(DocumentsWriterFlushQueue.java:119)
>> >>>>       at
>> >>>>
>> >>
>> org.apache.lucene.index.DocumentsWriterFlushQueue.tryPurge(DocumentsWriterFlushQueue.java:148)
>> >>>>       at
>> >>>>
>> >>
>> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:438)
>> >>>>       at
>> >>>>
>> >>
>> org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:553)
>> >>>>       at
>> >>>> org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:354)
>> >>>>       at
>> >>>>
>> >>
>> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:258)
>> >>>>       at
>> >>>>
>> >>
>> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:243)
>> >>>>       at
>> >>>>
>> >>
>> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:250)
>> >>>>       at
>> >>>> org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1091)
>> >>>>       ... 11 moreCaused by: java.lang.OutOfMemoryError: Map failed
>> >>>>       at sun.nio.ch.FileChannelImpl.map0(Native Method)
>> >>>>       at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)*
>> >>>>
>> >>>>
>> >>>>
>> >>>> And one more issue we came across i.e
>> >>>
>> >>>
>> >>
>> >
>> >

Re: Large Index and OutOfMemoryError: Map failed

Posted by Gopal Patwa <go...@gmail.com>.
We cannot avoid auto soft commit, since we need the Lucene NRT feature. I
use StreamingUpdateSolrServer for adding and updating the index.
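
For reference, a minimal SolrJ sketch of that update path, assuming the
3.x-era StreamingUpdateSolrServer API named above (later SolrJ versions
rename this class ConcurrentUpdateSolrServer); the URL, queue size, thread
count, and field names below are placeholders, not the actual configuration:

    import org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class IndexFeeder {
        public static void main(String[] args) throws Exception {
            // Placeholder URL, queue size and thread count -- tune for your setup.
            StreamingUpdateSolrServer server =
                    new StreamingUpdateSolrServer("http://localhost:8080/solr/core1", 100, 4);

            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "12345");             // placeholder fields
            doc.addField("name", "example document");

            server.add(doc);   // queued and streamed by background threads
            // No explicit commit here: hard and soft commits are left to the
            // autoCommit / autoSoftCommit settings in solrconfig.xml.
            server.blockUntilFinished();
        }
    }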

On Thu, Apr 19, 2012 at 7:42 AM, Boon Low <bo...@brightsolid.com> wrote:

> Hi,
>
> Also came across this error recently, while indexing with > 10 DIH
> processes in parallel + default index setting. The JVM grinds to a halt and
> throws this error. Checking the index of a core reveals thousands of files!
> Tuning the default autocommit from 15000ms to 900000ms solved the problem
> for us. (no 'autosoftcommit').
>
> Boon
>
> -----
> Boon Low
> Search UX and Engine Developer
> brightsolid Online Publishing
>
> On 14 Apr 2012, at 17:40, Gopal Patwa wrote:
>
> > I checked it was "MMapDirectory.UNMAP_SUPPORTED=true" and below are my
> > system data. Is their any existing test case to reproduce this issue? I
> am
> > trying understand how I can reproduce this issue with unit/integration
> test
> >
> > I will try recent solr trunk build too,  if it is some bug in solr or
> > lucene keeping old searcher open then how to reproduce it?
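
A rough Lucene-level sketch of the suspected failure mode (readers opened
over MMapDirectory and never closed, driving the per-process map count
toward vm.max_map_count); the index path and loop count are placeholders,
and this only simulates leaked searchers rather than reproducing Solr's
actual behaviour:

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.store.MMapDirectory;

    public class MapLeakSketch {
        public static void main(String[] args) throws Exception {
            // Point at any existing index directory (placeholder path).
            MMapDirectory dir = new MMapDirectory(new File("/path/to/index"));
            List<DirectoryReader> leaked = new ArrayList<DirectoryReader>();
            // Every open reader maps each file of each segment; never closing
            // the readers eventually exhausts the per-process map limit and
            // map() fails with the same "Map failed" error seen in this thread.
            for (int i = 0; i < 100000; i++) {
                leaked.add(DirectoryReader.open(dir));
            }
        }
    }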
> >
> > SYSTEM DATA
> > ===========
> > PROCESSOR: Intel(R) Xeon(R) CPU E5504 @ 2.00GHz
> > SYSTEM ID: x86_64
> > CURRENT CPU SPEED: 1600.000 MHz
> > CPUS: 8 processor(s)
> > MEMORY: 49449296 kB
> > DISTRIBUTION: CentOS release 5.3 (Final)
> > KERNEL NAME: 2.6.18-128.el5
> > UPTIME: up 71 days
> > LOAD AVERAGE: 1.42, 1.45, 1.53
> > JBOSS Version: Implementation-Version: 4.2.2.GA (build:
> > SVNTag=JBoss_4_2_2_GA date=20
> > JAVA Version: java version "1.6.0_24"
> >
> >
> > On Thu, Apr 12, 2012 at 3:07 AM, Michael McCandless <
> > lucene@mikemccandless.com> wrote:
> >
> >> Your largest index has 66 segments (690 files) ... biggish but not
> >> insane.  With 64K maps you should be able to have ~47 searchers open
> >> on each core.
> >>
> >> Enabling compound file format (not the opposite!) will mean fewer maps
> >> ... ie should improve this situation.
> >>
> >> I don't understand why Solr defaults to compound file off... that
> >> seems dangerous.
> >>
> >> Really we need a Solr dev here... to answer "how long is a stale
> >> searcher kept open".  Is it somehow possible 46 old searchers are
> >> being left open...?
> >>
> >> I don't see any other reason why you'd run out of maps.  Hmm, unless
> >> MMapDirectory didn't think it could safely invoke unmap in your JVM.
> >> Which exact JVM are you using?  If you can print the
> >> MMapDirectory.UNMAP_SUPPORTED constant, we'd know for sure.
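
A one-line check is enough here; MMapDirectory is the Lucene class already
on Solr's classpath:

    import org.apache.lucene.store.MMapDirectory;

    public class CheckUnmap {
        public static void main(String[] args) {
            // true means Lucene can unmap buffers when an index file is closed.
            System.out.println("UNMAP_SUPPORTED=" + MMapDirectory.UNMAP_SUPPORTED);
        }
    }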
> >>
> >> Yes, switching away from MMapDir will sidestep the "too many maps"
> >> issue, however, 1) MMapDir has better perf than NIOFSDir, and 2) if
> >> there really is a leak here (Solr not closing the old searchers or a
> >> Lucene bug or something...) then you'll eventually run out of file
> >> descriptors (ie, same  problem, different manifestation).
> >>
> >> Mike McCandless
> >>
> >> http://blog.mikemccandless.com
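
At the Lucene API level the trade-off Mike describes looks roughly like
this (a sketch only; within Solr the switch is normally made through the
directoryFactory setting in solrconfig.xml, e.g. solr.NIOFSDirectoryFactory
if your build provides it, rather than by changing code):

    import java.io.File;

    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.store.MMapDirectory;
    import org.apache.lucene.store.NIOFSDirectory;

    public class DirectoryChoice {
        public static void main(String[] args) throws Exception {
            File path = new File("/path/to/index");   // placeholder

            // mmap-backed: fastest, but each open file consumes map regions.
            FSDirectory mmap = new MMapDirectory(path);

            // NIO-backed: no map regions, but slower and still uses file
            // descriptors, so a searcher leak shows up there instead.
            FSDirectory nio = new NIOFSDirectory(path);

            System.out.println(mmap + " / " + nio);

            mmap.close();
            nio.close();
        }
    }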
> >>
> >> 2012/4/11 Gopal Patwa <go...@gmail.com>:
> >>>
> >>> I have not change the mergefactor, it was 10. Compound index file is
> >> disable
> >>> in my config but I read from below post, that some one had similar
> issue
> >> and
> >>> it was resolved by switching from compound index file format to
> >> non-compound
> >>> index file.
> >>>
> >>> and some folks resolved by "changing lucene code to disable
> >> MMapDirectory."
> >>> Is this best practice to do, if so is this can be done in
> configuration?
> >>>
> >>>
> >>
> http://lucene.472066.n3.nabble.com/MMapDirectory-failed-to-map-a-23G-compound-index-segment-td3317208.html
> >>>
> >>> I have index document of core1 = 5 million, core2=8million and
> >>> core3=3million and all index are hosted in single Solr instance
> >>>
> >>> I am going to use Solr for our site StubHub.com, see attached "ls -l"
> >> list
> >>> of index files for all core
> >>>
> >>> SolrConfig.xml:
> >>>
> >>>
> >>>      <indexDefaults>
> >>>              <useCompoundFile>false</useCompoundFile>
> >>>              <mergeFactor>10</mergeFactor>
> >>>              <maxMergeDocs>2147483647</maxMergeDocs>
> >>>              <maxFieldLength>10000</maxFieldLength-->
> >>>              <ramBufferSizeMB>4096</ramBufferSizeMB>
> >>>              <maxThreadStates>10</maxThreadStates>
> >>>              <writeLockTimeout>1000</writeLockTimeout>
> >>>              <commitLockTimeout>10000</commitLockTimeout>
> >>>              <lockType>single</lockType>
> >>>
> >>>          <mergePolicy
> class="org.apache.lucene.index.TieredMergePolicy">
> >>>            <double name="forceMergeDeletesPctAllowed">0.0</double>
> >>>            <double name="reclaimDeletesWeight">10.0</double>
> >>>          </mergePolicy>
> >>>
> >>>          <deletionPolicy class="solr.SolrDeletionPolicy">
> >>>            <str name="keepOptimizedOnly">false</str>
> >>>            <str name="maxCommitsToKeep">0</str>
> >>>          </deletionPolicy>
> >>>
> >>>      </indexDefaults>
> >>>
> >>>
> >>>      <updateHandler class="solr.DirectUpdateHandler2">
> >>>          <maxPendingDeletes>1000</maxPendingDeletes>
> >>>           <autoCommit>
> >>>             <maxTime>900000</maxTime>
> >>>             <openSearcher>false</openSearcher>
> >>>           </autoCommit>
> >>>           <autoSoftCommit>
> >>>
> >> <maxTime>${inventory.solr.softcommit.duration:1000}</maxTime>
> >>>           </autoSoftCommit>
> >>>
> >>>      </updateHandler>
> >>>
> >>>
> >>> Forwarded conversation
> >>> Subject: Large Index and OutOfMemoryError: Map failed
> >>> ------------------------
> >>>
> >>> From: Gopal Patwa <go...@gmail.com>
> >>> Date: Fri, Mar 30, 2012 at 10:26 PM
> >>> To: solr-user@lucene.apache.org
> >>>
> >>>
> >>> I need help!!
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> I am using Solr 4.0 nightly build with NRT and I often get this error
> >> during
> >>> auto commit "java.lang.OutOfMemoryError: Map failed". I have search
> this
> >>> forum and what I found it is related to OS ulimit setting, please se
> >> below
> >>> my ulimit settings. I am not sure what ulimit setting I should have?
> and
> >> we
> >>> also get "java.net.SocketException: Too many open files" NOT sure how
> >> many
> >>> open file we need to set?
> >>>
> >>>
> >>> I have 3 core with index size : core1 - 70GB, Core2 - 50GB and Core3 -
> >> 15GB,
> >>> with Single shard
> >>>
> >>>
> >>> We update the index every 5 seconds, soft commit every 1 second and
> hard
> >>> commit every 15 minutes
> >>>
> >>>
> >>> Environment: Jboss 4.2, JDK 1.6 , CentOS, JVM Heap Size = 24GB
> >>>
> >>>
> >>> ulimit:
> >>>
> >>> core file size          (blocks, -c) 0
> >>> data seg size           (kbytes, -d) unlimited
> >>> scheduling priority             (-e) 0
> >>> file size               (blocks, -f) unlimited
> >>> pending signals                 (-i) 401408
> >>> max locked memory       (kbytes, -l) 1024
> >>> max memory size         (kbytes, -m) unlimited
> >>> open files                      (-n) 1024
> >>> pipe size            (512 bytes, -p) 8
> >>> POSIX message queues     (bytes, -q) 819200
> >>> real-time priority              (-r) 0
> >>> stack size              (kbytes, -s) 10240
> >>> cpu time               (seconds, -t) unlimited
> >>> max user processes              (-u) 401408
> >>> virtual memory          (kbytes, -v) unlimited
> >>> file locks                      (-x) unlimited
> >>>
> >>>
> >>>
> >>> ERROR:
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> 2012-03-29 15:14:08,560 [] priority=ERROR app_name=
> >> thread=pool-3-thread-1
> >>> location=CommitTracker line=93 auto commit
> error...:java.io.IOException:
> >> Map
> >>> failed
> >>>      at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
> >>>      at
> >>>
> >>
> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
> >>>      at
> >> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
> >>>      at
> >>>
> >>
> org.apache.lucene.codecs.lucene40.Lucene40PostingsReader.<init>(Lucene40PostingsReader.java:58)
> >>>      at
> >>>
> >>
> org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsProducer(Lucene40PostingsFormat.java:80)
> >>>      at
> >>>
> >>
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.visitOneFormat(PerFieldPostingsFormat.java:189)
> >>>      at
> >>>
> >>
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:280)
> >>>      at
> >>>
> >>
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.<init>(PerFieldPostingsFormat.java:186)
> >>>      at
> >>>
> >>
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:186)
> >>>      at
> >>>
> >>
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:256)
> >>>      at
> >>>
> >>
> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:108)
> >>>      at
> >> org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:51)
> >>>      at
> >>>
> >>
> org.apache.lucene.index.IndexWriter$ReadersAndLiveDocs.getReader(IndexWriter.java:494)
> >>>      at
> >>>
> >>
> org.apache.lucene.index.BufferedDeletesStream.applyDeletes(BufferedDeletesStream.java:214)
> >>>      at
> >>>
> >>
> org.apache.lucene.index.IndexWriter.applyAllDeletes(IndexWriter.java:2939)
> >>>      at
> >>>
> >>
> org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:2930)
> >>>      at
> >> org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:2681)
> >>>      at
> >>>
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2804)
> >>>      at
> >> org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2786)
> >>>      at
> >>>
> >>
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:391)
> >>>      at
> org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
> >>>      at
> >> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> >>>      at
> >>>
> >>> ...
> >>>
> >>> [Message clipped]
> >>> ----------
> >>> From: Michael McCandless <lu...@mikemccandless.com>
> >>> Date: Sat, Mar 31, 2012 at 3:15 AM
> >>> To: solr-user@lucene.apache.org
> >>>
> >>>
> >>> It's the virtual memory limit that matters; yours says unlimited below
> >>> (good!), but, are you certain that's really the limit your Solr
> >>> process runs with?
> >>>
> >>> On Linux, there is also a per-process map count:
> >>>
> >>>   cat /proc/sys/vm/max_map_count
> >>>
> >>> I think it typically defaults to 65,536 but you should check on your
> >>> env.  If a process tries to map more than this many regions, you'll
> >>> hit that exception.
> >>>
> >>> I think you can:
> >>>
> >>> cat /proc/<pid>/maps | wc
> >>>
> >>> to see how many maps your Solr process currently has... if that is
> >>> anywhere near the limit then it could be the cause.
> >>>
> >>> Mike McCandless
> >>>
> >>> http://blog.mikemccandless.com
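
The same count can also be taken from inside the JVM on Linux (a small
sketch reading /proc/self/maps, which lists one mapped region per line):

    import java.io.BufferedReader;
    import java.io.FileReader;

    public class MapCount {
        public static void main(String[] args) throws Exception {
            BufferedReader reader = new BufferedReader(new FileReader("/proc/self/maps"));
            int count = 0;
            while (reader.readLine() != null) {
                count++;
            }
            reader.close();
            // Compare against the limit in /proc/sys/vm/max_map_count.
            System.out.println("mapped regions: " + count);
        }
    }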
> >>>
> >>> On Sat, Mar 31, 2012 at 1:26 AM, Gopal Patwa <go...@gmail.com>
> >> wrote:
> >>>> *I need help!!*
> >>>>
> >>>> *
> >>>> *
> >>>>
> >>>> *I am using Solr 4.0 nightly build with NRT and I often get this error
> >>>> during auto commit "**java.lang.OutOfMemoryError:* *Map* *failed". I
> >>>> have search this forum and what I found it is related to OS ulimit
> >>>> setting, please se below my ulimit settings. I am not sure what ulimit
> >>>> setting I should have? and we also get "**java.net.SocketException:*
> >>>> *Too* *many* *open* *files" NOT sure how many open file we need to
> >>>> set?*
> >>>>
> >>>>
> >>>> I have 3 core with index size : core1 - 70GB, Core2 - 50GB and Core3 -
> >>>> 15GB, with Single shard
> >>>>
> >>>> *
> >>>> *
> >>>>
> >>>> *We update the index every 5 seconds, soft commit every 1 second and
> >>>> hard commit every 15 minutes*
> >>>>
> >>>> *
> >>>> *
> >>>>
> >>>> *Environment: Jboss 4.2, JDK 1.6 , CentOS, JVM Heap Size = 24GB*
> >>>>
> >>>> *
> >>>> *
> >>>>
> >>>> ulimit:
> >>>>
> >>>> core file size          (blocks, -c) 0
> >>>> data seg size           (kbytes, -d) unlimited
> >>>> scheduling priority             (-e) 0
> >>>> file size               (blocks, -f) unlimited
> >>>> pending signals                 (-i) 401408
> >>>> max locked memory       (kbytes, -l) 1024
> >>>> max memory size         (kbytes, -m) unlimited
> >>>> open files                      (-n) 1024
> >>>> pipe size            (512 bytes, -p) 8
> >>>> POSIX message queues     (bytes, -q) 819200
> >>>> real-time priority              (-r) 0
> >>>> stack size              (kbytes, -s) 10240
> >>>> cpu time               (seconds, -t) unlimited
> >>>> max user processes              (-u) 401408
> >>>> virtual memory          (kbytes, -v) unlimited
> >>>> file locks                      (-x) unlimited
> >>>>
> >>>>
> >>>> *
> >>>> *
> >>>>
> >>>> *ERROR:*
> >>>>
> >>>> *
> >>>> *
> >>>>
> >>>> *2012-03-29* *15:14:08*,*560* [] *priority=ERROR* *app_name=*
> >>>> *thread=pool-3-thread-1* *location=CommitTracker* *line=93* *auto*
> >>>> *commit* *error...:java.io.IOException:* *Map* *failed*
> >>>>       *at*
> *sun.nio.ch.FileChannelImpl.map*(*FileChannelImpl.java:748*)
> >>>>       *at*
> >>>>
> >>
> *org.apache.lucene.store.MMapDirectory$MMapIndexInput.*<*init*>(*MMapDirectory.java:293*)
> >>>>       *at*
> >>>>
> >>
> *org.apache.lucene.store.MMapDirectory.openInput*(*MMapDirectory.java:221*)
> >>>>       *at*
> >>>>
> >>
> *org.apache.lucene.codecs.lucene40.Lucene40PostingsReader.*<*init*>(*Lucene40PostingsReader.java:58*)
> >>>>       *at*
> >>>>
> >>
> *org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsProducer*(*Lucene40PostingsFormat.java:80*)
> >>>>       *at*
> >>>>
> >>
> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.visitOneFormat*(*PerFieldPostingsFormat.java:189*)
> >>>>       *at*
> >>>>
> >>
> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.*<*init*>(*PerFieldPostingsFormat.java:280*)
> >>>>       *at*
> >>>>
> >>
> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.*<*init*>(*PerFieldPostingsFormat.java:186*)
> >>>>       *at*
> >>>>
> >>
> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.*<*init*>(*PerFieldPostingsFormat.java:186*)
> >>>>       *at*
> >>>>
> >>
> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer*(*PerFieldPostingsFormat.java:256*)
> >>>>       *at*
> >>>>
> >>
> *org.apache.lucene.index.SegmentCoreReaders.*<*init*>(*SegmentCoreReaders.java:108*)
> >>>>       *at*
> >>>>
> >>
> *org.apache.lucene.index.SegmentReader.*<*init*>(*SegmentReader.java:51*)
> >>>>       *at*
> >>>>
> >>
> *org.apache.lucene.index.IndexWriter$ReadersAndLiveDocs.getReader*(*IndexWriter.java:494*)
> >>>>       *at*
> >>>>
> >>
> *org.apache.lucene.index.BufferedDeletesStream.applyDeletes*(*BufferedDeletesStream.java:214*)
> >>>>       *at*
> >>>>
> >>
> *org.apache.lucene.index.IndexWriter.applyAllDeletes*(*IndexWriter.java:2939*)
> >>>>       *at*
> >>>>
> >>
> *org.apache.lucene.index.IndexWriter.maybeApplyDeletes*(*IndexWriter.java:2930*)
> >>>>       *at*
> >>>>
> >>
> *org.apache.lucene.index.IndexWriter.prepareCommit*(*IndexWriter.java:2681*)
> >>>>       *at*
> >>>>
> >>
> *org.apache.lucene.index.IndexWriter.commitInternal*(*IndexWriter.java:2804*)
> >>>>       *at*
> >>>> *org.apache.lucene.index.IndexWriter.commit*(*IndexWriter.java:2786*)
> >>>>       *at*
> >>>>
> >>
> *org.apache.solr.update.DirectUpdateHandler2.commit*(*DirectUpdateHandler2.java:391*)
> >>>>       *at*
> >>>> *org.apache.solr.update.CommitTracker.run*(*CommitTracker.java:197*)
> >>>>       *at*
> >>>>
> >>
> *java.util.concurrent.Executors$RunnableAdapter.call*(*Executors.java:441*)
> >>>>       *at*
> >>>> *java.util.concurrent.FutureTask$Sync.innerRun*(*FutureTask.java:303*)
> >>>>       *at*
> *java.util.concurrent.FutureTask.run*(*FutureTask.java:138*)
> >>>>       *at*
> >>>>
> >>
> *java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301*(*ScheduledThreadPoolExecutor.java:98*)
> >>>>       *at*
> >>>>
> >>
> *java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run*(*ScheduledThreadPoolExecutor.java:206*)
> >>>>       *at*
> >>>>
> >>
> *java.util.concurrent.ThreadPoolExecutor$Worker.runTask*(*ThreadPoolExecutor.java:886*)
> >>>>       *at*
> >>>>
> >>
> *java.util.concurrent.ThreadPoolExecutor$Worker.run*(*ThreadPoolExecutor.java:908*)
> >>>>       *at* *java.lang.Thread.run*(*Thread.java:662*)*Caused* *by:*
> >>>> *java.lang.OutOfMemoryError:* *Map* *failed*
> >>>>       *at* *sun.nio.ch.FileChannelImpl.map0*(*Native* *Method*)
> >>>>       *at*
> *sun.nio.ch.FileChannelImpl.map*(*FileChannelImpl.java:745*)
> >>>>       *...* *28* *more*
> >>>>
> >>>> *
> >>>> *
> >>>>
> >>>> *
> >>>> *
> >>>>
> >>>> *
> >>>>
> >>>>
> >>>> SolrConfig.xml:
> >>>>
> >>>>
> >>>>       <indexDefaults>
> >>>>               <useCompoundFile>false</useCompoundFile>
> >>>>               <mergeFactor>10</mergeFactor>
> >>>>               <maxMergeDocs>2147483647</maxMergeDocs>
> >>>>               <maxFieldLength>10000</maxFieldLength-->
> >>>>               <ramBufferSizeMB>4096</ramBufferSizeMB>
> >>>>               <maxThreadStates>10</maxThreadStates>
> >>>>               <writeLockTimeout>1000</writeLockTimeout>
> >>>>               <commitLockTimeout>10000</commitLockTimeout>
> >>>>               <lockType>single</lockType>
> >>>>
> >>>>           <mergePolicy
> >> class="org.apache.lucene.index.TieredMergePolicy">
> >>>>             <double name="forceMergeDeletesPctAllowed">0.0</double>
> >>>>             <double name="reclaimDeletesWeight">10.0</double>
> >>>>           </mergePolicy>
> >>>>
> >>>>           <deletionPolicy class="solr.SolrDeletionPolicy">
> >>>>             <str name="keepOptimizedOnly">false</str>
> >>>>             <str name="maxCommitsToKeep">0</str>
> >>>>           </deletionPolicy>
> >>>>
> >>>>       </indexDefaults>
> >>>>
> >>>>
> >>>>       <updateHandler class="solr.DirectUpdateHandler2">
> >>>>           <maxPendingDeletes>1000</maxPendingDeletes>
> >>>>            <autoCommit>
> >>>>              <maxTime>900000</maxTime>
> >>>>              <openSearcher>false</openSearcher>
> >>>>            </autoCommit>
> >>>>            <autoSoftCommit>
> >>>>
> >>>> <maxTime>${inventory.solr.softcommit.duration:1000}</maxTime>
> >>>>            </autoSoftCommit>
> >>>>
> >>>>       </updateHandler>
> >>>>
> >>>>
> >>>>
> >>>> Thanks
> >>>> Gopal Patwa
> >>>> *
> >>>
> >>> ----------
> >>> From: Gopal Patwa <go...@gmail.com>
> >>> Date: Tue, Apr 10, 2012 at 8:35 PM
> >>> To: solr-user@lucene.apache.org
> >>>
> >>>
> >>> Michael, Thanks for response
> >>>
> >>> it was 65K as you mention the default value for "cat
> >>> /proc/sys/vm/max_map_count" . How we determine what value this should
> be?
> >>> is it number of document during hard commit in my case it is 15
> >> minutes? or
> >>> it is number of  index file or number of documents we have in all
> cores.
> >>>
> >>> I have raised the number to 140K but I still get when it reaches to
> >> 140K, we
> >>> have to restart jboss server to free up the map count, sometime OOM
> error
> >>> happen during "Error opening new searcher"
> >>>
> >>> is making this number to unlimited is only solution?''
> >>>
> >>>
> >>> Error log:
> >>>
> >>> location=CommitTracker line=93 auto commit
> >>> error...:org.apache.solr.common.SolrException: Error opening new
> searcher
> >>>      at
> >> org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1138)
> >>>      at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1251)
> >>>      at
> >>>
> >>
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:409)
> >>>      at
> org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
> >>>      at
> >> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> >>>      at
> >> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> >>>      at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> >>>      at
> >>>
> >>
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
> >>>      at
> >>>
> >>
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
> >>>      at
> >>>
> >>
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> >>>      at
> >>>
> >>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >>>      at java.lang.Thread.run(Thread.java:662)
> >>> Caused by: java.io.IOException: Map failed
> >>>      at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
> >>>      at
> >>>
> >>
> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
> >>>      at
> >> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
> >>>      at
> >>>
> >>
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:262)
> >>>      at
> >>>
> >>
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$1.<init>(PerFieldPostingsFormat.java:316)
> >>>      at
> >>>
> >>
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.files(PerFieldPostingsFormat.java:316)
> >>>      at org.apache.lucene.codecs.Codec.files(Codec.java:56)
> >>>      at org.apache.lucene.index.SegmentInfo.files(SegmentInfo.java:423)
> >>>      at
> >> org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:215)
> >>>      at
> >>>
> >>
> org.apache.lucene.index.IndexWriter.prepareFlushedSegment(IndexWriter.java:2220)
> >>>      at
> >>>
> >>
> org.apache.lucene.index.DocumentsWriter.publishFlushedSegment(DocumentsWriter.java:497)
> >>>      at
> >>>
> >>
> org.apache.lucene.index.DocumentsWriter.finishFlush(DocumentsWriter.java:477)
> >>>      at
> >>>
> >>
> org.apache.lucene.index.DocumentsWriterFlushQueue$SegmentFlushTicket.publish(DocumentsWriterFlushQueue.java:201)
> >>>      at
> >>>
> >>
> org.apache.lucene.index.DocumentsWriterFlushQueue.innerPurge(DocumentsWriterFlushQueue.java:119)
> >>>      at
> >>>
> >>
> org.apache.lucene.index.DocumentsWriterFlushQueue.tryPurge(DocumentsWriterFlushQueue.java:148)
> >>>      at org.apache.lucene.index.
> >>>
> >>> ...
> >>>
> >>> [Message clipped]
> >>> ----------
> >>> From: Michael McCandless <lu...@mikemccandless.com>
> >>> Date: Wed, Apr 11, 2012 at 2:20 AM
> >>> To: solr-user@lucene.apache.org
> >>>
> >>>
> >>> Hi,
> >>>
> >>> 65K is already a very large number and should have been sufficient...
> >>>
> >>> However: have you increased the merge factor?  Doing so increases the
> >>> open files (maps) required.
> >>>
> >>> Have you disabled compound file format?  (Hmmm: I think Solr does so
> >>> by default... which is dangerous).  Maybe try enabling compound file
> >>> format?
> >>>
> >>> Can you "ls -l" your index dir and post the results?
> >>>
> >>> It's also possible Solr isn't closing the old searchers quickly enough
> >>> ... I don't know the details on when Solr closes old searchers...
> >>>
> >>> Mike McCandless
> >>>
> >>> http://blog.mikemccandless.com
> >>>
> >>>
> >>>
> >>> On Tue, Apr 10, 2012 at 11:35 PM, Gopal Patwa <go...@gmail.com>
> >> wrote:
> >>>> Michael, Thanks for response
> >>>>
> >>>> it was 65K as you mention the default value for "cat
> >>>> /proc/sys/vm/max_map_count" . How we determine what value this should
> >> be?
> >>>> is it number of document during hard commit in my case it is 15
> >> minutes?
> >>>> or it is number of  index file or number of documents we have in all
> >>>> cores.
> >>>>
> >>>> I have raised the number to 140K but I still get when it reaches to
> >> 140K,
> >>>> we have to restart jboss server to free up the map count, sometime OOM
> >>>> error happen during "*Error opening new searcher"*
> >>>>
> >>>> is making this number to unlimited is only solution?''
> >>>>
> >>>>
> >>>> Error log:
> >>>>
> >>>> *location=CommitTracker line=93 auto commit
> >>>> error...:org.apache.solr.common.SolrException: Error opening new
> >>>> searcher
> >>>>       at
> >>>> org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1138)
> >>>>       at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1251)
> >>>>       at
> >>>>
> >>
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:409)
> >>>>       at
> >> org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
> >>>>       at
> >>>>
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> >>>>       at
> >>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> >>>>       at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> >>>>       at
> >>>>
> >>
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
> >>>>       at
> >>>>
> >>
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
> >>>>       at
> >>>>
> >>
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> >>>>       at
> >>>>
> >>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >>>>       at java.lang.Thread.run(Thread.java:662)Caused by:
> >>>> java.io.IOException: Map failed
> >>>>       at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
> >>>>       at
> >>>>
> >>
> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
> >>>>       at
> >>>>
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
> >>>>       at
> >>>>
> >>
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:262)
> >>>>       at
> >>>>
> >>
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$1.<init>(PerFieldPostingsFormat.java:316)
> >>>>       at
> >>>>
> >>
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.files(PerFieldPostingsFormat.java:316)
> >>>>       at org.apache.lucene.codecs.Codec.files(Codec.java:56)
> >>>>       at
> >> org.apache.lucene.index.SegmentInfo.files(SegmentInfo.java:423)
> >>>>       at
> >>>> org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:215)
> >>>>       at
> >>>>
> >>
> org.apache.lucene.index.IndexWriter.prepareFlushedSegment(IndexWriter.java:2220)
> >>>>       at
> >>>>
> >>
> org.apache.lucene.index.DocumentsWriter.publishFlushedSegment(DocumentsWriter.java:497)
> >>>>       at
> >>>>
> >>
> org.apache.lucene.index.DocumentsWriter.finishFlush(DocumentsWriter.java:477)
> >>>>       at
> >>>>
> >>
> org.apache.lucene.index.DocumentsWriterFlushQueue$SegmentFlushTicket.publish(DocumentsWriterFlushQueue.java:201)
> >>>>       at
> >>>>
> >>
> org.apache.lucene.index.DocumentsWriterFlushQueue.innerPurge(DocumentsWriterFlushQueue.java:119)
> >>>>       at
> >>>>
> >>
> org.apache.lucene.index.DocumentsWriterFlushQueue.tryPurge(DocumentsWriterFlushQueue.java:148)
> >>>>       at
> >>>>
> >>
> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:438)
> >>>>       at
> >>>>
> >>
> org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:553)
> >>>>       at
> >>>> org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:354)
> >>>>       at
> >>>>
> >>
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:258)
> >>>>       at
> >>>>
> >>
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:243)
> >>>>       at
> >>>>
> >>
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:250)
> >>>>       at
> >>>> org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1091)
> >>>>       ... 11 moreCaused by: java.lang.OutOfMemoryError: Map failed
> >>>>       at sun.nio.ch.FileChannelImpl.map0(Native Method)
> >>>>       at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)*
> >>>>
> >>>>
> >>>>
> >>>> And one more issue we came across i.e
> >>>
> >>>
> >>
> >
> >
>

Re: Large Index and OutOfMemoryError: Map failed

Posted by Boon Low <bo...@brightsolid.com>.
Hi,

We also came across this error recently, while indexing with more than 10 DIH processes in parallel and the default index settings. The JVM grinds to a halt and throws this error, and checking the index of a core reveals thousands of files! Raising the default autoCommit interval from 15000ms to 900000ms solved the problem for us (no 'autoSoftCommit').

Boon 

-----
Boon Low
Search UX and Engine Developer
brightsolid Online Publishing

On 14 Apr 2012, at 17:40, Gopal Patwa wrote:

> I checked it was "MMapDirectory.UNMAP_SUPPORTED=true" and below are my
> system data. Is their any existing test case to reproduce this issue? I am
> trying understand how I can reproduce this issue with unit/integration test
> 
> I will try recent solr trunk build too,  if it is some bug in solr or
> lucene keeping old searcher open then how to reproduce it?
> 
> SYSTEM DATA
> ===========
> PROCESSOR: Intel(R) Xeon(R) CPU E5504 @ 2.00GHz
> SYSTEM ID: x86_64
> CURRENT CPU SPEED: 1600.000 MHz
> CPUS: 8 processor(s)
> MEMORY: 49449296 kB
> DISTRIBUTION: CentOS release 5.3 (Final)
> KERNEL NAME: 2.6.18-128.el5
> UPTIME: up 71 days
> LOAD AVERAGE: 1.42, 1.45, 1.53
> JBOSS Version: Implementation-Version: 4.2.2.GA (build:
> SVNTag=JBoss_4_2_2_GA date=20
> JAVA Version: java version "1.6.0_24"
> 
> 
> On Thu, Apr 12, 2012 at 3:07 AM, Michael McCandless <
> lucene@mikemccandless.com> wrote:
> 
>> Your largest index has 66 segments (690 files) ... biggish but not
>> insane.  With 64K maps you should be able to have ~47 searchers open
>> on each core.
>> 
>> Enabling compound file format (not the opposite!) will mean fewer maps
>> ... ie should improve this situation.
>> 
>> I don't understand why Solr defaults to compound file off... that
>> seems dangerous.
>> 
>> Really we need a Solr dev here... to answer "how long is a stale
>> searcher kept open".  Is it somehow possible 46 old searchers are
>> being left open...?
>> 
>> I don't see any other reason why you'd run out of maps.  Hmm, unless
>> MMapDirectory didn't think it could safely invoke unmap in your JVM.
>> Which exact JVM are you using?  If you can print the
>> MMapDirectory.UNMAP_SUPPORTED constant, we'd know for sure.
>> 
>> Yes, switching away from MMapDir will sidestep the "too many maps"
>> issue, however, 1) MMapDir has better perf than NIOFSDir, and 2) if
>> there really is a leak here (Solr not closing the old searchers or a
>> Lucene bug or something...) then you'll eventually run out of file
>> descriptors (ie, same  problem, different manifestation).
>> 
>> Mike McCandless
>> 
>> http://blog.mikemccandless.com
>> 
>> 2012/4/11 Gopal Patwa <go...@gmail.com>:
>>> 
>>> I have not change the mergefactor, it was 10. Compound index file is
>> disable
>>> in my config but I read from below post, that some one had similar issue
>> and
>>> it was resolved by switching from compound index file format to
>> non-compound
>>> index file.
>>> 
>>> and some folks resolved by "changing lucene code to disable
>> MMapDirectory."
>>> Is this best practice to do, if so is this can be done in configuration?
>>> 
>>> 
>> http://lucene.472066.n3.nabble.com/MMapDirectory-failed-to-map-a-23G-compound-index-segment-td3317208.html
>>> 
>>> I have index document of core1 = 5 million, core2=8million and
>>> core3=3million and all index are hosted in single Solr instance
>>> 
>>> I am going to use Solr for our site StubHub.com, see attached "ls -l"
>> list
>>> of index files for all core
>>> 
>>> SolrConfig.xml:
>>> 
>>> 
>>>      <indexDefaults>
>>>              <useCompoundFile>false</useCompoundFile>
>>>              <mergeFactor>10</mergeFactor>
>>>              <maxMergeDocs>2147483647</maxMergeDocs>
>>>              <maxFieldLength>10000</maxFieldLength-->
>>>              <ramBufferSizeMB>4096</ramBufferSizeMB>
>>>              <maxThreadStates>10</maxThreadStates>
>>>              <writeLockTimeout>1000</writeLockTimeout>
>>>              <commitLockTimeout>10000</commitLockTimeout>
>>>              <lockType>single</lockType>
>>> 
>>>          <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
>>>            <double name="forceMergeDeletesPctAllowed">0.0</double>
>>>            <double name="reclaimDeletesWeight">10.0</double>
>>>          </mergePolicy>
>>> 
>>>          <deletionPolicy class="solr.SolrDeletionPolicy">
>>>            <str name="keepOptimizedOnly">false</str>
>>>            <str name="maxCommitsToKeep">0</str>
>>>          </deletionPolicy>
>>> 
>>>      </indexDefaults>
>>> 
>>> 
>>>      <updateHandler class="solr.DirectUpdateHandler2">
>>>          <maxPendingDeletes>1000</maxPendingDeletes>
>>>           <autoCommit>
>>>             <maxTime>900000</maxTime>
>>>             <openSearcher>false</openSearcher>
>>>           </autoCommit>
>>>           <autoSoftCommit>
>>> 
>> <maxTime>${inventory.solr.softcommit.duration:1000}</maxTime>
>>>           </autoSoftCommit>
>>> 
>>>      </updateHandler>
>>> 
>>> 
>>> Forwarded conversation
>>> Subject: Large Index and OutOfMemoryError: Map failed
>>> ------------------------
>>> 
>>> From: Gopal Patwa <go...@gmail.com>
>>> Date: Fri, Mar 30, 2012 at 10:26 PM
>>> To: solr-user@lucene.apache.org
>>> 
>>> 
>>> I need help!!
>>> 
>>> 
>>> 
>>> 
>>> 
>>> I am using Solr 4.0 nightly build with NRT and I often get this error
>> during
>>> auto commit "java.lang.OutOfMemoryError: Map failed". I have search this
>>> forum and what I found it is related to OS ulimit setting, please se
>> below
>>> my ulimit settings. I am not sure what ulimit setting I should have? and
>> we
>>> also get "java.net.SocketException: Too many open files" NOT sure how
>> many
>>> open file we need to set?
>>> 
>>> 
>>> I have 3 core with index size : core1 - 70GB, Core2 - 50GB and Core3 -
>> 15GB,
>>> with Single shard
>>> 
>>> 
>>> We update the index every 5 seconds, soft commit every 1 second and hard
>>> commit every 15 minutes
>>> 
>>> 
>>> Environment: Jboss 4.2, JDK 1.6 , CentOS, JVM Heap Size = 24GB
>>> 
>>> 
>>> ulimit:
>>> 
>>> core file size          (blocks, -c) 0
>>> data seg size           (kbytes, -d) unlimited
>>> scheduling priority             (-e) 0
>>> file size               (blocks, -f) unlimited
>>> pending signals                 (-i) 401408
>>> max locked memory       (kbytes, -l) 1024
>>> max memory size         (kbytes, -m) unlimited
>>> open files                      (-n) 1024
>>> pipe size            (512 bytes, -p) 8
>>> POSIX message queues     (bytes, -q) 819200
>>> real-time priority              (-r) 0
>>> stack size              (kbytes, -s) 10240
>>> cpu time               (seconds, -t) unlimited
>>> max user processes              (-u) 401408
>>> virtual memory          (kbytes, -v) unlimited
>>> file locks                      (-x) unlimited
>>> 
>>> 
>>> 
>>> ERROR:
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 2012-03-29 15:14:08,560 [] priority=ERROR app_name=
>> thread=pool-3-thread-1
>>> location=CommitTracker line=93 auto commit error...:java.io.IOException:
>> Map
>>> failed
>>>      at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
>>>      at
>>> 
>> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
>>>      at
>> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
>>>      at
>>> 
>> org.apache.lucene.codecs.lucene40.Lucene40PostingsReader.<init>(Lucene40PostingsReader.java:58)
>>>      at
>>> 
>> org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsProducer(Lucene40PostingsFormat.java:80)
>>>      at
>>> 
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.visitOneFormat(PerFieldPostingsFormat.java:189)
>>>      at
>>> 
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:280)
>>>      at
>>> 
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.<init>(PerFieldPostingsFormat.java:186)
>>>      at
>>> 
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:186)
>>>      at
>>> 
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:256)
>>>      at
>>> 
>> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:108)
>>>      at
>> org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:51)
>>>      at
>>> 
>> org.apache.lucene.index.IndexWriter$ReadersAndLiveDocs.getReader(IndexWriter.java:494)
>>>      at
>>> 
>> org.apache.lucene.index.BufferedDeletesStream.applyDeletes(BufferedDeletesStream.java:214)
>>>      at
>>> 
>> org.apache.lucene.index.IndexWriter.applyAllDeletes(IndexWriter.java:2939)
>>>      at
>>> 
>> org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:2930)
>>>      at
>> org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:2681)
>>>      at
>>> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2804)
>>>      at
>> org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2786)
>>>      at
>>> 
>> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:391)
>>>      at org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
>>>      at
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>>>      at
>>> 
>>> ...
>>> 
>>> [Message clipped]
>>> ----------
>>> From: Michael McCandless <lu...@mikemccandless.com>
>>> Date: Sat, Mar 31, 2012 at 3:15 AM
>>> To: solr-user@lucene.apache.org
>>> 
>>> 
>>> It's the virtual memory limit that matters; yours says unlimited below
>>> (good!), but, are you certain that's really the limit your Solr
>>> process runs with?
>>> 
>>> On Linux, there is also a per-process map count:
>>> 
>>>   cat /proc/sys/vm/max_map_count
>>> 
>>> I think it typically defaults to 65,536 but you should check on your
>>> env.  If a process tries to map more than this many regions, you'll
>>> hit that exception.
>>> 
>>> I think you can:
>>> 
>>> cat /proc/<pid>/maps | wc
>>> 
>>> to see how many maps your Solr process currently has... if that is
>>> anywhere near the limit then it could be the cause.
>>> 
>>> Mike McCandless
>>> 
>>> http://blog.mikemccandless.com
>>> 
>>> On Sat, Mar 31, 2012 at 1:26 AM, Gopal Patwa <go...@gmail.com>
>> wrote:
>>>> *I need help!!*
>>>> 
>>>> *
>>>> *
>>>> 
>>>> *I am using Solr 4.0 nightly build with NRT and I often get this error
>>>> during auto commit "**java.lang.OutOfMemoryError:* *Map* *failed". I
>>>> have search this forum and what I found it is related to OS ulimit
>>>> setting, please se below my ulimit settings. I am not sure what ulimit
>>>> setting I should have? and we also get "**java.net.SocketException:*
>>>> *Too* *many* *open* *files" NOT sure how many open file we need to
>>>> set?*
>>>> 
>>>> 
>>>> I have 3 core with index size : core1 - 70GB, Core2 - 50GB and Core3 -
>>>> 15GB, with Single shard
>>>> 
>>>> *
>>>> *
>>>> 
>>>> *We update the index every 5 seconds, soft commit every 1 second and
>>>> hard commit every 15 minutes*
>>>> 
>>>> *
>>>> *
>>>> 
>>>> *Environment: Jboss 4.2, JDK 1.6 , CentOS, JVM Heap Size = 24GB*
>>>> 
>>>> *
>>>> *
>>>> 
>>>> ulimit:
>>>> 
>>>> core file size          (blocks, -c) 0
>>>> data seg size           (kbytes, -d) unlimited
>>>> scheduling priority             (-e) 0
>>>> file size               (blocks, -f) unlimited
>>>> pending signals                 (-i) 401408
>>>> max locked memory       (kbytes, -l) 1024
>>>> max memory size         (kbytes, -m) unlimited
>>>> open files                      (-n) 1024
>>>> pipe size            (512 bytes, -p) 8
>>>> POSIX message queues     (bytes, -q) 819200
>>>> real-time priority              (-r) 0
>>>> stack size              (kbytes, -s) 10240
>>>> cpu time               (seconds, -t) unlimited
>>>> max user processes              (-u) 401408
>>>> virtual memory          (kbytes, -v) unlimited
>>>> file locks                      (-x) unlimited
>>>> 
>>>> 
>>>> *
>>>> *
>>>> 
>>>> *ERROR:*
>>>> 
>>>> *
>>>> *
>>>> 
>>>> *2012-03-29* *15:14:08*,*560* [] *priority=ERROR* *app_name=*
>>>> *thread=pool-3-thread-1* *location=CommitTracker* *line=93* *auto*
>>>> *commit* *error...:java.io.IOException:* *Map* *failed*
>>>>       *at* *sun.nio.ch.FileChannelImpl.map*(*FileChannelImpl.java:748*)
>>>>       *at*
>>>> 
>> *org.apache.lucene.store.MMapDirectory$MMapIndexInput.*<*init*>(*MMapDirectory.java:293*)
>>>>       *at*
>>>> 
>> *org.apache.lucene.store.MMapDirectory.openInput*(*MMapDirectory.java:221*)
>>>>       *at*
>>>> 
>> *org.apache.lucene.codecs.lucene40.Lucene40PostingsReader.*<*init*>(*Lucene40PostingsReader.java:58*)
>>>>       *at*
>>>> 
>> *org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsProducer*(*Lucene40PostingsFormat.java:80*)
>>>>       *at*
>>>> 
>> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.visitOneFormat*(*PerFieldPostingsFormat.java:189*)
>>>>       *at*
>>>> 
>> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.*<*init*>(*PerFieldPostingsFormat.java:280*)
>>>>       *at*
>>>> 
>> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.*<*init*>(*PerFieldPostingsFormat.java:186*)
>>>>       *at*
>>>> 
>> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.*<*init*>(*PerFieldPostingsFormat.java:186*)
>>>>       *at*
>>>> 
>> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer*(*PerFieldPostingsFormat.java:256*)
>>>>       *at*
>>>> 
>> *org.apache.lucene.index.SegmentCoreReaders.*<*init*>(*SegmentCoreReaders.java:108*)
>>>>       *at*
>>>> 
>> *org.apache.lucene.index.SegmentReader.*<*init*>(*SegmentReader.java:51*)
>>>>       *at*
>>>> 
>> *org.apache.lucene.index.IndexWriter$ReadersAndLiveDocs.getReader*(*IndexWriter.java:494*)
>>>>       *at*
>>>> 
>> *org.apache.lucene.index.BufferedDeletesStream.applyDeletes*(*BufferedDeletesStream.java:214*)
>>>>       *at*
>>>> 
>> *org.apache.lucene.index.IndexWriter.applyAllDeletes*(*IndexWriter.java:2939*)
>>>>       *at*
>>>> 
>> *org.apache.lucene.index.IndexWriter.maybeApplyDeletes*(*IndexWriter.java:2930*)
>>>>       *at*
>>>> 
>> *org.apache.lucene.index.IndexWriter.prepareCommit*(*IndexWriter.java:2681*)
>>>>       *at*
>>>> 
>> *org.apache.lucene.index.IndexWriter.commitInternal*(*IndexWriter.java:2804*)
>>>>       *at*
>>>> *org.apache.lucene.index.IndexWriter.commit*(*IndexWriter.java:2786*)
>>>>       *at*
>>>> 
>> *org.apache.solr.update.DirectUpdateHandler2.commit*(*DirectUpdateHandler2.java:391*)
>>>>       *at*
>>>> *org.apache.solr.update.CommitTracker.run*(*CommitTracker.java:197*)
>>>>       *at*
>>>> 
>> *java.util.concurrent.Executors$RunnableAdapter.call*(*Executors.java:441*)
>>>>       *at*
>>>> *java.util.concurrent.FutureTask$Sync.innerRun*(*FutureTask.java:303*)
>>>>       *at* *java.util.concurrent.FutureTask.run*(*FutureTask.java:138*)
>>>>       *at*
>>>> 
>> *java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301*(*ScheduledThreadPoolExecutor.java:98*)
>>>>       *at*
>>>> 
>> *java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run*(*ScheduledThreadPoolExecutor.java:206*)
>>>>       *at*
>>>> 
>> *java.util.concurrent.ThreadPoolExecutor$Worker.runTask*(*ThreadPoolExecutor.java:886*)
>>>>       *at*
>>>> 
>> *java.util.concurrent.ThreadPoolExecutor$Worker.run*(*ThreadPoolExecutor.java:908*)
>>>>       *at* *java.lang.Thread.run*(*Thread.java:662*)*Caused* *by:*
>>>> *java.lang.OutOfMemoryError:* *Map* *failed*
>>>>       *at* *sun.nio.ch.FileChannelImpl.map0*(*Native* *Method*)
>>>>       *at* *sun.nio.ch.FileChannelImpl.map*(*FileChannelImpl.java:745*)
>>>>       *...* *28* *more*
>>>> 
>>>> *
>>>> *
>>>> 
>>>> *
>>>> *
>>>> 
>>>> *
>>>> 
>>>> 
>>>> SolrConfig.xml:
>>>> 
>>>> 
>>>>       <indexDefaults>
>>>>               <useCompoundFile>false</useCompoundFile>
>>>>               <mergeFactor>10</mergeFactor>
>>>>               <maxMergeDocs>2147483647</maxMergeDocs>
>>>>               <maxFieldLength>10000</maxFieldLength-->
>>>>               <ramBufferSizeMB>4096</ramBufferSizeMB>
>>>>               <maxThreadStates>10</maxThreadStates>
>>>>               <writeLockTimeout>1000</writeLockTimeout>
>>>>               <commitLockTimeout>10000</commitLockTimeout>
>>>>               <lockType>single</lockType>
>>>> 
>>>>           <mergePolicy
>> class="org.apache.lucene.index.TieredMergePolicy">
>>>>             <double name="forceMergeDeletesPctAllowed">0.0</double>
>>>>             <double name="reclaimDeletesWeight">10.0</double>
>>>>           </mergePolicy>
>>>> 
>>>>           <deletionPolicy class="solr.SolrDeletionPolicy">
>>>>             <str name="keepOptimizedOnly">false</str>
>>>>             <str name="maxCommitsToKeep">0</str>
>>>>           </deletionPolicy>
>>>> 
>>>>       </indexDefaults>
>>>> 
>>>> 
>>>>       <updateHandler class="solr.DirectUpdateHandler2">
>>>>           <maxPendingDeletes>1000</maxPendingDeletes>
>>>>            <autoCommit>
>>>>              <maxTime>900000</maxTime>
>>>>              <openSearcher>false</openSearcher>
>>>>            </autoCommit>
>>>>            <autoSoftCommit>
>>>> 
>>>> <maxTime>${inventory.solr.softcommit.duration:1000}</maxTime>
>>>>            </autoSoftCommit>
>>>> 
>>>>       </updateHandler>
>>>> 
>>>> 
>>>> 
>>>> Thanks
>>>> Gopal Patwa
>>>> *
>>> 
>>> ----------
>>> From: Gopal Patwa <go...@gmail.com>
>>> Date: Tue, Apr 10, 2012 at 8:35 PM
>>> To: solr-user@lucene.apache.org
>>> 
>>> 
>>> Michael, Thanks for response
>>> 
>>> it was 65K as you mention the default value for "cat
>>> /proc/sys/vm/max_map_count" . How we determine what value this should be?
>>> is it number of document during hard commit in my case it is 15
>> minutes? or
>>> it is number of  index file or number of documents we have in all cores.
>>> 
>>> I have raised the number to 140K but I still get when it reaches to
>> 140K, we
>>> have to restart jboss server to free up the map count, sometime OOM error
>>> happen during "Error opening new searcher"
>>> 
>>> is making this number to unlimited is only solution?''
>>> 
>>> 
>>> Error log:
>>> 
>>> location=CommitTracker line=93 auto commit
>>> error...:org.apache.solr.common.SolrException: Error opening new searcher
>>>      at
>> org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1138)
>>>      at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1251)
>>>      at
>>> 
>> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:409)
>>>      at org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
>>>      at
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>>>      at
>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>>      at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>>      at
>>> 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
>>>      at
>>> 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
>>>      at
>>> 
>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>      at
>>> 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>      at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.io.IOException: Map failed
>>>      at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
>>>      at
>>> 
>> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
>>>      at
>> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
>>>      at
>>> 
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:262)
>>>      at
>>> 
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$1.<init>(PerFieldPostingsFormat.java:316)
>>>      at
>>> 
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.files(PerFieldPostingsFormat.java:316)
>>>      at org.apache.lucene.codecs.Codec.files(Codec.java:56)
>>>      at org.apache.lucene.index.SegmentInfo.files(SegmentInfo.java:423)
>>>      at
>> org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:215)
>>>      at
>>> 
>> org.apache.lucene.index.IndexWriter.prepareFlushedSegment(IndexWriter.java:2220)
>>>      at
>>> 
>> org.apache.lucene.index.DocumentsWriter.publishFlushedSegment(DocumentsWriter.java:497)
>>>      at
>>> 
>> org.apache.lucene.index.DocumentsWriter.finishFlush(DocumentsWriter.java:477)
>>>      at
>>> 
>> org.apache.lucene.index.DocumentsWriterFlushQueue$SegmentFlushTicket.publish(DocumentsWriterFlushQueue.java:201)
>>>      at
>>> 
>> org.apache.lucene.index.DocumentsWriterFlushQueue.innerPurge(DocumentsWriterFlushQueue.java:119)
>>>      at
>>> 
>> org.apache.lucene.index.DocumentsWriterFlushQueue.tryPurge(DocumentsWriterFlushQueue.java:148)
>>>      at org.apache.lucene.index.
>>> 
>>> ...
>>> 
>>> [Message clipped]
>>> ----------
>>> From: Michael McCandless <lu...@mikemccandless.com>
>>> Date: Wed, Apr 11, 2012 at 2:20 AM
>>> To: solr-user@lucene.apache.org
>>> 
>>> 
>>> Hi,
>>> 
>>> 65K is already a very large number and should have been sufficient...
>>> 
>>> However: have you increased the merge factor?  Doing so increases the
>>> open files (maps) required.
>>> 
>>> Have you disabled compound file format?  (Hmmm: I think Solr does so
>>> by default... which is dangerous).  Maybe try enabling compound file
>>> format?
>>> 
>>> Can you "ls -l" your index dir and post the results?
>>> 
>>> It's also possible Solr isn't closing the old searchers quickly enough
>>> ... I don't know the details on when Solr closes old searchers...
>>> 
>>> Mike McCandless
>>> 
>>> http://blog.mikemccandless.com
>>> 
>>> 
>>> 
>>> On Tue, Apr 10, 2012 at 11:35 PM, Gopal Patwa <go...@gmail.com>
>> wrote:
>>>> Michael, Thanks for response
>>>> 
>>>> it was 65K as you mention the default value for "cat
>>>> /proc/sys/vm/max_map_count" . How we determine what value this should
>> be?
>>>> is it number of document during hard commit in my case it is 15
>> minutes?
>>>> or it is number of  index file or number of documents we have in all
>>>> cores.
>>>> 
>>>> I have raised the number to 140K but I still get when it reaches to
>> 140K,
>>>> we have to restart jboss server to free up the map count, sometime OOM
>>>> error happen during "*Error opening new searcher"*
>>>> 
>>>> is making this number to unlimited is only solution?''
>>>> 
>>>> 
>>>> Error log:
>>>> 
>>>> *location=CommitTracker line=93 auto commit
>>>> error...:org.apache.solr.common.SolrException: Error opening new
>>>> searcher
>>>>       at
>>>> org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1138)
>>>>       at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1251)
>>>>       at
>>>> 
>> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:409)
>>>>       at
>> org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
>>>>       at
>>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>>>>       at
>>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>>>       at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>>>       at
>>>> 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
>>>>       at
>>>> 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
>>>>       at
>>>> 
>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>       at
>>>> 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>       at java.lang.Thread.run(Thread.java:662)Caused by:
>>>> java.io.IOException: Map failed
>>>>       at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
>>>>       at
>>>> 
>> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
>>>>       at
>>>> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
>>>>       at
>>>> 
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:262)
>>>>       at
>>>> 
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$1.<init>(PerFieldPostingsFormat.java:316)
>>>>       at
>>>> 
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.files(PerFieldPostingsFormat.java:316)
>>>>       at org.apache.lucene.codecs.Codec.files(Codec.java:56)
>>>>       at
>> org.apache.lucene.index.SegmentInfo.files(SegmentInfo.java:423)
>>>>       at
>>>> org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:215)
>>>>       at
>>>> 
>> org.apache.lucene.index.IndexWriter.prepareFlushedSegment(IndexWriter.java:2220)
>>>>       at
>>>> 
>> org.apache.lucene.index.DocumentsWriter.publishFlushedSegment(DocumentsWriter.java:497)
>>>>       at
>>>> 
>> org.apache.lucene.index.DocumentsWriter.finishFlush(DocumentsWriter.java:477)
>>>>       at
>>>> 
>> org.apache.lucene.index.DocumentsWriterFlushQueue$SegmentFlushTicket.publish(DocumentsWriterFlushQueue.java:201)
>>>>       at
>>>> 
>> org.apache.lucene.index.DocumentsWriterFlushQueue.innerPurge(DocumentsWriterFlushQueue.java:119)
>>>>       at
>>>> 
>> org.apache.lucene.index.DocumentsWriterFlushQueue.tryPurge(DocumentsWriterFlushQueue.java:148)
>>>>       at
>>>> 
>> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:438)
>>>>       at
>>>> 
>> org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:553)
>>>>       at
>>>> org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:354)
>>>>       at
>>>> 
>> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:258)
>>>>       at
>>>> 
>> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:243)
>>>>       at
>>>> 
>> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:250)
>>>>       at
>>>> org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1091)
>>>>       ... 11 moreCaused by: java.lang.OutOfMemoryError: Map failed
>>>>       at sun.nio.ch.FileChannelImpl.map0(Native Method)
>>>>       at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)*
>>>> 
>>>> 
>>>> 
>>>> And one more issue we came across i.e
>>> 
>>> 
>> 
> 
> 

Re: Large Index and OutOfMemoryError: Map failed

Posted by Gopal Patwa <go...@gmail.com>.
I checked and it was "MMapDirectory.UNMAP_SUPPORTED=true"; my system data is
below. Is there any existing test case that reproduces this issue? I am
trying to understand how I can reproduce it with a unit/integration test.

I will also try a recent Solr trunk build. If it is a bug in Solr or Lucene
keeping old searchers open, how would I reproduce that?
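
For reference, the check itself is just printing the Lucene constant; a
minimal sketch of what I ran (assuming the Lucene 4.x trunk jars on the
classpath) looks like this:

    import org.apache.lucene.store.MMapDirectory;

    public class CheckUnmap {
        public static void main(String[] args) {
            // true when the JVM lets Lucene unmap MappedByteBuffers, so
            // closing an IndexInput releases its map region immediately
            System.out.println("UNMAP_SUPPORTED=" + MMapDirectory.UNMAP_SUPPORTED);
        }
    }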

SYSTEM DATA
===========
PROCESSOR: Intel(R) Xeon(R) CPU E5504 @ 2.00GHz
SYSTEM ID: x86_64
CURRENT CPU SPEED: 1600.000 MHz
CPUS: 8 processor(s)
MEMORY: 49449296 kB
DISTRIBUTION: CentOS release 5.3 (Final)
KERNEL NAME: 2.6.18-128.el5
UPTIME: up 71 days
LOAD AVERAGE: 1.42, 1.45, 1.53
JBOSS Version: Implementation-Version: 4.2.2.GA (build:
SVNTag=JBoss_4_2_2_GA date=20
JAVA Version: java version "1.6.0_24"


On Thu, Apr 12, 2012 at 3:07 AM, Michael McCandless <
lucene@mikemccandless.com> wrote:

> Your largest index has 66 segments (690 files) ... biggish but not
> insane.  With 64K maps you should be able to have ~47 searchers open
> on each core.
>
> Enabling compound file format (not the opposite!) will mean fewer maps
> ... ie should improve this situation.
>
> I don't understand why Solr defaults to compound file off... that
> seems dangerous.
>
> Really we need a Solr dev here... to answer "how long is a stale
> searcher kept open".  Is it somehow possible 46 old searchers are
> being left open...?
>
> I don't see any other reason why you'd run out of maps.  Hmm, unless
> MMapDirectory didn't think it could safely invoke unmap in your JVM.
> Which exact JVM are you using?  If you can print the
> MMapDirectory.UNMAP_SUPPORTED constant, we'd know for sure.
>
> Yes, switching away from MMapDir will sidestep the "too many maps"
> issue, however, 1) MMapDir has better perf than NIOFSDir, and 2) if
> there really is a leak here (Solr not closing the old searchers or a
> Lucene bug or something...) then you'll eventually run out of file
> descriptors (ie, same  problem, different manifestation).
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> 2012/4/11 Gopal Patwa <go...@gmail.com>:
> >
> > I have not change the mergefactor, it was 10. Compound index file is
> disable
> > in my config but I read from below post, that some one had similar issue
> and
> > it was resolved by switching from compound index file format to
> non-compound
> > index file.
> >
> > and some folks resolved by "changing lucene code to disable
> MMapDirectory."
> > Is this best practice to do, if so is this can be done in configuration?
> >
> >
> http://lucene.472066.n3.nabble.com/MMapDirectory-failed-to-map-a-23G-compound-index-segment-td3317208.html
> >
> > I have index document of core1 = 5 million, core2=8million and
> > core3=3million and all index are hosted in single Solr instance
> >
> > I am going to use Solr for our site StubHub.com, see attached "ls -l"
> list
> > of index files for all core
> >
> > SolrConfig.xml:
> >
> >
> >       <indexDefaults>
> >               <useCompoundFile>false</useCompoundFile>
> >               <mergeFactor>10</mergeFactor>
> >               <maxMergeDocs>2147483647</maxMergeDocs>
> >               <maxFieldLength>10000</maxFieldLength-->
> >               <ramBufferSizeMB>4096</ramBufferSizeMB>
> >               <maxThreadStates>10</maxThreadStates>
> >               <writeLockTimeout>1000</writeLockTimeout>
> >               <commitLockTimeout>10000</commitLockTimeout>
> >               <lockType>single</lockType>
> >
> >           <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
> >             <double name="forceMergeDeletesPctAllowed">0.0</double>
> >             <double name="reclaimDeletesWeight">10.0</double>
> >           </mergePolicy>
> >
> >           <deletionPolicy class="solr.SolrDeletionPolicy">
> >             <str name="keepOptimizedOnly">false</str>
> >             <str name="maxCommitsToKeep">0</str>
> >           </deletionPolicy>
> >
> >       </indexDefaults>
> >
> >
> >       <updateHandler class="solr.DirectUpdateHandler2">
> >           <maxPendingDeletes>1000</maxPendingDeletes>
> >            <autoCommit>
> >              <maxTime>900000</maxTime>
> >              <openSearcher>false</openSearcher>
> >            </autoCommit>
> >            <autoSoftCommit>
> >
>  <maxTime>${inventory.solr.softcommit.duration:1000}</maxTime>
> >            </autoSoftCommit>
> >
> >       </updateHandler>
> >
> >
> > Forwarded conversation
> > Subject: Large Index and OutOfMemoryError: Map failed
> > ------------------------
> >
> > From: Gopal Patwa <go...@gmail.com>
> > Date: Fri, Mar 30, 2012 at 10:26 PM
> > To: solr-user@lucene.apache.org
> >
> >
> > I need help!!
> >
> >
> >
> >
> >
> > I am using Solr 4.0 nightly build with NRT and I often get this error
> during
> > auto commit "java.lang.OutOfMemoryError: Map failed". I have search this
> > forum and what I found it is related to OS ulimit setting, please se
> below
> > my ulimit settings. I am not sure what ulimit setting I should have? and
> we
> > also get "java.net.SocketException: Too many open files" NOT sure how
> many
> > open file we need to set?
> >
> >
> > I have 3 core with index size : core1 - 70GB, Core2 - 50GB and Core3 -
> 15GB,
> > with Single shard
> >
> >
> > We update the index every 5 seconds, soft commit every 1 second and hard
> > commit every 15 minutes
> >
> >
> > Environment: Jboss 4.2, JDK 1.6 , CentOS, JVM Heap Size = 24GB
> >
> >
> > ulimit:
> >
> > core file size          (blocks, -c) 0
> > data seg size           (kbytes, -d) unlimited
> > scheduling priority             (-e) 0
> > file size               (blocks, -f) unlimited
> > pending signals                 (-i) 401408
> > max locked memory       (kbytes, -l) 1024
> > max memory size         (kbytes, -m) unlimited
> > open files                      (-n) 1024
> > pipe size            (512 bytes, -p) 8
> > POSIX message queues     (bytes, -q) 819200
> > real-time priority              (-r) 0
> > stack size              (kbytes, -s) 10240
> > cpu time               (seconds, -t) unlimited
> > max user processes              (-u) 401408
> > virtual memory          (kbytes, -v) unlimited
> > file locks                      (-x) unlimited
> >
> >
> >
> > ERROR:
> >
> >
> >
> >
> >
> > 2012-03-29 15:14:08,560 [] priority=ERROR app_name=
> thread=pool-3-thread-1
> > location=CommitTracker line=93 auto commit error...:java.io.IOException:
> Map
> > failed
> >       at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
> >       at
> >
> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
> >       at
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
> >       at
> >
> org.apache.lucene.codecs.lucene40.Lucene40PostingsReader.<init>(Lucene40PostingsReader.java:58)
> >       at
> >
> org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsProducer(Lucene40PostingsFormat.java:80)
> >       at
> >
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.visitOneFormat(PerFieldPostingsFormat.java:189)
> >       at
> >
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:280)
> >       at
> >
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.<init>(PerFieldPostingsFormat.java:186)
> >       at
> >
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:186)
> >       at
> >
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:256)
> >       at
> >
> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:108)
> >       at
> org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:51)
> >       at
> >
> org.apache.lucene.index.IndexWriter$ReadersAndLiveDocs.getReader(IndexWriter.java:494)
> >       at
> >
> org.apache.lucene.index.BufferedDeletesStream.applyDeletes(BufferedDeletesStream.java:214)
> >       at
> >
> org.apache.lucene.index.IndexWriter.applyAllDeletes(IndexWriter.java:2939)
> >       at
> >
> org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:2930)
> >       at
> org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:2681)
> >       at
> > org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2804)
> >       at
> org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2786)
> >       at
> >
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:391)
> >       at org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
> >       at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> >       at
> >
> > ...
> >
> > [Message clipped]
> > ----------
> > From: Michael McCandless <lu...@mikemccandless.com>
> > Date: Sat, Mar 31, 2012 at 3:15 AM
> > To: solr-user@lucene.apache.org
> >
> >
> > It's the virtual memory limit that matters; yours says unlimited below
> > (good!), but, are you certain that's really the limit your Solr
> > process runs with?
> >
> > On Linux, there is also a per-process map count:
> >
> >    cat /proc/sys/vm/max_map_count
> >
> > I think it typically defaults to 65,536 but you should check on your
> > env.  If a process tries to map more than this many regions, you'll
> > hit that exception.
> >
> > I think you can:
> >
> >  cat /proc/<pid>/maps | wc
> >
> > to see how many maps your Solr process currently has... if that is
> > anywhere near the limit then it could be the cause.
> >
> > Mike McCandless
> >
> > http://blog.mikemccandless.com
> >
> > On Sat, Mar 31, 2012 at 1:26 AM, Gopal Patwa <go...@gmail.com>
> wrote:
> >> *I need help!!*
> >>
> >> *
> >> *
> >>
> >> *I am using Solr 4.0 nightly build with NRT and I often get this error
> >> during auto commit "**java.lang.OutOfMemoryError:* *Map* *failed". I
> >> have search this forum and what I found it is related to OS ulimit
> >> setting, please se below my ulimit settings. I am not sure what ulimit
> >> setting I should have? and we also get "**java.net.SocketException:*
> >> *Too* *many* *open* *files" NOT sure how many open file we need to
> >> set?*
> >>
> >>
> >> I have 3 core with index size : core1 - 70GB, Core2 - 50GB and Core3 -
> >> 15GB, with Single shard
> >>
> >> *
> >> *
> >>
> >> *We update the index every 5 seconds, soft commit every 1 second and
> >> hard commit every 15 minutes*
> >>
> >> *
> >> *
> >>
> >> *Environment: Jboss 4.2, JDK 1.6 , CentOS, JVM Heap Size = 24GB*
> >>
> >> *
> >> *
> >>
> >> ulimit:
> >>
> >> core file size          (blocks, -c) 0
> >> data seg size           (kbytes, -d) unlimited
> >> scheduling priority             (-e) 0
> >> file size               (blocks, -f) unlimited
> >> pending signals                 (-i) 401408
> >> max locked memory       (kbytes, -l) 1024
> >> max memory size         (kbytes, -m) unlimited
> >> open files                      (-n) 1024
> >> pipe size            (512 bytes, -p) 8
> >> POSIX message queues     (bytes, -q) 819200
> >> real-time priority              (-r) 0
> >> stack size              (kbytes, -s) 10240
> >> cpu time               (seconds, -t) unlimited
> >> max user processes              (-u) 401408
> >> virtual memory          (kbytes, -v) unlimited
> >> file locks                      (-x) unlimited
> >>
> >>
> >> *
> >> *
> >>
> >> *ERROR:*
> >>
> >> *
> >> *
> >>
> >> *2012-03-29* *15:14:08*,*560* [] *priority=ERROR* *app_name=*
> >> *thread=pool-3-thread-1* *location=CommitTracker* *line=93* *auto*
> >> *commit* *error...:java.io.IOException:* *Map* *failed*
> >>        *at* *sun.nio.ch.FileChannelImpl.map*(*FileChannelImpl.java:748*)
> >>        *at*
> >>
> *org.apache.lucene.store.MMapDirectory$MMapIndexInput.*<*init*>(*MMapDirectory.java:293*)
> >>        *at*
> >>
> *org.apache.lucene.store.MMapDirectory.openInput*(*MMapDirectory.java:221*)
> >>        *at*
> >>
> *org.apache.lucene.codecs.lucene40.Lucene40PostingsReader.*<*init*>(*Lucene40PostingsReader.java:58*)
> >>        *at*
> >>
> *org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsProducer*(*Lucene40PostingsFormat.java:80*)
> >>        *at*
> >>
> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.visitOneFormat*(*PerFieldPostingsFormat.java:189*)
> >>        *at*
> >>
> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.*<*init*>(*PerFieldPostingsFormat.java:280*)
> >>        *at*
> >>
> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.*<*init*>(*PerFieldPostingsFormat.java:186*)
> >>        *at*
> >>
> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.*<*init*>(*PerFieldPostingsFormat.java:186*)
> >>        *at*
> >>
> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer*(*PerFieldPostingsFormat.java:256*)
> >>        *at*
> >>
> *org.apache.lucene.index.SegmentCoreReaders.*<*init*>(*SegmentCoreReaders.java:108*)
> >>        *at*
> >>
> *org.apache.lucene.index.SegmentReader.*<*init*>(*SegmentReader.java:51*)
> >>        *at*
> >>
> *org.apache.lucene.index.IndexWriter$ReadersAndLiveDocs.getReader*(*IndexWriter.java:494*)
> >>        *at*
> >>
> *org.apache.lucene.index.BufferedDeletesStream.applyDeletes*(*BufferedDeletesStream.java:214*)
> >>        *at*
> >>
> *org.apache.lucene.index.IndexWriter.applyAllDeletes*(*IndexWriter.java:2939*)
> >>        *at*
> >>
> *org.apache.lucene.index.IndexWriter.maybeApplyDeletes*(*IndexWriter.java:2930*)
> >>        *at*
> >>
> *org.apache.lucene.index.IndexWriter.prepareCommit*(*IndexWriter.java:2681*)
> >>        *at*
> >>
> *org.apache.lucene.index.IndexWriter.commitInternal*(*IndexWriter.java:2804*)
> >>        *at*
> >> *org.apache.lucene.index.IndexWriter.commit*(*IndexWriter.java:2786*)
> >>        *at*
> >>
> *org.apache.solr.update.DirectUpdateHandler2.commit*(*DirectUpdateHandler2.java:391*)
> >>        *at*
> >> *org.apache.solr.update.CommitTracker.run*(*CommitTracker.java:197*)
> >>        *at*
> >>
> *java.util.concurrent.Executors$RunnableAdapter.call*(*Executors.java:441*)
> >>        *at*
> >> *java.util.concurrent.FutureTask$Sync.innerRun*(*FutureTask.java:303*)
> >>        *at* *java.util.concurrent.FutureTask.run*(*FutureTask.java:138*)
> >>        *at*
> >>
> *java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301*(*ScheduledThreadPoolExecutor.java:98*)
> >>        *at*
> >>
> *java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run*(*ScheduledThreadPoolExecutor.java:206*)
> >>        *at*
> >>
> *java.util.concurrent.ThreadPoolExecutor$Worker.runTask*(*ThreadPoolExecutor.java:886*)
> >>        *at*
> >>
> *java.util.concurrent.ThreadPoolExecutor$Worker.run*(*ThreadPoolExecutor.java:908*)
> >>        *at* *java.lang.Thread.run*(*Thread.java:662*)*Caused* *by:*
> >> *java.lang.OutOfMemoryError:* *Map* *failed*
> >>        *at* *sun.nio.ch.FileChannelImpl.map0*(*Native* *Method*)
> >>        *at* *sun.nio.ch.FileChannelImpl.map*(*FileChannelImpl.java:745*)
> >>        *...* *28* *more*
> >>
> >> *
> >> *
> >>
> >> *
> >> *
> >>
> >> *
> >>
> >>
> >> SolrConfig.xml:
> >>
> >>
> >>        <indexDefaults>
> >>                <useCompoundFile>false</useCompoundFile>
> >>                <mergeFactor>10</mergeFactor>
> >>                <maxMergeDocs>2147483647</maxMergeDocs>
> >>                <maxFieldLength>10000</maxFieldLength-->
> >>                <ramBufferSizeMB>4096</ramBufferSizeMB>
> >>                <maxThreadStates>10</maxThreadStates>
> >>                <writeLockTimeout>1000</writeLockTimeout>
> >>                <commitLockTimeout>10000</commitLockTimeout>
> >>                <lockType>single</lockType>
> >>
> >>            <mergePolicy
> class="org.apache.lucene.index.TieredMergePolicy">
> >>              <double name="forceMergeDeletesPctAllowed">0.0</double>
> >>              <double name="reclaimDeletesWeight">10.0</double>
> >>            </mergePolicy>
> >>
> >>            <deletionPolicy class="solr.SolrDeletionPolicy">
> >>              <str name="keepOptimizedOnly">false</str>
> >>              <str name="maxCommitsToKeep">0</str>
> >>            </deletionPolicy>
> >>
> >>        </indexDefaults>
> >>
> >>
> >>        <updateHandler class="solr.DirectUpdateHandler2">
> >>            <maxPendingDeletes>1000</maxPendingDeletes>
> >>             <autoCommit>
> >>               <maxTime>900000</maxTime>
> >>               <openSearcher>false</openSearcher>
> >>             </autoCommit>
> >>             <autoSoftCommit>
> >>
> >> <maxTime>${inventory.solr.softcommit.duration:1000}</maxTime>
> >>             </autoSoftCommit>
> >>
> >>        </updateHandler>
> >>
> >>
> >>
> >> Thanks
> >> Gopal Patwa
> >> *
> >
> > ----------
> > From: Gopal Patwa <go...@gmail.com>
> > Date: Tue, Apr 10, 2012 at 8:35 PM
> > To: solr-user@lucene.apache.org
> >
> >
> > Michael, Thanks for response
> >
> > it was 65K as you mention the default value for "cat
> > /proc/sys/vm/max_map_count" . How we determine what value this should be?
> >  is it number of document during hard commit in my case it is 15
> minutes? or
> > it is number of  index file or number of documents we have in all cores.
> >
> > I have raised the number to 140K but I still get when it reaches to
> 140K, we
> > have to restart jboss server to free up the map count, sometime OOM error
> > happen during "Error opening new searcher"
> >
> > is making this number to unlimited is only solution?''
> >
> >
> > Error log:
> >
> > location=CommitTracker line=93 auto commit
> > error...:org.apache.solr.common.SolrException: Error opening new searcher
> >       at
> org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1138)
> >       at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1251)
> >       at
> >
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:409)
> >       at org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
> >       at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> >       at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> >       at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> >       at
> >
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
> >       at
> >
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
> >       at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> >       at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >       at java.lang.Thread.run(Thread.java:662)
> > Caused by: java.io.IOException: Map failed
> >       at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
> >       at
> >
> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
> >       at
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
> >       at
> >
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:262)
> >       at
> >
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$1.<init>(PerFieldPostingsFormat.java:316)
> >       at
> >
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.files(PerFieldPostingsFormat.java:316)
> >       at org.apache.lucene.codecs.Codec.files(Codec.java:56)
> >       at org.apache.lucene.index.SegmentInfo.files(SegmentInfo.java:423)
> >       at
> org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:215)
> >       at
> >
> org.apache.lucene.index.IndexWriter.prepareFlushedSegment(IndexWriter.java:2220)
> >       at
> >
> org.apache.lucene.index.DocumentsWriter.publishFlushedSegment(DocumentsWriter.java:497)
> >       at
> >
> org.apache.lucene.index.DocumentsWriter.finishFlush(DocumentsWriter.java:477)
> >       at
> >
> org.apache.lucene.index.DocumentsWriterFlushQueue$SegmentFlushTicket.publish(DocumentsWriterFlushQueue.java:201)
> >       at
> >
> org.apache.lucene.index.DocumentsWriterFlushQueue.innerPurge(DocumentsWriterFlushQueue.java:119)
> >       at
> >
> org.apache.lucene.index.DocumentsWriterFlushQueue.tryPurge(DocumentsWriterFlushQueue.java:148)
> >       at org.apache.lucene.index.
> >
> > ...
> >
> > [Message clipped]
> > ----------
> > From: Michael McCandless <lu...@mikemccandless.com>
> > Date: Wed, Apr 11, 2012 at 2:20 AM
> > To: solr-user@lucene.apache.org
> >
> >
> > Hi,
> >
> > 65K is already a very large number and should have been sufficient...
> >
> > However: have you increased the merge factor?  Doing so increases the
> > open files (maps) required.
> >
> > Have you disabled compound file format?  (Hmmm: I think Solr does so
> > by default... which is dangerous).  Maybe try enabling compound file
> > format?
> >
> > Can you "ls -l" your index dir and post the results?
> >
> > It's also possible Solr isn't closing the old searchers quickly enough
> > ... I don't know the details on when Solr closes old searchers...
> >
> > Mike McCandless
> >
> > http://blog.mikemccandless.com
> >
> >
> >
> > On Tue, Apr 10, 2012 at 11:35 PM, Gopal Patwa <go...@gmail.com>
> wrote:
> >> Michael, Thanks for response
> >>
> >> it was 65K as you mention the default value for "cat
> >> /proc/sys/vm/max_map_count" . How we determine what value this should
> be?
> >>  is it number of document during hard commit in my case it is 15
> minutes?
> >> or it is number of  index file or number of documents we have in all
> >> cores.
> >>
> >> I have raised the number to 140K but I still get when it reaches to
> 140K,
> >> we have to restart jboss server to free up the map count, sometime OOM
> >> error happen during "*Error opening new searcher"*
> >>
> >> is making this number to unlimited is only solution?''
> >>
> >>
> >> Error log:
> >>
> >> *location=CommitTracker line=93 auto commit
> >> error...:org.apache.solr.common.SolrException: Error opening new
> >> searcher
> >>        at
> >> org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1138)
> >>        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1251)
> >>        at
> >>
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:409)
> >>        at
> org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
> >>        at
> >> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> >>        at
> >> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> >>        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> >>        at
> >>
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
> >>        at
> >>
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
> >>        at
> >>
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> >>        at
> >>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >>        at java.lang.Thread.run(Thread.java:662)Caused by:
> >> java.io.IOException: Map failed
> >>        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
> >>        at
> >>
> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
> >>        at
> >> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
> >>        at
> >>
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:262)
> >>        at
> >>
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$1.<init>(PerFieldPostingsFormat.java:316)
> >>        at
> >>
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.files(PerFieldPostingsFormat.java:316)
> >>        at org.apache.lucene.codecs.Codec.files(Codec.java:56)
> >>        at
> org.apache.lucene.index.SegmentInfo.files(SegmentInfo.java:423)
> >>        at
> >> org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:215)
> >>        at
> >>
> org.apache.lucene.index.IndexWriter.prepareFlushedSegment(IndexWriter.java:2220)
> >>        at
> >>
> org.apache.lucene.index.DocumentsWriter.publishFlushedSegment(DocumentsWriter.java:497)
> >>        at
> >>
> org.apache.lucene.index.DocumentsWriter.finishFlush(DocumentsWriter.java:477)
> >>        at
> >>
> org.apache.lucene.index.DocumentsWriterFlushQueue$SegmentFlushTicket.publish(DocumentsWriterFlushQueue.java:201)
> >>        at
> >>
> org.apache.lucene.index.DocumentsWriterFlushQueue.innerPurge(DocumentsWriterFlushQueue.java:119)
> >>        at
> >>
> org.apache.lucene.index.DocumentsWriterFlushQueue.tryPurge(DocumentsWriterFlushQueue.java:148)
> >>        at
> >>
> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:438)
> >>        at
> >>
> org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:553)
> >>        at
> >> org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:354)
> >>        at
> >>
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:258)
> >>        at
> >>
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:243)
> >>        at
> >>
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:250)
> >>        at
> >> org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1091)
> >>        ... 11 moreCaused by: java.lang.OutOfMemoryError: Map failed
> >>        at sun.nio.ch.FileChannelImpl.map0(Native Method)
> >>        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)*
> >>
> >>
> >>
> >> And one more issue we came across i.e
> >
> >
>

Re: Large Index and OutOfMemoryError: Map failed

Posted by Michael McCandless <lu...@mikemccandless.com>.
Your largest index has 66 segments (690 files) ... biggish but not
insane.  With 64K maps you should be able to have ~47 searchers open
on each core.

Enabling compound file format (not the opposite!) will mean fewer maps
... ie should improve this situation.

I don't understand why Solr defaults to compound file off... that
seems dangerous.

Really we need a Solr dev here... to answer "how long is a stale
searcher kept open".  Is it somehow possible 46 old searchers are
being left open...?

I don't see any other reason why you'd run out of maps.  Hmm, unless
MMapDirectory didn't think it could safely invoke unmap in your JVM.
Which exact JVM are you using?  If you can print the
MMapDirectory.UNMAP_SUPPORTED constant, we'd know for sure.

Yes, switching away from MMapDir will sidestep the "too many maps"
issue, however, 1) MMapDir has better perf than NIOFSDir, and 2) if
there really is a leak here (Solr not closing the old searchers or a
Lucene bug or something...) then you'll eventually run out of file
descriptors (ie, same  problem, different manifestation).
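
(For anyone following along: at the Lucene level the choice is simply which
FSDirectory implementation gets instantiated. A rough sketch against the
4.0-era API, using a hypothetical /path/to/index:

    import java.io.File;
    import java.io.IOException;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.MMapDirectory;
    import org.apache.lucene.store.NIOFSDirectory;

    public class DirChoice {
        public static void main(String[] args) throws IOException {
            File path = new File("/path/to/index");
            // memory-mapped I/O: fastest reads, but every open file consumes map regions
            Directory mmap = new MMapDirectory(path);
            // NIO positional reads: no map regions, but leans on file descriptors instead
            Directory nio = new NIOFSDirectory(path);
            System.out.println(mmap + " vs " + nio);
        }
    }

Solr normally selects the directory implementation through its
DirectoryFactory configuration rather than code, so treat this only as an
illustration of the trade-off.)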

Mike McCandless

http://blog.mikemccandless.com

2012/4/11 Gopal Patwa <go...@gmail.com>:
>
> I have not change the mergefactor, it was 10. Compound index file is disable
> in my config but I read from below post, that some one had similar issue and
> it was resolved by switching from compound index file format to non-compound
> index file.
>
> and some folks resolved by "changing lucene code to disable MMapDirectory."
> Is this best practice to do, if so is this can be done in configuration?
>
> http://lucene.472066.n3.nabble.com/MMapDirectory-failed-to-map-a-23G-compound-index-segment-td3317208.html
>
> I have index document of core1 = 5 million, core2=8million and
> core3=3million and all index are hosted in single Solr instance
>
> I am going to use Solr for our site StubHub.com, see attached "ls -l" list
> of index files for all core
>
> SolrConfig.xml:
>
>
> 	<indexDefaults>
> 		<useCompoundFile>false</useCompoundFile>
> 		<mergeFactor>10</mergeFactor>
> 		<maxMergeDocs>2147483647</maxMergeDocs>
> 		<maxFieldLength>10000</maxFieldLength-->
> 		<ramBufferSizeMB>4096</ramBufferSizeMB>
> 		<maxThreadStates>10</maxThreadStates>
> 		<writeLockTimeout>1000</writeLockTimeout>
> 		<commitLockTimeout>10000</commitLockTimeout>
> 		<lockType>single</lockType>
> 		
> 	    <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
> 	      <double name="forceMergeDeletesPctAllowed">0.0</double>
> 	      <double name="reclaimDeletesWeight">10.0</double>
> 	    </mergePolicy>
>
> 	    <deletionPolicy class="solr.SolrDeletionPolicy">
> 	      <str name="keepOptimizedOnly">false</str>
> 	      <str name="maxCommitsToKeep">0</str>
> 	    </deletionPolicy>
> 		
> 	</indexDefaults>
>
>
> 	<updateHandler class="solr.DirectUpdateHandler2">
> 	    <maxPendingDeletes>1000</maxPendingDeletes>
> 	     <autoCommit>
> 	       <maxTime>900000</maxTime>
> 	       <openSearcher>false</openSearcher>
> 	     </autoCommit>
> 	     <autoSoftCommit>
> 	       <maxTime>${inventory.solr.softcommit.duration:1000}</maxTime>
> 	     </autoSoftCommit>
> 	
> 	</updateHandler>
>
>
> Forwarded conversation
> Subject: Large Index and OutOfMemoryError: Map failed
> ------------------------
>
> From: Gopal Patwa <go...@gmail.com>
> Date: Fri, Mar 30, 2012 at 10:26 PM
> To: solr-user@lucene.apache.org
>
>
> I need help!!
>
>
>
>
>
> I am using Solr 4.0 nightly build with NRT and I often get this error during
> auto commit "java.lang.OutOfMemoryError: Map failed". I have search this
> forum and what I found it is related to OS ulimit setting, please se below
> my ulimit settings. I am not sure what ulimit setting I should have? and we
> also get "java.net.SocketException: Too many open files" NOT sure how many
> open file we need to set?
>
>
> I have 3 core with index size : core1 - 70GB, Core2 - 50GB and Core3 - 15GB,
> with Single shard
>
>
> We update the index every 5 seconds, soft commit every 1 second and hard
> commit every 15 minutes
>
>
> Environment: Jboss 4.2, JDK 1.6 , CentOS, JVM Heap Size = 24GB
>
>
> ulimit:
>
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 401408
> max locked memory       (kbytes, -l) 1024
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 1024
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 10240
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 401408
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
>
>
>
> ERROR:
>
>
>
>
>
> 2012-03-29 15:14:08,560 [] priority=ERROR app_name= thread=pool-3-thread-1
> location=CommitTracker line=93 auto commit error...:java.io.IOException: Map
> failed
> 	at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
> 	at
> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
> 	at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
> 	at
> org.apache.lucene.codecs.lucene40.Lucene40PostingsReader.<init>(Lucene40PostingsReader.java:58)
> 	at
> org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsProducer(Lucene40PostingsFormat.java:80)
> 	at
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.visitOneFormat(PerFieldPostingsFormat.java:189)
> 	at
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:280)
> 	at
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.<init>(PerFieldPostingsFormat.java:186)
> 	at
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:186)
> 	at
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:256)
> 	at
> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:108)
> 	at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:51)
> 	at
> org.apache.lucene.index.IndexWriter$ReadersAndLiveDocs.getReader(IndexWriter.java:494)
> 	at
> org.apache.lucene.index.BufferedDeletesStream.applyDeletes(BufferedDeletesStream.java:214)
> 	at
> org.apache.lucene.index.IndexWriter.applyAllDeletes(IndexWriter.java:2939)
> 	at
> org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:2930)
> 	at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:2681)
> 	at
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2804)
> 	at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2786)
> 	at
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:391)
> 	at org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> 	at
>
> ...
>
> [Message clipped]
> ----------
> From: Michael McCandless <lu...@mikemccandless.com>
> Date: Sat, Mar 31, 2012 at 3:15 AM
> To: solr-user@lucene.apache.org
>
>
> It's the virtual memory limit that matters; yours says unlimited below
> (good!), but, are you certain that's really the limit your Solr
> process runs with?
>
> On Linux, there is also a per-process map count:
>
>    cat /proc/sys/vm/max_map_count
>
> I think it typically defaults to 65,536 but you should check on your
> env.  If a process tries to map more than this many regions, you'll
> hit that exception.
>
> I think you can:
>
>  cat /proc/<pid>/maps | wc
>
> to see how many maps your Solr process currently has... if that is
> anywhere near the limit then it could be the cause.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> On Sat, Mar 31, 2012 at 1:26 AM, Gopal Patwa <go...@gmail.com> wrote:
>> *I need help!!*
>>
>> *
>> *
>>
>> *I am using Solr 4.0 nightly build with NRT and I often get this error
>> during auto commit "**java.lang.OutOfMemoryError:* *Map* *failed". I
>> have search this forum and what I found it is related to OS ulimit
>> setting, please se below my ulimit settings. I am not sure what ulimit
>> setting I should have? and we also get "**java.net.SocketException:*
>> *Too* *many* *open* *files" NOT sure how many open file we need to
>> set?*
>>
>>
>> I have 3 core with index size : core1 - 70GB, Core2 - 50GB and Core3 -
>> 15GB, with Single shard
>>
>> *
>> *
>>
>> *We update the index every 5 seconds, soft commit every 1 second and
>> hard commit every 15 minutes*
>>
>> *
>> *
>>
>> *Environment: Jboss 4.2, JDK 1.6 , CentOS, JVM Heap Size = 24GB*
>>
>> *
>> *
>>
>> ulimit:
>>
>> core file size          (blocks, -c) 0
>> data seg size           (kbytes, -d) unlimited
>> scheduling priority             (-e) 0
>> file size               (blocks, -f) unlimited
>> pending signals                 (-i) 401408
>> max locked memory       (kbytes, -l) 1024
>> max memory size         (kbytes, -m) unlimited
>> open files                      (-n) 1024
>> pipe size            (512 bytes, -p) 8
>> POSIX message queues     (bytes, -q) 819200
>> real-time priority              (-r) 0
>> stack size              (kbytes, -s) 10240
>> cpu time               (seconds, -t) unlimited
>> max user processes              (-u) 401408
>> virtual memory          (kbytes, -v) unlimited
>> file locks                      (-x) unlimited
>>
>>
>> *
>> *
>>
>> *ERROR:*
>>
>> *
>> *
>>
>> *2012-03-29* *15:14:08*,*560* [] *priority=ERROR* *app_name=*
>> *thread=pool-3-thread-1* *location=CommitTracker* *line=93* *auto*
>> *commit* *error...:java.io.IOException:* *Map* *failed*
>>        *at* *sun.nio.ch.FileChannelImpl.map*(*FileChannelImpl.java:748*)
>>        *at*
>> *org.apache.lucene.store.MMapDirectory$MMapIndexInput.*<*init*>(*MMapDirectory.java:293*)
>>        *at*
>> *org.apache.lucene.store.MMapDirectory.openInput*(*MMapDirectory.java:221*)
>>        *at*
>> *org.apache.lucene.codecs.lucene40.Lucene40PostingsReader.*<*init*>(*Lucene40PostingsReader.java:58*)
>>        *at*
>> *org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsProducer*(*Lucene40PostingsFormat.java:80*)
>>        *at*
>> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.visitOneFormat*(*PerFieldPostingsFormat.java:189*)
>>        *at*
>> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.*<*init*>(*PerFieldPostingsFormat.java:280*)
>>        *at*
>> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.*<*init*>(*PerFieldPostingsFormat.java:186*)
>>        *at*
>> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.*<*init*>(*PerFieldPostingsFormat.java:186*)
>>        *at*
>> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer*(*PerFieldPostingsFormat.java:256*)
>>        *at*
>> *org.apache.lucene.index.SegmentCoreReaders.*<*init*>(*SegmentCoreReaders.java:108*)
>>        *at*
>> *org.apache.lucene.index.SegmentReader.*<*init*>(*SegmentReader.java:51*)
>>        *at*
>> *org.apache.lucene.index.IndexWriter$ReadersAndLiveDocs.getReader*(*IndexWriter.java:494*)
>>        *at*
>> *org.apache.lucene.index.BufferedDeletesStream.applyDeletes*(*BufferedDeletesStream.java:214*)
>>        *at*
>> *org.apache.lucene.index.IndexWriter.applyAllDeletes*(*IndexWriter.java:2939*)
>>        *at*
>> *org.apache.lucene.index.IndexWriter.maybeApplyDeletes*(*IndexWriter.java:2930*)
>>        *at*
>> *org.apache.lucene.index.IndexWriter.prepareCommit*(*IndexWriter.java:2681*)
>>        *at*
>> *org.apache.lucene.index.IndexWriter.commitInternal*(*IndexWriter.java:2804*)
>>        *at*
>> *org.apache.lucene.index.IndexWriter.commit*(*IndexWriter.java:2786*)
>>        *at*
>> *org.apache.solr.update.DirectUpdateHandler2.commit*(*DirectUpdateHandler2.java:391*)
>>        *at*
>> *org.apache.solr.update.CommitTracker.run*(*CommitTracker.java:197*)
>>        *at*
>> *java.util.concurrent.Executors$RunnableAdapter.call*(*Executors.java:441*)
>>        *at*
>> *java.util.concurrent.FutureTask$Sync.innerRun*(*FutureTask.java:303*)
>>        *at* *java.util.concurrent.FutureTask.run*(*FutureTask.java:138*)
>>        *at*
>> *java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301*(*ScheduledThreadPoolExecutor.java:98*)
>>        *at*
>> *java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run*(*ScheduledThreadPoolExecutor.java:206*)
>>        *at*
>> *java.util.concurrent.ThreadPoolExecutor$Worker.runTask*(*ThreadPoolExecutor.java:886*)
>>        *at*
>> *java.util.concurrent.ThreadPoolExecutor$Worker.run*(*ThreadPoolExecutor.java:908*)
>>        *at* *java.lang.Thread.run*(*Thread.java:662*)*Caused* *by:*
>> *java.lang.OutOfMemoryError:* *Map* *failed*
>>        *at* *sun.nio.ch.FileChannelImpl.map0*(*Native* *Method*)
>>        *at* *sun.nio.ch.FileChannelImpl.map*(*FileChannelImpl.java:745*)
>>        *...* *28* *more*
>>
>> *
>> *
>>
>> *
>> *
>>
>> *
>>
>>
>> SolrConfig.xml:
>>
>>
>>        <indexDefaults>
>>                <useCompoundFile>false</useCompoundFile>
>>                <mergeFactor>10</mergeFactor>
>>                <maxMergeDocs>2147483647</maxMergeDocs>
>>                <maxFieldLength>10000</maxFieldLength-->
>>                <ramBufferSizeMB>4096</ramBufferSizeMB>
>>                <maxThreadStates>10</maxThreadStates>
>>                <writeLockTimeout>1000</writeLockTimeout>
>>                <commitLockTimeout>10000</commitLockTimeout>
>>                <lockType>single</lockType>
>>
>>            <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
>>              <double name="forceMergeDeletesPctAllowed">0.0</double>
>>              <double name="reclaimDeletesWeight">10.0</double>
>>            </mergePolicy>
>>
>>            <deletionPolicy class="solr.SolrDeletionPolicy">
>>              <str name="keepOptimizedOnly">false</str>
>>              <str name="maxCommitsToKeep">0</str>
>>            </deletionPolicy>
>>
>>        </indexDefaults>
>>
>>
>>        <updateHandler class="solr.DirectUpdateHandler2">
>>            <maxPendingDeletes>1000</maxPendingDeletes>
>>             <autoCommit>
>>               <maxTime>900000</maxTime>
>>               <openSearcher>false</openSearcher>
>>             </autoCommit>
>>             <autoSoftCommit>
>>
>> <maxTime>${inventory.solr.softcommit.duration:1000}</maxTime>
>>             </autoSoftCommit>
>>
>>        </updateHandler>
>>
>>
>>
>> Thanks
>> Gopal Patwa
>> *
>
> ----------
> From: Gopal Patwa <go...@gmail.com>
> Date: Tue, Apr 10, 2012 at 8:35 PM
> To: solr-user@lucene.apache.org
>
>
> Michael, Thanks for response
>
> it was 65K as you mention the default value for "cat
> /proc/sys/vm/max_map_count" . How we determine what value this should be?
>  is it number of document during hard commit in my case it is 15 minutes? or
> it is number of  index file or number of documents we have in all cores.
>
> I have raised the number to 140K but I still get when it reaches to 140K, we
> have to restart jboss server to free up the map count, sometime OOM error
> happen during "Error opening new searcher"
>
> is making this number to unlimited is only solution?''
>
>
> Error log:
>
> location=CommitTracker line=93 auto commit
> error...:org.apache.solr.common.SolrException: Error opening new searcher
> 	at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1138)
> 	at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1251)
> 	at
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:409)
> 	at org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> 	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> 	at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
> 	at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
> 	at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> 	at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> 	at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Map failed
> 	at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
> 	at
> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
> 	at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
> 	at
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:262)
> 	at
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$1.<init>(PerFieldPostingsFormat.java:316)
> 	at
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.files(PerFieldPostingsFormat.java:316)
> 	at org.apache.lucene.codecs.Codec.files(Codec.java:56)
> 	at org.apache.lucene.index.SegmentInfo.files(SegmentInfo.java:423)
> 	at org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:215)
> 	at
> org.apache.lucene.index.IndexWriter.prepareFlushedSegment(IndexWriter.java:2220)
> 	at
> org.apache.lucene.index.DocumentsWriter.publishFlushedSegment(DocumentsWriter.java:497)
> 	at
> org.apache.lucene.index.DocumentsWriter.finishFlush(DocumentsWriter.java:477)
> 	at
> org.apache.lucene.index.DocumentsWriterFlushQueue$SegmentFlushTicket.publish(DocumentsWriterFlushQueue.java:201)
> 	at
> org.apache.lucene.index.DocumentsWriterFlushQueue.innerPurge(DocumentsWriterFlushQueue.java:119)
> 	at
> org.apache.lucene.index.DocumentsWriterFlushQueue.tryPurge(DocumentsWriterFlushQueue.java:148)
> 	at org.apache.lucene.index.
>
> ...
>
> [Message clipped]
> ----------
> From: Michael McCandless <lu...@mikemccandless.com>
> Date: Wed, Apr 11, 2012 at 2:20 AM
> To: solr-user@lucene.apache.org
>
>
> Hi,
>
> 65K is already a very large number and should have been sufficient...
>
> However: have you increased the merge factor?  Doing so increases the
> open files (maps) required.
>
> Have you disabled compound file format?  (Hmmm: I think Solr does so
> by default... which is dangerous).  Maybe try enabling compound file
> format?
>
> Can you "ls -l" your index dir and post the results?
>
> It's also possible Solr isn't closing the old searchers quickly enough
> ... I don't know the details on when Solr closes old searchers...
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
>
> On Tue, Apr 10, 2012 at 11:35 PM, Gopal Patwa <go...@gmail.com> wrote:
>> Michael, Thanks for response
>>
>> it was 65K as you mention the default value for "cat
>> /proc/sys/vm/max_map_count" . How we determine what value this should be?
>>  is it number of document during hard commit in my case it is 15 minutes?
>> or it is number of  index file or number of documents we have in all
>> cores.
>>
>> I have raised the number to 140K but I still get when it reaches to 140K,
>> we have to restart jboss server to free up the map count, sometime OOM
>> error happen during "*Error opening new searcher"*
>>
>> is making this number to unlimited is only solution?''
>>
>>
>> Error log:
>>
>> *location=CommitTracker line=93 auto commit
>> error...:org.apache.solr.common.SolrException: Error opening new
>> searcher
>>        at
>> org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1138)
>>        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1251)
>>        at
>> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:409)
>>        at org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
>>        at
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>>        at
>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>        at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
>>        at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
>>        at
>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>        at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>        at java.lang.Thread.run(Thread.java:662)Caused by:
>> java.io.IOException: Map failed
>>        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
>>        at
>> org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
>>        at
>> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
>>        at
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:262)
>>        at
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$1.<init>(PerFieldPostingsFormat.java:316)
>>        at
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.files(PerFieldPostingsFormat.java:316)
>>        at org.apache.lucene.codecs.Codec.files(Codec.java:56)
>>        at org.apache.lucene.index.SegmentInfo.files(SegmentInfo.java:423)
>>        at
>> org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:215)
>>        at
>> org.apache.lucene.index.IndexWriter.prepareFlushedSegment(IndexWriter.java:2220)
>>        at
>> org.apache.lucene.index.DocumentsWriter.publishFlushedSegment(DocumentsWriter.java:497)
>>        at
>> org.apache.lucene.index.DocumentsWriter.finishFlush(DocumentsWriter.java:477)
>>        at
>> org.apache.lucene.index.DocumentsWriterFlushQueue$SegmentFlushTicket.publish(DocumentsWriterFlushQueue.java:201)
>>        at
>> org.apache.lucene.index.DocumentsWriterFlushQueue.innerPurge(DocumentsWriterFlushQueue.java:119)
>>        at
>> org.apache.lucene.index.DocumentsWriterFlushQueue.tryPurge(DocumentsWriterFlushQueue.java:148)
>>        at
>> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:438)
>>        at
>> org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:553)
>>        at
>> org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:354)
>>        at
>> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:258)
>>        at
>> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:243)
>>        at
>> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:250)
>>        at
>> org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1091)
>>        ... 11 moreCaused by: java.lang.OutOfMemoryError: Map failed
>>        at sun.nio.ch.FileChannelImpl.map0(Native Method)
>>        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)*
>>
>>
>>
>> And one more issue we came across i.e
>
>

Re: Large Index and OutOfMemoryError: Map failed

Posted by Gopal Patwa <go...@gmail.com>.
I have not changed the mergeFactor; it is 10. The compound index file format
is disabled in my config, but I read in the post below that someone had a
similar issue and resolved it by switching from the compound index file
format to non-compound index files.

Some folks also resolved it by "changing lucene code to disable MMapDirectory."
Is that a recommended practice, and if so, can it be done through configuration?

http://lucene.472066.n3.nabble.com/MMapDirectory-failed-to-map-a-23G-compound-index-segment-td3317208.html

I have roughly 5 million documents in core1, 8 million in core2, and 3
million in core3, and all indexes are hosted in a single Solr instance.

I am going to use Solr for our site StubHub.com; see the attached "ls -l"
listing of the index files for all cores.

SolrConfig.xml:


	<indexDefaults>
		<useCompoundFile>false</useCompoundFile>
		<mergeFactor>10</mergeFactor>
		<maxMergeDocs>2147483647</maxMergeDocs>
		<maxFieldLength>10000</maxFieldLength-->
		<ramBufferSizeMB>4096</ramBufferSizeMB>
		<maxThreadStates>10</maxThreadStates>
		<writeLockTimeout>1000</writeLockTimeout>
		<commitLockTimeout>10000</commitLockTimeout>
		<lockType>single</lockType>
		
	    <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
	      <double name="forceMergeDeletesPctAllowed">0.0</double>
	      <double name="reclaimDeletesWeight">10.0</double>
	    </mergePolicy>

	    <deletionPolicy class="solr.SolrDeletionPolicy">
	      <str name="keepOptimizedOnly">false</str>
	      <str name="maxCommitsToKeep">0</str>
	    </deletionPolicy>
		
	</indexDefaults>


	<updateHandler class="solr.DirectUpdateHandler2">
	    <maxPendingDeletes>1000</maxPendingDeletes>
	     <autoCommit>
	       <maxTime>900000</maxTime>
	       <openSearcher>false</openSearcher>
	     </autoCommit>
	     <autoSoftCommit>
	       <maxTime>${inventory.solr.softcommit.duration:1000}</maxTime>
	     </autoSoftCommit>
	
	</updateHandler>
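
Related to the 1-second soft commits: the <query> section has a
maxWarmingSearchers setting that caps how many searchers may warm
concurrently, so overlapping searchers cannot pile up. A sketch, not from my
config (I believe 2 is the stock default):

	<query>
	  <!-- sketch: fail the commit's new searcher if two are already warming -->
	  <maxWarmingSearchers>2</maxWarmingSearchers>
	</query>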

*


Forwarded conversation
Subject: Large Index and OutOfMemoryError: Map failed
------------------------

From: *Gopal Patwa* <go...@gmail.com>
Date: Fri, Mar 30, 2012 at 10:26 PM
To: solr-user@lucene.apache.org


*I need help!!*

*

*

*I am using Solr 4.0 nightly build with NRT and I often get this error
during auto commit "**java.lang.OutOfMemoryError:* *Map* *failed". I
have search this forum and what I found it is related to OS ulimit
setting, please se below my ulimit settings. I am not sure what ulimit
setting I should have? and we also get "**java.net.SocketException:*
*Too* *many* *open* *files" NOT sure how many open file we need to
set?*


I have 3 core with index size : core1 - 70GB, Core2 - 50GB and Core3 -
15GB, with Single shard

*
*

*We update the index every 5 seconds, soft commit every 1 second and
hard commit every 15 minutes*

*
*

*Environment: Jboss 4.2, JDK 1.6 , CentOS, JVM Heap Size = 24GB*

*
*

ulimit:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 401408
max locked memory       (kbytes, -l) 1024
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 401408
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited


*
*

*ERROR:*

*

*

*2012-03-29* *15:14:08*,*560* [] *priority=ERROR* *app_name=*
*thread=pool-3-thread-1* *location=CommitTracker* *line=93* *auto*
*commit* *error...:java.io.IOException:* *Map* *failed*
	*at* *sun.nio.ch.FileChannelImpl.map*(*FileChannelImpl.java:748*)
	*at* *org.apache.lucene.store.MMapDirectory$MMapIndexInput.*<*init*>(*MMapDirectory.java:293*)
	*at* *org.apache.lucene.store.MMapDirectory.openInput*(*MMapDirectory.java:221*)
	*at* *org.apache.lucene.codecs.lucene40.Lucene40PostingsReader.*<*init*>(*Lucene40PostingsReader.java:58*)
	*at* *org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsProducer*(*Lucene40PostingsFormat.java:80*)
	*at* *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.visitOneFormat*(*PerFieldPostingsFormat.java:189*)
	*at* *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.*<*init*>(*PerFieldPostingsFormat.java:280*)
	*at* *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader$1.*<*init*>(*PerFieldPostingsFormat.java:186*)
	*at* *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.*<*init*>(*PerFieldPostingsFormat.java:186*)
	*at* *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer*(*PerFieldPostingsFormat.java:256*)
	*at* *org.apache.lucene.index.SegmentCoreReaders.*<*init*>(*SegmentCoreReaders.java:108*)
	*at* *org.apache.lucene.index.SegmentReader.*<*init*>(*SegmentReader.java:51*)
	*at* *org.apache.lucene.index.IndexWriter$ReadersAndLiveDocs.getReader*(*IndexWriter.java:494*)
	*at* *org.apache.lucene.index.BufferedDeletesStream.applyDeletes*(*BufferedDeletesStream.java:214*)
	*at* *org.apache.lucene.index.IndexWriter.applyAllDeletes*(*IndexWriter.java:2939*)
	*at* *org.apache.lucene.index.IndexWriter.maybeApplyDeletes*(*IndexWriter.java:2930*)
	*at* *org.apache.lucene.index.IndexWriter.prepareCommit*(*IndexWriter.java:2681*)
	*at* *org.apache.lucene.index.IndexWriter.commitInternal*(*IndexWriter.java:2804*)
	*at* *org.apache.lucene.index.IndexWriter.commit*(*IndexWriter.java:2786*)
	*at* *org.apache.solr.update.DirectUpdateHandler2.commit*(*DirectUpdateHandler2.java:391*)
	*at* *org.apache.solr.update.CommitTracker.run*(*CommitTracker.java:197*)
	*at* *java.util.concurrent.Executors$RunnableAdapter.call*(*Executors.java:441*)
	*at*

...

[Message clipped]
----------
From: *Michael McCandless* <lu...@mikemccandless.com>
Date: Sat, Mar 31, 2012 at 3:15 AM
To: solr-user@lucene.apache.org


It's the virtual memory limit that matters; yours says unlimited below
(good!), but, are you certain that's really the limit your Solr
process runs with?

On Linux, there is also a per-process map count:

   cat /proc/sys/vm/max_map_count

I think it typically defaults to 65,536 but you should check on your
env.  If a process tries to map more than this many regions, you'll
hit that exception.

I think you can:

 cat /proc/<pid>/maps | wc

to see how many maps your Solr process currently has... if that is
anywhere near the limit then it could be the cause.
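
It is also worth confirming the limits and map count of the actual running
process, something like:

  cat /proc/<pid>/limits      # the limits the JVM really runs with
  wc -l < /proc/<pid>/maps    # current number of mapped regions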

Mike McCandless

http://blog.mikemccandless.com

On Sat, Mar 31, 2012 at 1:26 AM, Gopal Patwa <go...@gmail.com> wrote:

----------
From: *Gopal Patwa* <go...@gmail.com>
Date: Tue, Apr 10, 2012 at 8:35 PM
To: solr-user@lucene.apache.org


Michael, thanks for the response.

It was 65K, as you mentioned, the default value of "cat
/proc/sys/vm/max_map_count". How do we determine what this value should be?
Is it driven by the number of documents per hard commit (every 15 minutes in
my case), or by the number of index files or the number of documents across
all cores?

I have raised the number to 140K, but the map count still climbs until it hits
140K and we have to restart the JBoss server to free it up; sometimes the OOM
error happens during "Error opening new searcher".

Is raising this number to unlimited the only solution?
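
From what I can tell there is no literal "unlimited" for this sysctl, but it
can be raised much higher and persisted; for example, with a placeholder
value:

  sysctl -w vm.max_map_count=262144                      # apply now, as root
  echo 'vm.max_map_count = 262144' >> /etc/sysctl.conf   # re-applied at boot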


Error log:

*location=CommitTracker line=93 auto commit
error...:org.apache.solr.common.SolrException: Error opening new
searcher
	at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1138)
	at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1251)
	at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:409)
	at org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)Caused by:
java.io.IOException: Map failed
	at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
	at org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
	at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
	at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:262)
	at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$1.<init>(PerFieldPostingsFormat.java:316)
	at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.files(PerFieldPostingsFormat.java:316)
	at org.apache.lucene.codecs.Codec.files(Codec.java:56)
	at org.apache.lucene.index.SegmentInfo.files(SegmentInfo.java:423)
	at org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:215)
	at org.apache.lucene.index.IndexWriter.prepareFlushedSegment(IndexWriter.java:2220)
	at org.apache.lucene.index.DocumentsWriter.publishFlushedSegment(DocumentsWriter.java:497)
	at org.apache.lucene.index.DocumentsWriter.finishFlush(DocumentsWriter.java:477)
	at org.apache.lucene.index.DocumentsWriterFlushQueue$SegmentFlushTicket.publish(DocumentsWriterFlushQueue.java:201)
	at org.apache.lucene.index.DocumentsWriterFlushQueue.innerPurge(DocumentsWriterFlushQueue.java:119)
	at org.apache.lucene.index.DocumentsWriterFlushQueue.tryPurge(DocumentsWriterFlushQueue.java:148)
	at org.apache.lucene.index.*

...

[Message clipped]
----------
From: *Michael McCandless* <lu...@mikemccandless.com>
Date: Wed, Apr 11, 2012 at 2:20 AM
To: solr-user@lucene.apache.org


Hi,

65K is already a very large number and should have been sufficient...

However: have you increased the merge factor?  Doing so increases the
open files (maps) required.

Have you disabled compound file format?  (Hmmm: I think Solr does so
by default... which is dangerous).  Maybe try enabling compound file
format?
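
For example, in the indexDefaults you posted that would just be flipping the
flag (a sketch, not a tested config):

	<indexDefaults>
	  <useCompoundFile>true</useCompoundFile>
	  <!-- everything else unchanged -->
	</indexDefaults>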

Can you "ls -l" your index dir and post the results?

It's also possible Solr isn't closing the old searchers quickly enough
... I don't know the details on when Solr closes old searchers...
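
One way to check is to watch the map count across a few soft/hard commit
cycles, for example (pid is a placeholder):

  # if this number only grows between restarts, old searchers (and their
  # mmaps) are probably not being released
  while true; do date; wc -l < /proc/<pid>/maps; sleep 60; done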

Mike McCandless

http://blog.mikemccandless.com



On Tue, Apr 10, 2012 at 11:35 PM, Gopal Patwa <go...@gmail.com> wrote:
> Michael, Thanks for response
>
> it was 65K as you mention the default value for "cat
> /proc/sys/vm/max_map_count" . How we determine what value this should be?
>  is it number of document during hard commit in my case it is 15 minutes?
> or it is number of  index file or number of documents we have in all
cores.
>
> I have raised the number to 140K but I still get when it reaches to 140K,
> we have to restart jboss server to free up the map count, sometime OOM
> error happen during "*Error opening new searcher"*
>
> is making this number to unlimited is only solution?''
>
>
> Error log:
>
> *location=CommitTracker line=93 auto commit
> error...:org.apache.solr.common.SolrException: Error opening new
> searcher
>        at
org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1138)
>        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1251)
>        at
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:409)
>        at org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
>        at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>        at
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>        at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
>        at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
>        at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:662)Caused by:
> java.io.IOException: Map failed
>        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
>        at
org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:293)
>        at
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:221)
>        at
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$VisitPerFieldFile.<init>(PerFieldPostingsFormat.java:262)
>        at
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$1.<init>(PerFieldPostingsFormat.java:316)
>        at
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.files(PerFieldPostingsFormat.java:316)
>        at org.apache.lucene.codecs.Codec.files(Codec.java:56)
>        at org.apache.lucene.index.SegmentInfo.files(SegmentInfo.java:423)
>        at
org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:215)
>        at
org.apache.lucene.index.IndexWriter.prepareFlushedSegment(IndexWriter.java:2220)
>        at
org.apache.lucene.index.DocumentsWriter.publishFlushedSegment(DocumentsWriter.java:497)
>        at
org.apache.lucene.index.DocumentsWriter.finishFlush(DocumentsWriter.java:477)
>        at
org.apache.lucene.index.DocumentsWriterFlushQueue$SegmentFlushTicket.publish(DocumentsWriterFlushQueue.java:201)
>        at
org.apache.lucene.index.DocumentsWriterFlushQueue.innerPurge(DocumentsWriterFlushQueue.java:119)
>        at
org.apache.lucene.index.DocumentsWriterFlushQueue.tryPurge(DocumentsWriterFlushQueue.java:148)
>        at
org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:438)
>        at
org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:553)
>        at
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:354)
>        at
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:258)
>        at
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:243)
>        at
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:250)
>        at
org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1091)
>        ... 11 moreCaused by: java.lang.OutOfMemoryError: Map failed
>        at sun.nio.ch.FileChannelImpl.map0(Native Method)
>        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)*
>
>
>
> And one more issue we came across i.e
