Posted to solr-commits@lucene.apache.org by Apache Wiki <wi...@apache.org> on 2007/10/01 02:32:11 UTC

[Solr Wiki] Update of "SolrPerformanceFactors" by ChrisHarris

Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Solr Wiki" for change notification.

The following page has been changed by ChrisHarris:
http://wiki.apache.org/solr/SolrPerformanceFactors

The comment on the change is:
A preliminary "RAM Usage Considerations" section

------------------------------------------------------------------------------
  
  See [http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/package-summary.html java.util.concurrency javadoc] for more information on threading.
  
+ == RAM Usage Considerations ==
+ 
+ === OutOfMemoryErrors ===
+ 
+ If your Solr instance doesn't have enough memory allocated to it, the Java virtual machine will sometimes throw an [http://java.sun.com/j2se/1.4.2/docs/api/java/lang/OutOfMemoryError.html OutOfMemoryError]. There is no danger of data corruption when this occurs, and Solr will attempt to recover gracefully; however, any adds/deletes/commits in progress when the error is thrown are unlikely to succeed. Other adverse effects may also arise. For instance, if the SimpleFSLock locking mechanism is in use (as is the case in Solr 1.2), an ill-timed !OutOfMemoryError can potentially cause Solr to lose its lock on the index. If this happens, further attempts to modify the index will result in
+ 
+ {{{
+ SEVERE: Exception during commit/optimize:java.io.IOException: Lock obtain timed out: SimpleFSLock@/tmp/lucene-5d12dd782520964674beb001c4877b36-write.lock
+ }}}
+ 
+ errors.
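+ 
+ Recovering from a lost lock generally means shutting Solr down and removing the stale lock file before restarting. The sketch below illustrates the idea; it is not an official Solr tool, and the lock path is just the one from the example error message above (yours will differ):
+ 

```java
import java.io.File;

public class RemoveStaleLock {
    public static void main(String[] args) {
        // Example path taken from the error message above; substitute your own.
        // Only do this after shutting Solr down -- deleting a live lock can corrupt the index.
        File lock = new File("/tmp/lucene-5d12dd782520964674beb001c4877b36-write.lock");
        if (lock.exists() && lock.delete()) {
            System.out.println("Stale lock removed; restart Solr before indexing again.");
        } else {
            System.out.println("No stale lock found (or it could not be deleted).");
        }
    }
}
```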
+ 
+ === Memory allocated to the Java VM ===
+ 
+ The easiest way to fight this error, assuming the Java virtual machine isn't already using all of your physical memory, is to increase the amount of memory allocated to the Java virtual machine running Solr. To do this for the example application (example/) in the Solr distribution, if you're running the standard Sun virtual machine, use the -Xms and -Xmx command-line parameters:
+ 
+ {{{
+ java -Xms512M -Xmx1024M -jar start.jar
+ }}}
+ 
+ 
+ === Factors affecting memory usage ===
+ 
+ You may also wish to reduce Solr's memory usage.
+ 
+ One factor is the size of the input document:
+ 
+ When processing an "add" command for a document, the standard XML update handler has two limitations:
+ 
+  * All of the document's fields must simultaneously fit into memory. (Technically, what must fit is the sum, over all fields, of min(<the field value's length>, maxFieldLength). As such, lowering maxFieldLength may be of some help.)
+   * (''I'm assuming that fields are truncated to maxFieldLength before being added to the relevant document object. If that's not true, then maxFieldLength won't help here. --!ChrisHarris'')
+  * Each individual <field>...</field> tag in the input XML must fit into memory, regardless of maxFieldLength.
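+ 
+ As a back-of-the-envelope illustration of the first point, the sizing rule can be sketched as follows (a hypothetical example, not Solr code -- the method name and field sizes here are invented):
+ 

```java
public class AddCommandMemoryEstimate {
    // Rough sizing rule described above:
    // buffered chars ~= sum over fields of min(field value length, maxFieldLength).
    static long estimateChars(int[] fieldLengths, int maxFieldLength) {
        long total = 0;
        for (int len : fieldLengths) {
            total += Math.min(len, maxFieldLength);
        }
        return total;
    }

    public static void main(String[] args) {
        // Three fields of 5000, 2000000, and 300 chars; maxFieldLength = 10000.
        // The 2-million-char field only costs 10000 chars once truncated.
        long chars = estimateChars(new int[] {5000, 2000000, 300}, 10000);
        // Java chars are 2 bytes each, so heap cost is at least chars * 2 bytes.
        System.out.println(chars + " chars buffered (~" + (chars * 2) + " bytes minimum)");
        // Prints: 15300 chars buffered (~30600 bytes minimum)
    }
}
```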
+ 
+ Note that several different "add" commands can be running simultaneously (in different threads). The more threads, the greater the memory usage.
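+ 
+ For reference, in Solr 1.2 maxFieldLength is configured in solrconfig.xml; a typical fragment looks like this (the value shown is illustrative):
+ 

```xml
<!-- solrconfig.xml (Solr 1.2) -->
<indexDefaults>
  <maxFieldLength>10000</maxFieldLength>
</indexDefaults>
```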
+