Posted to dev@lucene.apache.org by "Ansgar Kapels (JIRA)" <ji...@apache.org> on 2015/11/23 17:14:10 UTC

[jira] [Created] (SOLR-8331) CorruptIndexException after auto commit

Ansgar Kapels created SOLR-8331:
-----------------------------------

             Summary: CorruptIndexException after auto commit
                 Key: SOLR-8331
                 URL: https://issues.apache.org/jira/browse/SOLR-8331
             Project: Solr
          Issue Type: Bug
          Components: update
    Affects Versions: 4.10.4
         Environment: OS: SUSE Linux Enterprise Server 11 SP3
File system: ext3
Application server: Tomcat 7
            Reporter: Ansgar Kapels
            Priority: Critical


While adding many new documents to Solr (via SolrJ), the index files sometimes become corrupted.
The problem occurs on several different virtual servers (all with the same OS and configuration), especially when many new or updated documents are added to Solr in a short time.
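A minimal SolrJ sketch of the kind of workload that triggers it (the core URL and field names are placeholders; note that no explicit commits are issued from the client, so only the server-side auto commit runs):
{code}
// Hypothetical reproduction sketch: bulk-add documents without client-side commits,
// leaving all committing to Solr's auto commit (the CommitTracker in the trace below).
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BulkAdd {
    public static void main(String[] args) throws Exception {
        SolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1"); // placeholder URL
        for (int i = 0; i < 1600000; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", Integer.toString(i));   // placeholder unique key field
            doc.addField("title_s", "document " + i);  // placeholder stored field
            server.add(doc);  // no commit() here; auto commit fires on the server
        }
        server.shutdown();
    }
}
{code}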

Here's the exception:
{code}
org.apache.solr.common.SolrException; auto commit error...:org.apache.lucene.index.CorruptIndexException: codec header mismatch: actual header=1970145651 vs expected header=1071082519 (resource: BufferedChecksumIndexInput(MMapIndexInput(path="/data/solr/data1/index/_gru.fnm")))
        at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:136)
        at org.apache.lucene.codecs.lucene46.Lucene46FieldInfosReader.read(Lucene46FieldInfosReader.java:57)
        at org.apache.lucene.index.SegmentReader.readFieldInfos(SegmentReader.java:289)
        at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:107)
        at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
        at org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:282)
        at org.apache.lucene.index.IndexWriter.applyAllDeletesAndUpdates(IndexWriter.java:3312)
        at org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:3303)
        at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2989)
        at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3134)
        at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3101)
        at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:582)
        at org.apache.solr.update.CommitTracker.run(CommitTracker.java:216)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
{code}
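For diagnosis, an index in this state can be inspected with Lucene's CheckIndex tool. A sketch (the jar path is a placeholder for wherever lucene-core 4.10.4 lives in your installation; stop Solr before running it):
{code}
# Inspect the index directory named in the exception above (read-only by default).
java -cp lucene-core-4.10.4.jar org.apache.lucene.index.CheckIndex /data/solr/data1/index
{code}
Note that CheckIndex's -fix option repairs an index only by dropping unreadable segments (and the documents in them), so it is a last resort rather than a recovery.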

I noticed that Solr runs stably for much longer when I modify the merge settings.
Reducing
 <maxMergeDocs>100000</maxMergeDocs>
to
 <maxMergeDocs>10000</maxMergeDocs>
has a positive effect, but at some point the index still gets corrupted.
The same happens when setting a higher mergeFactor. So it seems I can only delay the issue: after a few days or weeks it always reaches a critical point again. Maybe it is related to a certain file size?
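For reference, these settings live in the <indexConfig> section of solrconfig.xml (as I understand the 4.x config layout); a sketch with the reduced value, where the mergeFactor shown is only an illustrative value, not my exact setting:
{code}
<indexConfig>
  <!-- reduced from 100000; delays the corruption but does not prevent it -->
  <maxMergeDocs>10000</maxMergeDocs>
  <!-- raising this also delays, but does not prevent, the problem -->
  <mergeFactor>10</mergeFactor>
</indexConfig>
{code}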

The index's total size (data directory) is about 23 GB with 1,600,000 documents.



