Posted to dev@lucene.apache.org by "Mark Miller (JIRA)" <ji...@apache.org> on 2009/01/04 16:05:44 UTC

[jira] Commented: (LUCENE-628) Intermittent FileNotFoundException for .fnm when using rsync

    [ https://issues.apache.org/jira/browse/LUCENE-628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12660581#action_12660581 ] 

Mark Miller commented on LUCENE-628:
------------------------------------

Hey Simon, anything to report back on this issue? I'd like to close it out if you have worked out what happened.

> Intermittent FileNotFoundException for .fnm when using rsync
> ------------------------------------------------------------
>
>                 Key: LUCENE-628
>                 URL: https://issues.apache.org/jira/browse/LUCENE-628
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: Search
>    Affects Versions: 1.9
>         Environment: Linux RedHat ES3, Jboss402
>            Reporter: Simon Lorenz
>            Priority: Minor
>
> We use Lucene 1.9.1 to create and search indexes for web applications. The application runs in JBoss 4.0.2 on Red Hat ES3. A single Master (Writer) JBoss instance creates and writes the indexes using the compound file format, and the index is optimised after all updates. These index files are replicated every few hours, using rsync, to a number of other application servers (Searchers). The rsync job only runs if there are no Lucene lock files present on the Writer. The Searcher servers that receive the replicated files perform only searches on the index; up to 60 searches may be performed each minute. 
> Everything works well most of the time, but we get the following issue on the Searcher servers about 10% of the time. 
> Following an rsync replication, one or more of the Searcher servers throws:
> IOException caught when creating an IndexSearcher
> java.io.FileNotFoundException: /..../_1zm.fnm (No such file or directory)
>         at java.io.RandomAccessFile.open(Native Method)
>         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
>         at org.apache.lucene.store.FSIndexInput$Descriptor.<init>(FSDirectory.java:425)
>         at org.apache.lucene.store.FSIndexInput.<init>(FSDirectory.java:434)
>         at org.apache.lucene.store.FSDirectory.openInput(FSDirectory.java:324)
>         at org.apache.lucene.index.FieldInfos.<init>(FieldInfos.java:56)
>         at org.apache.lucene.index.SegmentReader.initialize(SegmentReader.java:144)
>         at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:129)
>         at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:110)
>         at org.apache.lucene.index.IndexReader$1.doBody(IndexReader.java:154)
>         at org.apache.lucene.store.Lock$With.run(Lock.java:109)
>         at org.apache.lucene.index.IndexReader.open(IndexReader.java:143)  
> As we use the compound file format, I would not expect .fnm files to be present. When replicating, we do not delete the old .cfs index files, as these could still be referenced by old Searcher threads. We do overwrite the segments and deletable files on the Searcher servers. 
> My thoughts are: either we are occasionally overwriting a file at the exact moment a new searcher is being created, or the lock files are removed from the Writer server before the compaction process is completed, in which case we replicate a segments file that still references a ghost .fnm file.
> I would greatly appreciate any ideas and suggestions to solve this annoying issue.
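
For readers following along, here is a minimal sketch of the write-and-search pattern described above, written against the Lucene 1.9-era API (the index path, analyzer choice and class name are placeholders, not details from Simon's report). It is not a reproduction of his setup, only an illustration of where the missing .fnm fits in: with setUseCompoundFile(true) the per-segment .fnm exists only transiently while a segment is being built, and after optimize() and close() it is folded into that segment's .cfs, so a segments file that still references a bare _N.fnm points at a commit that was copied mid-write.

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.search.IndexSearcher;

    public class CompoundIndexSketch {
        public static void main(String[] args) throws Exception {
            String indexPath = "/path/to/index"; // placeholder path

            // Writer (Master) side: compound file format, optimize, close,
            // so the final commit references only .cfs files plus segments.
            IndexWriter writer = new IndexWriter(indexPath, new StandardAnalyzer(), true);
            writer.setUseCompoundFile(true);
            // ... writer.addDocument(...) calls ...
            writer.optimize();
            writer.close();

            // Searcher (replica) side: a new searcher reads the segments file
            // and opens every file that it references. If segments arrives via
            // rsync before the matching .cfs, or reflects a half-finished merge
            // on the Writer, the open fails with FileNotFoundException.
            IndexSearcher searcher = new IndexSearcher(indexPath);
            // ... run queries ...
            searcher.close();
        }
    }

The ordering is the practical point: the searcher trusts whatever segments file it finds, so copying the data files first and the segments file last, and only syncing a quiescent index, closes most of this window.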

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

