Posted to dev@lucene.apache.org by "Nick Kirsch (JIRA)" <ji...@apache.org> on 2011/06/29 21:00:28 UTC

[jira] [Created] (LUCENE-3258) File leak when IOException occurs during index optimization.

File leak when IOException occurs during index optimization.
------------------------------------------------------------

                 Key: LUCENE-3258
                 URL: https://issues.apache.org/jira/browse/LUCENE-3258
             Project: Lucene - Java
          Issue Type: Bug
          Components: core/index
    Affects Versions: 3.0.3
         Environment: SUSE Linux 11, Java 6
            Reporter: Nick Kirsch
             Fix For: 3.0.3


I am not sure if this issue requires a fix due to the nature of its occurrence, or if it exists in other versions of Lucene.

I am using Lucene Java 3.0.3 on a SUSE Linux machine with Java 6 and have noticed that a number of file handles are not being released by my Java application. There are IOExceptions in my log about the disk being full, which causes a merge and the optimization to fail. The index is not corrupt after the IOException. I am using CFS for my index format, so needing 3x my largest index size during optimization certainly consumes all of my available disk.

I realize that I need to add more disk space to my machine, but I investigated how to clean up the leaking file handles. After failing to find a misuse of Lucene's IndexWriter in the code I have wrapping Lucene, I did a quick search for close() being invoked in the Lucene Java source code. I found a number of source files that attempt to close more than one object within the same close() method. I think a try/catch should be put around each of these close() attempts to avoid skipping subsequent closes. The catch may be able to ignore a caught exception to avoid masking the original exception, as is done in SimpleFSDirectory.close().

Locations in Lucene Java source where I suggest a try/catch should be used:
- org.apache.lucene.index.FormatPostingFieldsWriter.finish()
- org.apache.lucene.index.TermInfosWriter.close()
- org.apache.lucene.index.SegmentTermPositions.close()
- org.apache.lucene.index.SegmentMergeInfo.close()
- org.apache.lucene.index.SegmentMerger.mergeTerms() (The finally block)
- org.apache.lucene.index.DirectoryReader.close()
- org.apache.lucene.index.FieldsReader.close()
- org.apache.lucene.index.MultiLevelSkipListReader.close()
- org.apache.lucene.index.MultipleTermPositions.close()
- org.apache.lucene.index.SegmentMergeQueue.close()
- org.apache.lucene.index.SegmentMergeDocs.close()
- org.apache.lucene.index.TermInfosReader.close()
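The pattern suggested above can be sketched as a small helper. This is not code from Lucene itself; the class and method names (CloseAll, closeAll) are hypothetical, and it only illustrates the idea: attempt every close() even if an earlier one throws, and rethrow the first exception so the original failure is not masked.

```java
import java.io.Closeable;
import java.io.IOException;

public class CloseAll {
    // Close every resource even if an earlier close() throws; remember the
    // first exception and rethrow it afterwards so later failures do not
    // mask the original one.
    public static void closeAll(Closeable... resources) throws IOException {
        IOException first = null;
        for (Closeable c : resources) {
            if (c == null) continue;
            try {
                c.close();
            } catch (IOException e) {
                if (first == null) first = e; // keep only the first failure
            }
        }
        if (first != null) throw first;
    }
}
```

A close() method that releases several sub-objects could then delegate to a helper like this instead of closing them one after another in straight-line code, where the first exception would skip the rest.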

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org


[jira] [Resolved] (LUCENE-3258) File leak when IOException occurs during index optimization.

Posted by "Shai Erera (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/LUCENE-3258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shai Erera resolved LUCENE-3258.
--------------------------------

       Resolution: Fixed
    Fix Version/s: 3.3

Already fixed in 3.3



[jira] [Reopened] (LUCENE-3258) File leak when IOException occurs during index optimization.

Posted by "Shai Erera (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/LUCENE-3258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shai Erera reopened LUCENE-3258:
--------------------------------


Reopening to change resolution



[jira] [Commented] (LUCENE-3258) File leak when IOException occurs during index optimization.

Posted by "Uwe Schindler (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/LUCENE-3258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13057448#comment-13057448 ] 

Uwe Schindler commented on LUCENE-3258:
---------------------------------------

I don't think "Won't Fix" is the correct resolution. It's fixed in 3.3, right?



[jira] [Resolved] (LUCENE-3258) File leak when IOException occurs during index optimization.

Posted by "Shai Erera (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/LUCENE-3258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shai Erera resolved LUCENE-3258.
--------------------------------

       Resolution: Won't Fix
    Fix Version/s:     (was: 3.0.3)

These issues were fixed in LUCENE-3147 and have been released w/ Lucene 3.2.0. I don't think we should backport those fixes to the 3.0.x branch, nor do we have the test-framework in place there to test them.



[jira] [Commented] (LUCENE-3258) File leak when IOException occurs during index optimization.

Posted by "Robert Muir (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/LUCENE-3258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13057407#comment-13057407 ] 

Robert Muir commented on LUCENE-3258:
-------------------------------------

Just to follow up: the changes here didn't make it in until Lucene 3.3.0.
That release isn't out yet, but should be any day now.

you can try out the release candidate here: http://s.apache.org/lusolr330rc1

Furthermore, if you want, you can use Lucene's test-framework jar in your own tests to help track down file leaks in your application: either wrap your Directory with MockDirectoryWrapper, or extend LuceneTestCase and use newDirectory() and newFSDirectory().
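MockDirectoryWrapper's leak detection boils down to remembering every handle it hands out and failing at close time if any were never closed. As a rough, self-contained illustration of that technique (this is hypothetical code, not Lucene's API; the names LeakTracker, track, and assertAllClosed are invented for the sketch):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;

// Sketch of the leak-tracking idea: every handle handed out is remembered,
// and the final check fails if any handle was never closed.
public class LeakTracker {
    private final Set<Closeable> open =
        Collections.newSetFromMap(new IdentityHashMap<Closeable, Boolean>());

    // Wrap a resource so we notice whether it was ever closed.
    public Closeable track(final Closeable inner) {
        Closeable tracked = new Closeable() {
            @Override public void close() throws IOException {
                open.remove(this);  // forget the handle once it is closed
                inner.close();
            }
        };
        open.add(tracked);
        return tracked;
    }

    // Analogous to MockDirectoryWrapper failing on close: complain if
    // anything leaked.
    public void assertAllClosed() {
        if (!open.isEmpty()) {
            throw new IllegalStateException(open.size() + " unclosed handle(s)");
        }
    }
}
```

Running your own I/O through a wrapper like this in tests turns a silent file-handle leak into a loud failure at the end of the test, which is exactly how the test-framework caught the leaks fixed for 3.3.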
