Posted to dev@lucene.apache.org by "Uwe Schindler (JIRA)" <ji...@apache.org> on 2013/08/08 23:44:48 UTC

[jira] [Commented] (LUCENE-5161) review FSDirectory chunking defaults and test the chunking

    [ https://issues.apache.org/jira/browse/LUCENE-5161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13734047#comment-13734047 ] 

Uwe Schindler commented on LUCENE-5161:
---------------------------------------

Thanks Robert for opening.

It is too late today, so I will respond tomorrow morning about the NIO stuff. I am now aware of the cause and have inspected the JVM code, so I can explain why the OOMs occur in SimpleFSDir and NIOFSDir when you read into large buffers. More details tomorrow, just one thing before: it has nothing to do with 32 or 64 bits; it is rather a limitation of the JVM with direct memory and heap size that leads to the OOM under certain conditions. But the Integer.MAX_VALUE default for 64-bit JVMs is just wrong, too (it could also lead to OOM).

In general I would not make the buffers too large, so the chunk size should be limited to no more than a few megabytes. Making them larger brings no performance improvement at all; it just wastes memory in the thread-local direct buffers allocated internally by the JVM's NIO code.
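To illustrate the point: a minimal sketch of a chunked read loop in plain Java (not Lucene code; class name, chunk constant, and helper are all illustrative). Capping each FileChannel read at a few MB means the JVM's internal per-thread direct buffer, which is sized to the largest heap-buffer read it has served, also stays small:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChunkedReadDemo {
  // Illustrative cap: a few MB per read keeps the JVM's internal
  // thread-local direct buffer (used to service reads into heap
  // ByteBuffers) from growing to the full request size.
  private static final int CHUNK_SIZE = 1 << 20; // 1 MiB, chosen for the sketch

  static void readFully(FileChannel ch, ByteBuffer dst, long pos) throws IOException {
    while (dst.hasRemaining()) {
      // Limit each read to CHUNK_SIZE so NIO never needs a huge direct buffer.
      final int oldLimit = dst.limit();
      dst.limit(Math.min(oldLimit, dst.position() + CHUNK_SIZE));
      final int n = ch.read(dst, pos);
      dst.limit(oldLimit);
      if (n < 0) {
        throw new IOException("unexpected EOF at position " + pos);
      }
      pos += n;
    }
  }

  public static void main(String[] args) throws IOException {
    // Write a 5 MiB file, then read it back in 1 MiB chunks.
    Path p = Files.createTempFile("chunked", ".bin");
    byte[] data = new byte[5 * (1 << 20)];
    Files.write(p, data);
    ByteBuffer buf = ByteBuffer.allocate(data.length);
    try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
      readFully(ch, buf, 0);
    }
    System.out.println(buf.position() == data.length); // prints "true"
    Files.delete(p);
  }
}
```

With an uncapped read, a single ch.read() into a 2 GB heap buffer would force the JVM to allocate a matching 2 GB direct buffer behind the scenes, which is exactly the failure mode being discussed.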
                
> review FSDirectory chunking defaults and test the chunking
> ----------------------------------------------------------
>
>                 Key: LUCENE-5161
>                 URL: https://issues.apache.org/jira/browse/LUCENE-5161
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Robert Muir
>         Attachments: LUCENE-5161.patch
>
>
> Today there is a loop in SimpleFS/NIOFS:
> {code}
> try {
>   do {
>     final int readLength;
>     if (total + chunkSize > len) {
>       readLength = len - total;
>     } else {
>       // LUCENE-1566 - work around JVM Bug by breaking very large reads into chunks
>       readLength = chunkSize;
>     }
>     final int i = file.read(b, offset + total, readLength);
>     total += i;
>   } while (total < len);
> } catch (OutOfMemoryError e) {
> {code}
> I bet if you look at the clover report it's untested, because it's fixed at 100MB for 32-bit users and 2GB for 64-bit users (are these defaults even good?!).
> Also, if you call the setter on a 64-bit machine to change the size, it just totally ignores it. We should remove that; the setter should always work.
> And we should set it to small values in tests so this loop is actually executed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org