Posted to dev@lucene.apache.org by "Robert Muir (JIRA)" <ji...@apache.org> on 2014/05/06 07:13:15 UTC

[jira] [Updated] (LUCENE-5646) stored fields bulk merging doesn't quite work right

     [ https://issues.apache.org/jira/browse/LUCENE-5646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Muir updated LUCENE-5646:
--------------------------------

    Attachment: LUCENE-5646.patch

Patch disabling the "one in a billion" optimization. IMO it's too rare that you ever hit this case, and it hardly sees any test coverage.

In order to fix bulk merge to really bulk-copy compressed data, it would have to be more sophisticated, I think: e.g. allowing/tracking "padding" for final chunks in segments and, at some point, deciding to GC that padding by forcing decompression/recompression. Honestly, I'm not sure that kind of stuff belongs in bulk merge.
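For reference, a minimal sketch of what that bookkeeping might look like (hypothetical fields and helpers, not part of this patch or of the current CompressingStoredFieldsWriter):

{code}
// Hypothetical bookkeeping: count under-filled ("padded") chunks that were
// bulk-copied verbatim, and once the accumulated padding crosses some
// threshold, stop copying such chunks verbatim and recompress them instead.
long numChunks;       // chunks written by this writer so far
long numDirtyChunks;  // under-filled chunks that were copied verbatim

boolean tooDirty() {
  // illustrative threshold: more than 1% of the chunks carry padding
  return numDirtyChunks * 100 > numChunks;
}

void copyChunk(ChunkIterator it, boolean chunkIsFull) throws IOException {
  if (chunkIsFull || !tooDirty()) {
    bulkCopyCompressedBytes(it);   // hypothetical helper: raw copy, padding kept
    numChunks++;
    if (!chunkIsFull) {
      numDirtyChunks++;
    }
  } else {
    decompressAndReAddDocs(it);    // hypothetical helper: GCs the padding
  }
}
{code}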

NOTE: I did not do any similar inspection of term vectors yet. But IIRC that one has less fancy stuff in bulk merge.

> stored fields bulk merging doesn't quite work right
> ---------------------------------------------------
>
>                 Key: LUCENE-5646
>                 URL: https://issues.apache.org/jira/browse/LUCENE-5646
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Robert Muir
>             Fix For: 4.9, 5.0
>
>         Attachments: LUCENE-5646.patch
>
>
> from doing some profiling of merging:
> CompressingStoredFieldsWriter has 3 codepaths (as I see it; see the sketch after this list):
> 1. optimized bulk copy (no deletions in chunk). In this case compressed data is copied over.
> 2. semi-optimized copy: in this case it's optimized for an existing StoredFieldsWriter, and it decompresses and recompresses doc-at-a-time around any deleted docs in the chunk.
> 3. ordinary merging
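> To make the three paths concrete, a rough paraphrase of the dispatch (the helper names here are made up; only the boolean conditions match the real check quoted below):
> {code}
> if (matchingFieldsReader != null
>     && onChunkBoundary && chunkSmallEnough && chunkLargeEnough && noDeletions) {
>   // #1: copy the compressed chunk bytes verbatim, no decompression at all
>   bulkCopyCompressedChunk(it);
> } else if (matchingFieldsReader != null) {
>   // #2: decompress the chunk and re-add the surviving docs one at a time,
>   // recompressing around any deleted docs
>   copyDocsSkippingDeletions(it, liveDocs);
> } else {
>   // #3: ordinary merging through the generic StoredFieldsWriter path
>   mergeDocByDoc(reader, liveDocs);
> }
> {code}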
> In my dataset, I only see #2 happening, never #1. The logic for determining whether we can do #1 seems to be:
> {code}
> onChunkBoundary && chunkSmallEnough && chunkLargeEnough && noDeletions
> {code}
> I think the logic for "chunkLargeEnough" is out of sync with the MAX_DOCUMENTS_PER_CHUNK limit? e.g. instead of:
> {code}
> startOffsets[it.chunkDocs - 1] + it.lengths[it.chunkDocs - 1] >= chunkSize // chunk is large enough
> {code}
> it should be something like:
> {code}
> (it.chunkDocs >= MAX_DOCUMENTS_PER_CHUNK || startOffsets[it.chunkDocs - 1] + it.lengths[it.chunkDocs - 1] >= chunkSize) // chunk is large enough
> {code}
> But this only works "at first" and then falls out of sync in my tests. Once that happens, it never reverts back to the #1 algorithm and sticks with #2. So it's still not quite right.
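> For what it's worth, one hypothetical trace of how that drift could happen (assuming MAX_DOCUMENTS_PER_CHUNK = 128, that the writer flushes every 128 buffered docs, and that onChunkBoundary requires an empty pending-docs buffer; not verified against the code):
> {code}
> // segment A: chunks of 128, 128, 40 docs
> //   chunks 1-2 -> #1 (the writer's buffer is empty at each incoming chunk start)
> //   chunk 3    -> fails "chunk is large enough", handled via #2,
> //                 leaving 40 docs buffered in the writer
> // segment B: chunks of 128 docs each
> //   the writer now flushes after 88, 216, 344, ... of B's docs, so its chunk
> //   boundaries never line up with B's again: onChunkBoundary stays false and
> //   every remaining chunk goes through #2
> {code}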
> Maybe [~jpountz] knows off the top of his head...



