Posted to issues@lucene.apache.org by GitBox <gi...@apache.org> on 2022/09/02 00:07:44 UTC

[GitHub] [lucene] gsmiller opened a new pull request, #11738: Optimize MultiTermQueryConstantScoreWrapper for case when a term matches all docs in a segment.

gsmiller opened a new pull request, #11738:
URL: https://github.com/apache/lucene/pull/11738

   ### Description
   
   This PR brings an optimization we recently made to `TermInSetQuery` (#1062) over to `MultiTermQuery` more generally.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@lucene.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@lucene.apache.org
For additional commands, e-mail: issues-help@lucene.apache.org


[GitHub] [lucene] gsmiller commented on a diff in pull request #11738: Optimize MultiTermQueryConstantScoreWrapper for case when a term matches all docs in a segment.

Posted by GitBox <gi...@apache.org>.
gsmiller commented on code in PR #11738:
URL: https://github.com/apache/lucene/pull/11738#discussion_r966403309


##########
lucene/core/src/java/org/apache/lucene/search/MultiTermQueryConstantScoreWrapper.java:
##########
@@ -179,8 +189,29 @@ private WeightOrDocIdSet rewrite(LeafReaderContext context) throws IOException {
           return new WeightOrDocIdSet(weight);
         }
 
-        // Too many terms: go back to the terms we already collected and start building the bit set
+        // Too many terms: we'll evaluate the term disjunction and populate a bitset. We start with
+        // the terms we haven't seen yet in case one of them matches all docs and lets us optimize
+        // (likely rare in practice):

Review Comment:
   Got it. Yeah that makes sense to me. I'll flip this back around.





[GitHub] [lucene] gsmiller commented on pull request #11738: Optimize MultiTermQueryConstantScoreWrapper for case when a term matches all docs in a segment.

Posted by GitBox <gi...@apache.org>.
gsmiller commented on PR #11738:
URL: https://github.com/apache/lucene/pull/11738#issuecomment-1255173279

   @rmuir did you have any other feedback or opposition to this change? Sorry, it dropped off my plate for a bit, but I'm picking it up now and looking to get it merged. Thanks again!




[GitHub] [lucene] rmuir commented on a diff in pull request #11738: Optimize MultiTermQueryConstantScoreWrapper for case when a term matches all docs in a segment.

Posted by GitBox <gi...@apache.org>.
rmuir commented on code in PR #11738:
URL: https://github.com/apache/lucene/pull/11738#discussion_r966338760


##########
lucene/core/src/java/org/apache/lucene/search/MultiTermQueryConstantScoreWrapper.java:
##########
@@ -179,8 +189,29 @@ private WeightOrDocIdSet rewrite(LeafReaderContext context) throws IOException {
           return new WeightOrDocIdSet(weight);
         }
 
-        // Too many terms: go back to the terms we already collected and start building the bit set
+        // Too many terms: we'll evaluate the term disjunction and populate a bitset. We start with
+        // the terms we haven't seen yet in case one of them matches all docs and lets us optimize
+        // (likely rare in practice):

Review Comment:
   right, whereas before we'd always iterate all the terms/postings in sequential order. I feel the optimization here is too invasive on the typical case? e.g. on a big index with an MTQ that matches many terms, we may suffer page faults and such going back to those first 16 terms, when it can be avoided.





[GitHub] [lucene] gsmiller commented on a diff in pull request #11738: Optimize MultiTermQueryConstantScoreWrapper for case when a term matches all docs in a segment.

Posted by GitBox <gi...@apache.org>.
gsmiller commented on code in PR #11738:
URL: https://github.com/apache/lucene/pull/11738#discussion_r966336768


##########
lucene/core/src/java/org/apache/lucene/search/MultiTermQueryConstantScoreWrapper.java:
##########
@@ -165,9 +143,46 @@ private WeightOrDocIdSet rewrite(LeafReaderContext context) throws IOException {
 
         PostingsEnum docs = null;
 
-        final List<TermAndState> collectedTerms = new ArrayList<>();
-        if (collectTerms(context, termsEnum, collectedTerms)) {
-          // build a boolean query
+        // We will first try to collect up to 'threshold' terms into 'matchingTerms'
+        // if there are too many terms, we will fall back to building the 'builder'

Review Comment:
   > I don't think its worth completely restructuring the code for that case? Its only 16 terms at most.
   
   Yeah, that's a fair point. I think we can potentially get the best of both worlds anyway (see latest revision). Happy to stay more faithful to the current code structure.
   
   > Yeah, I think i could have read it wrong, my bad. I did try to stare for a while but I think i got confused by the decision tree.
   
   It's nuanced and slightly tricky code. I've stared at it a lot between this and `TermInSetQuery` so it's kind of ingrained in my brain right now, but it is a bit tricky to read for sure.





[GitHub] [lucene] rmuir commented on a diff in pull request #11738: Optimize MultiTermQueryConstantScoreWrapper for case when a term matches all docs in a segment.

Posted by GitBox <gi...@apache.org>.
rmuir commented on code in PR #11738:
URL: https://github.com/apache/lucene/pull/11738#discussion_r965442751


##########
lucene/core/src/java/org/apache/lucene/search/MultiTermQueryConstantScoreWrapper.java:
##########
@@ -165,9 +143,46 @@ private WeightOrDocIdSet rewrite(LeafReaderContext context) throws IOException {
 
         PostingsEnum docs = null;
 
-        final List<TermAndState> collectedTerms = new ArrayList<>();
-        if (collectTerms(context, termsEnum, collectedTerms)) {
-          // build a boolean query
+        // We will first try to collect up to 'threshold' terms into 'matchingTerms'
+        // if there are too many terms, we will fall back to building the 'builder'

Review Comment:
   But this comment isn't really what happens. Instead, if we hit the threshold (16), we just null out our ArrayList of `collectedTerms` and continue iterating through the same loop, allocating another ArrayList of `collectedTerms` for the next 16 terms, over and over again.
   
   So if there are millions of terms, we add a lot more allocations and GC pressure to do this optimization. I would prefer that we "really bail" after we hit the limit, so it behaves as fast as before for the "huge numbers of terms" case. Seems like handling the case for `if (builder != null)` in the if-then-else decision tree here might do the trick?
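   The "really bail" idea can be sketched as a toy model (this is not the actual Lucene code; `ThresholdCollect`, `evaluate`, and the `int[]`-per-term representation of postings are all hypothetical, for illustration only): collect until the threshold, then allocate the bitset builder exactly once and never touch the list again.
   
   ```java
   import java.util.ArrayList;
   import java.util.BitSet;
   import java.util.List;
   
   // Toy model of "bail after the threshold": collect term postings until the
   // threshold is hit, then switch once to a single bitset builder for the rest
   // of the iteration -- no repeated list reallocation for huge term counts.
   class ThresholdCollect {
       static final int THRESHOLD = 16;
   
       // Each int[] stands in for one term's postings (its matching doc ids).
       static BitSet evaluate(List<int[]> termPostings, int maxDoc) {
           List<int[]> collected = new ArrayList<>();
           BitSet builder = null;
           for (int[] postings : termPostings) {
               if (builder == null && collected.size() < THRESHOLD) {
                   collected.add(postings); // still under the threshold
                   continue;
               }
               if (builder == null) {
                   // Threshold hit: allocate the builder once, replay the terms
                   // collected so far, and discard the list for good.
                   builder = new BitSet(maxDoc);
                   for (int[] earlier : collected) {
                       for (int doc : earlier) builder.set(doc);
                   }
                   collected = null;
               }
               for (int doc : postings) builder.set(doc);
           }
           if (builder == null) {
               // Few terms: the real code would build a BooleanQuery instead;
               // here we just materialize the bitset for illustration.
               builder = new BitSet(maxDoc);
               for (int[] earlier : collected) {
                   for (int doc : earlier) builder.set(doc);
               }
           }
           return builder;
       }
   }
   ```
   
   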





[GitHub] [lucene] rmuir commented on pull request #11738: Optimize MultiTermQueryConstantScoreWrapper for case when a term matches all docs in a segment.

Posted by GitBox <gi...@apache.org>.
rmuir commented on PR #11738:
URL: https://github.com/apache/lucene/pull/11738#issuecomment-1240897004

   @gsmiller I think the question is, is it worth adding all those extra conditionals? I don't think the `DocIdSet#all` will really be that much faster in practice (I'm not even sure how often this optimization will hit for users anyway).
   
   Usually if a term is so dense that it is matching all the documents, then we aren't really reading actual postings anyway: we basically just read a single byte header that means "all ones" for the 128-doc FOR block and don't do any actual read/decoding.
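   A toy sketch of why such dense blocks are cheap to read (this is not the real Lucene codec; `ToyForBlock` and the single-byte header convention are invented for illustration): when the header says "all ones", decoding is just generating consecutive doc ids, with no payload bytes read or bits unpacked.
   
   ```java
   // Toy model (not the actual Lucene postings format) of an "all ones" FOR
   // block: a fully dense 128-doc block is encoded as a single header byte,
   // so decoding it means emitting 128 consecutive doc ids and nothing more.
   class ToyForBlock {
       static final int BLOCK_SIZE = 128;
   
       // Returns the doc ids of one block starting at `base`. A header of 0
       // means "all ones": every doc in the block's range matches.
       static int[] decodeBlock(byte header, int base) {
           if (header == 0) {
               int[] docs = new int[BLOCK_SIZE];
               for (int i = 0; i < BLOCK_SIZE; i++) docs[i] = base + i;
               return docs; // no payload bytes were read at all
           }
           throw new UnsupportedOperationException("non-dense blocks omitted from this sketch");
       }
   }
   ```
   
   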




[GitHub] [lucene] gsmiller commented on a diff in pull request #11738: Optimize MultiTermQueryConstantScoreWrapper for case when a term matches all docs in a segment.

Posted by GitBox <gi...@apache.org>.
gsmiller commented on code in PR #11738:
URL: https://github.com/apache/lucene/pull/11738#discussion_r966070400


##########
lucene/core/src/java/org/apache/lucene/search/MultiTermQueryConstantScoreWrapper.java:
##########
@@ -165,9 +143,46 @@ private WeightOrDocIdSet rewrite(LeafReaderContext context) throws IOException {
 
         PostingsEnum docs = null;
 
-        final List<TermAndState> collectedTerms = new ArrayList<>();
-        if (collectTerms(context, termsEnum, collectedTerms)) {
-          // build a boolean query
+        // We will first try to collect up to 'threshold' terms into 'matchingTerms'
+        // if there are too many terms, we will fall back to building the 'builder'

Review Comment:
   Thanks for the suggestion @rmuir. Let me see if I can use the existing code structure a bit more in this change. The reason I didn't want to just call `collectTerms` as-is is that we could unnecessarily seek and load term states when we've already found a term covering all docs. For example, if the first term we visit covers all docs, we can just stop there.
   
   I'm also not sure I'm following your point about reallocating `collectedTerms` as part of this change? That's certainly not my intention with this code, but maybe I'm staring at a bug and not realizing it? As soon as we hit the size threshold, we should be nulling out `collectedTerms`, initializing a builder, and just using that for the remaining term iteration. Apologies if I'm overlooking something though. Entirely possible.





[GitHub] [lucene] rmuir commented on a diff in pull request #11738: Optimize MultiTermQueryConstantScoreWrapper for case when a term matches all docs in a segment.

Posted by GitBox <gi...@apache.org>.
rmuir commented on code in PR #11738:
URL: https://github.com/apache/lucene/pull/11738#discussion_r966085670


##########
lucene/core/src/java/org/apache/lucene/search/MultiTermQueryConstantScoreWrapper.java:
##########
@@ -165,9 +143,46 @@ private WeightOrDocIdSet rewrite(LeafReaderContext context) throws IOException {
 
         PostingsEnum docs = null;
 
-        final List<TermAndState> collectedTerms = new ArrayList<>();
-        if (collectTerms(context, termsEnum, collectedTerms)) {
-          // build a boolean query
+        // We will first try to collect up to 'threshold' terms into 'matchingTerms'
+        // if there are too many terms, we will fall back to building the 'builder'

Review Comment:
   > Thanks for the suggestion @rmuir. Let me see if I can use the existing code structure a bit more in this change. The reason I didn't want to just call collectTerms as-is is that we could unnecessarily seek and load term states when we've already found a term covering all docs. For example, if the first term we visit covers all docs, we can just stop there.
   
   I don't think its worth completely restructuring the code for that case? Its only 16 terms at most.
   
   > I'm also not sure I'm following your point about reallocating `collectedTerms` as part of this change? That's certainly not my intention with this code, but maybe I'm staring at a bug and not realizing it? As soon as we hit the size threshold, we should be nulling out `collectedTerms`, initializing a builder, and just using that for the remaining term iteration. Apologies if I'm overlooking something though. Entirely possible.
   
   Yeah, I think i could have read it wrong, my bad. I did try to stare for a while but I think i got confused by the decision tree.





[GitHub] [lucene] gsmiller commented on pull request #11738: Optimize MultiTermQueryConstantScoreWrapper for case when a term matches all docs in a segment.

Posted by GitBox <gi...@apache.org>.
gsmiller commented on PR #11738:
URL: https://github.com/apache/lucene/pull/11738#issuecomment-1241061002

   @rmuir that's a fair point. I'll put up another iteration shortly that tries to address this feedback. Hopefully it will converge on something that makes sense to everyone :)




[GitHub] [lucene] gsmiller commented on pull request #11738: Optimize MultiTermQueryConstantScoreWrapper for case when a term matches all docs in a segment.

Posted by GitBox <gi...@apache.org>.
gsmiller commented on PR #11738:
URL: https://github.com/apache/lucene/pull/11738#issuecomment-1240856191

   @jpountz:
   
   > It might not be a big win in practice, but it should be enough to compare the docFreq with the docCount (rather than maxDoc) and use the postings of a term whose docFreq is equal to docCount as an iterator of matches.
   
   I like that idea. I wonder if checking for both conditions makes sense? If a term contains all docs in the segment, it should be more efficient to use `DocIdSet#all` right? (rather than iterating the actual postings). But, if a term doesn't contain all docs in the segment but _does_ contain all docs in the field (i.e., the field isn't completely dense), we could add an additional optimization here to use that single term's postings. Is that what you had in mind?
   
   Here's what I'm thinking:
   ```java
             int docFreq = termsEnum.docFreq();
             if (reader.maxDoc() == docFreq) {
               // the term matches every doc in the segment
               return new WeightOrDocIdSet(DocIdSet.all(docFreq));
             } else if (terms.getDocCount() == docFreq) {
               // the term matches every doc that has the field: rewrite to that single term
               TermStates termStates = new TermStates(searcher.getTopReaderContext());
               termStates.register(termsEnum.termState(), context.ord, docFreq, termsEnum.totalTermFreq());
               Query q = new ConstantScoreQuery(new TermQuery(new Term(query.field, term), termStates));
               Weight weight = searcher.rewrite(q).createWeight(searcher, scoreMode, score());
               return new WeightOrDocIdSet(weight);
             }
   ```




[GitHub] [lucene] rmuir commented on a diff in pull request #11738: Optimize MultiTermQueryConstantScoreWrapper for case when a term matches all docs in a segment.

Posted by GitBox <gi...@apache.org>.
rmuir commented on code in PR #11738:
URL: https://github.com/apache/lucene/pull/11738#discussion_r965449782


##########
lucene/core/src/java/org/apache/lucene/search/MultiTermQueryConstantScoreWrapper.java:
##########
@@ -165,9 +143,46 @@ private WeightOrDocIdSet rewrite(LeafReaderContext context) throws IOException {
 
         PostingsEnum docs = null;
 
-        final List<TermAndState> collectedTerms = new ArrayList<>();
-        if (collectTerms(context, termsEnum, collectedTerms)) {
-          // build a boolean query
+        // We will first try to collect up to 'threshold' terms into 'matchingTerms'
+        // if there are too many terms, we will fall back to building the 'builder'

Review Comment:
   by the way, i think we could make the change more safely (performance wise), to just use the existing code structure, where we call collectTerms() and so on. It has been optimized over the years.
   
   We can just add a simple check instead to be more conservative?:
   ```diff
           if (collectTerms(context, termsEnum, collectedTerms)) {
             // build a boolean query
             BooleanQuery.Builder bq = new BooleanQuery.Builder();
             for (TermAndState t : collectedTerms) {
   +          // optimize terms that match all documents
   +          if (t.docFreq == reader.maxDoc()) {
   +            return new WeightOrDocIdSet(DocIdSet.all(reader.maxDoc()));
   +          }
               final TermStates termStates = new TermStates(searcher.getTopReaderContext());
               termStates.register(t.state, context.ord, t.docFreq, t.totalTermFreq);
               bq.add(new TermQuery(new Term(query.field, t.term), termStates), Occur.SHOULD);
             }
             Query q = new ConstantScoreQuery(bq.build());
             final Weight weight = searcher.rewrite(q).createWeight(searcher, scoreMode, score());
             return new WeightOrDocIdSet(weight);
           }
   ```





[GitHub] [lucene] gsmiller commented on a diff in pull request #11738: Optimize MultiTermQueryConstantScoreWrapper for case when a term matches all docs in a segment.

Posted by GitBox <gi...@apache.org>.
gsmiller commented on code in PR #11738:
URL: https://github.com/apache/lucene/pull/11738#discussion_r966335478


##########
lucene/core/src/java/org/apache/lucene/search/MultiTermQueryConstantScoreWrapper.java:
##########
@@ -179,8 +189,29 @@ private WeightOrDocIdSet rewrite(LeafReaderContext context) throws IOException {
           return new WeightOrDocIdSet(weight);
         }
 
-        // Too many terms: go back to the terms we already collected and start building the bit set
+        // Too many terms: we'll evaluate the term disjunction and populate a bitset. We start with
+        // the terms we haven't seen yet in case one of them matches all docs and lets us optimize
+        // (likely rare in practice):

Review Comment:
   The difference in the ordering change is just that we'll start building our bitset from the 17th term onwards and then come back and "fill in" the first 16. We still iterate the query-provided terms a single time in the order they're provided; it's just a question of when we go back to "fill in" those first 16 (do we "pause" after hitting the 17th term to fill in the first 16 and then pick back up, or do we continue on and come back at the end?).
   
   Do you suspect a performance shift or some other impact if we tweak this? I don't have a strong opinion.
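   The two orderings being discussed can be sketched as a toy model (not the actual Lucene code; `FillOrder` and the `int[]`-per-term postings representation are hypothetical): every term's postings are consumed exactly once either way, and the resulting bitset is identical — only the traversal order differs.
   
   ```java
   import java.util.BitSet;
   import java.util.List;
   
   // Toy illustration of the two fill orders. "collected" stands in for the
   // first <threshold> terms already gathered; "remaining" for the rest.
   class FillOrder {
       // "Pause" order: replay the collected terms as soon as the threshold is
       // hit, then continue with the remaining terms sequentially.
       static BitSet pauseThenContinue(List<int[]> collected, List<int[]> remaining, int maxDoc) {
           BitSet bits = new BitSet(maxDoc);
           for (int[] postings : collected) for (int doc : postings) bits.set(doc);
           for (int[] postings : remaining) for (int doc : postings) bits.set(doc);
           return bits;
       }
   
       // "Come back later" order: keep going with the remaining terms first and
       // fill in the collected ones at the end.
       static BitSet continueThenFillIn(List<int[]> collected, List<int[]> remaining, int maxDoc) {
           BitSet bits = new BitSet(maxDoc);
           for (int[] postings : remaining) for (int doc : postings) bits.set(doc);
           for (int[] postings : collected) for (int doc : postings) bits.set(doc);
           return bits;
       }
   }
   ```
   
   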





[GitHub] [lucene] gsmiller merged pull request #11738: Optimize MultiTermQueryConstantScoreWrapper for case when a term matches all docs in a segment.

Posted by GitBox <gi...@apache.org>.
gsmiller merged PR #11738:
URL: https://github.com/apache/lucene/pull/11738




[GitHub] [lucene] jpountz commented on pull request #11738: Optimize MultiTermQueryConstantScoreWrapper for case when a term matches all docs in a segment.

Posted by GitBox <gi...@apache.org>.
jpountz commented on PR #11738:
URL: https://github.com/apache/lucene/pull/11738#issuecomment-1240714080

   It might not be a big win in practice, but it should be enough to compare the `docFreq` with the `docCount` (rather than `maxDoc`) and use the postings of a term whose `docFreq` is equal to `docCount` as an iterator of matches.




[GitHub] [lucene] rmuir commented on a diff in pull request #11738: Optimize MultiTermQueryConstantScoreWrapper for case when a term matches all docs in a segment.

Posted by GitBox <gi...@apache.org>.
rmuir commented on code in PR #11738:
URL: https://github.com/apache/lucene/pull/11738#discussion_r966326978


##########
lucene/core/src/java/org/apache/lucene/search/MultiTermQueryConstantScoreWrapper.java:
##########
@@ -179,8 +189,29 @@ private WeightOrDocIdSet rewrite(LeafReaderContext context) throws IOException {
           return new WeightOrDocIdSet(weight);
         }
 
-        // Too many terms: go back to the terms we already collected and start building the bit set
+        // Too many terms: we'll evaluate the term disjunction and populate a bitset. We start with
+        // the terms we haven't seen yet in case one of them matches all docs and lets us optimize
+        // (likely rare in practice):

Review Comment:
   i'd prefer we didn't do this "backwards" because it means we no longer traverse everything sequentially like we previously did?





[GitHub] [lucene] rmuir commented on pull request #11738: Optimize MultiTermQueryConstantScoreWrapper for case when a term matches all docs in a segment.

Posted by GitBox <gi...@apache.org>.
rmuir commented on PR #11738:
URL: https://github.com/apache/lucene/pull/11738#issuecomment-1256102333

   nope, looks good

