Posted to issues@lucene.apache.org by GitBox <gi...@apache.org> on 2022/11/09 13:51:12 UTC

[GitHub] [lucene] donnerpeter opened a new pull request, #11909: hunspell: introduce FragmentChecker to speed up ModifyingSuggester

donnerpeter opened a new pull request, #11909:
URL: https://github.com/apache/lucene/pull/11909

   add NGramFragmentChecker to quickly check whether insertions/replacements produce strings that are even possible in the language
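
   For context, the new `FragmentChecker` interface (quoted in the reviews below) has a single method, `hasImpossibleFragmentAround(CharSequence word, int start, int end)`, plus an `EVERYTHING_POSSIBLE` constant that never rejects anything. A hedged sketch of a trivial custom implementation, not part of the PR, that treats any fragment containing a character outside a plain ASCII alphabet as impossible (how the Suggester consumes the checker is not shown in this thread):

   ```java
   // Hypothetical illustration only: reject fragments containing
   // characters that are not ASCII letters.
   FragmentChecker asciiLettersOnly =
       (word, start, end) -> {
         for (int i = Math.max(0, start); i < Math.min(end, word.length()); i++) {
           char c = word.charAt(i);
           if ((c < 'a' || c > 'z') && (c < 'A' || c > 'Z')) {
             return true; // an impossible fragment intersects [start, end)
           }
         }
         return false; // nothing obviously impossible around [start, end)
       };
   ```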



[GitHub] [lucene] uschindler commented on a diff in pull request #11909: hunspell: introduce FragmentChecker to speed up ModifyingSuggester

Posted by GitBox <gi...@apache.org>.
uschindler commented on code in PR #11909:
URL: https://github.com/apache/lucene/pull/11909#discussion_r1020151061


##########
lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/FragmentChecker.java:
##########
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.analysis.hunspell;
+
+/**
+ * An oracle for quickly checking that a specific part of a word can never be a valid word. This
+ * allows speeding up the "Modification" part of {@link Suggester} by avoiding expensive checks on
+ * impossible words. Implementations may use character case, n-grams or whatever they like.
+ *
+ * @see NGramFragmentChecker
+ */
+public interface FragmentChecker {

Review Comment:
   I would mark this as `@FunctionalInterface`




[GitHub] [lucene] dweiss commented on a diff in pull request #11909: hunspell: introduce FragmentChecker to speed up ModifyingSuggester

Posted by GitBox <gi...@apache.org>.
dweiss commented on code in PR #11909:
URL: https://github.com/apache/lucene/pull/11909#discussion_r1020253325


##########
lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/FragmentChecker.java:
##########
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.analysis.hunspell;
+
+/**
+ * An oracle for quickly checking that a specific part of a word can never be a valid word. This
+ * allows speeding up the "Modification" part of {@link Suggester} by avoiding expensive checks on
+ * impossible words. Implementations may use character case, n-grams or whatever they like.
+ *
+ * @see NGramFragmentChecker
+ */
+public interface FragmentChecker {

Review Comment:
   I never understood the intent of this annotation... What difference does it make in practice?
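
   For reference, the annotation is purely a compile-time contract: lambdas and method references work either way, but `@FunctionalInterface` makes the compiler reject any later change that adds a second abstract method. A minimal sketch of what it would look like on the new interface:

   ```java
   @FunctionalInterface
   public interface FragmentChecker {
     FragmentChecker EVERYTHING_POSSIBLE = (word, start, end) -> false;

     boolean hasImpossibleFragmentAround(CharSequence word, int start, int end);

     // Declaring a second abstract method here would now fail to compile,
     // protecting lambda call sites such as EVERYTHING_POSSIBLE above.
     // Constants and default/static methods do not count against the rule.
   }
   ```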




[GitHub] [lucene] dweiss commented on a diff in pull request #11909: hunspell: introduce FragmentChecker to speed up ModifyingSuggester

Posted by GitBox <gi...@apache.org>.
dweiss commented on code in PR #11909:
URL: https://github.com/apache/lucene/pull/11909#discussion_r1018232665


##########
lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/FragmentChecker.java:
##########
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.analysis.hunspell;
+
+/**
+ * An oracle for quickly checking that a specific part of a word can never be a valid word. This
+ * allows speeding up the "Modification" part of {@link Suggester} but avoiding expensive checks on
+ * impossible words. Implementations may use character case, n-grams or whatever they like.
+ *
+ * @see NGramFragmentChecker
+ */
+public interface FragmentChecker {
+  FragmentChecker EVERYTHING_POSSIBLE = (word, start, end) -> false;
+
+  /**
+   * Check if some substring of the word that intersects with the given offsets is impossible in the

Review Comment:
   ```suggestion
      * Check if the substring of the word between {@code start} (inclusive) and {@code end} (exclusive) is impossible in the
   ```
   I didn't check whether the inclusive/exclusive part is right, but I'd use the same contract as in String substring/subSequence.



##########
lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/FragmentChecker.java:
##########
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.analysis.hunspell;
+
+/**
+ * An oracle for quickly checking that a specific part of a word can never be a valid word. This
+ * allows speeding up the "Modification" part of {@link Suggester} but avoiding expensive checks on

Review Comment:
   ```suggestion
    * allows speeding up the "Modification" part of {@link Suggester} by avoiding expensive checks on
   ```



##########
lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/NGramFragmentChecker.java:
##########
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.analysis.hunspell;
+
+import java.util.BitSet;
+import java.util.Collection;
+import java.util.Locale;
+import java.util.function.Consumer;
+
+/**
+ * A {@link FragmentChecker} based on all character n-grams possible in a certain language, keeping
+ * them in a relatively memory-efficient, but probabilistic data structure.
+ *
+ * @see #fromAllSimpleWords for enumerating the whole dictionary automatically
+ * @see #fromWords for creating an instance from a precomputed set of all word forms or n-grams
+ */
+public class NGramFragmentChecker implements FragmentChecker {
+  private final int n;
+  private final BitSet hashes;
+
+  private NGramFragmentChecker(int n, BitSet hashes) {
+    if (n < 2) throw new IllegalArgumentException("N should be more >=2: " + n);
+
+    this.n = n;
+    this.hashes = hashes;
+
+    if (hashes.cardinality() > hashes.size() * 2 / 3) {
+      throw new IllegalArgumentException(
+          "Too many collisions, please report this to dev@lucene.apache.org");
+    }
+  }
+
+  int hashCount() {
+    return hashes.cardinality();
+  }
+
+  @Override
+  public boolean hasImpossibleFragmentAround(CharSequence word, int start, int end) {
+    if (word.length() < n) {
+      return false;
+    }
+    int firstIntersectingStart = Math.max(0, start - n + 1);
+    int lastIntersectingStart = Math.min(end - 1, word.length() - n);
+    for (int i = firstIntersectingStart; i <= lastIntersectingStart; i++) {
+      if (!hashes.get(Math.abs(lowCollisionHash(word, i, i + n) % hashes.size()))) {
+        return true;
+      }
+    }
+    return false;
+  }
+
+  private static int lowCollisionHash(CharSequence chars, int offset, int end) {
+    int result = 0;
+    for (int i = offset; i < end; i++) {
+      result = 239 * result + chars.charAt(i);
+    }
+    return result;
+  }
+
+  /**
+   * Iterate the whole dictionary, derive all word forms (using {@link WordFormGenerator}), vary the
+   * case to get all words acceptable by the spellchecker, and create a fragment checker based on
+   * their {@code n}-grams. Note that this enumerates only words derivable by suffixes and prefixes.
+   * If the language has compounds, some n-grams possible via those compounds can be missed. In the
+   * latter case, consider using {@link #fromWords}.
+   *
+   * @param n the length of n-grams
+   * @param dictionary the dictionary to traverse
+   * @param checkCanceled an object that's periodically called, allowing to interrupt the traversal
+   *     by throwing an exception
+   */
+  public static NGramFragmentChecker fromAllSimpleWords(
+      int n, Dictionary dictionary, Runnable checkCanceled) {
+    BitSet hashes = new BitSet(1 << (7 + n * 3));

Review Comment:
   That sparsity assertion inside NGramFragmentChecker and this code make me wonder whether this sizing is something you came up with experimentally, or something you can give a pointer/reference to? :) 
   
   I'd also restrict the maximum value of n on input (and in the docs); cap it to 5 or something - I don't think it makes sense to create a larger bitset with this implementation?
   
   Also, it's basically a Bloom filter backed by a single hash function, with the collision probability decreased by making the hash function's target table large enough, am I right?...




[GitHub] [lucene] donnerpeter commented on a diff in pull request #11909: hunspell: introduce FragmentChecker to speed up ModifyingSuggester

Posted by GitBox <gi...@apache.org>.
donnerpeter commented on code in PR #11909:
URL: https://github.com/apache/lucene/pull/11909#discussion_r1018377196


##########
lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/FragmentChecker.java:
##########
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.analysis.hunspell;
+
+/**
+ * An oracle for quickly checking that a specific part of a word can never be a valid word. This
+ * allows speeding up the "Modification" part of {@link Suggester} but avoiding expensive checks on
+ * impossible words. Implementations may use character case, n-grams or whatever they like.
+ *
+ * @see NGramFragmentChecker
+ */
+public interface FragmentChecker {
+  FragmentChecker EVERYTHING_POSSIBLE = (word, start, end) -> false;
+
+  /**
+   * Check if some substring of the word that intersects with the given offsets is impossible in the

Review Comment:
   Expanded the documentation. Hopefully it's clearer now.




[GitHub] [lucene] donnerpeter commented on a diff in pull request #11909: hunspell: introduce FragmentChecker to speed up ModifyingSuggester

Posted by GitBox <gi...@apache.org>.
donnerpeter commented on code in PR #11909:
URL: https://github.com/apache/lucene/pull/11909#discussion_r1018356474


##########
lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/FragmentChecker.java:
##########
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.analysis.hunspell;
+
+/**
+ * An oracle for quickly checking that a specific part of a word can never be a valid word. This
+ * allows speeding up the "Modification" part of {@link Suggester} but avoiding expensive checks on
+ * impossible words. Implementations may use character case, n-grams or whatever they like.
+ *
+ * @see NGramFragmentChecker
+ */
+public interface FragmentChecker {
+  FragmentChecker EVERYTHING_POSSIBLE = (word, start, end) -> false;
+
+  /**
+   * Check if some substring of the word that intersects with the given offsets is impossible in the

Review Comment:
   Mostly it's a single-char interval there now. And the semantics are as described (albeit probably not very clearly): if any impossible substring (n-gram) intersects that 1-char interval, return true. Any ideas for improving the wording?



##########
lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/FragmentChecker.java:
##########
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.analysis.hunspell;
+
+/**
+ * An oracle for quickly checking that a specific part of a word can never be a valid word. This
+ * allows speeding up the "Modification" part of {@link Suggester} but avoiding expensive checks on
+ * impossible words. Implementations may use character case, n-grams or whatever they like.
+ *
+ * @see NGramFragmentChecker
+ */
+public interface FragmentChecker {
+  FragmentChecker EVERYTHING_POSSIBLE = (word, start, end) -> false;
+
+  /**
+   * Check if some substring of the word that intersects with the given offsets is impossible in the

Review Comment:
   Mostly it's a single-char interval there now, and those are all possible. The semantics are as described (albeit probably not very clearly): if any impossible substring (n-gram) intersects that 1-char interval, return true. Any ideas for improving the wording?




[GitHub] [lucene] donnerpeter merged pull request #11909: hunspell: introduce FragmentChecker to speed up ModifyingSuggester

Posted by GitBox <gi...@apache.org>.
donnerpeter merged PR #11909:
URL: https://github.com/apache/lucene/pull/11909



[GitHub] [lucene] donnerpeter commented on a diff in pull request #11909: hunspell: introduce FragmentChecker to speed up ModifyingSuggester

Posted by GitBox <gi...@apache.org>.
donnerpeter commented on code in PR #11909:
URL: https://github.com/apache/lucene/pull/11909#discussion_r1018359280


##########
lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/NGramFragmentChecker.java:
##########
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.analysis.hunspell;
+
+import java.util.BitSet;
+import java.util.Collection;
+import java.util.Locale;
+import java.util.function.Consumer;
+
+/**
+ * A {@link FragmentChecker} based on all character n-grams possible in a certain language, keeping
+ * them in a relatively memory-efficient, but probabilistic data structure.
+ *
+ * @see #fromAllSimpleWords for enumerating the whole dictionary automatically
+ * @see #fromWords for creating an instance from a precomputed set of all word forms or n-grams
+ */
+public class NGramFragmentChecker implements FragmentChecker {
+  private final int n;
+  private final BitSet hashes;
+
+  private NGramFragmentChecker(int n, BitSet hashes) {
+    if (n < 2) throw new IllegalArgumentException("N should be more >=2: " + n);
+
+    this.n = n;
+    this.hashes = hashes;
+
+    if (hashes.cardinality() > hashes.size() * 2 / 3) {
+      throw new IllegalArgumentException(
+          "Too many collisions, please report this to dev@lucene.apache.org");
+    }
+  }
+
+  int hashCount() {
+    return hashes.cardinality();
+  }
+
+  @Override
+  public boolean hasImpossibleFragmentAround(CharSequence word, int start, int end) {
+    if (word.length() < n) {
+      return false;
+    }
+    int firstIntersectingStart = Math.max(0, start - n + 1);
+    int lastIntersectingStart = Math.min(end - 1, word.length() - n);
+    for (int i = firstIntersectingStart; i <= lastIntersectingStart; i++) {
+      if (!hashes.get(Math.abs(lowCollisionHash(word, i, i + n) % hashes.size()))) {
+        return true;
+      }
+    }
+    return false;
+  }
+
+  private static int lowCollisionHash(CharSequence chars, int offset, int end) {
+    int result = 0;
+    for (int i = offset; i < end; i++) {
+      result = 239 * result + chars.charAt(i);
+    }
+    return result;
+  }
+
+  /**
+   * Iterate the whole dictionary, derive all word forms (using {@link WordFormGenerator}), vary the
+   * case to get all words acceptable by the spellchecker, and create a fragment checker based on
+   * their {@code n}-grams. Note that this enumerates only words derivable by suffixes and prefixes.
+   * If the language has compounds, some n-grams possible via those compounds can be missed. In the
+   * latter case, consider using {@link #fromWords}.
+   *
+   * @param n the length of n-grams
+   * @param dictionary the dictionary to traverse
+   * @param checkCanceled an object that's periodically called, allowing to interrupt the traversal
+   *     by throwing an exception
+   */
+  public static NGramFragmentChecker fromAllSimpleWords(
+      int n, Dictionary dictionary, Runnable checkCanceled) {
+    BitSet hashes = new BitSet(1 << (7 + n * 3));

Review Comment:
   It's from experimentation. I forgot to restrict it, thanks for the reminder! And yes, it's currently a Bloom filter with a single hash function; the implementation can be changed later.
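
   To make the sizing/collision trade-off concrete: with a single hash function, an absent n-gram is falsely reported "possible" roughly with probability cardinality/size, which is what the constructor's "more than 2/3 of the bits set" guard bounds. A rough sketch using the constants from the quoted code and a hypothetical n-gram count:

   ```java
   // Back-of-the-envelope sizing check; the constants mirror the quoted code,
   // the n-gram count is hypothetical.
   public class BloomSizingSketch {
     public static void main(String[] args) {
       int n = 3;
       int bits = 1 << (7 + n * 3); // 65536 bits for trigrams, as in fromAllSimpleWords
       int setBits = 20_000;        // hypothetical cardinality after hashing all trigrams
       // With one hash function, an absent trigram collides with a set bit
       // (and is reported "possible") with probability roughly setBits / bits.
       double falsePossibleRate = (double) setBits / bits;
       System.out.printf("false-'possible' rate ~ %.2f%n", falsePossibleRate);
       // Such false positives only cost speed (the impossible word still gets the
       // expensive full check); they never make the suggester produce wrong results.
     }
   }
   ```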




[GitHub] [lucene] dweiss commented on a diff in pull request #11909: hunspell: introduce FragmentChecker to speed up ModifyingSuggester

Posted by GitBox <gi...@apache.org>.
dweiss commented on code in PR #11909:
URL: https://github.com/apache/lucene/pull/11909#discussion_r1018368778


##########
lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/FragmentChecker.java:
##########
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.analysis.hunspell;
+
+/**
+ * An oracle for quickly checking that a specific part of a word can never be a valid word. This
+ * allows speeding up the "Modification" part of {@link Suggester} but avoiding expensive checks on
+ * impossible words. Implementations may use character case, n-grams or whatever they like.
+ *
+ * @see NGramFragmentChecker
+ */
+public interface FragmentChecker {
+  FragmentChecker EVERYTHING_POSSIBLE = (word, start, end) -> false;
+
+  /**
+   * Check if some substring of the word that intersects with the given offsets is impossible in the

Review Comment:
   Ok, please disregard the suggestion. I get it now, although the wording could indeed be improved. Or maybe a concrete example based on the n-gram fragment checker would clarify things more than a formal contract? 
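
   A concrete (hypothetical) example along those lines, assuming `checker` is a trigram NGramFragmentChecker built from an English word list:

   ```java
   // ModifyingSuggester has replaced the character at index 2 ("price" -> "prlce")
   // and asks about the single-char interval [2, 3):
   boolean impossible = checker.hasImpossibleFragmentAround("prlce", 2, 3);
   // The checker inspects every trigram intersecting that interval:
   // "prl" (start 0), "rlc" (start 1), "lce" (start 2).
   // Assuming none of these trigrams occurs in the word list the checker was
   // built from, the call returns true and the expensive check of "prlce" is skipped.
   ```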


