Posted to issues@lucene.apache.org by GitBox <gi...@apache.org> on 2021/02/22 19:22:57 UTC

[GitHub] [lucene-solr] donnerpeter opened a new pull request #2416: LUCENE-9801: Hunspell suggestions: speed up expandWord by enumerating…

donnerpeter opened a new pull request #2416:
URL: https://github.com/apache/lucene-solr/pull/2416


   
   # Description
   
   … only applicable affixes.
   
   Previously, every affix in the dictionary was enumerated and checked with `startsWith`/`endsWith`, and enumerating a whole FST is expensive.
   
   # Solution
   
   Perform FST traversal in the suggester as well, similar to the existing affix-stripping code. Extract a couple of utility methods and clean up the surrounding code.
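The idea behind the traversal can be sketched with a plain trie standing in for the Lucene FST (names and structure here are illustrative, not the committed code): instead of enumerating every stored affix and testing `startsWith`, we follow at most `word.length()` arcs along the input word, visiting only applicable affixes.

```java
import java.util.*;

// Minimal sketch of prefix-driven traversal. A HashMap-based trie stands in
// for the FST; the real code walks FST arcs, but the cost argument is the
// same: work is bounded by the word length, not the dictionary size.
class AffixTrie {
  final Map<Character, AffixTrie> children = new HashMap<>();
  int affixId = -1; // id stored where an affix ends; -1 = no affix ends here

  void add(String affix, int id) {
    AffixTrie node = this;
    for (char c : affix.toCharArray()) {
      node = node.children.computeIfAbsent(c, k -> new AffixTrie());
    }
    node.affixId = id;
  }

  // Collect ids of all stored affixes that are prefixes of `word`,
  // following at most word.length() arcs.
  static List<Integer> matchingPrefixes(AffixTrie root, String word) {
    List<Integer> ids = new ArrayList<>();
    AffixTrie node = root;
    for (int i = 0; i < word.length() && node != null; i++) {
      node = node.children.get(word.charAt(i));
      if (node != null && node.affixId >= 0) ids.add(node.affixId);
    }
    return ids;
  }
}
```

With affixes `un`, `under`, `re` stored, looking up `"undo"` touches only the `u`–`n`–`d` path and reports the single applicable prefix `un`, regardless of how many other affixes exist.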
   
   # Tests
   
   No new tests. `TestPerformance.fr_suggest` becomes ~40% faster.
   
   # Checklist
   
   Please review the following and check all that apply:
   
   - [x] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability.
   - [x] I have created a Jira issue and added the issue ID to my pull request title.
   - [x] I have given Solr maintainers [access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork) to contribute to my PR branch. (optional but recommended)
   - [x] I have developed this patch against the `master` branch.
   - [x] I have run `./gradlew check`.
   - [ ] I have added tests for my changes.
   - [ ] I have added documentation for the [Ref Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) (for Solr changes only).
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@lucene.apache.org
For additional commands, e-mail: issues-help@lucene.apache.org




[GitHub] [lucene-solr] rmuir merged pull request #2416: LUCENE-9801: Hunspell suggestions: speed up expandWord by enumerating…

Posted by GitBox <gi...@apache.org>.
rmuir merged pull request #2416:
URL: https://github.com/apache/lucene-solr/pull/2416


   




[GitHub] [lucene-solr] donnerpeter commented on a change in pull request #2416: LUCENE-9801: Hunspell suggestions: speed up expandWord by enumerating…

Posted by GitBox <gi...@apache.org>.
donnerpeter commented on a change in pull request #2416:
URL: https://github.com/apache/lucene-solr/pull/2416#discussion_r580526535



##########
File path: lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/Dictionary.java
##########
@@ -297,29 +297,29 @@ IntsRef lookup(FST<IntsRef> fst, char[] word, int offset, int length) {
     final FST.BytesReader bytesReader = fst.getBytesReader();
     final FST.Arc<IntsRef> arc = fst.getFirstArc(new FST.Arc<>());
     // Accumulate output as we go
-    final IntsRef NO_OUTPUT = fst.outputs.getNoOutput();
-    IntsRef output = NO_OUTPUT;
+    IntsRef output = fst.outputs.getNoOutput();
 
     int l = offset + length;
-    try {
-      for (int i = offset, cp; i < l; i += Character.charCount(cp)) {
-        cp = Character.codePointAt(word, i, l);
-        if (fst.findTargetArc(cp, arc, arc, bytesReader) == null) {
-          return null;
-        } else if (arc.output() != NO_OUTPUT) {
-          output = fst.outputs.add(output, arc.output());
-        }
+    for (int i = offset, cp; i < l; i += Character.charCount(cp)) {
+      cp = Character.codePointAt(word, i, l);
+      output = nextArc(fst, arc, bytesReader, output, cp);

Review comment:
       Extracted `nextArc` so it can be reused in the other FST-traversing places.






[GitHub] [lucene-solr] donnerpeter commented on a change in pull request #2416: LUCENE-9801: Hunspell suggestions: speed up expandWord by enumerating…

Posted by GitBox <gi...@apache.org>.
donnerpeter commented on a change in pull request #2416:
URL: https://github.com/apache/lucene-solr/pull/2416#discussion_r580529040



##########
File path: lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/GeneratingSuggester.java
##########
@@ -175,67 +175,85 @@ private static int calcThreshold(String word) {
     }
 
     // suffixes
-    processFST(
-        dictionary.suffixes,
-        (key, ids) -> {
-          String suffix = new StringBuilder(toString(key)).reverse().toString();
-          if (misspelled.length() <= suffix.length() || !misspelled.endsWith(suffix)) return;
-
-          for (int i = 0; i < ids.length; i++) {
-            int suffixId = ids.ints[ids.offset + i];
-            if (!hasCompatibleFlags(root, suffixId) || !checkAffixCondition(suffixId, root.word)) {
-              continue;
-            }
+    processAffixes(
+        false,
+        misspelled,
+        (suffixLength, suffixId) -> {
+          if (!hasCompatibleFlags(root, suffixId) || !checkAffixCondition(suffixId, root.word)) {
+            return;
+          }
 
-            String withSuffix =
-                root.word.substring(0, root.word.length() - affixStripLength(suffixId)) + suffix;
-            result.add(withSuffix);
-            if (dictionary.isCrossProduct(suffixId)) {
-              crossProducts.add(withSuffix);
-            }
+          String suffix = misspelled.substring(misspelled.length() - suffixLength);
+          String withSuffix =
+              root.word.substring(0, root.word.length() - affixStripLength(suffixId)) + suffix;
+          result.add(withSuffix);
+          if (dictionary.isCrossProduct(suffixId)) {
+            crossProducts.add(withSuffix);
           }
         });
 
     // cross-product prefixes
-    processFST(
-        dictionary.prefixes,
-        (key, ids) -> {
-          String prefix = toString(key);
-          if (misspelled.length() <= prefix.length() || !misspelled.startsWith(prefix)) return;
-
-          for (int i = 0; i < ids.length; i++) {
-            int prefixId = ids.ints[ids.offset + i];
-            if (!dictionary.hasFlag(root.entryId, dictionary.affixData(prefixId, AFFIX_FLAG))
-                || !dictionary.isCrossProduct(prefixId)) {
-              continue;
-            }
+    processAffixes(
+        true,
+        misspelled,
+        (prefixLength, prefixId) -> {
+          if (!dictionary.hasFlag(root.entryId, dictionary.affixData(prefixId, AFFIX_FLAG))
+              || !dictionary.isCrossProduct(prefixId)) {
+            return;
+          }
 
-            for (String suffixed : crossProducts) {
-              if (checkAffixCondition(prefixId, suffixed)) {
-                result.add(prefix + suffixed.substring(affixStripLength(prefixId)));
-              }
+          String prefix = misspelled.substring(0, prefixLength);
+          for (String suffixed : crossProducts) {
+            if (checkAffixCondition(prefixId, suffixed)) {
+              result.add(prefix + suffixed.substring(affixStripLength(prefixId)));
             }
           }
         });
 
     // pure prefixes
-    processFST(
-        dictionary.prefixes,
-        (key, ids) -> {
-          String prefix = toString(key);
-          if (misspelled.length() <= prefix.length() || !misspelled.startsWith(prefix)) return;
-
-          for (int i = 0; i < ids.length; i++) {
-            int prefixId = ids.ints[ids.offset + i];
-            if (hasCompatibleFlags(root, prefixId) && checkAffixCondition(prefixId, root.word)) {
-              result.add(prefix + root.word.substring(affixStripLength(prefixId)));
-            }
+    processAffixes(
+        true,
+        misspelled,
+        (prefixLength, prefixId) -> {
+          if (hasCompatibleFlags(root, prefixId) && checkAffixCondition(prefixId, root.word)) {
+            String prefix = misspelled.substring(0, prefixLength);
+            result.add(prefix + root.word.substring(affixStripLength(prefixId)));
           }
         });
 
     return result.stream().limit(MAX_WORDS).collect(Collectors.toList());
   }
 
+  private void processAffixes(boolean prefixes, String word, AffixProcessor processor) {

Review comment:
       This code is quite similar to `Stemmer.stem`, but reusing it isn't trivial: my best attempts gave a ~5% slowdown, probably due to lambda allocation.






[GitHub] [lucene-solr] donnerpeter commented on a change in pull request #2416: LUCENE-9801: Hunspell suggestions: speed up expandWord by enumerating…

Posted by GitBox <gi...@apache.org>.
donnerpeter commented on a change in pull request #2416:
URL: https://github.com/apache/lucene-solr/pull/2416#discussion_r580529691



##########
File path: lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/Stemmer.java
##########
@@ -259,12 +258,8 @@ boolean doStem(
         }
       }
     }
-    try {
-      return stem(
-          word, offset, length, context, -1, Dictionary.FLAG_UNSET, -1, 0, true, false, processor);
-    } catch (IOException bogus) {
-      throw new RuntimeException(bogus);
-    }
+    return stem(

Review comment:
       Catching `IOException` inside `nextArc` lets us avoid declaring it everywhere up the call stack.
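The pattern described in the comment above can be sketched self-contained (the names here are illustrative, not the actual Lucene helpers): the one call site that can throw the checked `IOException` wraps it in an unchecked exception, so callers no longer need `throws IOException` clauses.

```java
import java.io.IOException;

// Sketch of confining a checked IOException to a single helper.
// For in-memory FST reads the exception should never actually occur,
// so rethrowing it unchecked is safe and unclutters every caller.
class Unchecked {
  interface IoSupplier<T> {
    T get() throws IOException;
  }

  static <T> T call(IoSupplier<T> body) {
    try {
      return body.get();
    } catch (IOException bogus) {
      throw new RuntimeException(bogus);
    }
  }
}
```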






[GitHub] [lucene-solr] donnerpeter commented on a change in pull request #2416: LUCENE-9801: Hunspell suggestions: speed up expandWord by enumerating…

Posted by GitBox <gi...@apache.org>.
donnerpeter commented on a change in pull request #2416:
URL: https://github.com/apache/lucene-solr/pull/2416#discussion_r580527272



##########
File path: lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/GeneratingSuggester.java
##########
@@ -175,67 +175,85 @@ private static int calcThreshold(String word) {
     }
 
     // suffixes
-    processFST(
-        dictionary.suffixes,
-        (key, ids) -> {
-          String suffix = new StringBuilder(toString(key)).reverse().toString();
-          if (misspelled.length() <= suffix.length() || !misspelled.endsWith(suffix)) return;

Review comment:
       We used to traverse the whole FST just to rule out every key failing `endsWith`. FSTs can do better :)
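The trick the comment alludes to: the suffix FST stores its keys reversed, so an `endsWith` test over all keys becomes a prefix walk over the reversed word. A rough sketch under that assumption, with plain Java maps standing in for the FST (not the Lucene API):

```java
import java.util.*;

// Suffixes are stored reversed; finding all suffixes of `word` then means
// probing only the prefixes of the reversed word — at most word.length()
// probes, independent of how many suffixes are stored.
class ReversedSuffixIndex {
  private final Map<String, Integer> reversedKeys = new HashMap<>();

  void addSuffix(String suffix, int id) {
    reversedKeys.put(new StringBuilder(suffix).reverse().toString(), id);
  }

  List<Integer> suffixesOf(String word) {
    String reversed = new StringBuilder(word).reverse().toString();
    List<Integer> ids = new ArrayList<>();
    for (int len = 1; len <= reversed.length(); len++) {
      Integer id = reversedKeys.get(reversed.substring(0, len));
      if (id != null) ids.add(id);
    }
    return ids;
  }
}
```

For example, with suffixes `s`, `ing`, `ness` stored, the word `walking` is reversed to `gniklaw`, and only its prefixes are probed; the probe `gni` finds the reversed key for `ing`.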



