Posted to commits@lucenenet.apache.org by ni...@apache.org on 2021/03/30 15:05:27 UTC

[lucenenet] branch master updated (7f40e2f -> 299f014)

This is an automated email from the ASF dual-hosted git repository.

nightowl888 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/lucenenet.git.


    from 7f40e2f  README.md, index.md: Updated to 4.8.0-beta00014, fixed minor branding issues
     new 2947718  docs: lucene-cli: Fixed command formatting and warnings to use DocFx formatting
     new 8caf7b6  docs: migration-guide.md: Fixed formatting so code examples are inside of lists and lists continue after the code
     new 0d970b9  docs: Lucene.Net.Analysis.Common/Collation/TokeAttributes/package.md: Fixed broken link (see #300)
     new a0cec61  docs: Lucene.Net.Analysis (submodules): Fixed broken formatting and links (see #284, #300)
     new 1244190  docs: Lucene.Net.Expressions: Fixed broken formatting and links (see #284, #300)
     new 8c404e9  docs: Lucene.Net.Facet: Fixed broken formatting and links (see #284, #300)
     new 1132c37  docs: Lucene.Net.Grouping: Fixed broken formatting and links (see #284, #300)
     new 2e320ea  docs: Lucene.Net.Highlighter: Fixed broken formatting and links (see #284, #300)
     new 2e415cd  docs: Lucene.Net.Join: Fixed broken formatting and links (see #284, #300)
     new ba2e0ae  docs: Lucene.Net.Misc: Fixed broken formatting and links (see #284, #300)
     new d667a61  docs: Lucene.Net.QueryParser: Fixed broken formatting and links (see #284, #300)
     new 4a096b9  docs: Lucene.Net.Spatial: Fixed broken formatting and links (see #284, #300)
     new cc551b0  docs: Lucene.Net.TestFramework: Fixed broken formatting and links (see #284, #300)
     new b1c353c  docs: Lucene.Net/overview.md: Changed fenced code block to console style
     new 299f014  docs: websites/apidocs/index.md: Updated links to OpenNLP and Highlighter projects, commented TODO work

The 15 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../Collation/TokenAttributes/package.md           |   4 +-
 src/Lucene.Net.Analysis.Kuromoji/overview.md       |   8 +-
 src/Lucene.Net.Analysis.Morfologik/overview.md     |   8 +-
 src/Lucene.Net.Analysis.Phonetic/overview.md       |   8 +-
 src/Lucene.Net.Analysis.SmartCn/overview.md        |   6 +-
 src/Lucene.Net.Analysis.SmartCn/package.md         |   8 +-
 src/Lucene.Net.Expressions/JS/package.md           |   4 +-
 src/Lucene.Net.Expressions/overview.md             |   7 +-
 src/Lucene.Net.Expressions/package.md              |   6 +-
 src/Lucene.Net.Facet/SortedSet/package.md          |   4 +-
 src/Lucene.Net.Facet/Taxonomy/package.md           |  11 +-
 src/Lucene.Net.Facet/package.md                    |  12 +-
 src/Lucene.Net.Grouping/Function/package.md        |   2 +-
 src/Lucene.Net.Grouping/package.md                 | 132 +++---
 src/Lucene.Net.Highlighter/Highlight/package.md    | 103 +++--
 .../VectorHighlight/package.md                     |  62 +--
 src/Lucene.Net.Highlighter/overview.md             |  11 +-
 src/Lucene.Net.Join/package.md                     |  41 +-
 src/Lucene.Net.Misc/Index/Sorter/package.md        |   6 +-
 src/Lucene.Net.Misc/overview.md                    |   9 +-
 .../Surround/Parser/package.md                     |   7 +-
 .../Surround/Query/package.md                      |  11 +-
 src/Lucene.Net.Spatial/overview.md                 |  17 +-
 src/Lucene.Net.TestFramework/Analysis/package.md   |   8 +-
 .../Codecs/Lucene41Ords/package.md                 |   4 +-
 src/Lucene.Net.TestFramework/Index/package.md      |   7 +-
 src/Lucene.Net.TestFramework/Search/package.md     |   7 +-
 src/Lucene.Net.TestFramework/Util/package.md       |   4 +-
 src/Lucene.Net/migration-guide.md                  | 460 +++++++++------------
 src/Lucene.Net/overview.md                         |   2 +-
 .../docs/analysis/kuromoji-build-dictionary.md     |   8 +-
 .../docs/analysis/stempel-compile-stems.md         |   8 +-
 .../docs/analysis/stempel-patch-stems.md           |   8 +-
 .../lucene-cli/docs/benchmark/extract-reuters.md   |   8 +-
 .../lucene-cli/docs/benchmark/extract-wikipedia.md |   8 +-
 .../docs/benchmark/find-quality-queries.md         |   8 +-
 .../lucene-cli/docs/benchmark/run-trec-eval.md     |   4 +-
 src/dotnet/tools/lucene-cli/docs/benchmark/run.md  |   8 +-
 .../tools/lucene-cli/docs/benchmark/sample.md      |   8 +-
 .../lucene-cli/docs/demo/associations-facets.md    |   8 +-
 .../tools/lucene-cli/docs/demo/distance-facets.md  |   8 +-
 .../docs/demo/expression-aggregation-facets.md     |   8 +-
 .../tools/lucene-cli/docs/demo/index-files.md      |   8 +-
 .../docs/demo/multi-category-lists-facets.md       |   8 +-
 .../tools/lucene-cli/docs/demo/range-facets.md     |   8 +-
 .../tools/lucene-cli/docs/demo/search-files.md     |  12 +-
 .../tools/lucene-cli/docs/demo/simple-facets.md    |   8 +-
 .../docs/demo/simple-sorted-set-facets.md          |   8 +-
 src/dotnet/tools/lucene-cli/docs/index.md          |   9 +-
 src/dotnet/tools/lucene-cli/docs/index/check.md    |  12 +-
 .../tools/lucene-cli/docs/index/copy-segments.md   |   8 +-
 .../tools/lucene-cli/docs/index/delete-segments.md |  13 +-
 .../tools/lucene-cli/docs/index/extract-cfs.md     |  12 +-
 src/dotnet/tools/lucene-cli/docs/index/fix.md      |  11 +-
 src/dotnet/tools/lucene-cli/docs/index/index.md    |   3 +-
 src/dotnet/tools/lucene-cli/docs/index/list-cfs.md |   8 +-
 .../lucene-cli/docs/index/list-high-freq-terms.md  |  13 +-
 .../tools/lucene-cli/docs/index/list-segments.md   |   8 +-
 .../lucene-cli/docs/index/list-taxonomy-stats.md   |  11 +-
 .../tools/lucene-cli/docs/index/list-term-info.md  |   8 +-
 src/dotnet/tools/lucene-cli/docs/index/merge.md    |   8 +-
 src/dotnet/tools/lucene-cli/docs/index/split.md    |  15 +-
 src/dotnet/tools/lucene-cli/docs/index/upgrade.md  |  15 +-
 .../tools/lucene-cli/docs/lock/stress-test.md      |   8 +-
 .../tools/lucene-cli/docs/lock/verify-server.md    |   8 +-
 websites/apidocs/index.md                          |   6 +-
 66 files changed, 740 insertions(+), 559 deletions(-)

[lucenenet] 05/15: docs: Lucene.Net.Expressions: Fixed broken formatting and links (see #284, #300)

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nightowl888 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/lucenenet.git

commit 12441908765e53e4d976876fa456ef4eb9c5ef77
Author: Shad Storhaug <sh...@shadstorhaug.com>
AuthorDate: Tue Mar 30 19:40:32 2021 +0700

    docs: Lucene.Net.Expressions: Fixed broken formatting and links (see #284, #300)
---
 src/Lucene.Net.Expressions/JS/package.md | 4 ++--
 src/Lucene.Net.Expressions/overview.md   | 7 +++++--
 src/Lucene.Net.Expressions/package.md    | 6 +++---
 3 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/src/Lucene.Net.Expressions/JS/package.md b/src/Lucene.Net.Expressions/JS/package.md
index 3ce22e2..3e361cb 100644
--- a/src/Lucene.Net.Expressions/JS/package.md
+++ b/src/Lucene.Net.Expressions/JS/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Expressions.JS
 summary: *content
 ---
@@ -46,4 +46,4 @@ A Javascript expression is a numeric expression specified using an expression sy
 
  JavaScript order of precedence rules apply for operators. Shortcut evaluation is used for logical operators—the second argument is only evaluated if the value of the expression cannot be determined after evaluating the first argument. For example, in the expression `a || b`, `b` is only evaluated if a is not true. 
 
- To compile an expression, use <xref:Lucene.Net.Expressions.Js.JavascriptCompiler>. 
\ No newline at end of file
+ To compile an expression, use <xref:Lucene.Net.Expressions.JS.JavascriptCompiler>. 
\ No newline at end of file
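
The hunk above corrects the cross-reference to `JavascriptCompiler`. For context, a minimal sketch of what compiling such an expression looks like (this assumes the Lucene.NET 4.8 expressions API and a made-up `popularity` field; it is not part of the commit):

```cs
using Lucene.Net.Expressions;
using Lucene.Net.Expressions.JS;

// Compile a numeric JavaScript expression. _score is the relevance score;
// popularity is an external variable resolved later through Bindings.
Expression expr = JavascriptCompiler.Compile("sqrt(_score) + ln(popularity)");
```
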
diff --git a/src/Lucene.Net.Expressions/overview.md b/src/Lucene.Net.Expressions/overview.md
index 77ce8cf..4cf4320 100644
--- a/src/Lucene.Net.Expressions/overview.md
+++ b/src/Lucene.Net.Expressions/overview.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Expressions
 summary: *content
 ---
@@ -24,6 +24,9 @@ summary: *content
 
  The expressions module is new to Lucene 4.6. It provides an API for dynamically computing per-document values based on string expressions. 
 
- The module is organized in two sections: 1. <xref:Lucene.Net.Expressions> - The abstractions and simple utilities for common operations like sorting on an expression 2. <xref:Lucene.Net.Expressions.Js> - A compiler for a subset of JavaScript expressions 
+ The module is organized in two sections:
+
+1. <xref:Lucene.Net.Expressions> - The abstractions and simple utilities for common operations like sorting on an expression
+2. <xref:Lucene.Net.Expressions.JS> - A compiler for a subset of JavaScript expressions 
 
  For sample code showing how to use the API, see <xref:Lucene.Net.Expressions.Expression>. 
\ No newline at end of file
diff --git a/src/Lucene.Net.Expressions/package.md b/src/Lucene.Net.Expressions/package.md
index c4c9646..593ffd0 100644
--- a/src/Lucene.Net.Expressions/package.md
+++ b/src/Lucene.Net.Expressions/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Expressions
 summary: *content
 ---
@@ -22,8 +22,8 @@ summary: *content
 
 # expressions
 
- <xref:Lucene.Net.Expressions.Expression> - result of compiling an expression, which can evaluate it for a given document. Each expression can have external variables are resolved by {@code Bindings}. 
+ <xref:Lucene.Net.Expressions.Expression> - result of compiling an expression, which can evaluate it for a given document. Each expression can have external variables are resolved by <xref:Lucene.Net.Expressions.Bindings>. 
 
  <xref:Lucene.Net.Expressions.Bindings> - abstraction for binding external variables to a way to get a value for those variables for a particular document (ValueSource). 
 
- <xref:Lucene.Net.Expressions.SimpleBindings> - default implementation of bindings which provide easy ways to bind sort fields and other expressions to external variables 
\ No newline at end of file
+ <xref:Lucene.Net.Expressions.SimpleBindings> - default implementation of bindings which provide easy ways to bind sort fields and other expressions to external variables.
\ No newline at end of file
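
Taken together, the three types described in the hunk above compose as follows. This is a hedged sketch (assuming the Lucene.NET 4.8 API and a hypothetical numeric `popularity` field), not code from the commit:

```cs
using Lucene.Net.Expressions;
using Lucene.Net.Expressions.JS;
using Lucene.Net.Search;

// Expression: the compiled form of the string expression.
Expression expr = JavascriptCompiler.Compile("_score * ln(popularity)");

// SimpleBindings: the default Bindings implementation, mapping each
// external variable to a per-document value source.
SimpleBindings bindings = new SimpleBindings();
bindings.Add(new SortField("_score", SortFieldType.SCORE));      // relevance score
bindings.Add(new SortField("popularity", SortFieldType.INT64));  // numeric field

// Sort search results by the dynamically computed value, descending,
// e.g. searcher.Search(query, 10, sort) would rank hits by the expression.
Sort sort = new Sort(expr.GetSortField(bindings, reverse: true));
```
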

[lucenenet] 01/15: docs: lucene-cli: Fixed command formatting and warnings to use DocFx formatting

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nightowl888 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/lucenenet.git

commit 294771827fbb7770c1d936fdc1231fcd49ea97bc
Author: Shad Storhaug <sh...@shadstorhaug.com>
AuthorDate: Tue Mar 30 19:15:45 2021 +0700

    docs: lucene-cli: Fixed command formatting and warnings to use DocFx formatting
---
 .../lucene-cli/docs/analysis/kuromoji-build-dictionary.md |  8 ++++++--
 .../lucene-cli/docs/analysis/stempel-compile-stems.md     |  8 ++++++--
 .../tools/lucene-cli/docs/analysis/stempel-patch-stems.md |  8 ++++++--
 .../tools/lucene-cli/docs/benchmark/extract-reuters.md    |  8 ++++++--
 .../tools/lucene-cli/docs/benchmark/extract-wikipedia.md  |  8 ++++++--
 .../lucene-cli/docs/benchmark/find-quality-queries.md     |  8 ++++++--
 .../tools/lucene-cli/docs/benchmark/run-trec-eval.md      |  4 +++-
 src/dotnet/tools/lucene-cli/docs/benchmark/run.md         |  8 ++++++--
 src/dotnet/tools/lucene-cli/docs/benchmark/sample.md      |  8 ++++++--
 .../tools/lucene-cli/docs/demo/associations-facets.md     |  8 ++++++--
 src/dotnet/tools/lucene-cli/docs/demo/distance-facets.md  |  8 ++++++--
 .../lucene-cli/docs/demo/expression-aggregation-facets.md |  8 ++++++--
 src/dotnet/tools/lucene-cli/docs/demo/index-files.md      |  8 +++++---
 .../lucene-cli/docs/demo/multi-category-lists-facets.md   |  8 ++++++--
 src/dotnet/tools/lucene-cli/docs/demo/range-facets.md     |  8 ++++++--
 src/dotnet/tools/lucene-cli/docs/demo/search-files.md     | 12 ++++++++----
 src/dotnet/tools/lucene-cli/docs/demo/simple-facets.md    |  8 ++++++--
 .../lucene-cli/docs/demo/simple-sorted-set-facets.md      |  8 ++++++--
 src/dotnet/tools/lucene-cli/docs/index.md                 |  9 ++++++---
 src/dotnet/tools/lucene-cli/docs/index/check.md           | 12 +++++++++---
 src/dotnet/tools/lucene-cli/docs/index/copy-segments.md   |  8 ++++++--
 src/dotnet/tools/lucene-cli/docs/index/delete-segments.md | 13 ++++++++++---
 src/dotnet/tools/lucene-cli/docs/index/extract-cfs.md     | 12 +++++++++---
 src/dotnet/tools/lucene-cli/docs/index/fix.md             | 11 ++++++++---
 src/dotnet/tools/lucene-cli/docs/index/index.md           |  3 ++-
 src/dotnet/tools/lucene-cli/docs/index/list-cfs.md        |  8 ++++++--
 .../tools/lucene-cli/docs/index/list-high-freq-terms.md   | 13 +++++++++----
 src/dotnet/tools/lucene-cli/docs/index/list-segments.md   |  8 ++++++--
 .../tools/lucene-cli/docs/index/list-taxonomy-stats.md    | 11 ++++++++---
 src/dotnet/tools/lucene-cli/docs/index/list-term-info.md  |  8 ++++++--
 src/dotnet/tools/lucene-cli/docs/index/merge.md           |  8 ++++++--
 src/dotnet/tools/lucene-cli/docs/index/split.md           | 15 +++++++++++----
 src/dotnet/tools/lucene-cli/docs/index/upgrade.md         | 15 +++++++++++----
 src/dotnet/tools/lucene-cli/docs/lock/stress-test.md      |  8 ++++++--
 src/dotnet/tools/lucene-cli/docs/lock/verify-server.md    |  8 ++++++--
 35 files changed, 231 insertions(+), 83 deletions(-)

diff --git a/src/dotnet/tools/lucene-cli/docs/analysis/kuromoji-build-dictionary.md b/src/dotnet/tools/lucene-cli/docs/analysis/kuromoji-build-dictionary.md
index 6fa08fe..7dfc989 100644
--- a/src/dotnet/tools/lucene-cli/docs/analysis/kuromoji-build-dictionary.md
+++ b/src/dotnet/tools/lucene-cli/docs/analysis/kuromoji-build-dictionary.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene analysis kuromoji-build-dictionary \<FORMAT> \<INPUT_DIRECTORY> \<OUTPUT_DIRECTORY> [-e|--encoding] [-n|--normalize] [?|-h|--help]</code>
+```console
+lucene analysis kuromoji-build-dictionary <FORMAT> <INPUT_DIRECTORY> <OUTPUT_DIRECTORY> [-e|--encoding] [-n|--normalize] [?|-h|--help]
+```
 
 ### Description
 
@@ -56,5 +58,7 @@ Normalize the entries using normalization form KC.
 
 ### Example
 
-<code>lucene analysis kuromoji-build-dictionary IPADIC X:\kuromoji-data X:\kuromoji-dictionary --normalize</code>
+```console
+lucene analysis kuromoji-build-dictionary IPADIC X:\kuromoji-data X:\kuromoji-dictionary --normalize
+```
 
diff --git a/src/dotnet/tools/lucene-cli/docs/analysis/stempel-compile-stems.md b/src/dotnet/tools/lucene-cli/docs/analysis/stempel-compile-stems.md
index 01ccdf6..9138e63 100644
--- a/src/dotnet/tools/lucene-cli/docs/analysis/stempel-compile-stems.md
+++ b/src/dotnet/tools/lucene-cli/docs/analysis/stempel-compile-stems.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene analysis stempel-compile-stems \<STEMMING_ALGORITHM> \<STEMMER_TABLE_FILE> [-e|--encoding] [?|-h|--help]</code>
+```console
+lucene analysis stempel-compile-stems <STEMMING_ALGORITHM> <STEMMER_TABLE_FILE> [-e|--encoding] [?|-h|--help]
+```
 
 ### Description
 
@@ -34,4 +36,6 @@ The file encoding used by the stemmer files. If not supplied, the default value
 
 ### Example
 
-<code>lucene analysis stempel-compile-stems test X:\stemmer-data\table1.txt X:\stemmer-data\table2.txt</code>
+```console
+lucene analysis stempel-compile-stems test X:\stemmer-data\table1.txt X:\stemmer-data\table2.txt
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/analysis/stempel-patch-stems.md b/src/dotnet/tools/lucene-cli/docs/analysis/stempel-patch-stems.md
index 18cff3a..0d5e38f 100644
--- a/src/dotnet/tools/lucene-cli/docs/analysis/stempel-patch-stems.md
+++ b/src/dotnet/tools/lucene-cli/docs/analysis/stempel-patch-stems.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene analysis stempel-patch-stems \<STEMMER_TABLE_FILE> [-e|--encoding] [?|-h|--help]</code>
+```console
+lucene analysis stempel-patch-stems <STEMMER_TABLE_FILE> [-e|--encoding] [?|-h|--help]
+```
 
 ### Description
 
@@ -30,5 +32,7 @@ The file encoding used by the stemmer files. If not supplied, the default value
 
 ### Example
 
-<code>lucene analysis stempel-patch-stems X:\stemmer-data\table1.txt X:\stemmer-data\table2.txt --encoding UTF-16</code>
+```console
+lucene analysis stempel-patch-stems X:\stemmer-data\table1.txt X:\stemmer-data\table2.txt --encoding UTF-16
+```
 
diff --git a/src/dotnet/tools/lucene-cli/docs/benchmark/extract-reuters.md b/src/dotnet/tools/lucene-cli/docs/benchmark/extract-reuters.md
index 892b5d0..a534dd5 100644
--- a/src/dotnet/tools/lucene-cli/docs/benchmark/extract-reuters.md
+++ b/src/dotnet/tools/lucene-cli/docs/benchmark/extract-reuters.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene benchmark extract-reuters \<INPUT_DIRECTORY> \<OUTPUT_DIRECTORY> [?|-h|--help]</code>
+```console
+lucene benchmark extract-reuters <INPUT_DIRECTORY> <OUTPUT_DIRECTORY> [?|-h|--help]
+```
 
 ### Arguments
 
@@ -28,4 +30,6 @@ Prints out a short help for the command.
 
 Extracts the reuters SGML files in the `z:\input` directory and places the content in the `z:\output` directory.
 
-<code>lucene benchmark extract-reuters z:\input z:\output</code>
+```console
+lucene benchmark extract-reuters z:\input z:\output
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/benchmark/extract-wikipedia.md b/src/dotnet/tools/lucene-cli/docs/benchmark/extract-wikipedia.md
index ccb27d2..310e0f7 100644
--- a/src/dotnet/tools/lucene-cli/docs/benchmark/extract-wikipedia.md
+++ b/src/dotnet/tools/lucene-cli/docs/benchmark/extract-wikipedia.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene benchmark extract-wikipedia \<INPUT_WIKIPEDIA_FILE> \<OUTPUT_DIRECTORY> [-d|--discard-image-only-docs] [?|-h|--help]</code>
+```console
+lucene benchmark extract-wikipedia <INPUT_WIKIPEDIA_FILE> <OUTPUT_DIRECTORY> [-d|--discard-image-only-docs] [?|-h|--help]
+```
 
 ### Arguments
 
@@ -32,4 +34,6 @@ Tells the extractor to skip WIKI docs that contain only images.
 
 Extracts the `c:\wiki.xml` file into the `c:\out` directory, skipping any docs that only contain images.
 
-<code>lucene benchmark extract-wikipedia c:\wiki.xml c:\out -d</code>
+```console
+lucene benchmark extract-wikipedia c:\wiki.xml c:\out -d
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/benchmark/find-quality-queries.md b/src/dotnet/tools/lucene-cli/docs/benchmark/find-quality-queries.md
index f1d4539..2709970 100644
--- a/src/dotnet/tools/lucene-cli/docs/benchmark/find-quality-queries.md
+++ b/src/dotnet/tools/lucene-cli/docs/benchmark/find-quality-queries.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene benchmark find-quality-queries \<INPUT_DIRECTORY> [?|-h|--help]</code>
+```console
+lucene benchmark find-quality-queries <INPUT_DIRECTORY> [?|-h|--help]
+```
 
 ### Arguments
 
@@ -24,4 +26,6 @@ Prints out a short help for the command.
 
 Finds quality queries on the `c:\lucene-index` index directory.
 
-<code>lucene benchmark find-quality-queries c:\lucene-index</code>
+```console
+lucene benchmark find-quality-queries c:\lucene-index
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/benchmark/run-trec-eval.md b/src/dotnet/tools/lucene-cli/docs/benchmark/run-trec-eval.md
index 0a22539..4f5fb64 100644
--- a/src/dotnet/tools/lucene-cli/docs/benchmark/run-trec-eval.md
+++ b/src/dotnet/tools/lucene-cli/docs/benchmark/run-trec-eval.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene benchmark run-trec-eval \<INPUT_TOPICS_FILE> \<INPUT_QUERY_RELEVANCE_FILE> \<OUTPUT_SUBMISSION_FILE> \<INDEX_DIRECTORY> [-t|--query-on-title] [-d|--query-on-description] [-n|--query-on-narrative] [?|-h|--help]</code>
+```console
+lucene benchmark run-trec-eval <INPUT_TOPICS_FILE> <INPUT_QUERY_RELEVANCE_FILE> <OUTPUT_SUBMISSION_FILE> <INDEX_DIRECTORY> [-t|--query-on-title] [-d|--query-on-description] [-n|--query-on-narrative] [?|-h|--help]
+```
 
 ### Arguments
 
diff --git a/src/dotnet/tools/lucene-cli/docs/benchmark/run.md b/src/dotnet/tools/lucene-cli/docs/benchmark/run.md
index c20cb85..d48ae4b 100644
--- a/src/dotnet/tools/lucene-cli/docs/benchmark/run.md
+++ b/src/dotnet/tools/lucene-cli/docs/benchmark/run.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene benchmark run \<ALGORITHM_FILE> \<OUTPUT_DIRECTORY> [?|-h|--help]</code>
+```console
+lucene benchmark run <ALGORITHM_FILE> <OUTPUT_DIRECTORY> [?|-h|--help]
+```
 
 ### Arguments
 
@@ -28,4 +30,6 @@ Prints out a short help for the command.
 
 Runs a benchmark on the `c:\check.alg` algorithm file.
 
-<code>lucene benchmark run c:\check.alg</code>
+```console
+lucene benchmark run c:\check.alg
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/benchmark/sample.md b/src/dotnet/tools/lucene-cli/docs/benchmark/sample.md
index e715ee4..245ef62 100644
--- a/src/dotnet/tools/lucene-cli/docs/benchmark/sample.md
+++ b/src/dotnet/tools/lucene-cli/docs/benchmark/sample.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene benchmark sample [-src|--view-source-code] [-out|--output-source-code]  [?|-h|--help]</code>
+```console
+lucene benchmark sample [-src|--view-source-code] [-out|--output-source-code]  [?|-h|--help]
+```
 
 ### Options
 
@@ -26,4 +28,6 @@ Outputs the source code to the specified directory.
 
 Runs the sample.
 
-<code>lucene benchmark sample</code>
+```console
+lucene benchmark sample
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/demo/associations-facets.md b/src/dotnet/tools/lucene-cli/docs/demo/associations-facets.md
index 1d5f9ce..282dbe8 100644
--- a/src/dotnet/tools/lucene-cli/docs/demo/associations-facets.md
+++ b/src/dotnet/tools/lucene-cli/docs/demo/associations-facets.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene demo associations-facets [-src|--view-source-code] [-out|--output-source-code] [?|-h|--help]</code>
+```console
+lucene demo associations-facets [-src|--view-source-code] [-out|--output-source-code] [?|-h|--help]
+```
 
 ### Options
 
@@ -24,4 +26,6 @@ Outputs the source code to the specified directory.
 
 ### Example
 
-<code>lucene demo associations-facets</code>
+```console
+lucene demo associations-facets
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/demo/distance-facets.md b/src/dotnet/tools/lucene-cli/docs/demo/distance-facets.md
index 4631ea5..bafc515 100644
--- a/src/dotnet/tools/lucene-cli/docs/demo/distance-facets.md
+++ b/src/dotnet/tools/lucene-cli/docs/demo/distance-facets.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene demo distance-facets [-src|--view-source-code] [-out|--output-source-code] [?|-h|--help]</code>
+```console
+lucene demo distance-facets [-src|--view-source-code] [-out|--output-source-code] [?|-h|--help]
+```
 
 ### Options
 
@@ -24,4 +26,6 @@ Outputs the source code to the specified directory.
 
 ### Example
 
-<code>lucene demo distance-facets</code>
+```console
+lucene demo distance-facets
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/demo/expression-aggregation-facets.md b/src/dotnet/tools/lucene-cli/docs/demo/expression-aggregation-facets.md
index 8dd0574..a101ed0 100644
--- a/src/dotnet/tools/lucene-cli/docs/demo/expression-aggregation-facets.md
+++ b/src/dotnet/tools/lucene-cli/docs/demo/expression-aggregation-facets.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene demo expression-aggregation-facets [-src|--view-source-code] [-out|--output-source-code] [?|-h|--help]</code>
+```console
+lucene demo expression-aggregation-facets [-src|--view-source-code] [-out|--output-source-code] [?|-h|--help]
+```
 
 ### Options
 
@@ -24,4 +26,6 @@ Outputs the source code to the specified directory.
 
 ### Example
 
-<code>lucene demo expression-aggregation-facets</code>
+```console
+lucene demo expression-aggregation-facets
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/demo/index-files.md b/src/dotnet/tools/lucene-cli/docs/demo/index-files.md
index a4ceadb..a02484c 100644
--- a/src/dotnet/tools/lucene-cli/docs/demo/index-files.md
+++ b/src/dotnet/tools/lucene-cli/docs/demo/index-files.md
@@ -6,8 +6,8 @@
 
 ### Synopsis
 
-```
-lucene demo index-files \<INDEX_DIRECTORY> \<SOURCE_DIRECTORY> [-u|--update] [?|-h|--help]
+```console
+lucene demo index-files <INDEX_DIRECTORY> <SOURCE_DIRECTORY> [-u|--update] [?|-h|--help]
 lucene demo index-files [-src|--view-source-code] [-out|--output-source-code]
 ```
 
@@ -47,5 +47,7 @@ Outputs the source code to the specified directory.
 
 Indexes the contents of `C:\Users\BGates\Documents\` and places the Lucene.Net index in `X:\test-index\`.
 
-<code>lucene demo index-files X:\test-index C:\Users\BGates\Documents</code>
+```console
+lucene demo index-files X:\test-index C:\Users\BGates\Documents
+```
 
diff --git a/src/dotnet/tools/lucene-cli/docs/demo/multi-category-lists-facets.md b/src/dotnet/tools/lucene-cli/docs/demo/multi-category-lists-facets.md
index 299df60..299f7f9 100644
--- a/src/dotnet/tools/lucene-cli/docs/demo/multi-category-lists-facets.md
+++ b/src/dotnet/tools/lucene-cli/docs/demo/multi-category-lists-facets.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene demo multi-category-lists-facets [-src|--view-source-code] [-out|--output-source-code] [?|-h|--help]</code>
+```console
+lucene demo multi-category-lists-facets [-src|--view-source-code] [-out|--output-source-code] [?|-h|--help]
+```
 
 ### Options
 
@@ -25,4 +27,6 @@ Outputs the source code to the specified directory.
 
 ### Example
 
-<code>lucene demo multi-category-lists-facets</code>
+```console
+lucene demo multi-category-lists-facets
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/demo/range-facets.md b/src/dotnet/tools/lucene-cli/docs/demo/range-facets.md
index f9b6121..e4fa1df 100644
--- a/src/dotnet/tools/lucene-cli/docs/demo/range-facets.md
+++ b/src/dotnet/tools/lucene-cli/docs/demo/range-facets.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene demo range-facets [-src|--view-source-code] [-out|--output-source-code] [?|-h|--help]</code>
+```console
+lucene demo range-facets [-src|--view-source-code] [-out|--output-source-code] [?|-h|--help]
+```
 
 ### Options
 
@@ -24,4 +26,6 @@ Outputs the source code to the specified directory.
 
 ### Example
 
-<code>lucene demo range-facets</code>
+```console
+lucene demo range-facets
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/demo/search-files.md b/src/dotnet/tools/lucene-cli/docs/demo/search-files.md
index cbbeec1..a217ec8 100644
--- a/src/dotnet/tools/lucene-cli/docs/demo/search-files.md
+++ b/src/dotnet/tools/lucene-cli/docs/demo/search-files.md
@@ -6,8 +6,8 @@
 
 ### Synopsis
 
-```
-lucene demo search-files \<INDEX_DIRECTORY> [-f|--field] [-r|--repeat] [-qf|--queries-file] [-q|--query] [--raw] [-p|--page-size] [?|-h|--help]
+```console
+lucene demo search-files <INDEX_DIRECTORY> [-f|--field] [-r|--repeat] [-qf|--queries-file] [-q|--query] [--raw] [-p|--page-size] [?|-h|--help]
 lucene demo search-files [-src|--view-source-code] [-out|--output-source-code]
 ```
 
@@ -65,8 +65,12 @@ Outputs the source code to the specified directory.
 
 Search the index located in the `X:\test-index` directory interactively, showing 15 results per page in raw format:
 
-<code>lucene demo search-files X:\test-index -p 15 --raw</code>
+```console
+lucene demo search-files X:\test-index -p 15 --raw
+```
 
 Run the query "foobar" against the "path" field in the index located in the `X:\test-index` directory:
 
-<code>lucene demo search-files X:\test-index --field path --query foobar</code>
+```console
+lucene demo search-files X:\test-index --field path --query foobar
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/demo/simple-facets.md b/src/dotnet/tools/lucene-cli/docs/demo/simple-facets.md
index e93cc78..6ee2eb7 100644
--- a/src/dotnet/tools/lucene-cli/docs/demo/simple-facets.md
+++ b/src/dotnet/tools/lucene-cli/docs/demo/simple-facets.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-</code>lucene demo simple-facets [-src|--view-source-code] [-out|--output-source-code] [?|-h|--help]</code>
+```console
+lucene demo simple-facets [-src|--view-source-code] [-out|--output-source-code] [?|-h|--help]
+```
 
 ### Options
 
@@ -24,4 +26,6 @@ Outputs the source code to the specified directory.
 
 ### Example
 
-<code>lucene demo simple-facets</code>
+```console
+lucene demo simple-facets
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/demo/simple-sorted-set-facets.md b/src/dotnet/tools/lucene-cli/docs/demo/simple-sorted-set-facets.md
index a92c9b9..fd25085 100644
--- a/src/dotnet/tools/lucene-cli/docs/demo/simple-sorted-set-facets.md
+++ b/src/dotnet/tools/lucene-cli/docs/demo/simple-sorted-set-facets.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene demo simple-sorted-set-facets [-src|--view-source-code] [-out|--output-source-code] [?|-h|--help]</code>
+```console
+lucene demo simple-sorted-set-facets [-src|--view-source-code] [-out|--output-source-code] [?|-h|--help]
+```
 
 ### Options
 
@@ -24,6 +26,8 @@ Outputs the source code to the specified directory.
 
 ### Example
 
-<code>lucene demo simple-sorted-set-facets</code>
+```console
+lucene demo simple-sorted-set-facets
+```
 
 
diff --git a/src/dotnet/tools/lucene-cli/docs/index.md b/src/dotnet/tools/lucene-cli/docs/index.md
index fad693d..55aedf9 100644
--- a/src/dotnet/tools/lucene-cli/docs/index.md
+++ b/src/dotnet/tools/lucene-cli/docs/index.md
@@ -10,11 +10,12 @@ The Lucene.NET command line interface (CLI) is a new cross-platform toolchain wi
 
 Perform a one-time install of the lucene-cli tool using the following dotnet CLI command:
 
-```
+```console
 dotnet tool install lucene-cli -g --version 4.8.0-beta00014
 ```
 
-> NOTE: The version of the CLI you install should match the version of Lucene.NET you use.
+> [!NOTE]
+> The version of the CLI you install should match the version of Lucene.NET you use.
 
 You may then use the lucene-cli tool to analyze and update Lucene.NET indexes and use its demos.
 
@@ -31,7 +32,9 @@ The following commands are installed:
 
 CLI command structure consists of the driver ("lucene"), the command, and possibly command arguments and options. You see this pattern in most CLI operations, such as checking a Lucene.NET index for problematic segments and fixing (removing) them:
 
-```
+```console
 lucene index check C:\my-index --verbose
 lucene index fix C:\my-index
 ```
+
+
diff --git a/src/dotnet/tools/lucene-cli/docs/index/check.md b/src/dotnet/tools/lucene-cli/docs/index/check.md
index aa35e04..b28cf14 100644
--- a/src/dotnet/tools/lucene-cli/docs/index/check.md
+++ b/src/dotnet/tools/lucene-cli/docs/index/check.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene index check [\<INDEX_DIRECTORY>] [-v|--verbose] [-c|--cross-check-term-vectors] [-dir|--directory-type] [-s|--segment] [?|-h|--help]</code>
+```console
+lucene index check [<INDEX_DIRECTORY>] [-v|--verbose] [-c|--cross-check-term-vectors] [-dir|--directory-type] [-s|--segment] [?|-h|--help]
+```
 
 ### Description
 
@@ -46,10 +48,14 @@ Only check the specified segment(s). This can be specified multiple times, to ch
 
 Check the index located at `X:\lucenenet-index\` verbosely, scanning only the segments named `_1j_Lucene41_0` and `_2u_Lucene41_0` for problems:
 
-<code>lucene index check X:\lucenenet-index -v -s _1j_Lucene41_0 -s _2u_Lucene41_0</code>
+```console
+lucene index check X:\lucenenet-index -v -s _1j_Lucene41_0 -s _2u_Lucene41_0
+```
 
 
 Check the index located at `C:\taxonomy\` using the `MMapDirectory` memory-mapped directory implementation:
 
-<code>lucene index check C:\taxonomy --directory-type MMapDirectory</code>
+```console
+lucene index check C:\taxonomy --directory-type MMapDirectory
+```
 
diff --git a/src/dotnet/tools/lucene-cli/docs/index/copy-segments.md b/src/dotnet/tools/lucene-cli/docs/index/copy-segments.md
index d4d1aa0..618f9cf 100644
--- a/src/dotnet/tools/lucene-cli/docs/index/copy-segments.md
+++ b/src/dotnet/tools/lucene-cli/docs/index/copy-segments.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene index copy-segments \<INPUT_DIRECTORY> \<OUTPUT_DIRECTORY> \<SEGMENT>[ \<SEGMENT_2>...] [?|-h|--help]</code>
+```console
+lucene index copy-segments <INPUT_DIRECTORY> <OUTPUT_DIRECTORY> <SEGMENT>[ <SEGMENT_2>...] [?|-h|--help]
+```
 
 ### Description
 
@@ -36,5 +38,7 @@ Prints out a short help for the command.
 
 Copy the `_71_Lucene41_0` segment from the index located at `X:\lucene-index` to the index located at `X:\output`:
 
-<code>lucene index copy-segments X:\lucene-index X:\output _71_Lucene41_0</code>
+```console
+lucene index copy-segments X:\lucene-index X:\output _71_Lucene41_0
+```
 
diff --git a/src/dotnet/tools/lucene-cli/docs/index/delete-segments.md b/src/dotnet/tools/lucene-cli/docs/index/delete-segments.md
index c60c228..5eccbfd 100644
--- a/src/dotnet/tools/lucene-cli/docs/index/delete-segments.md
+++ b/src/dotnet/tools/lucene-cli/docs/index/delete-segments.md
@@ -6,11 +6,16 @@
 
 ### Synopsis
 
-<code>lucene index delete-segments \<INDEX_DIRECTORY> \<SEGMENT>[ \<SEGMENT_2>...] [?|-h|--help]</code>
+```console
+lucene index delete-segments <INDEX_DIRECTORY> <SEGMENT>[ <SEGMENT_2>...] [?|-h|--help]
+```
 
 ### Description
 
-You can easily accidentally remove segments from your index, so be careful! Always make a backup of your index first.
+Deletes segments from an index.
+
+> [!WARNING]
+> You can easily accidentally remove segments from your index, so be careful! Always make a backup of your index first.
 
 ### Arguments
 
@@ -32,4 +37,6 @@ Prints out a short help for the command.
 
 Delete the segments named `_8c` and `_83` from the index located at `X:\category-data\`:
 
-<code>lucene index delete-segments X:\category-data _8c _83</code>
+```console
+lucene index delete-segments X:\category-data _8c _83
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/index/extract-cfs.md b/src/dotnet/tools/lucene-cli/docs/index/extract-cfs.md
index 96c40e8..5d2f1c4 100644
--- a/src/dotnet/tools/lucene-cli/docs/index/extract-cfs.md
+++ b/src/dotnet/tools/lucene-cli/docs/index/extract-cfs.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene index extract-cfs \<CFS_FILE_NAME> [-dir|--directory-type] [?|-h|--help]</code>
+```console
+lucene index extract-cfs <CFS_FILE_NAME> [-dir|--directory-type] [?|-h|--help]
+```
 
 ### Description
 
@@ -34,9 +36,13 @@ The FSDirectory implementation to use. If ommitted, it defaults to the optimal F
 
 Extract the files from the compound file at `X:\lucene-index\_81.cfs` to the current working directory:
 
-<code>lucene index extract-cfs X:\lucene-index\_81.cfs</code>
+```console
+lucene index extract-cfs X:\lucene-index\_81.cfs
+```
 
 
 Extract the files from the compound file at `X:\lucene-index\_64.cfs` to the current working directory using the `SimpleFSDirectory` implementation:
 
-<code>lucene index extract-cfs X:\lucene-index\_64.cfs --directory-type SimpleFSDirectory</code>
+```console
+lucene index extract-cfs X:\lucene-index\_64.cfs --directory-type SimpleFSDirectory
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/index/fix.md b/src/dotnet/tools/lucene-cli/docs/index/fix.md
index e352b8c..63ca2c8 100644
--- a/src/dotnet/tools/lucene-cli/docs/index/fix.md
+++ b/src/dotnet/tools/lucene-cli/docs/index/fix.md
@@ -6,13 +6,16 @@
 
 ### Synopsis
 
-<code>lucene index fix [\<INDEX_DIRECTORY>] [-v|--verbose] [-c|--cross-check-term-vectors] [-dir|--directory-type] [--dry-run] [?|-h|--help]</code>
+```console
+lucene index fix [<INDEX_DIRECTORY>] [-v|--verbose] [-c|--cross-check-term-vectors] [-dir|--directory-type] [--dry-run] [?|-h|--help]
+```
 
 ### Description
 
 Basic tool to write a new segments file that removes reference to problematic segments. As this tool checks every byte in the index, on a large index it can take quite a long time to run.
 
-> **WARNING:** This command should only be used on an emergency basis as it will cause documents (perhaps many) to be permanently removed from the index. Always make a backup copy of your index before running this! Do not run this tool on an index that is actively being written to. You have been warned!
+> [!WARNING] 
+> This command should only be used on an emergency basis as it will cause documents (perhaps many) to be permanently removed from the index. Always make a backup copy of your index before running this! Do not run this tool on an index that is actively being written to. You have been warned!
 
 ### Arguments
 
@@ -51,4 +54,6 @@ Check what a fix operation would do if run on the index located at `X:\product-i
 
 Fix the index located at `X:\product-index` and cross check term vectors:
 
-<code>lucene index fix X:\product-index -c</code>
+```console
+lucene index fix X:\product-index -c
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/index/index.md b/src/dotnet/tools/lucene-cli/docs/index/index.md
index 43f323e..bfaf7ba 100644
--- a/src/dotnet/tools/lucene-cli/docs/index/index.md
+++ b/src/dotnet/tools/lucene-cli/docs/index/index.md
@@ -4,7 +4,8 @@
 
 Utilities to manage specialized analyzers.
 
-> **WARNING:** Many of these operations change an index in ways that cannot be reversed. Always make a backup of your index before running these commands. 
+> [!WARNING]
+> Many of these operations change an index in ways that cannot be reversed. Always make a backup of your index before running these commands. 
 
 ## Commands
 
diff --git a/src/dotnet/tools/lucene-cli/docs/index/list-cfs.md b/src/dotnet/tools/lucene-cli/docs/index/list-cfs.md
index 55d92a4..5bbddde 100644
--- a/src/dotnet/tools/lucene-cli/docs/index/list-cfs.md
+++ b/src/dotnet/tools/lucene-cli/docs/index/list-cfs.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene index list-cfs \<CFS_FILE_NAME> [-dir|--directory-type] [?|-h|--help]</code>
+```console
+lucene index list-cfs <CFS_FILE_NAME> [-dir|--directory-type] [?|-h|--help]
+```
 
 ### Description
 
@@ -32,5 +34,7 @@ The `FSDirectory` implementation to use. If omitted, defaults to the optimal `FS
 
 Lists the files within the `X:\categories\_53.cfs` compound file using the `NIOFSDirectory` directory implementation:
 
-<code>lucene index list-cfs X:\categories\_53.cfs -dir NIOFSDirectory</code>
+```console
+lucene index list-cfs X:\categories\_53.cfs -dir NIOFSDirectory
+```
 
diff --git a/src/dotnet/tools/lucene-cli/docs/index/list-high-freq-terms.md b/src/dotnet/tools/lucene-cli/docs/index/list-high-freq-terms.md
index 4e69d2c..424d8d5 100644
--- a/src/dotnet/tools/lucene-cli/docs/index/list-high-freq-terms.md
+++ b/src/dotnet/tools/lucene-cli/docs/index/list-high-freq-terms.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene index list-high-freq-terms [\<INDEX_DIRECTORY>] [-t|--total-term-frequency] [-n|--number-of-terms] [-f|--field] [?|-h|--help]</code>
+```console
+lucene index list-high-freq-terms [<INDEX_DIRECTORY>] [-t|--total-term-frequency] [-n|--number-of-terms] [-f|--field] [?|-h|--help]
+```
 
 ### Description
 
@@ -41,9 +43,12 @@ The field to consider. If omitted, considers all fields.
 
 List the high frequency terms in the index located at `F:\product-index\` on the `description` field, reporting both document frequency and term frequency:
 
-<code>lucene index list-high-freq-terms F:\product-index --total-term-frequency --field description</code>
-
+```console
+lucene index list-high-freq-terms F:\product-index --total-term-frequency --field description
+```
 
 List the high frequency terms in the index located at `C:\lucene-index\` on the `name` field, tracking 30 terms:
 
-<code>lucene index list-high-freq-terms C:\lucene-index --f name -n 30</code>
+```console
+lucene index list-high-freq-terms C:\lucene-index --f name -n 30
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/index/list-segments.md b/src/dotnet/tools/lucene-cli/docs/index/list-segments.md
index a1278f3..194b294 100644
--- a/src/dotnet/tools/lucene-cli/docs/index/list-segments.md
+++ b/src/dotnet/tools/lucene-cli/docs/index/list-segments.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene index list-segments [\<INDEX_DIRECTORY>] [?|-h|--help]</code>
+```console
+lucene index list-segments [\<INDEX_DIRECTORY>] [?|-h|--help]
+```
 
 ### Description
 
@@ -28,5 +30,7 @@ Prints out a short help for the command.
 
 List the segments in the index located at `X:\lucene-index\`:
 
-<code>lucene index list-segments X:\lucene-index</code>
+```console
+lucene index list-segments X:\lucene-index
+```
 
diff --git a/src/dotnet/tools/lucene-cli/docs/index/list-taxonomy-stats.md b/src/dotnet/tools/lucene-cli/docs/index/list-taxonomy-stats.md
index 0916783..8e3eabf 100644
--- a/src/dotnet/tools/lucene-cli/docs/index/list-taxonomy-stats.md
+++ b/src/dotnet/tools/lucene-cli/docs/index/list-taxonomy-stats.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene index list-taxonomy-stats [\<INDEX_DIRECTORY>] [-tree|--show-tree] [?|-h|--help]</code>
+```console
+lucene index list-taxonomy-stats [<INDEX_DIRECTORY>] [-tree|--show-tree] [?|-h|--help]
+```
 
 ### Description
 
@@ -18,7 +20,8 @@ Prints how many ords are under each dimension.
 
 The directory of the index. If omitted, it defaults to the current working directory.
 
-> **NOTE:** This directory must be a facet taxonomy directory for the command to succeed.
+> [!NOTE] 
+> This directory must be a facet taxonomy directory for the command to succeed.
 
 ### Options
 
@@ -34,5 +37,7 @@ Recursively lists all descendant nodes.
 
 List the taxonomy statistics from the index located at `X:\category-taxonomy-index\`, viewing all descendant nodes:
 
-<code>lucene index list-taxonomy-stats X:\category-taxonomy-index -tree</code>
+```console
+lucene index list-taxonomy-stats X:\category-taxonomy-index -tree
+```
 
diff --git a/src/dotnet/tools/lucene-cli/docs/index/list-term-info.md b/src/dotnet/tools/lucene-cli/docs/index/list-term-info.md
index 25a0c7b..3c077c1 100644
--- a/src/dotnet/tools/lucene-cli/docs/index/list-term-info.md
+++ b/src/dotnet/tools/lucene-cli/docs/index/list-term-info.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene index list-term-info \<INDEX_DIRECTORY> \<FIELD> \<TERM> [?|-h|--help]</code>
+```console
+lucene index list-term-info <INDEX_DIRECTORY> <FIELD> <TERM> [?|-h|--help]
+```
 
 ### Description
 
@@ -36,5 +38,7 @@ Prints out a short help for the command.
 
 List the term information from the index located at `C:\project-index\`:
 
-<code>lucene index list-term-info C:\project-index</code>
+```console
+lucene index list-term-info C:\project-index
+```
 
diff --git a/src/dotnet/tools/lucene-cli/docs/index/merge.md b/src/dotnet/tools/lucene-cli/docs/index/merge.md
index 0ce1ca4..b492d8f 100644
--- a/src/dotnet/tools/lucene-cli/docs/index/merge.md
+++ b/src/dotnet/tools/lucene-cli/docs/index/merge.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene index merge \<OUTPUT_DIRECTORY> \<INPUT_DIRECTORY> \<INPUT_DIRECTORY_2>[ \<INPUT_DIRECTORY_N>...] [?|-h|--help]</code>
+```console
+lucene index merge <OUTPUT_DIRECTORY> <INPUT_DIRECTORY> <INPUT_DIRECTORY_2>[ <INPUT_DIRECTORY_N>...] [?|-h|--help]
+```
 
 ### Description
 
@@ -32,5 +34,7 @@ Prints out a short help for the command.
 
 Merge the indexes `C:\product-index1` and `C:\product-index2` into an index located at `X:\merged-index`:
 
-<code>lucene index merge X:\merged-index C:\product-index1 C:\product-index2</code>
+```console
+lucene index merge X:\merged-index C:\product-index1 C:\product-index2
+```
 
diff --git a/src/dotnet/tools/lucene-cli/docs/index/split.md b/src/dotnet/tools/lucene-cli/docs/index/split.md
index 68b3a46..e690c17 100644
--- a/src/dotnet/tools/lucene-cli/docs/index/split.md
+++ b/src/dotnet/tools/lucene-cli/docs/index/split.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene index split \<OUTPUT_DIRECTORY> \<INPUT_DIRECTORY>[ \<INPUT_DIRECTORY_2>...] [-n|--number-of-parts] [-s|--sequential] [?|-h|--help]</code>
+```console
+lucene index split <OUTPUT_DIRECTORY> <INPUT_DIRECTORY>[ <INPUT_DIRECTORY_2>...] [-n|--number-of-parts] [-s|--sequential] [?|-h|--help]
+```
 
 ### Description
 
@@ -16,7 +18,8 @@ Deletes are only applied to a buffered list of deleted documents and don't affec
 
 The disadvantage of this tool is that source index needs to be read as many times as there are parts to be created. The multiple passes may be slow.
 
-> **NOTE:** This tool is unaware of documents added automatically via `IndexWriter.AddDocuments(IEnumerable&lt;IEnumerable&lt;IIndexableField&gt;&gt;, Analyzer)` or `IndexWriter.UpdateDocuments(Term, IEnumerable&lt;IEnumerable&lt;IIndexableField&gt;&gt;, Analyzer)`, which means it can easily break up such document groups.
+> [!NOTE]
+> This tool is unaware of documents added automatically via `IndexWriter.AddDocuments(IEnumerable<IEnumerable<IIndexableField>>, Analyzer)` or `IndexWriter.UpdateDocuments(Term, IEnumerable<IEnumerable<IIndexableField>>, Analyzer)`, which means it can easily break up such document groups.
 
 ### Arguments
 
@@ -46,9 +49,13 @@ Sequential doc-id range split (default is round-robin).
 
 Split the index located at `X:\old-index\` sequentially, placing the resulting 2 indices into the `X:\new-index\` directory:
 
-<code>lucene index split X:\new-index X:\old-index --sequential</code>
+```console
+lucene index split X:\new-index X:\old-index --sequential
+```
 
 
 Split the index located at `T:\in\` into 4 parts and place them into the `T:\out\` directory:
 
-<code>lucene index split T:\out T:\in -n 4</code>
+```console
+lucene index split T:\out T:\in -n 4
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/index/upgrade.md b/src/dotnet/tools/lucene-cli/docs/index/upgrade.md
index 1e9d16a..cc3fee8 100644
--- a/src/dotnet/tools/lucene-cli/docs/index/upgrade.md
+++ b/src/dotnet/tools/lucene-cli/docs/index/upgrade.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene index upgrade [\<INDEX_DIRECTORY>] [-d|--delete-prior-commits] [-v|--verbose] [-dir|--directory-type] [?|-h|--help]</code>
+```console
+lucene index upgrade [<INDEX_DIRECTORY>] [-d|--delete-prior-commits] [-v|--verbose] [-dir|--directory-type] [?|-h|--help]
+```
 
 ### Description
 
@@ -14,7 +16,8 @@ This tool keeps only the last commit in an index; for this reason, if the incomi
 
 Specify an FSDirectory implementation through the --directory-type option to force its use. If not qualified by an AssemblyName, the Lucene.Net.dll assembly will be used. 
 
-> **WARNING:** This tool may reorder document IDs! Be sure to make a backup of your index before you use this. Also, ensure you are using the correct version of this utility to match your application's version of Lucene.Net. This operation cannot be reversed.
+> [!WARNING]
+> This tool may reorder document IDs! Be sure to make a backup of your index before you use this. Also, ensure you are using the correct version of this utility to match your application's version of Lucene.NET. This operation cannot be reversed.
 
 ### Arguments
 
@@ -44,9 +47,13 @@ The `FSDirectory` implementation to use. Defaults to the optional `FSDirectory`
 
 Upgrade the index format of the index located at `X:\lucene-index\` to the same version as this tool, using the `SimpleFSDirectory` implementation:
 
-<code>lucene index upgrade X:\lucene-index -dir SimpleFSDirectory</code>
+```console
+lucene index upgrade X:\lucene-index -dir SimpleFSDirectory
+```
 
 
 Upgrade the index located at `C:\indexes\category-index\` verbosely, deleting all but the last commit:
 
-<code>lucene index upgrade C:\indexes\category-index --verbose --delete-prior-commits</code>
+```console
+lucene index upgrade C:\indexes\category-index --verbose --delete-prior-commits
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/lock/stress-test.md b/src/dotnet/tools/lucene-cli/docs/lock/stress-test.md
index 73ee099..fafd708 100644
--- a/src/dotnet/tools/lucene-cli/docs/lock/stress-test.md
+++ b/src/dotnet/tools/lucene-cli/docs/lock/stress-test.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene lock stress-test \<ID> \<VERIFIER_HOST> \<VERIFIER_PORT> \<LOCK_FACTORY_TYPE> \<LOCK_DIRECTORY> \<SLEEP_TIME_MS> \<TRIES> [?|-h|--help]</code>
+```console
+lucene lock stress-test <ID> <VERIFIER_HOST> <VERIFIER_PORT> <LOCK_FACTORY_TYPE> <LOCK_DIRECTORY> <SLEEP_TIME_MS> <TRIES> [?|-h|--help]
+```
 
 ### Description
 
@@ -52,4 +54,6 @@ Prints out a short help for the command.
 
 Run the client (stress test), connecting to the server on IP address `127.0.0.4` and port `54464` using the ID 3, the `NativeFSLockFactory`, specifying the lock directory as `F:\temp`, sleep for 50 milliseconds, and try to obtain a lock up to 10 times:
 
-<code>lucene lock stress-test 3 127.0.0.4 54464 NativeFSLockFactory F:\temp 50 10</code>
+```console
+lucene lock stress-test 3 127.0.0.4 54464 NativeFSLockFactory F:\temp 50 10
+```
diff --git a/src/dotnet/tools/lucene-cli/docs/lock/verify-server.md b/src/dotnet/tools/lucene-cli/docs/lock/verify-server.md
index d3f1f2b..e908844 100644
--- a/src/dotnet/tools/lucene-cli/docs/lock/verify-server.md
+++ b/src/dotnet/tools/lucene-cli/docs/lock/verify-server.md
@@ -6,7 +6,9 @@
 
 ### Synopsis
 
-<code>lucene lock verify-server \<IP_HOSTNAME> \<MAX_CLIENTS> [?|-h|--help]</code>
+```console
+lucene lock verify-server <IP_HOSTNAME> <MAX_CLIENTS> [?|-h|--help]
+```
 
 ### Description
 
@@ -32,4 +34,6 @@ Prints out a short help for the command.
 
 Run the server on IP `127.0.0.4` with a 10 connected clients:
 
-<code>lucene lock verify-server 127.0.0.4 10</code>
+```console
+lucene lock verify-server 127.0.0.4 10
+```

[lucenenet] 04/15: docs: Lucene.Net.Analysis (submodules): Fixed broken formatting and links (see #284, #300)

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nightowl888 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/lucenenet.git

commit a0cec6174488826af4b970ff63d0bb70b5605bad
Author: Shad Storhaug <sh...@shadstorhaug.com>
AuthorDate: Tue Mar 30 19:39:40 2021 +0700

    docs: Lucene.Net.Analysis (submodules): Fixed broken formatting and links (see #284, #300)
---
 src/Lucene.Net.Analysis.Kuromoji/overview.md   | 8 ++++----
 src/Lucene.Net.Analysis.Morfologik/overview.md | 8 ++++----
 src/Lucene.Net.Analysis.Phonetic/overview.md   | 8 ++++----
 src/Lucene.Net.Analysis.SmartCn/overview.md    | 6 +++---
 src/Lucene.Net.Analysis.SmartCn/package.md     | 8 +++-----
 5 files changed, 18 insertions(+), 20 deletions(-)

diff --git a/src/Lucene.Net.Analysis.Kuromoji/overview.md b/src/Lucene.Net.Analysis.Kuromoji/overview.md
index 1ea768f..26b6544 100644
--- a/src/Lucene.Net.Analysis.Kuromoji/overview.md
+++ b/src/Lucene.Net.Analysis.Kuromoji/overview.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Analysis.Ja
 summary: *content
 ---
@@ -20,8 +20,8 @@ summary: *content
  limitations under the License.
 -->
 
-  Kuromoji is a morphological analyzer for Japanese text.  
+Kuromoji is a morphological analyzer for Japanese text.  
 
- This module provides support for Japanese text analysis, including features such as part-of-speech tagging, lemmatization, and compound word analysis. 
+This module provides support for Japanese text analysis, including features such as part-of-speech tagging, lemmatization, and compound word analysis. 
 
- For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation. 
\ No newline at end of file
+For an introduction to Lucene's analysis API, see the [Lucene.Net.Analysis](../core/Lucene.Net.Analysis.html) namespace documentation. 
\ No newline at end of file
diff --git a/src/Lucene.Net.Analysis.Morfologik/overview.md b/src/Lucene.Net.Analysis.Morfologik/overview.md
index a758f48..629e5df 100644
--- a/src/Lucene.Net.Analysis.Morfologik/overview.md
+++ b/src/Lucene.Net.Analysis.Morfologik/overview.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Analysis.Morfologik
 title: Lucene.Net.Analysis.Morfologik
 summary: *content
@@ -21,8 +21,8 @@ summary: *content
  limitations under the License.
 -->
 
- This package provides dictionary-driven lemmatization ("accurate stemming") filter and analyzer for the Polish Language, driven by the [Morfologik library](http://morfologik.blogspot.com/) developed by Dawid Weiss and Marcin Miłkowski. 
+This package provides dictionary-driven lemmatization ("accurate stemming") filter and analyzer for the Polish Language, driven by the [Morfologik library](http://morfologik.blogspot.com/) developed by Dawid Weiss and Marcin Miłkowski. 
 
- For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation. 
+For an introduction to Lucene's analysis API, see the [Lucene.Net.Analysis](../core/Lucene.Net.Analysis.html) namespace documentation. 
 
- The MorfologikFilter yields one or more terms for each token. Each of those terms is given the same position in the index. 
\ No newline at end of file
+The MorfologikFilter yields one or more terms for each token. Each of those terms is given the same position in the index. 
\ No newline at end of file
diff --git a/src/Lucene.Net.Analysis.Phonetic/overview.md b/src/Lucene.Net.Analysis.Phonetic/overview.md
index caa5345..f1bd231 100644
--- a/src/Lucene.Net.Analysis.Phonetic/overview.md
+++ b/src/Lucene.Net.Analysis.Phonetic/overview.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Analysis.Phonetic
 summary: *content
 ---
@@ -20,8 +20,8 @@ summary: *content
  limitations under the License.
 -->
 
-  Analysis for indexing phonetic signatures (for sounds-alike search)
+Analysis for indexing phonetic signatures (for sounds-alike search)
 
- For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation. 
+For an introduction to Lucene's analysis API, see the [Lucene.Net.Analysis](../core/Lucene.Net.Analysis.html) namespace documentation. 
 
- This module provides analysis components (using encoders from [Apache Commons Codec](http://commons.apache.org/codec/)) that index and search phonetic signatures. 
\ No newline at end of file
+This module provides analysis components (using encoders ported to .NET from [Apache Commons Codec](http://commons.apache.org/codec/)) that index and search phonetic signatures. 
\ No newline at end of file
diff --git a/src/Lucene.Net.Analysis.SmartCn/overview.md b/src/Lucene.Net.Analysis.SmartCn/overview.md
index 0500484..061643b 100644
--- a/src/Lucene.Net.Analysis.SmartCn/overview.md
+++ b/src/Lucene.Net.Analysis.SmartCn/overview.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Analysis.Cn.Smart
 summary: *content
 ---
@@ -20,6 +20,6 @@ summary: *content
  limitations under the License.
 -->
 
-  Analyzer for Simplified Chinese, which indexes words.
+Analyzer for Simplified Chinese, which indexes words.
 
- For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation. 
\ No newline at end of file
+For an introduction to Lucene's analysis API, see the [Lucene.Net.Analysis](../core/Lucene.Net.Analysis.html) namespace documentation. 
\ No newline at end of file
diff --git a/src/Lucene.Net.Analysis.SmartCn/package.md b/src/Lucene.Net.Analysis.SmartCn/package.md
index 18dcfa9..5f52530 100644
--- a/src/Lucene.Net.Analysis.SmartCn/package.md
+++ b/src/Lucene.Net.Analysis.SmartCn/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Analysis.Cn.Smart
 summary: *content
 ---
@@ -22,12 +22,12 @@ summary: *content
 
 Analyzer for Simplified Chinese, which indexes words.
 @lucene.experimental
-<div>
+
 Three analyzers are provided for Chinese, each of which treats Chinese text in a different way.
 
 *   StandardAnalyzer: Index unigrams (individual Chinese characters) as a token.
 
-*   CJKAnalyzer (in the analyzers/cjk package): Index bigrams (overlapping groups of two adjacent Chinese characters) as tokens.
+*   CJKAnalyzer (in the <xref:Lucene.Net.Analysis.Cjk> namespace of <xref:Lucene.Net.Analysis.Common>): Index bigrams (overlapping groups of two adjacent Chinese characters) as tokens.
 
 *   SmartChineseAnalyzer (in this package): Index words (attempt to segment Chinese text into words) as tokens.
 
@@ -39,5 +39,3 @@ Example phrase: "我是中国人"
 2.  CJKAnalyzer: 我是-是中-中国-国人
 
 3.  SmartChineseAnalyzer: 我-是-中国-人
-
-</div>
\ No newline at end of file
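
As a companion to the segmentation comparison above, here is a minimal sketch of feeding the example phrase through `SmartChineseAnalyzer` using the standard Lucene.NET 4.8 token-stream consumption pattern (illustrative only, not part of the commit):

```cs
using System;
using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Cn.Smart;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

Analyzer analyzer = new SmartChineseAnalyzer(LuceneVersion.LUCENE_48);
using TokenStream ts = analyzer.GetTokenStream("content", new StringReader("我是中国人"));
ICharTermAttribute termAtt = ts.AddAttribute<ICharTermAttribute>();

ts.Reset();
while (ts.IncrementToken())
{
    // Expected word-level tokens: 我 / 是 / 中国 / 人
    Console.WriteLine(termAtt.ToString());
}
ts.End();
```
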

[lucenenet] 08/15: docs: Lucene.Net.Highlighter: Fixed broken formatting and links (see #284, #300)

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nightowl888 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/lucenenet.git

commit 2e320ea9cd19788db380d6872da53918a977cc3c
Author: Shad Storhaug <sh...@shadstorhaug.com>
AuthorDate: Tue Mar 30 19:42:54 2021 +0700

    docs: Lucene.Net.Highlighter: Fixed broken formatting and links (see #284, #300)
---
 src/Lucene.Net.Highlighter/Highlight/package.md    | 103 +++++++++++++++------
 .../VectorHighlight/package.md                     |  62 +++++++------
 src/Lucene.Net.Highlighter/overview.md             |  11 ++-
 3 files changed, 118 insertions(+), 58 deletions(-)

diff --git a/src/Lucene.Net.Highlighter/Highlight/package.md b/src/Lucene.Net.Highlighter/Highlight/package.md
index 9181d31..7d5de00 100644
--- a/src/Lucene.Net.Highlighter/Highlight/package.md
+++ b/src/Lucene.Net.Highlighter/Highlight/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Search.Highlight
 summary: *content
 ---
@@ -25,35 +25,82 @@ The highlight package contains classes to provide "keyword in context" features
 typically used to highlight search terms in the text of results pages.
 The Highlighter class is the central component and can be used to extract the
 most interesting sections of a piece of text and highlight them, with the help of
-Fragmenter, fragment Scorer, and Formatter classes.
+[Fragmenter](xref:Lucene.Net.Search.Highlight.IFragmenter), fragment [Scorer](xref:Lucene.Net.Search.Highlight.IScorer), and [Formatter](xref:Lucene.Net.Search.Highlight.IFormatter) classes.
 
 ## Example Usage
 
-      //... Above, create documents with two fields, one with term vectors (tv) and one without (notv)
-      IndexSearcher searcher = new IndexSearcher(directory);
-      QueryParser parser = new QueryParser("notv", analyzer);
-      Query query = parser.parse("million");
-    
-  TopDocs hits = searcher.search(query, 10);
-    
-  SimpleHTMLFormatter htmlFormatter = new SimpleHTMLFormatter();
-      Highlighter highlighter = new Highlighter(htmlFormatter, new QueryScorer(query));
-      for (int i = 0; i < 10;="" i++)="" {="" int="" id="hits.scoreDocs[i].doc;" document="" doc="searcher.doc(id);" string="" text="doc.get(" notv");"="" tokenstream="" tokenstream="TokenSources.getAnyTokenStream(searcher.getIndexReader()," id,="" "notv",="" analyzer);="" textfragment[]="" frag="highlighter.getBestTextFragments(tokenStream," text,="" false,="" 10);//highlighter.getbestfragments(tokenstream,="" text,="" 3,="" "...");="" for="" (int="" j="0;" j="">< frag.length;="" j++)=" [...]
-            System.out.println((frag[j].toString()));
-          }
+```cs
+const LuceneVersion matchVersion = LuceneVersion.LUCENE_48;
+Analyzer analyzer = new StandardAnalyzer(matchVersion);
+
+// Create an index to search
+string indexPath = Path.Combine(Path.GetTempPath(), Path.GetFileNameWithoutExtension(Path.GetTempFileName()));
+Directory dir = FSDirectory.Open(indexPath);
+using IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(matchVersion, analyzer));
+
+// This field must store term vectors and term vector offsets
+var fieldType = new FieldType(TextField.TYPE_STORED)
+{
+    StoreTermVectors = true,
+    StoreTermVectorOffsets = true
+};
+fieldType.Freeze();
+
+// Create documents with two fields, one with term vectors (tv) and one without (notv)
+writer.AddDocument(new Document {
+    new Field("tv", "Thanks a million!", fieldType),
+    new TextField("notv", "A million ways to win.", Field.Store.YES)
+});
+writer.AddDocument(new Document {
+    new Field("tv", "Hopefully, this won't highlight a million times.", fieldType),
+    new TextField("notv", "There are a million different ways to do that!", Field.Store.YES)
+});
+
+using IndexReader indexReader = writer.GetReader(applyAllDeletes: true);
+writer.Dispose();
+
+// Now search our index using an existing or new IndexReader
+
+IndexSearcher searcher = new IndexSearcher(indexReader);
+QueryParser parser = new QueryParser(matchVersion, "notv", analyzer);
+Query query = parser.Parse("million");
+
+TopDocs hits = searcher.Search(query, 10);
+
+SimpleHTMLFormatter htmlFormatter = new SimpleHTMLFormatter();
+Highlighter highlighter = new Highlighter(htmlFormatter, new QueryScorer(query));
+int totalScoreDocs = hits.ScoreDocs.Length > 10 ? 10 : hits.ScoreDocs.Length;
+for (int i = 0; i < totalScoreDocs; i++)
+{
+    int id = hits.ScoreDocs[i].Doc;
+    Document doc = searcher.Doc(id);
+    string text = doc.Get("notv");
+    TokenStream tokenStream = TokenSources.GetAnyTokenStream(searcher.IndexReader, id, "notv", analyzer);
+    TextFragment[] frag = highlighter.GetBestTextFragments(
+        tokenStream, text, mergeContiguousFragments: false, maxNumFragments: 10); // highlighter.GetBestFragments(tokenStream, text, 3, "...");
+    for (int j = 0; j < frag.Length; j++)
+    {
+        if (frag[j] != null && frag[j].Score > 0)
+        {
+            Console.WriteLine(frag[j].ToString());
         }
-        //Term vector
-        text = doc.get("tv");
-        tokenStream = TokenSources.getAnyTokenStream(searcher.getIndexReader(), hits.scoreDocs[i].doc, "tv", analyzer);
-        frag = highlighter.getBestTextFragments(tokenStream, text, false, 10);
-        for (int j = 0; j < frag.length;="" j++)="" {="" if="" ((frag[j]="" !="null)" &&="" (frag[j].getscore()=""> 0)) {
-            System.out.println((frag[j].toString()));
-          }
+    }
+    //Term vector
+    text = doc.Get("tv");
+    tokenStream = TokenSources.GetAnyTokenStream(searcher.IndexReader, hits.ScoreDocs[i].Doc, "tv", analyzer);
+    frag = highlighter.GetBestTextFragments(tokenStream, text, false, 10);
+    for (int j = 0; j < frag.Length; j++)
+    {
+        if (frag[j] != null && frag[j].Score > 0)
+        {
+            Console.WriteLine(frag[j].ToString());
         }
-        System.out.println("-------------");
-      }
+    }
+    Console.WriteLine("-------------");
+}
+```
 
-## New features 06/02/2005
+## New features 2005-02-06
 
 
 This release adds options for encoding (thanks to Nicko Cadell).
@@ -62,7 +109,7 @@ all those non-xhtml standard characters such as & into legal values. This simple
 some languages -  Commons Lang has an implementation that could be used: escapeHtml(String) in
 http://svn.apache.org/viewcvs.cgi/jakarta/commons/proper/lang/trunk/src/java/org/apache/commons/lang/StringEscapeUtils.java?rev=137958&view=markup
 
-## New features 22/12/2004
+## New features 2004-12-22
 
 
 This release adds some new capabilities:
@@ -73,8 +120,8 @@ This release adds some new capabilities:
 
 3.  Options for better summarization by using term IDF scores to influence fragment selection
 
- The highlighter takes a TokenStream as input. Until now these streams have typically been produced using an Analyzer but the new class TokenSources provides helper methods for obtaining TokenStreams from the new TermVector position support (see latest CVS version).
+The highlighter takes a <xref:Lucene.Net.Analysis.TokenStream> as input. Until now these streams have typically been produced using an <xref:Lucene.Net.Analysis.Analyzer> but the new class TokenSources provides helper methods for obtaining TokenStreams from the new TermVector position support (see latest CVS version).
 
-The new class GradientFormatter can use a scale of colors to highlight terms according to their score. A subtle use of color can help emphasise the reasons for matching (useful when doing "MoreLikeThis" queries and you want to see what the basis of the similarities are).
+The new class <xref:Lucene.Net.Search.Highlight.GradientFormatter> can use a scale of colors to highlight terms according to their score. A subtle use of color can help emphasize the reasons for matching (useful when doing "MoreLikeThis" queries and you want to see what the basis of the similarities are).
 
-The QueryScorer class has a new constructor which can use an IndexReader to derive the IDF (inverse document frequency) for each term in order to influence the score. This is useful for helping to extracting the most significant sections of a document and in supplying scores used by the new GradientFormatter to color significant words more strongly. The QueryScorer.getMaxWeight method is useful when passed to the GradientFormatter constructor to define the top score which is associated w [...]
\ No newline at end of file
+The <xref:Lucene.Net.Search.Highlight.QueryScorer> class has a new constructor which can use an <xref:Lucene.Net.Index.IndexReader> to derive the IDF (inverse document frequency) for each term in order to influence the score. This is useful for helping to extract the most significant sections of a document and in supplying scores used by the new GradientFormatter to color significant words more strongly. The [QueryScorer.MaxTermWeight](xref:Lucene.Net.Search.Highlight.QueryScorer#Luce [...]
\ No newline at end of file
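
To make the two features described above concrete, here is a short hedged sketch (not part of this commit) combining an IDF-aware `QueryScorer` with a `GradientFormatter`; the constructor shapes are assumed to mirror the Java originals:

```cs
using Lucene.Net.Search.Highlight;

// A QueryScorer built with an IndexReader factors IDF into fragment scores.
QueryScorer scorer = new QueryScorer(query, indexReader, "notv");

// Arguments: maxScore, min/max foreground color, min/max background color
// (null foreground = unchanged). Passing MaxTermWeight as maxScore ties the
// strongest color to the top-weighted term.
IFormatter formatter = new GradientFormatter(scorer.MaxTermWeight, null, null, "#FFFFFF", "#FF0000");

Highlighter highlighter = new Highlighter(formatter, scorer);
```

Here `query` and `indexReader` are assumed to come from surrounding search code, as in the example earlier in this file.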
diff --git a/src/Lucene.Net.Highlighter/VectorHighlight/package.md b/src/Lucene.Net.Highlighter/VectorHighlight/package.md
index 3aaa474..224fa4f 100644
--- a/src/Lucene.Net.Highlighter/VectorHighlight/package.md
+++ b/src/Lucene.Net.Highlighter/VectorHighlight/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Search.VectorHighlight
 summary: *content
 ---
@@ -32,8 +32,6 @@ This is an another highlighter implementation.
 
 *   support multi-term (includes wildcard, range, regexp, etc) queries
 
-*   need Java 1.5
-
 *   highlight fields need to be stored with Positions and Offsets
 
 *   take into account query boost and/or IDF-weight to score fragments
@@ -77,12 +75,15 @@ For your convenience, here is the offsets and positions info of the sample text.
 
 In Step 1, Fast Vector Highlighter generates <xref:Lucene.Net.Search.VectorHighlight.FieldQuery.QueryPhraseMap> from the user query. `QueryPhraseMap` consists of the following members:
 
-    public class QueryPhraseMap {
-      boolean terminal;
-      int slop;   // valid if terminal == true and phraseHighlight == true
-      float boost;  // valid if terminal == true
-      Map<String, QueryPhraseMap> subMap;
-    } 
+```cs
+public class QueryPhraseMap
+{
+    bool terminal;
+    int slop;   // valid if terminal == true and phraseHighlight == true
+    float boost;  // valid if terminal == true
+    IDictionary<string, QueryPhraseMap> subMap;
+}
+```
 
 `QueryPhraseMap` has subMap. The key of the subMap is a term text in the user query and the value is a subsequent `QueryPhraseMap`. If the query is a term (not phrase), then the subsequent `QueryPhraseMap` is marked as terminal. If the query is a phrase, then the subsequent `QueryPhraseMap` is not a terminal and it has the next term text in the phrase.
 
@@ -93,13 +94,13 @@ From the sample user query, the following `QueryPhraseMap` will be generated:
     |"Lucene"|o+->|boost=2|*|  * : terminal
     +--------+-+  +-------+-+
     
-+--------+-+  +---------+-+  +-------+------+-+
+    +--------+-+  +---------+-+  +-------+------+-+
     |"search"|o+->|"library"|o+->|boost=1|slop=1|*|
     +--------+-+  +---------+-+  +-------+------+-+
 
 ### Step 2.
 
-In Step 2, Fast Vector Highlighter generates <xref:Lucene.Net.Search.VectorHighlight.FieldTermStack>. Fast Vector Highlighter uses term vector data (must be stored [#setStoreTermVectorOffsets(boolean)](xref:Lucene.Net.Documents.FieldType) and [#setStoreTermVectorPositions(boolean)](xref:Lucene.Net.Documents.FieldType)) to generate it. `FieldTermStack` keeps the terms in the user query. Therefore, in this sample case, Fast Vector Highlighter generates the following `FieldTermStack`:
+In Step 2, Fast Vector Highlighter generates <xref:Lucene.Net.Search.VectorHighlight.FieldTermStack>. Fast Vector Highlighter uses term vector data (must be stored [FieldType.StoreTermVectorOffsets = true](xref:Lucene.Net.Documents.FieldType#Lucene_Net_Documents_FieldType_StoreTermVectorOffsets) and [FieldType.StoreTermVectorPositions = true](xref:Lucene.Net.Documents.FieldType#Lucene_Net_Documents_FieldType_StoreTermVectorPositions)) to generate it. `FieldTermStack` keeps the terms in t [...]
 
        FieldTermStack
     +------------------+
@@ -136,25 +137,32 @@ In Step 4, Fast Vector Highlighter creates `FieldFragList` by reference to `Fiel
     +---------------------------------+
 
 The calculation for each `FieldFragList.WeightedFragInfo.totalBoost` (weight)  
-depends on the implementation of `FieldFragList.add( ... )`:
-
-      public void add( int startOffset, int endOffset, List<WeightedPhraseInfo> phraseInfoList ) {
-        float totalBoost = 0;
-        List<SubInfo> subInfos = new ArrayList<SubInfo>();
-        for( WeightedPhraseInfo phraseInfo : phraseInfoList ){
-          subInfos.add( new SubInfo( phraseInfo.getText(), phraseInfo.getTermsOffsets(), phraseInfo.getSeqnum() ) );
-          totalBoost += phraseInfo.getBoost();
-        }
-        getFragInfos().add( new WeightedFragInfo( startOffset, endOffset, subInfos, totalBoost ) );
-      }
+depends on the implementation of `FieldFragList.Add( ... )`:
+
+```cs
+public override void Add(int startOffset, int endOffset, IList<WeightedPhraseInfo> phraseInfoList)
+{
+	float totalBoost = 0;
+	List<SubInfo> subInfos = new List<SubInfo>();
+	foreach (WeightedPhraseInfo phraseInfo in phraseInfoList)
+	{
+		subInfos.Add(new SubInfo(phraseInfo.GetText(), phraseInfo.TermsOffsets, phraseInfo.Seqnum, phraseInfo.Boost));
+		totalBoost += phraseInfo.Boost;
+	}
+	FragInfos.Add(new WeightedFragInfo(startOffset, endOffset, subInfos, totalBoost));
+}
+```
 
 The used implementation of `FieldFragList` is noted in `BaseFragListBuilder.createFieldFragList( ... )`:
 
-      public FieldFragList createFieldFragList( FieldPhraseList fieldPhraseList, int fragCharSize ){
-        return createFieldFragList( fieldPhraseList, new SimpleFieldFragList( fragCharSize ), fragCharSize );
-      }
+```cs
+public override FieldFragList CreateFieldFragList(FieldPhraseList fieldPhraseList, int fragCharSize)
+{
+	return CreateFieldFragList(fieldPhraseList, new SimpleFieldFragList(fragCharSize), fragCharSize);
+}
+```
 
- Currently there are basically to approaches available: 
+Currently there are basically two approaches available:
 
 *   `SimpleFragListBuilder using SimpleFieldFragList`: _sum-of-boosts_-approach. The totalBoost is calculated by summarizing the query-boosts per term. Per default a term is boosted by 1.0
 
@@ -187,4 +195,4 @@ Comparison of the two approaches:
 
 ### Step 5.
 
-In Step 5, by using `FieldFragList` and the field stored data, Fast Vector Highlighter creates highlighted snippets!
\ No newline at end of file
+In Step 5, by using <xref:Lucene.Net.Search.VectorHighlight.FieldFragList> and the field stored data, Fast Vector Highlighter creates highlighted snippets!
\ No newline at end of file
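
Tying the five steps together, here is a minimal hedged sketch (not part of this commit) of driving them through `FastVectorHighlighter`, assuming the Lucene.NET 4.8 API surface:

```cs
using System;
using Lucene.Net.Search.VectorHighlight;

var highlighter = new FastVectorHighlighter(); // defaults: phrase highlighting and field matching on

// Step 1: build the QueryPhraseMap from the user query.
FieldQuery fieldQuery = highlighter.GetFieldQuery(query);

// Steps 2-5: term stack, phrase list, frag list, and finally the snippet.
string fragment = highlighter.GetBestFragment(fieldQuery, indexReader, docId, "content", 100);
Console.WriteLine(fragment);
```

`query`, `indexReader`, and `docId` are placeholders for values from the surrounding search code, and the `"content"` field must be indexed with term vector positions and offsets as described in Step 2.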
diff --git a/src/Lucene.Net.Highlighter/overview.md b/src/Lucene.Net.Highlighter/overview.md
index 4580146..a62ce0b 100644
--- a/src/Lucene.Net.Highlighter/overview.md
+++ b/src/Lucene.Net.Highlighter/overview.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Highlighter
 title: Lucene.Net.Highlighter
 summary: *content
@@ -21,5 +21,10 @@ summary: *content
  limitations under the License.
 -->
 
-  The highlight package contains classes to provide "keyword in context" features
-  typically used to highlight search terms in the text of results pages.
\ No newline at end of file
+The highlight package contains classes to provide "keyword in context" features typically used to highlight search terms in the text of results pages. There are 3 main highlighters:
+
+* <xref:Lucene.Net.Search.Highlight> - A lightweight highlighter for basic usage.
+
+* <xref:Lucene.Net.Search.PostingsHighlight> (in the <xref:Lucene.Net.ICU> package) - Highlighter implementation that uses offsets from postings lists. This highlighter supports Unicode.
+
+* <xref:Lucene.Net.Search.VectorHighlight> - This highlighter is fast for large docs, supports N-gram fields, multi-term highlighting, colored highlight tags, and more. There is a <xref:Lucene.Net.Search.VectorHighlight.BreakIteratorBoundaryScanner> in the <xref:Lucene.Net.ICU> package that can be added on for Unicode support.
\ No newline at end of file

[lucenenet] 03/15: docs: Lucene.Net.Analysis.Common/Collation/TokeAttributes/package.md: Fixed broken link (see #300)

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nightowl888 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/lucenenet.git

commit 0d970b906197c5d2906fe7beba5ef68af5638319
Author: Shad Storhaug <sh...@shadstorhaug.com>
AuthorDate: Tue Mar 30 19:38:03 2021 +0700

    docs: Lucene.Net.Analysis.Common/Collation/TokeAttributes/package.md: Fixed broken link (see #300)
---
 src/Lucene.Net.Analysis.Common/Collation/TokenAttributes/package.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/Lucene.Net.Analysis.Common/Collation/TokenAttributes/package.md b/src/Lucene.Net.Analysis.Common/Collation/TokenAttributes/package.md
index 4c6ef88..e429c8b 100644
--- a/src/Lucene.Net.Analysis.Common/Collation/TokenAttributes/package.md
+++ b/src/Lucene.Net.Analysis.Common/Collation/TokenAttributes/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Collation.TokenAttributes
 summary: *content
 ---
@@ -20,4 +20,4 @@ summary: *content
  limitations under the License.
 -->
 
-Custom <xref:Lucene.Net.Util.AttributeImpl> for indexing collation keys as index terms.
\ No newline at end of file
+Custom <xref:Lucene.Net.Util.Attribute> for indexing collation keys as index terms.
\ No newline at end of file

[lucenenet] 14/15: docs: Lucene.Net/overview.md: Changed fenced code block to console style

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nightowl888 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/lucenenet.git

commit b1c353c23137add581fe45c27bb09774d273b0c9
Author: Shad Storhaug <sh...@shadstorhaug.com>
AuthorDate: Tue Mar 30 19:46:08 2021 +0700

    docs: Lucene.Net/overview.md: Changed fenced code block to console style
---
 src/Lucene.Net/overview.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/Lucene.Net/overview.md b/src/Lucene.Net/overview.md
index 45db0bd..8d540ae 100644
--- a/src/Lucene.Net/overview.md
+++ b/src/Lucene.Net/overview.md
@@ -134,7 +134,7 @@ queries and searches an index.
 
 To demonstrate this, try something like:
 
-```
+```console
 > dotnet demo index-files index rec.food.recipies/soups
 adding rec.food.recipes/soups/abalone-chowder
 [...]

[lucenenet] 10/15: docs: Lucene.Net.Misc: Fixed broken formatting and links (see #284, #300)

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nightowl888 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/lucenenet.git

commit ba2e0aef43255a83957bb33184be0403e9d0cc73
Author: Shad Storhaug <sh...@shadstorhaug.com>
AuthorDate: Tue Mar 30 19:43:39 2021 +0700

    docs: Lucene.Net.Misc: Fixed broken formatting and links (see #284, #300)
---
 src/Lucene.Net.Misc/Index/Sorter/package.md | 6 +++---
 src/Lucene.Net.Misc/overview.md             | 9 ++++++++-
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/src/Lucene.Net.Misc/Index/Sorter/package.md b/src/Lucene.Net.Misc/Index/Sorter/package.md
index 4477e73..a039ae3 100644
--- a/src/Lucene.Net.Misc/Index/Sorter/package.md
+++ b/src/Lucene.Net.Misc/Index/Sorter/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Index.Sorter
 summary: *content
 ---
@@ -20,9 +20,9 @@ summary: *content
  limitations under the License.
 -->
 
-Provides index sorting capablities. The application can use any
+Provides index sorting capabilities. The application can use any
 Sort specification, e.g. to sort by fields using DocValues or FieldCache, or to
-reverse the order of the documents (by using SortField.Type.DOC in reverse).
+reverse the order of the documents (by using [SortFieldType.DOC](xref:Lucene.Net.Search.SortFieldType#Lucene_Net_Search_SortFieldType_DOC) in reverse).
 Multi-level sorts can be specified the same way you would when searching, by
 building Sort from multiple SortFields.
 
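As a hedged illustration of the capability described above (not part of this commit), a `SortingMergePolicy` would typically be plugged into the `IndexWriterConfig`; the names below assume the Lucene.NET 4.8 API:

```cs
using Lucene.Net.Index;
using Lucene.Net.Index.Sorter;
using Lucene.Net.Search;
using Lucene.Net.Util;

// Keep each merged segment sorted by a numeric "timestamp" field
// (final bool = reverse, i.e. newest first).
Sort sort = new Sort(new SortField("timestamp", SortFieldType.INT64, true));
var config = new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer)
{
    MergePolicy = new SortingMergePolicy(new TieredMergePolicy(), sort)
};
```

`analyzer` is assumed to be defined by the surrounding code.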
diff --git a/src/Lucene.Net.Misc/overview.md b/src/Lucene.Net.Misc/overview.md
index 29319cb..733530a 100644
--- a/src/Lucene.Net.Misc/overview.md
+++ b/src/Lucene.Net.Misc/overview.md
@@ -26,6 +26,12 @@ summary: *content
 The misc package has various tools for splitting/merging indices,
 changing norms, finding high freq terms, and others.
 
+
+<!--
+
+LUCENENET specific - we didn't port the NativeUnixDirectory, and it is not clear whether there is any advantage to doing so in .NET.
+See: https://github.com/apache/lucenenet/issues/276
+
 ## NativeUnixDirectory
 
 __NOTE__: This uses C++ sources (accessible via JNI), which you'll
@@ -55,4 +61,5 @@ Steps to build:
 NativePosixUtil.cpp/java also expose access to the posix_madvise,
 madvise, posix_fadvise functions, which are somewhat more cross
 platform than O_DIRECT, however, in testing (see above link), these
-APIs did not seem to help prevent buffer cache eviction.
\ No newline at end of file
+APIs did not seem to help prevent buffer cache eviction.
+-->
\ No newline at end of file

[lucenenet] 15/15: docs: websites/apidocs/index.md: Updated links to OpenNLP and Highlighter projects, commented TODO work

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nightowl888 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/lucenenet.git

commit 299f01480f05453877f6d3bf2cd5cf79e06a0403
Author: Shad Storhaug <sh...@shadstorhaug.com>
AuthorDate: Tue Mar 30 19:47:30 2021 +0700

    docs: websites/apidocs/index.md: Updated links to OpenNLP and Highlighter projects, commented TODO work
---
 websites/apidocs/index.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/websites/apidocs/index.md b/websites/apidocs/index.md
index d57aefb..3b9e45f 100644
--- a/websites/apidocs/index.md
+++ b/websites/apidocs/index.md
@@ -30,7 +30,7 @@ on some of the conceptual or inner details of Lucene:
 ## Reference Documents
 
 - [Changes](https://github.com/apache/lucenenet/releases/tag/<EnvVar:LuceneNetReleaseTag>): List of changes in this release.
-- System Requirements: Minimum and supported .NET versions. __TODO: Add link__
+<!-- - System Requirements: Minimum and supported .NET versions. LUCENENET TODO: Add link -->
 - [Migration Guide](xref:Lucene.Net.Migration.Guide): What changed in Lucene 4; how to migrate code from Lucene 3.x.
 - [File Formats](xref:Lucene.Net.Codecs.Lucene46) : Guide to the supported index format used by Lucene.  This can be customized by using [an alternate codec](xref:Lucene.Net.Codecs).
 - [Search and Scoring in Lucene](xref:Lucene.Net.Search): Introduction to how Lucene scores documents.
@@ -43,7 +43,7 @@ on some of the conceptual or inner details of Lucene:
 - <xref:Lucene.Net.Analysis.Common> - Analyzers for indexing content in different languages and domains
 - [Lucene.Net.Analysis.Kuromoji](xref:Lucene.Net.Analysis.Ja) - Japanese Morphological Analyzer
 - <xref:Lucene.Net.Analysis.Morfologik> - Analyzer for dictionary stemming, built-in Polish dictionary
-- <xref:Lucene.Net.Analysis.OpenNlp> - OpenNLP Library Integration
+- [Lucene.Net.Analysis.OpenNLP](xref:Lucene.Net.Analysis.OpenNlp) - OpenNLP Library Integration
 - <xref:Lucene.Net.Analysis.Phonetic> - Analyzer for indexing phonetic signatures (for sounds-alike search)
 - [Lucene.Net.Analysis.SmartCn](xref:Lucene.Net.Analysis.Cn.Smart) - Analyzer for indexing Chinese
 - <xref:Lucene.Net.Analysis.Stempel> - Analyzer for indexing Polish
@@ -53,7 +53,7 @@ on some of the conceptual or inner details of Lucene:
 - [Lucene.Net.Expressions](xref:Lucene.Net.Expressions) - Dynamically computed values to sort/facet/search on based on a pluggable grammar
 - [Lucene.Net.Facet](xref:Lucene.Net.Facet) - Faceted indexing and search capabilities
 - <xref:Lucene.Net.Grouping> - Collectors for grouping search results
-- <xref:Lucene.Net.Search.Highlight> - Highlights search keywords in results
+- <xref:Lucene.Net.Highlighter> - Highlights search keywords in results
 - <xref:Lucene.Net.ICU> - Specialized ICU (International Components for Unicode) Analyzers and Highlighters
 - <xref:Lucene.Net.Join> - Index-time and Query-time joins for normalized content
 - [Lucene.Net.Memory](xref:Lucene.Net.Index.Memory) - Single-document in-memory index implementation

[lucenenet] 02/15: docs: migration-guide.md: Fixed formatting so code examples are inside of lists and lists continue after the code

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nightowl888 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/lucenenet.git

commit 8caf7b647ec37bfd2fe8e10bb7263bfdf3a2c6a4
Author: Shad Storhaug <sh...@shadstorhaug.com>
AuthorDate: Tue Mar 30 19:36:21 2021 +0700

    docs: migration-guide.md: Fixed formatting so code examples are inside of lists and lists continue after the code
---
 src/Lucene.Net/migration-guide.md | 460 +++++++++++++++++---------------------
 1 file changed, 200 insertions(+), 260 deletions(-)

diff --git a/src/Lucene.Net/migration-guide.md b/src/Lucene.Net/migration-guide.md
index 35aa0b9..bdf1201 100644
--- a/src/Lucene.Net/migration-guide.md
+++ b/src/Lucene.Net/migration-guide.md
@@ -66,69 +66,59 @@ enumeration APIs.  Here are the major changes:
 
 * Fields are separately enumerated (`Fields.GetEnumerator()`) from the terms
   within each field (`TermEnum`).  So instead of this:
-
-```cs
-        TermEnum termsEnum = ...;
-        while (termsEnum.Next())
-        {
-            Term t = termsEnum.Term;
-            Console.WriteLine("field=" + t.Field + "; text=" + t.Text);
-        }
-```
-
-  Do this:
-
-```cs
-        foreach (string field in fields)
+    ```cs
+    TermEnum termsEnum = ...;
+    while (termsEnum.Next())
+    {
+        Term t = termsEnum.Term;
+        Console.WriteLine("field=" + t.Field + "; text=" + t.Text);
+    }
+    ```
+    Do this:
+    ```cs
+    foreach (string field in fields)
+    {
+        Terms terms = fields.GetTerms(field);
+        TermsEnum termsEnum = terms.GetEnumerator();
+        BytesRef text;
+        while(termsEnum.MoveNext())
         {
-            Terms terms = fields.GetTerms(field);
-            TermsEnum termsEnum = terms.GetEnumerator();
-            BytesRef text;
-            while(termsEnum.MoveNext())
-            {
-                Console.WriteLine("field=" + field + "; text=" + termsEnum.Current.Utf8ToString());
-            }
+            Console.WriteLine("field=" + field + "; text=" + termsEnum.Current.Utf8ToString());
         }
-```
+    }
+    ```
 
 * `TermDocs` is renamed to `DocsEnum`.  Instead of this:
-
-```cs
-        while (td.Next())
-        {
-            int doc = td.Doc;
-            ...
-        }
-```
-
-  do this:
-
-```cs
-        int doc;
-        while ((doc = td.Next()) != DocsEnum.NO_MORE_DOCS)
-        {
-            ...
-        }
-```
-
-  Instead of this:
-
-```cs
-        if (td.SkipTo(target))
-        {
-            int doc = td.Doc;
-            ...
-        }
-```
-
-  do this:
-    
-```cs
-        if ((doc = td.Advance(target)) != DocsEnum.NO_MORE_DOCS)
-        {
-            ...
-        }
-```
+    ```cs
+    while (td.Next())
+    {
+        int doc = td.Doc;
+        ...
+    }
+    ```
+    do this:
+    ```cs
+    int doc;
+    while ((doc = td.NextDoc()) != DocsEnum.NO_MORE_DOCS)
+    {
+        ...
+    }
+    ```
+    Instead of this:
+    ```cs
+    if (td.SkipTo(target))
+    {
+        int doc = td.Doc;
+        ...
+    }
+    ```
+    do this:
+    ```cs
+    if ((doc = td.Advance(target)) != DocsEnum.NO_MORE_DOCS)
+    {
+        ...
+    }
+    ```
 
 * `TermPositions` is renamed to `DocsAndPositionsEnum`, and no longer
   extends the docs only enumerator (`DocsEnum`).
@@ -142,32 +132,25 @@ enumeration APIs.  Here are the major changes:
   `TermsEnum` is able to seek, and then you request the
   docs/positions enum from that `TermsEnum`.
 
-* `TermsEnum`'s seek method returns more information.  So instead of
-  this:
-
-```cs
-        Term t;
-        TermEnum termEnum = reader.Terms(t);
-        if (t.Equals(termEnum.Term))
-        {
-            ...
-        }
-```
-
-  do this:
-
-```cs
-        TermsEnum termsEnum = ...;
-        BytesRef text;
-        if (termsEnum.Seek(text) == TermsEnum.SeekStatus.FOUND)
-        {
-            ...
-        }
-```
-
-  `SeekStatus` also contains `END` (enumerator is done) and `NOT_FOUND`
-  (term was not found but enumerator is now positioned to the next
-  term).
+* `TermsEnum`'s seek method returns more information.  So instead of this:
+    ```cs
+    Term t;
+    TermEnum termEnum = reader.Terms(t);
+    if (t.Equals(termEnum.Term))
+    {
+        ...
+    }
+    ```
+    do this:
+    ```cs
+    TermsEnum termsEnum = ...;
+    BytesRef text;
+    if (termsEnum.Seek(text) == TermsEnum.SeekStatus.FOUND)
+    {
+        ...
+    }
+    ```
+    `SeekStatus` also contains `END` (enumerator is done) and `NOT_FOUND` (term was not found but enumerator is now positioned to the next term).
 
 * `TermsEnum` has an `Ord` property, returning the long numeric
   ordinal (ie, first term is 0, next is 1, and so on) for the term
@@ -175,92 +158,62 @@ enumeration APIs.  Here are the major changes:
   ord) method.  Note that these members are optional; in
   particular the `MultiFields` `TermsEnum` does not implement them.
 
-
 * How you obtain the enums has changed.  The primary entry point is
   the `Fields` class.  If you know your reader is a single segment
   reader, do this:
-
-```cs
-        Fields fields = reader.Fields();
-        if (fields != null)
-        {
-            ...
-        }
-```
-
-  If the reader might be multi-segment, you must do this:
-
-```cs
-        Fields fields = MultiFields.GetFields(reader);
-        if (fields != null)
-        {
-            ...
-        }
-```
-  
-  The fields may be `null` (eg if the reader has no fields).
-
-  Note that the `MultiFields` approach entails a performance hit on
-  `MultiReaders`, as it must merge terms/docs/positions on the fly. It's
-  generally better to instead get the sequential readers (use
-  `Lucene.Net.Util.ReaderUtil`) and then step through those readers yourself,
-  if you can (this is how Lucene drives searches).
-
-  If you pass a `SegmentReader` to `MultiFields.GetFields()` it will simply
-  return `reader.GetFields(), so there is no performance hit in that
-  case.
-
-  Once you have a non-null `Fields` you can do this:
-
-```cs
-        Terms terms = fields.GetTerms("field");
-        if (terms != null)
-        {
-            ...
-        }
-```
-
-  The terms may be `null` (eg if the field does not exist).
-
-  Once you have a non-null terms you can get an enum like this:
-
-```cs
-        TermsEnum termsEnum = terms.GetIterator();
-```
-
-  The returned `TermsEnum` will not be `null`.
-
-  You can then .Next() through the TermsEnum, or Seek.  If you want a
-  `DocsEnum`, do this:
-
-```cs
-        IBits liveDocs = reader.GetLiveDocs();
-        DocsEnum docsEnum = null;
-
-        docsEnum = termsEnum.Docs(liveDocs, docsEnum, needsFreqs);
-```
-
-  You can pass in a prior `DocsEnum` and it will be reused if possible.
-
-  Likewise for `DocsAndPositionsEnum`.
-
-  `IndexReader` has several sugar methods (which just go through the
-  above steps, under the hood).  Instead of:
-
-```cs
-        Term t;
-        TermDocs termDocs = reader.TermDocs;
-        termDocs.Seek(t);
-```
-
-  do this:
-
-```cs
-        Term t;
-        DocsEnum docsEnum = reader.GetTermDocsEnum(t);
-```
-
-  Likewise for `DocsAndPositionsEnum`.
+    ```cs
+    Fields fields = reader.Fields();
+    if (fields != null)
+    {
+        ...
+    }
+    ```
+    If the reader might be multi-segment, you must do this:
+    ```cs
+    Fields fields = MultiFields.GetFields(reader);
+    if (fields != null)
+    {
+        ...
+    }
+    ```
+    The fields may be `null` (eg if the reader has no fields).<br/>
+    Note that the `MultiFields` approach entails a performance hit on `MultiReaders`, as it must merge terms/docs/positions on the fly. It's generally better to instead get the sequential readers (use `Lucene.Net.Util.ReaderUtil`) and then step through those readers yourself, if you can (this is how Lucene drives searches).<br/>
+    If you pass a `SegmentReader` to `MultiFields.GetFields()` it will simply return `reader.GetFields()`, so there is no performance hit in that case.<br/>
+    Once you have a non-null `Fields` you can do this:
+    ```cs
+    Terms terms = fields.GetTerms("field");
+    if (terms != null)
+    {
+        ...
+    }
+    ```
+    The terms may be `null` (eg if the field does not exist).<br/>
+    Once you have a non-null terms you can get an enum like this:
+    ```cs
+    TermsEnum termsEnum = terms.GetEnumerator();
+    ```
+    The returned `TermsEnum` will not be `null`.<br/>
+    You can then .MoveNext() through the TermsEnum, or Seek.  If you want a `DocsEnum`, do this:
+    ```cs
+    IBits liveDocs = reader.GetLiveDocs();
+    DocsEnum docsEnum = null;
+
+    docsEnum = termsEnum.Docs(liveDocs, docsEnum, needsFreqs);
+    ```
+    You can pass in a prior `DocsEnum` and it will be reused if possible.<br/>
+    Likewise for `DocsAndPositionsEnum`.<br/>
+    `IndexReader` has several sugar methods (which just go through the above steps, under the hood).  Instead of:
+    ```cs
+    Term t;
+    TermDocs termDocs = reader.TermDocs;
+    termDocs.Seek(t);
+    ```
+    do this:
+    ```cs
+    Term t;
+    DocsEnum docsEnum = reader.GetTermDocsEnum(t);
+    ```
+    Likewise for `DocsAndPositionsEnum`.
 
 ## [LUCENE-2380](https://issues.apache.org/jira/browse/LUCENE-2380): FieldCache.GetStrings/Index --> FieldCache.GetDocTerms/Index
 
@@ -272,28 +225,22 @@ enumeration APIs.  Here are the major changes:
   with `GetTerms` (returning a `BinaryDocValues` instance).
   `BinaryDocValues` provides a `Get` method, taking a `docID` and a `BytesRef`
   to fill (which must not be `null`), and it fills it in with the
-  reference to the bytes for that term.
-
-  If you had code like this before:
-
-```cs
-        string[] values = FieldCache.DEFAULT.GetStrings(reader, field);
-        ...
-        string aValue = values[docID];
-```
-
-  you can do this instead:
-
-```cs
-        BinaryDocValues values = FieldCache.DEFAULT.GetTerms(reader, field);
-        ...
-        BytesRef term = new BytesRef();
-        values.Get(docID, term);
-        string aValue = term.Utf8ToString();
-```
-
-  Note however that it can be costly to convert to `String`, so it's
-  better to work directly with the `BytesRef`.
+  reference to the bytes for that term.<br/>
+    If you had code like this before:
+    ```cs
+    string[] values = FieldCache.DEFAULT.GetStrings(reader, field);
+    ...
+    string aValue = values[docID];
+    ```
+    you can do this instead:
+    ```cs
+    BinaryDocValues values = FieldCache.DEFAULT.GetTerms(reader, field);
+    ...
+    BytesRef term = new BytesRef();
+    values.Get(docID, term);
+    string aValue = term.Utf8ToString();
+    ```
+    Note however that it can be costly to convert to `String`, so it's better to work directly with the `BytesRef`.
 
 * Similarly, in `FieldCache`, GetStringIndex (returning a `StringIndex`
   instance, with direct arrays `int[]` order and `String[]` lookup) has
@@ -302,34 +249,25 @@ enumeration APIs.  Here are the major changes:
   `GetOrd(int docID)` method to lookup the int order for a document,
   `LookupOrd(int ord, BytesRef result)` to lookup the term from a given
   order, and the sugar method `Get(int docID, BytesRef result)`
-  which internally calls `GetOrd` and then `LookupOrd`.
-
-  If you had code like this before:
-
-```cs
-        StringIndex idx = FieldCache.DEFAULT.GetStringIndex(reader, field);
-        ...
-        int ord = idx.order[docID];
-        String aValue = idx.lookup[ord];
-```
-
-  you can do this instead:
-
-```cs
-        DocTermsIndex idx = FieldCache.DEFAULT.GetTermsIndex(reader, field);
-        ...
-        int ord = idx.GetOrd(docID);
-        BytesRef term = new BytesRef();
-        idx.LookupOrd(ord, term);
-        String aValue = term.Utf8ToString();
-```
-
-  Note however that it can be costly to convert to `String`, so it's
-  better to work directly with the `BytesRef`.
-
-  `DocTermsIndex` also has a `GetTermsEnum()` method, which returns an
-  iterator (`TermsEnum`) over the term values in the index (ie,
-  iterates ord = 0..NumOrd-1).
+  which internally calls `GetOrd` and then `LookupOrd`.<br/>
+    If you had code like this before:
+    ```cs
+    StringIndex idx = FieldCache.DEFAULT.GetStringIndex(reader, field);
+    ...
+    int ord = idx.order[docID];
+    String aValue = idx.lookup[ord];
+    ```
+    you can do this instead:
+    ```cs
+    DocTermsIndex idx = FieldCache.DEFAULT.GetTermsIndex(reader, field);
+    ...
+    int ord = idx.GetOrd(docID);
+    BytesRef term = new BytesRef();
+    idx.LookupOrd(ord, term);
+    string aValue = term.Utf8ToString();
+    ```
+    Note however that it can be costly to convert to `String`, so it's better to work directly with the `BytesRef`.<br/>
+    `DocTermsIndex` also has a `GetTermsEnum()` method, which returns an iterator (`TermsEnum`) over the term values in the index (ie, iterates ord = 0..NumOrd-1).
 
 * `FieldComparator.StringComparatorLocale` has been removed.
   (it was very CPU costly since it does not compare using
@@ -347,17 +285,17 @@ enumeration APIs.  Here are the major changes:
 
 ## [LUCENE-2600](https://issues.apache.org/jira/browse/LUCENE-2600): `IndexReader`s are now read-only
 
-  Instead of `IndexReader.IsDeleted(int n)`, do this:
+Instead of `IndexReader.IsDeleted(int n)`, do this:
 
 ```cs
-      using Lucene.Net.Util;
-      using Lucene.Net.Index;
-
-      IBits liveDocs = MultiFields.GetLiveDocs(indexReader);
-      if (liveDocs != null && !liveDocs.Get(docID))
-      {
-          // document is deleted...
-      }
+using Lucene.Net.Util;
+using Lucene.Net.Index;
+
+IBits liveDocs = MultiFields.GetLiveDocs(indexReader);
+if (liveDocs != null && !liveDocs.Get(docID))
+{
+    // document is deleted...
+}
 ```
     
 ## [LUCENE-2858](https://issues.apache.org/jira/browse/LUCENE-2858), [LUCENE-3733](https://issues.apache.org/jira/browse/LUCENE-3733): `IndexReader` --> `AtomicReader`/`CompositeReader`/`DirectoryReader` refactoring
@@ -561,28 +499,30 @@ add a separate `StoredField` to the document, or you can use
 `TYPE_STORED` for the field:
 
 ```cs
-    Field f = new Field("field", "value", StringField.TYPE_STORED);
+Field f = new Field("field", "value", StringField.TYPE_STORED);
 ```
 
 Alternatively, if an existing type is close to what you want but you
 need to make a few changes, you can copy that type and make changes:
 
 ```cs
-    FieldType bodyType = new FieldType(TextField.TYPE_STORED);
-    bodyType.setStoreTermVectors(true);
+FieldType bodyType = new FieldType(TextField.TYPE_STORED)
+{
+    StoreTermVectors = true
+};
 ```
 
 You can of course also create your own `FieldType` from scratch:
 
 ```cs
-    FieldType t = new FieldType
-    {
-        Indexed = true,
-        Stored = true,
-        OmitNorms = true,
-        IndexOptions = IndexOptions.DOCS_AND_FREQS
-    };
-    t.Freeze();
+FieldType t = new FieldType
+{
+    Indexed = true,
+    Stored = true,
+    OmitNorms = true,
+    IndexOptions = IndexOptions.DOCS_AND_FREQS
+};
+t.Freeze();
 ```
 
 `FieldType` has a `Freeze()` method to prevent further changes.
@@ -594,13 +534,13 @@ enums.
 When migrating from the 3.x API, if you did this before:
 
 ```cs
-    new Field("field", "value", Field.Store.NO, Field.Indexed.NOT_ANALYZED_NO_NORMS)
+new Field("field", "value", Field.Store.NO, Field.Indexed.NOT_ANALYZED_NO_NORMS)
 ```
 
 you can now do this:
 
 ```cs
-    new StringField("field", "value")
+new StringField("field", "value")
 ```
 
 (though note that `StringField` indexes `DOCS_ONLY`).
@@ -608,81 +548,81 @@ you can now do this:
 If instead the value was stored:
 
 ```cs
-    new Field("field", "value", Field.Store.YES, Field.Indexed.NOT_ANALYZED_NO_NORMS)
+new Field("field", "value", Field.Store.YES, Field.Indexed.NOT_ANALYZED_NO_NORMS)
 ```
 
 you can now do this:
 
 ```cs
-    new Field("field", "value", TextField.TYPE_STORED)
+new Field("field", "value", TextField.TYPE_STORED)
 ```
 
 If you didn't omit norms:
 
 ```cs
-    new Field("field", "value", Field.Store.YES, Field.Indexed.NOT_ANALYZED)
+new Field("field", "value", Field.Store.YES, Field.Indexed.NOT_ANALYZED)
 ```
 
 you can now do this:
 
 ```cs
-    FieldType ft = new FieldType(TextField.TYPE_STORED)
-    {
-        OmitNorms = false
-    };
-    new Field("field", "value", ft)
+FieldType ft = new FieldType(TextField.TYPE_STORED)
+{
+    OmitNorms = false
+};
+new Field("field", "value", ft)
 ```
 
 If you did this before (value can be `String` or `TextReader`):
 
 ```cs
-    new Field("field", value, Field.Store.NO, Field.Indexed.ANALYZED)
+new Field("field", value, Field.Store.NO, Field.Indexed.ANALYZED)
 ```
 
 you can now do this:
 
 ```cs
-    new TextField("field", value, Field.Store.NO)
+new TextField("field", value, Field.Store.NO)
 ```
 
 If instead the value was stored:
 
 ```cs
-    new Field("field", value, Field.Store.YES, Field.Indexed.ANALYZED)
+new Field("field", value, Field.Store.YES, Field.Indexed.ANALYZED)
 ```
 
 you can now do this:
 
 ```cs
-    new TextField("field", value, Field.Store.YES)
+new TextField("field", value, Field.Store.YES)
 ```
 
 If in addition you omit norms:
 
 ```cs
-    new Field("field", value, Field.Store.YES, Field.Indexed.ANALYZED_NO_NORMS)
+new Field("field", value, Field.Store.YES, Field.Indexed.ANALYZED_NO_NORMS)
 ```
 
 you can now do this:
 
 ```cs
-    FieldType ft = new FieldType(TextField.TYPE_STORED)
-    {
-        OmitNorms = true
-    };
-    new Field("field", value, ft)
+FieldType ft = new FieldType(TextField.TYPE_STORED)
+{
+    OmitNorms = true
+};
+new Field("field", value, ft)
 ```
 
 If you did this before (bytes is a `byte[]`):
 
 ```cs
-    new Field("field", bytes)
+new Field("field", bytes)
 ```
 
 you can now do this:
 
 ```cs
-    new StoredField("field", bytes)
+new StoredField("field", bytes)
 ```
 
 If you previously used the setter of `Document.Boost`, you must now pre-multiply

[lucenenet] 12/15: docs: Lucene.Net.Spatial: Fixed broken formatting and links (see #284, #300)

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nightowl888 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/lucenenet.git

commit 4a096b94b42de874bdaadb6c31d64debd87a65f7
Author: Shad Storhaug <sh...@shadstorhaug.com>
AuthorDate: Tue Mar 30 19:44:33 2021 +0700

    docs: Lucene.Net.Spatial: Fixed broken formatting and links (see #284, #300)
---
 src/Lucene.Net.Spatial/overview.md | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/src/Lucene.Net.Spatial/overview.md b/src/Lucene.Net.Spatial/overview.md
index 51b1967..1fca597 100644
--- a/src/Lucene.Net.Spatial/overview.md
+++ b/src/Lucene.Net.Spatial/overview.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Spatial
 summary: *content
 ---
@@ -20,12 +20,17 @@ summary: *content
  limitations under the License.
 -->
 
-# The Spatial Module for Apache Lucene
+# The Spatial Module for Apache Lucene.NET
+
+The spatial module is new to Lucene.NET 4, replacing the old "Lucene.Net.Contrib" module that came before it. The principal interface to the module is a <xref:Lucene.Net.Spatial.SpatialStrategy> which encapsulates an approach to indexing and searching based on shapes. Different Strategies have different features and performance profiles, which are documented at each Strategy implementation class level.
 
- The spatial module is new to Lucene 4, replacing the old "contrib" module that came before it. The principle interface to the module is a <xref:Lucene.Net.Spatial.SpatialStrategy> which encapsulates an approach to indexing and searching based on shapes. Different Strategies have different features and performance profiles, which are documented at each Strategy implementation class level. 
+For some sample code showing how to use the API, see SpatialExample.cs in the tests. 
 
- For some sample code showing how to use the API, see SpatialExample.java in the tests. 
+The spatial module makes heavy use of [Spatial4n](https://github.com/NightOwl888/Spatial4n), a .NET port of the ASL licensed [Spatial4j](https://github.com/spatial4j/spatial4j). Spatial4n is a library with these capabilities:
 
- The spatial module uses [Spatial4j](https://github.com/spatial4j/spatial4j) heavily. Spatial4j is an ASL licensed library with these capabilities: * Provides shape implementations, namely point, rectangle, and circle. Both geospatial contexts and plain 2D Euclidean/Cartesian contexts are supported. With an additional dependency, it adds polygon and other geometry shape support via integration with [JTS Topology Suite](http://sourceforge.net/projects/jts-topo-suite/). This includes datel [...]
+* Provides shape implementations, namely point, rectangle, and circle. Both geospatial contexts and plain 2D Euclidean/Cartesian contexts are supported. With an additional dependency, it adds polygon and other geometry shape support via integration with [NetTopologySuite](https://github.com/NetTopologySuite/NetTopologySuite) (often referred to as NTS). This includes dateline wrap support.
+* Shape parsing and serialization, including [Well-Known Text (WKT)](http://en.wikipedia.org/wiki/Well-known_text) (via NTS).
+* Distance and other spatial related math calculations. 
 
- Historical note: The new spatial module was once known as Lucene Spatial Playground (LSP) as an external project. In ~March 2012, LSP split into this new module as part of Lucene and Spatial4j externally. A large chunk of the LSP implementation originated as SOLR-2155 which uses trie/prefix-tree algorithms with a geohash encoding. That approach is implemented in <xref:Lucene.Net.Spatial.Prefix.RecursivePrefixTreeStrategy> today. 
\ No newline at end of file
+> [!NOTE]
+> Historical Fact: The new spatial module was once known as Lucene Spatial Playground (LSP) as an external project. In ~March 2012, LSP split into this new module as part of Lucene and Spatial4j externally. A large chunk of the LSP implementation originated as SOLR-2155 which uses trie/prefix-tree algorithms with a geohash encoding. That approach is implemented in <xref:Lucene.Net.Spatial.Prefix.RecursivePrefixTreeStrategy> today. 
\ No newline at end of file
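
For orientation, here is a small sketch of the Strategy-based flow described above. It is not part of this commit, and the exact Spatial4n member names are assumptions based on its Spatial4j lineage:

```cs
using Lucene.Net.Search;
using Lucene.Net.Spatial.Prefix;
using Lucene.Net.Spatial.Prefix.Tree;
using Lucene.Net.Spatial.Queries;
using Spatial4n.Core.Context;

SpatialContext ctx = SpatialContext.GEO;
var grid = new GeohashPrefixTree(ctx, 11); // 11 levels of geohash precision
var strategy = new RecursivePrefixTreeStrategy(grid, "location");

// Index time: the strategy turns a shape into one or more fields.
foreach (var field in strategy.CreateIndexableFields(ctx.MakePoint(-80.93, 33.77)))
    doc.Add(field);

// Search time: everything intersecting a rectangle.
var args = new SpatialArgs(SpatialOperation.Intersects, ctx.MakeRectangle(-81, -80, 33, 34));
Query query = strategy.MakeQuery(args);
```

`doc` is assumed to be a `Document` being built by the surrounding indexing code; SpatialExample.cs in the tests shows the complete flow.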

[lucenenet] 06/15: docs: Lucene.Net.Facet: Fixed broken formatting and links (see #284, #300)

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nightowl888 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/lucenenet.git

commit 8c404e914996f32c39bb19a6932d1d7a525749fe
Author: Shad Storhaug <sh...@shadstorhaug.com>
AuthorDate: Tue Mar 30 19:41:26 2021 +0700

    docs: Lucene.Net.Facet: Fixed broken formatting and links (see #284, #300)
---
 src/Lucene.Net.Facet/SortedSet/package.md |  4 ++--
 src/Lucene.Net.Facet/Taxonomy/package.md  | 11 ++++-------
 src/Lucene.Net.Facet/package.md           | 12 ++++++++----
 3 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/src/Lucene.Net.Facet/SortedSet/package.md b/src/Lucene.Net.Facet/SortedSet/package.md
index b62ecbb..8569652 100644
--- a/src/Lucene.Net.Facet/SortedSet/package.md
+++ b/src/Lucene.Net.Facet/SortedSet/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Facet.SortedSet
 summary: *content
 ---
@@ -20,4 +20,4 @@ summary: *content
  limitations under the License.
 -->
 
-Provides faceting capabilities over facets that were indexed with <xref:Lucene.Net.Facet.Sortedset.SortedSetDocValuesFacetField>.
+Provides faceting capabilities over facets that were indexed with <xref:Lucene.Net.Facet.SortedSet.SortedSetDocValuesFacetField>.
diff --git a/src/Lucene.Net.Facet/Taxonomy/package.md b/src/Lucene.Net.Facet/Taxonomy/package.md
index 4c91b30..98d5dd2 100644
--- a/src/Lucene.Net.Facet/Taxonomy/package.md
+++ b/src/Lucene.Net.Facet/Taxonomy/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Facet.Taxonomy
 summary: *content
 ---
@@ -22,8 +22,7 @@ summary: *content
 
 # Taxonomy of Categories
 
-	Facets are defined using a hierarchy of categories, known as a _Taxonomy_.
-	For example, the taxonomy of a book store application might have the following structure:
+Facets are defined using a hierarchy of categories, known as a _Taxonomy_. For example, the taxonomy of a book store application might have the following structure:
 
 *   Author
 
@@ -41,7 +40,5 @@ summary: *content
 
     *   2009
 
-	The _Taxonomy_ translates category-paths into interger identifiers (often termed _ordinals_) and vice versa.
-	The category `Author/Mark Twain` adds two nodes to the taxonomy: `Author` and 
-	`Author/Mark Twain`, each is assigned a different ordinal. The taxonomy maintains the invariant that a 
-	node always has an ordinal that is < all its children.
\ No newline at end of file
+The _Taxonomy_ translates category-paths into integer identifiers (often termed _ordinals_) and vice versa.
+The category `Author/Mark Twain` adds two nodes to the taxonomy: `Author` and `Author/Mark Twain`, each assigned a different ordinal. The taxonomy maintains the invariant that a node always has an ordinal that is < all its children.
\ No newline at end of file
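
To ground the description above, here is a hedged sketch (not part of this commit) of adding the `Author/Mark Twain` category at index time; the names assume the Lucene.NET 4.8 facet API:

```cs
using Lucene.Net.Documents;
using Lucene.Net.Facet;
using Lucene.Net.Facet.Taxonomy;
using Lucene.Net.Facet.Taxonomy.Directory;

var config = new FacetsConfig();
using var taxoWriter = new DirectoryTaxonomyWriter(taxoDir); // the separate taxonomy index

var doc = new Document();
doc.Add(new FacetField("Author", "Mark Twain")); // creates ordinals for "Author" and "Author/Mark Twain"
indexWriter.AddDocument(config.Build(taxoWriter, doc));
```

`taxoDir` and `indexWriter` are assumed to be created by the surrounding code.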
diff --git a/src/Lucene.Net.Facet/package.md b/src/Lucene.Net.Facet/package.md
index b2bdfb2..3b54f41 100644
--- a/src/Lucene.Net.Facet/package.md
+++ b/src/Lucene.Net.Facet/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Facet
 summary: *content
 ---
@@ -20,10 +20,14 @@ summary: *content
  limitations under the License.
 -->
 
-# faceted search
+# Lucene.Net.Facet Faceted Search
+
+ This module provides multiple methods for computing facet counts and value aggregations:
 
- This module provides multiple methods for computing facet counts and value aggregations: * Taxonomy-based methods rely on a separate taxonomy index to map hierarchical facet paths to global int ordinals for fast counting at search time; these methods can compute counts ((<xref:Lucene.Net.Facet.Taxonomy.FastTaxonomyFacetCounts>, <xref:Lucene.Net.Facet.Taxonomy.TaxonomyFacetCounts>) aggregate long or double values <xref:Lucene.Net.Facet.Taxonomy.TaxonomyFacetSumIntAssociations>, <xref:Luc [...]
+* Taxonomy-based methods rely on a separate taxonomy index to map hierarchical facet paths to global int ordinals for fast counting at search time; these methods can compute counts (<xref:Lucene.Net.Facet.Taxonomy.FastTaxonomyFacetCounts>, <xref:Lucene.Net.Facet.Taxonomy.TaxonomyFacetCounts>) and aggregate long or double values <xref:Lucene.Net.Facet.Taxonomy.TaxonomyFacetSumInt32Associations>, <xref:Lucene.Net.Facet.Taxonomy.TaxonomyFacetSumSingleAssociations>, <xref:Lucene.Net.Facet.Taxon [...]
+* Sorted-set doc values method does not require a separate taxonomy index, and computes counts based on sorted set doc values fields (<xref:Lucene.Net.Facet.SortedSet.SortedSetDocValuesFacetCounts>). Add <xref:Lucene.Net.Facet.SortedSet.SortedSetDocValuesFacetField> to your documents at index time to use sorted set facet counts.
+* Range faceting <xref:Lucene.Net.Facet.Range.Int64RangeFacetCounts>, <xref:Lucene.Net.Facet.Range.DoubleRangeFacetCounts> compute counts for a dynamic numeric range from a provided <xref:Lucene.Net.Queries.Function.ValueSource> (previously indexed numeric field, or a dynamic expression such as distance). 
 
  At search time you first run your search, but pass a <xref:Lucene.Net.Facet.FacetsCollector> to gather all hits (and optionally, scores for each hit). Then, instantiate whichever facet methods you'd like to use to compute aggregates. Finally, all methods implement a common <xref:Lucene.Net.Facet.Facets> base API that you use to obtain specific facet counts. 
 
- The various [#search](xref:Lucene.Net.Facet.FacetsCollector) utility methods are useful for doing an "ordinary" search (sorting by score, or by a specified Sort) but also collecting into a <xref:Lucene.Net.Facet.FacetsCollector> for subsequent faceting. 
\ No newline at end of file
+ The various [FacetsCollector.Search()](xref:Lucene.Net.Facet.FacetsCollector#Lucene_Net_Facet_FacetsCollector_Search_Lucene_Net_Search_IndexSearcher_Lucene_Net_Search_Query_Lucene_Net_Search_Filter_System_Int32_Lucene_Net_Search_ICollector_) utility methods are useful for doing an "ordinary" search (sorting by score, or by a specified Sort) but also collecting into a <xref:Lucene.Net.Facet.FacetsCollector> for subsequent faceting. 
\ No newline at end of file
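
And the search-time side as a hedged sketch (not part of this commit), pairing `FacetsCollector.Search()` with taxonomy-based counts; the names assume the Lucene.NET 4.8 facet API:

```cs
using Lucene.Net.Facet;
using Lucene.Net.Facet.Taxonomy;
using Lucene.Net.Facet.Taxonomy.Directory;
using Lucene.Net.Search;

var fc = new FacetsCollector();
FacetsCollector.Search(searcher, new MatchAllDocsQuery(), 10, fc); // run the search, gather hits

using var taxoReader = new DirectoryTaxonomyReader(taxoDir);
Facets facets = new FastTaxonomyFacetCounts(taxoReader, config, fc);
FacetResult authors = facets.GetTopChildren(10, "Author"); // top authors with counts
```

`searcher`, `taxoDir`, and `config` (the same `FacetsConfig` used at index time) are assumed from the surrounding code.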

[lucenenet] 09/15: docs: Lucene.Net.Join: Fixed broken formatting and links (see #284, #300)

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nightowl888 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/lucenenet.git

commit 2e415cdbea2ba81d3dcc8fdb4b804b71c5ec3a69
Author: Shad Storhaug <sh...@shadstorhaug.com>
AuthorDate: Tue Mar 30 19:43:16 2021 +0700

    docs: Lucene.Net.Join: Fixed broken formatting and links (see #284, #300)
---
 src/Lucene.Net.Join/package.md | 41 ++++++++++++++++++++++-------------------
 1 file changed, 22 insertions(+), 19 deletions(-)

diff --git a/src/Lucene.Net.Join/package.md b/src/Lucene.Net.Join/package.md
index 689d2c6..5c194b4 100644
--- a/src/Lucene.Net.Join/package.md
+++ b/src/Lucene.Net.Join/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Join
 summary: *content
 ---
@@ -24,21 +24,21 @@ This modules support index-time and query-time joins.
 
 ## Index-time joins
 
-The index-time joining support joins while searching, where joined documents are indexed as a single document block using [IndexWriter.addDocuments](xref:Lucene.Net.Index.IndexWriter#methods). This is useful for any normalized content (XML documents or database tables). In database terms, all rows for all joined tables matching a single row of the primary table must be indexed as a single document block, with the parent document being last in the group.
+The index-time joining supports joins while searching, where joined documents are indexed as a single document block using [IndexWriter.AddDocuments()](xref:Lucene.Net.Index.IndexWriter#Lucene_Net_Index_IndexWriter_AddDocuments_System_Collections_Generic_IEnumerable_System_Collections_Generic_IEnumerable_Lucene_Net_Index_IIndexableField___). This is useful for any normalized content (XML documents or database tables). In database terms, all rows for all joined tables matching a single row [...]
 
 When you index in this way, the documents in your index are divided into parent documents (the last document of each block) and child documents (all others). You provide a <xref:Lucene.Net.Search.Filter> that identifies the parent documents, as Lucene does not currently record any information about doc blocks.
 
 At search time, use <xref:Lucene.Net.Join.ToParentBlockJoinQuery> to remap/join matches from any child <xref:Lucene.Net.Search.Query> (ie, a query that matches only child documents) up to the parent document space. The resulting query can then be used as a clause in any query that matches parent.
 
-If you only care about the parent documents matching the query, you can use any collector to collect the parent hits, but if you'd also like to see which child documents match for each parent document, use the <xref:Lucene.Net.Join.ToParentBlockJoinCollector> to collect the hits. Once the search is done, you retrieve a <xref:Lucene.Net.Grouping.TopGroups> instance from the [ToParentBlockJoinCollector.getTopGroups](xref:Lucene.Net.Join.ToParentBlockJoinCollector#methods) method.
+If you only care about the parent documents matching the query, you can use any collector to collect the parent hits, but if you'd also like to see which child documents match for each parent document, use the <xref:Lucene.Net.Join.ToParentBlockJoinCollector> to collect the hits. Once the search is done, you retrieve a <xref:Lucene.Net.Search.Grouping.ITopGroups`1> instance from the [ToParentBlockJoinCollector.GetTopGroups()](xref:Lucene.Net.Join.ToParentBlockJoinCollector#Lucene_Net_Joi [...]
 
 To map/join in the opposite direction, use <xref:Lucene.Net.Join.ToChildBlockJoinQuery>.  This wraps
-  any query matching parent documents, creating the joined query
-  matching only child documents.
+any query matching parent documents, creating the joined query
+matching only child documents.
 
 ## Query-time joins
 
- The query time joining is index term based and implemented as two pass search. The first pass collects all the terms from a fromField that match the fromQuery. The second pass returns all documents that have matching terms in a toField to the terms collected in the first pass. 
+The query-time joining is index-term based and implemented as a two-pass search. The first pass collects all the terms from a `fromField` that match the `fromQuery`. The second pass returns all documents that have terms in a `toField` matching the terms collected in the first pass.
 
 Query time joining has the following input:
 
@@ -46,22 +46,25 @@ Query time joining has the following input:
 
 *   `fromQuery`:  The query executed to collect the from terms. This is usually the user specified query.
 
-*   `multipleValuesPerDocument`:  Whether the fromField contains more than one value per document
+*   `multipleValuesPerDocument`:  Whether the `fromField` contains more than one value per document
 
 *   `scoreMode`:  Defines how scores are translated to the other join side. If you don't care about scoring
-  use [#None](xref:Lucene.Net.Join.ScoreMode) mode. This will disable scoring and is therefore more
+  use [ScoreMode.None](xref:Lucene.Net.Join.ScoreMode#Lucene_Net_Join_ScoreMode_None) mode. This will disable scoring and is therefore more
   efficient (requires less memory and is faster).
 
 *   `toField`: The to field to join to
 
- Basically the query-time joining is accessible from one static method. The user of this method supplies the method with the described input and a `IndexSearcher` where the from terms need to be collected from. The returned query can be executed with the same `IndexSearcher`, but also with another `IndexSearcher`. Example usage of the [JoinUtil.createJoinQuery](xref:Lucene.Net.Join.JoinUtil#methods) : 
-
-      String fromField = "from"; // Name of the from field
-      boolean multipleValuesPerDocument = false; // Set only yo true in the case when your fromField has multiple values per document in your index
-      String toField = "to"; // Name of the to field
-      ScoreMode scoreMode = ScoreMode.Max // Defines how the scores are translated into the other side of the join.
-      Query fromQuery = new TermQuery(new Term("content", searchTerm)); // Query executed to collect from values to join to the to values
-    
-  Query joinQuery = JoinUtil.createJoinQuery(fromField, multipleValuesPerDocument, toField, fromQuery, fromSearcher, scoreMode);
-      TopDocs topDocs = toSearcher.search(joinQuery, 10); // Note: toSearcher can be the same as the fromSearcher
-      // Render topDocs...
\ No newline at end of file
+Basically, the query-time joining is accessible from one static method. The caller supplies this method with the input described above and an `IndexSearcher` from which the from terms are to be collected. The returned query can be executed with the same `IndexSearcher`, but also with another `IndexSearcher`. Example usage of the [JoinUtil.CreateJoinQuery()](xref:Lucene.Net.Join.JoinUtil#Lucene_Net_Join_JoinUtil_CreateJoinQuery_System_String_System_Boolean_System_String_Lucene_Net_Se [...]
+
+```cs
+string fromField = "from"; // Name of the from field
+bool multipleValuesPerDocument = false; // Set to true only when your fromField has multiple values per document in your index
+string toField = "to"; // Name of the to field
+ScoreMode scoreMode = ScoreMode.Max; // Defines how the scores are translated into the other side of the join.
+Query fromQuery = new TermQuery(new Term("content", searchTerm)); // Query executed to collect from values to join to the to values
+
+Query joinQuery = JoinUtil.CreateJoinQuery(fromField, multipleValuesPerDocument, toField, fromQuery, fromSearcher, scoreMode);
+TopDocs topDocs = toSearcher.Search(joinQuery, 10); // Note: toSearcher can be the same as the fromSearcher
+// Render topDocs...
+```
\ No newline at end of file

[lucenenet] 13/15: docs: Lucene.Net.TestFramework: Fixed broken formatting and links (see #284, #300)

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nightowl888 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/lucenenet.git

commit cc551b0a1c94345896a6d2fdbbf2cdc4580f8fa2
Author: Shad Storhaug <sh...@shadstorhaug.com>
AuthorDate: Tue Mar 30 19:45:10 2021 +0700

    docs: Lucene.Net.TestFramework: Fixed broken formatting and links (see #284, #300)
---
 src/Lucene.Net.TestFramework/Analysis/package.md            | 8 ++++++--
 src/Lucene.Net.TestFramework/Codecs/Lucene41Ords/package.md | 4 ++--
 src/Lucene.Net.TestFramework/Index/package.md               | 7 +++++--
 src/Lucene.Net.TestFramework/Search/package.md              | 7 +++++--
 src/Lucene.Net.TestFramework/Util/package.md                | 4 ++--
 5 files changed, 20 insertions(+), 10 deletions(-)

diff --git a/src/Lucene.Net.TestFramework/Analysis/package.md b/src/Lucene.Net.TestFramework/Analysis/package.md
index e1bc0b4..ae6abce 100644
--- a/src/Lucene.Net.TestFramework/Analysis/package.md
+++ b/src/Lucene.Net.TestFramework/Analysis/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Analysis
 summary: *content
 ---
@@ -22,4 +22,8 @@ summary: *content
 
 Support for testing analysis components.
 
- The main classes of interest are: * <xref:Lucene.Net.Analysis.BaseTokenStreamTestCase>: Highly recommended to use its helper methods, (especially in conjunction with <xref:Lucene.Net.Analysis.MockAnalyzer> or <xref:Lucene.Net.Analysis.MockTokenizer>), as it contains many assertions and checks to catch bugs. * <xref:Lucene.Net.Analysis.MockTokenizer>: Tokenizer for testing. Tokenizer that serves as a replacement for WHITESPACE, SIMPLE, and KEYWORD tokenizers. If you are writing a compone [...]
\ No newline at end of file
+The main classes of interest are:
+
+* <xref:Lucene.Net.Analysis.BaseTokenStreamTestCase>: Highly recommended to use its helper methods, (especially in conjunction with <xref:Lucene.Net.Analysis.MockAnalyzer> or <xref:Lucene.Net.Analysis.MockTokenizer>), as it contains many assertions and checks to catch bugs.
+* <xref:Lucene.Net.Analysis.MockTokenizer>: Tokenizer for testing. Tokenizer that serves as a replacement for WHITESPACE, SIMPLE, and KEYWORD tokenizers. If you are writing a component such as a TokenFilter, it's a great idea to test it by wrapping this tokenizer instead for extra checks.
+* <xref:Lucene.Net.Analysis.MockAnalyzer>: Analyzer for testing. Analyzer that uses MockTokenizer for additional verification. If you are testing a custom component such as a query parser or analyzer-wrapper that consumes analysis streams, it's a great idea to test it with this analyzer instead.
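+
+For example, a minimal sketch of an analysis test (the test class name, input text, and expected tokens are hypothetical):
+
+```cs
+public class TestMyAnalysis : BaseTokenStreamTestCase
+{
+    [Test]
+    public void TestSimpleTokens()
+    {
+        // MockAnalyzer adds extra consumer checks on top of basic tokenization.
+        Analyzer analyzer = new MockAnalyzer(Random);
+        // Asserts the produced tokens (and related attributes) match expectations.
+        AssertAnalyzesTo(analyzer, "foo bar", new[] { "foo", "bar" });
+    }
+}
+```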
\ No newline at end of file
diff --git a/src/Lucene.Net.TestFramework/Codecs/Lucene41Ords/package.md b/src/Lucene.Net.TestFramework/Codecs/Lucene41Ords/package.md
index f6b2e22..34fdcc5 100644
--- a/src/Lucene.Net.TestFramework/Codecs/Lucene41Ords/package.md
+++ b/src/Lucene.Net.TestFramework/Codecs/Lucene41Ords/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Codecs.Lucene41Ords
 summary: *content
 ---
@@ -20,4 +20,4 @@ summary: *content
  limitations under the License.
 -->
 
-Codec for testing that supports [#ord()](xref:Lucene.Net.Index.TermsEnum)
\ No newline at end of file
+Codec for testing that supports [TermsEnum.Ord](xref:Lucene.Net.Index.TermsEnum#Lucene_Net_Index_TermsEnum_Ord)
\ No newline at end of file
diff --git a/src/Lucene.Net.TestFramework/Index/package.md b/src/Lucene.Net.TestFramework/Index/package.md
index 7af00a8..23fc3ee 100644
--- a/src/Lucene.Net.TestFramework/Index/package.md
+++ b/src/Lucene.Net.TestFramework/Index/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Index
 summary: *content
 ---
@@ -22,4 +22,7 @@ summary: *content
 
 Support for testing of indexes. 
 
- The primary classes are: * <xref:Lucene.Net.Index.RandomIndexWriter>: Randomizes the indexing experience. * <xref:Lucene.Net.Index.MockRandomMergePolicy>: MergePolicy that makes random decisions. 
\ No newline at end of file
+The primary classes are:
+
+* <xref:Lucene.Net.Index.RandomIndexWriter>: Randomizes the indexing experience.
+* <xref:Lucene.Net.Index.MockRandomMergePolicy>: MergePolicy that makes random decisions. 
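+
+For example, a minimal sketch using `RandomIndexWriter` inside a test derived from <xref:Lucene.Net.Util.LuceneTestCase> (the field name and content are hypothetical):
+
+```cs
+Directory dir = NewDirectory();
+RandomIndexWriter writer = new RandomIndexWriter(Random, dir);
+Document doc = new Document();
+doc.Add(NewTextField("content", "some random text", Field.Store.NO));
+writer.AddDocument(doc);
+IndexReader reader = writer.GetReader(); // flushes/commits at random behind the scenes
+// ... run assertions against the reader ...
+reader.Dispose();
+writer.Dispose();
+dir.Dispose();
+```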
\ No newline at end of file
diff --git a/src/Lucene.Net.TestFramework/Search/package.md b/src/Lucene.Net.TestFramework/Search/package.md
index f1e16bd..5052f93 100644
--- a/src/Lucene.Net.TestFramework/Search/package.md
+++ b/src/Lucene.Net.TestFramework/Search/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Search
 summary: *content
 ---
@@ -22,4 +22,7 @@ summary: *content
 
 Support for testing search components. 
 
- The primary classes are: * <xref:Lucene.Net.Search.QueryUtils>: Useful methods for testing Query classes. * <xref:Lucene.Net.Search.ShardSearchingTestBase>: Base class for simulating distributed search. 
\ No newline at end of file
+The primary classes are:
+
+* <xref:Lucene.Net.Search.QueryUtils>: Useful methods for testing Query classes.
+* <xref:Lucene.Net.Search.ShardSearchingTestBase>: Base class for simulating distributed search. 
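+
+For example, a minimal sketch using `QueryUtils` (the query and the `searcher` instance are hypothetical):
+
+```cs
+Query q = new TermQuery(new Term("content", "lucene"));
+QueryUtils.Check(q);                   // basic sanity checks, e.g. Equals/GetHashCode contracts
+QueryUtils.Check(Random, q, searcher); // deeper checks that run the query against a searcher
+```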
\ No newline at end of file
diff --git a/src/Lucene.Net.TestFramework/Util/package.md b/src/Lucene.Net.TestFramework/Util/package.md
index 10c4151..97ba256 100644
--- a/src/Lucene.Net.TestFramework/Util/package.md
+++ b/src/Lucene.Net.TestFramework/Util/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Util
 summary: *content
 ---
@@ -21,4 +21,4 @@ summary: *content
 -->
 
 General test support.  The primary class is <xref:Lucene.Net.Util.LuceneTestCase>,
-which extends JUnit with additional functionality.
\ No newline at end of file
+which extends NUnit with additional functionality.
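+
+For example, a minimal sketch of a test derived from it (the class name and assertion are hypothetical):
+
+```cs
+public class TestSomething : LuceneTestCase
+{
+    [Test]
+    public void TestWithRandomizedData()
+    {
+        // The inherited Random property provides repeatable randomized test data.
+        int size = Random.Next(1, 100);
+        Assert.IsTrue(size >= 1 && size < 100);
+    }
+}
+```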
\ No newline at end of file

[lucenenet] 11/15: docs: Lucene.Net.QueryParser: Fixed broken formatting and links (see #284, #300)

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nightowl888 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/lucenenet.git

commit d667a619cebccc7c9c6df271dea690ea07b2ba33
Author: Shad Storhaug <sh...@shadstorhaug.com>
AuthorDate: Tue Mar 30 19:44:11 2021 +0700

    docs: Lucene.Net.QueryParser: Fixed broken formatting and links (see #284, #300)
---
 src/Lucene.Net.QueryParser/Surround/Parser/package.md |  7 +++----
 src/Lucene.Net.QueryParser/Surround/Query/package.md  | 11 ++++-------
 2 files changed, 7 insertions(+), 11 deletions(-)

diff --git a/src/Lucene.Net.QueryParser/Surround/Parser/package.md b/src/Lucene.Net.QueryParser/Surround/Parser/package.md
index 6a343ba..def4cbf 100644
--- a/src/Lucene.Net.QueryParser/Surround/Parser/package.md
+++ b/src/Lucene.Net.QueryParser/Surround/Parser/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.QueryParsers.Surround.Parser
 summary: *content
 ---
@@ -20,7 +20,6 @@ summary: *content
  limitations under the License.
 -->
 
-    This package contains the QueryParser.jj source file for the Surround parser.
+This package contains the `QueryParser.jj` source file for the Surround parser.
 
-    Parsing the text of a query results in a SrndQuery in the
-    org.apache.lucene.queryparser.surround.query package.
\ No newline at end of file
+Parsing the text of a query results in a SrndQuery in the <xref:Lucene.Net.QueryParsers.Surround.Query> namespace.
\ No newline at end of file
diff --git a/src/Lucene.Net.QueryParser/Surround/Query/package.md b/src/Lucene.Net.QueryParser/Surround/Query/package.md
index 8b13033..348d1c0 100644
--- a/src/Lucene.Net.QueryParser/Surround/Query/package.md
+++ b/src/Lucene.Net.QueryParser/Surround/Query/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.QueryParsers.Surround.Query
 summary: *content
 ---
@@ -20,11 +20,8 @@ summary: *content
  limitations under the License.
 -->
 
-    This package contains SrndQuery and its subclasses.
+This package contains <xref:Lucene.Net.QueryParsers.Surround.Query.SrndQuery> and its subclasses.
 
-    The parser in the org.apache.lucene.queryparser.surround.parser package
-    normally generates a SrndQuery.
+The parser in the <xref:Lucene.Net.QueryParsers.Surround.Parser> namespace normally generates a SrndQuery.
 
-    For searching an org.apache.lucene.search.Query is provided by
-    the SrndQuery.makeLuceneQueryField method.
-    For this, TermQuery, BooleanQuery and SpanQuery are used from Lucene.
\ No newline at end of file
+For searching, a <xref:Lucene.Net.Search.Query> is provided by the [SrndQuery.MakeLuceneQueryField()](xref:Lucene.Net.QueryParsers.Surround.Query.SrndQuery#Lucene_Net_QueryParsers_Surround_Query_SrndQuery_MakeLuceneQueryField_System_String_Lucene_Net_QueryParsers_Surround_Query_BasicQueryFactory_) method. For this, TermQuery, BooleanQuery and SpanQuery are used from Lucene.
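+
+For example, a minimal sketch (the surround query text, field name, and `searcher` instance are hypothetical):
+
+```cs
+// Parse surround syntax into a SrndQuery, then convert it to a searchable Query.
+SrndQuery srndQuery = Lucene.Net.QueryParsers.Surround.Parser.QueryParser.Parse("wild* AND parser");
+Query query = srndQuery.MakeLuceneQueryField("content", new BasicQueryFactory());
+TopDocs hits = searcher.Search(query, 10);
+```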
\ No newline at end of file

[lucenenet] 07/15: docs: Lucene.Net.Grouping: Fixed broken formatting and links (see #284, #300)

Posted by ni...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

nightowl888 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/lucenenet.git

commit 1132c37321cd7a48d1a386de602e7ebfa2040d62
Author: Shad Storhaug <sh...@shadstorhaug.com>
AuthorDate: Tue Mar 30 19:42:11 2021 +0700

    docs: Lucene.Net.Grouping: Fixed broken formatting and links (see #284, #300)
---
 src/Lucene.Net.Grouping/Function/package.md |   2 +-
 src/Lucene.Net.Grouping/package.md          | 132 +++++++++++++++-------------
 2 files changed, 72 insertions(+), 62 deletions(-)

diff --git a/src/Lucene.Net.Grouping/Function/package.md b/src/Lucene.Net.Grouping/Function/package.md
index 73ff1a5..76567d8 100644
--- a/src/Lucene.Net.Grouping/Function/package.md
+++ b/src/Lucene.Net.Grouping/Function/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Search.Grouping.Function
 summary: *content
 ---
diff --git a/src/Lucene.Net.Grouping/package.md b/src/Lucene.Net.Grouping/package.md
index 8b3118c..1c563a3 100644
--- a/src/Lucene.Net.Grouping/package.md
+++ b/src/Lucene.Net.Grouping/package.md
@@ -1,4 +1,4 @@
----
+---
 uid: Lucene.Net.Grouping
 title: Lucene.Net.Grouping
 summary: *content
@@ -21,7 +21,7 @@ summary: *content
  limitations under the License.
 -->
 
-This module enables search result grouping with Lucene, where hits with the same value in the specified single-valued group field are grouped together. For example, if you group by the `author` field, then all documents with the same value in the `author` field fall into a single group.
+This module enables search result grouping with Lucene.NET, where hits with the same value in the specified single-valued group field are grouped together. For example, if you group by the `author` field, then all documents with the same value in the `author` field fall into a single group.
 
 Grouping requires a number of inputs:
 
@@ -56,7 +56,7 @@ Grouping requires a number of inputs:
 *   `withinGroupOffset`: which "slice" of top
       documents you want to retrieve from each group.
 
-The implementation is two-pass: the first pass (<xref:Lucene.Net.Grouping.Term.TermFirstPassGroupingCollector>) gathers the top groups, and the second pass (<xref:Lucene.Net.Grouping.Term.TermSecondPassGroupingCollector>) gathers documents within those groups. If the search is costly to run you may want to use the <xref:Lucene.Net.Search.CachingCollector> class, which caches hits and can (quickly) replay them for the second pass. This way you only run the query once, but you pay a RAM co [...]
+The implementation is two-pass: the first pass (<xref:Lucene.Net.Search.Grouping.Terms.TermFirstPassGroupingCollector>) gathers the top groups, and the second pass (<xref:Lucene.Net.Search.Grouping.Terms.TermSecondPassGroupingCollector>) gathers documents within those groups. If the search is costly to run you may want to use the <xref:Lucene.Net.Search.CachingCollector> class, which caches hits and can (quickly) replay them for the second pass. This way you only run the query once, but  [...]
 
 This module abstracts away what defines a group and how it is collected. All grouping collectors are abstract and currently have term-based implementations. One can implement collectors that, for example, group on multiple fields.
 
@@ -75,71 +75,79 @@ Known limitations:
 
 Typical usage for the generic two-pass grouping search looks like this using the grouping convenience utility (optionally using caching for the second pass search):
 
-      GroupingSearch groupingSearch = new GroupingSearch("author");
-      groupingSearch.setGroupSort(groupSort);
-      groupingSearch.setFillSortFields(fillFields);
-    
-  if (useCache) {
-        // Sets cache in MB
-        groupingSearch.setCachingInMB(4.0, true);
-      }
-    
-  if (requiredTotalGroupCount) {
-        groupingSearch.setAllGroups(true);
-      }
-    
-  TermQuery query = new TermQuery(new Term("content", searchTerm));
-      TopGroups<BytesRef> result = groupingSearch.search(indexSearcher, query, groupOffset, groupLimit);
-    
-  // Render groupsResult...
-      if (requiredTotalGroupCount) {
-        int totalGroupCount = result.totalGroupCount;
-      }
+```cs
+GroupingSearch groupingSearch = new GroupingSearch("author");
+groupingSearch.SetGroupSort(groupSort);
+groupingSearch.SetFillSortFields(fillFields);
+
+if (useCache)
+{
+    // Sets cache in MB
+    groupingSearch.SetCachingInMB(maxCacheRAMMB: 4.0, cacheScores: true);
+}
+
+if (requiredTotalGroupCount)
+{
+    groupingSearch.SetAllGroups(true);
+}
+
+TermQuery query = new TermQuery(new Term("content", searchTerm));
+TopGroups<BytesRef> result = groupingSearch.Search(indexSearcher, query, groupOffset, groupLimit);
+
+// Render groupsResult...
+if (requiredTotalGroupCount)
+{
+    // If null, the value is not computed
+    int? totalGroupCount = result.TotalGroupCount;
+}
+```
 
 To use the single-pass `BlockGroupingCollector`, first, at indexing time, you must ensure all docs in each group are added as a block, and you have some way to find the last document of each group. One simple way to do this is to add a marker binary field:
 
-      // Create Documents from your source:
-      List<Document> oneGroup = ...;
+```cs
+// Create Documents from your source:
+List<Document> oneGroup = ...;
 
-      Field groupEndField = new Field("groupEnd", "x", Field.Store.NO, Field.Index.NOT_ANALYZED);
-      groupEndField.setIndexOptions(IndexOptions.DOCS_ONLY);
-      groupEndField.setOmitNorms(true);
-      oneGroup.get(oneGroup.size()-1).add(groupEndField);
-    
-  // You can also use writer.updateDocuments(); just be sure you
-      // replace an entire previous doc block with this new one.  For
-      // example, each group could have a "groupID" field, with the same
-      // value for all docs in this group:
-      writer.addDocuments(oneGroup);
+Field groupEndField = new StringField("groupEnd", "x", Field.Store.NO);
+oneGroup[oneGroup.Count - 1].Add(groupEndField);
 
+// You can also use writer.UpdateDocuments(); just be sure you
+// replace an entire previous doc block with this new one.  For
+// example, each group could have a "groupID" field, with the same
+// value for all docs in this group:
+writer.AddDocuments(oneGroup);
+```
 
 Then, at search time, do this up front:
 
-      // Set this once in your app & save away for reusing across all queries:
-      Filter groupEndDocs = new CachingWrapperFilter(new QueryWrapperFilter(new TermQuery(new Term("groupEnd", "x"))));
-
+```cs
+// Set this once in your app & save away for reusing across all queries:
+Filter groupEndDocs = new CachingWrapperFilter(new QueryWrapperFilter(new TermQuery(new Term("groupEnd", "x"))));
+```
 
 Finally, do this per search:
 
-      // Per search:
-      BlockGroupingCollector c = new BlockGroupingCollector(groupSort, groupOffset+topNGroups, needsScores, groupEndDocs);
-      s.search(new TermQuery(new Term("content", searchTerm)), c);
-      TopGroups groupsResult = c.getTopGroups(withinGroupSort, groupOffset, docOffset, docOffset+docsPerGroup, fillFields);
-    
-  // Render groupsResult...
+```cs
+// Per search:
+BlockGroupingCollector c = new BlockGroupingCollector(groupSort, groupOffset + topNGroups, needsScores, groupEndDocs);
+s.Search(new TermQuery(new Term("content", searchTerm)), c);
+TopGroups<object> groupsResult = c.GetTopGroups(withinGroupSort, groupOffset, docOffset, docOffset + docsPerGroup, fillFields);
 
+// Render groupsResult...
+```
 
 Or alternatively use the `GroupingSearch` convenience utility:
 
-      // Per search:
-      GroupingSearch groupingSearch = new GroupingSearch(groupEndDocs);
-      groupingSearch.setGroupSort(groupSort);
-      groupingSearch.setIncludeScores(needsScores);
-      TermQuery query = new TermQuery(new Term("content", searchTerm));
-      TopGroups groupsResult = groupingSearch.search(indexSearcher, query, groupOffset, groupLimit);
-    
-  // Render groupsResult...
+```cs
+// Per search:
+GroupingSearch groupingSearch = new GroupingSearch(groupEndDocs);
+groupingSearch.SetGroupSort(groupSort);
+groupingSearch.SetIncludeScores(needsScores);
+TermQuery query = new TermQuery(new Term("content", searchTerm));
+TopGroups<object> groupsResult = groupingSearch.Search(indexSearcher, query, groupOffset, groupLimit);
 
+// Render groupsResult...
+```
 
 Note that the `groupValue` of each `GroupDocs`
 will be `null`, so if you need to present this value you'll
@@ -148,12 +156,14 @@ fields, `FieldCache`, etc.).
 
 Another collector is the `TermAllGroupHeadsCollector` that can be used to retrieve the most relevant document of each group, also known as the group heads. This can be useful when one wants to compute group-based facets/statistics on the complete query result. The collector can be executed during the first or second phase. This collector can also be used with the `GroupingSearch` convenience utility, but if one only wants to compute the most relevant documents per group it [...]
 
-      AbstractAllGroupHeadsCollector c = TermAllGroupHeadsCollector.create(groupField, sortWithinGroup);
-      s.search(new TermQuery(new Term("content", searchTerm)), c);
-      // Return all group heads as int array
-      int[] groupHeadsArray = c.retrieveGroupHeads()
-      // Return all group heads as FixedBitSet.
-      int maxDoc = s.maxDoc();
-      FixedBitSet groupHeadsBitSet = c.retrieveGroupHeads(maxDoc)
-
-For each of the above collector types there is also a variant that works with `ValueSource` instead of of fields. Concretely this means that these variants can work with functions. These variants are slower than there term based counter parts. These implementations are located in the `org.apache.lucene.search.grouping.function` package, but can also be used with the `GroupingSearch` convenience utility 
\ No newline at end of file
+```cs
+AbstractAllGroupHeadsCollector c = TermAllGroupHeadsCollector.Create(groupField, sortWithinGroup);
+s.Search(new TermQuery(new Term("content", searchTerm)), c);
+// Return all group heads as int array
+int[] groupHeadsArray = c.RetrieveGroupHeads();
+// Return all group heads as FixedBitSet.
+int maxDoc = s.MaxDoc;
+FixedBitSet groupHeadsBitSet = c.RetrieveGroupHeads(maxDoc);
+```
+
+For each of the above collector types there is also a variant that works with `ValueSource` instead of fields. Concretely, this means that these variants can work with functions. These variants are slower than their term-based counterparts. These implementations are located in the `Lucene.Net.Search.Grouping.Function` namespace, but can also be used with the `GroupingSearch` convenience utility.
\ No newline at end of file