Posted to java-commits@lucene.apache.org by ma...@apache.org on 2009/08/25 23:50:15 UTC
svn commit: r807819 - /lucene/java/trunk/src/java/org/apache/lucene/analysis/package.html
Author: markrmiller
Date: Tue Aug 25 21:50:15 2009
New Revision: 807819
URL: http://svn.apache.org/viewvc?rev=807819&view=rev
Log:
a few more improvements to the package level javadoc
Modified:
lucene/java/trunk/src/java/org/apache/lucene/analysis/package.html
Modified: lucene/java/trunk/src/java/org/apache/lucene/analysis/package.html
URL: http://svn.apache.org/viewvc/lucene/java/trunk/src/java/org/apache/lucene/analysis/package.html?rev=807819&r1=807818&r2=807819&view=diff
==============================================================================
--- lucene/java/trunk/src/java/org/apache/lucene/analysis/package.html (original)
+++ lucene/java/trunk/src/java/org/apache/lucene/analysis/package.html Tue Aug 25 21:50:15 2009
@@ -72,10 +72,10 @@
up incoming text into tokens. In most cases, an Analyzer will use a Tokenizer as the first step in
the analysis process.</li>
<li>{@link org.apache.lucene.analysis.TokenFilter} – A TokenFilter is also a {@link org.apache.lucene.analysis.TokenStream} and is responsible
- for modifying tokenss that have been created by the Tokenizer. Common modifications performed by a
+ for modifying tokens that have been created by the Tokenizer. Common modifications performed by a
TokenFilter are: deletion, stemming, synonym injection, and down casing. Not all Analyzers require TokenFilters</li>
</ul>
- <b>Since Lucene 2.9 the TokenStream API has changed. Please see section "New TokenStream API" below for details.</b>
+ <b>Lucene 2.9 introduces a new TokenStream API. Please see the section "New TokenStream API" below for more details.</b>
</p>
<h2>Hints, Tips and Traps</h2>
<p>
@@ -358,6 +358,9 @@
while (stream.incrementToken()) {
System.out.println(termAtt.term());
}
+
+    stream.end();
+ stream.close();
}
}
</pre>
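For context, the loop in the patched javadoc follows the Lucene 2.9 consumer contract: call incrementToken() until it returns false, then end(), then close(). The sketch below illustrates that lifecycle with a simplified stand-in class — WhitespaceStream is a hypothetical example, not the real Lucene TokenStream API.

```java
import java.util.ArrayList;
import java.util.List;

public class TokenStreamSketch {
    // Hypothetical whitespace tokenizer illustrating the
    // incrementToken()/end()/close() lifecycle (not Lucene's actual class).
    static class WhitespaceStream {
        private final String[] parts;
        private int pos = 0;
        private String term;

        WhitespaceStream(String text) {
            this.parts = text.trim().split("\\s+");
        }

        // Advance to the next token; false signals end of stream.
        boolean incrementToken() {
            if (pos >= parts.length) {
                return false;
            }
            term = parts[pos++];
            return true;
        }

        String term() {
            return term;
        }

        // In Lucene, end() lets the stream record end-of-stream state
        // (e.g. the final offset); a no-op in this sketch.
        void end() {
        }

        // close() releases underlying resources (e.g. a Reader);
        // a no-op in this sketch.
        void close() {
        }
    }

    static List<String> tokenize(String text) {
        WhitespaceStream stream = new WhitespaceStream(text);
        List<String> terms = new ArrayList<String>();
        while (stream.incrementToken()) {
            terms.add(stream.term());
        }
        stream.end();   // signal end-of-stream before releasing resources
        stream.close();
        return terms;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("new token stream api"));
    }
}
```

The key point the commit adds is the end()/close() pair after the loop: consumers that skip them can leak resources or lose end-of-stream state in filters.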