Posted to commits@lucy.apache.org by bu...@apache.org on 2011/08/24 02:26:09 UTC

[lucy-commits] svn commit: r794766 [1/4] - in /websites/staging/lucy/trunk/content/lucy/docs/perl: ./ Lucy/ Lucy/Analysis/ Lucy/Docs/ Lucy/Docs/Cookbook/ Lucy/Docs/Tutorial/ Lucy/Document/ Lucy/Highlight/ Lucy/Index/ Lucy/Object/ Lucy/Plan/ Lucy/Search/ Lucy/Search/C...

Author: buildbot
Date: Wed Aug 24 00:26:06 2011
New Revision: 794766

Log:
Staging update by buildbot

Added:
    websites/staging/lucy/trunk/content/lucy/docs/perl/
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/Analyzer.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/CaseFolder.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/PolyAnalyzer.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/RegexTokenizer.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/SnowballStemmer.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/SnowballStopFilter.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Cookbook/
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Cookbook.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Cookbook/CustomQuery.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Cookbook/CustomQueryParser.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Cookbook/FastUpdates.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/DevGuide.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/DocIDs.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/FileFormat.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/FileLocking.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/IRTheory.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Tutorial/
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Tutorial.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Tutorial/Analysis.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Tutorial/BeyondSimple.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Tutorial/FieldType.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Tutorial/Highlighter.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Tutorial/QueryObjects.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Tutorial/Simple.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Document/
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Document/Doc.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Document/HitDoc.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Highlight/
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Highlight/Highlighter.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/BackgroundMerger.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/DataReader.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/DataWriter.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/DeletionsWriter.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/DocReader.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/IndexManager.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/IndexReader.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/Indexer.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/Lexicon.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/LexiconReader.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/PolyReader.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/PostingList.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/PostingListReader.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/SegReader.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/SegWriter.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/Segment.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/Similarity.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Index/Snapshot.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Object/
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Object/BitVector.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Object/Err.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Object/Obj.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Plan/
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Plan/Architecture.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Plan/BlobType.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Plan/FieldType.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Plan/FullTextType.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Plan/Schema.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Plan/StringType.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/ANDQuery.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/Collector/
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/Collector.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/Collector/BitCollector.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/Compiler.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/Hits.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/IndexSearcher.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/LeafQuery.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/MatchAllQuery.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/Matcher.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/NOTQuery.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/NoMatchQuery.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/ORQuery.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/PhraseQuery.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/PolyQuery.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/PolySearcher.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/Query.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/QueryParser.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/RangeQuery.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/RequiredOptionalQuery.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/Searcher.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/SortRule.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/SortSpec.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/Span.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Search/TermQuery.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Simple.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Store/
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Store/FSFolder.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Store/Folder.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Store/Lock.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Store/LockErr.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Store/LockFactory.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Store/RAMFolder.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/LucyX/
    websites/staging/lucy/trunk/content/lucy/docs/perl/LucyX/Index/
    websites/staging/lucy/trunk/content/lucy/docs/perl/LucyX/Index/ByteBufDocReader.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/LucyX/Index/ByteBufDocWriter.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/LucyX/Index/LongFieldSim.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/LucyX/Index/ZlibDocReader.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/LucyX/Index/ZlibDocWriter.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/LucyX/Remote/
    websites/staging/lucy/trunk/content/lucy/docs/perl/LucyX/Remote/SearchClient.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/LucyX/Remote/SearchServer.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/LucyX/Search/
    websites/staging/lucy/trunk/content/lucy/docs/perl/LucyX/Search/Filter.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/LucyX/Search/MockMatcher.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/LucyX/Search/ProximityQuery.html
    websites/staging/lucy/trunk/content/lucy/docs/perl/index.html

Added: websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy.html
==============================================================================
--- websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy.html (added)
+++ websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy.html Wed Aug 24 00:26:06 2011
@@ -0,0 +1,187 @@
+
+<html>
+<head>
+<title></title>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
+</head>
+<body>
+
+
+<h1 id="NAME">NAME</h1>
+
+<p>Lucy - Apache Lucy search engine library.</p>
+
+<h1 id="VERSION">VERSION</h1>
+
+<p>0.2.1</p>
+
+<h1 id="SYNOPSIS">SYNOPSIS</h1>
+
+<p>First, plan out your index structure, create the index, and add documents:</p>
+
+<pre><code>    # indexer.pl
+    
+    use Lucy::Index::Indexer;
+    use Lucy::Plan::Schema;
+    use Lucy::Analysis::PolyAnalyzer;
+    use Lucy::Plan::FullTextType;
+    
+    # Create a Schema which defines index fields.
+    my $schema = Lucy::Plan::Schema-&gt;new;
+    my $polyanalyzer = Lucy::Analysis::PolyAnalyzer-&gt;new( 
+        language =&gt; &#39;en&#39;,
+    );
+    my $type = Lucy::Plan::FullTextType-&gt;new(
+        analyzer =&gt; $polyanalyzer,
+    );
+    $schema-&gt;spec_field( name =&gt; &#39;title&#39;,   type =&gt; $type );
+    $schema-&gt;spec_field( name =&gt; &#39;content&#39;, type =&gt; $type );
+    
+    # Create the index and add documents.
+    my $indexer = Lucy::Index::Indexer-&gt;new(
+        schema =&gt; $schema,   
+        index  =&gt; &#39;/path/to/index&#39;,
+        create =&gt; 1,
+    );
+    while ( my ( $title, $content ) = each %source_docs ) {
+        $indexer-&gt;add_doc({
+            title   =&gt; $title,
+            content =&gt; $content,
+        });
+    }
+    $indexer-&gt;commit;</code></pre>
+
+<p>Then, search the index:</p>
+
+<pre><code>    # search.pl
+    
+    use Lucy::Search::IndexSearcher;
+    
+    my $searcher = Lucy::Search::IndexSearcher-&gt;new( 
+        index =&gt; &#39;/path/to/index&#39; 
+    );
+    my $hits = $searcher-&gt;hits( query =&gt; &quot;foo bar&quot; );
+    while ( my $hit = $hits-&gt;next ) {
+        print &quot;$hit-&gt;{title}\n&quot;;
+    }</code></pre>
+
+<h1 id="DESCRIPTION">DESCRIPTION</h1>
+
+<p>Apache Lucy is a high-performance, modular search engine library.</p>
+
+<h2 id="Features">Features</h2>
+
+<ul>
+
+<li><p>Extremely fast. A single machine can handle millions of documents.</p>
+
+</li>
+<li><p>Scalable to multiple machines.</p>
+
+</li>
+<li><p>Incremental indexing (addition/deletion of documents to/from an existing index).</p>
+
+</li>
+<li><p>Configurable near-real-time index updates.</p>
+
+</li>
+<li><p>Unicode support.</p>
+
+</li>
+<li><p>Support for boolean operators AND, OR, and AND NOT; parenthetical groupings; prepended +plus and -minus.</p>
+
+</li>
+<li><p>Algorithmic selection of relevant excerpts and highlighting of search terms within excerpts.</p>
+
+</li>
+<li><p>Highly customizable query and indexing APIs.</p>
+
+</li>
+<li><p>Customizable sorting.</p>
+
+</li>
+<li><p>Phrase matching.</p>
+
+</li>
+<li><p>Stemming.</p>
+
+</li>
+<li><p>Stoplists.</p>
+
+</li>
+</ul>
+
+<h2 id="Getting-Started">Getting Started</h2>
+
+<p><a href="Lucy/Simple.html">Lucy::Simple</a> provides a stripped down API which may suffice for many tasks.</p>
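+
+<p>A minimal sketch of indexing and searching with Lucy::Simple might look like this (the index path and field names are illustrative):</p>
+
+<pre><code>    use Lucy::Simple;
+    
+    my $lucy = Lucy::Simple-&gt;new(
+        path     =&gt; &#39;/path/to/index&#39;,
+        language =&gt; &#39;en&#39;,
+    );
+    $lucy-&gt;add_doc({ title =&gt; &#39;Frost&#39;, content =&gt; &#39;Whose woods these are&#39; });
+    my $num_hits = $lucy-&gt;search( query =&gt; &#39;woods&#39; );
+    while ( my $hit = $lucy-&gt;next ) {
+        print &quot;$hit-&gt;{title}\n&quot;;
+    }</code></pre>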
+
+<p><a href="Lucy/Docs/Tutorial.html">Lucy::Docs::Tutorial</a> demonstrates how to build a basic CGI search application.</p>
+
+<p>The tutorial spends most of its time on these five classes:</p>
+
+<ul>
+
+<li><p><a href="Lucy/Plan/Schema.html">Lucy::Plan::Schema</a> - Plan out your index.</p>
+
+</li>
+<li><p><a href="Lucy/Plan/FieldType.html">Lucy::Plan::FieldType</a> - Define index fields.</p>
+
+</li>
+<li><p><a href="Lucy/Index/Indexer.html">Lucy::Index::Indexer</a> - Manipulate index content.</p>
+
+</li>
+<li><p><a href="Lucy/Search/IndexSearcher.html">Lucy::Search::IndexSearcher</a> - Search an index.</p>
+
+</li>
+<li><p><a href="Lucy/Analysis/PolyAnalyzer.html">Lucy::Analysis::PolyAnalyzer</a> - A one-size-fits-all parser/tokenizer.</p>
+
+</li>
+</ul>
+
+<h2 id="Delving-Deeper">Delving Deeper</h2>
+
+<p><a href="Lucy/Docs/Cookbook.html">Lucy::Docs::Cookbook</a> augments the tutorial with more advanced recipes.</p>
+
+<p>For creating complex queries, see <a href="Lucy/Search/Query.html">Lucy::Search::Query</a> and its subclasses <a href="Lucy/Search/TermQuery.html">TermQuery</a>, <a href="Lucy/Search/PhraseQuery.html">PhraseQuery</a>, <a href="Lucy/Search/ANDQuery.html">ANDQuery</a>, <a href="Lucy/Search/ORQuery.html">ORQuery</a>, <a href="Lucy/Search/NOTQuery.html">NOTQuery</a>, <a href="Lucy/Search/RequiredOptionalQuery.html">RequiredOptionalQuery</a>, <a href="Lucy/Search/MatchAllQuery.html">MatchAllQuery</a>, and <a href="Lucy/Search/NoMatchQuery.html">NoMatchQuery</a>, plus <a href="Lucy/Search/QueryParser.html">Lucy::Search::QueryParser</a>.</p>
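+
+<p>For example, TermQuery objects can be combined under an ANDQuery along these lines (the field and term values are illustrative):</p>
+
+<pre><code>    my $foo_query = Lucy::Search::TermQuery-&gt;new(
+        field =&gt; &#39;content&#39;,
+        term  =&gt; &#39;foo&#39;,
+    );
+    my $bar_query = Lucy::Search::TermQuery-&gt;new(
+        field =&gt; &#39;content&#39;,
+        term  =&gt; &#39;bar&#39;,
+    );
+    my $and_query = Lucy::Search::ANDQuery-&gt;new(
+        children =&gt; [ $foo_query, $bar_query ],
+    );
+    my $hits = $searcher-&gt;hits( query =&gt; $and_query );</code></pre>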
+
+<p>For distributed searching, see <a href="LucyX/Remote/SearchServer.html">LucyX::Remote::SearchServer</a>, <a href="LucyX/Remote/SearchClient.html">LucyX::Remote::SearchClient</a>, and <a href="Lucy/Search/PolySearcher.html">Lucy::Search::PolySearcher</a>.</p>
+
+<h2 id="Backwards-Compatibility-Policy">Backwards Compatibility Policy</h2>
+
+<p>Lucy will spin off stable forks into new namespaces periodically. The first will be named &quot;Lucy1&quot;. Users who require strong backwards compatibility should use a stable fork.</p>
+
+<p>The main namespace, &quot;Lucy&quot;, is an API-unstable development branch (as hinted at by its 0.x.x version number). Superficial interface changes happen frequently. Hard file format compatibility breaks which require reindexing are rare, as we generally try to provide continuity across multiple releases, but we reserve the right to make such changes.</p>
+
+<h1 id="CLASS-METHODS">CLASS METHODS</h1>
+
+<p>The Lucy module itself does not have a large interface, providing only a single public class method.</p>
+
+<h2 id="error">error</h2>
+
+<pre><code>    my $instream = $folder-&gt;open_in( file =&gt; &#39;foo&#39; ) or die Lucy-&gt;error;</code></pre>
+
+<p>Access a shared variable which is set by some routines on failure. It will always be either a <a href="Lucy/Object/Err.html">Lucy::Object::Err</a> object or undef.</p>
+
+<h1 id="SUPPORT">SUPPORT</h1>
+
+<p>The Apache Lucy homepage, where you&#39;ll find links to our mailing lists and so on, is <a href="http://incubator.apache.org/lucy">http://incubator.apache.org/lucy</a>. Please direct support questions to the Lucy users mailing list.</p>
+
+<h1 id="BUGS">BUGS</h1>
+
+<p>Not thread-safe.</p>
+
+<p>Some exceptions leak memory.</p>
+
+<p>If you find a bug, please inquire on the Lucy users mailing list about it, then report it on the Lucy issue tracker once it has been confirmed: <a href="https://issues.apache.org/jira/browse/LUCY">https://issues.apache.org/jira/browse/LUCY</a>.</p>
+
+<h1 id="DISCLAIMER">DISCLAIMER</h1>
+
+<p>Apache Lucy is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.</p>
+
+<h1 id="COPYRIGHT">COPYRIGHT</h1>
+
+<p>Apache Lucy is distributed under the Apache License, Version 2.0, as described in the file <code>LICENSE</code> included with the distribution.</p>
+
+</body>
+</html>
+

Added: websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/Analyzer.html
==============================================================================
--- websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/Analyzer.html (added)
+++ websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/Analyzer.html Wed Aug 24 00:26:06 2011
@@ -0,0 +1,28 @@
+
+<html>
+<head>
+<title></title>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
+</head>
+<body>
+
+
+<h1 id="NAME">NAME</h1>
+
+<p>Lucy::Analysis::Analyzer - Tokenize/modify/filter text.</p>
+
+<h1 id="SYNOPSIS">SYNOPSIS</h1>
+
+<pre><code>    # Abstract base class.</code></pre>
+
+<h1 id="DESCRIPTION">DESCRIPTION</h1>
+
+<p>An Analyzer is a filter which processes text, transforming it from one form into another. For instance, an analyzer might break up a long text into smaller pieces (<a href="../../Lucy/Analysis/RegexTokenizer.html">RegexTokenizer</a>), or it might perform case folding to facilitate case-insensitive search (<a href="../../Lucy/Analysis/CaseFolder.html">CaseFolder</a>).</p>
+
+<h1 id="INHERITANCE">INHERITANCE</h1>
+
+<p>Lucy::Analysis::Analyzer isa <a href="../../Lucy/Object/Obj.html">Lucy::Object::Obj</a>.</p>
+
+</body>
+</html>
+

Added: websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/CaseFolder.html
==============================================================================
--- websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/CaseFolder.html (added)
+++ websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/CaseFolder.html Wed Aug 24 00:26:06 2011
@@ -0,0 +1,40 @@
+
+<html>
+<head>
+<title></title>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
+</head>
+<body>
+
+
+<h1 id="NAME">NAME</h1>
+
+<p>Lucy::Analysis::CaseFolder - Normalize case, facilitating case-insensitive search.</p>
+
+<h1 id="SYNOPSIS">SYNOPSIS</h1>
+
+<pre><code>    my $case_folder = Lucy::Analysis::CaseFolder-&gt;new;
+
+    my $polyanalyzer = Lucy::Analysis::PolyAnalyzer-&gt;new(
+        analyzers =&gt; [ $case_folder, $tokenizer, $stemmer ],
+    );</code></pre>
+
+<h1 id="DESCRIPTION">DESCRIPTION</h1>
+
+<p>CaseFolder normalizes text according to Unicode case-folding rules, so that searches will be case-insensitive.</p>
+
+<h1 id="CONSTRUCTORS">CONSTRUCTORS</h1>
+
+<h2 id="new-">new()</h2>
+
+<pre><code>    my $case_folder = Lucy::Analysis::CaseFolder-&gt;new;</code></pre>
+
+<p>Constructor. Takes no arguments.</p>
+
+<h1 id="INHERITANCE">INHERITANCE</h1>
+
+<p>Lucy::Analysis::CaseFolder isa <a href="../../Lucy/Analysis/Analyzer.html">Lucy::Analysis::Analyzer</a> isa <a href="../../Lucy/Object/Obj.html">Lucy::Object::Obj</a>.</p>
+
+</body>
+</html>
+

Added: websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/PolyAnalyzer.html
==============================================================================
--- websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/PolyAnalyzer.html (added)
+++ websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/PolyAnalyzer.html Wed Aug 24 00:26:06 2011
@@ -0,0 +1,86 @@
+
+<html>
+<head>
+<title></title>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
+</head>
+<body>
+
+
+<h1 id="NAME">NAME</h1>
+
+<p>Lucy::Analysis::PolyAnalyzer - Multiple Analyzers in series.</p>
+
+<h1 id="SYNOPSIS">SYNOPSIS</h1>
+
+<pre><code>    my $schema = Lucy::Plan::Schema-&gt;new;
+    my $polyanalyzer = Lucy::Analysis::PolyAnalyzer-&gt;new( 
+        language =&gt; &#39;en&#39;,
+    );
+    my $type = Lucy::Plan::FullTextType-&gt;new(
+        analyzer =&gt; $polyanalyzer,
+    );
+    $schema-&gt;spec_field( name =&gt; &#39;title&#39;,   type =&gt; $type );
+    $schema-&gt;spec_field( name =&gt; &#39;content&#39;, type =&gt; $type );</code></pre>
+
+<h1 id="DESCRIPTION">DESCRIPTION</h1>
+
+<p>A PolyAnalyzer is a series of <a href="../../Lucy/Analysis/Analyzer.html">Analyzers</a>, each of which will be called upon to &quot;analyze&quot; text in turn. You can either provide the Analyzers yourself, or you can specify a supported language, in which case a PolyAnalyzer consisting of a <a href="../../Lucy/Analysis/CaseFolder.html">CaseFolder</a>, a <a href="../../Lucy/Analysis/RegexTokenizer.html">RegexTokenizer</a>, and a <a href="../../Lucy/Analysis/SnowballStemmer.html">SnowballStemmer</a> will be generated for you.</p>
+
+<p>Supported languages:</p>
+
+<pre><code>    en =&gt; English,
+    da =&gt; Danish,
+    de =&gt; German,
+    es =&gt; Spanish,
+    fi =&gt; Finnish,
+    fr =&gt; French,
+    hu =&gt; Hungarian,
+    it =&gt; Italian,
+    nl =&gt; Dutch,
+    no =&gt; Norwegian,
+    pt =&gt; Portuguese,
+    ro =&gt; Romanian,
+    ru =&gt; Russian,
+    sv =&gt; Swedish,
+    tr =&gt; Turkish,</code></pre>
+
+<h1 id="CONSTRUCTORS">CONSTRUCTORS</h1>
+
+<h2 id="new-labeled-params-">new( <i>[labeled params]</i> )</h2>
+
+<pre><code>    my $analyzer = Lucy::Analysis::PolyAnalyzer-&gt;new(
+        language  =&gt; &#39;es&#39;,
+    );
+    
+    # or...
+
+    my $case_folder  = Lucy::Analysis::CaseFolder-&gt;new;
+    my $tokenizer    = Lucy::Analysis::RegexTokenizer-&gt;new;
+    my $stemmer      = Lucy::Analysis::SnowballStemmer-&gt;new( language =&gt; &#39;en&#39; );
+    my $polyanalyzer = Lucy::Analysis::PolyAnalyzer-&gt;new(
+        analyzers =&gt; [ $case_folder, $tokenizer, $stemmer, ], );</code></pre>
+
+<ul>
+
+<li><p><b>language</b> - An ISO code from the list of supported languages.</p>
+
+</li>
+<li><p><b>analyzers</b> - An array of Analyzers. The order of the analyzers matters. Don&#39;t put a SnowballStemmer before a RegexTokenizer (can&#39;t stem whole documents or paragraphs -- just individual words), or a SnowballStopFilter after a SnowballStemmer (stemmed words, e.g. &quot;themselv&quot;, will not appear in a stoplist). In general, the sequence should be: normalize, tokenize, stopalize, stem.</p>
+
+</li>
+</ul>
+
+<h1 id="METHODS">METHODS</h1>
+
+<h2 id="get_analyzers-">get_analyzers()</h2>
+
+<p>Getter for &quot;analyzers&quot; member.</p>
+
+<h1 id="INHERITANCE">INHERITANCE</h1>
+
+<p>Lucy::Analysis::PolyAnalyzer isa <a href="../../Lucy/Analysis/Analyzer.html">Lucy::Analysis::Analyzer</a> isa <a href="../../Lucy/Object/Obj.html">Lucy::Object::Obj</a>.</p>
+
+</body>
+</html>
+

Added: websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/RegexTokenizer.html
==============================================================================
--- websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/RegexTokenizer.html (added)
+++ websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/RegexTokenizer.html Wed Aug 24 00:26:06 2011
@@ -0,0 +1,75 @@
+
+<html>
+<head>
+<title></title>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
+</head>
+<body>
+
+
+<h1 id="NAME">NAME</h1>
+
+<p>Lucy::Analysis::RegexTokenizer - Split a string into tokens.</p>
+
+<h1 id="SYNOPSIS">SYNOPSIS</h1>
+
+<pre><code>    my $whitespace_tokenizer
+        = Lucy::Analysis::RegexTokenizer-&gt;new( pattern =&gt; &#39;\S+&#39; );
+
+    # or...
+    my $word_char_tokenizer
+        = Lucy::Analysis::RegexTokenizer-&gt;new( pattern =&gt; &#39;\w+&#39; );
+
+    # or...
+    my $apostrophising_tokenizer = Lucy::Analysis::RegexTokenizer-&gt;new;
+
+    # Then... once you have a tokenizer, put it into a PolyAnalyzer:
+    my $polyanalyzer = Lucy::Analysis::PolyAnalyzer-&gt;new(
+        analyzers =&gt; [ $case_folder, $word_char_tokenizer, $stemmer ], );</code></pre>
+
+<h1 id="DESCRIPTION">DESCRIPTION</h1>
+
+<p>Generically, &quot;tokenizing&quot; is a process of breaking up a string into an array of &quot;tokens&quot;. For instance, the string &quot;three blind mice&quot; might be tokenized into &quot;three&quot;, &quot;blind&quot;, &quot;mice&quot;.</p>
+
+<p>Lucy::Analysis::RegexTokenizer decides where it should break up the text based on a regular expression compiled from a supplied <code>pattern</code> matching one token. If our source string is...</p>
+
+<pre><code>    &quot;Eats, Shoots and Leaves.&quot;</code></pre>
+
+<p>... then a &quot;whitespace tokenizer&quot; with a <code>pattern</code> of <code>&quot;\\S+&quot;</code> produces...</p>
+
+<pre><code>    Eats,
+    Shoots
+    and
+    Leaves.</code></pre>
+
+<p>... while a &quot;word character tokenizer&quot; with a <code>pattern</code> of <code>&quot;\\w+&quot;</code> produces...</p>
+
+<pre><code>    Eats
+    Shoots
+    and
+    Leaves</code></pre>
+
+<p>... the difference being that the word character tokenizer skips over punctuation as well as whitespace when determining token boundaries.</p>
+
+<h1 id="CONSTRUCTORS">CONSTRUCTORS</h1>
+
+<h2 id="new-labeled-params-">new( <i>[labeled params]</i> )</h2>
+
+<pre><code>    my $word_char_tokenizer = Lucy::Analysis::RegexTokenizer-&gt;new(
+        pattern =&gt; &#39;\w+&#39;,    # required
+    );</code></pre>
+
+<ul>
+
+<li><p><b>pattern</b> - A string specifying a Perl-syntax regular expression which should match one token. The default value is <code>\w+(?:[\x{2019}&#39;]\w+)*</code>, which matches &quot;it&#39;s&quot; as well as &quot;it&quot; and &quot;O&#39;Henry&#39;s&quot; as well as &quot;Henry&quot;.</p>
+
+</li>
+</ul>
+
+<h1 id="INHERITANCE">INHERITANCE</h1>
+
+<p>Lucy::Analysis::RegexTokenizer isa <a href="../../Lucy/Analysis/Analyzer.html">Lucy::Analysis::Analyzer</a> isa <a href="../../Lucy/Object/Obj.html">Lucy::Object::Obj</a>.</p>
+
+</body>
+</html>
+

Added: websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/SnowballStemmer.html
==============================================================================
--- websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/SnowballStemmer.html (added)
+++ websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/SnowballStemmer.html Wed Aug 24 00:26:06 2011
@@ -0,0 +1,47 @@
+
+<html>
+<head>
+<title></title>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
+</head>
+<body>
+
+
+<h1 id="NAME">NAME</h1>
+
+<p>Lucy::Analysis::SnowballStemmer - Reduce related words to a shared root.</p>
+
+<h1 id="SYNOPSIS">SYNOPSIS</h1>
+
+<pre><code>    my $stemmer = Lucy::Analysis::SnowballStemmer-&gt;new( language =&gt; &#39;es&#39; );
+    
+    my $polyanalyzer = Lucy::Analysis::PolyAnalyzer-&gt;new(
+        analyzers =&gt; [ $case_folder, $tokenizer, $stemmer ],
+    );</code></pre>
+
+<p>This class is a wrapper around the Snowball stemming library, so it supports the same languages.</p>
+
+<h1 id="DESCRIPTION">DESCRIPTION</h1>
+
+<p>SnowballStemmer is an <a href="../../Lucy/Analysis/Analyzer.html">Analyzer</a> which reduces related words to a root form (using the &quot;Snowball&quot; stemming library). For instance, &quot;horse&quot;, &quot;horses&quot;, and &quot;horsing&quot; all become &quot;hors&quot; -- so that a search for &#39;horse&#39; will also match documents containing &#39;horses&#39; and &#39;horsing&#39;.</p>
+
+<h1 id="CONSTRUCTORS">CONSTRUCTORS</h1>
+
+<h2 id="new-labeled-params-">new( <i>[labeled params]</i> )</h2>
+
+<pre><code>    my $stemmer = Lucy::Analysis::SnowballStemmer-&gt;new( language =&gt; &#39;es&#39; );</code></pre>
+
+<ul>
+
+<li><p><b>language</b> - A two-letter ISO code identifying a language supported by Snowball.</p>
+
+</li>
+</ul>
+
+<h1 id="INHERITANCE">INHERITANCE</h1>
+
+<p>Lucy::Analysis::SnowballStemmer isa <a href="../../Lucy/Analysis/Analyzer.html">Lucy::Analysis::Analyzer</a> isa <a href="../../Lucy/Object/Obj.html">Lucy::Object::Obj</a>.</p>
+
+</body>
+</html>
+

Added: websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/SnowballStopFilter.html
==============================================================================
--- websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/SnowballStopFilter.html (added)
+++ websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Analysis/SnowballStopFilter.html Wed Aug 24 00:26:06 2011
@@ -0,0 +1,84 @@
+
+<html>
+<head>
+<title></title>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
+</head>
+<body>
+
+
+<h1 id="NAME">NAME</h1>
+
+<p>Lucy::Analysis::SnowballStopFilter - Suppress a &quot;stoplist&quot; of common words.</p>
+
+<h1 id="SYNOPSIS">SYNOPSIS</h1>
+
+<pre><code>    my $stopfilter = Lucy::Analysis::SnowballStopFilter-&gt;new(
+        language =&gt; &#39;fr&#39;,
+    );
+    my $polyanalyzer = Lucy::Analysis::PolyAnalyzer-&gt;new(
+        analyzers =&gt; [ $case_folder, $tokenizer, $stopfilter, $stemmer ],
+    );</code></pre>
+
+<h1 id="DESCRIPTION">DESCRIPTION</h1>
+
+<p>A &quot;stoplist&quot; is a collection of &quot;stopwords&quot;: words which are common enough to be of little value when determining search results. For example, so many documents in English contain &quot;the&quot;, &quot;if&quot;, and &quot;maybe&quot; that it may improve both performance and relevance to block them.</p>
+
+<p>Before filtering stopwords:</p>
+
+<pre><code>    (&quot;i&quot;, &quot;am&quot;, &quot;the&quot;, &quot;walrus&quot;)</code></pre>
+
+<p>After filtering stopwords:</p>
+
+<pre><code>    (&quot;walrus&quot;)</code></pre>
+
+<p>SnowballStopFilter provides default stoplists for several languages, courtesy of the Snowball project (&lt;http://snowball.tartarus.org&gt;), or you may supply your own.</p>
+
+<pre><code>    |-----------------------|
+    | ISO CODE | LANGUAGE   |
+    |-----------------------|
+    | da       | Danish     |
+    | de       | German     |
+    | en       | English    |
+    | es       | Spanish    |
+    | fi       | Finnish    |
+    | fr       | French     |
+    | hu       | Hungarian  |
+    | it       | Italian    |
+    | nl       | Dutch      |
+    | no       | Norwegian  |
+    | pt       | Portuguese |
+    | ru       | Russian    |
+    | sv       | Swedish    |
+    |-----------------------|</code></pre>
+
+<h1 id="CONSTRUCTORS">CONSTRUCTORS</h1>
+
+<h2 id="new-labeled-params-">new( <i>[labeled params]</i> )</h2>
+
+<pre><code>    my $stopfilter = Lucy::Analysis::SnowballStopFilter-&gt;new(
+        language =&gt; &#39;de&#39;,
+    );
+    
+    # or...
+    my $stopfilter = Lucy::Analysis::SnowballStopFilter-&gt;new(
+        stoplist =&gt; \%stoplist,
+    );</code></pre>
+
+<ul>
+
+<li><p><b>stoplist</b> - A hash with stopwords as the keys.</p>
+
+</li>
+<li><p><b>language</b> - The ISO code for a supported language.</p>
+
+</li>
+</ul>
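+
+<p>The <code>stoplist</code> hash maps each stopword to a true value; one way to build it (the word list here is illustrative):</p>
+
+<pre><code>    my %stoplist = map { $_ =&gt; 1 } qw( the a an of );
+    my $stopfilter = Lucy::Analysis::SnowballStopFilter-&gt;new(
+        stoplist =&gt; \%stoplist,
+    );</code></pre>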
+
+<h1 id="INHERITANCE">INHERITANCE</h1>
+
+<p>Lucy::Analysis::SnowballStopFilter isa <a href="../../Lucy/Analysis/Analyzer.html">Lucy::Analysis::Analyzer</a> isa <a href="../../Lucy/Object/Obj.html">Lucy::Object::Obj</a>.</p>
+
+</body>
+</html>
+

Added: websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Cookbook.html
==============================================================================
--- websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Cookbook.html (added)
+++ websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Cookbook.html Wed Aug 24 00:26:06 2011
@@ -0,0 +1,43 @@
+
+<html>
+<head>
+<title></title>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
+</head>
+<body>
+
+
+<h1 id="NAME">NAME</h1>
+
+<p>Lucy::Docs::Cookbook - Apache Lucy recipes.</p>
+
+<h1 id="DESCRIPTION">DESCRIPTION</h1>
+
+<p>The Cookbook provides thematic documentation covering some of Apache Lucy&#39;s more sophisticated features. For a step-by-step introduction to Lucy, see <a href="../../Lucy/Docs/Tutorial.html">Lucy::Docs::Tutorial</a>.</p>
+
+<h2 id="Chapters">Chapters</h2>
+
+<ul>
+
+<li><p><a href="../../Lucy/Docs/Cookbook/FastUpdates.html">Lucy::Docs::Cookbook::FastUpdates</a> - While index updates are fast on average, worst-case update performance may be significantly slower. To make index updates consistently quick, we must manually intervene to control the process of index segment consolidation.</p>
+
+</li>
+<li><p><a href="../../Lucy/Docs/Cookbook/CustomQuery.html">Lucy::Docs::Cookbook::CustomQuery</a> - Explore Lucy&#39;s support for custom query types by creating a &quot;PrefixQuery&quot; class to handle trailing wildcards.</p>
+
+</li>
+<li><p><a href="../../Lucy/Docs/Cookbook/CustomQueryParser.html">Lucy::Docs::Cookbook::CustomQueryParser</a> - Define your own custom search query syntax using Lucy::Search::QueryParser and <a href="http://search.cpan.org/perldoc?Parse::RecDescent">Parse::RecDescent</a>.</p>
+
+</li>
+</ul>
+
+<h2 id="Materials">Materials</h2>
+
+<p>Some of the recipes in the Cookbook reference the completed <a href="../../Lucy/Docs/Tutorial.html">Tutorial</a> application. These materials can be found in the <code>sample</code> directory at the root of the Lucy distribution:</p>
+
+<pre><code>    sample/indexer.pl        # indexing app
+    sample/search.cgi        # search app
+    sample/us_constitution   # corpus</code></pre>
+
+</body>
+</html>
+

Added: websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Cookbook/CustomQuery.html
==============================================================================
--- websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Cookbook/CustomQuery.html (added)
+++ websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Cookbook/CustomQuery.html Wed Aug 24 00:26:06 2011
@@ -0,0 +1,266 @@
+
+<html>
+<head>
+<title></title>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
+</head>
+<body>
+
+
+<h1 id="NAME">NAME</h1>
+
+<p>Lucy::Docs::Cookbook::CustomQuery - Sample subclass of Query.</p>
+
+<h1 id="ABSTRACT">ABSTRACT</h1>
+
+<p>Explore Apache Lucy&#39;s support for custom query types by creating a &quot;PrefixQuery&quot; class to handle trailing wildcards.</p>
+
+<pre><code>    my $prefix_query = PrefixQuery-&gt;new(
+        field        =&gt; &#39;content&#39;,
+        query_string =&gt; &#39;foo*&#39;,
+    );
+    my $hits = $searcher-&gt;hits( query =&gt; $prefix_query );
+    ...</code></pre>
+
+<h1 id="Query-Compiler-and-Matcher">Query, Compiler, and Matcher</h1>
+
+<p>To add support for a new query type, we need three classes: a Query, a Compiler, and a Matcher.</p>
+
+<ul>
+
+<li><p>PrefixQuery - a subclass of <a href="../../../Lucy/Search/Query.html">Lucy::Search::Query</a>, and the only class that client code will deal with directly.</p>
+
+</li>
+<li><p>PrefixCompiler - a subclass of <a href="../../../Lucy/Search/Compiler.html">Lucy::Search::Compiler</a>, whose primary role is to compile a PrefixQuery to a PrefixMatcher.</p>
+
+</li>
+<li><p>PrefixMatcher - a subclass of <a href="../../../Lucy/Search/Matcher.html">Lucy::Search::Matcher</a>, which does the heavy lifting: it applies the query to individual documents and assigns a score to each match.</p>
+
+</li>
+</ul>
+
+<p>The PrefixQuery class on its own isn&#39;t enough because a Query object&#39;s role is limited to expressing an abstract specification for the search. A Query is basically nothing but metadata; execution is left to the Query&#39;s companion Compiler and Matcher.</p>
+
+<p>Here&#39;s a simplified sketch illustrating how a Searcher&#39;s hits() method ties together the three classes.</p>
+
+<pre><code>    sub hits {
+        my ( $self, $query ) = @_;
+        my $compiler = $query-&gt;make_compiler( searcher =&gt; $self );
+        my $matcher = $compiler-&gt;make_matcher(
+            reader     =&gt; $self-&gt;get_reader,
+            need_score =&gt; 1,
+        );
+        my @hits = $matcher-&gt;capture_hits;
+        return \@hits;
+    }</code></pre>
+
+<h2 id="PrefixQuery">PrefixQuery</h2>
+
+<p>Our PrefixQuery class will have two attributes: a query string and a field name.</p>
+
+<pre><code>    package PrefixQuery;
+    use base qw( Lucy::Search::Query );
+    use Carp;
+    use Scalar::Util qw( blessed );
+    
+    # Inside-out member vars and hand-rolled accessors.
+    my %query_string;
+    my %field;
+    sub get_query_string { my $self = shift; return $query_string{$$self} }
+    sub get_field        { my $self = shift; return $field{$$self} }</code></pre>
+
+<p>PrefixQuery&#39;s constructor collects and validates the attributes.</p>
+
+<pre><code>    sub new {
+        my ( $class, %args ) = @_;
+        my $query_string = delete $args{query_string};
+        my $field        = delete $args{field};
+        my $self         = $class-&gt;SUPER::new(%args);
+        confess(&quot;&#39;query_string&#39; param is required&quot;)
+            unless defined $query_string;
+        confess(&quot;Invalid query_string: &#39;$query_string&#39;&quot;)
+            unless $query_string =~ /\*\s*$/;
+        confess(&quot;&#39;field&#39; param is required&quot;)
+            unless defined $field;
+        $query_string{$$self} = $query_string;
+        $field{$$self}        = $field;
+        return $self;
+    }</code></pre>
+
+<p>Since this is an inside-out class, we&#39;ll need a destructor:</p>
+
+<pre><code>    sub DESTROY {
+        my $self = shift;
+        delete $query_string{$$self};
+        delete $field{$$self};
+        $self-&gt;SUPER::DESTROY;
+    }</code></pre>
+
+<p>The equals() method determines whether two Queries are logically equivalent:</p>
+
+<pre><code>    sub equals {
+        my ( $self, $other ) = @_;
+        return 0 unless blessed($other);
+        return 0 unless $other-&gt;isa(&quot;PrefixQuery&quot;);
+        return 0 unless $field{$$self} eq $field{$$other};
+        return 0 unless $query_string{$$self} eq $query_string{$$other};
+        return 1;
+    }</code></pre>
+
+<p>The last thing we&#39;ll need is a make_compiler() factory method which kicks out a subclass of <a href="../../../Lucy/Search/Compiler.html">Compiler</a>.</p>
+
+<pre><code>    sub make_compiler {
+        my $self = shift;
+        return PrefixCompiler-&gt;new( @_, parent =&gt; $self );
+    }</code></pre>
+
+<h2 id="PrefixCompiler">PrefixCompiler</h2>
+
+<p>PrefixQuery&#39;s make_compiler() method will be called internally at search-time by objects which subclass <a href="../../../Lucy/Search/Searcher.html">Lucy::Search::Searcher</a> -- such as <a href="../../../Lucy/Search/IndexSearcher.html">IndexSearchers</a>.</p>
+
+<p>A Searcher is associated with a particular collection of documents. These documents may all reside in one index, as with IndexSearcher, or they may be spread out across multiple indexes on one or more machines, as with <a href="../../../Lucy/Search/PolySearcher.html">Lucy::Search::PolySearcher</a>.</p>
+
+<p>Searcher objects have access to certain statistical information about the collections they represent; for instance, a Searcher can tell you how many documents are in the collection...</p>
+
+<pre><code>    my $maximum_number_of_docs_in_collection = $searcher-&gt;doc_max;</code></pre>
+
+<p>... or how many documents a specific term appears in:</p>
+
+<pre><code>    my $term_appears_in_this_many_docs = $searcher-&gt;doc_freq(
+        field =&gt; &#39;content&#39;,
+        term  =&gt; &#39;foo&#39;,
+    );</code></pre>
+
+<p>Such information can be used by sophisticated Compiler implementations to assign more or less heft to individual queries or sub-queries. However, we&#39;re not going to bother with weighting for this demo; we&#39;ll just assign a fixed score of 1.0 to each matching document.</p>
+
+<p>We don&#39;t need to write a constructor, as it will suffice to inherit new() from Lucy::Search::Compiler. The only method we need to implement for PrefixCompiler is make_matcher().</p>
+
+<pre><code>    package PrefixCompiler;
+    use base qw( Lucy::Search::Compiler );
+
+    sub make_matcher {
+        my ( $self, %args ) = @_;
+        my $seg_reader = $args{reader};
+
+        # Retrieve low-level components LexiconReader and PostingListReader.
+        my $lex_reader
+            = $seg_reader-&gt;obtain(&quot;Lucy::Index::LexiconReader&quot;);
+        my $plist_reader
+            = $seg_reader-&gt;obtain(&quot;Lucy::Index::PostingListReader&quot;);
+        
+        # Acquire a Lexicon and seek it to our query string.
+        my $substring = $self-&gt;get_parent-&gt;get_query_string;
+        $substring =~ s/\*\s*$//;
+        my $field = $self-&gt;get_parent-&gt;get_field;
+        my $lexicon = $lex_reader-&gt;lexicon( field =&gt; $field );
+        return unless $lexicon;
+        $lexicon-&gt;seek($substring);
+        
+        # Accumulate PostingLists for each matching term.
+        my @posting_lists;
+        while ( defined( my $term = $lexicon-&gt;get_term ) ) {
+            last unless $term =~ /^\Q$substring/;
+            my $posting_list = $plist_reader-&gt;posting_list(
+                field =&gt; $field,
+                term  =&gt; $term,
+            );
+            if ($posting_list) {
+                push @posting_lists, $posting_list;
+            }
+            last unless $lexicon-&gt;next;
+        }
+        return unless @posting_lists;
+        
+        return PrefixMatcher-&gt;new( posting_lists =&gt; \@posting_lists );
+    }</code></pre>
+
+<p>PrefixCompiler gets access to a <a href="../../../Lucy/Index/SegReader.html">SegReader</a> object when make_matcher() gets called. From the SegReader and its sub-components <a href="../../../Lucy/Index/LexiconReader.html">LexiconReader</a> and <a href="../../../Lucy/Index/PostingListReader.html">PostingListReader</a>, we acquire a <a href="../../../Lucy/Index/Lexicon.html">Lexicon</a>, scan through the Lexicon&#39;s unique terms, and acquire a <a href="../../../Lucy/Index/PostingList.html">PostingList</a> for each term that matches our prefix.</p>
+
+<p>Each of these PostingList objects represents a set of documents which match the query.</p>
+
+<h2 id="PrefixMatcher">PrefixMatcher</h2>
+
+<p>The Matcher subclass is the most involved.</p>
+
+<pre><code>    package PrefixMatcher;
+    use base qw( Lucy::Search::Matcher );
+    
+    # Inside-out member vars.
+    my %doc_ids;
+    my %tick;
+    
+    sub new {
+        my ( $class, %args ) = @_;
+        my $posting_lists = delete $args{posting_lists};
+        my $self          = $class-&gt;SUPER::new(%args);
+        
+        # Cheesy but simple way of interleaving PostingList doc sets.
+        my %all_doc_ids;
+        for my $posting_list (@$posting_lists) {
+            while ( my $doc_id = $posting_list-&gt;next ) {
+                $all_doc_ids{$doc_id} = undef;
+            }
+        }
+        my @doc_ids = sort { $a &lt;=&gt; $b } keys %all_doc_ids;
+        $doc_ids{$$self} = \@doc_ids;
+        
+        # Track our position within the array of doc ids.
+        $tick{$$self} = -1;
+        
+        return $self;
+    }
+    
+    sub DESTROY {
+        my $self = shift;
+        delete $doc_ids{$$self};
+        delete $tick{$$self};
+        $self-&gt;SUPER::DESTROY;
+    }</code></pre>
+
+<p>The doc ids must be in order, or some will be ignored; hence the <code>sort</code> above.</p>
+
+<p>In addition to the constructor and destructor, there are three methods that must be overridden.</p>
+
+<p>next() advances the Matcher to the next valid matching doc.</p>
+
+<pre><code>    sub next {
+        my $self    = shift;
+        my $doc_ids = $doc_ids{$$self};
+        my $tick    = ++$tick{$$self};
+        return 0 if $tick &gt;= scalar @$doc_ids;
+        return $doc_ids-&gt;[$tick];
+    }</code></pre>
+
+<p>get_doc_id() returns the current document id, or 0 if the Matcher is exhausted. (<a href="../../../Lucy/Docs/DocIDs.html">Document numbers</a> start at 1, so 0 is a sentinel.)</p>
+
+<pre><code>    sub get_doc_id {
+        my $self    = shift;
+        my $tick    = $tick{$$self};
+        my $doc_ids = $doc_ids{$$self};
+        return $tick &lt; scalar @$doc_ids ? $doc_ids-&gt;[$tick] : 0;
+    }</code></pre>
+
+<p>score() conveys the relevance score of the current match. We&#39;ll just return a fixed score of 1.0:</p>
+
+<pre><code>    sub score { 1.0 }</code></pre>
+
+<h1 id="Usage">Usage</h1>
+
+<p>To get a basic feel for PrefixQuery, insert the FlatQueryParser module described in <a href="../../../Lucy/Docs/Cookbook/CustomQueryParser.html">Lucy::Docs::Cookbook::CustomQueryParser</a> (which supports PrefixQuery) into the search.cgi sample app.</p>
+
+<pre><code>    my $parser = FlatQueryParser-&gt;new( schema =&gt; $searcher-&gt;get_schema );
+    my $query  = $parser-&gt;parse($q);</code></pre>
+
+<p>If you&#39;re planning on using PrefixQuery in earnest, though, you may want to change up analyzers to avoid stemming, because stemming -- another approach to prefix conflation -- is not perfectly compatible with prefix searches.</p>
+
+<pre><code>    # Polyanalyzer with no SnowballStemmer.
+    my $analyzer = Lucy::Analysis::PolyAnalyzer-&gt;new(
+        analyzers =&gt; [
+            Lucy::Analysis::RegexTokenizer-&gt;new,
+            Lucy::Analysis::CaseFolder-&gt;new,
+        ],
+    );</code></pre>
+
+</body>
+</html>
+

Added: websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Cookbook/CustomQueryParser.html
==============================================================================
--- websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Cookbook/CustomQueryParser.html (added)
+++ websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Cookbook/CustomQueryParser.html Wed Aug 24 00:26:06 2011
@@ -0,0 +1,192 @@
+
+<html>
+<head>
+<title></title>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
+</head>
+<body>
+
+
+<h1 id="NAME">NAME</h1>
+
+<p>Lucy::Docs::Cookbook::CustomQueryParser - Sample subclass of QueryParser.</p>
+
+<h1 id="ABSTRACT">ABSTRACT</h1>
+
+<p>Implement a custom search query language using a subclass of <a href="../../../Lucy/Search/QueryParser.html">Lucy::Search::QueryParser</a>.</p>
+
+<h1 id="The-language">The language</h1>
+
+<p>At first, our query language will support only simple term queries and phrases delimited by double quotes. For simplicity&#39;s sake, it will not support parenthetical groupings, boolean operators, or prepended plus/minus. The results for all subqueries will be unioned together -- i.e. joined using an OR -- which is usually the best approach for small-to-medium-sized document collections.</p>
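+
+<p>For instance, a query string like the one below would yield one subquery per quoted phrase and one per bare word, all joined with OR (an illustration of the intended semantics, not literal parser output):</p>
+
+<pre><code>    &quot;three blind mice&quot; farm
+    # =&gt; OR( Phrase(&quot;three blind mice&quot;), Term(&quot;farm&quot;) )</code></pre>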
+
+<p>Later, we&#39;ll add support for trailing wildcards.</p>
+
+<h1 id="Single-field-parser">Single-field parser</h1>
+
+<p>Our initial parser implementation will generate queries against a single fixed field, &quot;content&quot;, and it will analyze text using a fixed choice of English PolyAnalyzer. We won&#39;t subclass Lucy::Search::QueryParser just yet.</p>
+
+<pre><code>    package FlatQueryParser;
+    use Lucy::Search::TermQuery;
+    use Lucy::Search::PhraseQuery;
+    use Lucy::Search::ORQuery;
+    use Carp;
+    
+    sub new { 
+        my $analyzer = Lucy::Analysis::PolyAnalyzer-&gt;new(
+            language =&gt; &#39;en&#39;,
+        );
+        return bless { 
+            field    =&gt; &#39;content&#39;,
+            analyzer =&gt; $analyzer,
+        }, __PACKAGE__;
+    }</code></pre>
+
+<p>Some private helper subs for creating TermQuery and PhraseQuery objects will help keep the size of our main parse() subroutine down:</p>
+
+<pre><code>    sub _make_term_query {
+        my ( $self, $term ) = @_;
+        return Lucy::Search::TermQuery-&gt;new(
+            field =&gt; $self-&gt;{field},
+            term  =&gt; $term,
+        );
+    }
+    
+    sub _make_phrase_query {
+        my ( $self, $terms ) = @_;
+        return Lucy::Search::PhraseQuery-&gt;new(
+            field =&gt; $self-&gt;{field},
+            terms =&gt; $terms,
+        );
+    }</code></pre>
+
+<p>Our private _tokenize() method treats double-quote delimited material as a single token and splits on whitespace everywhere else.</p>
+
+<pre><code>    sub _tokenize {
+        my ( $self, $query_string ) = @_;
+        my @tokens;
+        while ( length $query_string ) {
+            if ( $query_string =~ s/^\s+// ) {
+                next;    # skip whitespace
+            }
+            elsif ( $query_string =~ s/^(&quot;[^&quot;]*(?:&quot;|$))// ) {
+                push @tokens, $1;    # double-quoted phrase
+            }
+            else {
+                $query_string =~ s/(\S+)//;
+                push @tokens, $1;    # single word
+            }
+        }
+        return \@tokens;
+    }</code></pre>
+
+<p>The main parsing routine creates an array of tokens by calling _tokenize(), runs the tokens through the PolyAnalyzer, creates TermQuery or PhraseQuery objects according to how many tokens emerge from the PolyAnalyzer&#39;s split() method, and adds each of the sub-queries to the primary ORQuery.</p>
+
+<pre><code>    sub parse {
+        my ( $self, $query_string ) = @_;
+        my $tokens   = $self-&gt;_tokenize($query_string);
+        my $analyzer = $self-&gt;{analyzer};
+        my $or_query = Lucy::Search::ORQuery-&gt;new;
+    
+        for my $token (@$tokens) {
+            if ( $token =~ s/^&quot;// ) {
+                $token =~ s/&quot;$//;
+                my $terms = $analyzer-&gt;split($token);
+                my $query = $self-&gt;_make_phrase_query($terms);
+                $or_query-&gt;add_child($query);
+            }
+            else {
+                my $terms = $analyzer-&gt;split($token);
+                if ( @$terms == 1 ) {
+                    my $query = $self-&gt;_make_term_query( $terms-&gt;[0] );
+                    $or_query-&gt;add_child($query);
+                }
+                elsif ( @$terms &gt; 1 ) {
+                    my $query = $self-&gt;_make_phrase_query($terms);
+                    $or_query-&gt;add_child($query);
+                }
+            }
+        }
+    
+        return $or_query;
+    }</code></pre>
+
+<h1 id="Multi-field-parser">Multi-field parser</h1>
+
+<p>Most often, the end user will want their search query to match not only a single &#39;content&#39; field, but also &#39;title&#39; and so on. To make that happen, we have to turn queries such as this...</p>
+
+<pre><code>    foo AND NOT bar</code></pre>
+
+<p>... into the logical equivalent of this:</p>
+
+<pre><code>    (title:foo OR content:foo) AND NOT (title:bar OR content:bar)</code></pre>
+
+<p>Rather than continue with our own from-scratch parser class and write the routines to accomplish that expansion, we&#39;re now going to subclass Lucy::Search::QueryParser and take advantage of some of its existing methods.</p>
+
+<p>Our first parser implementation had the &quot;content&quot; field name and the choice of English PolyAnalyzer hard-coded for simplicity, but we don&#39;t need to do that once we subclass Lucy::Search::QueryParser. QueryParser&#39;s constructor -- which we will inherit, allowing us to eliminate our own constructor -- requires a Schema which conveys field and Analyzer information, so we can just defer to that.</p>
+
+<pre><code>    package FlatQueryParser;
+    use base qw( Lucy::Search::QueryParser );
+    use Lucy::Search::TermQuery;
+    use Lucy::Search::PhraseQuery;
+    use Lucy::Search::ORQuery;
+    use PrefixQuery;
+    use Carp;
+    
+    # Inherit new()</code></pre>
+
+<p>We&#39;re also going to jettison our _make_term_query() and _make_phrase_query() helper subs and chop our parse() subroutine way down. Our revised parse() routine will generate Lucy::Search::LeafQuery objects instead of TermQueries and PhraseQueries:</p>
+
+<pre><code>    sub parse {
+        my ( $self, $query_string ) = @_;
+        my $tokens = $self-&gt;_tokenize($query_string);
+        my $or_query = Lucy::Search::ORQuery-&gt;new;
+        for my $token (@$tokens) {
+            my $leaf_query = Lucy::Search::LeafQuery-&gt;new( text =&gt; $token );
+            $or_query-&gt;add_child($leaf_query);
+        }
+        return $self-&gt;expand($or_query);
+    }</code></pre>
+
+<p>The magic happens in QueryParser&#39;s expand() method, which walks the ORQuery object we supply to it looking for LeafQuery objects, and calls expand_leaf() for each one it finds. expand_leaf() performs field-specific analysis, decides whether each query should be a TermQuery or a PhraseQuery, and if multiple fields are required, creates an ORQuery which multiplies out e.g. <code>foo</code> into <code>(title:foo OR content:foo)</code>.</p>
+
+<h1 id="Extending-the-query-language">Extending the query language</h1>
+
+<p>To add support for trailing wildcards to our query language, we need to override expand_leaf() to accommodate PrefixQuery, while deferring to the parent class implementation on TermQuery and PhraseQuery.</p>
+
+<pre><code>    sub expand_leaf {
+        my ( $self, $leaf_query ) = @_;
+        my $text = $leaf_query-&gt;get_text;
+        if ( $text =~ /\*$/ ) {
+            my $or_query = Lucy::Search::ORQuery-&gt;new;
+            for my $field ( @{ $self-&gt;get_fields } ) {
+                my $prefix_query = PrefixQuery-&gt;new(
+                    field        =&gt; $field,
+                    query_string =&gt; $text,
+                );
+                $or_query-&gt;add_child($prefix_query);
+            }
+            return $or_query;
+        }
+        else {
+            return $self-&gt;SUPER::expand_leaf($leaf_query);
+        }
+    }</code></pre>
+
+<p>Ordinarily, those asterisks would have been stripped when running tokens through the PolyAnalyzer -- query strings containing &quot;foo*&quot; would produce TermQueries for the term &quot;foo&quot;. Our override intercepts tokens with trailing asterisks and processes them as PrefixQueries before <code>SUPER::expand_leaf</code> can discard them, so that a search for &quot;foo*&quot; can match &quot;food&quot;, &quot;foosball&quot;, and so on.</p>
+
+<h1 id="Usage">Usage</h1>
+
+<p>Insert our custom parser into the search.cgi sample app to get a feel for how it behaves:</p>
+
+<pre><code>    my $parser = FlatQueryParser-&gt;new( schema =&gt; $searcher-&gt;get_schema );
+    my $query  = $parser-&gt;parse( decode( &#39;UTF-8&#39;, $cgi-&gt;param(&#39;q&#39;) || &#39;&#39; ) );
+    my $hits   = $searcher-&gt;hits(
+        query      =&gt; $query,
+        offset     =&gt; $offset,
+        num_wanted =&gt; $page_size,
+    );
+    ...</code></pre>
+
+</body>
+</html>
+

Added: websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Cookbook/FastUpdates.html
==============================================================================
--- websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Cookbook/FastUpdates.html (added)
+++ websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Cookbook/FastUpdates.html Wed Aug 24 00:26:06 2011
@@ -0,0 +1,115 @@
+
+<html>
+<head>
+<title></title>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
+</head>
+<body>
+
+
+<h1 id="NAME">NAME</h1>
+
+<p>Lucy::Docs::Cookbook::FastUpdates - Near real-time index updates.</p>
+
+<h1 id="ABSTRACT">ABSTRACT</h1>
+
+<p>While index updates are fast on average, worst-case update performance may be significantly slower. To make index updates consistently quick, we must manually intervene to control the process of index segment consolidation.</p>
+
+<h1 id="The-problem">The problem</h1>
+
+<p>Ordinarily, modifying an index is cheap. New data is added to new segments, and the time to write a new segment scales more or less linearly with the number of documents added during the indexing session.</p>
+
+<p>Deletions are also cheap most of the time, because we don&#39;t remove documents immediately but instead mark them as deleted, and adding the deletion mark is cheap.</p>
+
+<p>However, as new segments are added and the deletion rate for existing segments increases, search-time performance slowly begins to degrade. At some point, it becomes necessary to consolidate existing segments, rewriting their data into a new segment.</p>
+
+<p>If the recycled segments are small, the time it takes to rewrite them may not be significant. Every once in a while, though, a large amount of data must be rewritten.</p>
+
+<h1 id="Procrastinating-and-playing-catch-up">Procrastinating and playing catch-up</h1>
+
+<p>The simplest way to force fast index updates is to avoid rewriting anything.</p>
+
+<p>Indexer relies upon <a href="../../../Lucy/Index/IndexManager.html">IndexManager</a>&#39;s recycle() method to tell it which segments should be consolidated. If we subclass IndexManager and override recycle() so that it always returns an empty array, we get consistently quick performance:</p>
+
+<pre><code>    package NoMergeManager;
+    use base qw( Lucy::Index::IndexManager );
+    sub recycle { [] }
+    
+    package main;
+    my $indexer = Lucy::Index::Indexer-&gt;new(
+        index =&gt; &#39;/path/to/index&#39;,
+        manager =&gt; NoMergeManager-&gt;new,
+    );
+    ...
+    $indexer-&gt;commit;</code></pre>
+
+<p>However, we can&#39;t procrastinate forever. Eventually, we&#39;ll have to run an ordinary, uncontrolled indexing session, potentially triggering a large rewrite of lots of small and/or degraded segments:</p>
+
+<pre><code>    my $indexer = Lucy::Index::Indexer-&gt;new( 
+        index =&gt; &#39;/path/to/index&#39;, 
+        # manager =&gt; NoMergeManager-&gt;new,
+    );
+    ...
+    $indexer-&gt;commit;</code></pre>
+
+<h1 id="Acceptable-worst-case-update-time-slower-degradation">Acceptable worst-case update time, slower degradation</h1>
+
+<p>Never merging anything at all in the main indexing process is probably overkill. Small segments are relatively cheap to merge; we just need to guard against the big rewrites.</p>
+
+<p>Setting a ceiling on the number of documents in the segments to be recycled allows us to avoid a mass proliferation of tiny, single-document segments, while still offering decent worst-case update speed:</p>
+
+<pre><code>    package LightMergeManager;
+    use base qw( Lucy::Index::IndexManager );
+    
+    sub recycle {
+        my $self = shift;
+        my $seg_readers = $self-&gt;SUPER::recycle(@_);
+        @$seg_readers = grep { $_-&gt;doc_max &lt; 10 } @$seg_readers;
+        return $seg_readers;
+    }</code></pre>
+
+<p>However, we still have to consolidate every once in a while, and while that happens content updates will be locked out.</p>
+
+<h1 id="Background-merging">Background merging</h1>
+
+<p>If it&#39;s not acceptable to lock out updates while the index consolidation process runs, the alternative is to move the consolidation process out of band, using Lucy::Index::BackgroundMerger.</p>
+
+<p>It&#39;s never safe to have more than one Indexer attempting to modify the content of an index at the same time, but a BackgroundMerger and an Indexer can operate simultaneously:</p>
+
+<pre><code>    # Indexing process.
+    use Scalar::Util qw( blessed );
+    my $retries = 0;
+    while (1) {
+        eval {
+            my $indexer = Lucy::Index::Indexer-&gt;new(
+                    index =&gt; &#39;/path/to/index&#39;,
+                    manager =&gt; LightMergeManager-&gt;new,
+                );
+            $indexer-&gt;add_doc($doc);
+            $indexer-&gt;commit;
+        };
+        last unless $@;
+        if ( blessed($@) and $@-&gt;isa(&quot;Lucy::Store::LockErr&quot;) ) {
+            # Catch LockErr.
+            warn &quot;Couldn&#39;t get lock ($retries retries)&quot;;
+            $retries++;
+        }
+        else {
+            die &quot;Write failed: $@&quot;;
+        }
+    }
+
+    # Background merge process.
+    my $manager = Lucy::Index::IndexManager-&gt;new;
+    $manager-&gt;set_write_lock_timeout(60_000);
+    my $bg_merger = Lucy::Index::BackgroundMerger-&gt;new(
+        index   =&gt; &#39;/path/to/index&#39;,
+        manager =&gt; $manager,
+    );
+    $bg_merger-&gt;commit;</code></pre>
+
+<p>The exception handling code becomes useful once you have more than one index modification process happening simultaneously. By default, Indexer tries several times to acquire a write lock over the span of one second, then holds it until commit() completes. BackgroundMerger handles most of its work without the write lock, but it does need it briefly once at the beginning and once again near the end. Under normal loads, the internal retry logic will resolve conflicts, but if it&#39;s not acceptable to miss an insert, you probably want to catch LockErr exceptions thrown by Indexer. In contrast, a LockErr from BackgroundMerger probably just needs to be logged.</p>
+
+</body>
+</html>
+

Added: websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/DevGuide.html
==============================================================================
--- websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/DevGuide.html (added)
+++ websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/DevGuide.html Wed Aug 24 00:26:06 2011
@@ -0,0 +1,36 @@
+
+<html>
+<head>
+<title></title>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
+</head>
+<body>
+
+
+<h1 id="NAME">NAME</h1>
+
+<p>Lucy::Docs::DevGuide - Quick-start guide to hacking on Apache Lucy.</p>
+
+<h1 id="DESCRIPTION">DESCRIPTION</h1>
+
+<p>The Apache Lucy code base is organized into roughly four layers:</p>
+
+<pre><code>   * Charmonizer - compiler and OS configuration probing.
+   * Clownfish - header files.
+   * C - implementation files.
+   * Host - binding language.</code></pre>
+
+<p>Charmonizer is a configuration prober which writes a single header file, &quot;charmony.h&quot;, describing the build environment and facilitating cross-platform development. It&#39;s similar to Autoconf or Metaconfig, but written in pure C.</p>
+
+<p>The &quot;.cfh&quot; files within the Lucy core are Clownfish header files. Clownfish is a purpose-built, declaration-only language which superimposes a single-inheritance object model on top of C. It is specifically designed to co-exist happily with a variety of &quot;host&quot; languages and to allow limited run-time dynamic subclassing. For more information see the Clownfish docs, but if there&#39;s one thing you should know about Clownfish OO before you start hacking, it&#39;s that method calls are differentiated from functions by capitalization:</p>
+
+<pre><code>    Indexer_Add_Doc   &lt;-- Method, typically uses dynamic dispatch.
+    Indexer_add_doc   &lt;-- Function, always a direct invocation.</code></pre>
+
+<p>The C files within the Lucy core are where most of Lucy&#39;s low-level functionality lies. They implement the interface defined by the Clownfish header files.</p>
+
+<p>The C core is intentionally left incomplete, however; to be usable, it must be bound to a &quot;host&quot; language. (In this context, even C is considered a &quot;host&quot; which must implement the missing pieces and be &quot;bound&quot; to the core.) Some of the binding code is autogenerated by Clownfish on a spec customized for each language. Other pieces are hand-coded in either C (using the host&#39;s C API) or the host language itself.</p>
+
+</body>
+</html>
+

Added: websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/DocIDs.html
==============================================================================
--- websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/DocIDs.html (added)
+++ websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/DocIDs.html Wed Aug 24 00:26:06 2011
@@ -0,0 +1,34 @@
+
+<html>
+<head>
+<title></title>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
+</head>
+<body>
+
+
+<h1 id="NAME">NAME</h1>
+
+<p>Lucy::Docs::DocIDs - Characteristics of Apache Lucy document ids.</p>
+
+<h1 id="DESCRIPTION">DESCRIPTION</h1>
+
+<h2 id="Document-ids-are-signed-32-bit-integers">Document ids are signed 32-bit integers</h2>
+
+<p>Document ids in Apache Lucy start at 1. Because 0 is never a valid doc id, we can use it as a sentinel value:</p>
+
+<pre><code>    while ( my $doc_id = $posting_list-&gt;next ) {
+        ...
+    }</code></pre>
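The sentinel idiom above works because 0 is false in Perl, so the loop terminates cleanly when the posting list is exhausted. The same pattern can be sketched in any language where 0 is falsy; here is a toy Python stand-in (the `PostingList` class is hypothetical, not the actual Lucy API):

```python
class PostingList:
    """Toy posting-list iterator: next() returns the next matching
    doc id, or 0 once the list is exhausted."""
    def __init__(self, doc_ids):
        self._doc_ids = list(doc_ids)  # doc ids start at 1; 0 is reserved
        self._pos = 0

    def next(self):
        if self._pos < len(self._doc_ids):
            doc_id = self._doc_ids[self._pos]
            self._pos += 1
            return doc_id
        return 0  # sentinel: never a valid doc id

plist = PostingList([1, 5, 42])
seen = []
while True:
    doc_id = plist.next()
    if not doc_id:   # 0 is falsy, so it terminates the loop
        break
    seen.append(doc_id)
```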
+
+<h2 id="Document-ids-are-ephemeral">Document ids are ephemeral</h2>
+
+<p>The document ids used by Lucy are associated with a single index snapshot. The moment an index is updated, the mapping of document ids to documents is subject to change.</p>
+
+<p>Since IndexReader objects represent a point-in-time view of an index, document ids are guaranteed to remain static for the life of the reader. However, because they are not permanent, Lucy document ids cannot be used as foreign keys to locate records in external data sources. If you truly need a primary key field, you must define it and populate it yourself.</p>
+
+<p>Furthermore, the order of document ids does not tell you anything about the sequence in which documents were added to the index.</p>
+
+</body>
+</html>
+

Added: websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/FileFormat.html
==============================================================================
--- websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/FileFormat.html (added)
+++ websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/FileFormat.html Wed Aug 24 00:26:06 2011
@@ -0,0 +1,153 @@
+
+<html>
+<head>
+<title></title>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
+</head>
+<body>
+
+
+<h1 id="NAME">NAME</h1>
+
+<p>Lucy::Docs::FileFormat - Overview of index file format.</p>
+
+<h1 id="OVERVIEW">OVERVIEW</h1>
+
+<p>It is not necessary to understand the current implementation details of the index file format in order to use Apache Lucy effectively, but it may be helpful if you are interested in tweaking for high performance, exotic usage, or debugging and development.</p>
+
+<p>On a file system, an index is a directory. The files inside have a hierarchical relationship: an index is made up of &quot;segments&quot;, each of which is an independent inverted index with its own subdirectory; each segment is made up of several component parts.</p>
+
+<pre><code>    [index]--|
+             |--snapshot_XXX.json
+             |--schema_XXX.json
+             |--write.lock
+             |
+             |--seg_1--|
+             |         |--segmeta.json
+             |         |--cfmeta.json
+             |         |--cf.dat-------|
+             |                         |--[lexicon]
+             |                         |--[postings]
+             |                         |--[documents]
+             |                         |--[highlight]
+             |                         |--[deletions]
+             |
+             |--seg_2--|
+             |         |--segmeta.json
+             |         |--cfmeta.json
+             |         |--cf.dat-------|
+             |                         |--[lexicon]
+             |                         |--[postings]
+             |                         |--[documents]
+             |                         |--[highlight]
+             |                         |--[deletions]
+             |
+             |--[...]--| </code></pre>
+
+<h1 id="Write-once-philosophy">Write-once philosophy</h1>
+
+<p>All segment directory names consist of the string &quot;seg_&quot; followed by a number in base 36: seg_1, seg_5m, seg_p9s2 and so on, with higher numbers indicating more recent segments. Once a segment is finished and committed, its name is never re-used and its files are never modified.</p>
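The base-36 naming scheme can be illustrated with a short sketch (for illustration only, not Lucy's actual implementation):

```python
import string

DIGITS = string.digits + string.ascii_lowercase  # 0-9 then a-z

def seg_name(n):
    """Render a positive segment number as a base-36 'seg_XXX' name."""
    assert n > 0
    chars = []
    while n:
        n, rem = divmod(n, 36)
        chars.append(DIGITS[rem])
    return "seg_" + "".join(reversed(chars))

# seg_name(1) -> "seg_1"; higher segment numbers yield "later" names
# such as seg_name(202) -> "seg_5m".
```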
+
+<p>Old segments become obsolete and can be removed when their data has been consolidated into new segments during the process of segment merging and optimization. A fully-optimized index has only one segment.</p>
+
+<h1 id="Top-level-entries">Top-level entries</h1>
+
+<p>There are a handful of &quot;top-level&quot; files and directories which belong to the entire index rather than to a particular segment.</p>
+
+<h2 id="snapshot_XXX.json">snapshot_XXX.json</h2>
+
+<p>A &quot;snapshot&quot; file, e.g. <code>snapshot_m7p.json</code>, is a list of index files and directories. Because index files, once written, are never modified, the list of entries in a snapshot defines a point-in-time view of the data in an index.</p>
+
+<p>Like segment directories, snapshot files also utilize the unique-base-36-number naming convention; the higher the number, the more recent the file. The appearance of a new snapshot file within the index directory constitutes an index update. While a new segment is being written, new files may be added to the index directory, but until a new snapshot file gets written, a Searcher opening the index for reading won&#39;t know about them.</p>
+
+<h2 id="schema_XXX.json">schema_XXX.json</h2>
+
+<p>The schema file is a Schema object describing the index&#39;s format, serialized as JSON. It, too, is versioned, and a given snapshot file will reference one and only one schema file.</p>
+
+<h2 id="locks">locks</h2>
+
+<p>By default, only one indexing process may safely modify the index at any given time. Processes reserve an index by laying claim to the <code>write.lock</code> file within the <code>locks/</code> directory. A smattering of other lock files may be used from time to time, as well.</p>
+
+<h1 id="A-segments-component-parts">A segment&#39;s component parts</h1>
+
+<p>By default, each segment has up to five logical components: lexicon, postings, document storage, highlight data, and deletions. Binary data from these components gets stored in virtual files within the &quot;cf.dat&quot; compound file; metadata is stored in a shared &quot;segmeta.json&quot; file.</p>
+
+<h2 id="segmeta.json">segmeta.json</h2>
+
+<p>The segmeta.json file is a central repository for segment metadata. In addition to information such as document counts and field numbers, it also warehouses arbitrary metadata on behalf of individual index components.</p>
+
+<h2 id="Lexicon">Lexicon</h2>
+
+<p>Each indexed field gets its own lexicon in each segment. The exact files involved depend on the field&#39;s type, but generally speaking there will be two parts. First, there&#39;s a primary <code>lexicon-XXX.dat</code> file which houses a complete term list associating terms with corpus frequency statistics, postings file locations, etc. Second, one or more &quot;lexicon index&quot; files may be present which contain periodic samples from the primary lexicon file to facilitate fast lookups.</p>
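The sampled-index idea can be sketched as follows (a hypothetical in-memory layout, purely for illustration): binary-search the small sample to find a starting point, then scan at most one interval of the full term list.

```python
import bisect

def find_term(term, full_lexicon, sample_interval=4):
    """Locate a term using a periodic sample of a sorted term list.

    full_lexicon: sorted list of (term, info) tuples, standing in for
    the primary lexicon file's complete term list.
    """
    # The "lexicon index": every Nth term plus its position in the list.
    sample = [(full_lexicon[i][0], i)
              for i in range(0, len(full_lexicon), sample_interval)]
    # Binary-search the small sample to find where to start scanning.
    keys = [t for t, _ in sample]
    i = bisect.bisect_right(keys, term) - 1
    start = sample[i][1] if i >= 0 else 0
    # Scan forward through at most one interval of the full lexicon.
    for t, info in full_lexicon[start:start + sample_interval]:
        if t == term:
            return info
    return None

lexicon = [("apple", 1), ("bird", 2), ("cat", 3), ("dog", 4),
           ("eel", 5), ("fox", 6), ("freedom", 7), ("goat", 8)]
```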
+
+<h2 id="Postings">Postings</h2>
+
+<p>&quot;Posting&quot; is a technical term from the field of <a href="../../Lucy/Docs/IRTheory.html">information retrieval</a>, defined as a single instance of one term indexing one document. If you are looking at the index in the back of a book, and you see that &quot;freedom&quot; is referenced on pages 8, 86, and 240, that would be three postings, which taken together form a &quot;posting list&quot;. The same terminology applies to an index in electronic form.</p>
+
+<p>Each segment has one postings file per indexed field. When a search is performed for a single term, first that term is looked up in the lexicon. If the term exists in the segment, the record in the lexicon will contain information about which postings file to look at and where to look.</p>
+
+<p>The first thing any posting record tells you is a document id. By iterating over all the postings associated with a term, you can find all the documents that match that term, a process which is analogous to looking up page numbers in a book&#39;s index. However, each posting record typically contains other information in addition to document id, e.g. the positions at which the term occurs within the field.</p>
+
+<h2 id="Documents">Documents</h2>
+
+<p>The document storage section is a simple database, organized into two files:</p>
+
+<ul>
+
+<li><p><b>documents.dat</b> - Serialized documents.</p>
+
+</li>
+<li><p><b>documents.ix</b> - Document storage index, a solid array of 64-bit integers where each integer location corresponds to a document id, and the value at that location points at a file position in the documents.dat file.</p>
+
+</li>
+</ul>
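The fixed-width layout makes record lookup constant-time: the doc id itself is an index into a solid array of 64-bit pointers. Here is an in-memory sketch of the scheme (the byte order and record encoding are made up for the example; the real files' exact encoding is not specified here):

```python
import struct

# Build toy equivalents of documents.dat and documents.ix in memory.
docs = {1: b'{"title":"one"}', 2: b'{"title":"two"}'}

dat = b""
offsets = [0]                     # slot 0 is unused (doc ids start at 1)
for doc_id in sorted(docs):
    offsets.append(len(dat))      # position where this doc's record begins
    dat += docs[doc_id]
offsets.append(len(dat))          # end sentinel so lengths can be computed

ix = b"".join(struct.pack(">q", off) for off in offsets)

def fetch(doc_id):
    """Read the 64-bit pointer at slot doc_id, then slice the data file."""
    start, end = struct.unpack_from(">qq", ix, doc_id * 8)
    return dat[start:end]
```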
+
+<h2 id="Highlight-data">Highlight data</h2>
+
+<p>The files which store data used for excerpting and highlighting are organized similarly to the files used to store documents.</p>
+
+<ul>
+
+<li><p><b>highlight.dat</b> - Chunks of serialized highlight data, one per doc id.</p>
+
+</li>
+<li><p><b>highlight.ix</b> - Highlight data index -- as with the <code>documents.ix</code> file, a solid array of 64-bit file pointers.</p>
+
+</li>
+</ul>
+
+<h2 id="Deletions">Deletions</h2>
+
+<p>When a document is &quot;deleted&quot; from a segment, it is not actually purged right away; it is merely marked as &quot;deleted&quot; via a deletions file. Deletions files contain bit vectors with one bit for each document in the segment; if bit #254 is set then document 254 is deleted, and if that document turns up in a search it will be masked out.</p>
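The bit-vector scheme can be sketched like this (illustration only, not the on-disk format):

```python
class Deletions:
    """One bit per document in the segment; a set bit means 'deleted'."""
    def __init__(self, doc_count):
        self._bits = bytearray((doc_count + 8) // 8)

    def delete(self, doc_id):
        self._bits[doc_id >> 3] |= 1 << (doc_id & 7)

    def is_deleted(self, doc_id):
        return bool(self._bits[doc_id >> 3] & (1 << (doc_id & 7)))

dels = Deletions(1000)
dels.delete(254)            # set bit #254: document 254 is now deleted
# At search time, matching doc ids are masked against the bit vector:
hits = [d for d in (8, 86, 254) if not dels.is_deleted(d)]
```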
+
+<p>It is only when a segment&#39;s contents are rewritten to a new segment during the segment-merging process that deleted documents truly go away.</p>
+
+<h1 id="Compound-Files">Compound Files</h1>
+
+<p>If you peer inside an index directory, you won&#39;t actually find any files named &quot;documents.dat&quot;, &quot;highlight.ix&quot;, etc. unless there is an indexing process underway. What you will find instead is one &quot;cf.dat&quot; and one &quot;cfmeta.json&quot; file per segment.</p>
+
+<p>To minimize the need for file descriptors at search-time, all per-segment binary data files are concatenated together in &quot;cf.dat&quot; at the close of each indexing session. Information about where each file begins and ends is stored in <code>cfmeta.json</code>. When the segment is opened for reading, a single file descriptor per &quot;cf.dat&quot; file can be shared among several readers.</p>
+
+<h1 id="A-Typical-Search">A Typical Search</h1>
+
+<p>Here&#39;s a simplified narrative, dramatizing how a search for &quot;freedom&quot; against a given segment plays out:</p>
+
+<ol>
+
+<li><p>The searcher asks the relevant Lexicon Index, &quot;Do you know anything about &#39;freedom&#39;?&quot; Lexicon Index replies, &quot;Can&#39;t say for sure, but if the main Lexicon file does, &#39;freedom&#39; is probably somewhere around byte 21008&quot;.</p>
+
+</li>
+<li><p>The main Lexicon tells the searcher &quot;One moment, let me scan our records... Yes, we have 2 documents which contain &#39;freedom&#39;. You&#39;ll find them in seg_6/postings-4.dat starting at byte 66991.&quot;</p>
+
+</li>
+<li><p>The Postings file says &quot;Yep, we have &#39;freedom&#39;, all right! Document id 40 has 1 &#39;freedom&#39;, and document 44 has 8. If you need to know more, like if any &#39;freedom&#39; is part of the phrase &#39;freedom of speech&#39;, ask me about positions!&quot;</p>
+
+</li>
+<li><p>If the searcher is only looking for &#39;freedom&#39; in isolation, that&#39;s where it stops. It now knows enough to assign the documents scores against &quot;freedom&quot;, with the 8-freedom document likely ranking higher than the single-freedom document.</p>
+
+</li>
+</ol>
+
+</body>
+</html>
+

Added: websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/FileLocking.html
==============================================================================
--- websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/FileLocking.html (added)
+++ websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/FileLocking.html Wed Aug 24 00:26:06 2011
@@ -0,0 +1,55 @@
+
+<html>
+<head>
+<title></title>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
+</head>
+<body>
+
+
+<h1 id="NAME">NAME</h1>
+
+<p>Lucy::Docs::FileLocking - Manage indexes on shared volumes.</p>
+
+<h1 id="SYNOPSIS">SYNOPSIS</h1>
+
+<pre><code>    use Sys::Hostname qw( hostname );
+    my $hostname = hostname() or die &quot;Can&#39;t get unique hostname&quot;;
+    my $manager = Lucy::Index::IndexManager-&gt;new( host =&gt; $hostname );
+
+    # Index time:
+    my $indexer = Lucy::Index::Indexer-&gt;new(
+        index   =&gt; &#39;/path/to/index&#39;,
+        manager =&gt; $manager,
+    );
+
+    # Search time:
+    my $reader = Lucy::Index::IndexReader-&gt;open(
+        index   =&gt; &#39;/path/to/index&#39;,
+        manager =&gt; $manager,
+    );
+    my $searcher = Lucy::Search::IndexSearcher-&gt;new( index =&gt; $reader );</code></pre>
+
+<h1 id="DESCRIPTION">DESCRIPTION</h1>
+
+<p>Normally, index locking is an invisible process. Exclusive write access is controlled via lockfiles within the index directory and problems only arise if multiple processes attempt to acquire the write lock simultaneously; search-time processes do not ordinarily require locking at all.</p>
+
+<p>On shared volumes, however, the default locking mechanism fails, and manual intervention becomes necessary.</p>
+
+<p>Both read and write applications accessing an index on a shared volume need to identify themselves with a unique <code>host</code> id, e.g. hostname or ip address. Knowing the host id makes it possible to tell which lockfiles belong to other machines and therefore must not be removed when the lockfile&#39;s pid number appears not to correspond to an active process.</p>
+
+<p>At index-time, the danger is that multiple indexing processes from different machines which fail to specify a unique <code>host</code> id can delete each others&#39; lockfiles and then attempt to modify the index at the same time, causing index corruption. The search-time problem is more complex.</p>
+
+<p>Once an index file is no longer listed in the most recent snapshot, Indexer attempts to delete it as part of a post-commit() cleanup routine. It is possible that at the moment an Indexer is deleting files which it believes are no longer needed, a Searcher referencing an earlier snapshot is in fact using them. The more often an index is updated or searched, the more likely it is that this conflict will arise from time to time.</p>
+
+<p>Ordinarily, the deletion attempts are not a problem. On a typical unix volume, the files will be deleted in name only: any process which holds an open filehandle against a given file will continue to have access, and the file won&#39;t actually get vaporized until the last filehandle is cleared. Thanks to &quot;delete on last close semantics&quot;, an Indexer can&#39;t truly delete the file out from underneath an active Searcher. On Windows, where file deletion fails whenever any process holds an open handle, the situation is different but still workable: Indexer just keeps retrying after each commit until deletion finally succeeds.</p>
+
+<p>On NFS, however, the system breaks, because NFS allows files to be deleted out from underneath active processes. Should this happen, the unlucky read process will crash with a &quot;Stale NFS filehandle&quot; exception.</p>
+
+<p>Under normal circumstances, it is neither necessary nor desirable for IndexReaders to secure read locks against an index, but for NFS we have to make an exception. LockFactory&#39;s make_shared_lock() method exists for this reason; supplying an IndexManager instance to IndexReader&#39;s constructor activates an internal locking mechanism using make_shared_lock() which prevents concurrent indexing processes from deleting files that are needed by active readers.</p>
+
+<p>Since shared locks are implemented using lockfiles located in the index directory (as are exclusive locks), reader applications must have write access for read locking to work. Stale lock files from crashed processes are ordinarily cleared away the next time the same machine -- as identified by the <code>host</code> parameter -- opens another IndexReader. (The classic technique of timing out lock files is not feasible because search processes may lie dormant indefinitely.) However, please be aware that if the last thing a given machine does is crash, lock files belonging to it may persist, preventing deletion of obsolete index data.</p>
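The stale-lock cleanup policy described above can be summarized in a few lines of Python (a sketch of the policy, not Lucy's implementation; `pid_is_alive` is a hypothetical liveness check):

```python
def should_remove_lockfile(lock_host, lock_pid, my_host, pid_is_alive):
    """Decide whether a stale lockfile may be cleared.

    Locks belonging to other machines must never be removed, because a
    pid number is only meaningful on the host that created the lock.
    """
    if lock_host != my_host:
        return False                   # another machine's lock: hands off
    return not pid_is_alive(lock_pid)  # our machine, dead process: stale

# Stand-in liveness check: only pid 4242 is "running" on this machine.
alive = lambda pid: pid in (4242,)
```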
+
+</body>
+</html>
+

Added: websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/IRTheory.html
==============================================================================
--- websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/IRTheory.html (added)
+++ websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/IRTheory.html Wed Aug 24 00:26:06 2011
@@ -0,0 +1,64 @@
+
+<html>
+<head>
+<title></title>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
+</head>
+<body>
+
+
+<h1 id="NAME">NAME</h1>
+
+<p>Lucy::Docs::IRTheory - Crash course in information retrieval.</p>
+
+<h1 id="ABSTRACT">ABSTRACT</h1>
+
+<p>Just enough Information Retrieval theory to find your way around Apache Lucy.</p>
+
+<h1 id="Terminology">Terminology</h1>
+
+<p>Lucy uses some terminology from the field of information retrieval which may be unfamiliar to many users. &quot;Document&quot; and &quot;term&quot; mean pretty much what you&#39;d expect them to, but others such as &quot;posting&quot; and &quot;inverted index&quot; need a formal introduction:</p>
+
+<ul>
+
+<li><p><i>document</i> - An atomic unit of retrieval.</p>
+
+</li>
+<li><p><i>term</i> - An attribute which describes a document.</p>
+
+</li>
+<li><p><i>posting</i> - One term indexing one document.</p>
+
+</li>
+<li><p><i>term list</i> - The complete list of terms which describe a document.</p>
+
+</li>
+<li><p><i>posting list</i> - The complete list of documents which a term indexes.</p>
+
+</li>
+<li><p><i>inverted index</i> - A data structure which maps from terms to documents.</p>
+
+</li>
+</ul>
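These definitions can be made concrete with a toy inverted index (a sketch for illustration, unrelated to Lucy's on-disk structures):

```python
from collections import defaultdict

docs = {
    1: "the quick brown fox",
    2: "the lazy dog",
    3: "the quick dog",
}

# Invert: map each term to the posting list of doc ids it indexes.
inverted = defaultdict(list)
for doc_id, text in docs.items():
    for term in dict.fromkeys(text.split()):  # unique terms, in order
        inverted[term].append(doc_id)

# Each (term, doc id) pair is one posting; inverted["quick"] is the
# posting list for "quick", and docs[1].split() is doc 1's term list.
```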
+
+<p>Since Lucy is a practical implementation of IR theory, it fleshes out these abstract, distilled definitions with useful traits. For instance, a &quot;posting&quot; in its most rarefied form is simply a term-document pairing; in Lucy, the class <a href="../../Lucy/Index/Posting/MatchPosting.html">Lucy::Index::Posting::MatchPosting</a> fills this role. However, by associating additional information with a posting, such as the number of times the term occurs in the document, we can turn it into a <a href="../../Lucy/Index/Posting/ScorePosting.html">ScorePosting</a>, making it possible to rank documents by relevance rather than merely listing documents which happen to match in no particular order.</p>
+
+<h1 id="TF-IDF-ranking-algorithm">TF/IDF ranking algorithm</h1>
+
+<p>Lucy uses a variant of the well-established &quot;Term Frequency / Inverse Document Frequency&quot; weighting scheme. A thorough treatment of TF/IDF is too ambitious for our present purposes, but in a nutshell, it means that...</p>
+
+<ul>
+
+<li><p>in a search for <code>skate park</code>, documents which score well for the comparatively rare term <code>skate</code> will rank higher than documents which score well for the more common term <code>park</code>.</p>
+
+</li>
+<li><p>a 10-word text which has one occurrence each of both <code>skate</code> and <code>park</code> will rank higher than a 1000-word text which also contains one occurrence of each.</p>
+
+</li>
+</ul>
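The flavor of the weighting can be captured in a few lines using the classic textbook formulation (not Lucy's exact scoring function):

```python
import math

def tf_idf(term_count, doc_length, doc_count, docs_with_term):
    """Classic TF/IDF: reward rare terms, normalize by document length."""
    tf = term_count / doc_length           # term frequency within the doc
    idf = math.log(doc_count / docs_with_term)  # rarity across the corpus
    return tf * idf

# In a 1000-doc corpus where "skate" is rarer than "park", a hit on
# "skate" contributes more weight than a hit on "park":
skate = tf_idf(1, 10, 1000, 5)
park = tf_idf(1, 10, 1000, 500)

# And a short text matching a term outscores a much longer one:
short = tf_idf(1, 10, 1000, 5)
long_ = tf_idf(1, 1000, 1000, 5)
```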
+
+<p>A web search for &quot;tf idf&quot; will turn up many excellent explanations of the algorithm.</p>
+
+</body>
+</html>
+

Added: websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Tutorial.html
==============================================================================
--- websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Tutorial.html (added)
+++ websites/staging/lucy/trunk/content/lucy/docs/perl/Lucy/Docs/Tutorial.html Wed Aug 24 00:26:06 2011
@@ -0,0 +1,64 @@
+
+<html>
+<head>
+<title></title>
+<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
+</head>
+<body>
+
+
+<h1 id="NAME">NAME</h1>
+
+<p>Lucy::Docs::Tutorial - Step-by-step introduction to Apache Lucy.</p>
+
+<h1 id="ABSTRACT">ABSTRACT</h1>
+
+<p>Explore Apache Lucy&#39;s basic functionality by starting with a minimalist CGI search app based on <a href="../../Lucy/Simple.html">Lucy::Simple</a> and transforming it, step by step, into an &quot;advanced search&quot; interface utilizing more flexible core modules like <a href="../../Lucy/Index/Indexer.html">Lucy::Index::Indexer</a> and <a href="../../Lucy/Search/IndexSearcher.html">Lucy::Search::IndexSearcher</a>.</p>
+
+<h1 id="DESCRIPTION">DESCRIPTION</h1>
+
+<h2 id="Chapters">Chapters</h2>
+
+<ul>
+
+<li><p><a href="../../Lucy/Docs/Tutorial/Simple.html">Lucy::Docs::Tutorial::Simple</a> - Build a bare-bones search app using <a href="../../Lucy/Simple.html">Lucy::Simple</a>.</p>
+
+</li>
+<li><p><a href="../../Lucy/Docs/Tutorial/BeyondSimple.html">Lucy::Docs::Tutorial::BeyondSimple</a> - Rebuild the app using core classes like <a href="../../Lucy/Index/Indexer.html">Indexer</a> and <a href="../../Lucy/Search/IndexSearcher.html">IndexSearcher</a> in place of Lucy::Simple.</p>
+
+</li>
+<li><p><a href="../../Lucy/Docs/Tutorial/FieldType.html">Lucy::Docs::Tutorial::FieldType</a> - Experiment with different field characteristics using subclasses of <a href="../../Lucy/Plan/FieldType.html">Lucy::Plan::FieldType</a>.</p>
+
+</li>
+<li><p><a href="../../Lucy/Docs/Tutorial/Analysis.html">Lucy::Docs::Tutorial::Analysis</a> - Examine how the choice of <a href="../../Lucy/Analysis/Analyzer.html">Lucy::Analysis::Analyzer</a> subclass affects search results.</p>
+
+</li>
+<li><p><a href="../../Lucy/Docs/Tutorial/Highlighter.html">Lucy::Docs::Tutorial::Highlighter</a> - Augment search results with highlighted excerpts.</p>
+
+</li>
+<li><p><a href="../../Lucy/Docs/Tutorial/QueryObjects.html">Lucy::Docs::Tutorial::QueryObjects</a> - Unlock advanced search features by using Query objects instead of query strings.</p>
+
+</li>
+</ul>
+
+<h2 id="Source-materials">Source materials</h2>
+
+<p>The source material used by the tutorial app -- a multi-text-file presentation of the United States constitution -- can be found in the <code>sample</code> directory at the root of the Lucy distribution, along with finished indexing and search apps.</p>
+
+<pre><code>    sample/indexer.pl        # indexing app
+    sample/search.cgi        # search app
+    sample/us_constitution   # corpus</code></pre>
+
+<h2 id="Conventions">Conventions</h2>
+
+<p>The user is expected to be familiar with OO Perl and basic CGI programming.</p>
+
+<p>The code in this tutorial assumes a Unix-flavored operating system and the Apache webserver, but will work with minor modifications on other setups.</p>
+
+<h1 id="SEE-ALSO">SEE ALSO</h1>
+
+<p>More advanced and esoteric subjects are covered in <a href="../../Lucy/Docs/Cookbook.html">Lucy::Docs::Cookbook</a>.</p>
+
+</body>
+</html>
+