Posted to commits@tika.apache.org by mi...@apache.org on 2012/09/05 16:31:09 UTC

svn commit: r1381198 [1/15] - in /tika/site: publish/ publish/0.10/ publish/0.5/ publish/0.6/ publish/0.7/ publish/0.8/ publish/0.9/ publish/1.0/ publish/1.1/ publish/1.2/ src/site/apt/0.10/ src/site/apt/1.0/ src/site/apt/1.1/ src/site/apt/1.2/

Author: mikemccand
Date: Wed Sep  5 14:31:06 2012
New Revision: 1381198

URL: http://svn.apache.org/viewvc?rev=1381198&view=rev
Log:
fix broken links and broken unicode characters

Modified:
    tika/site/publish/0.10/detection.html
    tika/site/publish/0.10/gettingstarted.html
    tika/site/publish/0.10/index.html
    tika/site/publish/0.10/parser.html
    tika/site/publish/0.10/parser_guide.html
    tika/site/publish/0.5/documentation.html
    tika/site/publish/0.5/gettingstarted.html
    tika/site/publish/0.6/gettingstarted.html
    tika/site/publish/0.6/parser.html
    tika/site/publish/0.7/detection.html
    tika/site/publish/0.7/gettingstarted.html
    tika/site/publish/0.7/parser.html
    tika/site/publish/0.7/parser_guide.html
    tika/site/publish/0.8/detection.html
    tika/site/publish/0.8/gettingstarted.html
    tika/site/publish/0.8/parser.html
    tika/site/publish/0.8/parser_guide.html
    tika/site/publish/0.9/detection.html
    tika/site/publish/0.9/gettingstarted.html
    tika/site/publish/0.9/parser.html
    tika/site/publish/0.9/parser_guide.html
    tika/site/publish/1.0/detection.html
    tika/site/publish/1.0/formats.html
    tika/site/publish/1.0/gettingstarted.html
    tika/site/publish/1.0/index.html
    tika/site/publish/1.0/parser.html
    tika/site/publish/1.0/parser_guide.html
    tika/site/publish/1.1/detection.html
    tika/site/publish/1.1/formats.html
    tika/site/publish/1.1/gettingstarted.html
    tika/site/publish/1.1/index.html
    tika/site/publish/1.1/parser.html
    tika/site/publish/1.1/parser_guide.html
    tika/site/publish/1.2/detection.html
    tika/site/publish/1.2/formats.html
    tika/site/publish/1.2/gettingstarted.html
    tika/site/publish/1.2/index.html
    tika/site/publish/1.2/parser.html
    tika/site/publish/1.2/parser_guide.html
    tika/site/publish/dependencies.html
    tika/site/publish/dependency-management.html
    tika/site/publish/distribution-management.html
    tika/site/publish/download.html
    tika/site/publish/integration.html
    tika/site/publish/plugin-management.html
    tika/site/publish/plugins.html
    tika/site/publish/project-info.html
    tika/site/publish/project-summary.html
    tika/site/src/site/apt/0.10/index.apt
    tika/site/src/site/apt/1.0/formats.apt
    tika/site/src/site/apt/1.0/parser.apt
    tika/site/src/site/apt/1.1/formats.apt
    tika/site/src/site/apt/1.1/index.apt
    tika/site/src/site/apt/1.1/parser.apt
    tika/site/src/site/apt/1.1/parser_guide.apt
    tika/site/src/site/apt/1.2/formats.apt
    tika/site/src/site/apt/1.2/index.apt
    tika/site/src/site/apt/1.2/parser.apt
    tika/site/src/site/apt/1.2/parser_guide.apt

Modified: tika/site/publish/0.10/detection.html
URL: http://svn.apache.org/viewvc/tika/site/publish/0.10/detection.html?rev=1381198&r1=1381197&r2=1381198&view=diff
==============================================================================
--- tika/site/publish/0.10/detection.html (original)
+++ tika/site/publish/0.10/detection.html Wed Sep  5 14:31:06 2012
@@ -85,8 +85,7 @@
       </div>
       <div id="content">
         <!-- Licensed to the Apache Software Foundation (ASF) under one or more --><!-- contributor license agreements.  See the NOTICE file distributed with --><!-- this work for additional information regarding copyright ownership. --><!-- The ASF licenses this file to You under the Apache License, Version 2.0 --><!-- (the "License"); you may not use this file except in compliance with --><!-- the License.  You may obtain a copy of the License at --><!--  --><!-- http://www.apache.org/licenses/LICENSE-2.0 --><!--  --><!-- Unless required by applicable law or agreed to in writing, software --><!-- distributed under the License is distributed on an "AS IS" BASIS, --><!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. --><!-- See the License for the specific language governing permissions and --><!-- limitations under the License. --><div class="section"><h2>Content Detection<a name="Content_Detection"></a></h2><p>This page gives you information on h
 ow content and language detection works with Apache Tika, and how to tune the behaviour of Tika.</p><ul><li><a href="#Content_Detection">Content Detection</a><ul><li><a href="#The_Detector_Interface">The Detector Interface</a></li><li><a href="#Mime_Magic_Detction">Mime Magic Detction</a></li><li><a href="#Resource_Name_Based_Detection">Resource Name Based Detection</a></li><li><a href="#Known_Content_Type_Detection">Known Content Type &quot;Detection</a></li><li><a href="#The_default_Mime_Types_Detector">The default Mime Types Detector</a></li><li><a href="#Container_Aware_Detection">Container Aware Detection</a></li><li><a href="#Language_Detection">Language Detection</a></li></ul></li></ul><div class="section"><h3><a name="The_Detector_Interface">The Detector Interface</a></h3><p>The <a href="./api/org/apache/tika/detect/Detector.html">org.apache.tika.detect.Detector</a> interface is the basis for most of the content type detection in Apache Tika. All the different ways o
 f detecting content all implement the same common method:</p><div><pre>MediaType detect(java.io.InputStream input,
-                 Metadata metadata) throws java.io.IOException
-</pre></div><p>The <tt>detect</tt> method takes the stream to inspect, and a <tt>Metadata</tt> object that holds any additional information on the content. The detector will return a <a href="./api/org/apache/tika/mime/MediaType.html">MediaType</a> object describing its best guess as to the type of the file.</p><p>In general, only two keys on the Metadata object are used by Detectors. These are <tt>Metadata.RESOURCE_NAME_KEY</tt> which should hold the name of the file (where known), and <tt>Metadata.CONTENT_TYPE</tt> which should hold the advertised content type of the file (eg from a webserver or a content repository).</p></div><div class="section"><h3><a name="Mime_Magic_Detction">Mime Magic Detction</a></h3><p>By looking for special (&quot;magic&quot;) patterns of bytes near the start of the file, it is often possible to detect the type of the file. For some file types, this is a simple process. For others, typically container based formats, the magic detection may not be
  enough. (More detail on detecting container formats below)</p><p>Tika is able to make use of a a mime magic info file, in the <a class="externalLink" href="http://www.freedesktop.org/standards/shared-mime-info">Freedesktop MIME-info</a> format to peform mime magic detection.</p><p>This is provided within Tika by <a href="./api/org/apache/tika/detect/MagicDetector.html">org.apache.tika.detect.MagicDetector</a>. It is most commonly access via <a href="./api/org/apache/tika/mime/MimeTypes.html">org.apache.tika.mime.MimeTypes</a>, normally sourced from the <tt>tika-mimetypes.xml</tt> file.</p></div><div class="section"><h3><a name="Resource_Name_Based_Detection">Resource Name Based Detection</a></h3><p>Where the name of the file is known, it is sometimes possible to guess the file type from the name or extension. Within the <tt>tika-mimetypes.xml</tt> file is a list of patterns which are used to identify the type from the filename.</p><p>However, because files may be renamed, t
 his method of detection is quick but not always as accurate.</p><p>This is provided within Tika by <a href="./api/org/apache/tika/detect/NameDetector.html">org.apache.tika.detect.NameDetector</a>.</p></div><div class="section"><h3><a name="Known_Content_Type_Detection">Known Content Type &quot;Detection</a></h3><p>Sometimes, the mime type for a file is already known, such as when downloading from a webserver, or when retrieving from a content store. This information can be used by detectors, such as <a href="./api/org/apache/tika/mime/MimeTypes.html">org.apache.tika.mime.MimeTypes</a>,</p></div><div class="section"><h3><a name="The_default_Mime_Types_Detector">The default Mime Types Detector</a></h3><p>By default, the mime type detection in Tika is provided by <a href="./api/org/apache/tika/mime/MimeTypes.html">org.apache.tika.mime.MimeTypes</a>. This detector makes use of <tt>tika-mimetypes.xml</tt> to power magic based and filename based detection.</p><p>Firstly, magic bas
 ed detection is used on the start of the file. If the file is an XML file, then the start of the XML is processed to look for root elements. Next, if available, the filename (from <tt>Metadata.RESOURCE_NAME_KEY</tt>) is then used to improve the detail of the detection, such as when magic detects a text file, and the filename hints it's really a CSV. Finally, if available, the supplied content type (from <tt>Metadata.CONTENT_TYPE</tt>) is used to further refine the type.</p></div><div class="section"><h3><a name="Container_Aware_Detection">Container Aware Detection</a></h3><p>Several common file formats are actually held within a common container format. One example is the PowerPoint .ppt and Word .doc formats, which are both held within an OLE2 container. Another is Apple iWork formats, which are actually a series of XML files within a Zip file.</p><p>Using magic detection, it is easy to spot that a given file is an OLE2 document, or a Zip file. Using magic detection alone, 
 it is very difficult (and often impossible) to tell what kind of file lives inside the container.</p><p>For some use cases, speed is important, so having a quick way to know the container type is sufficient. For other cases however, you don't mind spending a bit of time (and memory!) processing the container to get a more accurate answer on its contents. For these cases, a container aware detector should be used.</p><p>Tika provides a wrapping detector in the parsers bundle, of <a href="./api/org/apache/tika/detect/ContainerAwareDetector.html">org.apache.tika.detect.ContainerAwareDetector</a>. This detector will check for certain known containers, and if found, will open them and detect the appropriate type based on the contents. If the file isn't a known container, it will fall back to another detector for the answer (most commonly the default <tt>MimeTypes</tt> detector)</p><p>Because this detector needs to read the whole file to process the container, it must be used with
  a <a href="./api/org/apache/tika/io/TikaInputStream.html">org.apache.tika.io.TikaInputStream</a>. If called with a regular <tt>InputStream</tt>, then all work will be done by the fallback detector.</p><p>For more information on container formats and Tika, see <a class="externalLink" href="http://wiki.apache.org/tika/MetadataDiscussion"></a></p></div><div class="section"><h3><a name="Language_Detection">Language Detection</a></h3><p>Tika is able to help identify the language of a piece of text, which is useful when extracting text from document formats which do not include language information in their metadata.</p><p>The language detection is provided by <a href="./api/org/apache/tika/language/LanguageIdentifier.html">org.apache.tika.language.LanguageIdentifier</a></p></div></div>
+                 Metadata metadata) throws java.io.IOException</pre></div><p>The <tt>detect</tt> method takes the stream to inspect, and a <tt>Metadata</tt> object that holds any additional information on the content. The detector will return a <a href="./api/org/apache/tika/mime/MediaType.html">MediaType</a> object describing its best guess as to the type of the file.</p><p>In general, only two keys on the Metadata object are used by Detectors. These are <tt>Metadata.RESOURCE_NAME_KEY</tt> which should hold the name of the file (where known), and <tt>Metadata.CONTENT_TYPE</tt> which should hold the advertised content type of the file (eg from a webserver or a content repository).</p></div><div class="section"><h3><a name="Mime_Magic_Detction">Mime Magic Detction</a></h3><p>By looking for special (&quot;magic&quot;) patterns of bytes near the start of the file, it is often possible to detect the type of the file. For some file types, this is a simple process. For others, typ
 ically container based formats, the magic detection may not be enough. (More detail on detecting container formats below)</p><p>Tika is able to make use of a a mime magic info file, in the <a class="externalLink" href="http://www.freedesktop.org/standards/shared-mime-info">Freedesktop MIME-info</a> format to peform mime magic detection.</p><p>This is provided within Tika by <a href="./api/org/apache/tika/detect/MagicDetector.html">org.apache.tika.detect.MagicDetector</a>. It is most commonly access via <a href="./api/org/apache/tika/mime/MimeTypes.html">org.apache.tika.mime.MimeTypes</a>, normally sourced from the <tt>tika-mimetypes.xml</tt> file.</p></div><div class="section"><h3><a name="Resource_Name_Based_Detection">Resource Name Based Detection</a></h3><p>Where the name of the file is known, it is sometimes possible to guess the file type from the name or extension. Within the <tt>tika-mimetypes.xml</tt> file is a list of patterns which are used to identify the type fro
 m the filename.</p><p>However, because files may be renamed, this method of detection is quick but not always as accurate.</p><p>This is provided within Tika by <a href="./api/org/apache/tika/detect/NameDetector.html">org.apache.tika.detect.NameDetector</a>.</p></div><div class="section"><h3><a name="Known_Content_Type_Detection">Known Content Type &quot;Detection</a></h3><p>Sometimes, the mime type for a file is already known, such as when downloading from a webserver, or when retrieving from a content store. This information can be used by detectors, such as <a href="./api/org/apache/tika/mime/MimeTypes.html">org.apache.tika.mime.MimeTypes</a>,</p></div><div class="section"><h3><a name="The_default_Mime_Types_Detector">The default Mime Types Detector</a></h3><p>By default, the mime type detection in Tika is provided by <a href="./api/org/apache/tika/mime/MimeTypes.html">org.apache.tika.mime.MimeTypes</a>. This detector makes use of <tt>tika-mimetypes.xml</tt> to power magi
 c based and filename based detection.</p><p>Firstly, magic based detection is used on the start of the file. If the file is an XML file, then the start of the XML is processed to look for root elements. Next, if available, the filename (from <tt>Metadata.RESOURCE_NAME_KEY</tt>) is then used to improve the detail of the detection, such as when magic detects a text file, and the filename hints it's really a CSV. Finally, if available, the supplied content type (from <tt>Metadata.CONTENT_TYPE</tt>) is used to further refine the type.</p></div><div class="section"><h3><a name="Container_Aware_Detection">Container Aware Detection</a></h3><p>Several common file formats are actually held within a common container format. One example is the PowerPoint .ppt and Word .doc formats, which are both held within an OLE2 container. Another is Apple iWork formats, which are actually a series of XML files within a Zip file.</p><p>Using magic detection, it is easy to spot that a given file is 
 an OLE2 document, or a Zip file. Using magic detection alone, it is very difficult (and often impossible) to tell what kind of file lives inside the container.</p><p>For some use cases, speed is important, so having a quick way to know the container type is sufficient. For other cases however, you don't mind spending a bit of time (and memory!) processing the container to get a more accurate answer on its contents. For these cases, a container aware detector should be used.</p><p>Tika provides a wrapping detector in the parsers bundle, of <a href="./api/org/apache/tika/detect/ContainerAwareDetector.html">org.apache.tika.detect.ContainerAwareDetector</a>. This detector will check for certain known containers, and if found, will open them and detect the appropriate type based on the contents. If the file isn't a known container, it will fall back to another detector for the answer (most commonly the default <tt>MimeTypes</tt> detector)</p><p>Because this detector needs to read
  the whole file to process the container, it must be used with a <a href="./api/org/apache/tika/io/TikaInputStream.html">org.apache.tika.io.TikaInputStream</a>. If called with a regular <tt>InputStream</tt>, then all work will be done by the fallback detector.</p><p>For more information on container formats and Tika, see <a class="externalLink" href="http://wiki.apache.org/tika/MetadataDiscussion"></a></p></div><div class="section"><h3><a name="Language_Detection">Language Detection</a></h3><p>Tika is able to help identify the language of a piece of text, which is useful when extracting text from document formats which do not include language information in their metadata.</p><p>The language detection is provided by <a href="./api/org/apache/tika/language/LanguageIdentifier.html">org.apache.tika.language.LanguageIdentifier</a></p></div></div>
       </div>
       <div id="sidebar">
         <div id="navigation">

Modified: tika/site/publish/0.10/gettingstarted.html
URL: http://svn.apache.org/viewvc/tika/site/publish/0.10/gettingstarted.html?rev=1381198&r1=1381197&r2=1381198&view=diff
==============================================================================
--- tika/site/publish/0.10/gettingstarted.html (original)
+++ tika/site/publish/0.10/gettingstarted.html Wed Sep  5 14:31:06 2012
@@ -84,18 +84,15 @@
                 width="387" height="100"/></a>
       </div>
       <div id="content">
-        <!-- Licensed to the Apache Software Foundation (ASF) under one or more --><!-- contributor license agreements.  See the NOTICE file distributed with --><!-- this work for additional information regarding copyright ownership. --><!-- The ASF licenses this file to You under the Apache License, Version 2.0 --><!-- (the "License"); you may not use this file except in compliance with --><!-- the License.  You may obtain a copy of the License at --><!--  --><!-- http://www.apache.org/licenses/LICENSE-2.0 --><!--  --><!-- Unless required by applicable law or agreed to in writing, software --><!-- distributed under the License is distributed on an "AS IS" BASIS, --><!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. --><!-- See the License for the specific language governing permissions and --><!-- limitations under the License. --><div class="section"><h2>Getting Started with Apache Tika<a name="Getting_Started_with_Apache_Tika"></a></h2><p>This d
 ocument describes how to build Apache Tika from sources and how to start using Tika in an application.</p></div><div class="section"><h2>Getting and building the sources<a name="Getting_and_building_the_sources"></a></h2><p>To build Tika from sources you first need to either <a href="../download.html">download</a> a source release or <a href="../source-repository.html">checkout</a> the latest sources from version control.</p><p>Once you have the sources, you can build them using the <a class="externalLink" href="http://maven.apache.org/">Maven 2</a> build system. Executing the following command in the base directory will build the sources and install the resulting artifacts in your local Maven repository.</p><div><pre>mvn install
-</pre></div><p>See the Maven documentation for more information about the available build options.</p><p>Note that you need Java 5 or higher to build Tika.</p></div><div class="section"><h2>Build artifacts<a name="Build_artifacts"></a></h2><p>The Tika 0.10 build consists of a number of components and produces the following main binaries:</p><dl><dt>tika-core/target/tika-core-0.10.jar</dt><dd> Tika core library. Contains the core interfaces and classes of Tika, but none of the parser implementations. Depends only on Java 5.</dd><dt>tika-parsers/target/tika-parsers-0.10.jar</dt><dd> Tika parsers. Collection of classes that implement the Tika Parser interface based on various external parser libraries.</dd><dt>tika-app/target/tika-app-0.10.jar</dt><dd> Tika application. Combines the above libraries and all the external parser libraries into a single runnable jar with a GUI and a command line interface.</dd><dt>tika-bundle/target/tika-bundle-0.10.jar</dt><dd> Tika bundle. An OSG
 i bundle that includes everything you need to use all Tika functionality in an OSGi environment.</dd></dl></div><div class="section"><h2>Using Tika as a Maven dependency<a name="Using_Tika_as_a_Maven_dependency"></a></h2><p>The core library, tika-core, contains the key interfaces and classes of Tika and can be used by itself if you don't need the full set of parsers from the tika-parsers component. The tika-core dependency looks like this:</p><div><pre>  &lt;dependency&gt;
+        <!-- Licensed to the Apache Software Foundation (ASF) under one or more --><!-- contributor license agreements.  See the NOTICE file distributed with --><!-- this work for additional information regarding copyright ownership. --><!-- The ASF licenses this file to You under the Apache License, Version 2.0 --><!-- (the "License"); you may not use this file except in compliance with --><!-- the License.  You may obtain a copy of the License at --><!--  --><!-- http://www.apache.org/licenses/LICENSE-2.0 --><!--  --><!-- Unless required by applicable law or agreed to in writing, software --><!-- distributed under the License is distributed on an "AS IS" BASIS, --><!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. --><!-- See the License for the specific language governing permissions and --><!-- limitations under the License. --><div class="section"><h2>Getting Started with Apache Tika<a name="Getting_Started_with_Apache_Tika"></a></h2><p>This d
 ocument describes how to build Apache Tika from sources and how to start using Tika in an application.</p></div><div class="section"><h2>Getting and building the sources<a name="Getting_and_building_the_sources"></a></h2><p>To build Tika from sources you first need to either <a href="../download.html">download</a> a source release or <a href="../source-repository.html">checkout</a> the latest sources from version control.</p><p>Once you have the sources, you can build them using the <a class="externalLink" href="http://maven.apache.org/">Maven 2</a> build system. Executing the following command in the base directory will build the sources and install the resulting artifacts in your local Maven repository.</p><div><pre>mvn install</pre></div><p>See the Maven documentation for more information about the available build options.</p><p>Note that you need Java 5 or higher to build Tika.</p></div><div class="section"><h2>Build artifacts<a name="Build_artifacts"></a></h2><p>The Tik
 a 0.10 build consists of a number of components and produces the following main binaries:</p><dl><dt>tika-core/target/tika-core-0.10.jar</dt><dd> Tika core library. Contains the core interfaces and classes of Tika, but none of the parser implementations. Depends only on Java 5.</dd><dt>tika-parsers/target/tika-parsers-0.10.jar</dt><dd> Tika parsers. Collection of classes that implement the Tika Parser interface based on various external parser libraries.</dd><dt>tika-app/target/tika-app-0.10.jar</dt><dd> Tika application. Combines the above libraries and all the external parser libraries into a single runnable jar with a GUI and a command line interface.</dd><dt>tika-bundle/target/tika-bundle-0.10.jar</dt><dd> Tika bundle. An OSGi bundle that includes everything you need to use all Tika functionality in an OSGi environment.</dd></dl></div><div class="section"><h2>Using Tika as a Maven dependency<a name="Using_Tika_as_a_Maven_dependency"></a></h2><p>The core library, tika-cor
 e, contains the key interfaces and classes of Tika and can be used by itself if you don't need the full set of parsers from the tika-parsers component. The tika-core dependency looks like this:</p><div><pre>  &lt;dependency&gt;
     &lt;groupId&gt;org.apache.tika&lt;/groupId&gt;
     &lt;artifactId&gt;tika-core&lt;/artifactId&gt;
     &lt;version&gt;0.10&lt;/version&gt;
-  &lt;/dependency&gt;
-</pre></div><p>If you want to use Tika to parse documents (instead of simply detecting document types, etc.), you'll want to depend on tika-parsers instead: </p><div><pre>  &lt;dependency&gt;
+  &lt;/dependency&gt;</pre></div><p>If you want to use Tika to parse documents (instead of simply detecting document types, etc.), you'll want to depend on tika-parsers instead: </p><div><pre>  &lt;dependency&gt;
     &lt;groupId&gt;org.apache.tika&lt;/groupId&gt;
     &lt;artifactId&gt;tika-parsers&lt;/artifactId&gt;
     &lt;version&gt;0.10&lt;/version&gt;
-  &lt;/dependency&gt;
-</pre></div><p>Note that adding this dependency will introduce a number of transitive dependencies to your project, including one on tika-core. You need to make sure that these dependencies won't conflict with your existing project dependencies. The listing below shows all the compile-scope dependencies of tika-parsers in the Tika 0.10 release.</p><div><pre>org.apache.tika:tika-parsers:bundle:0.10
+  &lt;/dependency&gt;</pre></div><p>Note that adding this dependency will introduce a number of transitive dependencies to your project, including one on tika-core. You need to make sure that these dependencies won't conflict with your existing project dependencies. The listing below shows all the compile-scope dependencies of tika-parsers in the Tika 0.10 release.</p><div><pre>org.apache.tika:tika-parsers:bundle:0.10
 +- org.apache.tika:tika-core:jar:0.10:compile
 +- edu.ucar:netcdf:jar:4.2-min:compile
 |  \- org.slf4j:slf4j-api:jar:1.5.6:compile
@@ -121,8 +118,7 @@
 +- com.drewnoakes:metadata-extractor:jar:2.4.0-beta-1:compile
 +- de.l3s.boilerpipe:boilerpipe:jar:1.1.0:compile
 +- rome:rome:jar:0.9:compile
-|  \- jdom:jdom:jar:1.0:compile
-</pre></div></div><div class="section"><h2>Using Tika in an Ant project<a name="Using_Tika_in_an_Ant_project"></a></h2><p>Unless you use a dependency manager tool like <a class="externalLink" href="http://ant.apache.org/ivy/">Apache Ivy</a>, to use Tika in you application you can include the Tika jar files and the dependencies individually.</p><div><pre>&lt;classpath&gt;
+|  \- jdom:jdom:jar:1.0:compile</pre></div></div><div class="section"><h2>Using Tika in an Ant project<a name="Using_Tika_in_an_Ant_project"></a></h2><p>Unless you use a dependency manager tool like <a class="externalLink" href="http://ant.apache.org/ivy/">Apache Ivy</a>, to use Tika in you application you can include the Tika jar files and the dependencies individually.</p><div><pre>&lt;classpath&gt;
   ... &lt;!-- your other classpath entries --&gt;
   &lt;pathelement location=&quot;path/to/tika-core-0.10.jar&quot;/&gt;
   &lt;pathelement location=&quot;path/to/tika-parsers-0.10.jar&quot;/&gt;
@@ -149,8 +145,7 @@
   &lt;pathelement location=&quot;path/to/boilerpipe-1.1.0.jar&quot;/&gt;
   &lt;pathelement location=&quot;path/to/rome-0.9.jar&quot;/&gt;
   &lt;pathelement location=&quot;path/to/jdom-1.0.jar&quot;/&gt;
-&lt;/classpath&gt;
-</pre></div><p>An easy way to gather all these libraries is to run &quot;mvn dependency:copy-dependencies&quot; in the tika-parsers source directory. This will copy all Tika dependencies to the <tt>target/dependencies</tt> directory.</p><p>Alternatively you can simply drop the entire tika-app jar to your classpath to get all of the above dependencies in a single archive.</p></div><div class="section"><h2>Using Tika as a command line utility<a name="Using_Tika_as_a_command_line_utility"></a></h2><p>The Tika application jar (tika-app-0.10.jar) can be used as a command line utility for extracting text content and metadata from all sorts of files. This runnable jar contains all the dependencies it needs, so you don't need to worry about classpath settings to run it.</p><p>The usage instructions are shown below.</p><div><pre>usage: java -jar tika-app-0.10.jar [option] [file]
+&lt;/classpath&gt;</pre></div><p>An easy way to gather all these libraries is to run &quot;mvn dependency:copy-dependencies&quot; in the tika-parsers source directory. This will copy all Tika dependencies to the <tt>target/dependencies</tt> directory.</p><p>Alternatively you can simply drop the entire tika-app jar to your classpath to get all of the above dependencies in a single archive.</p></div><div class="section"><h2>Using Tika as a command line utility<a name="Using_Tika_as_a_command_line_utility"></a></h2><p>The Tika application jar (tika-app-0.10.jar) can be used as a command line utility for extracting text content and metadata from all sorts of files. This runnable jar contains all the dependencies it needs, so you don't need to worry about classpath settings to run it.</p><p>The usage instructions are shown below.</p><div><pre>usage: java -jar tika-app-0.10.jar [option] [file]
 
 Options:
     -?  or --help          Print this usage message
@@ -207,12 +202,10 @@ Description:
 
     Use the &quot;-server&quot; (or &quot;-s&quot;) option to start the
     Apache Tika server. The server will listen to the
-    ports you specify as one or more arguments.
-</pre></div><p>You can also use the jar as a component in a Unix pipeline or as an external tool in many scripting languages.</p><div><pre># Check if an Internet resource contains a specific keyword
+    ports you specify as one or more arguments.</pre></div><p>You can also use the jar as a component in a Unix pipeline or as an external tool in many scripting languages.</p><div><pre># Check if an Internet resource contains a specific keyword
 curl http://.../document.doc \
   | java -jar tika-app-0.10.jar --text \
-  | grep -q keyword
-</pre></div></div>
+  | grep -q keyword</pre></div></div>
       </div>
       <div id="sidebar">
         <div id="navigation">

Modified: tika/site/publish/0.10/index.html
URL: http://svn.apache.org/viewvc/tika/site/publish/0.10/index.html?rev=1381198&r1=1381197&r2=1381198&view=diff
==============================================================================
--- tika/site/publish/0.10/index.html (original)
+++ tika/site/publish/0.10/index.html Wed Sep  5 14:31:06 2012
@@ -84,7 +84,7 @@
                 width="387" height="100"/></a>
       </div>
       <div id="content">
-        <!-- Licensed to the Apache Software Foundation (ASF) under one or more --><!-- contributor license agreements.  See the NOTICE file distributed with --><!-- this work for additional information regarding copyright ownership. --><!-- The ASF licenses this file to You under the Apache License, Version 2.0 --><!-- (the "License"); you may not use this file except in compliance with --><!-- the License.  You may obtain a copy of the License at --><!--  --><!-- http://www.apache.org/licenses/LICENSE-2.0 --><!--  --><!-- Unless required by applicable law or agreed to in writing, software --><!-- distributed under the License is distributed on an "AS IS" BASIS, --><!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. --><!-- See the License for the specific language governing permissions and --><!-- limitations under the License. --><div class="section"><h2>Apache Tika 0.10<a name="Apache_Tika_0.10"></a></h2><p>The most notable changes in Tika 0.10 
 over the previous release are:</p><ul><li>A parser for CHM help files was added. (<a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-245">TIKA-245</a>)</li><li>Invalid characters are now replaced with the Unicode replacement character (U+FFFD), whereas before such characters were replaced with spaces, so you may need to change your processing of Tika's output to now handle U+FFFD (<a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-698">TIKA-698</a>).</li><li>The RTF parser was rewritten to perform its own direct shallow parse of the RTF content, instead of using RTFEditorKit from javax.swing. This fixes several issues in the old parser, including doubling of Unicode characters in certain cases (<a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-683">TIKA-683</a>), exceptions on mal-formed RTF docs (<a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-666">TIKA-666</a>), and missing text from so
 me elements (header/footer, hyperlinks,footnotes, text inside pictures).</li><li>Handling of temporary files within Tika was much improved (<a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-701">TIKA-701</a>, <a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-654">TIKA-654</a>, <a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-645">TIKA-645</a>, <a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-153">TIKA-153</a>).</li><li>The Tika GUI got a facelift and some extra features (<a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-635">TIKA-635</a>).</li><li>The apache-mime4j dependency of the email message parser was upgraded from version 0.6 to 0.7 (<a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-716">TIKA-716</a>). The parser also now accepts a MimeConfig object in the ParseContext as configuration (<a class="externalLink" href="http://issues.apache
 .org/jira/browse/TIKA-640">TIKA-640</a>).</li></ul><p>The following people have contributed to Tika 0.10 by submitting or commenting on the issues resolved in this release:</p><ul><li>Alain Viret </li><li>Alex Ott</li><li>Alexander Chow</li><li>Andreas Kemkes</li><li>Andrew Khoury</li><li>Babak Farhang</li><li>Benjamin Douglas</li><li>Benson Margulies</li><li>Chris A. Mattmann</li><li>chris hudson</li><li>Chris Lott</li><li>Cristian Vat</li><li>Curt Arnold</li><li>Cynthia L Wong</li><li>Dave Brosius</li><li>David Benson</li><li>Enrico Donelli</li><li>Erik Hetzner</li><li>Erna de Groot</li><li>Gabriele Columbro</li><li>Gavin</li><li>Geoff Jarrad</li><li>Gregory Kanevsky </li><li>G&#xfffd;nter rombauts</li><li>Henning Gross</li><li>Henri Bergius</li><li>Ingo Renner</li><li>Ingo Wiarda</li><li>Izaak Alpert </li><li>Jan H&#xfffd;ydahl</li><li>Jens Wilmer</li><li>Jeremy Anderson</li><li>Joseph Vychtrle</li><li>Joshua Turner</li><li>Jukka Zitting</li><li>Julien Nioche</li><li>Karl
  Heinz Marbaise</li><li>Ken Krugler</li><li>Kostya Gribov</li><li>Luciano Leggieri</li><li>Mads Hansen</li><li>Mark Butler</li><li>Matt Sheppard</li><li>Maxim Valyanskiy</li><li>Michael McCandless</li><li>Michael Pisula</li><li>Murad Shahid</li><li>Nick Burch</li><li>Oleg Tikhonov </li><li>Pablo Queixalos</li><li>Paul Jakubik</li><li>Raimund Merkert</li><li>Rajiv Kumar</li><li>Robert Trickey</li><li>Sami Siren</li><li>samraj</li><li>Selva Ganesan</li><li>Sjoerd Smeets</li><li>Stephen Duncan Jr</li><li>Tran Nam Quang</li><li>Uwe Schindler</li><li>Vitaliy Filippov</li></ul><p>See <a class="externalLink" href="http://s.apache.org/vR">http://s.apache.org/vR</a> for more details on these contributions.</p></div>
+        <!-- Licensed to the Apache Software Foundation (ASF) under one or more --><!-- contributor license agreements.  See the NOTICE file distributed with --><!-- this work for additional information regarding copyright ownership. --><!-- The ASF licenses this file to You under the Apache License, Version 2.0 --><!-- (the "License"); you may not use this file except in compliance with --><!-- the License.  You may obtain a copy of the License at --><!--  --><!-- http://www.apache.org/licenses/LICENSE-2.0 --><!--  --><!-- Unless required by applicable law or agreed to in writing, software --><!-- distributed under the License is distributed on an "AS IS" BASIS, --><!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. --><!-- See the License for the specific language governing permissions and --><!-- limitations under the License. --><div class="section"><h2>Apache Tika 0.10<a name="Apache_Tika_0.10"></a></h2><p>The most notable changes in Tika 0.10 
 over the previous release are:</p><ul><li>A parser for CHM help files was added. (<a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-245">TIKA-245</a>)</li><li>Invalid characters are now replaced with the Unicode replacement character (U+FFFD), whereas before such characters were replaced with spaces, so you may need to change your processing of Tika's output to now handle U+FFFD (<a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-698">TIKA-698</a>).</li><li>The RTF parser was rewritten to perform its own direct shallow parse of the RTF content, instead of using RTFEditorKit from javax.swing. This fixes several issues in the old parser, including doubling of Unicode characters in certain cases (<a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-683">TIKA-683</a>), exceptions on mal-formed RTF docs (<a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-666">TIKA-666</a>), and missing text from so
 me elements (header/footer, hyperlinks,footnotes, text inside pictures).</li><li>Handling of temporary files within Tika was much improved (<a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-701">TIKA-701</a>, <a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-654">TIKA-654</a>, <a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-645">TIKA-645</a>, <a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-153">TIKA-153</a>).</li><li>The Tika GUI got a facelift and some extra features (<a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-635">TIKA-635</a>).</li><li>The apache-mime4j dependency of the email message parser was upgraded from version 0.6 to 0.7 (<a class="externalLink" href="http://issues.apache.org/jira/browse/TIKA-716">TIKA-716</a>). The parser also now accepts a MimeConfig object in the ParseContext as configuration (<a class="externalLink" href="http://issues.apache
 .org/jira/browse/TIKA-640">TIKA-640</a>).</li></ul><p>The following people have contributed to Tika 0.10 by submitting or commenting on the issues resolved in this release:</p><ul><li>Alain Viret </li><li>Alex Ott</li><li>Alexander Chow</li><li>Andreas Kemkes</li><li>Andrew Khoury</li><li>Babak Farhang</li><li>Benjamin Douglas</li><li>Benson Margulies</li><li>Chris A. Mattmann</li><li>chris hudson</li><li>Chris Lott</li><li>Cristian Vat</li><li>Curt Arnold</li><li>Cynthia L Wong</li><li>Dave Brosius</li><li>David Benson</li><li>Enrico Donelli</li><li>Erik Hetzner</li><li>Erna de Groot</li><li>Gabriele Columbro</li><li>Gavin</li><li>Geoff Jarrad</li><li>Gregory Kanevsky </li><li>G&#xfc;nter Rombauts</li><li>Henning Gross</li><li>Henri Bergius</li><li>Ingo Renner</li><li>Ingo Wiarda</li><li>Izaak Alpert </li><li>Jan H&#xf8;ydahl</li><li>Jens Wilmer</li><li>Jeremy Anderson</li><li>Joseph Vychtrle</li><li>Joshua Turner</li><li>Jukka Zitting</li><li>Julien Nioche</li><li>Karl Hei
 nz Marbaise</li><li>Ken Krugler</li><li>Kostya Gribov</li><li>Luciano Leggieri</li><li>Mads Hansen</li><li>Mark Butler</li><li>Matt Sheppard</li><li>Maxim Valyanskiy</li><li>Michael McCandless</li><li>Michael Pisula</li><li>Murad Shahid</li><li>Nick Burch</li><li>Oleg Tikhonov </li><li>Pablo Queixalos</li><li>Paul Jakubik</li><li>Raimund Merkert</li><li>Rajiv Kumar</li><li>Robert Trickey</li><li>Sami Siren</li><li>samraj</li><li>Selva Ganesan</li><li>Sjoerd Smeets</li><li>Stephen Duncan Jr</li><li>Tran Nam Quang</li><li>Uwe Schindler</li><li>Vitaliy Filippov</li></ul><p>See <a class="externalLink" href="http://s.apache.org/vR">http://s.apache.org/vR</a> for more details on these contributions.</p></div>
       </div>
       <div id="sidebar">
         <div id="navigation">

Modified: tika/site/publish/0.10/parser.html
URL: http://svn.apache.org/viewvc/tika/site/publish/0.10/parser.html?rev=1381198&r1=1381197&r2=1381198&view=diff
==============================================================================
--- tika/site/publish/0.10/parser.html (original)
+++ tika/site/publish/0.10/parser.html Wed Sep  5 14:31:06 2012
@@ -86,31 +86,26 @@
       <div id="content">
         <!-- Licensed to the Apache Software Foundation (ASF) under one or more --><!-- contributor license agreements.  See the NOTICE file distributed with --><!-- this work for additional information regarding copyright ownership. --><!-- The ASF licenses this file to You under the Apache License, Version 2.0 --><!-- (the "License"); you may not use this file except in compliance with --><!-- the License.  You may obtain a copy of the License at --><!--  --><!-- http://www.apache.org/licenses/LICENSE-2.0 --><!--  --><!-- Unless required by applicable law or agreed to in writing, software --><!-- distributed under the License is distributed on an "AS IS" BASIS, --><!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. --><!-- See the License for the specific language governing permissions and --><!-- limitations under the License. --><div class="section"><h2>The Parser interface<a name="The_Parser_interface"></a></h2><p>The <a href="./api/org/apache/
 tika/parser/Parser.html">org.apache.tika.parser.Parser</a> interface is the key concept of Apache Tika. It hides the complexity of different file formats and parsing libraries while providing a simple and powerful mechanism for client applications to extract structured text content and metadata from all sorts of documents. All this is achieved with a single method:</p><div><pre>void parse(
     InputStream stream, ContentHandler handler, Metadata metadata,
-    ParseContext context) throws IOException, SAXException, TikaException;
-</pre></div><p>The <tt>parse</tt> method takes the document to be parsed and related metadata as input and outputs the results as XHTML SAX events and extra metadata. The parse context argument is used to specify context information (like the current local) that is not related to any individual document. The main criteria that lead to this design were:</p><dl><dt>Streamed parsing</dt><dd>The interface should require neither the client application nor the parser implementation to keep the full document content in memory or spooled to disk. This allows even huge documents to be parsed without excessive resource requirements.</dd><dt>Structured content</dt><dd>A parser implementation should be able to include structural information (headings, links, etc.) in the extracted content. A client application can use this information for example to better judge the relevance of different parts of the parsed document.</dd><dt>Input metadata</dt><dd>A client application should be able to
  include metadata like the file name or declared content type with the document to be parsed. The parser implementation can use this information to better guide the parsing process.</dd><dt>Output metadata</dt><dd>A parser implementation should be able to return document metadata in addition to document content. Many document formats contain metadata like the name of the author that may be useful to client applications.</dd><dt>Context sensitivity</dt><dd>While the default settings and behaviour of Tika parsers should work well for most use cases, there are still situations where more fine-grained control over the parsing process is desirable. It should be easy to inject such context-specific information to the parsing process without breaking the layers of abstraction.</dd></dl><p>These criteria are reflected in the arguments of the <tt>parse</tt> method.</p><div class="section"><h3>Document input stream<a name="Document_input_stream"></a></h3><p>The first argument is an <a
  class="externalLink" href="http://java.sun.com/j2se/1.5.0/docs/api/java/io/InputStream.html">InputStream</a> for reading the document to be parsed.</p><p>If this document stream can not be read, then parsing stops and the thrown <a class="externalLink" href="http://java.sun.com/j2se/1.5.0/docs/api/java/io/IOException.html">IOException</a> is passed up to the client application. If the stream can be read but not parsed (for example if the document is corrupted), then the parser throws a <a href="./api/org/apache/tika/exception/TikaException.html">TikaException</a>.</p><p>The parser implementation will consume this stream but <i>will not close it</i>. Closing the stream is the responsibility of the client application that opened it in the first place. The recommended pattern for using streams with the <tt>parse</tt> method is:</p><div><pre>InputStream stream = ...;      // open the stream
+    ParseContext context) throws IOException, SAXException, TikaException;</pre></div><p>The <tt>parse</tt> method takes the document to be parsed and related metadata as input and outputs the results as XHTML SAX events and extra metadata. The parse context argument is used to specify context information (like the current local) that is not related to any individual document. The main criteria that lead to this design were:</p><dl><dt>Streamed parsing</dt><dd>The interface should require neither the client application nor the parser implementation to keep the full document content in memory or spooled to disk. This allows even huge documents to be parsed without excessive resource requirements.</dd><dt>Structured content</dt><dd>A parser implementation should be able to include structural information (headings, links, etc.) in the extracted content. A client application can use this information for example to better judge the relevance of different parts of the parsed docum
 ent.</dd><dt>Input metadata</dt><dd>A client application should be able to include metadata like the file name or declared content type with the document to be parsed. The parser implementation can use this information to better guide the parsing process.</dd><dt>Output metadata</dt><dd>A parser implementation should be able to return document metadata in addition to document content. Many document formats contain metadata like the name of the author that may be useful to client applications.</dd><dt>Context sensitivity</dt><dd>While the default settings and behaviour of Tika parsers should work well for most use cases, there are still situations where more fine-grained control over the parsing process is desirable. It should be easy to inject such context-specific information to the parsing process without breaking the layers of abstraction.</dd></dl><p>These criteria are reflected in the arguments of the <tt>parse</tt> method.</p><div class="section"><h3>Document input str
 eam<a name="Document_input_stream"></a></h3><p>The first argument is an <a class="externalLink" href="http://java.sun.com/j2se/1.5.0/docs/api/java/io/InputStream.html">InputStream</a> for reading the document to be parsed.</p><p>If this document stream can not be read, then parsing stops and the thrown <a class="externalLink" href="http://java.sun.com/j2se/1.5.0/docs/api/java/io/IOException.html">IOException</a> is passed up to the client application. If the stream can be read but not parsed (for example if the document is corrupted), then the parser throws a <a href="./api/org/apache/tika/exception/TikaException.html">TikaException</a>.</p><p>The parser implementation will consume this stream but <i>will not close it</i>. Closing the stream is the responsibility of the client application that opened it in the first place. The recommended pattern for using streams with the <tt>parse</tt> method is:</p><div><pre>InputStream stream = ...;      // open the stream
 try {
     parser.parse(stream, ...); // parse the stream
 } finally {
     stream.close();            // close the stream
-}
-</pre></div><p>Some document formats like the OLE2 Compound Document Format used by Microsoft Office are best parsed as random access files. In such cases the content of the input stream is automatically spooled to a temporary file that gets removed once parsed. A future version of Tika may make it possible to avoid this extra file if the input document is already a file in the local file system. See <a class="externalLink" href="https://issues.apache.org/jira/browse/TIKA-153">TIKA-153</a> for the status of this feature request.</p></div><div class="section"><h3>XHTML SAX events<a name="XHTML_SAX_events"></a></h3><p>The parsed content of the document stream is returned to the client application as a sequence of XHTML SAX events. XHTML is used to express structured content of the document and SAX events enable streamed processing. Note that the XHTML format is used here only to convey structural information, not to render the documents for browsing!</p><p>The XHTML SAX events
  produced by the parser implementation are sent to a <a class="externalLink" href="http://java.sun.com/j2se/1.5.0/docs/api/org/xml/sax/ContentHandler.html">ContentHandler</a> instance given to the <tt>parse</tt> method. If this the content handler fails to process an event, then parsing stops and the thrown <a class="externalLink" href="http://java.sun.com/j2se/1.5.0/docs/api/org/xml/sax/SAXException.html">SAXException</a> is passed up to the client application.</p><p>The overall structure of the generated event stream is (with indenting added for clarity):</p><div><pre>&lt;html xmlns=&quot;http://www.w3.org/1999/xhtml&quot;&gt;
+}</pre></div><p>Some document formats like the OLE2 Compound Document Format used by Microsoft Office are best parsed as random access files. In such cases the content of the input stream is automatically spooled to a temporary file that gets removed once parsed. A future version of Tika may make it possible to avoid this extra file if the input document is already a file in the local file system. See <a class="externalLink" href="https://issues.apache.org/jira/browse/TIKA-153">TIKA-153</a> for the status of this feature request.</p></div><div class="section"><h3>XHTML SAX events<a name="XHTML_SAX_events"></a></h3><p>The parsed content of the document stream is returned to the client application as a sequence of XHTML SAX events. XHTML is used to express structured content of the document and SAX events enable streamed processing. Note that the XHTML format is used here only to convey structural information, not to render the documents for browsing!</p><p>The XHTML SAX event
 s produced by the parser implementation are sent to a <a class="externalLink" href="http://java.sun.com/j2se/1.5.0/docs/api/org/xml/sax/ContentHandler.html">ContentHandler</a> instance given to the <tt>parse</tt> method. If this the content handler fails to process an event, then parsing stops and the thrown <a class="externalLink" href="http://java.sun.com/j2se/1.5.0/docs/api/org/xml/sax/SAXException.html">SAXException</a> is passed up to the client application.</p><p>The overall structure of the generated event stream is (with indenting added for clarity):</p><div><pre>&lt;html xmlns=&quot;http://www.w3.org/1999/xhtml&quot;&gt;
   &lt;head&gt;
     &lt;title&gt;...&lt;/title&gt;
   &lt;/head&gt;
   &lt;body&gt;
     ...
   &lt;/body&gt;
-&lt;/html&gt;
-</pre></div><p>Parser implementations typically use the <a href="./apidocs/org/apache/tika/sax/XHTMLContentHandler.html">XHTMLContentHandler</a> utility class to generate the XHTML output.</p><p>Dealing with the raw SAX events can be a bit complex, so Apache Tika comes with a number of utility classes that can be used to process and convert the event stream to other representations.</p><p>For example, the <a href="./api/org/apache/tika/sax/BodyContentHandler.html">BodyContentHandler</a> class can be used to extract just the body part of the XHTML output and feed it either as SAX events to another content handler or as characters to an output stream, a writer, or simply a string. The following code snippet parses a document from the standard input stream and outputs the extracted text content to standard output:</p><div><pre>ContentHandler handler = new BodyContentHandler(System.out);
-parser.parse(System.in, handler, ...);
-</pre></div><p>Another useful class is <a href="./api/org/apache/tika/parser/ParsingReader.html">ParsingReader</a> that uses a background thread to parse the document and returns the extracted text content as a character stream:</p><div><pre>InputStream stream = ...; // the document to be parsed
+&lt;/html&gt;</pre></div><p>Parser implementations typically use the <a href="./apidocs/org/apache/tika/sax/XHTMLContentHandler.html">XHTMLContentHandler</a> utility class to generate the XHTML output.</p><p>Dealing with the raw SAX events can be a bit complex, so Apache Tika comes with a number of utility classes that can be used to process and convert the event stream to other representations.</p><p>For example, the <a href="./api/org/apache/tika/sax/BodyContentHandler.html">BodyContentHandler</a> class can be used to extract just the body part of the XHTML output and feed it either as SAX events to another content handler or as characters to an output stream, a writer, or simply a string. The following code snippet parses a document from the standard input stream and outputs the extracted text content to standard output:</p><div><pre>ContentHandler handler = new BodyContentHandler(System.out);
+parser.parse(System.in, handler, ...);</pre></div><p>Another useful class is <a href="./api/org/apache/tika/parser/ParsingReader.html">ParsingReader</a> that uses a background thread to parse the document and returns the extracted text content as a character stream:</p><div><pre>InputStream stream = ...; // the document to be parsed
 Reader reader = new ParsingReader(parser, stream, ...);
 try {
     ...;                  // read the document text using the reader
 } finally {
     reader.close();       // the document stream is closed automatically
-}
-</pre></div></div><div class="section"><h3>Document metadata<a name="Document_metadata"></a></h3><p>The third argument to the <tt>parse</tt> method is used to pass document metadata both in and out of the parser. Document metadata is expressed as an <a href="./api/org/apache/tika/metadata/Metadata.html">Metadata</a> object.</p><p>The following are some of the more interesting metadata properties:</p><dl><dt>Metadata.RESOURCE_NAME_KEY</dt><dd>The name of the file or resource that contains the document.<p>A client application can set this property to allow the parser to use file name heuristics to determine the format of the document.</p><p>The parser implementation may set this property if the file format contains the canonical name of the file (for example the Gzip format has a slot for the file name).</p></dd><dt>Metadata.CONTENT_TYPE</dt><dd>The declared content type of the document.<p>A client application can set this property based on for example a HTTP Content-Type head
 er. The declared content type may help the parser to correctly interpret the document.</p><p>The parser implementation sets this property to the content type according to which the document was parsed.</p></dd><dt>Metadata.TITLE</dt><dd>The title of the document.<p>The parser implementation sets this property if the document format contains an explicit title field.</p></dd><dt>Metadata.AUTHOR</dt><dd>The name of the author of the document.<p>The parser implementation sets this property if the document format contains an explicit author field.</p></dd></dl><p>Note that metadata handling is still being discussed by the Tika development team, and it is likely that there will be some (backwards incompatible) changes in metadata handling before Tika 1.0.</p></div><div class="section"><h3>Parse context<a name="Parse_context"></a></h3><p>The final argument to the <tt>parse</tt> method is used to inject context-specific information to the parsing process. This is useful for example 
 when dealing with locale-specific date and number formats in Microsoft Excel spreadsheets. Another important use of the parse context is passing in the delegate parser instance to be used by two-phase parsers like the <a href="./api/org/apache/parser/pkg/PackageParser.html">PackageParser</a> subclasses. Some parser classes allow customization of the parsing process through strategy objects in the parse context.</p></div><div class="section"><h3>Parser implementations<a name="Parser_implementations"></a></h3><p>Apache Tika comes with a number of parser classes for parsing <a href="./formats.html">various document formats</a>. You can also extend Tika with your own parsers, and of course any contributions to Tika are warmly welcome.</p><p>The goal of Tika is to reuse existing parser libraries like <a class="externalLink" href="http://www.pdfbox.org/">PDFBox</a> or <a class="externalLink" href="http://poi.apache.org/">Apache POI</a> as much as possible, and so most of the parse
 r classes in Tika are adapters to such external libraries.</p><p>Tika also contains some general purpose parser implementations that are not targeted at any specific document formats. The most notable of these is the <a href="./apidocs/org/apache/tika/parser/AutoDetectParser.html">AutoDetectParser</a> class that encapsulates all Tika functionality into a single parser that can handle any types of documents. This parser will automatically determine the type of the incoming document based on various heuristics and will then parse the document accordingly.</p></div></div>
+}</pre></div></div><div class="section"><h3>Document metadata<a name="Document_metadata"></a></h3><p>The third argument to the <tt>parse</tt> method is used to pass document metadata both in and out of the parser. Document metadata is expressed as a <a href="./api/org/apache/tika/metadata/Metadata.html">Metadata</a> object.</p><p>The following are some of the more interesting metadata properties:</p><dl><dt>Metadata.RESOURCE_NAME_KEY</dt><dd>The name of the file or resource that contains the document.<p>A client application can set this property to allow the parser to use file name heuristics to determine the format of the document.</p><p>The parser implementation may set this property if the file format contains the canonical name of the file (for example the Gzip format has a slot for the file name).</p></dd><dt>Metadata.CONTENT_TYPE</dt><dd>The declared content type of the document.<p>A client application can set this property based on, for example, an HTTP Content-Type hea
 der. The declared content type may help the parser to correctly interpret the document.</p><p>The parser implementation sets this property to the content type according to which the document was parsed.</p></dd><dt>Metadata.TITLE</dt><dd>The title of the document.<p>The parser implementation sets this property if the document format contains an explicit title field.</p></dd><dt>Metadata.AUTHOR</dt><dd>The name of the author of the document.<p>The parser implementation sets this property if the document format contains an explicit author field.</p></dd></dl><p>Note that metadata handling is still being discussed by the Tika development team, and it is likely that there will be some (backwards incompatible) changes in metadata handling before Tika 1.0.</p></div><div class="section"><h3>Parse context<a name="Parse_context"></a></h3><p>The final argument to the <tt>parse</tt> method is used to inject context-specific information to the parsing process. This is useful for example
  when dealing with locale-specific date and number formats in Microsoft Excel spreadsheets. Another important use of the parse context is passing in the delegate parser instance to be used by two-phase parsers like the <a href="./api/org/apache/tika/parser/pkg/PackageParser.html">PackageParser</a> subclasses. Some parser classes allow customization of the parsing process through strategy objects in the parse context.</p></div><div class="section"><h3>Parser implementations<a name="Parser_implementations"></a></h3><p>Apache Tika comes with a number of parser classes for parsing <a href="./formats.html">various document formats</a>. You can also extend Tika with your own parsers, and of course any contributions to Tika are warmly welcome.</p><p>The goal of Tika is to reuse existing parser libraries like <a class="externalLink" href="http://www.pdfbox.org/">PDFBox</a> or <a class="externalLink" href="http://poi.apache.org/">Apache POI</a> as much as possible, and so most of the pars
 er classes in Tika are adapters to such external libraries.</p><p>Tika also contains some general purpose parser implementations that are not targeted at any specific document formats. The most notable of these is the <a href="./apidocs/org/apache/tika/parser/AutoDetectParser.html">AutoDetectParser</a> class that encapsulates all Tika functionality into a single parser that can handle any type of document. This parser will automatically determine the type of the incoming document based on various heuristics and will then parse the document accordingly.</p></div></div>
       </div>
       <div id="sidebar">
         <div id="navigation">

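The parser.html text above describes the four-argument parse() call, the Metadata object and the AutoDetectParser, but shows them only in fragments. A minimal end-to-end sketch following that description (the class name and the command-line file argument are illustrative, not taken from the page):

import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.parser.Parser;
import org.apache.tika.sax.BodyContentHandler;
import org.xml.sax.ContentHandler;

public class ParseWithAutoDetect {
    public static void main(String[] args) throws Exception {
        Parser parser = new AutoDetectParser();
        ContentHandler handler = new BodyContentHandler(System.out); // body text goes to stdout
        Metadata metadata = new Metadata();
        metadata.set(Metadata.RESOURCE_NAME_KEY, args[0]); // optional file name hint for detection

        InputStream stream = new FileInputStream(args[0]); // open the stream
        try {
            parser.parse(stream, handler, metadata, new ParseContext()); // parse the stream
        } finally {
            stream.close(); // closing the stream is the caller's responsibility
        }

        // Metadata filled in by the parser, e.g. the detected content type.
        System.err.println(metadata.get(Metadata.CONTENT_TYPE));
    }
}

The stream handling follows the page's recommendation: the client opens and closes the stream, and the parser only consumes it.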
Modified: tika/site/publish/0.10/parser_guide.html
URL: http://svn.apache.org/viewvc/tika/site/publish/0.10/parser_guide.html?rev=1381198&r1=1381197&r2=1381198&view=diff
==============================================================================
--- tika/site/publish/0.10/parser_guide.html (original)
+++ tika/site/publish/0.10/parser_guide.html Wed Sep  5 14:31:06 2012
@@ -86,8 +86,7 @@
       <div id="content">
         <!-- Licensed to the Apache Software Foundation (ASF) under one or more --><!-- contributor license agreements.  See the NOTICE file distributed with --><!-- this work for additional information regarding copyright ownership. --><!-- The ASF licenses this file to You under the Apache License, Version 2.0 --><!-- (the "License"); you may not use this file except in compliance with --><!-- the License.  You may obtain a copy of the License at --><!--  --><!-- http://www.apache.org/licenses/LICENSE-2.0 --><!--  --><!-- Unless required by applicable law or agreed to in writing, software --><!-- distributed under the License is distributed on an "AS IS" BASIS, --><!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. --><!-- See the License for the specific language governing permissions and --><!-- limitations under the License. --><div class="section"><h2>Get Tika parsing up and running in 5 minutes<a name="Get_Tika_parsing_up_and_running_in_5_min
 utes"></a></h2><p>This page is a quick start guide showing how to add a new parser to Apache Tika. Following the simple steps listed below your new parser can be running in only 5 minutes.</p><ul><li><a href="#Get_Tika_parsing_up_and_running_in_5_minutes">Get Tika parsing up and running in 5 minutes</a><ul><li><a href="#Getting_Started">Getting Started</a></li><li><a href="#Add_your_MIME-Type">Add your MIME-Type</a></li><li><a href="#Create_your_Parser_class">Create your Parser class</a></li><li><a href="#List_the_new_parser">List the new parser</a></li></ul></li></ul><div class="section"><h3><a name="Getting_Started">Getting Started</a></h3><p>The <a href="./gettingstarted.html">Getting Started</a> document describes how to build Apache Tika from sources and how to start using Tika in an application. Pay close attention and follow the instructions in the &quot;Getting and building the sources&quot; section.</p></div><div class="section"><h3><a name="Add_your_MIME-Type">Add 
 your MIME-Type</a></h3><p>You first need to modify <a class="externalLink" href="http://svn.apache.org/repos/asf/tika/trunk/tika-core/src/main/resources/org/apache/tika/mime/tika-mimetypes.xml">tika-core/src/main/resources/org/apache/tika/mime/tika-mimetypes.xml</a> so that Tika can map the file extension to its MIME-Type. You should add something like this:</p><div><pre> &lt;mime-type type=&quot;application/hello&quot;&gt;
         &lt;glob pattern=&quot;*.hi&quot;/&gt;
- &lt;/mime-type&gt;
-</pre></div></div><div class="section"><h3><a name="Create_your_Parser_class">Create your Parser class</a></h3><p>Now, you need to create your new parser. This is a class that must implement the Parser interface offered by Tika. A very simple Tika Parser looks like this:</p><div><pre>/*
+ &lt;/mime-type&gt;</pre></div></div><div class="section"><h3><a name="Create_your_Parser_class">Create your Parser class</a></h3><p>Now, you need to create your new parser. This is a class that must implement the Parser interface offered by Tika. A very simple Tika Parser looks like this:</p><div><pre>/*
  * Licensed to the Apache Software Foundation (ASF) under one or more
  * contributor license agreements.  See the NOTICE file distributed with
  * this work for additional information regarding copyright ownership.
@@ -151,8 +150,7 @@ public class HelloParser implements Pars
                         throws IOException, SAXException, TikaException {
                 parse(stream, handler, metadata, new ParseContext());
         }
-}
-</pre></div><p>Pay special attention to the definition of the SUPPORTED_TYPES static class field in the parser class that defines what MIME-Types it supports. </p><p>Is in the &quot;parse&quot; method where you will do all your work. This is, extract the information of the resource and then set the metadata.</p></div><div class="section"><h3><a name="List_the_new_parser">List the new parser</a></h3><p>Finally, you should explicitly tell the AutoDetectParser to include your new parser. This step is only needed if you want to use the AutoDetectParser functionality. If you figure out the correct parser in a different way, it isn't needed. </p><p>List your new parser in: <a class="externalLink" href="http://svn.apache.org/repos/asf/tika/trunk/tika-parsers/src/main/resources/META-INF/services/org.apache.tika.parser.Parser">tika-parsers/src/main/resources/META-INF/services/org.apache.tika.parser.Parser</a></p></div></div>
+}</pre></div><p>Pay special attention to the definition of the SUPPORTED_TYPES static field in the parser class, which defines what MIME-Types it supports. </p><p>It is in the &quot;parse&quot; method that you will do all your work, that is, extract the information from the resource and then set the metadata.</p></div><div class="section"><h3><a name="List_the_new_parser">List the new parser</a></h3><p>Finally, you should explicitly tell the AutoDetectParser to include your new parser. This step is only needed if you want to use the AutoDetectParser functionality. If you figure out the correct parser in a different way, it isn't needed. </p><p>List your new parser in: <a class="externalLink" href="http://svn.apache.org/repos/asf/tika/trunk/tika-parsers/src/main/resources/META-INF/services/org.apache.tika.parser.Parser">tika-parsers/src/main/resources/META-INF/services/org.apache.tika.parser.Parser</a></p></div></div>
       </div>
       <div id="sidebar">
         <div id="navigation">

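The parser_guide.html text above elides the body of the example parser between the license header and the final three-argument parse() delegation. A minimal sketch of what such a parser can look like, assuming the application/hello type and *.hi glob registered in the mime-types snippet above; the emitted paragraph text and the metadata set here are illustrative, not necessarily what the original HelloParser does:

import java.io.IOException;
import java.io.InputStream;
import java.util.Collections;
import java.util.Set;

import org.apache.tika.exception.TikaException;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.mime.MediaType;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.parser.Parser;
import org.apache.tika.sax.XHTMLContentHandler;
import org.xml.sax.ContentHandler;
import org.xml.sax.SAXException;

public class HelloParser implements Parser {

    // The type registered for *.hi files in tika-mimetypes.xml (see the snippet above).
    private static final Set<MediaType> SUPPORTED_TYPES =
            Collections.singleton(MediaType.application("hello"));

    public Set<MediaType> getSupportedTypes(ParseContext context) {
        return SUPPORTED_TYPES;
    }

    public void parse(InputStream stream, ContentHandler handler,
            Metadata metadata, ParseContext context)
            throws IOException, SAXException, TikaException {
        // Extract whatever the format provides and record it as metadata ...
        metadata.set(Metadata.CONTENT_TYPE, "application/hello");

        // ... then report the text content as XHTML SAX events.
        XHTMLContentHandler xhtml = new XHTMLContentHandler(handler, metadata);
        xhtml.startDocument();
        xhtml.element("p", "Hello, Tika!");
        xhtml.endDocument();
    }

    // Older three-argument form, delegating as in the guide's listing above.
    public void parse(InputStream stream, ContentHandler handler, Metadata metadata)
            throws IOException, SAXException, TikaException {
        parse(stream, handler, metadata, new ParseContext());
    }
}

As the guide notes, the fully-qualified class name still has to be listed in tika-parsers/src/main/resources/META-INF/services/org.apache.tika.parser.Parser for AutoDetectParser to pick the new parser up.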
Modified: tika/site/publish/0.5/documentation.html
URL: http://svn.apache.org/viewvc/tika/site/publish/0.5/documentation.html?rev=1381198&r1=1381197&r2=1381198&view=diff
==============================================================================
--- tika/site/publish/0.5/documentation.html (original)
+++ tika/site/publish/0.5/documentation.html Wed Sep  5 14:31:06 2012
@@ -85,31 +85,26 @@
       </div>
       <div id="content">
         <!-- Licensed to the Apache Software Foundation (ASF) under one or more --><!-- contributor license agreements.  See the NOTICE file distributed with --><!-- this work for additional information regarding copyright ownership. --><!-- The ASF licenses this file to You under the Apache License, Version 2.0 --><!-- (the "License"); you may not use this file except in compliance with --><!-- the License.  You may obtain a copy of the License at --><!--  --><!-- http://www.apache.org/licenses/LICENSE-2.0 --><!--  --><!-- Unless required by applicable law or agreed to in writing, software --><!-- distributed under the License is distributed on an "AS IS" BASIS, --><!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. --><!-- See the License for the specific language governing permissions and --><!-- limitations under the License. --><div class="section"><h2>Apache Tika Documentation<a name="Apache_Tika_Documentation"></a></h2><p>This document descri
 bes the key abstractions and usage of Apache Tika.</p></div><div class="section"><h2>The Parser interface<a name="The_Parser_interface"></a></h2><p>The <a href="./api/org/apache/tika/parser/Parser.html">org.apache.tika.parser.Parser</a> interface is the key concept of Apache Tika. It hides the complexity of different file formats and parsing libraries while providing a simple and powerful mechanism for client applications to extract structured text content and metadata from all sorts of documents. All this is achieved with a single method:</p><div><pre>void parse(InputStream stream, ContentHandler handler, Metadata metadata)
-    throws IOException, SAXException, TikaException;
-</pre></div><p>The <tt>parse</tt> method takes the document to be parsed and related metadata as input and outputs the results as XHTML SAX events and extra metadata. The main criteria that lead to this design were:</p><dl><dt>Streamed parsing</dt><dd>The interface should require neither the client application nor the parser implementation to keep the full document content in memory or spooled to disk. This allows even huge documents to be parsed without excessive resource requirements.</dd><dt>Structured content</dt><dd>A parser implementation should be able to include structural information (headings, links, etc.) in the extracted content. A client application can use this information for example to better judge the relevance of different parts of the parsed document.</dd><dt>Input metadata</dt><dd>A client application should be able to include metadata like the file name or declared content type with the document to be parsed. The parser implementation can use this inform
 ation to better guide the parsing process.</dd><dt>Output metadata</dt><dd>A parser implementation should be able to return document metadata in addition to document content. Many document formats contain metadata like the name of the author that may be useful to client applications.</dd></dl><p>These criteria are reflected in the arguments of the <tt>parse</tt> method.</p></div><div class="section"><h2>Document input stream<a name="Document_input_stream"></a></h2><p>The first argument is an <a class="externalLink" href="http://java.sun.com/j2se/1.5.0/docs/api/java/io/InputStream.html">InputStream</a> for reading the document to be parsed.</p><p>If this document stream can not be read, then parsing stops and the thrown <a class="externalLink" href="http://java.sun.com/j2se/1.5.0/docs/api/java/io/IOException.html">IOException</a> is passed up to the client application. If the stream can be read but not parsed (for example if the document is corrupted), then the parser throws 
 a <a href="./api/org/apache/tika/exception/TikaException.html">TikaException</a>.</p><p>The parser implementation will consume this stream but <i>will not close it</i>. Closing the stream is the responsibility of the client application that opened it in the first place. The recommended pattern for using streams with the <tt>parse</tt> method is:</p><div><pre>InputStream stream = ...;      // open the stream
+    throws IOException, SAXException, TikaException;</pre></div><p>The <tt>parse</tt> method takes the document to be parsed and related metadata as input and outputs the results as XHTML SAX events and extra metadata. The main criteria that lead to this design were:</p><dl><dt>Streamed parsing</dt><dd>The interface should require neither the client application nor the parser implementation to keep the full document content in memory or spooled to disk. This allows even huge documents to be parsed without excessive resource requirements.</dd><dt>Structured content</dt><dd>A parser implementation should be able to include structural information (headings, links, etc.) in the extracted content. A client application can use this information for example to better judge the relevance of different parts of the parsed document.</dd><dt>Input metadata</dt><dd>A client application should be able to include metadata like the file name or declared content type with the document to be p
 arsed. The parser implementation can use this information to better guide the parsing process.</dd><dt>Output metadata</dt><dd>A parser implementation should be able to return document metadata in addition to document content. Many document formats contain metadata like the name of the author that may be useful to client applications.</dd></dl><p>These criteria are reflected in the arguments of the <tt>parse</tt> method.</p></div><div class="section"><h2>Document input stream<a name="Document_input_stream"></a></h2><p>The first argument is an <a class="externalLink" href="http://java.sun.com/j2se/1.5.0/docs/api/java/io/InputStream.html">InputStream</a> for reading the document to be parsed.</p><p>If this document stream can not be read, then parsing stops and the thrown <a class="externalLink" href="http://java.sun.com/j2se/1.5.0/docs/api/java/io/IOException.html">IOException</a> is passed up to the client application. If the stream can be read but not parsed (for example if
  the document is corrupted), then the parser throws a <a href="./api/org/apache/tika/exception/TikaException.html">TikaException</a>.</p><p>The parser implementation will consume this stream but <i>will not close it</i>. Closing the stream is the responsibility of the client application that opened it in the first place. The recommended pattern for using streams with the <tt>parse</tt> method is:</p><div><pre>InputStream stream = ...;      // open the stream
 try {
     parser.parse(stream, ...); // parse the stream
 } finally {
     stream.close();            // close the stream
-}
-</pre></div><p>Some document formats like the OLE2 Compound Document Format used by Microsoft Office are best parsed as random access files. In such cases the content of the input stream is automatically spooled to a temporary file that gets removed once parsed. A future version of Tika may make it possible to avoid this extra file if the input document is already a file in the local file system. See <a class="externalLink" href="https://issues.apache.org/jira/browse/TIKA-153">TIKA-153</a> for the status of this feature request.</p></div><div class="section"><h2>XHTML SAX events<a name="XHTML_SAX_events"></a></h2><p>The parsed content of the document stream is returned to the client application as a sequence of XHTML SAX events. XHTML is used to express structured content of the document and SAX events enable streamed processing. Note that the XHTML format is used here only to convey structural information, not to render the documents for browsing!</p><p>The XHTML SAX events
  produced by the parser implementation are sent to a <a class="externalLink" href="http://java.sun.com/j2se/1.5.0/docs/api/org/xml/sax/ContentHandler.html">ContentHandler</a> instance given to the <tt>parse</tt> method. If this the content handler fails to process an event, then parsing stops and the thrown <a class="externalLink" href="http://java.sun.com/j2se/1.5.0/docs/api/org/xml/sax/SAXException.html">SAXException</a> is passed up to the client application.</p><p>The overall structure of the generated event stream is (with indenting added for clarity):</p><div><pre>&lt;html xmlns=&quot;http://www.w3.org/1999/xhtml&quot;&gt;
+}</pre></div><p>Some document formats like the OLE2 Compound Document Format used by Microsoft Office are best parsed as random access files. In such cases the content of the input stream is automatically spooled to a temporary file that gets removed once parsed. A future version of Tika may make it possible to avoid this extra file if the input document is already a file in the local file system. See <a class="externalLink" href="https://issues.apache.org/jira/browse/TIKA-153">TIKA-153</a> for the status of this feature request.</p></div><div class="section"><h2>XHTML SAX events<a name="XHTML_SAX_events"></a></h2><p>The parsed content of the document stream is returned to the client application as a sequence of XHTML SAX events. XHTML is used to express structured content of the document and SAX events enable streamed processing. Note that the XHTML format is used here only to convey structural information, not to render the documents for browsing!</p><p>The XHTML SAX event
 s produced by the parser implementation are sent to a <a class="externalLink" href="http://java.sun.com/j2se/1.5.0/docs/api/org/xml/sax/ContentHandler.html">ContentHandler</a> instance given to the <tt>parse</tt> method. If the content handler fails to process an event, then parsing stops and the thrown <a class="externalLink" href="http://java.sun.com/j2se/1.5.0/docs/api/org/xml/sax/SAXException.html">SAXException</a> is passed up to the client application.</p><p>The overall structure of the generated event stream is (with indenting added for clarity):</p><div><pre>&lt;html xmlns=&quot;http://www.w3.org/1999/xhtml&quot;&gt;
   &lt;head&gt;
     &lt;title&gt;...&lt;/title&gt;
   &lt;/head&gt;
   &lt;body&gt;
     ...
   &lt;/body&gt;
-&lt;/html&gt;
-</pre></div><p>Parser implementations typically use the <a href="./api/org/apache/tika/sax/XHTMLContentHandler.html">XHTMLContentHandler</a> utility class to generate the XHTML output.</p><p>Dealing with the raw SAX events can be a bit complex, so Apache Tika (since version 0.2) comes with a number of utility classes that can be used to process and convert the event stream to other representations.</p><p>For example, the <a href="./api/org/apache/tika/sax/BodyContentHandler.html">BodyContentHandler</a> class can be used to extract just the body part of the XHTML output and feed it either as SAX events to another content handler or as characters to an output stream, a writer, or simply a string. The following code snippet parses a document from the standard input stream and outputs the extracted text content to standard output:</p><div><pre>ContentHandler handler = new BodyContentHandler(System.out);
-parser.parse(System.in, handler, ...);
-</pre></div><p>Another useful class is <a href="./api/org/apache/tika/parser/ParsingReader.html">ParsingReader</a> that uses a background thread to parse the document and returns the extracted text content as a character stream:</p><div><pre>InputStream stream = ...; // the document to be parsed
+&lt;/html&gt;</pre></div><p>Parser implementations typically use the <a href="./api/org/apache/tika/sax/XHTMLContentHandler.html">XHTMLContentHandler</a> utility class to generate the XHTML output.</p><p>Dealing with the raw SAX events can be a bit complex, so Apache Tika (since version 0.2) comes with a number of utility classes that can be used to process and convert the event stream to other representations.</p><p>For example, the <a href="./api/org/apache/tika/sax/BodyContentHandler.html">BodyContentHandler</a> class can be used to extract just the body part of the XHTML output and feed it either as SAX events to another content handler or as characters to an output stream, a writer, or simply a string. The following code snippet parses a document from the standard input stream and outputs the extracted text content to standard output:</p><div><pre>ContentHandler handler = new BodyContentHandler(System.out);
+parser.parse(System.in, handler, ...);</pre></div><p>Another useful class is <a href="./api/org/apache/tika/parser/ParsingReader.html">ParsingReader</a> that uses a background thread to parse the document and returns the extracted text content as a character stream:</p><div><pre>InputStream stream = ...; // the document to be parsed
 Reader reader = new ParsingReader(parser, stream, ...);
 try {
     ...;                  // read the document text using the reader
 } finally {
     reader.close();       // the document stream is closed automatically
-}
-</pre></div></div><div class="section"><h2>Document metadata<a name="Document_metadata"></a></h2><p>The final argument to the <tt>parse</tt> method is used to pass document metadata both in and out of the parser. Document metadata is expressed as an <a href="./api/org/apache/tika/metadata/Metadata.html">Metadata</a> object.</p><p>The following are some of the more interesting metadata properties:</p><dl><dt>Metadata.RESOURCE_NAME_KEY</dt><dd>The name of the file or resource that contains the document.<p>A client application can set this property to allow the parser to use file name heuristics to determine the format of the document.</p><p>The parser implementation may set this property if the file format contains the canonical name of the file (for example the Gzip format has a slot for the file name).</p></dd><dt>Metadata.CONTENT_TYPE</dt><dd>The declared content type of the document.<p>A client application can set this property based on for example a HTTP Content-Type head
 er. The declared content type may help the parser to correctly interpret the document.</p><p>The parser implementation sets this property to the content type according to which the document was parsed.</p></dd><dt>Metadata.TITLE</dt><dd>The title of the document.<p>The parser implementation sets this property if the document format contains an explicit title field.</p></dd><dt>Metadata.AUTHOR</dt><dd>The name of the author of the document.<p>The parser implementation sets this property if the document format contains an explicit author field.</p></dd></dl><p>Note that metadata handling is still being discussed by the Tika development team, and it is likely that there will be some (backwards incompatible) changes in metadata handling before Tika 1.0.</p></div><div class="section"><h2>Parser implementations<a name="Parser_implementations"></a></h2><p>Apache Tika comes with a number of parser classes for parsing <a href="./formats.html">various document formats</a>. You can als
 o extend Tika with your own parsers, and of course any contributions to Tika are warmly welcome.</p><p>The goal of Tika is to reuse existing parser libraries like <a class="externalLink" href="http://www.pdfbox.org/">PDFBox</a> or <a class="externalLink" href="http://poi.apache.org/">Apache POI</a> as much as possible, and so most of the parser classes in Tika are adapters to such external libraries.</p><p>Tika also contains some general purpose parser implementations that are not targeted at any specific document formats. The most notable of these is the <a href="./api/org/apache/tika/parser/AutoDetectParser.html">AutoDetectParser</a> class that encapsulates all Tika functionality into a single parser that can handle any types of documents. This parser will automatically determine the type of the incoming document based on various heuristics and will then parse the document accordingly.</p></div>
+}</pre></div></div><div class="section"><h2>Document metadata<a name="Document_metadata"></a></h2><p>The final argument to the <tt>parse</tt> method is used to pass document metadata both in and out of the parser. Document metadata is expressed as a <a href="./api/org/apache/tika/metadata/Metadata.html">Metadata</a> object.</p><p>The following are some of the more interesting metadata properties:</p><dl><dt>Metadata.RESOURCE_NAME_KEY</dt><dd>The name of the file or resource that contains the document.<p>A client application can set this property to allow the parser to use file name heuristics to determine the format of the document.</p><p>The parser implementation may set this property if the file format contains the canonical name of the file (for example the Gzip format has a slot for the file name).</p></dd><dt>Metadata.CONTENT_TYPE</dt><dd>The declared content type of the document.<p>A client application can set this property based on, for example, an HTTP Content-Type hea
 der. The declared content type may help the parser to correctly interpret the document.</p><p>The parser implementation sets this property to the content type according to which the document was parsed.</p></dd><dt>Metadata.TITLE</dt><dd>The title of the document.<p>The parser implementation sets this property if the document format contains an explicit title field.</p></dd><dt>Metadata.AUTHOR</dt><dd>The name of the author of the document.<p>The parser implementation sets this property if the document format contains an explicit author field.</p></dd></dl><p>Note that metadata handling is still being discussed by the Tika development team, and it is likely that there will be some (backwards incompatible) changes in metadata handling before Tika 1.0.</p></div><div class="section"><h2>Parser implementations<a name="Parser_implementations"></a></h2><p>Apache Tika comes with a number of parser classes for parsing <a href="./formats.html">various document formats</a>. You can al
 so extend Tika with your own parsers, and of course any contributions to Tika are warmly welcome.</p><p>The goal of Tika is to reuse existing parser libraries like <a class="externalLink" href="http://www.pdfbox.org/">PDFBox</a> or <a class="externalLink" href="http://poi.apache.org/">Apache POI</a> as much as possible, and so most of the parser classes in Tika are adapters to such external libraries.</p><p>Tika also contains some general purpose parser implementations that are not targeted at any specific document formats. The most notable of these is the <a href="./api/org/apache/tika/parser/AutoDetectParser.html">AutoDetectParser</a> class that encapsulates all Tika functionality into a single parser that can handle any type of document. This parser will automatically determine the type of the incoming document based on various heuristics and will then parse the document accordingly.</p></div>
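The documentation.html text above shows the ParsingReader pattern only with placeholder arguments. A minimal sketch of reading the extracted text as a character stream, assuming the older constructor that takes a parser, the stream and a Metadata object (newer Tika releases add a ParseContext argument); the command-line file argument is illustrative:

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.Reader;

import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.parser.ParsingReader;

public class PrintExtractedText {
    public static void main(String[] args) throws Exception {
        InputStream stream = new FileInputStream(args[0]); // the document to be parsed
        Metadata metadata = new Metadata();
        metadata.set(Metadata.RESOURCE_NAME_KEY, args[0]); // file name hint for type detection

        // A background thread parses the document; this thread reads plain text.
        Reader reader = new ParsingReader(new AutoDetectParser(), stream, metadata);
        try {
            BufferedReader lines = new BufferedReader(reader);
            String line;
            while ((line = lines.readLine()) != null) {
                System.out.println(line);   // read the document text using the reader
            }
        } finally {
            reader.close();                 // the document stream is closed automatically
        }
    }
}

Closing the reader also closes the underlying document stream, as the comment in the page's own snippet points out.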
       </div>
       <div id="sidebar">
         <div id="navigation">