Posted to commits@stanbol.apache.org by rw...@apache.org on 2012/11/23 14:28:34 UTC

svn commit: r1412880 - in /stanbol/site/trunk/content/docs/trunk/components/enhancer/engines: opennlpchunker.mdtext opennlppos.mdtext opennlpsentence opennlptokenizer.mdtext

Author: rwesten
Date: Fri Nov 23 13:28:33 2012
New Revision: 1412880

URL: http://svn.apache.org/viewvc?rev=1412880&view=rev
Log:
initial documentation for STANBOL-733

Added:
    stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlpchunker.mdtext
    stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlppos.mdtext
    stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlpsentence
    stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlptokenizer.mdtext

Added: stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlpchunker.mdtext
URL: http://svn.apache.org/viewvc/stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlpchunker.mdtext?rev=1412880&view=auto
==============================================================================
--- stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlpchunker.mdtext (added)
+++ stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlpchunker.mdtext Fri Nov 23 13:28:33 2012
@@ -0,0 +1,54 @@
+title: OpenNLP Chunker Engine
+
+The OpenNLP Chunker Engine supports the detection of Phrases (Noun, Verb, ...) within the parsed Text. For that it uses the OpenNLP Chunker feature. Detected Phrases are added as _Chunk_s to the _[AnalyzedText](../nlp/analyzedtext)_ content part. In addition added _Chunk_s are annotated with a [Phrase Annotation](../nlp/nlpannotations#phrase-annotations) providing the type of the Phrase represented by the _Chunk_.
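+
+The detected _Chunk_s can be consumed via the _AnalyzedText_ API. The following is a minimal sketch (assuming the Stanbol NLP API types _AnalysedText_, _Chunk_, _Value_, _PhraseTag_, _NlpAnnotations_ and _LexicalCategory_ are available; exact accessor names may vary) of how a consumer could iterate over the noun phrases detected by this engine:
+
+    :::java
+    //iterate over the Chunks added to the AnalysedText content part
+    Iterator<Chunk> chunks = analysedText.getChunks();
+    while(chunks.hasNext()){
+        Chunk chunk = chunks.next();
+        //the Phrase Annotation provides the type of the phrase
+        Value<PhraseTag> phrase = chunk.getAnnotation(NlpAnnotations.PHRASE_ANNOTATION);
+        if(phrase != null && phrase.value().getCategory() == LexicalCategory.Noun){
+            String nounPhrase = chunk.getSpan(); //the text covered by the Chunk
+        }
+    }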
+
+
+## Consumed information
+
+* __Language__ (required): The language of the text needs to be available. It is read as specified by [STANBOL-613](https://issues.apache.org/jira/browse/STANBOL-613) from the metadata of the ContentItem. Effectively this means that any Stanbol Language Detection engine will need to be executed before the OpenNLP Chunker Engine.
+* __Tokens with POS annotations__ (required): This Engine needs the Text to be tokenized and POS tagged. Moreover, the POS tags need to be compatible with the POS tags used to train the Chunker model. This effectively means that this Engine will only work as expected if the POS tagging was done by the OpenNLP POS Tagging Engine configured with a POS model using the same POS tag set as used for training the chunker model.
+* __Sentences__ (optional): In case _Sentence_s are available in the _AnalyzedText_ content part the tokenization of the text is done sentence by sentence. Otherwise the whole text is tokenized at once.
+
+## Configuration
+
+The OpenNLP Chunker Engine provides a default service instance (configuration policy is optional) that is configured to process all languages. For German the model parameter is set to 'OpenNLP_1.5.1-German-Chunker-TigerCorps07.zip', a chunker model that only detects Noun Phrases. This model is included in the 'o.a.stanbol.data.opennlp.lang.de' module. This Engine instance uses the name 'opennlp-chunker' and has a service ranking of '-100'.
+
+This engine supports the default configuration for Enhancement Engines including the __name__ _(stanbol.enhancer.engine.name)_ and the __ranking__ _(service.ranking)_. In addition it is possible to configure the __processed languages__ _(org.apache.stanbol.enhancer.chunker.languages)_ and a parameter to specify the name of the chunker model used for a language.
+
+__1. Processed Language Configuration:__
+
+For the configuration of the processed languages the following syntax is used:
+
+    de
+    en
+    
+This would configure the Engine to only process German and English texts. It is also possible to explicitly exclude languages:
+
+    !fr
+    !it
+    *
+
+This specifies that all Languages other than French and Italian are processed.
+
+Values can be passed as Array or Vector. This is done by using the ["elem1","elem2",...] syntax as defined by OSGI ".config" files. As a fallback, ',' separated Strings are also supported.
+
+The following example shows the two above examples combined into a single configuration.
+
+    org.apache.stanbol.enhancer.chunker.languages=["!fr","!it","de","en","*"]
+
+NOTE that the "processed language" configuration only specifies what languages are considered for processing. If "de" is enabled, but there is no sentence detection model available for that language, than German text will still not be processed. However if there is a POS model for "it" but the "processed language" configuration does not include Italian, than Italian text will NOT be processed.
+
+__2. Sentnece detection model parameter__
+
+The OpenNLP Sentence Detection engine supports the 'model' parameter to explicitly parse the name of the sentence detection model used for an language. Models are loaded via the Stanbol DataFile provider infrastructure. That means that models can be loaded from the {stanbol-working-dir}/stanbol/datafiles folder.
+
+The syntax for parameters is as follows:
+
+    {language};{param-name}={param-value}
+
+As shown by the default configuration of this engine, to use "OpenNLP_1.5.1-German-Chunker-TigerCorps07.zip" for detecting noun phrases in German texts, one can use a configuration as follows:
+
+    de;model=OpenNLP_1.5.1-German-Chunker-TigerCorps07.zip
+    *
+
+By default OpenNLP chunker models are loaded from '{lang}-chunker.bin'. To use models with other names users need to use the 'model' parameter as described above.
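+
+Combining the language filter with the 'model' parameter, a complete configuration for this engine could look as follows (using the default German chunker model named above; adapt the file name to the model you actually deploy):
+
+    org.apache.stanbol.enhancer.chunker.languages=["de;model=OpenNLP_1.5.1-German-Chunker-TigerCorps07.zip","en","*"]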
\ No newline at end of file

Added: stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlppos.mdtext
URL: http://svn.apache.org/viewvc/stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlppos.mdtext?rev=1412880&view=auto
==============================================================================
--- stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlppos.mdtext (added)
+++ stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlppos.mdtext Fri Nov 23 13:28:33 2012
@@ -0,0 +1,101 @@
+title: OpenNLP POS Tagging Engine
+
+A POS tagging Engine based on the [OpenNLP](http://opennlp.apache.org) POS tagging functionality that adds its annotations to the [AnalyzedText](../nlp/analyzedtext) ContentPart.
+
+## Consumed information
+
+* __Language__ (required): The language of the text needs to be available. It is read as specified by [STANBOL-613](https://issues.apache.org/jira/browse/STANBOL-613) from the metadata of the ContentItem. Effectively this means that any Stanbol Language Detection engine will need to be executed before the OpenNLP POS Tagging Engine.
+* __Sentences__ (optional): In case _Sentence_s are available in the _AnalyzedText_ content part the tokenization of the text is done sentence by sentence. If no _Sentence_s are available this engine detects sentences if a sentence detection model is available for that language (see below for more information). If no _Sentence_s are present and no OpenNLP sentence detection model is available for the language of the processed text, then the whole text is processed as a single sentence.
+* __Tokens__ (optional): For POS tagging, the Text needs to be tokenized. This Engine tries to consume _Tokens_ from the _AnalyzedText_ content part. If no Tokens are available it uses the OpenNLP tokenizer to tokenize the text (see below for more information).
+
+## POS Tagging
+
+POS tags are represented by adding _NlpAnnotations#POS_ANNOTATION_'s to the _Tokens_ of the _AnalyzedText_ content part. As OpenNLP supports multiple POS tag/probability suggestions, the OpenNLP POS Tagging Engine can add multiple POS annotations to a Token.
+
+POS annotations are added by using the key "stanbol.enhancer.nlp.pos" and are represented by the _PosTag_ class. However, typical users will rather use the _NlpAnnotations#POS_ANNOTATION_ to access the POS annotations of tokens:
+
+    :::java
+    //The POS tag with the highest probability
+    Value<PosTag> posAnnotation = token.getAnnotation(NlpAnnotations.POS_ANNOTATION);
+    //Get the list of all POS annotations
+    List<Value<PosTag>> posAnnotations = token.getAnnotations(NlpAnnotations.POS_ANNOTATION);
+
+    //Value provides the probability and the PosTag
+    double prob = posAnnotation.probability();
+    PosTag pos = posAnnotation.value();
+    //The string tag as used by the POS tagger
+    String tag = pos.getTag();
+
+    //POS tags can be mapped to LexicalCategories and Pos types
+    //so we can check if a Token is a Noun without the need to
+    //know the POS tags used by the POS tagger of the current language
+    boolean isNoun = pos.hasCategory(LexicalCategory.Noun);
+    boolean isProperNoun = pos.hasPos(Pos.ProperNoun);
+
+    //but not all PosTags might be mapped so we should check for
+    boolean mapped = pos.isMapped();
+
+The OpenNLP POS Tagging engine supports mapped PosTags for the following languages:
+
+* English: based on the Penn Treebank mappings to the [OLiA Ontology](http://nlp2rdf.lod2.eu/olia/) ([annotation model](http://purl.org/olia/penn.owl), [linking model](http://purl.org/olia/penn-link.rdf))
+* German: based on the STTS mapping to the [OLiA Ontology](http://nlp2rdf.lod2.eu/olia/) ([annotation model](http://purl.org/olia/stts.owl), [linking model](http://purl.org/olia/stts-link.rdf))
+* Spanish: based on the PAROLE TagSet mapping to the [OLiA Ontology](http://nlp2rdf.lod2.eu/olia/) ([annotation model](http://purl.org/olia/parole_es_cat.owl))
+* Danish: mappings for the PAROLE Tagset as described by [this paper](http://korpus.dsl.dk/paroledoc_en.pdf).
+* Portuguese: mappings based on the [PALAVRAS tag set](http://beta.visl.sdu.dk/visl/pt/symbolset-floresta.html)
+* Dutch: mappings based on the WOTAN Tagset for Dutch as described by _"WOTAN: Een automatische grammatikale tagger voor het Nederlands", doctoral dissertation, Department of language & Speech, Nijmegen University (renamed to Radboud University), december 1994."_. _NOTE_ that this TagSet does NOT distinguish between _ProperNoun_s and _CommonNoun_s.
+* Swedish: based on the [Lexical categories in MAMBA](http://w3.msi.vxu.se/users/nivre/research/MAMBAlex.html)
+
+__TODO:__ Currently the Engine is limited to those TagSets as it is not yet possible to extend them with additional ones.
+
+## Tokenizing and Sentence Detection Support
+
+The OpenNLP POS Tagging engine implicitly supports tokenizing and sentence detection. That means if the _[AnalyzedText](../nlp/analyzedtext)_ is not present or does not contain _Token_s, then this engine will use the OpenNLP Tokenizer to tokenize the text. If no language specific OpenNLP tokenizer model is available, then it will use the SIMPLE_TOKENIZER.
+
+Sentence detection is only done if no _Sentence_s are present in the _AnalyzedText_ AND if a language specific sentence detection model is available.
+
+__NOTE__: Support for Tokenizing and Sentence Detection is not a replacement for explicitly adding a Tokenizing and Sentence Detection Engine to an Enhancement Chain, as this Engine does not guarantee that _Token_s or _Sentence_s are added to the _AnalyzedText_ content part. If no POS model is available for a language or a language is not configured to be processed, there will be no _Token_s nor _Sentence_s added. Chains that rely on _Token_s and/or _Sentence_s MUST explicitly include a Tokenizing and Sentence detection engine!
+
+
+## Configuration
+
+_NOTE_ that the OpenNLP POS Tagging engine provides a default service instance (configuration policy is optional). This instance processes all languages where default POS models are provided by the OpenNLP service. This Engine instance uses the name 'opennlp-pos' and has a service ranking of '-100'.
+
+While this engine supports the default configuration including the __name__ _(stanbol.enhancer.engine.name)_ and the __ranking__ _(service.ranking)_, the engine also allows configuring the __processed languages__ _(org.apache.stanbol.enhancer.pos.languages)_ and a parameter to specify the name of the POS model used for a language.
+
+__1. Processed Language Configuration:__
+
+For the configuration of the processed languages the following syntax is used:
+
+    de
+    en
+    
+This would configure the Engine to only process German and English texts. It is also possible to explicitly exclude languages:
+
+    !fr
+    !it
+    *
+
+This specifies that all Languages other than French and Italian are processed.
+
+Values can be passed as Array or Vector. This is done by using the ["elem1","elem2",...] syntax as defined by OSGI ".config" files. As a fallback, ',' separated Strings are also supported.
+
+The following example shows the two above examples combined into a single configuration.
+
+    org.apache.stanbol.enhancer.pos.languages=["!fr","!it","de","en","*"]
+
+NOTE that the "processed language" configuration only specifies what languages are considered for processing. If "de" is enabled, but there is no POS model available for that language, than German text will still not be processed. However if there is a POS model for "it" but the "processed language" configuration does not include Italian, than Italian text will NOT be processed.
+
+__2. POS model parameter__
+
+The OpenNLP POS Tagging engine supports the 'model' parameter to explicitly pass the name of the POS model used for a language. POS models are loaded via the Stanbol DataFile provider infrastructure. That means that models can be loaded from the {stanbol-working-dir}/stanbol/datafiles folder.
+
+The syntax for parameters is as follows:
+
+    {language};{param-name}={param-value}
+
+So to use "my-de-pos-model.zip" for POS tagging German texts, one can use a configuration as follows:
+
+    de;model=my-de-pos-model.zip
+    *
+
+By default OpenNLP POS models are loaded for the names '{lang}-pos-perceptron.bin' and '{lang}-pos-maxent.bin'. To use models with other names, users need to use the 'model' parameter as described above.
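+
+Combining the language filter with the 'model' parameter, a complete configuration could look as follows (note that "my-de-pos-model.zip" is a hypothetical file name used for illustration):
+
+    org.apache.stanbol.enhancer.pos.languages=["de;model=my-de-pos-model.zip","en","*"]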
\ No newline at end of file

Added: stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlpsentence
URL: http://svn.apache.org/viewvc/stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlpsentence?rev=1412880&view=auto
==============================================================================
--- stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlpsentence (added)
+++ stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlpsentence Fri Nov 23 13:28:33 2012
@@ -0,0 +1,51 @@
+title: OpenNLP Sentence Detection Engine
+
+The OpenNLP Sentence Detection Engine adds _Sentence_s to the _[AnalyzedText](../nlp/analyzedtext)_ content part. If the _AnalyzedText_ content part is not yet present it is created by this engine.
+
+## Consumed information
+
+* __Language__ (required): The language of the text needs to be available. It is read as specified by [STANBOL-613](https://issues.apache.org/jira/browse/STANBOL-613) from the metadata of the ContentItem. Effectively this means that any Stanbol Language Detection engine will need to be executed before the OpenNLP Sentence Detection Engine.
+
+## Configuration
+
+The OpenNLP Sentence Detector Engine provides a default service instance (configuration policy is optional). This instance processes all languages and adds _Sentence_s for all languages where an OpenNLP sentence detection model is available. This Engine instance uses the name 'opennlp-sentence' and has a service ranking of '-100'.
+
+This engine supports the default configuration for Enhancement Engines including the __name__ _(stanbol.enhancer.engine.name)_ and the __ranking__ _(service.ranking)_. In addition it is possible to configure the __processed languages__ _(org.apache.stanbol.enhancer.sentence.languages)_ and a parameter to specify the name of the sentence detection model used for a language.
+
+__1. Processed Language Configuration:__
+
+For the configuration of the processed languages the following syntax is used:
+
+    de
+    en
+    
+This would configure the Engine to only process German and English texts. It is also possible to explicitly exclude languages:
+
+    !fr
+    !it
+    *
+
+This specifies that all Languages other than French and Italian are processed.
+
+Values can be passed as Array or Vector. This is done by using the ["elem1","elem2",...] syntax as defined by OSGI ".config" files. As a fallback, ',' separated Strings are also supported.
+
+The following example shows the two above examples combined into a single configuration.
+
+    org.apache.stanbol.enhancer.sentence.languages=["!fr","!it","de","en","*"]
+
+NOTE that the "processed language" configuration only specifies what languages are considered for processing. If "de" is enabled, but there is no sentence detection model available for that language, than German text will still not be processed. However if there is a POS model for "it" but the "processed language" configuration does not include Italian, than Italian text will NOT be processed.
+
+__2. Sentnece detection model parameter__
+
+The OpenNLP Sentence Detection engine supports the 'model' parameter to explicitly parse the name of the sentence detection model used for an language. Models are loaded via the Stanbol DataFile provider infrastructure. That means that models can be loaded from the {stanbol-working-dir}/stanbol/datafiles folder.
+
+The syntax for parameters is as follows:
+
+    {language};{param-name}={param-value}
+
+So to use "my-de-sentence-model.zip" for detecting sentences in German texts, one can use a configuration as follows:
+
+    de;model=my-de-sentence-model.zip
+    *
+
+By default OpenNLP sentence detection models are loaded from '{lang}-sent.bin'. To use models with other names users need to use the 'model' parameter as described above. 
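+
+Combining the language filter with the 'model' parameter, a complete configuration could look as follows (note that "my-de-sentence-model.zip" is a hypothetical file name used for illustration):
+
+    org.apache.stanbol.enhancer.sentence.languages=["de;model=my-de-sentence-model.zip","en","*"]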
\ No newline at end of file

Added: stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlptokenizer.mdtext
URL: http://svn.apache.org/viewvc/stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlptokenizer.mdtext?rev=1412880&view=auto
==============================================================================
--- stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlptokenizer.mdtext (added)
+++ stanbol/site/trunk/content/docs/trunk/components/enhancer/engines/opennlptokenizer.mdtext Fri Nov 23 13:28:33 2012
@@ -0,0 +1,55 @@
+title: OpenNLP Tokenizer Engine
+
+The OpenNLP Tokenizer Engine adds _Token_s to the _AnalyzedText_ content part. If this content part is not yet present it adds it to the ContentItem.
+
+## Consumed information
+
+* __Language__ (required): The language of the text needs to be available. It is read as specified by [STANBOL-613](https://issues.apache.org/jira/browse/STANBOL-613) from the metadata of the ContentItem. Effectively this means that any Stanbol Language Detection engine will need to be executed before the OpenNLP Tokenizer Engine.
+* __Sentences__ (optional): In case _Sentence_s are available in the _AnalyzedText_ content part the tokenization of the text is done sentence by sentence. Otherwise the whole text is tokenized at once.
+
+## Configuration
+
+The OpenNLP Tokenizer engine provides a default service instance (configuration policy is optional). This instance processes all languages. Language specific tokenizer models are used if available. For other languages the OpenNLP SimpleTokenizer is used. This Engine instance uses the name 'opennlp-token' and has a service ranking of '-100'.
+
+While this engine supports the default configuration including the __name__ _(stanbol.enhancer.engine.name)_ and the __ranking__ _(service.ranking)_, the engine also allows configuring the __processed languages__ _(org.apache.stanbol.enhancer.token.languages)_ and a parameter to specify the name of the tokenizer model used for a language.
+
+__1. Processed Language Configuration:__
+
+For the configuration of the processed languages the following syntax is used:
+
+    de
+    en
+    
+This would configure the Engine to only process German and English texts. It is also possible to explicitly exclude languages:
+
+    !fr
+    !it
+    *
+
+This specifies that all Languages other than French and Italian are tokenized.
+
+Values can be passed as Array or Vector. This is done by using the ["elem1","elem2",...] syntax as defined by OSGI ".config" files. As a fallback, ',' separated Strings are also supported.
+
+The following example shows the two above examples combined into a single configuration.
+
+    org.apache.stanbol.enhancer.token.languages=["!fr","!it","de","en","*"]
+
+__2. Tokenizer model parameter__
+
+The OpenNLP Tokenizer engine supports the 'model' parameter to explicitly pass the name of the Tokenizer model used for a language. Tokenizer models are loaded via the Stanbol DataFile provider infrastructure. That means that models can be loaded from the {stanbol-working-dir}/stanbol/datafiles folder.
+
+The syntax for parameters is as follows:
+
+    {language};{param-name}={param-value}
+
+So to use "my-de-tokenizer-model.zip" for tokenizing German texts, one can use a configuration as follows:
+
+    de;model=my-de-tokenizer-model.zip
+    *
+
+To configure that the SimpleTokenizer should be used for a given language, the 'model' parameter needs to be set to 'SIMPLE' as shown in the following example:
+
+    de;model=SIMPLE
+    *
+
+By default OpenNLP Tokenizer models are loaded from '{lang}-token.bin'. To use models with other names, users need to use the 'model' parameter as described above.
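+
+Combining the language filter with the 'model' parameter, a complete configuration could look as follows (note that "my-de-tokenizer-model.zip" is a hypothetical file name used for illustration; the SIMPLE value forces the SimpleTokenizer for Italian):
+
+    org.apache.stanbol.enhancer.token.languages=["de;model=my-de-tokenizer-model.zip","it;model=SIMPLE","*"]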
\ No newline at end of file