Posted to commits@uima.apache.org by pk...@apache.org on 2013/01/08 10:39:00 UTC

svn commit: r1430189 - /uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/

Author: pkluegl
Date: Tue Jan  8 09:39:00 2013
New Revision: 1430189

URL: http://svn.apache.org/viewvc?rev=1430189&view=rev
Log:
UIMA-2285
- fixed some typos

Modified:
    uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.create_dictionaries.xml
    uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.explain_perspective.xml
    uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.install.xml
    uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.overview.xml
    uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.projects.xml
    uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.query.xml
    uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.testing.xml
    uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.textruler.xml
    uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.tm_perspective.xml
    uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.xml

Modified: uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.create_dictionaries.xml
URL: http://svn.apache.org/viewvc/uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.create_dictionaries.xml?rev=1430189&r1=1430188&r2=1430189&view=diff
==============================================================================
--- uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.create_dictionaries.xml (original)
+++ uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.create_dictionaries.xml Tue Jan  8 09:39:00 2013
@@ -27,21 +27,15 @@ under the License.
 <section id="section.ugr.tools.tm.workbench.create_dictionaries">
   <title>Creation of Tree Word Lists</title>
   <para>
-    Tree word lists are external resources which can be used
-    to annotate
-    all occurrences of list items in a document
-    with a given annotation
-    type very fast. For more details
-    on their use, see
-    <xref linkend='ugr.tools.tm.language.external_resources' />
-    . Since simple tree and multi tree word lists have to be compiled
-    the
-    TextMarker workbench provides an easy way to compile
-    them from ordinary
-    text files. These text files have to
-    containing one item per line, for
-    example like in the
-    following list of first names:
+    Tree word lists are external resources that can be used
+    to annotate all occurrences of list items in a document
+    with a given annotation type very quickly. For more details
+    on their use, see <xref linkend='ugr.tools.tm.language.external_resources' />. 
+    Since simple tree and multi tree word lists have to be compiled,
+    the TextMarker workbench provides an easy way to compile
+    them from ordinary text files. These text files have to
+    contain one item per line, as in the
+    following list of first names: 
     <programlisting><![CDATA[Frank
 Peter
 Jochen
@@ -51,9 +45,8 @@ Martin
   <para>
     To compile a simple tree word list from a text file,
     right-click on the text file in TextMarker script
-    explorer. You get the menu seen in
-    <xref linkend='figure.ugr.tools.tm.workbench.create_dictionaries_1' />
-    .
+    explorer. The resulting menu is shown in
+    <xref linkend='figure.ugr.tools.tm.workbench.create_dictionaries_1' />.
 
     <figure id="figure.ugr.tools.tm.workbench.create_dictionaries_1">
       <title>Create a simple tree word list
@@ -79,18 +72,18 @@ Martin
     When hovering over the TextMarker item, you can choose
     <quote>Convert to TWL</quote>
     .
-    Click on it and a tree word list with the same name as the origin
+    Click on it and a tree word list with the same name as the original
     file is
     generated in the same folder.
   </para>
   <para>
     You can also generate several tree word lists at once. To do so,
     just select
-    multiple files and then right-click and do the same as for a single
+    multiple files and then right-click and do the same as for a single
     list. You will get one tree word list for every selected file.
   </para>
   <para>
-    To generate a multi tree work list, select all files which should be
+    To generate a multi tree word list, select all files that should be
     generated
     into the multi tree word list. Again right-click and select
     <quote>Convert to Multi TWL</quote>

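For context, a compiled tree word list is referenced from a script and applied with MARKFAST. The following is a minimal sketch only; the resource file name FirstNames.twl and the type FirstName are illustrative assumptions, not taken from the documentation above:

```
// Sketch: using a compiled tree word list (names are illustrative assumptions).
WORDLIST FirstNameList = 'FirstNames.twl';
DECLARE FirstName;
// Annotate every occurrence of a list item as FirstName.
Document{-> MARKFAST(FirstName, FirstNameList)};
```
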
Modified: uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.explain_perspective.xml
URL: http://svn.apache.org/viewvc/uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.explain_perspective.xml?rev=1430189&r1=1430188&r2=1430189&view=diff
==============================================================================
--- uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.explain_perspective.xml (original)
+++ uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.explain_perspective.xml Tue Jan  8 09:39:00 2013
@@ -20,44 +20,24 @@
   <title>Explain Perspective</title>
   <para>
     Writing new rules is laborious, especially if the newly written
-    rules do not
-    behave as
-    expected. The TextMarker system is able to
-    protocol the application of each
-    single rule and
-    block in order to
+    rules do not behave as expected. The TextMarker system is able to
+    record the application of each single rule and block in order to
     provide an explanation of the rule inference and a
-    minimal debugging
-    functionality. The information about the application of the rules
-    itself is stored
-    in the resulting
-    xmiCAS output file if the parameters
-    of the executed engine are
-    configured correctly. The simplest way to
-    generate these explanation information
-    is to click on the common
-    'Debug' button (looks like a green bug)
-    while having the TextMarker
-    script file, you want to debug, active in
-    your editor. The current
-    TextMarker file will then be executed on the text files in the input
-    directory and xmiCAS are
-    created in the output directory containing the
-    additional UIMA
-    feature structures describing the
-    rule inference. To
-    show the newly created execution information, you can either open the
-    Explain
-    perspective or open the necessary views separately and arrange
-    them as you like. There are eight
-    views that display information about
-    the execution of
-    the rules: Applied Rules, Covering Rules, Created By,
+    minimal debugging functionality. The information about the application of the rules
+    itself is stored in the resulting xmiCAS output file if the parameters
+    of the executed engine are configured correctly. The simplest way to
+    generate this explanation information is to click on the common 'Debug' button (which looks like a green bug)
+    while having the TextMarker script file you want to debug active in
+    your editor. The current TextMarker file will then be executed on the text files in the input
+    directory, and xmiCAS files are created in the output directory containing the
+    additional UIMA feature structures describing the
+    rule inference. To show the newly created execution information, you can either open the
+    Explain perspective or open the necessary views separately and arrange
+    them as you like. There are eight views that display information about
+    the execution of the rules: Applied Rules, Covering Rules, Created By,
     Failed Rules, Matched Rules, Rule Elements, Rule List
-    and Statistics.
-    All of theses views are further explained in detail, using the
-    TextMarker example project
-    for examples.
+    and Statistics. All of these views are further explained in detail, using the
+    TextMarker example project for examples.
   </para>
 
   <para>
@@ -79,20 +59,13 @@
       rules that tried to apply to the input documents.
     </para>
     <para>
-      The structure is as
-      follows: if BLOCK constructs were used in the
-      executed TextMarker
-      file, the rules contained in that block will be
-      represented as child
-      node in the tree of the view. Each TextMarker
-      file is itself a BLOCK
-      construct named after the file. Therefore the
-      root node of the view
-      is always a BLOCK containing the rules of the
-      executed TextMarker
-      script. Additionally, if a rule calls a different
-      TextMarker file,
-      then the root block of that file is the child of the
+      The structure is as follows: if BLOCK constructs were used in the
+      executed TextMarker file, the rules contained in that block will be
+      represented as child nodes in the tree of the view. Each TextMarker
+      file is a BLOCK construct itself and named after the file. The
+      root node of the view is, therefore, always a BLOCK containing the rules of the
+      executed TextMarker script. Additionally, if a rule calls a different
+      TextMarker file, then the root block of that file is the child of the
       calling rule.
     </para>
     <para>
@@ -135,29 +108,24 @@
     <para>
       Besides the hierarchy, the view shows how often a rule tried to match
       in total and how often it succeeded. This is shown in brackets at the
-      beginning of each rule entry. E.g., the Applied Rules view tells us
+      beginning of each rule entry. The Applied Rules view tells us
       that the rule
       <literal>NUM{REGEXP("19..|20..") -> MARK(Year)};</literal>
       within script 'Year.tm' tried to match twice but only succeeded once.
     </para>
     <para>
-      After this information the rule itself is given. Notice that
-      each rule
-      is given with all the parameters it has been executed.
-      E.g., have a
-      look at rule entry
+      After this information, the rule itself is given. Notice that
+      each rule is given with all the parameters with which it has been executed.
+      Have a look at rule entry
       <literal>[1/1]Document{->MARKFAST(FirstName,FirstNameList,false,0,true)}
-      </literal>
-      within BLOCK Author. The rule obviously has been executed with five
+      </literal> within BLOCK Author. The rule obviously has been executed with five
       parameters. If you double-click on this rule, you will get to the
       rule in the script file 'Author.tm'. It shows the rule as follows:
-      <literal>Document{-> MARKFAST(FirstName, FirstNameList)};
-      </literal>
-      . This means the last three parameters have been default values used
+      <literal>Document{-> MARKFAST(FirstName, FirstNameList)};</literal>. This means that default values for the last three parameters have been used
       to execute the rule.
     </para>
     <para>
-      Additionally some profiling information, giving details about
+      Additionally, some profiling information, giving details about
       the absolute time and the percentage of total execution time the rule
       needed, is added at the end of each rule entry.
     </para>
@@ -178,15 +146,12 @@
       the instances on which the rule tried but failed to match.
     </para>
     <para>
-      E.g. select rule
-      <literal>[2/3]Name{-PARTOF(NameListPart)} NameLinker[1,2]{->
-        MARK(NameListPart,1,2)}
-      </literal>
-      within BLOCK Author.
+      Select rule <literal>[2/3]Name{-PARTOF(NameListPart)} NameLinker[1,2]{->
+        MARK(NameListPart,1,2)};</literal> within BLOCK Author.
       <xref
         linkend='figure.ugr.tools.tm.workbench.explain_perspective.matched_and_failed_rules' />
       shows the text passages this rule tried to match on. One did not
-      succeed. Therefore it is displayed within the Failed Rules view. Two
+      succeed and is therefore displayed within the Failed Rules view. Two
       succeeded and are shown in the Matched Rules view.
     </para>
     <para>
@@ -224,40 +189,29 @@
     <title>Rule Elements</title>
     <para>
       If you select one of the listed instances in the Matched or
-      Failed Rules view,
-      then the Rule Elements view contains a listing
-      of
-      the rule elements and their
-      conditions belonging to the related rule
+      Failed Rules view, then the Rule Elements view contains a listing
+      of the rule elements and their conditions belonging to the related rule
       used on the specific text passage. There is detailed
-      information
-      available on what text
-      passage each rule element did or
-      did not match
-      and which condition did
-      or did not evaluate true.
+      information available on what text passage each rule element did or
+      did not match and which condition did or did not evaluate true.
     </para>
     <para>
       Within the Rule Elements view, each rule element generates its
-      own explanation hierarchy. On the root level the rule element itself
+      own explanation hierarchy. On the root level, the rule element itself
       is given. An apostrophe at the beginning of the rule element
       indicates that this rule was the anchor for the rule execution. On
-      the next level the text passage on which the rule element tried to
-      match on is given. The last level then explains why the rule element
-      did or did not match. The first entry on this level tells if the text
+      the next level, the text passage on which the rule element tried to
+      match is given. The last level explains why the rule element
+      did or did not match. The first entry on this level tells whether the text
       passage is of the requested annotation type. If it is, a green hook
-      is shown in front of the requested type. Otherwise a red cross is
+      is shown in front of the requested type. Otherwise, a red cross is
       shown. In the following the rule conditions and their evaluation on
       the given text passage are shown.
     </para>
     <para>
       In the previous example, select the listed instance
-      <literal>Bethard, S.</literal>
-      . The Rule Elements view then shows the related explanation displayed
-      in
-      <xref
-        linkend='figure.ugr.tools.tm.workbench.explain_perspective.rule_elements' />
-      .
+      <literal>Bethard, S.</literal>. The Rule Elements view shows the related explanation displayed
+      in <xref linkend='figure.ugr.tools.tm.workbench.explain_perspective.rule_elements' />.
     </para>
     <para>
       The following image shows the TextMarker Rule Elements view.
@@ -283,17 +237,11 @@
       </figure>
     </para>
     <para>
-      As you can see, the first rule element
-      <literal>Name{-PARTOF(NameListPart)}</literal>
-      matched on the text passage
-      <literal>Bethard, S.</literal>
-      since it is firstly annotated with an
-      <quote>Name</quote>
-      annotation and secondly it is not part of an annotation
-      <quote>NameListPart</quote>
-      . But as this first text passage is not followed by a
-      <quote>NameLinker</quote>
-      annotation the whole rule fails.
+      As you can see, the first rule element <literal>Name{-PARTOF(NameListPart)}</literal>
+      matched on the text passage <literal>Bethard, S.</literal>
+      since it is, firstly, annotated with a <quote>Name</quote> annotation and, secondly, not part of a
+      <quote>NameListPart</quote> annotation. However, as this first text passage is not followed by a
+      <quote>NameLinker</quote> annotation, the whole rule fails.
     </para>
   </section>
 
@@ -303,14 +251,12 @@
     <para>
       This view is very similar to the Applied Rules view, but
       displays only rules and blocks under a given selection. If the user
-      clicks on any position in the xmiCAS document, an Covering Rules view
+      clicks on any position in the xmiCAS document, a Covering Rules view
       is generated containing only rule elements that affect that position
       in the document. The Matched Rules,
       Failed Rules and Rule Elements
-      views
-      then only contain match
-      information
-      of that position.
+      views only contain match
+      information of that position.
     </para>
   </section>
 
@@ -320,9 +266,9 @@
       This view is very similar to the Applied Rules view and the
       Covering Rules view, but displays only rules and NO blocks under a
       given selection. If the user clicks on any position in the xmiCAS
-      document, a list of rules, that matched or tried to match on that
-      position in the document, is generated within the Rule List view. The
-      Matched Rules, Failed Rules and Rule Elements views then only contain
+      document, a list of rules that matched or tried to match on that
+      position in the document is generated within the Rule List view. The
+      Matched Rules, Failed Rules and Rule Elements views only contain
       match information of that position. Additionally, this view provides
       a text field for filtering the rules. Only those rules remain that
       contain the entered text.
@@ -334,22 +280,18 @@
     <title>Created By</title>
     <para>
       The Created By view tells you which rule created a specific
-      annotation. To get this information just select an annotation in the
-      Annotation Browser. After doing this the Created By view shows the
+      annotation. To get this information, select an annotation in the
+      Annotation Browser. After doing this, the Created By view shows the
       related information.
     </para>
     <para>
       To see how this works, use the example project and go to the
       Annotation view. Select the
       <quote>d.u.e.Year</quote>
-      annotation
-      <quote>(2008)</quote>
-      . The Created By view displays the information, seen in
-      <xref linkend='figure.ugr.tools.tm.workbench.explain_perspective.created_by' />
-      . You can double-click on the shown rule to jump to the related
-      document
-      <quote>Year.tm</quote>
-      .
+      annotation <quote>(2008)</quote>. The Created By view displays the information shown in
+      <xref linkend='figure.ugr.tools.tm.workbench.explain_perspective.created_by' />. 
+      You can double-click on the shown rule to jump to the related
+      document <quote>Year.tm</quote>.
     </para>
     <para>
       The following image shows the TextMarker Created By view.

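The BLOCK nesting that the Applied Rules view visualizes, as described in this section, can be illustrated with a minimal sketch. The block name and the rule inside it are taken from the Author example above; treat the exact condition syntax as illustrative:

```
// Each script file acts as the root BLOCK in the Applied Rules view.
// Rules inside a named BLOCK appear as child nodes of that block.
BLOCK(Author) Document{} {
    Name{-PARTOF(NameListPart)} NameLinker[1,2]{-> MARK(NameListPart, 1, 2)};
}
```
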
Modified: uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.install.xml
URL: http://svn.apache.org/viewvc/uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.install.xml?rev=1430189&r1=1430188&r2=1430189&view=diff
==============================================================================
--- uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.install.xml (original)
+++ uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.install.xml Tue Jan  8 09:39:00 2013
@@ -31,46 +31,31 @@ under the License.
     <orderedlist numeration="arabic">
       <listitem>
         <para>
-          Download, install and start an Eclipse instance in version
-          3.7.
-          Eclipse can be obtained
-          from the
-          <ulink
-            url="http://www.eclipse.org/downloads/packages/release/indigo/sr2">eclipse.org</ulink>
-          download site.
+          Download, install, and start Eclipse 3.7, which can be obtained
+          from the <ulink url="http://www.eclipse.org/downloads/packages/release/indigo/sr2">eclipse.org</ulink> download site.
         </para>
       </listitem>
       <listitem>
         <para>
           Add the Apache UIMA update site (
           <ulink url="http://www.apache.org/dist/uima/eclipse-update-site/">http://www.apache.org/dist/uima/eclipse-update-site/
-          </ulink>
-          ) to the available
+          </ulink>) to the available
           software sites in your Eclipse installation.
-          To do
-          so, click
-          <quote>Help &rarr;
-            Install New Software
-          </quote>
-          and add each site. This opens the install wizard which can be
-          seen
-          in
-          <xref linkend='figure.ugr.tools.tm.workbench.install.update' />
+          Click on <quote>Help &rarr; Install New Software</quote>. This opens the install wizard, which can be
+          seen in <xref linkend='figure.ugr.tools.tm.workbench.install.update' />.
         </para>
       </listitem>
       <listitem>
         <para>
-          Enter the TextMarker update site in field
+          Select or enter the TextMarker update site (<ulink url="http://www.apache.org/dist/uima/eclipse-update-site/">http://www.apache.org/dist/uima/eclipse-update-site/
+          </ulink>) in the field
           <quote>Work with:</quote>
-          and press
-          <quote>Enter</quote>
-          .
+          and press <quote>Enter</quote>.
         </para>
       </listitem>
       <listitem>
         <para>
-          The
-          <quote>Apache UIMA TextMarker
+          The <quote>Apache UIMA TextMarker
             Eclipse tooling and runtime support
           </quote>
           feature will be displayed. Select
@@ -84,23 +69,18 @@ under the License.
             sites during install to find required software
           </quote>
           and click on
-          <quote>
-            Next
-          </quote>
-          .
+          <quote>Next</quote>.
         </para>
       </listitem>
       <listitem>
         <para>
-          On the next page click
-          <quote>Next</quote>
+          On the next page, click <quote>Next</quote>
           again. Now, the license
           agreement site is displayed. To install TextMarker, read the license and
           choose
           <quote>I accept the ...</quote>
-          if you agree to it. If you agreed,
-          click on
-          <quote>Finish</quote>
+          if you agree to it. Then,
+          click on <quote>Finish</quote>.
         </para>
       </listitem>
     </orderedlist>
@@ -130,10 +110,8 @@ under the License.
     Now, TextMarker is going to be installed.
     After the successful
     installation, switch to the TextMarker
-    perspective. To get an overview
-    see
-    <xref linkend='section.ugr.tools.tm.workbench.overview' />
-    .
+    perspective. To get an overview, see
+    <xref linkend='section.ugr.tools.tm.workbench.overview' />.
   </para>
   <para>
     Several times within this chapter we use a TextMarker example

Modified: uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.overview.xml
URL: http://svn.apache.org/viewvc/uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.overview.xml?rev=1430189&r1=1430188&r2=1430189&view=diff
==============================================================================
--- uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.overview.xml (original)
+++ uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.overview.xml Tue Jan  8 09:39:00 2013
@@ -21,8 +21,7 @@
       <listitem>
         <para>
           The
-          <quote>TextMarker perspective</quote>
-          which provides the main functionality for working on TextMarker projects. See
+          <quote>TextMarker perspective</quote>, which provides the main functionality for working on TextMarker projects. See
           <xref linkend='section.ugr.tools.tm.workbench.tm_perspective' />
           for detailed information.
         </para>
@@ -30,12 +29,9 @@
       <listitem>
         <para>
           The
-          <quote>Explain perspective</quote>
-          which provides functionality primarily used to explain how a set of written rules
-          behaved
-          on a number of input documents. See
-          <xref linkend='section.ugr.tools.tm.workbench.explain_perspective' />
-          .
+          <quote>Explain perspective</quote>, which provides functionality primarily used to explain how a set of rules
+          is executed on input documents. See
+          <xref linkend='section.ugr.tools.tm.workbench.explain_perspective' />.
         </para>
       </listitem>
     </orderedlist>
@@ -61,10 +57,9 @@
         </textobject>
       </mediaobject>
     </figure>
-    As you can see, the TextMarker perspective provides an editor for editing documents, e.g.
-    TextMarker scripts, and several views for different other tasks. The Script Explorer for
-    example
-    helps to manage your TextMarker projects.
+    As you can see, the TextMarker perspective provides an editor for editing documents, e.g.,
+    TextMarker scripts, and several views for different other tasks. The Script Explorer, for
+    example, helps to manage your TextMarker projects.
   </para>
   <para>
     The following

Modified: uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.projects.xml
URL: http://svn.apache.org/viewvc/uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.projects.xml?rev=1430189&r1=1430188&r2=1430189&view=diff
==============================================================================
--- uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.projects.xml (original)
+++ uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.projects.xml Tue Jan  8 09:39:00 2013
@@ -21,14 +21,11 @@
   <title>TextMarker Projects</title>
   <para>
     TextMarker projects used within the TextMarker workbench need to have
-    a certain folder
-    structure. The parts of this folder structure are
+    a certain folder structure. The parts of this folder structure are
     explained in
-    <xref linkend='table.ugr.tools.tm.workbench.create_project.folder_strucutre' />
-    . To create a TextMarker project it is recommended to use the provided
+    <xref linkend='table.ugr.tools.tm.workbench.create_project.folder_strucutre' />. To create a TextMarker project, it is recommended to use the provided
     wizard, explained in
-    <xref linkend='section.ugr.tools.tm.workbench.projects.create_projects' />
-    . If this wizard is used, the required folder structure is
+    <xref linkend='section.ugr.tools.tm.workbench.projects.create_projects' />. If this wizard is used, the required folder structure is
     automatically created.
   </para>
 
@@ -67,7 +64,7 @@
               launching a
               TextMarker script. Such input files could be plain
               text,
-              HTML, xmiCAS files or others.
+              HTML or xmiCAS files.
             </entry>
           </row>
           <row>
@@ -121,10 +118,8 @@
   <section id="section.ugr.tools.tm.workbench.projects.create_projects">
     <title>TextMarker create project wizard</title>
     <para>
-      To create a new TextMarker project switch to TextMarker perspective
-      and click
-      <quote>File &rarr; New &rarr; TextMarker Project</quote>
-      . This opens the corresponding wizard.
+      To create a new TextMarker project, switch to the TextMarker perspective
+      and click <quote>File &rarr; New &rarr; TextMarker Project</quote>. This opens the corresponding wizard.
     </para>
 
     <para>
@@ -152,24 +147,26 @@
       </figure>
     </para>
     <para>
-      To create a simple TextMarker project just enter a project name for
+      To create a simple TextMarker project, enter a project name for
       your project and click
-      <quote>Finish</quote>. This will create all you need to start.
+      <quote>Finish</quote>. This will create everything you need to start.
     </para>
     <para>
       Other possible settings on this page are the desired location of
       the project,
       the interpreter to use and the working set you wish to
-      work on, all of them really self-explaining.
+      work on, all of which are self-explanatory.
     </para>
+    <!-- 
     <para>
-      On the second page of the wizard you can mainly configure the
+      On the second page of the wizard, you can mainly configure the
       needed build path. This is necessary if you like to use external
       source
       folders or if the project to create will be dependent on other
       projects or if external libraries have to be found. Add the desired
       configuration in the related tab.
     </para>
+     -->
     <para>
       <xref
         linkend='figure.ugr.tools.tm.workbench.projects.create_projects.wizard2' />

Modified: uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.query.xml
URL: http://svn.apache.org/viewvc/uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.query.xml?rev=1430189&r1=1430188&r2=1430189&view=diff
==============================================================================
--- uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.query.xml (original)
+++ uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.query.xml Tue Jan  8 09:39:00 2013
@@ -26,7 +26,7 @@ under the License.
 
 <section id="section.ugr.tools.tm.workbench.tm_query">
   <title>Query View</title>
-  <para> With the Query View the TextMarker language can be used to write queries on a set of
+  <para> With the Query View, the TextMarker language can be used to write queries on a set of
     documents. A query is simply a set of TextMarker rules. Each query returns a list of all text
     passages the query applies to. For example, if you have a set of annotated documents containing
     a number of Author annotations, you could use the Query View to get a list of all the author
@@ -63,7 +63,7 @@ under the License.
           <quote>Query Data</quote>
           specifies the folder containing the documents on which the query should be executed. You
           can either click on the button next to the field to specify the folder by browsing through
-          the file system, or you can drag and drop a folder directly into the field. If the
+          the file system or you can drag and drop a folder directly into the field. If the
           checkbox is activated, all subfolders are included.
         </para>
       </listitem>
@@ -73,7 +73,7 @@ under the License.
           <quote>Type System</quote>
           has to contain a type system or a TextMarker script that specifies all types that are used
           in the query. You can either click on the button next to the field to specify the type
-          system by browsing through the file system, or you can drag and drop a type system
+          system by browsing through the file system or you can drag and drop a type system
           directly into the field.
         </para>
       </listitem>
@@ -82,7 +82,7 @@ under the License.
           the middle of the view.</para>
       </listitem>
       <listitem>
-        <para> After pressing the start button the query is started. The results are subsequently
+        <para> After pressing the start button, the query is started. The results are subsequently
           displayed in the bottom text field.</para>
       </listitem>
     </orderedlist>
@@ -93,8 +93,8 @@ under the License.
     brackets the document related to the text passage. By double-clicking on one of the listed
     items, the related document is opened in the editor and the matched text passage is selected. If
    the related document is already open, you can jump to another matched text passage within the
-    same document with just one click on the listed item. Of course this text passage is then
-    selected. By clicking on the export button a list of all matched text passaged is showed in a
+    same document with one click on the listed item. This text passage is then selected. 
+    By clicking on the export button, a list of all matched text passages is shown in a
     separate window. For further usage, e.g. as a list of authors in another TextMarker project,
     copy the content of this window to another text file.
   </para>
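+  <para>
+    For illustration, a minimal query could consist of a single rule. This
+    sketch assumes that the documents contain annotations of a type named
+    <quote>Author</quote>:
+    <programlisting>Author;</programlisting>
+    Every text passage covered by an Author annotation is then returned as a
+    matched text passage.
+  </para>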

Modified: uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.testing.xml
URL: http://svn.apache.org/viewvc/uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.testing.xml?rev=1430189&r1=1430188&r2=1430189&view=diff
==============================================================================
--- uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.testing.xml (original)
+++ uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.testing.xml Tue Jan  8 09:39:00 2013
@@ -26,7 +26,7 @@ under the License.
 
 <section id="section.ugr.tools.tm.workbench.testing">
   <title>Testing</title>
-  <para> The TextMarker workbench comes bundled with its own testing environment, that allows you to
+  <para> The TextMarker workbench comes bundled with its own testing environment that allows you to
     test and evaluate TextMarker scripts. It provides full back-end testing capabilities and allows
     you to examine test results in detail.
   </para>
@@ -44,16 +44,13 @@ under the License.
   <para>
     <xref linkend='figure.ugr.tools.tm.workbench.testing.script_explorer' />
     shows the script explorer. Every TextMarker project contains a folder called
-    <quote>test</quote>
-    . This folder is the default location for the test-files. In the folder each script file has its
+    <quote>test</quote>. This folder is the default location for the test files. In this folder, each script file has its
    own subfolder with a relative path equal to the script's package path in the
-    <quote>script</quote>
-    folder. This folder contains the test files. In every scripts test folder you will also find a
+    <quote>script</quote> folder. This folder contains the test files. In every script's test folder, you will also find a
    result folder where the results of the tests are saved. If you want to use test files from
     another location in the file system, the results will be saved in the
-    <quote>temp</quote>
-    subfolder of the projects test folder. All files in the temp folder will be deleted, once
-    eclipse is closed.
+    <quote>temp</quote> subfolder of the project's test folder. All files in the temp folder will be deleted once
+    Eclipse is closed.
   </para>
   <para>
     <figure id="figure.ugr.tools.tm.workbench.testing.script_explorer">
@@ -77,16 +74,16 @@ under the License.
     <title>Usage</title>
    <para> This section describes the general procedure for using the testing environment. </para>
     <para>
-      Currently the testing environment has no own perspective associated to it. It is recommended
+      Currently, the testing environment has no perspective of its own. It is recommended
       to start within the TextMarker perspective. There, the Annotation Test view is open by
       default. The True Positive, False Positive and False Negative views have to be opened
       manually:
       <quote>Window -> Show View -> True Positive/False Positive/False Negative </quote>.
     </para>
-    <para> To explain the usage of the TextMarker testing environment the TextMarker example project
-      is again used. Therefore, open this project. 
-      Firstly one has to select a script for testing: TextMarker will always test the script, that
-      is currently open and active in the script editor. So open the
+    <para> To explain the usage of the TextMarker testing environment, the TextMarker example project
+      is used again. Open this project. 
+      First, one has to select a script for testing: TextMarker will always test the script that
+      is currently open and active in the script editor. So, open the
       <quote>Main.tm</quote>
       script file of the TextMarker example project.
       The next <link linkend='figure.ugr.tools.tm.workbench.testing.annotation_test_initial_view'>figure</link>.
@@ -137,24 +134,23 @@ under the License.
         </mediaobject>
       </figure>
     </para>
-    <para> All control elements, that are needed for the interaction with the testing environment,
-      are located here. At the right top, there is the buttons bar (label (1)-(6)). At the left top
+    <para> All control elements that are needed for the interaction with the testing environment
+      are located here. At the top right, there is the button bar (labels (1)-(6)). At the top left
       of the view (label (7)) the name of the script that is going to be tested is shown. It is
-      always same to the script active in the editor. Below this (label (10)) the test list is
-      located. This list contains the different files to be tested. Right next to name of the script
-      file (label (8)) you get select the desired view. Right to this (label (9)) you get statistics
+      always equal to the script active in the editor. Below this (label (10)), the test list is
+      located. This list contains the different files for testing. Right next to the name of the script
+      file (label (8)), you can select the desired view. To the right of it (label (9)), you get statistics
      over all executed tests: the number of all true positives (TP), false positives (FP) and false
-      negatives (FN). In the field bellow (label (11)), you will find a table with statistic
-      information for a single selected test file. To change this view select a file in the test
+      negatives (FN). In the field below (label (11)), you will find a table with statistical
+      information for a single selected test file. To change this view, select a file in the test
      list field. The table shows the total TP, FP and FN counts, as well as precision, recall
      and f1-score, for every type and for the whole file. </para>
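+    <para>
+      The displayed scores are computed from the true positive (TP), false
+      positive (FP) and false negative (FN) counts in the usual way:
+      <programlisting>precision = TP / (TP + FP)
+recall    = TP / (TP + FN)
+f1-score  = 2 * precision * recall / (precision + recall)</programlisting>
+    </para>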
     <para>
-      Next you have to add test files to your project. A test file is a previously annotated xmiCAS
+      Next, you have to add test files to your project. A test file is a previously annotated xmiCAS
       file that can be used as a golden standard for the test. You can use any xmiCAS file. The
-      TextMarker example project already contains such test files. Therefore these files are listed
+      TextMarker example project already contains such test files. These files are listed
      in the Annotation Test view. Try to delete these files by selecting them and clicking on
-      <literal>Del</literal>
-      . Add these files again by simply dragging them from the Script Explorer into the test file
+      <literal>Del</literal>. Add these files again by simply dragging them from the Script Explorer into the test file
       list. A different way to add test-files is to use the
       <quote>Load all test files from selected folder</quote>
       button (green plus). It can be used to add all xmiCAS files from a selected folder.
@@ -166,32 +162,24 @@ under the License.
       perspective delivered with the UIMA workbench.
     </para>
     <para>
-      Selecting a CAS View to test: TextMarker supports different views, that allow you to operate
-      on different levels in a document. The
-      <quote>InitialView</quote>
-      is selected as default, however you can also switch the evaluation to another view by typing
-      the views name into the list or selecting the view you wish to use from the list.
-    </para>
-    <para>
-      Selecting the evaluator: The testing environment supports different evaluators that allow a
+      The testing environment supports different evaluators that allow a
       sophisticated analysis of the behavior of a TextMarker script. The evaluator can be chosen in
       the testing environment's preference page. The preference page can be opened either through
       the menu or by clicking on the
       <quote>Select evaluator</quote>
       button (blue gear wheels) in the testing view's toolbar. Clicking the button will open a
       filtered version of the TextMarker preference page. The default evaluator is the "Exact CAS
-      Evaluator" which compares the offsets of the annotations between the test file and the file
+      Evaluator", which compares the offsets of the annotations between the test file and the file
       annotated by the tested script. To get an overview of all available evaluators, see
       <xref linkend='section.ugr.tools.tm.workbench.testing.evaluators' />
     </para>
     <para>
-      This preference page (see
-      <xref linkend='figure.ugr.tools.tm.workbench.testing.preference' />
-      ) offers a few options that will modify the plug-ins general behavior. For example the
+      This preference page (see <xref linkend='figure.ugr.tools.tm.workbench.testing.preference' />) 
+      offers a few options that will modify the plug-in's general behavior. For example, the
       preloading of previously collected result data can be turned off. An important option in the
      preference page is the evaluator you can select. By default, the "exact evaluator" is selected,
      which compares the offsets of the annotations that are contained in the file produced by the
-      selected script, with the annotations in the test file. Other evaluators will compare
+      selected script with the annotations in the test file. Other evaluators will compare
       annotations in a different way.
     </para>
     <para>
@@ -211,12 +199,11 @@ under the License.
       </figure>
     </para>
     <para>
-      Excluding Types: During a test-run it might be convenient to disable testing for specific
+      During a test run, it might be convenient to disable testing for specific
       types like punctuation or tags. The
       <quote>Select excluded types</quote>
-      button (white exclamation in a red disk) will open a dialog (see
-      <xref linkend='figure.ugr.tools.tm.workbench.testing.excluded_types' />
-      ) where all types can be selected that should not be considered in the test.
+      button (white exclamation in a red disk) will open a dialog (see <xref linkend='figure.ugr.tools.tm.workbench.testing.excluded_types' />) 
+      where all types can be selected that should not be considered in the test.
     </para>
     <para>
       <figure id="figure.ugr.tools.tm.workbench.testing.excluded_types">
@@ -237,7 +224,7 @@ under the License.
       </figure>
     </para>
     <para>
-      Running the test: A test-run can be started by clicking on the start button. Do this for the
+      A test-run can be started by clicking on the start button. Do this for the
       TextMarker example project.
       <xref linkend='figure.ugr.tools.tm.workbench.testing.annotation_test_test_run' />
       shows the results.
@@ -260,10 +247,10 @@ under the License.
         </mediaobject>
       </figure>
     </para>
-    <para> Result Overview: The testing main view displays some information, on how well the script
-      did, after every test run. It will display an overall number of true positive, false positive
+    <para>The testing main view displays some information on how well the script
+      did after every test run. It will display an overall number of true positive, false positive
      and false negative annotations of all result files, as well as an overall f1-score.
-      Furthermore a table will be displayed that contains the overall statistics of the selected
+      Furthermore, a table will be displayed that contains the overall statistics of the selected
      test file as well as statistics for every single type in the test file. The displayed
      information comprises true positives, false positives, false negatives, precision, recall and
       f1-measure. </para>
@@ -275,22 +262,22 @@ under the License.
       copied and easily imported into other applications.
     </para>
     <para>
-      Result Files: When running a test, the evaluator will create a new result xmiCAS file and will
+      When running a test, the evaluator will create a new result xmiCAS file and will
       add new true positive, false positive and false negative annotations. By clicking on a file in
       the test-file list, you can open the corresponding result xmiCAS file in the CAS
       Editor. While displaying the result xmiCAS file in the CAS Editor, the True Positive, False
       Positive and False Negative views allow easy navigation through the new tp, fp and fn
       annotations. The corresponding annotations are displayed in a hierarchic tree structure. This
-      allows an easy tracing of the results inside the testing document. Clicking on one of the
-      annotations in those views, will highlight the annotation in the CAS Editor. Opening
+      allows an easy tracing of the results within the testing document. Clicking on one of the
+      annotations in those views will highlight the annotation in the CAS Editor. Opening
       <quote>test1.result.xmi</quote>
-      in the TextMarker example project, changes the True Positive view as shown in
+      in the TextMarker example project changes the True Positive view as shown in
       <xref linkend='figure.ugr.tools.tm.workbench.testing.true_positive' />.
       Notice that the type system, which will be used by the CAS Editor to open the evaluated file, 
      can only be resolved for the tested script if the test files are located in the associated
-      folder structure, that is the folder with the name of the script. If the files are located 
+      folder structure, that is, the folder with the name of the script. If the files are located 
       in the temp folder, for example by adding the files to the list of test cases by drag and drop,
-      then other strategies to find the correct type system will be applied. For TextMarker projects, 
+      other strategies to find the correct type system will be applied. For TextMarker projects, 
       for example, this will be the type system of the last launched script in this project. 
     </para>
     <para>
@@ -317,41 +304,41 @@ under the License.
    <para> When testing a CAS file, the system compares the offsets of the annotations of a
       previously annotated gold standard file with the offsets of the annotations of the result file
       the script produced. Responsible for comparing annotations in the two CAS files are
-      evaluators. These evaluators have different methods and strategies, for comparing the
-      annotations, implemented. Also a extension point is provided that allows easy implementation
+      evaluators. These evaluators have different methods and strategies implemented for comparing the
+      annotations. Also, an extension point is provided that allows easy implementation
       of new evaluators. </para>
     <para> Exact Match Evaluator: The Exact Match Evaluator compares the offsets of the annotations
-      in the result and the golden standard file. Any difference will be marked with either an false
+      in the result and the gold standard file. Any difference will be marked with either a false
      positive or a false negative annotation. </para>
     <para> Partial Match Evaluator: The Partial Match Evaluator compares the offsets of the
      annotations in the result and gold standard file. It will allow differences in the beginning
-      or the end of an annotation. For example "corresponding" and "corresponding " will not be
+      or the end of an annotation. For example, "corresponding" and "corresponding " will not be
       annotated as an error. </para>
     <para> Core Match Evaluator: The Core Match Evaluator accepts annotations that share a core
-      expression. In this context a core expression is at least four digits long and starts with a
-      capitalized letter. For example the two annotations "L404-123-421" and "L404-321-412" would be
-      considered a true positive match, because of "L404" is considered a core expression that is
+      expression. In this context, a core expression is at least four characters long and starts with a
+      capitalized letter. For example, the two annotations "L404-123-421" and "L404-321-412" would be
+      considered a true positive match, since "L404" is a core expression that is
       contained in both annotations. </para>
     <para> Word Accuracy Evaluator: Compares the labels of all words/numbers in an annotation,
      where the label equals the type of the annotation. This has the consequence, for example,
       that each word or number that is not part of the annotation is counted as a single false
-      negative. For example we have the sentence: "Christmas is on the 24.12 every year." The script
+      negative. Consider, for example, the sentence "Christmas is on the 24.12 every year." The script
       labels "Christmas is on the 12" as a single sentence, while the test file labels the sentence
-      correctly with a single sentence annotation. While for example the Exact CAS Evaluator while
-      only assign a single False Negative annotation, Word Accuracy Evaluator will mark every word
-      or number as a single False Negative. </para>
+      correctly with a single sentence annotation. While, for example, the Exact CAS Evaluator
+      assigns only a single false negative annotation, the Word Accuracy Evaluator will mark every word
+      or number as a single false negative. </para>
     <para> Template Only Evaluator: This Evaluator compares the offsets of the annotations and the
+      features that have been created by the script. For example, the text "Alan Mathison Turing" is
+      features, that have been created by the script. For example, the text "Alan Mathison Turing" is
      marked with the author annotation and "author" contains two features: "FirstName" and
       "LastName". If the script now creates an author annotation with only one feature, the
       annotation will be marked as a false positive. </para>
     <para> Template on Word Level Evaluator: The Template On Word Evaluator compares the offsets of
-      the annotations. In addition it also compares the features and feature structures and the
-      values stored in the features. For example the annotation "author" might have features like
-      "FirstName" and "LastName" The authors name is "Alan Mathison Turing" and the script correctly
+      the annotations. In addition, it also compares the features and feature structures and the
+      values stored in the features. For example, the annotation "author" might have features like
+      "FirstName" and "LastName". The authors name is "Alan Mathison Turing" and the script correctly
      assigns the author annotation. The features assigned by the script are "FirstName : Alan",
-      "LastName : Mathison", while the correct feature values would be "FirstName Alan", "LastName
-      Turing". In this case the Template Only Evaluator will mark an annotation as a false positive,
+      "LastName : Mathison", while the correct feature values are "FirstName Alan" and "LastName
+      Turing". In this case, the Template Only Evaluator will mark an annotation as a false positive,
       since the feature values differ. </para>
   </section>
 </section>
\ No newline at end of file

Modified: uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.textruler.xml
URL: http://svn.apache.org/viewvc/uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.textruler.xml?rev=1430189&r1=1430188&r2=1430189&view=diff
==============================================================================
--- uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.textruler.xml (original)
+++ uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.textruler.xml Tue Jan  8 09:39:00 2013
@@ -27,15 +27,14 @@ under the License.
 <section id="section.ugr.tools.tm.workbench.textruler">
   <title>TextRuler</title>
   <para> Using the knowledge engineering approach, a knowledge engineer normally writes handcrafted
-    rules to create a domain dependent information extraction application, often supported by a gold
+    rules to create a domain-dependent information extraction application, often supported by a gold
     standard. When starting the engineering process for the acquisition of the extraction knowledge
-    for possibly new slot or more general for new concepts, machine learning methods are often able
+    for possibly new slots or, more generally, for new concepts, machine learning methods are often able
     to offer support in an iterative engineering process. This section gives a conceptual overview
     of the process model for the semi-automatic development of rule-based information extraction
     applications.
   </para>
-  <para> First, a suitable set of documents that contain the text fragments with interesting
-    patterns needs to be selected and annotated with the target concepts. Then, the knowledge
+  <para> First, a suitable set of documents that contain the text fragments with interesting patterns needs to be selected and annotated with the target concepts. Then, the knowledge
     engineer chooses and configures the methods for automatic rule acquisition to the best of his
     knowledge for the learning task: Lambda expressions based on tokens and linguistic features, for
     example, differ in their application domain from wrappers that process generated HTML pages.
@@ -43,7 +42,7 @@ under the License.
   <para> Furthermore, parameters like the window size defining relevant features need to be set to
     an appropriate level. Before the annotated training documents form the input of the learning
     task, they are enriched with features generated by the partial rule set of the developed
-    application. The result of the methods, that is the learned rules, are proposed to the knowledge
+    application. The results of the methods, that is, the learned rules, are proposed to the knowledge
     engineer for the extraction of the target concept.
   </para>
   <para> The knowledge engineer has different options to proceed: If the quality, amount or
@@ -99,18 +98,14 @@ under the License.
         </listitem>
         <listitem>
           <para>
-            Document: The type of the document may be
-            <quote>free</quote>
-            like in newspapers,
-            <quote>semi</quote>
-            or
-            <quote>struct</quote>
-            like in HTML pages.
+            Document: The type of the document may be <quote>free</quote>
+            like in newspapers, <quote>semi</quote>
+            or <quote>struct</quote> like in HTML pages.
    </para>
         </listitem>
         <listitem>
           <para> Slots: The slots refer to a single annotation that represents the goal of the
-            learning task. Some rule are able to create several annotation at once in the same
+            learning task. Some rules are able to create several annotations at once in the same
             context (multi-slot). However, only single slots are supported by the current
             implementations.</para>
         </listitem>
@@ -202,54 +197,104 @@ under the License.
      -->
     <section id="section.ugr.tools.tm.workbench.textruler.lp2">
       <title>LP2</title>
-      <para>LP2 This method operates on all three kinds of documents. It learns separate rules for
-        the beginning and the end of a single slot. So called tagging rules insert boundary SGML
-        tags and additionally induced correction rules shift misplaced tags to their correct
+      <para>This method operates on all three kinds of documents. It learns separate rules for
+        the beginning and the end of a single slot. Tagging rules insert boundary SGML
+        tags and, additionally, induced correction rules shift misplaced tags to their correct
         positions in order to improve precision. The learning strategy is a bottom-up covering
         algorithm. It starts by creating a specific seed instance with a window of w tokens to the
         left and right of the target boundary and searches for the best generalization. Other
         linguistic NLP-features can be used in order to generalize over the flat word sequence.
       </para>
-      <para> Parameters Context Window Size (to the left and right): Best Rules List Size: Minimum
-        Covered Positives per Rule: Maximum Error Threshold: Contextual Rules List Size:   </para>
+      <para>
+        Parameters: 
+      </para>
+      <itemizedlist>
+        <listitem>
+          <para>Context Window Size (to the left and right)</para>
+        </listitem>
+        <listitem>
+          <para>Best Rules List Size</para>
+        </listitem>
+        <listitem>
+          <para>Minimum Covered Positives per Rule</para>
+        </listitem>
+        <listitem>
+          <para>Maximum Error Threshold</para>
+        </listitem>
+        <listitem>
+          <para>Contextual Rules List Size</para>
+        </listitem>
+      </itemizedlist>
     </section>
     <section id="section.ugr.tools.tm.workbench.textruler.rapier">
       <title>RAPIER</title>
       <para>RAPIER induces single slot extraction rules for semi-structured documents. The rules
-        consist of three patterns: a pre-filler, a filler and a post-filler pattern. Each can hold
+        consist of three patterns: a pre-filler, a filler and a post-filler pattern. Each pattern can hold
        several constraints on tokens and their corresponding POS tag and semantic information. The
-        algorithm uses a bottom-up compression strategy, starting with a most specific seed rule for
+        algorithm uses a bottom-up compression strategy starting with a most specific seed rule for
         each training instance. This initial rule base is compressed by randomly selecting rule
        pairs and searching for the best generalization. Considering two rules, the least general
        generalization (LGG) of the slot fillers is created and specialized by adding rule items to
         the pre- and post-filler until the new rules operate well on the training set. The best of
         the k rules (k-beam search) is added to the rule base and all empirically subsumed rules are
         removed.   </para>
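+      <para>
+        As a purely illustrative sketch (not the exact syntax used by the
+        implementation), a learned RAPIER-style rule could constrain its three
+        patterns as follows:
+        <programlisting>pre-filler:  word = located
+filler:      list of max. 2 tokens, tag = nnp
+post-filler: semantic class = state</programlisting>
+        Such a rule would fill the slot with up to two proper nouns that follow
+        the word "located" and are followed by a state name.
+      </para>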
-      <para> Parameters Maximum Compression Fail Count: Internal Rules List Size: Rule Pairs for
-        Generalizing: Maximum 'No improvement' Count: Maximum Noise Threshold: Minimum Covered
-        Positives Per Rule: PosTag Root Type: Use All 3 GenSets at Specialization:   </para>
+      <para>
+        Parameters: 
+      </para>
+      <itemizedlist>
+        <listitem>
+          <para>Maximum Compression Fail Count</para>
+        </listitem>
+        <listitem>
+          <para>Internal Rules List Size</para>
+        </listitem>
+        <listitem>
+          <para>Rule Pairs for Generalizing</para>
+        </listitem>
+        <listitem>
+          <para>Maximum 'No improvement' Count</para>
+        </listitem>
+        <listitem>
+          <para>Maximum Noise Threshold</para>
+        </listitem>
+        <listitem>
+          <para>Minimum Covered Positives Per Rule</para>
+        </listitem>
+        <listitem>
+          <para>PosTag Root Type</para>
+        </listitem>
+        <listitem>
+          <para>Use All 3 GenSets at Specialization</para>
+        </listitem>
+      </itemizedlist>
     </section>
     <section id="section.ugr.tools.tm.workbench.textruler.whisk">
       <title>WHISK</title>
       <para> WHISK is a multi-slot method that operates on all three kinds of documents and learns
         single- or multi-slot rules looking similar to regular expressions. The top-down covering
         algorithm begins with the most general rule and specializes it by adding single rule terms
-        until the rule makes no errors on the training set. Domain specific classes or linguistic
+        until the rule no longer makes errors on the training set. Domain-specific classes or linguistic
         information obtained by a syntactic analyzer can be used as additional features. The exact
-        definition of a rule term (e.g. a token) and of a problem instance (e.g. a whole document or
+        definition of a rule term (e.g., a token) and of a problem instance (e.g., a whole document or
         a single sentence) depends on the operating domain and document type.   </para>
-      <para> Parameters Window Size: Maximum Error Threshold: PosTag Root Type.   </para>
+      <para>
+        Parameters: 
+      </para>
+      <itemizedlist>
+        <listitem>
+          <para>Window Size</para>
+        </listitem>
+        <listitem>
+          <para>Maximum Error Threshold</para>
+        </listitem>
+        <listitem>
+          <para>PosTag Root Type</para>
+        </listitem>
+      </itemizedlist>
     </section>
     <section id="section.ugr.tools.tm.workbench.textruler.wien">
       <title>WIEN </title>
       <para> WIEN is the only method listed here that operates on highly structured texts only. It
-        induces so called wrappers that anchor the slots by their structured context around them.
+        induces wrappers that anchor the slots by their structured context.
         The HLRT (head left right tail) wrapper class for example can determine and extract several
         multi-slot-templates by first separating the important information block from unimportant
-        head and tail portions and then extracting multiple data rows from table like data
+        head and tail portions and then extracting multiple data rows from table-like data
         structures from the remaining document. Inducing a wrapper is done by solving a CSP for all
         possible pattern combinations from the training data.   </para>
-      <para> Parameters No parameters are available.   </para>
     </section>
   </section>
 </section>
\ No newline at end of file

Modified: uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.tm_perspective.xml
URL: http://svn.apache.org/viewvc/uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.tm_perspective.xml?rev=1430189&r1=1430188&r2=1430189&view=diff
==============================================================================
--- uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.tm_perspective.xml (original)
+++ uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.tm_perspective.xml Tue Jan  8 09:39:00 2013
@@ -22,8 +22,8 @@
     The TextMarker perspective is the main view to manage TextMarker
     projects. There are several views associated with the TextMarker
     perspective: Annotation Test, Annotation Browser, Selection,
-    TextRuler, TextMarker Query. Since Annotation Test, TextRuler and
-    TextMarker Query have a stand-alone functionality they are explained
+    TextRuler and TextMarker Query. Since Annotation Test, TextRuler and
+    TextMarker Query have stand-alone functionality, they are explained
     in separate sections.
   </para>
 
@@ -33,9 +33,8 @@
     workbench.
     Import the TextMarker example project and open the main
     TextMarker script file 'Main.tm'. Now press the 'Run' button (green
-    arrow)and wait for the end of execution. Open the resulting xmiCAS
-    file
-    'Test1.txt.xmi', which you can find in the output folder.
+    arrow) and wait for the end of execution. Open the resulting xmiCAS
+    file 'Test1.txt.xmi', which you can find in the output folder.
   </para>
 
   <section
@@ -49,10 +48,7 @@
     </para>
     <para>
       The result of the execution of the TextMarker example project is
-      shown in
-      <xref
-        linkend='figure.ugr.tools.tm.workbench.tm_perspective.annotation_browser' />
-      .
+      shown in <xref linkend='figure.ugr.tools.tm.workbench.tm_perspective.annotation_browser' />.
     </para>
     <para>
       <figure
@@ -96,12 +92,9 @@
       document or select a certain text passage.
     </para>
     <para>
-      E.g., if you select the text passage
-      <literal>2008</literal>
-      , the Selection view will be generated as shown in
-      <xref
-        linkend='figure.ugr.tools.tm.workbench.tm_perspective.annotation_browser' />
-      .
+      If you select the text passage
+      <literal>2008</literal>, the Selection view will be generated as shown in
+      <xref linkend='figure.ugr.tools.tm.workbench.tm_perspective.annotation_browser' />.
     </para>
     <para>
       <figure id="figure.ugr.tools.tm.workbench.tm_perspective.selection">

Modified: uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.xml
URL: http://svn.apache.org/viewvc/uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.xml?rev=1430189&r1=1430188&r2=1430189&view=diff
==============================================================================
--- uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.xml (original)
+++ uima/sandbox/TextMarker/trunk/uima-docbook-textmarker/src/docbook/tools.textmarker.workbench.xml Tue Jan  8 09:39:00 2013
@@ -27,12 +27,11 @@ under the License.
 <chapter id="ugr.tools.tm.workbench">
   <title>TextMarker Workbench</title>
   <para>
-  The TextMarker workbench, which is made available as an Eclipse-
-  plugin, offers a powerful environment for creating and working on TextMarker projects. It provides two main
+  The TextMarker workbench, which is made available as an Eclipse plugin, offers a powerful environment for creating and working on TextMarker projects. It provides two main
   perspectives and several views to develop, run, debug, test and evaluate TextMarker
-  rules in a comfortable way, supporting many of the known Eclipse features e.g. auto-completion.
-  Moreover it makes the creation of dictionaries like tree word lists easy and supports machine
-  learning methods which can be used within a knowledge engineering process. The following chapter
+  rules in a comfortable way, supporting many of the known Eclipse features, e.g., auto-completion.
+  Moreover, it makes the creation of dictionaries like tree word lists easy and supports machine
+  learning methods, which can be used within a knowledge engineering process. The following chapter
   starts with the installation of the workbench, followed by a description of all its features.
   </para>
 
@@ -55,28 +54,34 @@ under the License.
   <xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="tools.textmarker.workbench.create_dictionaries.xml" />
   
   <section id="ugr.tools.tm.workbench.apply">
-    <title>Apply a TextMarker script on a folder</title>
+    <title>Apply a TextMarker script to a folder</title>
     <para>
-      The TextMarker Workbench makes it possible to apply a TextMarker script on any folder of the workspace. 
+      The TextMarker Workbench makes it possible to apply a TextMarker script to any folder of the workspace. 
       Select a folder in the script explorer, right-click to open the context menu and select the menu entry TextMarker.
-      There are three option to apply a TextMarker script on the files of the selected folder, 
+      There are three options to apply a TextMarker script to the files of the selected folder, 
       cf. <xref linkend='figure.ugr.tools.tm.workbench.apply' />.
     </para>
     <para>
       <orderedlist numeration="arabic">
         <listitem>
+        <para>
           <emphasis role="bold">Quick TextMarker</emphasis> applies the TextMarker script that is currently opened and focused
-          in the TextMarker editor on all suitable files in the selected folder. File of the type <quote>xmi</quote> will be adapted 
+          in the TextMarker editor to all suitable files in the selected folder. Files of the type <quote>xmi</quote> will be adapted 
           and a new xmi-file will be created for other files like txt-files.
+        </para>
         </listitem>
         <listitem>
+        <para>
           <emphasis role="bold">Quick TextMarker (remove basics)</emphasis> is very similar to the previous menu entry,
            but removes the annotations of the type <quote>TextMarkerBasic</quote> after processing a CAS.
+        </para>
         </listitem>
         <listitem>
+        <para>
           <emphasis role="bold">Quick TextMarker (no xmi)</emphasis> applies the TextMarker script, but does not change
-           or create an xmi-file. This menu entry can, for example, be used in combination with an imported XMIWriter Analysis Engine, which 
+           or create an xmi-file. This menu entry can, for example, be used in combination with an imported XMIWriter Analysis Engine, which 
            stores the result of the script in a different folder depending on the execution of the rules.
+        </para>
         </listitem>
       </orderedlist>
     </para>