Posted to commits@uima.apache.org by cw...@apache.org on 2013/01/02 20:12:11 UTC

svn commit: r1427917 [4/8] - in /uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook: ./ images/ images/ducc-overview/ images/job-manager/ part-admin/ part-admin/admin/ part-introduction/ part-user/ part-user/cli/ unused/

Added: uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-admin/chapter-install.xml
URL: http://svn.apache.org/viewvc/uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-admin/chapter-install.xml?rev=1427917&view=auto
==============================================================================
--- uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-admin/chapter-install.xml (added)
+++ uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-admin/chapter-install.xml Wed Jan  2 19:12:10 2013
@@ -0,0 +1,1249 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+  
+       http://www.apache.org/licenses/LICENSE-2.0
+  
+  Unless required by applicable law or agreed to in writing,
+  software distributed under the License is distributed on an
+  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  KIND, either express or implied.  See the License for the
+  specific language governing permissions and limitations
+  under the License.
+-->
+<chapter id="ducc.install">
+<title>Installation, Configuration, and Verification</title>
+
+    <para>
+      <emphasis>The source for this chapter is ducc_ducbook/documents/chapter-install.xml</emphasis>
+    </para>
+
+  <para>
+      This chapter describes how to install, configure, and verify DUCC.
+  </para>
+
+  <para>
+    In this document we refer to the machines in a DUCC cluster as the "worker" machines (or
+    nodes) and the "administrative" machines (or nodes).  Applications are distributed to the
+    "worker" nodes.  The DUCC processes which manage resources, process deployment, web serving,
+    etc., are run on the "administrative" nodes.
+  </para>
+
+  <para>
+    In secure environments it may be desirable to run both the "worker" and "administrative"
+    processes behind a firewall, inaccessible to the public at large.  In this case it is possible
+    to configure the DUCC web-server to run on a gateway machine.  We thus may refer to the node
+    with the DUCC web-server as the "web-server" node.
+  </para>
+
+  <section>
+    <title>General Considerations</title>
+      <para>
+        DUCC should be installed on systems <emphasis>dedicated</emphasis> to running DUCC and
+        applications managed by DUCC.  DUCC is designed to manage applications that are highly
+        memory-intensive.  The DUCC Resource Manager assumes that every processor in the cluster is
+        dedicated to a single instance of the DUCC Agent and its spawned children.  Sharing
+        processors with DUCC can produce prohibitively high levels of paging and swapping,
+        preventing applications from making progress and, in the worst cases, locking out the
+        processors.
+      </para>
+
+  </section>
+  <section>
+    <title>Hardware Requirements</title>
+       <para>
+         The following are minimal hardware requirements for running DUCC.
+       </para>
+       <itemizedlist>
+
+         <listitem>
+           <para>
+             Three or more dedicated Intel-based or IBM Power-7 systems.  One system is dedicated to
+             running the DUCC administrative processes and cannot run applications.  One system is
+             dedicated to running the JD process.  The other systems are used as worker nodes.
+           </para>
+         </listitem>
+
+         <listitem>
+           <para>
+           Sixteen GB (16GB) RAM minimum for each worker machine and the JD machine.  DUCC can be
+           run on smaller systems but most DeepQA applications will not run on such small
+           processors.
+           </para>
+         </listitem>
+
+         <listitem>
+           <para>
+             Eight GB (8GB) RAM for the machine running the DUCC administrative processes. Under higher loads
+             it may be desirable to have 16GB or more.
+           </para>
+         </listitem>
+       </itemizedlist>
+  </section>
+
+  <section>
+    <title>Software Requirements</title>
+    
+    <para>
+    The following are minimal software requirements for DUCC.  This software must
+    be installed on all DUCC nodes.
+    </para>
+    
+    <itemizedlist>
+
+      <listitem>
+        <para>
+        A modern Linux system.  DUCC has been developed and tested on SUSE and Debian
+        distributions.
+        </para>
+      </listitem>
+
+      <listitem>
+        <para>
+        IBM or Sun JRE 1.6. 
+        </para>
+      </listitem>
+
+      <listitem>
+        <para>
+        Apache ActiveMQ 5.5.  This is supplied to IBM internal DUCC customers as part of the
+        DUCC distribution.
+        </para>
+      </listitem>
+
+      <listitem>
+        <para>
+        Python 2.x where "x" is at least 4.  The oldest version of Python supported by DUCC is
+        2.4.  DUCC has not been tested under any version of Python 3.  Most modern Linux
+        distributions supply an acceptable version by default; however, it may be necessary for
+        the System Administrator to install Python from the Linux distribution media, as it is
+        not included in some default configurations.
+        </para>
+      </listitem>
+      
+      <listitem>
+        <para>
+          User and group "ducc" must be established on all machines.  For security
+          reasons, the group "ducc" should not be shared with any other users.
+        </para>
+        <para>
+          Currently user "ducc" is hard-coded into the security code of DUCC and
+          cannot be changed. 
+        </para>                
+      </listitem>
+      
+      <listitem>
+        <para>
+        All machines in the DUCC cluster must be connected via a shared file system
+        and a shared user space.  DUCC assumes all user home directories as well as the
+        "ducc" home directory are cross mounted on all machines.
+        </para>
+      </listitem>
+      
+      <listitem>
+        <para>
+          Password-less ssh must be installed on the JD and worker machines for user id "ducc"
+          (a sample key setup is sketched following this list).
+        </para>
+      </listitem>
+      
+      <listitem>
+        <para>
+          At least one user id other than "ducc" that is available to all nodes, to submit
+          jobs from.
+        </para>
+        
+        <note>
+          <para>
+            User "root" cannot be used to submit jobs to DUCC.  User "ducc" should not
+            be used to submit jobs.
+          </para>
+        </note>
+      </listitem>
+
+      </itemizedlist>
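+    <para>
+      The following is a minimal sketch of enabling password-less ssh for user "ducc", assuming
+      OpenSSH and the cross-mounted home directory described above; adjust it to local security
+      practice:
+      <screen>
+# as user ducc, on any node (the home directory is shared by all nodes)
+ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
+cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
+chmod 700 ~/.ssh
+chmod 600 ~/.ssh/authorized_keys
+      </screen>
+    </para>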
+   </section>
+
+  <section>
+    <title>Quick Installation Checklist</title>
+    
+      <note>
+        <para>
+          Throughout this document the location where DUCC is installed is referred to as
+          <emphasis>ducc_runtime</emphasis>.  By default, the installation procedures
+          install DUCC in the home directory of user <emphasis>ducc</emphasis> as
+          <emphasis>~ducc/ducc_runtime</emphasis> where <emphasis>~ducc</emphasis> refers
+          to ducc's home directory.
+        </para>
+      </note>
+
+      <para>
+        This is an overview of the installation and verification procedures.  Details
+        for this checklist follow in the next section.
+      </para>
+
+      <orderedlist>
+        <listitem>
+          <para>
+          Configure user ducc  and group ducc on all systems.
+          </para>
+        </listitem>
+
+        <listitem>
+          <para>
+          Expand the distribution tarfile.
+          </para>
+        </listitem>
+
+        <listitem>
+          <para>
+          Run the installation script <emphasis>ducc_install</emphasis>.
+          </para>
+        </listitem>
+
+        <listitem>
+          <para>
+          Install the utility <emphasis>ducc_ling</emphasis> on local disk space and set permissions.
+          </para>
+        </listitem>
+
+        <listitem>
+          <para>
+            Update <emphasis>ducc_runtime/resources/ducc.properties</emphasis>:
+            <itemizedlist>
+              <listitem>
+                <para>
+                  Specify location of installed <emphasis>ducc_ling</emphasis>.
+                </para>
+              </listitem>
+              <listitem>
+                <para>
+                  Specify the correct ActiveMQ broker address.
+                </para>
+              </listitem>
+              <listitem>
+                <para>
+                  Specify location of the installed JRE.
+                </para>
+              </listitem>
+              <listitem>
+                <para>
+                  Configure the HTTP hostname and optionally, the HTTP port for the Orchestrator.
+                </para>
+              </listitem>
+              <listitem>
+                <para>
+                  Configure the HTTP hostname and optionally, the HTTP port for the Service Manager.
+                </para>
+              </listitem>
+              <listitem>
+                <para>
+                  Optionally specify the node for the DUCC webserver.
+                </para>
+              </listitem>
+            </itemizedlist>
+          </para>
+        </listitem>
+
+        <listitem>
+          <para>
+          Create node configuration ducc.nodes in ducc_runtime/resources.
+          </para>
+        </listitem>
+
+        <listitem>
+          <para>
+            Optionally update the file ducc_runtime/resources/reserved.nodes.
+          </para>
+        </listitem>
+
+        <listitem>
+          <para>
+          Optionally create or update the file ducc_runtime/resources/ducc.administrators.
+          </para>
+        </listitem>
+        
+        <listitem>
+          <para>
+          Run the verify_ducc utility, correcting problems and rerunning it, until no errors are reported.
+          </para>
+        </listitem>
+
+        <listitem>
+          <para>
+          Start the ActiveMQ broker and ensure it is running.
+          </para>
+        </listitem>
+
+
+        <listitem>
+          <para>
+          Start DUCC.
+          </para>
+        </listitem>
+
+        <listitem>
+          <para>
+          From a web browser, go to the URL http://ducchost:42133 and ensure the machines and
+          DUCC daemons are present and running, where <emphasis>ducchost</emphasis> is the node
+          where the DUCC web-server is started.
+          </para>
+        </listitem>
+
+        <listitem>
+          <para>
+          Run the verification procedures.
+          </para>
+        </listitem>
+
+
+      </orderedlist>
+  </section>
+
+  <section id="ducc.install.detail">
+    <title>Detailed Installation Procedures</title>
+
+    <para>
+      This section provides detailed instructions for installing DUCC.
+    </para>
+    <section id="ducc.install.detail.basic">
+      <title>Basic System Initialization</title>
+
+      <para>
+        Create a user "ducc" and a group "ducc".  Currently the user and group must both
+        be "ducc".  This ID is hard-coded into the <emphasis>ducc_ling</emphasis> utility
+        for security reasons.
+      </para>
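+      <para>
+        For example, on many Linux distributions the user and group can be created with commands
+        similar to the following (run as root; the exact commands and options vary by
+        distribution):
+        <screen>
+groupadd ducc
+useradd -g ducc -m ducc
+        </screen>
+      </para>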
+
+      <para>
+        Ensure Python 2.x is installed as the "default" Python.  DUCC has only been
+        tested on Python versions 2.4 and 2.6.  It may not work on Python 3.
+      </para>
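+      <para>
+        To check which Python version a node will use by default, run the following; any 2.x
+        level of at least 2.4 is acceptable:
+        <screen>
+python -V
+        </screen>
+      </para>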
+
+      <para>
+         Ensure that the IBM or Sun JRE 1.6 is installed on every node.  The full JDK is only needed
+         on nodes where applications are being developed.  The location of this JRE must be coded
+         into <emphasis>ducc.properties</emphasis> as described below and is used to run the DUCC
+         processes.  It is possible for applications to use different JREs via the 
+         job specifications.
+      </para>
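+      <para>
+        A quick check is to run, on each node, the java command that will later be configured in
+        <emphasis>ducc.properties</emphasis> and confirm it reports a 1.6 level; the path shown
+        here is only an illustration:
+        <screen>
+/share/bin/jdk1.6/bin/java -version
+        </screen>
+      </para>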
+    </section>
+
+    <section id="ducc.install.detail.tarball">
+      <title>Install DUCC Distribution</title>
+            
+      <para>
+        Log in as user <emphasis>ducc</emphasis> and expand the DUCC distribution file:
+        <programlisting>
+          tar -zxf [ducc-distribution-file].tgz
+        </programlisting>
+      </para>
+
+      <para>
+        This creates a directory <emphasis>ducc-distribution-0.6.4-beta</emphasis> with the installation materials.
+      </para>
+
+      <para>
+        Now execute the installation script:
+        <programlisting>
+          cd ducc-distribution-0.6.4-beta
+          ./ducc_install
+        </programlisting>
+      </para>
+
+      <para>
+        You will be prompted for the location of the ducc_runtime and ActiveMQ
+        installations.  First-time users should take the defaults and simply hit
+        <emphasis>enter</emphasis> at each prompt.
+      </para>
+
+      <para>
+        This will create and populate two directories:
+        <programlisting>
+          ~ducc/activemq - the ActiveMQ distribution
+          ~ducc/ducc_runtime - the DUCC runtime
+        </programlisting>
+      </para>
+
+      <para>
+        Installation also ensures all necessary programs are made executable and it installs the
+        ActiveMQ configuration that is tested and customized for DUCC.
+      </para>
+
+      <note>
+        <para>
+          It is possible to use an existing ActiveMQ broker instead of the one supplied with DUCC
+          as long as it is fully compatible with ActiveMQ 5.5.  If this is desired, enter NONE
+          at the prompt for the ActiveMQ location.  Be aware, however, that careful tuning of the
+          ActiveMQ broker may be necessary to support both the DUCC load and the existing load.
+        </para>
+      </note>
+
+    </section>
+
+    <section id="ducc.install.detail.post">
+      <title>Perform Post-Installation Tasks</title>
+      <para>
+        This section describes how to configure DUCC and secure the <emphasis>ducc_ling</emphasis> utility.
+      </para>
+
+      <section id="ducc.install.detail.post.duccling">
+        <title><emphasis>ducc_ling</emphasis></title>
+        <para>
+          <emphasis>Ducc_ling</emphasis> is a setuid-root program that DUCC uses to spawn jobs
+          under the identity of the submitting user.  To do this, <emphasis>ducc_ling</emphasis>
+          must briefly acquire <emphasis>root</emphasis> privileges in order to switch to the
+          user's identity.  <emphasis>ducc_ling</emphasis> itself takes care not to open any
+          security holes while doing this but it must be correctly installed to prevent malicious
+          or errant processes from compromising system security.
+        </para>
+
+        <para>
+          There are three points to make about <emphasis>ducc_ling</emphasis>, described in detail below:
+          <orderedlist>
+            <listitem>
+              <para>
+              <emphasis>Ducc_ling</emphasis> must be carefully secured to avoid accidental breach
+              of security by setting ownership and file permissions correctly.
+              </para>
+            </listitem>
+            <listitem>
+              <para>
+              It is possible to run <emphasis>ducc_ling</emphasis> without root privileges, albeit
+              with some restrictions on DUCC function.
+              </para>
+            </listitem>
+            <listitem>
+              <para>
+              <emphasis>Ducc_ling</emphasis> may need to be rebuilt for your hardware.
+              </para>
+            </listitem>
+          </orderedlist>
+        </para>
+
+        <section id="ducc.install.detail.post.duccling.secure">
+          <title>Securing <emphasis>ducc_ling</emphasis></title>
+
+          <para>
+            To secure <emphasis>ducc_ling</emphasis>, it must be installed on local disk space (not on a
+            shared file system), on all of the DUCC nodes.  The necessary procedure is 
+            to create a directory dedicated to containing <emphasis>ducc_ling</emphasis> and set the
+            privileges on that directory so only user <emphasis>ducc</emphasis> is able to access its
+            contents.
+          </para>
+
+          <para>
+            Next, copy <emphasis>ducc_ling</emphasis> into the local, now protected, directory,
+            and set its privileges and ownership so that when it executes, it executes as user
+            <emphasis>root</emphasis>.  When invoked, <emphasis>ducc_ling</emphasis> immediately assumes
+            the identity of the job owner, sets the working directory for the process, 
+            establishes log directories for the job, and execs into
+            the specified job process.
+          </para>
+
+          <para>
+            The following steps illustrate how to do this.  Root authority is needed to perform these
+            steps.  If local procedures prohibit the use of setuid-root programs, or root authority cannot
+            be obtained, it is still possible to run DUCC; however, 
+            <orderedlist>
+              <listitem>
+                <para>
+                All jobs will then run as user 
+                <emphasis>ducc</emphasis> as it will be impossible for them to assume the submitter's identity.
+                </para>
+              </listitem>
+              <listitem>
+                <para>
+                File-system permissions must be set for all DUCC users so that user <emphasis>ducc</emphasis> is able to read
+                their applications and data during execution.
+                </para>
+              </listitem>
+            </orderedlist>
+          </para>
+
+          <para>
+            For the sake of these procedures, assume that <emphasis>ducc_ling</emphasis> is to
+            be installed on local disk in the directory:
+            <screen>
+/local/ducc/bin
+            </screen>
+            
+            <emphasis>Ducc_ling</emphasis> is supplied in the installation directory as
+            <screen>
+ducc_runtime/admin/ducc_ling
+            </screen>
+          </para>
+          
+          <para>
+            Remember that this procedure must be performed <emphasis>as root</emphasis> on <emphasis>every</emphasis>
+            node in the DUCC cluster.
+          </para>
+          
+          <orderedlist>
+            
+            <listitem>
+              <para>
+                Create the directory to contain <emphasis>ducc_ling</emphasis>:
+                <screen>
+mkdir /local
+mkdir /local/ducc
+mkdir /local/ducc/bin
+                </screen>
+              </para>
+            </listitem>
+
+            <listitem>
+              <para>
+                Ensure that <emphasis>/local/ducc/bin</emphasis> has correct permissions, allowing
+                only the <emphasis>ducc</emphasis> user to read, write, or execute its contents.
+                <screen>
+chown ducc.ducc /local/ducc/bin
+chmod 700 /local/ducc/bin                
+                </screen>
+              </para>
+            </listitem>
+
+            <listitem>
+              <para>
+                Copy <emphasis>ducc_ling</emphasis> into place:
+                <screen>
+cp ducc_runtime/admin/ducc_ling /local/ducc/bin
+                </screen>
+              </para>
+            </listitem>
+
+            <listitem>
+              <para>
+                Set ownership of <emphasis>ducc_ling</emphasis>.  It is necessary to ensure that
+                user ownership is <emphasis>root</emphasis> and that group ownership is <emphasis>ducc</emphasis>.
+                <screen>
+chown root.ducc /local/ducc/bin/ducc_ling
+                </screen>
+              </para>
+            </listitem>
+
+            <listitem>
+              <para>
+                Set permissions so that user <emphasis>root</emphasis> can read, write, and execute 
+                <emphasis>ducc_ling</emphasis>, group <emphasis>ducc</emphasis> can read and execute,
+                and that when <emphasis>ducc_ling</emphasis> is executed, it is run as the user who
+                owns it (the "setuid" bit).
+                <screen>
+chmod 4750 /local/ducc/bin/ducc_ling
+                </screen>
+              </para>
+            </listitem>
+
+          </orderedlist>
+
+          <para>
+            When done correctly, only user ducc will have the ability to access ducc_ling.
+            ducc_ling has internal checks to prevent it from operating when invoked by root and
+            to prevent it from executing jobs as user root.  Assuming
+            ducc_ling is installed in /local/ducc/bin, the ducc_ling permissions should be as follows
+            (the date and file-sizes will not match this example):
+            
+            <screen>
+ducc@f7n1:~/ducc-0.1-beta> ls -l /local/ducc/bin                                                                  
+-rwsr-x--- 1 root ducc 22311 2011-10-08 11:42 ducc_ling
+            </screen>
+            
+            NOTE the <emphasis role="bold">-rwsr-x--- </emphasis>permissions on ducc_ling.  If this is not what you
+            see then retry the procedure. 
+          </para> 
+
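+          <para>
+            Once the secured copy is in place, point DUCC at it by setting the ducc_ling path
+            in <emphasis>ducc.properties</emphasis> (described below) to the local copy, for
+            example:
+            <programlisting>
+ducc.agent.launcher.ducc_spawn_path=/local/ducc/bin/ducc_ling
+            </programlisting>
+          </para>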
+        </section> <!-- ducc.install.detail.post.duccling.secure -->
+
+        <section id="ducc.install.detail.post.duccling.root">
+          <title>Running <emphasis>ducc_ling</emphasis> Without Root Authority</title>
+
+          <para>
+            It is possible to run DUCC without giving <emphasis>ducc_ling</emphasis> root authority
+            if there are security concerns or simply if you wish to experiment with DUCC on a
+            machine where you cannot get (or do not want) root privileges.  If you do this, all jobs
+            will execute under the identity of the user that starts DUCC.  For example, if you
+            install DUCC and start it as user "bob", then all jobs run as user "bob".  Most of DUCC
+            is developed and tested in this mode and it is expected to work correctly.
+          </para>
+
+          <para>
+            To run <emphasis>ducc_ling</emphasis> in this mode, simply use the default configuration as
+            distributed in ducc.properties, and the DUCC agents will use the non-privileged version
+            instead.  <emphasis>Ducc_ling</emphasis> will execute from the directory 
+            <screen>
+ducc_runtime/admin
+            </screen>
+
+            This is very convenient for running small test systems or for simply evaluating
+            DUCC before performing a more extensive installation.
+          </para>
+
+          <para>
+            The default configuration line for ducc_ling to run in this mode is as follows:
+            <screen>
+ducc.agent.launcher.ducc_spawn_path=${DUCC_HOME}/admin/ducc_ling
+            </screen>
+          </para>
+
+          <para>
+            Notes:
+            <itemizedlist>
+              <listitem>
+                <para>
+                If you run in this mode, you do NOT need to install ducc_ling in local disk space; the
+                <emphasis>ducc_ling</emphasis> that is packaged in 
+                <emphasis>ducc_runtime/admin</emphasis> will work.
+                </para>
+              </listitem>
+
+              <listitem>
+                <para>
+                If <emphasis>ducc_ling</emphasis> is compiled for an architecture other than the one you are
+                installing on, you will need to rebuild it for your architecture as described below.
+                </para>
+              </listitem>
+            </itemizedlist>
+          </para>
+        </section> <!-- ducc.install.detail.post.duccling.root -->
+
+        <section id="ducc.install.detail.post.duccling.arch">
+          <title>Running on Architectures Other Than That of the Prebuilt Distribution</title>
+          <para>
+            DUCC is almost a pure-Java application.  However, a small bit of C code called
+            ducc_ling is required to allow DUCC to assume a different user's identity.  Your
+            tarball will come with ducc_ling compiled for some specific architecture.  To build
+            ducc_ling for a different architecture (e.g. Intel, Power, or other), all that is
+            needed is a normal gcc installation.
+          </para>
+
+          <para>
+            To rebuild ducc_ling:
+            <itemizedlist>
+              <listitem>
+                <para>
+                Change to the directory containing the ducc_ling source:
+                <screen>
+cd ducc_runtime/duccling/src
+                </screen>
+                </para>
+              </listitem>
+              <listitem>
+                <para>
+                Build ducc_ling:
+                <screen>
+make clean all
+                </screen>
+                </para>
+              </listitem>
+            </itemizedlist>
+          </para>
+
+          <para>
+            When done you have an architecture-specific ducc_ling binary that must be
+            installed as described above.
+          </para>
+        </section>
+      </section> <!-- ducc.install.detail.post.duccling -->
+    </section> <!-- ducc.install.detail.post -->
+
+    <section id="ducc.install.detail.props">
+      <title>Update ducc.properties</title>
+
+      <para>
+        The file <emphasis>ducc.properties</emphasis> is the main configuration file for DUCC.  Some properties
+        must not be changed or DUCC will not function; these properties control internal DUCC operations.  Other
+        properties are tuning parameters that should not be adjusted until experience with the local 
+        installation is gained and the tuning requirements are known.  Some properties define the local
+        environment and must be set when DUCC is first installed.
+      </para>
+
+      <para>
+        The properties that must be updated as part of installation are:
+
+        <programlisting>
+          ducc.broker.hostname
+          ducc.broker.port
+          ducc.jvm
+          ducc.ws.node
+          ducc.ws.address
+          ducc.sm.http.port
+          ducc.sm.http.node
+          ducc.orchestrator.http.port
+          ducc.orchestrator.node
+          ducc.agent.launcher.ducc_spawn_path
+        </programlisting>
+
+        The full set of properties is described in <xref linkend="ducc.properties" />.
+      </para>
+
+      <para>
+        Edit ducc_runtime/resources/ducc.properties and adjust the required properties as follows:
+
+        <variablelist>
+
+          <varlistentry>
+            <term>ducc.broker.hostname</term>
+            <listitem>
+              <para>
+              Set this to the host where your ActiveMQ broker is running.  This MUST be set to the
+              host-name, not "localhost", even if your broker is configured to listen on "localhost" or
+              "0.0.0.0".  There is no default for this parameter.
+              </para>
+            </listitem>
+          </varlistentry>
+
+          <varlistentry>
+            <term>ducc.broker.port</term>
+            <listitem>
+              <para>
+              Set this to the port configured for ActiveMQ.  The default is 61616.
+              </para>
+            </listitem>
+          </varlistentry>
+
+          <varlistentry>
+            <term>ducc.jvm</term>
+            <listitem>
+              <para>
+                Set this to the full path to the "java" command on your systems.  If this is not
+                set DUCC will attempt to use the "java" command in its path and will fail if this
+                is not the correct version of java, or if it is not in the default path.
+              </para>
+              <para>
+                Note that Java must
+                be installed on all nodes in the same location.  For example:
+              </para>
+              <para>
+                ducc.jvm = /share/bin/jdk1.6/bin/java
+              </para>
+            </listitem>
+          </varlistentry>
+
+          <varlistentry>
+            <term>ducc.ws.node</term>
+            <listitem>
+              <para>
+              Set this to the node name where you want your web-server to run.  If not set, the
+              web-server starts on the same node as the rest of the DUCC management processes.
+              </para>
+            </listitem>
+          </varlistentry>
+
+          <varlistentry>
+            <term>ducc.ws.address</term>
+            <listitem>
+              <para>
+              In multi-homed systems (more than one network card), the DUCC web-server will not know
+              which address it should listen on for requests.  Set this address to the desired
+              web-server address.  If the system is not multi-homed this property need not be set.
+              </para>
+            </listitem>
+          </varlistentry>
+
+          <varlistentry>
+            <term>ducc.sm.http.port</term>
+            <listitem>
+              <para>
+                This is the HTTP port for SM requests.  The default is 19989.  If this is acceptable,
+                it may be left as is; otherwise, select a port and configure it here.
+              </para>
+            </listitem>
+          </varlistentry>
+
+          <varlistentry>
+            <term>ducc.sm.http.node</term>
+            <listitem>
+              <para>
+                This MUST be configured to the node where the SM is running.  The default is a placeholder,
+                "localhost", which will not generally work.
+              </para>
+            </listitem>
+          </varlistentry>
+
+          <varlistentry>
+            <term>ducc.orchestrator.http.port</term>
+            <listitem>
+              <para>
+                This is the HTTP port for most commands (ducc_submit, ducc_reserve, etc.).  The
+                default is 19988.  If this is acceptable, it may be left as is; otherwise, select a
+                port and configure it here.
+              </para>
+            </listitem>
+          </varlistentry>
+
+          <varlistentry>
+            <term>ducc.orchestrator.node</term>
+            <listitem>
+              <para>
+                This MUST be configured to the node where the Orchestrator is running.  The default is a placeholder,
+                "localhost", which will not generally work.
+              </para>
+            </listitem>
+          </varlistentry>
+
+          <varlistentry>
+            <term>ducc.agent.launcher.ducc_spawn_path</term>
+            <listitem>
+              <para>
+              Set this to the full path where <emphasis>ducc_ling</emphasis> is installed.
+              </para>
+            </listitem>
+          </varlistentry>
+
+        </variablelist>
+
+      </para>
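+      <para>
+        As an illustration only, the edited properties might look like the following.  The host
+        names shown are placeholders and must be replaced with the actual nodes in your cluster;
+        the ports and paths are the defaults and examples used above.
+        <programlisting>
+ducc.broker.hostname = broker-host.example.com
+ducc.broker.port = 61616
+ducc.jvm = /share/bin/jdk1.6/bin/java
+ducc.ws.node = ws-host.example.com
+# ducc.ws.address is needed only on multi-homed web-server nodes
+ducc.sm.http.port = 19989
+ducc.sm.http.node = management-host.example.com
+ducc.orchestrator.http.port = 19988
+ducc.orchestrator.node = management-host.example.com
+ducc.agent.launcher.ducc_spawn_path = /local/ducc/bin/ducc_ling
+        </programlisting>
+      </para>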
+    </section> <!-- ducc.install.detail.props -->
+
+    <section id="ducc.install.detail.nodelist">
+      <title>Create the DUCC Node list</title>
+      <para>
+        Update the file "ducc.nodes" in the directory "ducc_runtime/resources/".  For initial
+        installation this should be a simple flat file with the name of each host that
+        participates in the DUCC cluster, one per line.  The section on <xref linkend="ducc.nodes" />
+        provides full details on node configuration. Note that line comments are allowed and
+        are denoted with <emphasis role="bold">#</emphasis>. For example:
+        <screen>
+# Frame 6 nodes
+f6n6           # management node
+f6n7 
+f6n8 
+f6n9 
+f6n10
+# Frame 7 nodes
+f7n1 
+# Frame 10 nodes
+f10n1
+f10n2
+f10n3
+f10n8
+f10n9            
+        </screen>
+      </para>
+
+      <note>
+        <para>
+          It is important that the node running the management processes is NOT in the nodelist.
+          If the management node is in the nodelist an agent will be started on that node and
+          Job Processes (JPs) will be started on it.  Because JPs use a very large amount of
+          memory this can prevent the management processes from functioning.
+        </para>
+        <para>
+          However, if nodes are at a premium on your cluster, it is possible to allow the Job Driver (JD)
+          processes to run on the management node along with the management processes.  If this
+          is desired, then:
+          <orderedlist>
+            <listitem>
+              <para>
+              Do include the management node in the nodelist, and
+              </para>
+            </listitem>
+            <listitem>
+              <para>
+              Configure the management node to be reserved for use only by the Job Driver as described below.
+              </para>
+            </listitem>
+          </orderedlist>
+        </para>
+      </note>
+
+    </section>
+
+    <section id="ducc.install.detail.jobdriver.nodepool">
+      <title>Define the Job Driver nodepool</title>
+      <para>
+        One node should be defined for running the Job Driver (JD) processes.  This may be any
+        node in the cluster.  The node must be reserved to prevent Job Processes (JP) from
+        running on it.  It is permissible for the JD reserved node to be the management 
+        node, as long as sufficient memory (at least 16GB) is available.  To constrain the
+        Job Driver node to a specific set of nodes, it is necessary to define a nodepool
+        containing those nodes, and to update the JobDriver class to use that node pool.  Details
+        on nodepool and class configuration are in <xref linkend="ducc.classes"/>.
+      </para>
+
+      <para>
+        If it doesn't matter which node is reserved for the Job Driver this step may be skipped.
+      </para>
+
+      <para>
+        Configure the Job Driver node thus:
+        <orderedlist>
+          <listitem>
+            <para>
+            Create the file <emphasis>ducc_runtime/resources/jobdriver.nodepool</emphasis>
+            </para>
+          </listitem>
+          <listitem>
+            <para>
+            Add the name of the node to be reserved for the Job Driver (in this example, the management node) to the file.  This should be the only line in the file.
+            </para>
+          </listitem>
+          <listitem>
+            <para>
+              Configure the <emphasis>JobDriver</emphasis> class in <emphasis>ducc.properties</emphasis>
+              to be in the jobdriver nodepool.
+            </para>
+          </listitem>
+        </orderedlist>
+        
+        For example:
+        <programlisting>
+bash-3.2$ cat jobdriver.nodepool
+f6n6    # management and job driver node
+        </programlisting>
+      </para>    
+    </section>
+    
+    <section id="ducc.install.detail.ducc.administrators">
+      <title>Define the system administrators</title>
+      <para>
+        Userids listed in file <emphasis>ducc_runtime/resources/ducc.administrators</emphasis> are granted expanded privileges, 
+        for example the ability to cancel any job on the system via the DUCC web-server. The format of the file is simply one userid 
+        per line, with commented lines denoted by a leading <emphasis role="bold">#</emphasis>.  For example:
+       <programlisting>
+# administrators
+degenaro
+challngr
+cwiklik
+eae     
+        </programlisting>
+      </para>
+  </section>
+
+  </section> <!-- ducc.install.detail -->
+
+  <section>
+    <title>Run The Verification Script</title>
+    <para>
+      The script <emphasis>~ducc/ducc_runtime/admin/verify_ducc</emphasis>
+      checks your ActiveMQ configuration, ducc.nodes, and
+      ducc_ling setup to ensure the steps above were completed correctly.
+    </para>
+    
+    <para>
+      Simply execute the script, fixing problems and rerunning until
+      no errors are reported.  If ANY errors are reported they must be
+      fixed and <emphasis>verify_ducc</emphasis> rerun before continuing.
+
+      <screen>
+cd ducc_runtime/admin
+./verify_ducc
+      </screen>
+    </para>
+  </section>
+
+  <section>
+    <title>Start DUCC</title>
+    
+    <!-- For next release test and package our AMQ scripting and configuration. -->
+
+    <para>
+      You should add the directory <filename>ducc_runtime/admin</filename> to your path
+      to simplify DUCC administration.  You should also add <filename>ducc_runtime/bin</filename>
+      to your path in order to submit and cancel jobs and reservations.
+    </para>
+
+    <orderedlist>
+      <listitem>
+        <para>
+                
+        Start the ActiveMQ broker.  If you are using the broker supplied with DUCC, use the
+        following procedure; otherwise, use your local procedures.
+        <screen>
+cd ~ducc/activemq/apache-activemq-5.5.0/bin
+./activemq start
+        </screen>
+        </para>
+      </listitem>
+      
+      <listitem>
+        <para>
+          Ensure the broker is running.  If you use the ActiveMQ distribution supplied
+          with DUCC and are using the default port, then use the following command; otherwise,
+          use your local procedures.
+          <screen>
+netstat -an | grep 61616 | grep LISTEN
+          </screen>
+          
+          You should see something similar to the following  if ActiveMQ is started
+          correctly.  Be sure ActiveMQ is started before continuing (because ActiveMQ
+          manages all message flows and acts as the DUCC name server).
+          
+          <screen>
+tcp46      0      0  *.61616      *.*        LISTEN
+          </screen>
+        </para>
+      </listitem>
+      
+      <listitem>
+        <para>
+          Start DUCC.  The command below starts DUCC using the default node list,
+          <emphasis>ducc.nodes.</emphasis>  See the section describing
+          <emphasis>start_ducc</emphasis> for other options.
+          <programlisting>
+cd ~ducc/ducc_runtime/admin
+./start_ducc
+          </programlisting>
+        </para>
+      </listitem>
+      
+      <listitem>
+        <para>
+          Make sure DUCC is running on all the expected nodes by running the
+          <emphasis>check_ducc</emphasis> script.  You would expect to see a process for each of 
+          
+          <itemizedlist>
+            <listitem>
+              <para>
+              rm - the Resource manager
+              </para>
+            </listitem>
+            <listitem>
+              <para>
+              sm - the Services manager
+              </para>
+            </listitem>
+            <listitem>
+              <para>
+              pm - the Process manager
+              </para>
+            </listitem>
+            <listitem>
+              <para>
+              ws - the web-server
+              </para>
+            </listitem>
+            <listitem>
+              <para>
+              or - the Orchestrator (job flow manager)
+              </para>
+            </listitem>
+          </itemizedlist>
+          
+          and you would expect to see one agent on each node specified in ducc.nodes.
+        </para>
+        
+        <para>
+          For example:
+          <screen>
+            <![CDATA[
+ ducc@f10n1:~/projects/ducc/ducc_build/runtime/admin> ./check_ducc
+ Checking f10n1 ... Found rm @ f10n1 PID 95288 owned by ducc
+ Found pm @ f10n1 PID 95337 owned by ducc
+ Found sm @ f10n1 PID 95409 owned by ducc
+ Found or @ f10n1 PID 95478 owned by ducc
+ Found agent @ f10n1 PID 95621 owned by ducc
+ Checking f10n2 ... Found agent @ f10n2 PID 92113 owned by ducc
+ Checking f10n3 ... Found agent @ f10n3 PID 58602 owned by ducc
+ Checking f10n4 ... Found agent @ f10n4 PID 31689 owned by ducc
+ Checking f10n5 ... Found agent @ f10n5 PID 122128 owned by ducc
+ Checking f10n6 ... Found agent @ f10n6 PID 8301 owned by ducc
+ Checking f10n7 ... Found agent @ f10n7 PID 106659 owned by ducc
+ Checking f10n8 ... Found agent @ f10n8 PID 43946 owned by ducc
+ Checking f10n9 ... Found agent @ f10n9 PID 115101 owned by ducc
+ Checking f10n10 ... Found agent @ f10n10 PID 93730 owned by ducc
+ Checking f9n2 ... Found ws @ f9n2 PID 88351 owned by ducc
+            ]]>
+          </screen>
+        </para>
+      </listitem>
+
+    </orderedlist>
+
+  </section> <!-- ducc.install.detail.start -->
+
+  <section id="ducc.install.detail.browser">
+    <title>Start DUCC Browser</title>
+
+    <para>
+      Open a browser to the URL http://wshost:42133, where "wshost" is
+      the host where the DUCC web-server is started in the previous step.  Feel free
+      to explore.  
+    </para>
+
+    <para>
+      Click the "Status" and then "Machines" link at the upper left to see the
+      machines that were configured above.  If they do not show up after a minute or
+      two there is something wrong with the installation.
+    </para>
+
+    <para>
+      Click the "Status" and then "Reservations" link.  This should show a reservation
+      for user "System" and class "JobDriver".  The status should show "Assigned" or
+      "Waiting For Resources".  If it shows "Waiting For Resources" it may take two to
+      three minutes to advance to "Assigned".  If it never becomes "Assigned" there is
+      something wrong with the installation.
+    </para>
+
+    <para>
+      Once the machines and JobDriver reservation show up correctly DUCC is ready to
+      run work.
+    </para>
+
+  </section> <!-- ducc.install.detail.browser -->
+
+  <section id="ducc.install.detail.job" >
+    <title>Run a Job</title>
+
+    <note> 
+      <para>
+        Jobs cannot be scheduled until all DUCC components have initialized and
+        stabilized, which can take a minute or two.  Check the web console, under
+        Status -> Reservations and wait until the reservation for JobDriver is in
+        state "Assigned" before attempting to run jobs.
+      </para>
+    </note>
+
+    <para>
+      A set of very simple jobs is provided in the distribution for
+      testing and demonstration.  The jobs are installed
+      into <filename class="directory">ducc_runtime/test</filename>
+      as part of the installation above.  The jobs run
+      UIMA analytics, but instead of computing they simply sleep, in
+      order to verify and demonstrate DUCC without the need for
+      high-powered hardware or a complex software installation.
+    </para>
+
+    <para>
+      To run a job:
+      <orderedlist>
+        <listitem>
+          <para>
+          Set your path to
+          include <emphasis>ducc_runtime/bin</emphasis>. This
+          directory has all the commands for the Command Line
+          Interface (CLI).
+          </para>
+        </listitem>
+
+        <listitem>
+          <para>
+          As some user other than <emphasis>ducc</emphasis>, go to the
+          directory <filename class="directory">ducc_runtime/test/jobs</filename>
+          and run <emphasis>ducc_submit</emphasis>:
+          <programlisting>
+cd ~ducc/ducc_runtime/test/jobs
+ducc_submit --specification 1.job
+          </programlisting>
+          
+          A job id number is printed to the console.
+          </para>
+        </listitem>
+      </orderedlist>
+    </para>
+
+    <para>
+      It will take a few moments for resources to be scheduled and the
+      job to start up. You can follow the progress of the job in the
+      web browser using the <emphasis>Status -> Jobs</emphasis> link.
+    </para>
+    
+    <para>
+      In your home directory expect to find the following:
+      <itemizedlist>
+        <listitem>
+          <para>
+          The directory <filename class="directory">ducc/logs</filename>
+          is created.
+          </para>
+        </listitem>
+        
+        <listitem>
+          <para>
+          Inside <filename class="directory">ducc/logs</filename> a
+          directory with the same id as was returned when you submitted
+          the job should appear.  As the job progresses, a number of logs and
+          other files will be created in this directory.
+          </para>
+        </listitem>
+      </itemizedlist>
+    </para>
+
+    <para> 
+      Five sample jobs are provided, each of which runs a different number of
+      work items, along with one specification that submits a <emphasis>reservation</emphasis>.  One need not
+      wait for one job to complete before submitting another; try submitting several of
+      the jobs and watch progress on the web-server and visualization.
+    </para>
+
+    <para>
+      You may cancel any job while it is running by executing <emphasis>ducc_cancel:</emphasis>
+      <programlisting>
+ducc_cancel --id [id]
+      </programlisting>
+      where the ID you supply is the one returned by <emphasis>ducc_submit</emphasis>.  The ID
+      is also shown in the web-server.
+    </para>
+
+    <para>
+      To submit a reservation:
+      <programlisting>
+ducc_reserve --specification reserve.job
+      </programlisting>
+      This will take a few moments and if all is well, will return an ID.  The reservation will 
+      have been scheduled when the ID is returned.  It is possible to view the reservation
+      in the web-server under Status -> Reservations.
+    </para>
+
+    <para>
+      To cancel the reservation:
+      <programlisting>
+ducc_unreserve --id [id]
+      </programlisting>
+      again, using the ID returned from <emphasis>ducc_reserve.</emphasis>
+    </para>
+
+    <para>
+      The commands issued here and the format of the inputs are described in detail in the
+      Command Line Interface chapter.
+    </para>
+
+  </section> <!-- ducc.install.detail.job -->
+
+  <section id="ducc.install.detail.shutdown">
+    <title>Shutdown DUCC</title>
+    <para>
+      To stop DUCC, execute
+      
+      <programlisting>
+~ducc/ducc_runtime/admin/ducc_stop -a
+      </programlisting>
+    </para>
+    
+    <para>
+      This broadcasts a message to all DUCC processes instructing them to terminate.  Any 
+      job processes still alive are also killed.
+    </para>
+
+    <para>
+      Shutdown attempts to be "graceful".  If a job is still running, a signal is sent
+      indicating that shutdown is occurring, and DUCC waits a few moments for the processes to exit.  If
+      the processes do not exit, DUCC issues <emphasis>kill -9</emphasis> to forcibly stop
+      them, and then exits.
+    </para>
+
+    <para>
+      Occasionally system problems prevent a DUCC process from stopping.  It is good practice,
+      after stopping DUCC, to ensure the processes actually exited by running
+      <programlisting>
+check_ducc
+      </programlisting>
+
+      <emphasis>check_ducc</emphasis> searches all the nodes in the node list and the local
+      node for DUCC processes and prints a status line for everything it finds.
+    </para>
+    
+    <para>
+      If, after a minute or two, <emphasis>check_ducc</emphasis> shows some DUCC process still running, you can
+      have <emphasis>check_ducc</emphasis> issue <emphasis>kill -9</emphasis> against them:
+      <programlisting>
+check_ducc -k
+      </programlisting>
+    </para>
+  </section>
+         
+</chapter>

Propchange: uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-admin/chapter-install.xml
------------------------------------------------------------------------------
    svn:eol-style = native

Added: uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-admin/chapter-resource-manager.xml
URL: http://svn.apache.org/viewvc/uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-admin/chapter-resource-manager.xml?rev=1427917&view=auto
==============================================================================
--- uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-admin/chapter-resource-manager.xml (added)
+++ uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-admin/chapter-resource-manager.xml Wed Jan  2 19:12:10 2013
@@ -0,0 +1,541 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+  
+       http://www.apache.org/licenses/LICENSE-2.0
+  
+  Unless required by applicable law or agreed to in writing,
+  software distributed under the License is distributed on an
+  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  KIND, either express or implied.  See the License for the
+  specific language governing permissions and limitations
+  under the License.
+-->
+<chapter id="ducc.rm">
+  <title>Resource Management, Operation, and Configuration</title>
+  
+  <para>
+    <emphasis>The source for this chapter is ducc_ducbook/documents/chapter-resource-manager.xml</emphasis>
+  </para>
+  
+  <section>
+    <title>Overview</title>
+    <para>
+      The DUCC Resource Manager is responsible for allocating cluster resources among the various
+      requests for work in the system.  DUCC recognizes three classes of work:
+      <orderedlist>
+        <listitem>
+          <para>
+            <emphasis>Managed Jobs</emphasis>.  Managed jobs are Java applications implemented in the
+            UIMA framework.  They are scaled out by DUCC using UIMA-AS.  Managed jobs are executed as
+            some number of discrete processes distributed over the cluster resources.  All processes
+            of all jobs are by definition preemptable; the number of
+            processes is allowed to increase and decrease over time in order to provide all users
+            access to the computing resources.
+          </para>
+        </listitem>
+        <listitem>
+          <para>
+            <emphasis>Services</emphasis>.  Services are long-running processes which perform
+            some function on behalf of jobs or other services.  Most DUCC services are UIMA-AS
+            services and are managed the same as <emphasis>managed jobs</emphasis>.  From a 
+            scheduling point of view, there is no difference between services and managed jobs.
+          </para>
+        </listitem>
+        <listitem>
+          <para>
+            <emphasis>Reservations</emphasis>.  A reservation provides persistent, dedicated use of
+            some portion of the resources to a specific user.  A reservation may be for an entire
+            machine, or it may be for some portion of a machine.  Machines are subdivided 
+            according to the amount of memory installed on the machine.
+          </para>
+        </listitem>
+      </orderedlist>
+    </para>
+    
+    <para>
+      The work that DUCC is designed to support is extremely memory-intensive.  In most cases
+      resources are significantly more constrained by memory than by CPU processing power.  The
+      entire resource pool in a DUCC cluster therefore consists of the total memory of all the
+      processors in the cluster.
+    </para>
+
+    <para>
+      In order to apportion the cumulative memory resource among requests, the Resource Manager
+      defines some minimum unit of memory and allocates machines such that a "fair" 
+      number of "memory units" are awarded to every user of the system.  This minimum quantity is called
+      a <emphasis>share quantum</emphasis>, or simply, a <emphasis>share</emphasis>.  The scheduling
+      goal is to award an equitable number of memory <emphasis>shares</emphasis> to every user of
+      the system.
+    </para>
+
+    <para>
+      The Resource Manager awards shares according to a <emphasis>fair share</emphasis> policy.  The
+      memory shares in a system are divided equally among all the users who have work in the system.
+      Once an allocation is assigned to a user, that user's jobs are then also assigned an equal
+      number of shares, out of the user's allocation.  Finally, the Resource Manager maps the share
+      allotments to physical resources.
+    </para>
+
+    <para>
+      To map a share allotment to physical resources, the Resource Manager considers the amount of
+      memory that each job declares it requires for each process.  That per-process memory
+      requirement is translated into the minimum number of collocated quantum shares required for the
+      process to run.
+    </para>
+    
+    <para>
+      For example, suppose the share quantum is 15GB.  A job that declares it requires 14GB per
+      process is assigned one quantum share per process. If that job is assigned 20 shares, it will
+      be allocated 20 processes across the cluster.  A job that declares 28GB per process would be
+      assigned <emphasis>two</emphasis> quanta per process.  If that job is assigned 20 shares, it
+      is allocated 10 processes across the cluster.  Both jobs occupy the same amount of memory;
+      they consume the same level of system resources.  The second job does so in half as many
+      processes, however.
+    </para>
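+    <para>
+      In other words, the number of quantum shares assigned to each process is the declared
+      per-process memory divided by the share quantum, rounded up.  For the example above:
+      <programlisting>
+quanta per process = ceiling(declared memory / share quantum)
+
+ceiling(14GB / 15GB) = 1 quantum  ->  20 shares = 20 processes
+ceiling(28GB / 15GB) = 2 quanta   ->  20 shares = 10 processes
+      </programlisting>
+    </para>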
+
+    <para>
+      The output of each scheduling cycle is always in terms of <emphasis>processes</emphasis>,
+      where each process is allowed to occupy some number of shares. The DUCC agents implement a
+      mechanism to ensure that no user's job processes exceed their allocated memory assignments.
+    </para>
+    
+    <para>
+      Some work may be deemed to be more "important" than other work.  To accommodate this, DUCC
+      allows jobs to be submitted with an indication of their relative importance: more important
+      jobs are assigned a higher "weight"; less important jobs are assigned a lower weight.
+
+      During the fair share calculations, jobs with higher weights are assigned more shares
+      proportional to their weights; jobs with lower weights are assigned proportionally fewer
+      shares.  Jobs with equal weights are assigned an equal number of shares.  This weighted
+      adjustment of fair-share assignments is called <emphasis>weighted fair share.</emphasis>
+    </para>
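+    <para>
+      As a simple illustration (the numbers here are hypothetical), suppose 90 shares are
+      available and two jobs compete, one with weight 2 and one with weight 1.  Weighted fair
+      share divides the shares in proportion to the weights:
+      <programlisting>
+job A, weight 2:  90 * 2 / (2 + 1) = 60 shares
+job B, weight 1:  90 * 1 / (2 + 1) = 30 shares
+      </programlisting>
+    </para>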
+
+    <para>
+      The abstraction used to organize jobs by importance is the <emphasis>job class</emphasis>
+      or simply <emphasis>class.</emphasis>  As jobs enter the system they are grouped with 
+      other jobs of the same importance and assigned to a common <emphasis>class.</emphasis>  The
+      class and its attributes are described in subsequent sections.
+    </para>
+
+    <para>
+      The scheduler executes in two phases:
+      <orderedlist>
+        <listitem>
+          <para>
+            The <emphasis>How-Much</emphasis> phase: every job is assigned some number of shares,
+            which is converted to the number of processes of the declared size.
+          </para>
+        </listitem>
+        <listitem>
+          <para>
+            The <emphasis>What-Of</emphasis> phase: physical machines are found which can
+            accommodate the number of processes allocated by the <emphasis>How-Much</emphasis>
+            phase.  Jobs are mapped to physical machines such that the total declared per-process
+            amount of memory does not exceed the physical memory on the machine.  
+          </para>
+        </listitem>
+      </orderedlist>
+    </para>
+
+    <para>
+      The <emphasis>How-Much</emphasis> phase is itself subdivided into three phases:
+      <orderedlist>
+        <listitem>
+          <para>
+            <emphasis role="bold">Class counts:</emphasis> Apply <emphasis>weighted
+              fair-share</emphasis> to all the job classes that have jobs assigned to them.  This
+            apportions all shares in the system among all the classes according to their weights.
+          </para>
+        </listitem>
+        <listitem>
+          <para>
+            <emphasis role="bold">User counts:</emphasis> For each class, collect all the users with
+            jobs submitted to that class, and apply <emphasis>fair-share</emphasis> (with equal weights)
+            to equally divide all the class shares among the users.  This apportions all shares
+            assigned to the class among the users in this class.
+          </para>
+          <para>
+            A user may have jobs in more than one class, in which case that user's fair share
+            is calculated independently within each class.
+          </para>
+        </listitem>
+        <listitem>
+          <para>
+            <emphasis role="bold">Job counts:</emphasis> For each user (independently within each
+            class), collect all the jobs assigned to that user and
+            apply <emphasis>fair-share</emphasis> to equally divide all the user's shares among their
+            jobs.  This apportions all shares given to this user for each class among the user's
+            jobs in that class.
+          </para>
+        </listitem>
+      </orderedlist>
+    </para>
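+
+    <para>
+      The three sub-phases compose naturally: shares are divided among the classes by weight,
+      then equally among the users within each class, then equally among each user's jobs.
+      The sketch below is a deliberately simplified, hypothetical illustration of that cascade
+      (no caps, no prediction, no nodepools); it is not DUCC source code.
+    </para>
+
+    <programlisting><![CDATA[
+import java.util.Arrays;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+
+// Illustrative only: a simplified "How-Much" calculation.  Classes are
+// weighted; users within a class and jobs within a user are weighted equally.
+public class HowMuchSketch {
+
+    /** Hand out whole shares one at a time to the entity furthest below its entitlement. */
+    static int[] split(int shares, double[] weights) {
+        int[] counts = new int[weights.length];
+        double sum = Arrays.stream(weights).sum();
+        for (int s = 0; s < shares; s++) {
+            int best = 0;
+            double bestDeficit = Double.NEGATIVE_INFINITY;
+            for (int i = 0; i < weights.length; i++) {
+                double deficit = shares * (weights[i] / sum) - counts[i];
+                if (deficit > bestDeficit) { bestDeficit = deficit; best = i; }
+            }
+            counts[best]++;
+        }
+        return counts;
+    }
+
+    static double[] equalWeights(int n) {
+        double[] w = new double[n];
+        Arrays.fill(w, 1.0);
+        return w;
+    }
+
+    /**
+     * classWeights: class name to weight; jobs: class name to (user name to job names).
+     * Returns job name to assigned share count.
+     */
+    static Map<String, Integer> howMuch(int totalShares,
+                                        Map<String, Double> classWeights,
+                                        Map<String, Map<String, List<String>>> jobs) {
+        Map<String, Integer> result = new LinkedHashMap<>();
+
+        // Phase 1: class counts -- weighted fair share over the classes with work.
+        List<String> classes = List.copyOf(jobs.keySet());
+        double[] cw = new double[classes.size()];
+        for (int i = 0; i < classes.size(); i++) cw[i] = classWeights.get(classes.get(i));
+        int[] classShares = split(totalShares, cw);
+
+        for (int c = 0; c < classes.size(); c++) {
+            Map<String, List<String>> users = jobs.get(classes.get(c));
+
+            // Phase 2: user counts -- equal fair share over this class's users.
+            List<String> userNames = List.copyOf(users.keySet());
+            int[] userShares = split(classShares[c], equalWeights(userNames.size()));
+
+            for (int u = 0; u < userNames.size(); u++) {
+                // Phase 3: job counts -- equal fair share over this user's jobs.
+                List<String> jobNames = users.get(userNames.get(u));
+                int[] jobShares = split(userShares[u], equalWeights(jobNames.size()));
+                for (int j = 0; j < jobNames.size(); j++) {
+                    result.put(jobNames.get(j), jobShares[j]);
+                }
+            }
+        }
+        return result;
+    }
+}
+]]></programlisting>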
+
+    <para>
+      Reservations are relatively simple.  If the number of shares or machines requested is
+      available or can be made available through preemption of fair-share jobs, the reservation
+      is satisfied and resources are allocated.  If not, the reservation fails.  In the case
+      where preemptions are required, the reservation is delayed until all necessary
+      resources have been freed.
+    </para>
+
+  </section>
+
+  <section>
+    <title>Scheduling policies</title>
+
+    <para>
+      The Resource Manager implements three coexistent scheduling policies. 
+
+      <variablelist>
+        <varlistentry>
+          <term><emphasis role="bold">FAIR_SHARE</emphasis></term>
+          <listitem>
+            <para>
+              This is the weighted-fair-share policy described in detail above.
+            </para>
+          </listitem>
+        </varlistentry>
+
+        <varlistentry>
+          <term><emphasis role="bold">FIXED_SHARE</emphasis></term>
+          <listitem>
+            <para>
+              The <emphasis>FIXED_SHARE</emphasis> policy is used to reserve a portion of a
+              machine.  The allocation is treated as a reservation in that it is permanently
+              allocated (until it is canceled) and it cannot be preempted by any other
+              request.
+            </para>
+            <para>
+              A fixed-share request specifies a number of processes of a given size,
+              for example, "10 processes of 32GB each".  The ten processes may or
+              may not be collocated on the same machine.  Note that the resource manager attempts
+              to minimize fragmentation, so if there is a very large machine with few
+              allocations, it is likely that there will be some collocation of the
+              assigned processes.
+            </para>
+            <para>
+              A fixed-share allocation may be thought of as a reservation for a "partial"
+              machine.
+            </para>
+          </listitem>
+        </varlistentry>
+
+        <varlistentry>
+          <term><emphasis role="bold">RESERVE</emphasis></term>
+          <listitem>
+            <para>
+              The <emphasis>RESERVE</emphasis> policy is used to reserve a full
+              machine.  It always returns an allocation for an entire machine. The
+              reservation is permanent (until it is canceled) and it cannot be
+              preempted by any other request.
+            </para>
+            <para>
+              It is possible to configure the scheduling policy so that a
+              reservation returns any machine in the cluster that is available,
+              or to restrict it to machines of the size specified in the
+              reservation request.
+            </para>
+          </listitem>
+        </varlistentry>
+      </variablelist>
+    </para>
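+
+    <para>
+      The key differences between the policies can be summarized as data.  The following enum
+      is purely illustrative (it is neither DUCC configuration nor source code): it records
+      whether an allocation made under each policy can be preempted, and what a successful
+      request receives.
+    </para>
+
+    <programlisting><![CDATA[
+// Illustrative summary of the three policies; not DUCC source code.
+enum SchedulingPolicy {
+    FAIR_SHARE(true,   "a number of shares, recomputed every scheduling cycle"),
+    FIXED_SHARE(false, "a fixed number of processes of a declared size"),
+    RESERVE(false,     "one or more entire machines");
+
+    final boolean preemptable;   // can the allocation be taken back to rebalance the system?
+    final String  allocates;     // what a successful request receives
+
+    SchedulingPolicy(boolean preemptable, String allocates) {
+        this.preemptable = preemptable;
+        this.allocates   = allocates;
+    }
+}
+]]></programlisting>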
+
+  </section>
+
+  <section>
+    <title>Priority vs Weight</title>
+
+    <para>
+      The various scheduling policies may interfere with each other, and fair-share weights
+      alone may not guarantee that high-importance jobs receive enough resources.
+      <emphasis>Priorities</emphasis>
+      are used to resolve these conflicts.
+    </para>
+
+    <para>
+      Simply: <emphasis>priority</emphasis> is used to specify the order of evaluation of
+      the job classes.  <emphasis>Weight</emphasis> is used to specify the importance (or
+      weights) of the job classes for use by the weighted fair-share
+      scheduling policy.
+    </para>
+
+    <formalpara>
+      <title>Priority</title>
+      <para>
+        It is possible that conflicts may arise in scheduling policies.  For example, it may be
+        desired that reservations be fulfilled before any fair-share jobs are scheduled.  It may
+        be desired that some types of jobs are so important that when they enter the system
+        all other fair-share jobs be evicted.  Other such examples can be found.
+      </para>
+    </formalpara>
+    
+    <para>
+      To help resolve this, the Resource Manager allows job classes to be prioritized. Priority
+      is used to determine the <emphasis>order of evaluation</emphasis> of the scheduling 
+      classes.
+    </para>
+    
+    <para>
+      When a scheduling cycle starts, the scheduling classes are ordered from "best" to 
+      "worst" priority.  The scheduler then attempts to allocate ALL of the system's
+      resources to the "best" priority class.  If any resources are left, the scheduler
+      goes on to the next class and so on, until either all the resources are exhausted
+      or there is no more work to schedule.
+    </para>
+    
+    <para>
+      Multiple job classes may have the same priority, in which case resources for that set
+      of classes are allocated together from the same pool of remaining resources.
+      Resources for higher priority classes will have already been allocated, and resources
+      for lower priority classes may never become available.
+    </para>
+    
+    <para>
+      To prevent high priority jobs from completely monopolizing the system,
+      <emphasis>class caps</emphasis> may be assigned.
+      Higher priority guarantees that <emphasis>some</emphasis> resources will be available (or
+      made available), but it does not mean that <emphasis>all</emphasis> resources will
+      necessarily be used.
+    </para>
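+
+    <para>
+      A minimal sketch of priority as order of evaluation follows.  It is purely illustrative
+      (hypothetical names; it treats classes of equal priority sequentially rather than
+      fair-sharing them together, and it ignores preemption): each class, taken in priority
+      order, draws what it needs, up to its cap, from whatever remains unallocated.
+    </para>
+
+    <programlisting><![CDATA[
+import java.util.ArrayList;
+import java.util.Comparator;
+import java.util.List;
+
+// Illustrative only: classes are evaluated best-priority first and draw from
+// whatever is still unallocated, up to their class cap.
+public class PriorityOrderSketch {
+
+    static class JobClass {
+        final String name;
+        final int priority;   // in this sketch a smaller value means a "better" priority
+        final int demand;     // shares the class's jobs could use this cycle
+        final int cap;        // class cap: maximum shares the class may hold
+        JobClass(String name, int priority, int demand, int cap) {
+            this.name = name; this.priority = priority; this.demand = demand; this.cap = cap;
+        }
+    }
+
+    static void allocate(int totalShares, List<JobClass> classes) {
+        List<JobClass> ordered = new ArrayList<>(classes);
+        ordered.sort(Comparator.comparingInt(c -> c.priority));
+        int remaining = totalShares;
+        for (JobClass c : ordered) {
+            int granted = Math.min(remaining, Math.min(c.demand, c.cap));
+            remaining -= granted;
+            System.out.println(c.name + " gets " + granted + " shares");
+        }
+    }
+}
+]]></programlisting>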
+
+
+    <formalpara>
+      <title>Weight</title>
+      <para>
+        Weight is used to determine the relative importance of jobs in a set of job
+        classes of the same priority when doing fair-share allocation.  All job classes
+        of the same priority are assigned shares from the full set of available
+        resources according to their weights using weighted fair-share.  Weights are
+        used only for fair-share allocation.
+      </para>
+    </formalpara>
+
+    <para>
+      <emphasis>Class caps</emphasis> may also be used to ensure that very high
+      importance jobs cannot fully monopolize all of the resources in the system.
+    </para>
+    
+  </section>
+
+  <section>
+    <title>Node Pools</title>
+
+    <para>
+      It may be desired or necessary to constrain certain types of resource allocations to
+      a specific subset of the resources.  Some nodes may have special hardware, or perhaps
+      it is desired to prevent certain types of jobs from being scheduled on some specific
+      set of machines.  Nodepools are designed to provide this function.
+    </para>
+
+    <para>
+      Nodepools impose hierarchical partitioning on the set of available machines.  A nodepool
+      is a subset of the full set of machines in the cluster.  Nodepools may not overlap.
+      A nodepool may itself contain non-overlapping subpools. The highest level nodepool is called the "global"
+      nodepool.  If a job class does not have an associated nodepool, the global nodepool
+      is implicitly associated with the class.
+    </para>
+
+    <para>
+      Nodepools are associated with job classes.  During scheduling, a job may be assigned resources
+      from its associated nodepool, or from any of the subpools which divide the associated
+      nodepool.  The scheduler attempts to fully exhaust resources in the associated nodepool before
+      allocating within the subpools, and during eviction, attempts to first evict from the
+      subpools.  The scheduler ensures that the nodepool mechanism does not disrupt fair-share
+      allocation.
+    </para>
+
+    <para>
+      If it is desired that jobs assigned to some subpool take priority over jobs that have 
+      spilled over from the "superpool", then the class associated with the subpool should
+      be given greater weight, or greater priority, as appropriate.  (See the Weight vs
+      Priority discussion.)
+    </para>
+
+    <para>
+      There is no explicit priority associated with nodepools.  However, it is possible to 
+      assign a "preference" to a specific nodepool, if it is desired that those nodes be
+      chosen first when they are available.  Use the nodepool configuration's "order" directive to 
+      do this.
+    </para>
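+
+    <para>
+      The hierarchy can be sketched as a small tree.  The example below is purely illustrative
+      (hypothetical pool and machine names); it shows only which machines a class may draw on,
+      not the order in which the scheduler exhausts or evicts them.
+    </para>
+
+    <programlisting><![CDATA[
+import java.util.ArrayList;
+import java.util.List;
+
+// Illustrative only: a pool owns some machines directly and may contain
+// non-overlapping subpools.  A class associated with a pool may draw on the
+// pool's own machines and on those of its subpools.
+public class NodePoolSketch {
+
+    static class NodePool {
+        final String name;
+        final List<String> ownMachines = new ArrayList<>();   // machines directly in this pool
+        final List<NodePool> subpools  = new ArrayList<>();
+
+        NodePool(String name) { this.name = name; }
+
+        /** Candidate machines, listing this pool's own machines before its subpools'. */
+        List<String> candidates() {
+            List<String> all = new ArrayList<>(ownMachines);
+            for (NodePool sub : subpools) {
+                all.addAll(sub.candidates());
+            }
+            return all;
+        }
+    }
+
+    public static void main(String[] args) {
+        NodePool global = new NodePool("global");
+        NodePool gpu    = new NodePool("gpu-nodes");           // hypothetical subpool
+        global.ownMachines.add("node01");
+        global.ownMachines.add("node02");
+        gpu.ownMachines.add("gpu01");
+        global.subpools.add(gpu);
+
+        // A class tied to "global" may use node01, node02, then gpu01;
+        // a class tied to "gpu-nodes" may use gpu01 only.
+        System.out.println(global.candidates());               // [node01, node02, gpu01]
+        System.out.println(gpu.candidates());                  // [gpu01]
+    }
+}
+]]></programlisting>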
+
+  </section>
+
+
+  <section>
+    <title>Job Classes</title>
+    <para>
+      The primary abstraction to control and configure the scheduler is the <emphasis>class</emphasis>.
+      A <emphasis>class</emphasis> is simply a set of rules used to parameterize how resources
+      are assigned to jobs.  Every job that enters the system is associated with one job class.
+    </para>
+
+    <para>
+      The job class defines the following rules:
+      <variablelist>
+
+        <varlistentry>
+          <term><emphasis role="bold">Priority</emphasis></term>
+          <listitem>
+            <para>
+              This is the order of evaluation and assignment of resources to this class.  See the
+              discussion of Priority vs Weight for details.
+            </para>
+          </listitem>
+        </varlistentry>
+
+        <varlistentry>
+          <term><emphasis role="bold">Weight</emphasis></term>
+          <listitem>
+            <para>
+              This defines the "importance" of jobs in this class and is used in the 
+              weighted fair-share calculations.
+            </para>
+          </listitem>
+        </varlistentry>
+
+
+        <varlistentry>
+          <term><emphasis role="bold">Scheduling Policy</emphasis></term>
+          <listitem>
+            <para>
+              This defines the policy, <emphasis>fair share, fixed share, or reserve</emphasis>
+              used to schedule the jobs in this class.
+            </para>
+          </listitem>
+        </varlistentry>
+
+        <varlistentry>
+          <term><emphasis role="bold">Caps</emphasis></term>
+          <listitem>
+            <para>
+              Class caps limit the total resources assigned to a class.  This is designed to prevent high
+              importance and high priority job classes from fully monopolizing the resources.  It can be
+              used to limit the total resources available to lower importance and lower priority classes.
+            </para>
+          </listitem>
+        </varlistentry>
+
+        <varlistentry>
+          <term><emphasis role="bold">Nodepool</emphasis></term>
+          <listitem>
+            <para>
+              A class may be associated with exactly one nodepool.  Jobs submitted to the class are assigned
+              only resources which lie in that nodepool, or in any of the subpools defined within that
+              nodepool.
+            </para>
+          </listitem>
+        </varlistentry>
+
+
+        <varlistentry>
+          <term><emphasis role="bold">Prediction</emphasis></term>
+          <listitem>
+            <para>
+              For the type of work that DUCC is designed to run, new processes typically take a great
+              deal of time initializing.  It is not unusual to experience 30 minutes or more of
+              initialization before work items start to be processed.
+            </para>
+            <para>
+              When a job is expanding (i.e. the number of assigned processes is allowed to
+              dynamically increase), it may be that the job will complete before the new processes
+              can be assigned and the work items within them complete initialization.  In this
+              situation it is wasteful to allow the job to expand, even if its fair-share is greater
+              than the number of processes it currently has assigned.
+            </para>
+            <para>
+              By enabling prediction, the scheduler will consider the average initialization time for
+              processes in this job and the current rate of work completion, and predict the
+              number of processes needed to complete the job in the optimal amount of time.  If
+              this number is less than the job's fair share, the fair share is capped by the
+              predicted need (see the sketch at the end of this section).
+            </para>
+          </listitem>
+        </varlistentry>
+
+        
+        <varlistentry>
+          <term><emphasis role="bold">Prediction Fudge</emphasis></term>
+          <listitem>
+            <para>
+              When doing prediction, it may be desired to look some distance into the
+              future, beyond the initialization time, to
+              predict whether the job will end soon after it is expanded.  The prediction fudge 
+              specifies a time past the expected initialization time that is used to 
+              predict the number of future shares needed.
+            </para>
+          </listitem>
+        </varlistentry>
+
+        <varlistentry>
+          <term><emphasis role="bold">Initialization cap</emphasis></term>
+          <listitem>
+            <para>
+              Because of the long initialization time of processes in most DUCC jobs, process
+              failure during the initialization phase can be very expensive in terms of 
+              wasted resources.  If a process is going to fail because of bugs, missing
+              services, or any other reason, it is best to catch it early.
+            </para>
+            <para>
+              The initialization cap is used to limit the number of processes assigned to
+              a job until it is known that at least one process has successfully passed 
+              from initialization to running.  As soon as this occurs the scheduler will
+              proceed to assign the job its full fair-share of resources.
+            </para>
+          </listitem>
+        </varlistentry>
+
+
+
+        <varlistentry>
+          <term><emphasis role="bold">Expand By Doubling</emphasis></term>
+          <listitem>
+            <para>
+              Even after initialization has succeeded, it may be desired to throttle
+              the rate of expansion of a job into new processes.
+            </para>
+            <para>
+              When expand-by-doubling is enabled, the scheduler allocates either 
+              twice the number of resources a job currently has, or its fair-share
+              of resources, whichever is smaller.
+            </para>
+          </listitem>
+        </varlistentry>
+
+
+        <varlistentry>
+          <term><emphasis role="bold">Maximum Shares</emphasis></term>
+          <listitem>
+            <para>
+              This is for FIXED_SHARE policies only.  Because fixed share allocations are
+              not preemptable, it may be desirable to limit the number of shares that
+              any given request is allowed to receive.
+            </para>
+          </listitem>
+        </varlistentry>
+
+        <varlistentry>
+          <term><emphasis role="bold">Enforce Memory</emphasis></term>
+          <listitem>
+            <para>
+              This is for RESERVE policies only.  It may be desired to allow a reservation request
+              to receive any machine in the cluster, regardless of its memory capacity.  It may also
+              be desired to require that an exact size be specified (to ensure the right size
+              of machine is allocated).  The <emphasis>enforce memory</emphasis> rule allows
+              installations to create reservation classes with either behavior.
+            </para>
+          </listitem>
+        </varlistentry>
+
+
+      </variablelist>
+    </para>
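+
+    <para>
+      Several of the rules above (the initialization cap, expand-by-doubling, and prediction)
+      act as limits on how far a fair-share job may expand in a single scheduling cycle.  The
+      sketch below is a hypothetical illustration of how such limits might compose; the names,
+      parameters, and ordering are illustrative only and do not reflect the scheduler's actual
+      implementation.
+    </para>
+
+    <programlisting><![CDATA[
+// Illustrative only: compose the expansion-limiting rules for one job.
+public class ExpansionCapSketch {
+
+    /**
+     * fairShare          shares the job would get from weighted fair share alone
+     * current            processes the job currently holds
+     * initialized        true once at least one process has completed initialization
+     * initializationCap  limit applied until the first process initializes
+     * predictedNeed      processes prediction says the job can still use
+     *                    (Integer.MAX_VALUE when prediction is disabled)
+     * expandByDoubling   if true, growth in one cycle is limited to twice the current count
+     */
+    static int allowedProcesses(int fairShare, int current, boolean initialized,
+                                int initializationCap, int predictedNeed,
+                                boolean expandByDoubling) {
+        int allowed = fairShare;
+
+        if (!initialized) {
+            allowed = Math.min(allowed, initializationCap);   // initialization cap
+        } else if (expandByDoubling && current > 0) {
+            allowed = Math.min(allowed, 2 * current);         // expand by doubling
+        }
+
+        return Math.min(allowed, predictedNeed);              // prediction cap
+    }
+}
+]]></programlisting>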
+  </section>
+</chapter>

Propchange: uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-admin/chapter-resource-manager.xml
------------------------------------------------------------------------------
    svn:eol-style = native

Added: uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-admin/chapter-service-manager.xml
URL: http://svn.apache.org/viewvc/uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-admin/chapter-service-manager.xml?rev=1427917&view=auto
==============================================================================
--- uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-admin/chapter-service-manager.xml (added)
+++ uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-admin/chapter-service-manager.xml Wed Jan  2 19:12:10 2013
@@ -0,0 +1,267 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+  
+       http://www.apache.org/licenses/LICENSE-2.0
+  
+  Unless required by applicable law or agreed to in writing,
+  software distributed under the License is distributed on an
+  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  KIND, either express or implied.  See the License for the
+  specific language governing permissions and limitations
+  under the License.
+-->
+<chapter id="ducc.sm.ov">
+<title>Service Manager</title>
+
+    <para>
+      The SM maintains a map of all jobs and services, and the states of these entities
+      relative to their dependencies.  This is called the <emphasis>service map</emphasis>.
+    </para>
+
+    <para>
+      A job may contain a list of service endpoints.  The SM maintains the state of these
+      in the job's service map entry.
+    </para>
+
+    <para>
+      The SM API is used to register, deregister, start, stop, and query services.
+      <variablelist>
+        <varlistentry>
+          <term><emphasis role="bold">Register</emphasis></term>
+          <listitem>
+            Register sends a service specification to the SM.  Register optionally
+            starts the service.  The SM uses the OR's DuccServiceSubmit API to
+            start the service.  The service definition and state is persisted
+            over system restarts.
+          </listitem>
+        </varlistentry>
+
+        <varlistentry>
+          <term><emphasis role="bold">Unregister</emphasis></term>
+          <listitem>
+            Unregister removes the service spec.  It is stopped if it is
+            started and not busy. If still busy it is
+            marked implicit and stopped when the reference count goes to 0.
+          </listitem>
+        </varlistentry>
+
+        <varlistentry>
+          <term><emphasis role="bold">Start</emphasis></term>
+          <listitem>
+            Start starts a service and marks it explicit.  If already started but marked
+            implicit it is marked explicit. Only registered services can be started.
+          </listitem>
+        </varlistentry>
+
+        <varlistentry>
+          <term><emphasis role="bold">Stop</emphasis></term>
+          <listitem>
+            Stop stops a service.  If busy, it is marked implicit and stopped when
+            the reference count goes to 0.  Only registered services can be stopped.
+          </listitem>
+        </varlistentry>
+      </variablelist>
+    </para>
+
+    <para>
+      The OR's API allows services to be started and stopped directly.  It is intended for
+      but not restricted to use by the SM.  Services started with this API other than 
+      through the SM are "established" by the SM but not persisted.  There are two verbs:
+      <variablelist>
+        <varlistentry>
+          <term><emphasis role="bold">Submit</emphasis></term>
+          <listitem>
+            Submit is used to present a service specification to the OR for starting.  OR
+            passes it to SM which coordinates with OR to start it.  When started, SM
+            "establishes" it by starting a ping thread.
+          </listitem>
+        </varlistentry>
+
+        <varlistentry>
+          <term><emphasis role="bold">Cancel</emphasis></term>
+          <listitem>
+            Cancel stops a service.  If the service is still busy it doesn't stop
+            until the reference count is 0.
+          </listitem>
+        </varlistentry>
+      </variablelist>
+    </para>
+
+    <para>
+      A service is defined to be <emphasis role="bold">established</emphasis> if it has a ping
+      thread.  The service may or may not be registered.  If registered, it isn't established until
+      it is started and has a ping thread.  If not registered the service is discovered only by
+      reference; on discovery a ping thread is started to establish it.
+    </para>
+
+    <para>
+      We distinguish implicitly started services (by reference from a job) and explicitly started
+      services (by API).  For short we call these implicit and explicit services.  This is orthogonal
+      to whether the service is registered.
+    </para>
+
+    <para>
+      A registered service can be started and stopped.  It stays registered until explicitly unregistered
+      by API.  An unregistered service is pinged on the endpoint provided by the job but cannot otherwise
+      be managed.
+    </para>
+
+    <para>
+      The service state indicates whether a service is implicit or explicit and maintains a reference
+      count.  When the count goes to 0 for implicit services the service is stopped and the ping
+      thread deleted, perhaps after some linger period.  When it goes to 0 for unregistered services
+      the ping is stopped and the ping thread deleted.
+    </para>
+
+
+    <para>
+      If a reference is made to a service that is registered but not established, the mechanism to
+      establish it is started: the service is started and, when it is ready, its ping thread is
+      started and the service is marked implicit.  Similarly, if it is started by the API it is
+      marked explicit.  If a started implicit service receives a start from the API it is moved to
+      explicit.  If a started, busy, explicit service receives a stop from the API it is marked
+      implicit and stopped once the reference count reaches 0.
+    </para>
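+
+    <para>
+      The bookkeeping described above can be sketched as a small state object.  The sketch is
+      purely illustrative (hypothetical class and method names; it is not SM source code) and
+      shows only the implicit/explicit marking and the reference counting.
+    </para>
+
+    <programlisting><![CDATA[
+// Illustrative only: implicit/explicit marking and reference counting for one service.
+public class ServiceStateSketch {
+
+    enum StartMode { IMPLICIT, EXPLICIT }     // started by job reference vs. by API
+
+    boolean established;                      // true while a ping thread is running
+    StartMode mode = StartMode.IMPLICIT;
+    int referenceCount;
+
+    void onJobReference() {                   // a job declares this endpoint as a dependency
+        referenceCount++;
+        if (!established) {
+            established = true;               // start the service (if registered) and its ping thread
+        }
+    }
+
+    void onApiStart() {                       // explicit start pins the service
+        mode = StartMode.EXPLICIT;
+        established = true;                   // start the service and its ping thread when ready
+    }
+
+    void onApiStop() {                        // stop request against a (possibly busy) service
+        mode = StartMode.IMPLICIT;            // demote to implicit ...
+        stopIfIdle();                         // ... and stop only when nothing references it
+    }
+
+    void onJobDereference() {
+        referenceCount--;
+        stopIfIdle();
+    }
+
+    private void stopIfIdle() {
+        if (mode == StartMode.IMPLICIT && referenceCount == 0 && established) {
+            established = false;              // stop the service and delete its ping thread
+        }
+    }
+}
+]]></programlisting>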
+
+    <para>
+      There is one thread to manage the service map and publish to OR.  It is notified after the
+      incoming map is diffed and split.  New work, both job and service, is updated according to
+      service state and added to the map, removed jobs are deleted from the map.  New, updated, and
+      removed services are moved to the service handler thread.  The service map is then published.
+    </para>
+
+    <para>
+      There is another thread that handles only services (the service handler thread).  This one
+      runs on a clock.  The actions below are only in response to OR state, not the
+      register/deregister/start/stop API.  New services with specification are put in a list for
+      starting.  New services without a specification have ping threads started.  Modified services
+      are managed:
+      <itemizedlist>
+        <listitem>
+          If moved from not running to running, start a ping thread.
+        </listitem>
+        <listitem>
+          If moved from running to not running, kill the ping thread, update the service map, and
+          check reason.  If canceled by user or admin / removed (disappeared), delete.  If canceled
+          by system (restart) or crashed, restart.  We depend on OR state accuracy to know whether
+          to restart.
+        </listitem>
+      </itemizedlist>
+    </para>
+
+
+    <para>
+      Threads:
+      <orderedlist>
+        <listitem>One for incoming camel, notified on OR state arrival.
+          Splits the OR state and maintains the localMap.  Notifies the job 
+           threads.
+        </listitem>
+
+        <listitem>One to manage service map.  Notified by splitter thread,
+          updates map and publishes immediately.
+        </listitem>
+
+        <listitem>One thread per service, running pings on a timer.
+        </listitem>
+        
+        <listitem>One temporary thread per OR request used to handle the 
+          APIs to the Orchestrator.  This is created and runs on demand to
+          manage OR communication sessions.
+        </listitem>
+      </orderedlist>
+    </para>
+
+    <para>
+
+      The SM becomes aware of services by registration, submission via OR, and by job reference of 
+      endpoints in the job spec.  This table summarizes the rules for managing services.
+
+      <table frame="all">
+        <title>Service Management Rules</title>
+        <tgroup cols="6">
+          <thead>
+            <row>
+              <entry>Discover</entry>
+              <entry>Persist</entry>
+              <entry>Start By</entry>
+              <entry>Stop By</entry>
+              <entry>Undiscover</entry>
+              <entry>Validate jobs</entry>
+            </row>
+          </thead>
+          <tbody>
+            <row>
+              <entry>SM Register API</entry>
+              <entry>Yes</entry>
+              <entry>SM Start API, Job Reference</entry>
+              <entry>SM Stop API, Last De-reference </entry>
+              <entry>SM Unregister API</entry>
+              <entry>Yes</entry>
+            </row>
+            <row>
+              <entry>OR Submit API</entry>
+              <entry>No</entry>
+              <entry>At Submission</entry>
+              <entry>OR Cancel API</entry>
+              <entry>On Cancel</entry>
+              <entry>Yes</entry>
+            </row>
+            <row>
+              <entry>Reference</entry>
+              <entry>N/A</entry>
+              <entry>N/A </entry>
+              <entry>N/A </entry>
+              <entry>Last De-reference</entry>
+              <entry>Yes</entry>
+            </row>
+          </tbody>
+        </tgroup>
+      </table>
+
+    </para>
+
+    
+    <para>
+      Services are discoverable in these ways:
+
+      <variablelist>
+        <varlistentry>
+          <term><emphasis role="bold">By Reference</emphasis></term>
+          <listitem>
+            A job's descriptor has a UIMA-AS endpoint as a service dependency.  SM starts a listener
+            and updates the service state of the job accordingly.  The listener stays alive until
+            the last reference is removed.  The service is not otherwise managed (started or stopped).
+          </listitem>
+        </varlistentry>
+        
+        <varlistentry>
+          <term><emphasis role="bold">By Submission</emphasis></term>
+          <listitem>
+            A service type of job is submitted for startup.  SM starts a listener and updates the
+            service state of any job that references it accordingly.  The listener stays alive until
+            the service is stopped by the OR's service_cancel API and the last reference is removed.
+          </listitem>
+        </varlistentry>
+        
+        <varlistentry>
+          <term><emphasis role="bold">By Registration</emphasis></term>
+          <listitem>
+            A service specification is registered with SM.  When the service is started, SM starts a
+            listener and updates the service state of any referencing job.  The listener stays alive
+            until the service is stopped and the last reference is removed. (If the service is
+            started implicitly it is stopped when the last reference is removed.  If the service is
+            started by the SM's start_service API it is stopped by the SM's stop_service API.)
+          </listitem>
+        </varlistentry>
+      </variablelist>
+    </para>
+    
+</chapter>

Propchange: uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-admin/chapter-service-manager.xml
------------------------------------------------------------------------------
    svn:eol-style = native

Added: uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-introduction/chapter-acronyms.xml
URL: http://svn.apache.org/viewvc/uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-introduction/chapter-acronyms.xml?rev=1427917&view=auto
==============================================================================
--- uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-introduction/chapter-acronyms.xml (added)
+++ uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-introduction/chapter-acronyms.xml Wed Jan  2 19:12:10 2013
@@ -0,0 +1,81 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+  
+       http://www.apache.org/licenses/LICENSE-2.0
+  
+  Unless required by applicable law or agreed to in writing,
+  software distributed under the License is distributed on an
+  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  KIND, either express or implied.  See the License for the
+  specific language governing permissions and limitations
+  under the License.
+-->
+<chapter id="ducc.acronyms">
+<title>Acronyms</title>
+    <para>
+      <emphasis>The source for this chapter is ducc_ducbook/documents/chapter-acronyms.xml</emphasis>
+    </para>
+
+  <para>
+    AE: UIMA Analysis Engine
+  </para>
+  <para>
+
+    CAS: UIMA Common Analysis Structure
+  </para>
+
+  <para>
+    CC: CAS Consumer
+  </para>
+
+  <para>
+    CM: UIMA CAS Multiplier
+  </para>
+
+  <para>
+    CR: UIMA Collection Reader
+  </para>
+
+  <para>
+    DUCC: Distributed UIMA Cluster Computing
+  </para>
+
+  <para>
+    JD: Job Driver
+  </para>
+
+  <para>
+    JP: Job Process
+  </para>
+
+  <para>
+    OR: Orchestrator
+  </para>
+
+  <para>
+    PM: Process Manager
+  </para>
+
+  <para>
+    RM: Resource Manager
+  </para>
+
+  <para>
+    SM: Service Manager
+  </para>
+
+  <para>
+    UIMA: Unstructured Information Management Architecture (see http://uima.apache.org/)
+  </para>
+
+  <para>
+    UIMA-AS: UIMA Asynchronous Scaleout (see http://uima.apache.org/doc-uimaas-what.html)
+  </para>
+</chapter>

Propchange: uima/sandbox/uima-ducc/trunk/uima-ducc-ducbook/docbook/part-introduction/chapter-acronyms.xml
------------------------------------------------------------------------------
    svn:eol-style = native