Posted to commits@oozie.apache.org by an...@apache.org on 2018/09/14 14:07:10 UTC

[04/11] oozie git commit: OOZIE-2734 [docs] Switch from TWiki to Markdown (asalamon74 via andras.piros, pbacsko, gezapeti)

http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/DG_QuickStart.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_QuickStart.twiki b/docs/src/site/twiki/DG_QuickStart.twiki
deleted file mode 100644
index d6a0069..0000000
--- a/docs/src/site/twiki/DG_QuickStart.twiki
+++ /dev/null
@@ -1,230 +0,0 @@
-<noautolink>
-
-[[index][::Go back to Oozie Documentation Index::]]
-
----+!! Oozie Quick Start
-
-These instructions install and run Oozie using an embedded Jetty server and an embedded Derby database.
-
-For detailed install and configuration instructions refer to [[AG_Install][Oozie Install]].
-
-%TOC%
-
----++ Building Oozie
-
----+++ System Requirements:
-   * Unix box (tested on Mac OS X and Linux)
-   * Java JDK 1.8+
-   * Maven 3.0.1+
-   * Hadoop 2.6.0+
-   * Pig 0.10.1+
-
-JDK commands (java, javac) must be in the command path.
-
-The Maven command (mvn) must be in the command path.
-
----+++ Building Oozie
-
-Download a source distribution of Oozie from the "Releases" drop down menu on the [[http://oozie.apache.org][Oozie site]].
-
-Expand the source distribution =tar.gz= and change directories into it.
-
-The simplest way to build Oozie is to run the =mkdistro.sh= script:
-<verbatim>
-$ bin/mkdistro.sh [-DskipTests]
-
-Running =mkdistro.sh= will create the binary distribution of Oozie. By default, oozie war will not contain hadoop and
-hcatalog libraries, however they are required for oozie to work. There are 2 options to add these libraries:
-
-1. At install time, copy the hadoop and hcatalog libraries to libext and run oozie-setup.sh to setup Oozie. This is
-suitable when same oozie package needs to be used in multiple set-ups with different hadoop/hcatalog versions.
-
-2. Build with -Puber which will bundle the required libraries in the oozie war. Further, the following options are
-available to customise the versions of the dependencies:
--Dhadoop.version=<version> - default 2.6.0
--Ptez - Bundle tez jars in hive and pig sharelibs. Useful if you want to use tez
-+as the execution engine for those applications.
--Dpig.version=<version> - default 0.16.0
--Dpig.classifier=<classifier> - default h2
--Dsqoop.version=<version> - default 1.4.3
--Dsqoop.classifier=<classifier> - default hadoop100
--Djetty.version=<version> - default 9.3.20.v20170531
--Dopenjpa.version=<version> - default 2.2.2
--Dxerces.version=<version> - default 2.10.0
--Dcurator.version=<version> - default 2.5.0
--Dhive.version=<version> - default 1.2.0
--Dhbase.version=<version> - default 1.2.3
--Dtez.version=<version> - default 0.8.4
-
-*IMPORTANT:* Profile hadoop-3 must be activated if building against Hadoop 3
-</verbatim>
-
-More details on building Oozie can be found on the [[ENG_Building][Building Oozie]] page.
-
----++ Server Installation
-
----+++ System Requirements
-
-   * Unix (tested in Linux and Mac OS X)
-   * Java 1.8+
-   * Hadoop
-      * [[http://hadoop.apache.org][Apache Hadoop]] (tested with 1.2.1 & 2.6.0+)
-   * ExtJS library (optional, to enable Oozie webconsole)
-      * [[http://archive.cloudera.com/gplextras/misc/ext-2.2.zip][ExtJS 2.2]]
-
-The Java 1.8+ =bin= directory should be in the command path.
-
----+++ Server Installation
-
-*IMPORTANT:* Oozie ignores any set value for =OOZIE_HOME=, Oozie computes its home automatically.
-
-   * Build an Oozie binary distribution
-   * Download a Hadoop binary distribution
-   * Download ExtJS library (it must be version 2.2)
-
-*NOTE:* The ExtJS library is not bundled with Oozie because it uses a different license.
-
-*NOTE:* Oozie UI browser compatibility Chrome (all), Firefox (3.5), Internet Explorer (8.0), Opera (10.5).
-
-*NOTE:* It is recommended to use a Oozie Unix user for the Oozie server.
-
-Expand the Oozie distribution =tar.gz=.
-
-Expand the Hadoop distribution =tar.gz= (as the Oozie Unix user).
-
-#HadoopProxyUser
-
-*NOTE:* Configure the Hadoop cluster with proxyuser for the Oozie process.
-
-The following two properties are required in Hadoop core-site.xml:
-
-<verbatim>
-  <!-- OOZIE -->
-  <property>
-    <name>hadoop.proxyuser.[OOZIE_SERVER_USER].hosts</name>
-    <value>[OOZIE_SERVER_HOSTNAME]</value>
-  </property>
-  <property>
-    <name>hadoop.proxyuser.[OOZIE_SERVER_USER].groups</name>
-    <value>[USER_GROUPS_THAT_ALLOW_IMPERSONATION]</value>
-  </property>
-</verbatim>
-
-Replace the capital letter sections with specific values and then restart Hadoop.
-
-The ExtJS library is optional (only required for the Oozie web-console to work)
-
-*IMPORTANT:* all Oozie server scripts (=oozie-setup.sh=, =oozied.sh=, =oozie-start.sh=, =oozie-run.sh=
-and =oozie-stop.sh=) run only under the Unix user that owns the Oozie installation directory,
-if necessary use =sudo -u OOZIE_USER= when invoking the scripts.
-
-As of Oozie 3.3.2, use of =oozie-start.sh=, =oozie-run.sh=, and =oozie-stop.sh= has
-been deprecated and will print a warning. The =oozied.sh= script should be used
-instead; passing it =start=, =run=, or =stop= as an argument will perform the
-behaviors of =oozie-start.sh=, =oozie-run.sh=, and =oozie-stop.sh= respectively.
-
-Create a *libext/* directory in the directory where Oozie was expanded.
-
-If using the ExtJS library copy the ZIP file to the *libext/* directory. If hadoop and hcatalog libraries are not
-already included in the war, add the corresponding libraries to *libext/* directory.
-
-A "sharelib create -fs fs_default_name [-locallib sharelib]" command is available when running oozie-setup.sh
-for uploading new sharelib into hdfs where the first argument is the default fs name
-and the second argument is the Oozie sharelib to install, it can be a tarball or the expanded version of it.
-If the second argument is omitted, the Oozie sharelib tarball from the Oozie installation directory will be used.
-Upgrade command is deprecated, one should use create command to create new version of sharelib.
-Sharelib files are copied to new lib_<timestamped> directory. At start, server picks the sharelib from latest time-stamp directory.
-While starting server also purge sharelib directory which is older than sharelib retention days
-(defined as oozie.service.ShareLibService.temp.sharelib.retention.days and 7 days is default).
-
-db create|upgrade|postupgrade -run [-sqlfile <FILE>] command is for create, upgrade or postupgrade oozie db with an
-optional sql file
-
-Run the =oozie-setup.sh= script to configure Oozie with all the components added to the *libext/* directory.
-
-<verbatim>
-$ bin/oozie-setup.sh sharelib create -fs <FS_URI> [-locallib <PATH>]
-                     sharelib upgrade -fs <FS_URI> [-locallib <PATH>]
-                     db create|upgrade|postupgrade -run [-sqlfile <FILE>]
-</verbatim>
-
-*IMPORTANT*: If the Oozie server needs to establish secure connection with an external server with a self-signed certificate,
-make sure you specify the location of a truststore that contains required certificates. It can be done by configuring
-=oozie.https.truststore.file= in =oozie-site.xml=, or by setting the =javax.net.ssl.trustStore= system property.
-If it is set in both places, the value passed as system property will be used.
-
-Create the Oozie DB using the 'ooziedb.sh' command line tool:
-
-<verbatim>
-$ bin/ooziedb.sh create -sqlfile oozie.sql -run
-
-Validate DB Connection.
-DONE
-Check DB schema does not exist
-DONE
-Check OOZIE_SYS table does not exist
-DONE
-Create SQL schema
-DONE
-DONE
-Create OOZIE_SYS table
-DONE
-
-Oozie DB has been created for Oozie version '3.2.0'
-
-$
-</verbatim>
-
-To start Oozie as a daemon process run:
-
-<verbatim>
-$ bin/oozied.sh start
-</verbatim>
-
-To start Oozie as a foreground process run:
-
-<verbatim>
-$ bin/oozied.sh run
-</verbatim>
-
-Check the Oozie log file =logs/oozie.log= to ensure Oozie started properly.
-
-Using the Oozie command line tool check the status of Oozie:
-
-<verbatim>
-$ bin/oozie admin -oozie http://localhost:11000/oozie -status
-</verbatim>
-
-Using a browser go to the [[http://localhost:11000/oozie][Oozie web console]], Oozie status should be *NORMAL*.
-
-Refer to the [[DG_Examples][Running the Examples]] document for details on running the examples.
-
----++ Client Installation
-
----+++ System Requirements
-
-   * Unix (tested in Linux and Mac OS X)
-   * Java 1.8+
-
-The Java 1.8+ =bin= directory should be in the command path.
-
----+++ Client Installation
-
-Copy and expand the =oozie-client= TAR.GZ file bundled with the distribution. Add the =bin/= directory to the =PATH=.
-
-Refer to the [[DG_CommandLineTool][Command Line Interface Utilities]] document for a full reference of the =oozie=
-command line tool.
-
-NOTE: The Oozie server installation includes the Oozie client. The Oozie client should be installed in remote machines
-only.
-
-#OozieShareLib
----++ Oozie Share Lib Installation
-
-Oozie share lib has been installed by oozie-setup.sh create command explained in the earlier section.
-
-See the [[WorkflowFunctionalSpec#ShareLib][Workflow Functional Specification]] and [[AG_Install#Oozie_Share_Lib][Installation]] for more information about the Oozie ShareLib.
-
-[[index][::Go back to Oozie Documentation Index::]]
-
-</noautolink>

http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/DG_SLAMonitoring.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_SLAMonitoring.twiki b/docs/src/site/twiki/DG_SLAMonitoring.twiki
index c91c227..0831b93 100644
--- a/docs/src/site/twiki/DG_SLAMonitoring.twiki
+++ b/docs/src/site/twiki/DG_SLAMonitoring.twiki
@@ -1,12 +1,12 @@
-<noautolink>
 
-[[index][::Go back to Oozie Documentation Index::]]
 
----+!! Oozie SLA Monitoring
+[::Go back to Oozie Documentation Index::](index.html)
 
-%TOC%
+# Oozie SLA Monitoring
 
----++ Overview
+<!-- MACRO{toc|fromDepth=1|toDepth=4} -->
+
+## Overview
 
 Critical jobs can have certain SLA requirements associated with them. This SLA can be in terms of time
 i.e. a maximum allowed time limit for when the job should start, by when it should end,
@@ -23,27 +23,30 @@ Oozie now also has a SLA tab in the Oozie UI, where users can query for SLA info
 of how their jobs fared against their SLAs.
 
 
----++ Oozie Server Configuration
+## Oozie Server Configuration
 
-Refer to [[AG_Install#Notifications_Configuration][Notifications Configuration]] for configuring Oozie server to track
+Refer to [Notifications Configuration](AG_Install.html#Notifications_Configuration) for configuring Oozie server to track
 SLA for jobs and send notifications.
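+
+A hedged sketch of the relevant `oozie-site.xml` entries (listener class names as documented for Oozie's event
+handling; adjust the list to the listeners you actually need):
+
+```
+<property>
+    <name>oozie.services.ext</name>
+    <value>org.apache.oozie.service.EventHandlerService</value>
+</property>
+<property>
+    <name>oozie.service.EventHandlerService.event.listeners</name>
+    <value>org.apache.oozie.sla.listener.SLAJobEventListener,org.apache.oozie.sla.listener.SLAEmailEventListener</value>
+</property>
+```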
 
----++ SLA Tracking
+## SLA Tracking
 
 Oozie allows tracking SLA for meeting the following criteria:
+
    * Start time
    * End time
    * Job Duration
 
----++++ Event Status
+### Event Status
 Corresponding to each of these 3 criteria, your jobs are evaluated as Met or Miss, i.e.
+
    * START_MET, START_MISS
    * END_MET, END_MISS
    * DURATION_MET, DURATION_MISS
 
----++++ SLA Status
+### SLA Status
 Expected end-time is the most important criterion for the majority of users while deciding overall SLA Met or Miss.
 Hence the _"SLA_Status"_ for a job will transition through these four stages:
+
    * Not_Started <-- Job not yet begun
    * In_Process <-- Job started and is running, and SLAs are being tracked
    * Met <-- caused by an END_MET
@@ -52,19 +55,26 @@ Hence the _"SLA_Status"_ for a job will transition through these four stages
 In addition to overshooting the expected end-time, an END_MISS (and so an eventual SLA MISS) also occurs when the
 job does not end successfully e.g. goes to an error state - Failed/Killed/Error/Timedout.
 
----++ Configuring SLA in Applications
+## Configuring SLA in Applications
 
-To make your jobs trackable for SLA, you simply need to add the =<sla:info>= tag to your workflow application definition.
-If you were already using the existing SLA schema in your workflows (Schema xmlns:sla="uri:oozie:sla:0.1"), you don't need to
+To make your jobs trackable for SLA, you simply need to add the `<sla:info>` tag to your workflow application definition.
+If you were already using the existing SLA schema
+in your workflows (Schema xmlns:sla="uri:oozie:sla:0.1"), you don't need to
 do anything extra to receive SLA notifications via JMS messages. This new SLA monitoring framework is backward-compatible -
-no need to change application XML for now and you can continue to fetch old records via the [[DG_CommandLineTool#SLAOperations][command line API]].
-However, usage of old schema and API is deprecated and we strongly recommend using new schema.
-   * New SLA schema is 'uri:oozie:sla:0.2'
-   * In order to use new SLA schema, you will need to upgrade your workflow/coordinator schema to 0.5 i.e. 'uri:oozie:workflow:0.5'
+no need to change application XML for now and you can continue to fetch old records via the [command line API](DG_CommandLineTool.html#SLAOperations).
+However, usage of the old schema and API is deprecated and we strongly recommend using the new schema.
 
----+++ SLA Definition in Workflow
+   * New SLA schema is 'uri:oozie:sla:0.2'
+   * In order to use new SLA schema, you will need to upgrade your workflow/coordinator schema to 0.5 i.e. 'uri:oozie:workflow:0.5'
+
+### SLA Definition in Workflow
 Example:
-<verbatim>
+
+```
 <workflow-app name="test-wf-job-sla"
               xmlns="uri:oozie:workflow:0.5"
               xmlns:sla="uri:oozie:sla:0.2">
@@ -97,25 +107,28 @@ Example:
         <sla:alert-contact>joe@example.com</sla:alert-contact>
     </sla:info>
 </workflow-app>
-</verbatim>
+```
 
-For the list of tags usable under =<sla:info>=, refer to [[WorkflowFunctionalSpec#SLASchema][Schemas Appendix]].
-This new schema is much more compact and meaningful, getting rid of redundant and unused tags.
+For the list of tags usable under `<sla:info>`, refer to [Schemas Appendix](WorkflowFunctionalSpec.html#SLASchema).
+This new schema is much more compact and meaningful, getting rid of redundant and unused tags.
 
-   * ==nominal-time==: As the name suggests, this is the time relative to which your jobs' SLAs will be calculated. Generally since Oozie workflows are aligned with synchronous data dependencies, this nominal time can be parameterized to be passed the value of your coordinator nominal time. Nominal time is also required in case of independent workflows and you can specify the time in which you expect the workflow to be run if you don't have a synchronous dataset associated with it.
-   * ==should-start==: Relative to =nominal-time= this is the amount of time (along with time-unit - MINUTES, HOURS, DAYS) within which your job should *start running* to meet SLA. This is optional.
-   * ==should-end==: Relative to =nominal-time= this is the amount of time (along with time-unit - MINUTES, HOURS, DAYS) within which your job should *finish* to meet SLA.
-   * ==max-duration==: This is the maximum amount of time (along with time-unit - MINUTES, HOURS, DAYS) your job is expected to run. This is optional.
-   * ==alert-events==: Specify the types of events for which *Email* alerts should be sent. Allowable values in this comma-separated list are start_miss, end_miss and duration_miss. *_met events can generally be deemed low priority and hence email alerting for these is not necessary. However, note that this setting is only for alerts via *email* alerts and not via JMS messages, where all events send out notifications, and user can filter them using desired selectors. This is optional and only applicable when alert-contact is configured.
-   * ==alert-contact==: Specify a comma separated list of email addresses where you wish your alerts to be sent. This is optional and need not be configured if you just want to view your job SLA history in the UI and do not want to receive email alerts.
+   * `nominal-time`: As the name suggests, this is the time relative to which your jobs' SLAs will be calculated. Generally since Oozie workflows are aligned with synchronous data dependencies, this nominal time can be parameterized to be passed the value of your coordinator nominal time. Nominal time is also required in case of independent workflows and you can specify the time in which you expect the workflow to be run if you don't have a synchronous dataset associated with it.
+   * `should-start`: Relative to `nominal-time` this is the amount of time (along with time-unit - MINUTES, HOURS, DAYS) within which your job should *start running* to meet SLA. This is optional.
+   * `should-end`: Relative to `nominal-time` this is the amount of time (along with time-unit - MINUTES, HOURS, DAYS) within which your job should *finish* to meet SLA.
+   * `max-duration`: This is the maximum amount of time (along with time-unit - MINUTES, HOURS, DAYS) your job is expected to run. This is optional.
+   * `alert-events`: Specify the types of events for which **email** alerts should be sent. Allowable values in this comma-separated list are start_miss, end_miss and duration_miss. `*_met` events can generally be deemed low priority and hence email alerting for these is not necessary. However, note that this setting applies only to *email* alerts and not to JMS messages, where all events send out notifications and the user can filter them using desired selectors. This is optional and only applicable when alert-contact is configured.
+   * `alert-contact`: Specify a comma-separated list of email addresses where you wish your alerts to be sent. This is optional and need not be configured if you just want to view your job SLA history in the UI and do not want to receive email alerts.
 
 NOTE: All tags can be parameterized as an EL function or a fixed value.
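+
+For example, a hedged sketch of a fully parameterized `<sla:info>` block (property names such as `nominal_time` and
+`alert_email` are illustrative and would come from job.properties):
+
+```
+<sla:info>
+    <sla:nominal-time>${nominal_time}</sla:nominal-time>
+    <sla:should-start>${10 * MINUTES}</sla:should-start>
+    <sla:should-end>${30 * MINUTES}</sla:should-end>
+    <sla:max-duration>${30 * MINUTES}</sla:max-duration>
+    <sla:alert-events>start_miss,end_miss,duration_miss</sla:alert-events>
+    <sla:alert-contact>${alert_email}</sla:alert-contact>
+</sla:info>
+```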
 
-Same schema can be applied to and embedded under Workflow-Action as well as Coordinator-Action XML.
+The same schema can be applied to and embedded under Workflow-Action as well as Coordinator-Action XML.
+
+### SLA Definition in Workflow Action
 
----+++ SLA Definition in Workflow Action
 
-<verbatim>
+```
 <workflow-app name="test-wf-action-sla" xmlns="uri:oozie:workflow:0.5" xmlns:sla="uri:oozie:sla:0.2">
     <start to="grouper"/>
     <action name="grouper">
@@ -130,10 +143,11 @@ Same schema can be applied to and embedded under Workflow-Action as well as Coor
     </action>
     <end name="end"/>
 </workflow-app>
-</verbatim>
+```
 
----+++ SLA Definition in Coordinator Action
-<verbatim>
+### SLA Definition in Coordinator Action
+
+```
 <coordinator-app name="test-coord-sla" frequency="${coord:days(1)}" freq_timeunit="DAY"
     end_of_duration="NONE" start="2013-06-20T08:01Z" end="2013-12-01T08:01Z"
     timezone="America/Los_Angeles" xmlns="uri:oozie:coordinator:0.4" xmlns:sla="uri:oozie:sla:0.2">
@@ -147,22 +161,24 @@ Same schema can be applied to and embedded under Workflow-Action as well as Coor
         </sla:info>
     </action>
 </coordinator-app>
-</verbatim>
+```
 
----++ Accessing SLA Information
+## Accessing SLA Information
 
 SLA information is accessible in the following ways:
+
    * Through the SLA tab of the Oozie Web UI.
    * JMS messages sent to a configured JMS provider for instantaneous tracking.
    * RESTful API to query for SLA summary.
-   * As an =Instrumentation.Counter= entry that is accessible via RESTful API and reflects to the number of all SLA tracked external
-   entities. Name of this counter is =sla-calculator.sla-map=.
+   * As an `Instrumentation.Counter` entry that is accessible via the RESTful API and reflects the number of all SLA-tracked external
+   entities. The name of this counter is `sla-calculator.sla-map`.
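+
+For example, the counter can be fetched through the admin instrumentation endpoint (a sketch; with
+`MetricsInstrumentationService` enabled, the equivalent endpoint is `admin/metrics`):
+
+```
+GET <oozie-host>:<port>/oozie/v1/admin/instrumentation
+```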
 
 For JMS Notifications, you have to have a message broker in place, on which Oozie publishes messages, and you can
 hook up a subscriber to receive those messages. For more info on setting up and consuming JMS messages, refer to the
-[[DG_JMSNotifications][JMS Notifications]] documentation.
+[JMS Notifications](DG_JMSNotifications.html) documentation.
 
 In the REST API, the following filters can be applied while fetching SLA information:
+
    * app_name - Application name
    * id  - id of the workflow job, workflow action or coordinator action
    * parent_id - Parent id of the workflow job, workflow action or coordinator action
@@ -179,14 +195,16 @@ the number of milliseconds that have elapsed since January 1, 1970 00:00:00.000
 
 The examples below demonstrate the use of the REST API and explain the JSON response.
 
----+++ Scenario 1: Workflow Job Start_Miss
-*Request:*
-<verbatim>
+### Scenario 1: Workflow Job Start_Miss
+**Request:**
+
+```
 GET <oozie-host>:<port>/oozie/v2/sla?timezone=GMT&filter=nominal_start=2013-06-18T00:01Z;nominal_end=2013-06-23T00:01Z;app_name=my-sla-app
-</verbatim>
+```
+
+**JSON Response**
 
-*JSON Response*
-<verbatim>
+```
 {
 
     id : "000056-1238791320234-oozie-joe-W"
@@ -208,16 +226,18 @@ GET <oozie-host>:<port>/oozie/v2/sla?timezone=GMT&filter=nominal_start=2013-06-1
     upstreamApps: "dependent-app-1, dependent-app-2"
 
 }
-</verbatim>
+```
+
+### Scenario 2: Workflow Action End_Miss
+**Request:**
 
----+++ Scenario 2: Workflow Action End_Miss
-*Request:*
-<verbatim>
+```
 GET <oozie-host>:<port>/oozie/v2/sla?timezone=GMT&filter=parent_id=000056-1238791320234-oozie-joe-W
-</verbatim>
+```
 
-*JSON Response*
-<verbatim>
+**JSON Response**
+
+```
 {
 
     id : "000056-1238791320234-oozie-joe-W@map-reduce-action"
@@ -239,16 +259,18 @@ GET <oozie-host>:<port>/oozie/v2/sla?timezone=GMT&filter=parent_id=000056-123879
     upstreamApps: "dependent-app-1, dependent-app-2"
 
 }
-</verbatim>
+```
+
+### Scenario 3: Coordinator Action Duration_Miss
+**Request:**
 
----+++ Scenario 3: Coordinator Action Duration_Miss
-*Request:*
-<verbatim>
+```
 GET <oozie-host>:<port>/oozie/v2/sla?timezone=GMT&filter=id=000001-1238791320234-oozie-joe-C
-</verbatim>
+```
+
+**JSON Response**
 
-*JSON Response*
-<verbatim>
+```
 {
 
     id : "000001-1238791320234-oozie-joe-C@2"
@@ -270,19 +292,21 @@ GET <oozie-host>:<port>/oozie/v2/sla?timezone=GMT&filter=id=000001-1238791320234
     upstreamApps: "dependent-app-1, dependent-app-2"
 
 }
-</verbatim>
+```
 
 Scenario #3 is particularly interesting: it is an overall "MET" because it met its expected End-time,
 but it is "Duration_Miss" because the actual run (between actual start and actual end) exceeded the expected duration.
 
----+++ Scenario 4: All Coordinator actions in a Bundle
-*Request:*
-<verbatim>
+### Scenario 4: All Coordinator actions in a Bundle
+**Request:**
+
+```
 GET <oozie-host>:<port>/oozie/v2/sla?timezone=GMT&filter=bundle=1234567-150130225116604-oozie-B;event_status=END_MISS
-</verbatim>
+```
+
+**JSON Response**
 
-*JSON Response*
-<verbatim>
+```
 {
     id : "000001-1238791320234-oozie-joe-C@1"
     parentId : "000001-1238791320234-oozie-joe-C"
@@ -323,13 +347,14 @@ GET <oozie-host>:<port>/oozie/v2/sla?timezone=GMT&filter=bundle=1234567-15013022
     actualDuration: 3360000 <-- (actual duration in milliseconds)
     durationDelay: -4 <-- (duration delay in minutes)
 }
-</verbatim>
+```
 
 Scenario #4 (All Coordinator actions in a Bundle) retrieves the SLA information of all coordinator actions under a bundle job in one call.
 The startDelay/durationDelay/endDelay values returned indicate the delay compared to the expected time (positive values in case of MISS, and negative values in case of MET).
 
----+++ Sample Email Alert
-<verbatim>
+### Sample Email Alert
+
+```
 Subject: OOZIE - SLA END_MISS (AppName=wf-sla-job, JobID=0000004-130610225200680-oozie-oozi-W)
 
 
@@ -353,77 +378,82 @@ SLA Details:
   Expected End Time - Mon Jun 10 23:38:00 UTC 2013
   Expected Duration (in mins) - 5
   Actual Duration (in mins) - -1
-</verbatim>
+```
 
----+++ Changing job SLA definition and alerting
+### Changing job SLA definition and alerting
 The following are ways to enable/disable SLA alerts for coordinator actions.
 
----++++ 1. Specify in Bundle XML during submission.
+#### 1. Specify in Bundle XML during submission
 The following properties can be specified in the bundle XML as properties for the coordinator.
 
-=oozie.sla.disable.alerts.older.than= this property can be specified in hours, the SLA notification for
+`oozie.sla.disable.alerts.older.than`: this property can be specified in hours; SLA notifications will be disabled for
 coord actions whose nominal time is older than this value. Default is 48 hours.
-<verbatim>
+
+```
 <property>
     <name>oozie.sla.disable.alerts.older.than</name>
     <value>12</value>
 </property>
-</verbatim>
+```
+
+`oozie.sla.disable.alerts`: list of coord actions to be disabled. The value can be specified as a list of coord actions or a date range.
 
-=oozie.sla.disable.alerts= List of coord actions to be disabled. Value can be specified as list of coord actions or date range.
-<verbatim>
+```
 <property>
     <name>oozie.sla.disable.alerts</name>
     <value>1,3-4,7-10</value>
 </property>
-</verbatim>
+```
 This will disable alerts for coord actions 1, 3, 4, 7, 8, 9 and 10.
 
-=oozie.sla.enable.alerts= List of coord actions to be enabled. Value can be specified as list of coord actions or date range.
-<verbatim>
+`oozie.sla.enable.alerts`: list of coord actions to be enabled. The value can be specified as a list of coord actions or a date range.
+
+```
 <property>
     <name>oozie.sla.enable.alerts</name>
     <value>2009-01-01T01:00Z::2009-05-31T23:59Z</value>
 </property>
-</verbatim>
+```
 This will enable SLA alert for coord actions whose nominal time is in between (inclusive) 2009-01-01T01:00Z and 2009-05-31T23:59Z.
 
 The ALL keyword can be used to cover all actions. The property below will disable SLA notifications for all coord actions.
-<verbatim>
+
+```
 <property>
     <name>oozie.sla.disable.alerts</name>
     <value>ALL</value>
 </property>
-</verbatim>
+```
 
----++++ 2. Specify during Coordinator job submission or update
+#### 2. Specify during Coordinator job submission or update
 The above properties can be specified in job.properties in the
-[[DG_CommandLineTool#Updating_coordinator_definition_and_properties][Coord job update command]],
-in [[DG_CommandLineTool#Submitting_a_Workflow_Coordinator_or_Bundle_Job][Coord job submit command]]
-or in [[DG_CommandLineTool#Running_a_Workflow_Coordinator_or_Bundle_Job][Coord job run command]]
+[Coord job update command](DG_CommandLineTool.html#Updating_coordinator_definition_and_properties),
+in the [Coord job submit command](DG_CommandLineTool.html#Submitting_a_Workflow_Coordinator_or_Bundle_Job)
+or in the [Coord job run command](DG_CommandLineTool.html#Running_a_Workflow_Coordinator_or_Bundle_Job), as sketched below.
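+
+A hedged sketch of doing this from the command line (the coordinator job id is illustrative):
+
+```
+$ cat job.properties
+oozie.sla.disable.alerts=ALL
+$ oozie job -oozie http://localhost:11000/oozie -config job.properties -update 0000001-140414102048137-oozie-oozi-C
+```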
 
----++++ 3. Change using command line
-Refer [[DG_CommandLineTool#Changing_job_SLA_definition_and_alerting][Changing job SLA definition and alerting]] for commandline usage.
+#### 3. Change using command line
+Refer to [Changing job SLA definition and alerting](DG_CommandLineTool.html#Changing_job_SLA_definition_and_alerting) for command-line usage.
 
----++++ 4. Change using REST API
-Refer the REST API [[WebServicesAPI#Changing_job_SLA_definition_and_alerting][Changing job SLA definition and alerting]].
+#### 4. Change using REST API
+Refer to the REST API [Changing job SLA definition and alerting](WebServicesAPI.html#Changing_job_SLA_definition_and_alerting).
 
----++ In-memory SLA entries and database content
+## In-memory SLA entries and database content
 
-There are special circumstances when the in-memory =SLACalcStatus= entries can exist without the workflow or coordinator job or
+There are special circumstances when the in-memory `SLACalcStatus` entries can exist without the workflow or coordinator job or
 action instances in database. For example:
-   * SLA tracked database content may already have been deleted, and =SLA_SUMMARY= entry is not present anymore in database
-   * SLA tracked database content and =SLA_SUMMARY= entry aren't yet present in database
 
-By the time =SLAService= scheduled job will be running, SLA map contents are checked. When the =SLA_SUMMARY= entry for the in-memory
+   * SLA tracked database content may already have been deleted, and the `SLA_SUMMARY` entry is no longer present in the database
+   * SLA tracked database content and the `SLA_SUMMARY` entry aren't yet present in the database
+
+Each time the scheduled `SLAService` job runs, the SLA map contents are checked. When the `SLA_SUMMARY` entry for the in-memory
 SLA entry is missing, a counter is increased. When this counter reaches the server-wide preconfigured value
-=oozie.sla.service.SLAService.maximum.retry.count= (by default =3=), in-memory SLA entry will get purged.
+`oozie.sla.service.SLAService.maximum.retry.count` (by default `3`), the in-memory SLA entry is purged.
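+
+For example, the retry count can be raised in `oozie-site.xml` (a sketch; `5` is an arbitrary illustrative value):
+
+```
+<property>
+    <name>oozie.sla.service.SLAService.maximum.retry.count</name>
+    <value>5</value>
+</property>
+```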
 
----++ Known issues
+## Known issues
 There are two known issues when you define SLA for a workflow action.
+
    * If there are decision nodes and SLA is defined for a workflow action not in the execution path because of the decision node, you will still get an SLA_MISS notification.
    * If you have dangling action nodes in your workflow definition and SLA is defined for it, you will still get an SLA_MISS notification.
 
-[[index][::Go back to Oozie Documentation Index::]]
+[::Go back to Oozie Documentation Index::](index.html)
+
 
-</noautolink>

http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/DG_ShellActionExtension.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_ShellActionExtension.twiki b/docs/src/site/twiki/DG_ShellActionExtension.twiki
index 5894c28..eff4b08 100644
--- a/docs/src/site/twiki/DG_ShellActionExtension.twiki
+++ b/docs/src/site/twiki/DG_ShellActionExtension.twiki
@@ -1,39 +1,39 @@
-<<noautolink>
 
-[[index][::Go back to Oozie Documentation Index::]]
+
+[::Go back to Oozie Documentation Index::](index.html)
 
 -----
 
----+!! Oozie Shell Action Extension
+# Oozie Shell Action Extension
 
-%TOC%
+<!-- MACRO{toc|fromDepth=1|toDepth=4} -->
 
-#ShellAction
----++ Shell Action
+<a name="ShellAction"></a>
+## Shell Action
 
-The =shell= action runs a Shell command.
+The `shell` action runs a Shell command.
 
 The workflow job will wait until the Shell command completes before
 continuing to the next action.
 
-To run the Shell job, you have to configure the =shell= action with the
-=job-tracker=, =name-node= and Shell =exec= elements as
+To run the Shell job, you have to configure the `shell` action with the
+`job-tracker`, `name-node` and Shell `exec` elements as
 well as the necessary arguments and configuration.
 
-A =shell= action can be configured to create or delete HDFS directories
+A `shell` action can be configured to create or delete HDFS directories
 before starting the Shell job.
 
-Shell _launcher_ configuration can be specified with a file, using the =job-xml=
-element, and inline, using the =configuration= elements.
+Shell _launcher_ configuration can be specified with a file, using the `job-xml`
+element, and inline, using the `configuration` elements.
 
 Oozie EL expressions can be used in the inline configuration. Property
-values specified in the =configuration= element override values specified
-in the =job-xml= file.
+values specified in the `configuration` element override values specified
+in the `job-xml` file.
 
-Note that YARN =yarn.resourcemanager.address= (=resource-manager=) and HDFS =fs.default.name= (=name-node=) properties
+Note that YARN `yarn.resourcemanager.address` (`resource-manager`) and HDFS `fs.default.name` (`name-node`) properties
 must not be present in the inline configuration.
 
-As with Hadoop =map-reduce= jobs, it is possible to add files and
+As with Hadoop `map-reduce` jobs, it is possible to add files and
 archives in order to make them available to the Shell job. Refer to the
-[WorkflowFunctionalSpec#FilesArchives][Adding Files and Archives for the Job]
+[Adding Files and Archives for the Job](WorkflowFunctionalSpec.html#FilesArchives)
 section for more information about this feature.
@@ -45,9 +45,10 @@ command must follow the following requirements:
    * The format of the output must be a valid Java Properties file.
    * The size of the output must not exceed 2KB.
 
-*Syntax:*
+**Syntax:**
+
 
-<verbatim>
+```
 <workflow-app name="[WF-DEF-NAME]" xmlns="uri:oozie:workflow:1.0">
     ...
     <action name="[NODE-NAME]">
@@ -86,49 +87,50 @@ command must follow the following requirements:
     </action>
     ...
 </workflow-app>
-</verbatim>
+```
 
-The =prepare= element, if present, indicates a list of paths to delete
-or create before starting the job. Specified paths must start with =hdfs://HOST:PORT=.
+The `prepare` element, if present, indicates a list of paths to delete
+or create before starting the job. Specified paths must start with `hdfs://HOST:PORT`.
 
-The =job-xml= element, if present, specifies a file containing configuration
-for the Shell job. As of schema 0.2, multiple =job-xml= elements are allowed in order to 
-specify multiple =job.xml= files.
+The `job-xml` element, if present, specifies a file containing configuration
+for the Shell job. As of schema 0.2, multiple `job-xml` elements are allowed in order to
+specify multiple `job.xml` files.
 
-The =configuration= element, if present, contains configuration
+The `configuration` element, if present, contains configuration
 properties that are passed to the Shell job.
 
-The =exec= element must contain the path of the Shell command to
+The `exec` element must contain the path of the Shell command to
 execute. The arguments of the Shell command can then be specified
-using one or more =argument= element.
+using one or more `argument` elements.
 
-The =argument= element, if present, contains argument to be passed to
+The `argument` element, if present, contains an argument to be passed to
 the Shell command.
 
-The =env-var= element, if present, contains the environment to be passed
-to the Shell command. =env-var= should contain only one pair of environment variable
+The `env-var` element, if present, contains the environment to be passed
+to the Shell command. `env-var` should contain only one pair of environment variable
 and value. If the pair contains the variable such as $PATH, it should follow the
 Unix convention such as PATH=$PATH:mypath. Don't use ${PATH} which will be
 substituted by Oozie's EL evaluator.
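+
+For example, a minimal sketch extending the PATH seen by the spawned shell (the directory is illustrative):
+
+```
+<env-var>PATH=$PATH:/opt/local/bin</env-var>
+```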
 
-A =shell= action creates a Hadoop configuration. The Hadoop configuration is made available as a local file to the
+A `shell` action creates a Hadoop configuration. The Hadoop configuration is made available as a local file to the
 Shell application in its running directory. The exact file path is exposed to the spawned shell using the environment
-variable called =OOZIE_ACTION_CONF_XML=.The Shell application can access the environment variable to read the action
+variable called `OOZIE_ACTION_CONF_XML`. The Shell application can access the environment variable to read the action
 configuration XML file path.
- 
-If the =capture-output= element is present, it indicates Oozie to capture output of the STDOUT of the shell command
+
+If the `capture-output` element is present, it instructs Oozie to capture the STDOUT output of the shell command
 execution. The Shell command output must be in Java Properties file format and it must not exceed 2KB. From within the
-workflow definition, the output of an Shell action node is accessible via the =String action:output(String node,
-String key)= function (Refer to section '4.2.6 Action EL Functions').
+workflow definition, the output of a Shell action node is accessible via the `String action:output(String node,
+String key)` function (refer to section '4.2.6 Action EL Functions').
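+
+For example, a workflow could branch on a captured value via a decision node (a hedged sketch; the node name
+`shell1` and property key `mykey` are illustrative):
+
+```
+<decision name="check-output">
+    <switch>
+        <case to="next-action">${wf:actionData('shell1')['mykey'] == 'true'}</case>
+        <default to="end"/>
+    </switch>
+</decision>
+```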
 
 All the above elements can be parameterized (templatized) using EL
 expressions.
 
-*Example:*
+**Example:**
 
 How to run any shell script, perl script or CPP executable:
 
-<verbatim>
+
+```
 <workflow-app xmlns='uri:oozie:workflow:1.0' name='shell-wf'>
     <start to='shell1' />
     <action name='shell1'>
@@ -154,11 +156,12 @@ How to run any shell script or perl script or CPP executable
     </kill>
     <end name='end' />
 </workflow-app>
-</verbatim>
+```
 
 The corresponding job properties file used to submit the Oozie job could be as follows:
 
-<verbatim>
+
+```
 oozie.wf.application.path=hdfs://localhost:8020/user/kamrul/workflows/script
 
 #Execute is expected to be in the Workflow directory.
@@ -173,11 +176,12 @@ resourceManager=localhost:8032
 nameNode=hdfs://localhost:8020
 queueName=default
 
-</verbatim>
+```
 
 How to run any java program bundled in a jar:
 
-<verbatim>
+
+```
 <workflow-app xmlns='uri:oozie:workflow:1.0' name='shell-wf'>
     <start to='shell1' />
     <action name='shell1'>
@@ -204,11 +208,12 @@ How to run any java program bundles in a jar.
     </kill>
     <end name='end' />
 </workflow-app>
-</verbatim>
+```
 
 The corresponding job properties file used to submit the Oozie job could be as follows:
 
-<verbatim>
+
+```
 oozie.wf.application.path=hdfs://localhost:8020/user/kamrul/workflows/script
 
 #Hello.jar file is expected to be in the Workflow directory.
@@ -217,42 +222,45 @@ EXEC=Hello.jar
 resourceManager=localhost:8032
 nameNode=hdfs://localhost:8020
 queueName=default
-</verbatim>
+```
 
----+++ Shell Action Configuration
+### Shell Action Configuration
 
-=oozie.action.shell.setup.hadoop.conf.dir= - Generates a config directory with various core/hdfs/yarn/mapred-site.xml files and points =HADOOP_CONF_DIR= and =YARN_CONF_DIR= env-vars to it, before the Script is invoked. XML is sourced from the action configuration. Useful when the Shell script passed uses various =hadoop= commands. Default is false.
-=oozie.action.shell.setup.hadoop.conf.dir.write.log4j.properties= - When =oozie.action.shell.setup.hadoop.conf.dir= is enabled, toggle if a log4j.properties file should also be written under the configuration files directory. Default is true.
-=oozie.action.shell.setup.hadoop.conf.dir.log4j.content= - When =oozie.action.shell.setup.hadoop.conf.dir.write.log4j.properties= is enabled, the content to write into the log4j.properties file under the configuration files directory. Default is a simple console based stderr logger, as presented below:
-<verbatim>
+ * `oozie.action.shell.setup.hadoop.conf.dir` - Generates a config directory with various core/hdfs/yarn/mapred-site.xml files and points `HADOOP_CONF_DIR` and `YARN_CONF_DIR` env-vars to it, before the Script is invoked. XML is sourced from the action configuration. Useful when the Shell script passed uses various `hadoop` commands. Default is false.
+ * `oozie.action.shell.setup.hadoop.conf.dir.write.log4j.properties` - When `oozie.action.shell.setup.hadoop.conf.dir` is enabled, toggle if a log4j.properties file should also be written under the configuration files directory. Default is true.
+ * `oozie.action.shell.setup.hadoop.conf.dir.log4j.content` - When `oozie.action.shell.setup.hadoop.conf.dir.write.log4j.properties` is enabled, the content to write into the log4j.properties file under the configuration files directory. Default is a simple console based stderr logger, as presented below:
+
+```
 log4j.rootLogger=${hadoop.root.logger}
 hadoop.root.logger=INFO,console
 log4j.appender.console=org.apache.log4j.ConsoleAppender
 log4j.appender.console.target=System.err
 log4j.appender.console.layout=org.apache.log4j.PatternLayout
 log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
-</verbatim>
+```
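+
+For example, to generate the config directory for a script that runs `hadoop fs` commands, the property can be set
+in the action's `configuration` element (a minimal sketch):
+
+```
+<configuration>
+    <property>
+        <name>oozie.action.shell.setup.hadoop.conf.dir</name>
+        <value>true</value>
+    </property>
+</configuration>
+```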
 
----+++ Shell Action Logging
+### Shell Action Logging
 
 Shell action's stdout and stderr output are redirected to the Oozie Launcher map-reduce job task STDOUT that runs the shell command.
 
 From Oozie web-console, from the Shell action pop up using the 'Console URL' link, it is possible
 to navigate to the Oozie Launcher map-reduce job task logs via the Hadoop job-tracker web-console.
 
----+++ Shell Action Limitations
+### Shell Action Limitations
 Although the Shell action can execute any shell command, there are some limitations.
+
    * No interactive command is supported.
    * Command can't be executed as different user using sudo.
-   * User has to explicitly upload the required 3rd party packages (such as jar, so lib, executable etc). Oozie provides a way using <file> and <archive> tag through Hadoop's Distributed Cache to upload.
+   * The user has to explicitly upload the required 3rd-party packages (such as jar, so lib, executable etc.). Oozie provides a way to upload them through Hadoop's Distributed Cache using the \<file\> and \<archive\> tags, as sketched after this list.
    * Since Oozie will execute the shell command on a Hadoop compute node, the set of utilities installed on a compute node is not fixed. However, the most common unix utilities are usually installed on all compute nodes. It is important to note that Oozie can only support commands that are installed on the compute nodes or that are uploaded through Distributed Cache.
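+
+A minimal sketch of shipping a script and an archive with the action (paths are illustrative):
+
+```
+<exec>myscript.sh</exec>
+<file>scripts/myscript.sh#myscript.sh</file>
+<archive>libs/mydependencies.tar.gz#deps</archive>
+```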
 
----++ Appendix, Shell XML-Schema
+## Appendix, Shell XML-Schema
 
----+++ AE.A Appendix A, Shell XML-Schema
+### AE.A Appendix A, Shell XML-Schema
 
----++++ Shell Action Schema Version 1.0
-<verbatim>
+#### Shell Action Schema Version 1.0
+
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:shell="uri:oozie:shell-action:1.0"
            elementFormDefault="qualified"
@@ -283,10 +291,11 @@ Although Shell action can execute any shell command, there are some limitations.
     </xs:complexType>
 .
 </xs:schema>
-</verbatim>
+```
+
+#### Shell Action Schema Version 0.3
 
----++++ Shell Action Schema Version 0.3
-<verbatim>
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:shell="uri:oozie:shell-action:0.3" elementFormDefault="qualified"
            targetNamespace="uri:oozie:shell-action:0.3">
@@ -341,10 +350,11 @@ Although Shell action can execute any shell command, there are some limitations.
     </xs:complexType>
 
 </xs:schema>
-</verbatim>
+```
 
----++++ Shell Action Schema Version 0.2
-<verbatim>
+#### Shell Action Schema Version 0.2
+
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:shell="uri:oozie:shell-action:0.2" elementFormDefault="qualified"
            targetNamespace="uri:oozie:shell-action:0.2">
@@ -399,10 +409,11 @@ Although Shell action can execute any shell command, there are some limitations.
     </xs:complexType>
 
 </xs:schema>
-</verbatim>
+```
+
+#### Shell Action Schema Version 0.1
 
----++++ Shell Action Schema Version 0.1
-<verbatim>
+```
 <?xml version="1.0" encoding="UTF-8"?>
 <!--
   Licensed to the Apache Software Foundation (ASF) under one
@@ -476,8 +487,8 @@ Although Shell action can execute any shell command, there are some limitations.
     </xs:complexType>
 
 </xs:schema>
-</verbatim>
+```
+
+[::Go back to Oozie Documentation Index::](index.html)
 
-[[index][::Go back to Oozie Documentation Index::]]
 
-</noautolink>

http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/DG_SparkActionExtension.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_SparkActionExtension.twiki b/docs/src/site/twiki/DG_SparkActionExtension.twiki
index ce80e45..5a56cca 100644
--- a/docs/src/site/twiki/DG_SparkActionExtension.twiki
+++ b/docs/src/site/twiki/DG_SparkActionExtension.twiki
@@ -1,36 +1,37 @@
-<noautolink>
 
-[[index][::Go back to Oozie Documentation Index::]]
+
+[::Go back to Oozie Documentation Index::](index.html)
 
 -----
 
----+!! Oozie Spark Action Extension
+# Oozie Spark Action Extension
 
-%TOC%
+<!-- MACRO{toc|fromDepth=1|toDepth=4} -->
 
----++ Spark Action
+## Spark Action
 
-The =spark= action runs a Spark job.
+The `spark` action runs a Spark job.
 
 The workflow job will wait until the Spark job completes before
 continuing to the next action.
 
-To run the Spark job, you have to configure the =spark= action with
-the =resource-manager=, =name-node=, Spark =master= elements as
+To run the Spark job, you have to configure the `spark` action with
+the `resource-manager`, `name-node`, Spark `master` elements as
 well as the necessary elements, arguments and configuration.
 
-Spark options can be specified in an element called =spark-opts=.
+Spark options can be specified in an element called `spark-opts`.
 
-A =spark= action can be configured to create or delete HDFS directories
+A `spark` action can be configured to create or delete HDFS directories
 before starting the Spark job.
 
 Oozie EL expressions can be used in the inline configuration. Property
-values specified in the =configuration= element override values specified
-in the =job-xml= file.
+values specified in the `configuration` element override values specified
+in the `job-xml` file.
+
+**Syntax:**
 
-*Syntax:*
 
-<verbatim>
+```
 <workflow-app name="[WF-DEF-NAME]" xmlns="uri:oozie:workflow:1.0">
     ...
     <action name="[NODE-NAME]">
@@ -67,41 +68,43 @@ in the =job-xml= file.
     </action>
     ...
 </workflow-app>
-</verbatim>
+```
 
-The =prepare= element, if present, indicates a list of paths to delete
-or create before starting the job. Specified paths must start with =hdfs://HOST:PORT=.
+The `prepare` element, if present, indicates a list of paths to delete
+or create before starting the job. Specified paths must start with `hdfs://HOST:PORT`.
 
-The =job-xml= element, if present, specifies a file containing configuration
-for the Spark job. Multiple =job-xml= elements are allowed in order to
-specify multiple =job.xml= files.
+The `job-xml` element, if present, specifies a file containing configuration
+for the Spark job. Multiple `job-xml` elements are allowed in order to
+specify multiple `job.xml` files.
 
-The =configuration= element, if present, contains configuration
+The `configuration` element, if present, contains configuration
 properties that are passed to the Spark job.
 
-The =master= element indicates the url of the Spark Master. Ex: spark://host:port, mesos://host:port, yarn-cluster, yarn-client,
+The `master` element indicates the URL of the Spark Master. Ex: `spark://host:port`, `mesos://host:port`, yarn-cluster, yarn-client,
 or local.
 
-The =mode= element if present indicates the mode of spark, where to run spark driver program. Ex: client,cluster.  This is typically
-not required because you can specify it as part of =master= (i.e. master=yarn, mode=client is equivalent to master=yarn-client).
-A local =master= always runs in client mode.
+The `mode` element, if present, indicates the mode of Spark, i.e. where to run the Spark driver program. Ex: client, cluster. This is typically
+not required because you can specify it as part of `master` (i.e. master=yarn, mode=client is equivalent to master=yarn-client).
+A local `master` always runs in client mode.
+
+Depending on the `master` (and `mode`) entered, the Spark job will run differently as follows:
 
-Depending on the =master= (and =mode=) entered, the Spark job will run differently as follows:
    * local mode: everything runs here in the Launcher Job.
    * yarn-client mode: the driver runs here in the Launcher Job and the executor in Yarn.
    * yarn-cluster mode: the driver and executor run in Yarn.
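+
+For example, a sketch of requesting YARN cluster mode in the two equivalent ways:
+
+```
+<master>yarn</master>
+<mode>cluster</mode>
+```
+
+or simply `<master>yarn-cluster</master>`.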
 
-The =name= element indicates the name of the spark application.
+The `name` element indicates the name of the spark application.
 
-The =class= element if present, indicates the spark's application main class.
+The `class` element, if present, indicates the Spark application's main class.
 
-The =jar= element indicates a comma separated list of jars or python files.
+The `jar` element indicates a comma-separated list of jars or Python files.
 
-The =spark-opts= element, if present, contains a list of Spark options that can be passed to Spark. Spark configuration
+The `spark-opts` element, if present, contains a list of Spark options that can be passed to Spark. Spark configuration
 options can be passed by specifying '--conf key=value' or other Spark CLI options.
 Values containing whitespaces can be enclosed by double quotes.
 
-Some examples of the =spark-opts= element:
+Some examples of the `spark-opts` element:
+
    * '--conf key=value'
    * '--conf key1=value1 value2'
    * '--conf key1="value1 value2"'
@@ -109,32 +112,35 @@ Some examples of the =spark-opts= element:
    * '--conf key=value --verbose --properties-file user.properties'
 
 There are several ways to define properties that will be passed to Spark. They are processed in the following order:
-   * propagated from =oozie.service.SparkConfigurationService.spark.configurations=
-   * read from a localized =spark-defaults.conf= file
-   * read from a file defined in =spark-opts= via the =--properties-file=
-   * properties defined in =spark-opts= element
+
+   * propagated from `oozie.service.SparkConfigurationService.spark.configurations`
+   * read from a localized `spark-defaults.conf` file
+   * read from a file defined in `spark-opts` via the `--properties-file`
+   * properties defined in `spark-opts` element
 
 (The latter takes precedence over the former.)
-The server propagated properties, the =spark-defaults.conf= and the user-defined properties file are merged together into a
-single properties file as Spark handles only one file in its =--properties-file= option.
+The server propagated properties, the `spark-defaults.conf` and the user-defined properties file are merged together into a
+single properties file as Spark handles only one file in its `--properties-file` option.
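+
+For example, a hedged sketch mixing inline options with a user properties file (`user.properties` is assumed to be
+shipped alongside the workflow):
+
+```
+<spark-opts>--conf spark.executor.memory=2g --properties-file user.properties --verbose</spark-opts>
+```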
 
-The =arg= element if present, contains arguments that can be passed to spark application.
+The `arg` element, if present, contains arguments that can be passed to the Spark application.
 
-In case some property values are present both in =spark-defaults.conf= and as property key/value pairs generated by Oozie, the user
-configured values from =spark-defaults.conf= are prepended to the ones generated by Oozie, as part of the Spark arguments list.
+In case some property values are present both in `spark-defaults.conf` and as property key/value pairs generated by Oozie, the user
+configured values from `spark-defaults.conf` are prepended to the ones generated by Oozie, as part of the Spark arguments list.
 
 The following properties are prepended to the Spark arguments:
-   * =spark.executor.extraClassPath=
-   * =spark.driver.extraClassPath=
-   * =spark.executor.extraJavaOptions=
-   * =spark.driver.extraJavaOptions=
+
+   * `spark.executor.extraClassPath`
+   * `spark.driver.extraClassPath`
+   * `spark.executor.extraJavaOptions`
+   * `spark.driver.extraJavaOptions`
 
 All the above elements can be parameterized (templatized) using EL
 expressions.
 
-*Example:*
+**Example:**
+
 
-<verbatim>
+```
 <workflow-app name="sample-wf" xmlns="uri:oozie:workflow:1.0">
     ...
     <action name="myfirstsparkjob">
@@ -165,34 +171,35 @@ expressions.
     </action>
     ...
 </workflow-app>
-</verbatim>
+```
 
----+++ Spark Action Logging
+### Spark Action Logging
 
 Spark action logs are redirected to the Oozie Launcher map-reduce job task STDOUT/STDERR that runs Spark.
 
 From Oozie web-console, from the Spark action pop up using the 'Console URL' link, it is possible
 to navigate to the Oozie Launcher map-reduce job task logs via the Hadoop job-tracker web-console.
 
----+++ Spark on YARN
+### Spark on YARN
 
 To ensure that your Spark job shows up in the Spark History Server, make sure to specify these three Spark configuration properties
-either in =spark-opts= with =--conf= or from =oozie.service.SparkConfigurationService.spark.configurations= in oozie-site.xml.
+either in `spark-opts` with `--conf` or from `oozie.service.SparkConfigurationService.spark.configurations` in oozie-site.xml.
 
 1. spark.yarn.historyServer.address=SPH-HOST:18088
 
-2. spark.eventLog.dir=hdfs://NN:8020/user/spark/applicationHistory
+2. spark.eventLog.dir=`hdfs://NN:8020/user/spark/applicationHistory`
 
 3. spark.eventLog.enabled=true
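+
+A hedged sketch of passing all three through `spark-opts` (host names and the HDFS path are illustrative):
+
+```
+<spark-opts>--conf spark.yarn.historyServer.address=http://sph-host:18088 --conf spark.eventLog.dir=hdfs://nn-host:8020/user/spark/applicationHistory --conf spark.eventLog.enabled=true</spark-opts>
+```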
 
----+++ PySpark with Spark Action
+### PySpark with Spark Action
 
 To submit PySpark scripts with Spark Action, pyspark dependencies must be available in sharelib or in workflow's lib/ directory.
-For more information, please refer to [[AG_Install#Oozie_Share_Lib][installation document.]]
+For more information, please refer to [installation document.](AG_Install.html#Oozie_Share_Lib)
 
-*Example:*
+**Example:**
 
-<verbatim>
+
+```
 <workflow-app name="sample-wf" xmlns="uri:oozie:workflow:1.0">
     ....
     <action name="myfirstpysparkjob">
@@ -220,24 +227,24 @@ For more information, please refer to [[AG_Install#Oozie_Share_Lib][installation
     </action>
     ...
 </workflow-app>
-</verbatim>
+```
 
-The =jar= element indicates python file. Refer to the file by it's localized name, because only local files are allowed
-in PySpark. The py file should be in the lib/ folder next to the workflow.xml or added using the =file= element so that
+The `jar` element indicates the Python file. Refer to the file by its localized name, because only local files are allowed
+in PySpark. The py file should be in the lib/ folder next to the workflow.xml or added using the `file` element so that
 it's localized to the working directory with just its name.
 
----+++ Using Symlink in <jar>
+### Using Symlink in \<jar\>
 
-A symlink must be specified using =[[WorkflowFunctionalSpec#a3.2.2.
-1_Adding_Files_and_Archives_for_the_Job][file]]= element. Then, you can use
-the symlink name in =jar= element.
+A symlink must be specified using the [file](WorkflowFunctionalSpec.html#a3.2.2.1_Adding_Files_and_Archives_for_the_Job) element. Then, you can use
+the symlink name in the `jar` element.
 
-*Example:*
+**Example:**
 
 Specifying relative path for symlink:
 
-Make sure that the file is within the application directory i.e. =oozie.wf.application.path= .
-<verbatim>
+Make sure that the file is within the application directory, i.e. `oozie.wf.application.path`.
+
+```
         <spark xmlns="uri:oozie:spark-action:1.0">
         ...
             <jar>py-spark-example-symlink.py</jar>
@@ -246,10 +253,11 @@ Make sure that the file is within the application directory i.e. =oozie.wf.appli
             <file>py-spark.py#py-spark-example-symlink.py</file>
         ...
         </spark>
-</verbatim>
+```
 
 Specifying full path for symlink:
-<verbatim>
+
+```
         <spark xmlns="uri:oozie:spark-action:1.0">
         ...
             <jar>spark-example-symlink.jar</jar>
@@ -258,16 +266,17 @@ Specifying full path for symlink:
             <file>hdfs://localhost:8020/user/testjars/all-oozie-examples.jar#spark-example-symlink.jar</file>
         ...
         </spark>
-</verbatim>
+```
+
 
 
+## Appendix, Spark XML-Schema
 
----++ Appendix, Spark XML-Schema
+### AE.A Appendix A, Spark XML-Schema
 
----+++ AE.A Appendix A, Spark XML-Schema
+#### Spark Action Schema Version 1.0
 
----++++ Spark Action Schema Version 1.0
-<verbatim>
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:spark="uri:oozie:spark-action:1.0" elementFormDefault="qualified"
            targetNamespace="uri:oozie:spark-action:1.0">
@@ -300,10 +309,11 @@ Specifying full path for symlink:
     </xs:complexType>
 .
 </xs:schema>
-</verbatim>
+```
 
----++++ Spark Action Schema Version 0.2
-<verbatim>
+#### Spark Action Schema Version 0.2
+
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:spark="uri:oozie:spark-action:0.2" elementFormDefault="qualified"
            targetNamespace="uri:oozie:spark-action:0.2">
@@ -359,10 +369,11 @@ Specifying full path for symlink:
     </xs:complexType>
 
 </xs:schema>
-</verbatim>
+```
+
+#### Spark Action Schema Version 0.1
 
----++++ Spark Action Schema Version 0.1
-<verbatim>
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:spark="uri:oozie:spark-action:0.1" elementFormDefault="qualified"
            targetNamespace="uri:oozie:spark-action:0.1">
@@ -416,10 +427,10 @@ Specifying full path for symlink:
     </xs:complexType>
 
 </xs:schema>
-</verbatim>
-[[index][::Go back to Oozie Documentation Index::]]
+```
+[::Go back to Oozie Documentation Index::](index.html)
+
 
-</noautolink>
 
 
 

http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/DG_SqoopActionExtension.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_SqoopActionExtension.twiki b/docs/src/site/twiki/DG_SqoopActionExtension.twiki
index 0317d0c..b186c5a 100644
--- a/docs/src/site/twiki/DG_SqoopActionExtension.twiki
+++ b/docs/src/site/twiki/DG_SqoopActionExtension.twiki
@@ -1,46 +1,47 @@
-<noautolink>
 
-[[index][::Go back to Oozie Documentation Index::]]
+
+[::Go back to Oozie Documentation Index::](index.html)
 
 -----
 
----+!! Oozie Sqoop Action Extension
+# Oozie Sqoop Action Extension
 
-%TOC%
+<!-- MACRO{toc|fromDepth=1|toDepth=4} -->
 
----++ Sqoop Action
+## Sqoop Action
 
-*IMPORTANT:* The Sqoop action requires Apache Hadoop 1.x or 2.x.
+**IMPORTANT:** The Sqoop action requires Apache Hadoop 1.x or 2.x.
 
-The =sqoop= action runs a Sqoop job.
+The `sqoop` action runs a Sqoop job.
 
 The workflow job will wait until the Sqoop job completes before
 continuing to the next action.
 
-To run the Sqoop job, you have to configure the =sqoop= action with the =resource-manager=, =name-node= and Sqoop =command=
-or =arg= elements as well as configuration.
+To run the Sqoop job, you have to configure the `sqoop` action with the `resource-manager`, `name-node` and Sqoop `command`
+or `arg` elements as well as configuration.
 
-A =sqoop= action can be configured to create or delete HDFS directories
+A `sqoop` action can be configured to create or delete HDFS directories
 before starting the Sqoop job.
 
-Sqoop configuration can be specified with a file, using the =job-xml=
-element, and inline, using the =configuration= elements.
+Sqoop configuration can be specified with a file, using the `job-xml`
+element, and inline, using the `configuration` elements.
 
 Oozie EL expressions can be used in the inline configuration. Property
-values specified in the =configuration= element override values specified
-in the =job-xml= file.
+values specified in the `configuration` element override values specified
+in the `job-xml` file.
 
-Note that YARN =yarn.resourcemanager.address= / =resource-manager= and HDFS =fs.default.name= / =name-node= properties must not
+Note that YARN `yarn.resourcemanager.address` / `resource-manager` and HDFS `fs.default.name` / `name-node` properties must not
 be present in the inline configuration.
 
-As with Hadoop =map-reduce= jobs, it is possible to add files and
+As with Hadoop `map-reduce` jobs, it is possible to add files and
 archives in order to make them available to the Sqoop job. Refer to the
 [Adding Files and Archives for the Job](WorkflowFunctionalSpec.html#FilesArchives)
 section for more information about this feature.
 
-*Syntax:*
+**Syntax:**
+
 
-<verbatim>
+```
 <workflow-app name="[WF-DEF-NAME]" xmlns="uri:oozie:workflow:1.0">
     ...
     <action name="[NODE-NAME]">
@@ -73,40 +74,41 @@ section for more information about this feature.
     </action>
     ...
 </workflow-app>
-</verbatim>
+```
 
-The =prepare= element, if present, indicates a list of paths to delete
-or create before starting the job. Specified paths must start with =hdfs://HOST:PORT=.
+The `prepare` element, if present, indicates a list of paths to delete
+or create before starting the job. Specified paths must start with `hdfs://HOST:PORT`.
 
-The =job-xml= element, if present, specifies a file containing configuration
-for the Sqoop job. As of schema 0.3, multiple =job-xml= elements are allowed in order to 
-specify multiple =job.xml= files.
+The `job-xml` element, if present, specifies a file containing configuration
+for the Sqoop job. As of schema 0.3, multiple `job-xml` elements are allowed in order to
+specify multiple `job.xml` files.
 
-The =configuration= element, if present, contains configuration
+The `configuration` element, if present, contains configuration
 properties that are passed to the Sqoop job.
 
-*Sqoop command*
+**Sqoop command**
 
-The Sqoop command can be specified either using the =command= element or multiple =arg=
+The Sqoop command can be specified either using the `command` element or multiple `arg`
 elements.
 
-When using the =command= element, Oozie will split the command on every space
+When using the `command` element, Oozie will split the command on every space
 into multiple arguments.
 
-When using the =arg= elements, Oozie will pass each argument value as an argument to Sqoop.
+When using the `arg` elements, Oozie will pass each argument value as an argument to Sqoop.
 
-The =arg= variant should be used when there are spaces within a single argument.
+The `arg` variant should be used when there are spaces within a single argument.
 
 Consult the Sqoop documentation for a complete list of valid Sqoop commands.
 
 All the above elements can be parameterized (templatized) using EL
 expressions.
 
-*Examples:*
+**Examples:**
 
-Using the =command= element:
+Using the `command` element:
 
-<verbatim>
+
+```
 <workflow-app name="sample-wf" xmlns="uri:oozie:workflow:1.0">
     ...
     <action name="myfirsthivejob">
@@ -129,11 +131,12 @@ Using the =command= element:
     </action>
     ...
 </workflow-app>
-</verbatim>
+```
+
+The same Sqoop action using `arg` elements:
 
-The same Sqoop action using =arg= elements:
 
-<verbatim>
+```
 <workflow-app name="sample-wf" xmlns="uri:oozie:workflow:1.0">
     ...
     <action name="myfirstsqoopjob">
@@ -164,20 +167,20 @@ The same Sqoop action using =arg= elements:
     </action>
     ...
 </workflow-app>
-</verbatim>
+```
 
-NOTE: The =arg= elements syntax, while more verbose, allows to have spaces in a single argument, something useful when
+NOTE: The `arg` elements syntax, while more verbose, allows spaces within a single argument, which is useful when
 using free-form queries.
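+
+For example, a free-form query is kept intact as a single argument (a minimal sketch; the connect string, query and target directory are hypothetical):
+
+```
+<arg>import</arg>
+<arg>--connect</arg>
+<arg>jdbc:hsqldb:file:db.hsqldb</arg>
+<arg>--query</arg>
+<arg>SELECT t.id, t.name FROM mytable t WHERE $CONDITIONS</arg>
+<arg>--target-dir</arg>
+<arg>/user/tucu/foo</arg>
+<arg>-m</arg>
+<arg>1</arg>
+```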
 
----+++ Sqoop Action Counters
+### Sqoop Action Counters
 
 The counters of the map-reduce job run by the Sqoop action are available to be used in the workflow via the
-[[WorkflowFunctionalSpec#HadoopCountersEL][hadoop:counters() EL function]].
+[hadoop:counters() EL function](WorkflowFunctionalSpec.html#HadoopCountersEL).
 
-If the Sqoop action run an import all command, the =hadoop:counters()= EL will return the aggregated counters
+If the Sqoop action runs an import all command, the `hadoop:counters()` EL will return the aggregated counters
 of all map-reduce jobs run by the Sqoop import all command.
 
----+++ Sqoop Action Logging
+### Sqoop Action Logging
 
 Sqoop action logs are redirected to the STDOUT/STDERR of the Oozie Launcher map-reduce job task that runs Sqoop.
 
@@ -185,14 +188,15 @@ From Oozie web-console, from the Sqoop action pop up using the 'Console URL' lin
 to navigate to the Oozie Launcher map-reduce job task logs via the Hadoop job-tracker web-console.
 
 The logging level of the Sqoop action can be set in the Sqoop action configuration using the
-property =oozie.sqoop.log.level=. The default value is =INFO=.
+property `oozie.sqoop.log.level`. The default value is `INFO`.
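+
+For example, to get more verbose logs, the level can be raised in the action's inline configuration (a minimal sketch):
+
+```
+<configuration>
+    <property>
+        <name>oozie.sqoop.log.level</name>
+        <value>DEBUG</value>
+    </property>
+</configuration>
+```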
 
----++ Appendix, Sqoop XML-Schema
+## Appendix, Sqoop XML-Schema
 
----+++ AE.A Appendix A, Sqoop XML-Schema
+### AE.A Appendix A, Sqoop XML-Schema
 
----++++ Sqoop Action Schema Version 1.0
-<verbatim>
+#### Sqoop Action Schema Version 1.0
+
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:sqoop="uri:oozie:sqoop-action:1.0"
            elementFormDefault="qualified"
@@ -223,10 +227,11 @@ property =oozie.sqoop.log.level=. The default value is =INFO=.
     </xs:complexType>
 </xs:schema>
-</verbatim>
+```
+
+#### Sqoop Action Schema Version 0.3
 
----++++ Sqoop Action Schema Version 0.3
-<verbatim>
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:sqoop="uri:oozie:sqoop-action:0.3" elementFormDefault="qualified"
            targetNamespace="uri:oozie:sqoop-action:0.3">
@@ -279,10 +284,11 @@ property =oozie.sqoop.log.level=. The default value is =INFO=.
     </xs:complexType>
 
 </xs:schema>
-</verbatim>
+```
 
----++++ Sqoop Action Schema Version 0.2
-<verbatim>
+#### Sqoop Action Schema Version 0.2
+
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:sqoop="uri:oozie:sqoop-action:0.2" elementFormDefault="qualified"
            targetNamespace="uri:oozie:sqoop-action:0.2">
@@ -335,8 +341,8 @@ property =oozie.sqoop.log.level=. The default value is =INFO=.
     </xs:complexType>
 </xs:schema>
-</verbatim>
+```
+
+[::Go back to Oozie Documentation Index::](index.html)
 
-[[index][::Go back to Oozie Documentation Index::]]
 
-</noautolink>

http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/DG_SshActionExtension.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_SshActionExtension.twiki b/docs/src/site/twiki/DG_SshActionExtension.twiki
index 5a51d49..e53e1c3 100644
--- a/docs/src/site/twiki/DG_SshActionExtension.twiki
+++ b/docs/src/site/twiki/DG_SshActionExtension.twiki
@@ -1,16 +1,16 @@
-<noautolink>
 
-[[index][::Go back to Oozie Documentation Index::]]
+
+[::Go back to Oozie Documentation Index::](index.html)
 
 -----
 
----+!! Oozie Ssh Action Extension
+# Oozie Ssh Action Extension
 
-%TOC%
+<!-- MACRO{toc|fromDepth=1|toDepth=4} -->
 
----++ Ssh Action
+## Ssh Action
 
-The =ssh= action starts a shell command on a remote machine as a remote secure shell in background. The workflow job
+The `ssh` action starts a shell command on a remote machine as a remote secure shell in the background. The workflow job
 will wait until the remote shell command completes before continuing to the next action.
 
 The shell command must be present on the remote machine and available for execution via the command path.
@@ -32,9 +32,10 @@ Note: Ssh Action will fail if oozie fails to ssh connect to host for action stat
 The first retry will wait a configurable period of time (3 seconds by default) before the check.
 Each subsequent retry will wait twice the previous wait time.
 
-*Syntax:*
+**Syntax:**
+
 
-<verbatim>
+```
 <workflow-app name="[WF-DEF-NAME]" xmlns="uri:oozie:workflow:1.0">
     ...
     <action name="[NODE-NAME]">
@@ -50,32 +51,33 @@ The following retries will wait 2 times of previous wait time.
     </action>
     ...
 </workflow-app>
-</verbatim>
+```
 
-The =host= indicates the user and host where the shell will be executed.
+The `host` element indicates the user and host where the shell will be executed.
 
-*IMPORTANT:* The =oozie.action.ssh.allow.user.at.host= property, in the =oozie-site.xml= configuration, indicates if
+**IMPORTANT:** The `oozie.action.ssh.allow.user.at.host` property, in the `oozie-site.xml` configuration, indicates whether
 a user other than the one submitting the job can be used for the ssh invocation. By default this property is set
-to =true=.
+to `true`.
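+
+For example, to restrict the ssh invocation to the submitting user, the property can be turned off in `oozie-site.xml` (a minimal sketch):
+
+```
+<property>
+    <name>oozie.action.ssh.allow.user.at.host</name>
+    <value>false</value>
+</property>
+```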
 
-The =command= element indicates the shell command to execute.
+The `command` element indicates the shell command to execute.
 
-The =args= element, if present, contains parameters to be passed to the shell command. If more than one =args= element
-is present they are concatenated in order. When an =args= element contains a space, even when quoted, it will be considered as
-separate arguments (i.e. "Hello World" becomes "Hello" and "World").  Starting with ssh schema 0.2, you can use the =arg= element
-(note that this is different than the =args= element) to specify arguments that have a space in them (i.e. "Hello World" is
-preserved as "Hello World").  You can use either =args= elements, =arg= elements, or neither; but not both in the same action.
+The `args` element, if present, contains parameters to be passed to the shell command. If more than one `args` element
+is present, they are concatenated in order. When an `args` element contains a space, even when quoted, the value is split into
+separate arguments (i.e. "Hello World" becomes "Hello" and "World"). Starting with ssh schema 0.2, you can use the `arg` element
+(note that this is different from the `args` element) to specify arguments that contain a space (i.e. "Hello World" is
+preserved as "Hello World"). You can use either `args` elements or `arg` elements, or neither; but not both in the same action.
 
-If the =capture-output= element is present, it indicates Oozie to capture output of the STDOUT of the ssh command
+If the `capture-output` element is present, it tells Oozie to capture the STDOUT of the ssh command
 execution. The ssh command output must be in Java Properties file format and it must not exceed 2KB. From within the
 workflow definition, the output of an ssh action node is accessible via the `String action:output(String node,
 String key)` function (refer to section '4.2.6 Action EL Functions').
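+
+A sketch of consuming captured output (the node name `checknode` and the `status` key are hypothetical): the remote command prints Java-Properties-style lines such as `status=done`, and a downstream node reads them back with the EL function quoted above:
+
+```
+${action:output('checknode', 'status')}
+```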
 
-The configuration of the =ssh= action can be parameterized (templatized) using EL expressions.
+The configuration of the `ssh` action can be parameterized (templatized) using EL expressions.
+
+**Example:**
 
-*Example:*
 
-<verbatim>
+```
 <workflow-app name="sample-wf" xmlns="uri:oozie:workflow:1.0">
     ...
     <action name="myssjob">
@@ -90,22 +92,23 @@ The configuration of the =ssh= action can be parameterized (templatized) using E
     </action>
     ...
 </workflow-app>
-</verbatim>
+```
 
-In the above example, the =uploaddata= shell command is executed with two arguments, =jdbc:derby://foo.com:1527/myDB=
-and =hdfs://foobar.com:8020/usr/tucu/myData=.
+In the above example, the `uploaddata` shell command is executed with two arguments, `jdbc:derby://foo.com:1527/myDB`
+and `hdfs://foobar.com:8020/usr/tucu/myData`.
 
-The =uploaddata= shell must be available in the remote host and available in the command path.
+The `uploaddata` shell command must be available on the remote host and in the command path.
 
-The output of the command will be ignored because the =capture-output= element is not present.
+The output of the command will be ignored because the `capture-output` element is not present.
 
----++ Appendix, Ssh XML-Schema
+## Appendix, Ssh XML-Schema
 
----+++ AE.A Appendix A, Ssh XML-Schema
+### AE.A Appendix A, Ssh XML-Schema
 
----++++ Ssh Action Schema Version 0.2
+#### Ssh Action Schema Version 0.2
 
-<verbatim>
+
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:ssh="uri:oozie:ssh-action:0.2" elementFormDefault="qualified"
            targetNamespace="uri:oozie:ssh-action:0.2">
@@ -127,11 +130,12 @@ The output of the command will be ignored because the =capture-output= element i
     <xs:complexType name="FLAG"/>
 </xs:schema>
-</verbatim>
+```
+
+#### Ssh Action Schema Version 0.1
 
----++++ Ssh Action Schema Version 0.1
 
-<verbatim>
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:ssh="uri:oozie:ssh-action:0.1" elementFormDefault="qualified"
            targetNamespace="uri:oozie:ssh-action:0.1">
@@ -150,8 +154,8 @@ The output of the command will be ignored because the =capture-output= element i
     <xs:complexType name="FLAG"/>
 </xs:schema>
-</verbatim>
+```
+
+[::Go back to Oozie Documentation Index::](index.html)
 
-[[index][::Go back to Oozie Documentation Index::]]
 
-</noautolink>

http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/DG_WorkflowReRun.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_WorkflowReRun.twiki b/docs/src/site/twiki/DG_WorkflowReRun.twiki
index 88d982b..c128681 100644
--- a/docs/src/site/twiki/DG_WorkflowReRun.twiki
+++ b/docs/src/site/twiki/DG_WorkflowReRun.twiki
@@ -1,33 +1,34 @@
-<noautolink>
 
-[[index][::Go back to Oozie Documentation Index::]]
 
----+!! Workflow ReRrun
+[::Go back to Oozie Documentation Index::](index.html)
 
-%TOC%
----++ Configs
+# Workflow ReRun
+
+<!-- MACRO{toc|fromDepth=1|toDepth=4} -->
+
+## Configs
 
    * oozie.wf.application.path
-    * Only one of following two configurations is mandatory. Both should not be defined at the same time
+   * Only one of the following two configurations is mandatory; they must not both be defined at the same time
       * oozie.wf.rerun.skip.nodes
       * oozie.wf.rerun.failnodes
    * Skip nodes are a comma-separated list of action names. They can be any action nodes, including decision nodes.
-   * The valid value of  =oozie.wf.rerun.failnodes= is true or false.
+   * The valid values of `oozie.wf.rerun.failnodes` are true and false.
    * If a secured Hadoop version is used, the following two properties need to be specified as well
       * mapreduce.jobtracker.kerberos.principal
       * dfs.namenode.kerberos.principal.
    * Configurations can be passed as `-D` parameters.
-<verbatim>
+
+```
 $ oozie job -oozie http://localhost:11000/oozie -rerun 14-20090525161321-oozie-joe -Doozie.wf.rerun.skip.nodes=<>
-</verbatim>
+```
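+
+To rerun only the failed nodes instead of skipping named ones (same hypothetical job id as above):
+
+```
+$ oozie job -oozie http://localhost:11000/oozie -rerun 14-20090525161321-oozie-joe -Doozie.wf.rerun.failnodes=true
+```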
 
----++ Pre-Conditions
+## Pre-Conditions
 
    * Workflow with id wfId should exist.
    * Workflow with id wfId should be in SUCCEEDED, KILLED, or FAILED state.
    * If specified, nodes in the config oozie.wf.rerun.skip.nodes must have completed successfully.
 
----++ ReRun
+## ReRun
 
    * Reloads the configs.
    * If no configuration is passed, the existing coordinator/workflow configuration will be used. If a configuration is passed, it will be merged with the existing workflow configuration; the input configuration takes precedence.
@@ -36,6 +37,6 @@ $ oozie job -oozie http://localhost:11000/oozie -rerun 14-20090525161321-oozie-j
    * Deletes the actions that are not skipped from the DB and copies data from the old workflow instance to the new one for skipped actions.
    * The action handler will skip the nodes given in the config, with the same exit transition as before.
 
-[[index][::Go back to Oozie Documentation Index::]]
+[::Go back to Oozie Documentation Index::](index.html)
+
 
-</noautolink>

http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/ENG_Building.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/ENG_Building.twiki b/docs/src/site/twiki/ENG_Building.twiki
deleted file mode 100644
index b861026..0000000
--- a/docs/src/site/twiki/ENG_Building.twiki
+++ /dev/null
@@ -1,270 +0,0 @@
-<noautolink>
-
-[[index][::Go back to Oozie Documentation Index::]]
-
----+!! Building Oozie
-
-%TOC%
-
----++ System Requirements
-
-   * Unix box (tested on Mac OS X and Linux)
-   * Java JDK 1.8+
-   * [[http://maven.apache.org/][Maven 3.0.1+]]
-   * [[http://hadoop.apache.org/core/releases.html][Hadoop 2.6.0+]]
-   * [[http://hadoop.apache.org/pig/releases.html][Pig 0.10.1+]]
-
-JDK commands (java, javac) must be in the command path.
-
-The Maven command (mvn) must be in the command path.
-
----++ Oozie Documentation Generation
-
-To generate the documentation, Oozie uses a patched Doxia plugin for Maven with improved twiki support.
-
-The source of the modified plugin is available in the Oozie GitHub repository, in the =ydoxia= branch.
-
-To build and install it locally run the following command in the =ydoxia= branch:
-
-<verbatim>
-$ mvn install
-</verbatim>
-
-#SshSetup
----++ Passphrase-less SSH Setup
-
-*NOTE: SSH actions are deprecated in Oozie 2.*
-
-To run SSH Testcases and for easier Hadoop start/stop configure SSH to localhost to be passphrase-less.
-
-Create your SSH keys without a passphrase and add the public key to the authorized file:
-
-<verbatim>
-$ ssh-keygen -t dsa
-$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys2
-</verbatim>
-
-Test that you can ssh without password:
-
-<verbatim>
-$ ssh localhost
-</verbatim>
-
----++ Building with different Java Versions
-
-Oozie requires a minimum Java version of 1.8. Any newer version can be used but by default bytecode will be generated
-which is compatible with 1.8. This can be changed by specifying the build property *targetJavaVersion*.
-
----++ Building and Testing Oozie
-
-The JARs for the specified Hadoop and Pig versions must be available in one of the Maven repositories defined in Oozie
-main 'pom.xml' file. Or they must be installed in the local Maven cache.
-
----+++ Examples Running Oozie Testcases with Different Configurations
-
-*Using embedded Hadoop minicluster with 'simple' authentication:*
-
-<verbatim>
-$ mvn clean test
-</verbatim>
-
-*Using a Hadoop cluster with 'simple' authentication:*
-
-<verbatim>
-$ mvn clean test -Doozie.test.hadoop.minicluster=false
-</verbatim>
-
-*Using embedded Hadoop minicluster with 'simple' authentication and Derby database:*
-
-<verbatim>
-$ mvn clean test -Doozie.test.hadoop.minicluster=false -Doozie.test.db=derby
-</verbatim>
-
-*Using a Hadoop cluster with 'kerberos' authentication:*
-
-<verbatim>
-$ mvn clean test -Doozie.test.hadoop.minicluster=false -Doozie.test.hadoop.security=kerberos
-</verbatim>
-
-NOTE: The embedded minicluster cannot be used when testing with 'kerberos' authentication.
-
-*Using a custom Oozie configuration for testcases:*
-
-<verbatim>
-$ mvn clean test -Doozie.test.config.file=/home/tucu/custom-oozie-sitel.xml
-</verbatim>
-
-*Running the testcases with different databases:*
-
-<verbatim>
-$ mvn clean test -Doozie.test.db=[hsqldb*|derby|mysql|postgres|oracle]
-</verbatim>
-
-Using =mysql= and =oracle= enables profiles that will include their JARs files in the build. If using
- =oracle=, the Oracle JDBC JAR file must be manually installed in the local Maven cache (the JAR is
-not available in public Maven repos).
-
----+++ Build Options Reference
-
-All these options can be set using *-D*.
-
-Except for the options marked with =(*)=, the options can be specified in the =test.properties= in the root
-of the Oozie project. The options marked with =(*)= are used in Maven POMs, thus they don't take effect if
-specified in the =test.properties= file (which is loaded by the =XTestCase= class at class initialization time).
-
-*hadoop.version* =(*)=: indicates the Hadoop version you wish to build Oozie against specifically. It will
-substitute this value in the Oozie POM properties and pull the corresponding Hadoop artifacts from Maven.
-The default version is 2.6.0 and that is the minimum supported Hadoop version.
-
-*generateSite* (*): generates Oozie documentation, default is undefined (no documentation is generated)
-
-*skipTests* (*): skips the execution of all testcases, no value required, default is undefined
-
-*test*= (*): runs a single test case, to run a test give the test class name without package and extension, no default
-
-*oozie.test.db*= (*): indicates the database to use for running the testcases, supported values are 'hsqldb', 'derby',
- 'mysql', 'postgres' and 'oracle'; default value is 'hsqldb'. For each database there is
- =core/src/test/resources/DATABASE-oozie-site.xml= file preconfigured.
-
-*oozie.test.properties* (*): indicates the file to load the test properties from, by default is =test.properties=.
-Having this option allows having different test properties sets, for example: minicluster, simple & kerberos.
-
-*oozie.test.waitfor.ratio*= : multiplication factor for testcases using waitfor, the ratio is used to adjust the
-effective time out. For slow machines the ratio should be increased. The default value is =1=.
-
-*oozie.test.config.file*= : indicates a custom Oozie configuration file for running the testcases. The specified file
-must be an absolute path. For example, it can be useful to specify different database than HSQL for running the
-testcases.
-
-*oozie.test.hadoop.minicluster*= : indicates if Hadoop minicluster should be started for testcases, default value 'true'
-
-*oozie.test.job.tracker*= : indicates the URI of the JobTracker when using a Hadoop cluster for testing, default value
-'localhost:8021'
-
-*oozie.test.name.node*= : indicates the URI of the NameNode when using a Hadoop cluster for testing, default value
-'hdfs://localhost:8020'
-
-*oozie.test.hadoop.security*= : indicates the type of Hadoop authentication for testing, valid values are 'simple' or
-'kerberos, default value 'simple'
-
-*oozie.test.kerberos.keytab.file*= : indicates the location of the keytab file, default value
-'${user.home}/oozie.keytab'
-
-*oozie.test.kerberos.realm*= : indicates the Kerberos real, default value 'LOCALHOST'
-
-*oozie.test.kerberos.oozie.principal*= : indicates the Kerberos principal for oozie, default value
-'${user.name}/localhost'
-
-*oozie.test.kerberos.jobtracker.principal*= : indicates the Kerberos principal for the JobTracker, default value
-'mapred/localhost'
-
-*oozie.test.kerberos.namenode.principal*= : indicates the Kerberos principal for the NameNode, default value
-'hdfs/localhost'
-
-*oozie.test.user.oozie*= : specifies the user ID used to start Oozie server in testcases, default value
-is =${user.name}=.
-
-*oozie.test.user.test*= : specifies primary user ID used as the user submitting jobs to Oozie Server in testcases,
-default value is =test=.
-
-*oozie.test.user.test2*= : specifies secondary user ID used as the user submitting jobs to Oozie Server in testcases,
-default value is =test2=.
-
-*oozie.test.user.test3*= : specifies secondary user ID used as the user submitting jobs to Oozie Server in testcases,
-default value is =test3=.
-
-*oozie.test.group*= : specifies group ID used as group when submitting jobs to Oozie Server in testcases,
-default value is =testg=.
-
-NOTE: The users/group specified in *oozie.test.user.test2*, *oozie.test.user.test3*= and *oozie.test.user.group*=
-are used for the authorization testcases only.
-
-*oozie.test.dir*= : specifies the directory where the =oozietests= directory will be created, default value is =/tmp=.
-The =oozietests= directory is used by testcases when they need a local filesystem directory.
-
-*hadoop.log.dir*= : specifies the directory where Hadoop minicluster will write its logs during testcases, default
-value is =/tmp=.
-
-*test.exclude*= : specifies a testcase class (just the class name) to exclude for the tests run, for example =TestSubmitCommand=.
-
-*test.exclude.pattern*= : specifies one or more patterns for testcases to exclude, for example =**/Test*Command.java=.
-
----+++ Testing Map Reduce Pipes Action
-
-Pipes testcases require Hadoop's *wordcount-simple* pipes binary example to run. The  *wordcount-simple* pipes binary
-should be compiled for the build platform and copied into Oozie's *core/src/test/resources/* directory. The binary file
-must be named *wordcount-simple*.
-
-If the  *wordcount-simple* pipes binary file is not available the testcase will do a NOP and it will print to its output
-file the following message 'SKIPPING TEST: TestPipesMain, binary 'wordcount-simple' not available in the classpath'.
-
-There are 2 testcases that use the *wordcount-simple* pipes binary, *TestPipesMain* and *TestMapReduceActionExecutor*,
-the 'SKIPPING TEST..." message would appear in the testcase log file of both testcases.
-
----+++ Testing using dist_test and grind
-
-Testing using [[https://github.com/cloudera/dist_test][dist_test]] framework with
-[[https://github.com/cloudera/dist_test/blob/master/docs/grind.md][grind]] front end might not work using the default 3.0.2
-version of the maven dependency plugin. It is necessary to downgrade to version 2.10 using
-<code>-Dmaven-dependency-plugin.version=2.10</code> .
-
-Maven flags for grind can be specified using <code>GRIND_MAVEN_FLAGS</code> environment variable:
-
-<verbatim>
-export GRIND_MAVEN_FLAGS=-Dmaven.dependency.plugin.version=2.10
-grind test --java-version 8
-</verbatim>
-
----++ Building an Oozie Distribution
-
-An Oozie distribution bundles an embedded Jetty server.
-
-The simplest way to build Oozie is to run the =mkdistro.sh= script:
-<verbatim>
-$ bin/mkdistro.sh [-DskipTests]
-Running =mkdistro.sh= will create the binary distribution of Oozie. The following options are available to customise
-the versions of the dependencies:
--Puber - Bundle required hadoop and hcatalog libraries in oozie war
--Dhadoop.version=<version> - default 2.6.0
--Ptez - Bundle tez jars in hive and pig sharelibs. Useful if you want to use tez
-as the execution engine for those applications.
--Dpig.version=<version> - default 0.16.0
--Dpig.classifier=<classifier> - default h2
--Dsqoop.version=<version> - default 1.4.3
--Dsqoop.classifier=<classifier> - default hadoop100
--jetty.version=<version> - default 9.3.20.v20170531
--Dopenjpa.version=<version> - default 2.2.2
--Dxerces.version=<version> - default 2.10.0
--Dcurator.version=<version> - default 2.5.0
--Dhive.version=<version> - default 1.2.0
--Dhbase.version=<version> - default 1.2.3
--Dtez.version=<version> - default 0.8.4
-</verbatim>
-
-*IMPORTANT:* Profile hadoop-3 must be activated if building against Hadoop 3
-
-The following properties should be specified when building a release:
-
-   * -DgenerateDocs : forces the generation of Oozie documentation
-   * -Dbuild.time=  : timestamps the distribution
-   * -Dvc.revision= : specifies the source control revision number of the distribution
-   * -Dvc.url=      : specifies the source control URL of the distribution
-
-The provided <code>bin/mkdistro.sh</code> script runs the above Maven invocation setting all these properties to the
-right values (the 'vc.*' properties are obtained from the local git repository).
-
----++ IDE Setup
-
-Eclipse and IntelliJ can use directly Oozie Maven project files.
-
-The only special consideration is that the following source directories from the =client= module must be added to
-the =core= module source path:
-
-   * =client/src/main/java= : as source directory
-   * =client/src/main/resources= : as source directory
-   * =client/src/test/java= : as test-source directory
-   * =client/src/test/resources= : as test-source directory
-
-[[index][::Go back to Oozie Documentation Index::]]
-
-</noautolink>