Posted to commits@oozie.apache.org by tu...@apache.org on 2012/10/04 05:45:33 UTC

svn commit: r1393905 - in /oozie/trunk: ./ docs/src/site/twiki/

Author: tucu
Date: Thu Oct  4 03:45:33 2012
New Revision: 1393905

URL: http://svn.apache.org/viewvc?rev=1393905&view=rev
Log:
OOZIE-1009 Documentation pages should use default ports for Oozie/JT/NN (tucu)
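The substitutions below replace the old example ports with the stock defaults
for each service: NameNode 9000 becomes 8020, JobTracker 9001 becomes 8021, and
the Oozie server 8080 becomes 11000. As a rough illustration (the localhost
host name and the user path are the documentation's usual placeholders), a
job.properties written against these defaults would look like:

    nameNode=hdfs://localhost:8020
    jobTracker=localhost:8021
    queueName=default
    oozie.wf.application.path=hdfs://localhost:8020/user/joe/my-wf-app

with the CLI pointed at the server either per command, via
-oozie http://localhost:11000/oozie, or once per shell, via
export OOZIE_URL="http://localhost:11000/oozie".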

Modified:
    oozie/trunk/docs/src/site/twiki/BundleFunctionalSpec.twiki
    oozie/trunk/docs/src/site/twiki/CoordinatorFunctionalSpec.twiki
    oozie/trunk/docs/src/site/twiki/DG_CommandLineTool.twiki
    oozie/trunk/docs/src/site/twiki/DG_Examples.twiki
    oozie/trunk/docs/src/site/twiki/DG_HiveActionExtension.twiki
    oozie/trunk/docs/src/site/twiki/DG_ShellActionExtension.twiki
    oozie/trunk/docs/src/site/twiki/DG_SqoopActionExtension.twiki
    oozie/trunk/docs/src/site/twiki/DG_SshActionExtension.twiki
    oozie/trunk/docs/src/site/twiki/ENG_Building.twiki
    oozie/trunk/docs/src/site/twiki/WebServicesAPI.twiki
    oozie/trunk/docs/src/site/twiki/WorkflowFunctionalSpec.twiki
    oozie/trunk/release-log.txt

Modified: oozie/trunk/docs/src/site/twiki/BundleFunctionalSpec.twiki
URL: http://svn.apache.org/viewvc/oozie/trunk/docs/src/site/twiki/BundleFunctionalSpec.twiki?rev=1393905&r1=1393904&r2=1393905&view=diff
==============================================================================
--- oozie/trunk/docs/src/site/twiki/BundleFunctionalSpec.twiki (original)
+++ oozie/trunk/docs/src/site/twiki/BundleFunctionalSpec.twiki Thu Oct  4 03:45:33 2012
@@ -175,7 +175,7 @@ The previous Bundle Job application defi
       </property>
       <property>
           <name>appPath2</name>
-          <value>hdfs://foo:9000/user/joe/job/job.properties</value>
+          <value>hdfs://foo:8020/user/joe/job/job.properties</value>
       </property>
   </parameters>
   <controls>
@@ -211,7 +211,7 @@ The previous Bundle Job application defi
 </verbatim>
 
 In the above example, if =appPath= is not specified, Oozie will print an error message instead of submitting the job. If
-=appPath2= is not specified, Oozie will use the default value, =hdfs://foo:9000/user/joe/job/job.properties=.
+=appPath2= is not specified, Oozie will use the default value, =hdfs://foo:8020/user/joe/job/job.properties=.
 
 
 ---++ 5. User Propagation
@@ -254,7 +254,7 @@ submitted to the Oozie using an XML conf
     </property>
     <property>
         <name>oozie.bundle.application.path</name>
-        <value>hdfs://foo:9000/user/joe/mybundles/hello-bundle1.xml</value>
+        <value>hdfs://foo:8020/user/joe/mybundles/hello-bundle1.xml</value>
     </property>
     ...
 </configuration>

Modified: oozie/trunk/docs/src/site/twiki/CoordinatorFunctionalSpec.twiki
URL: http://svn.apache.org/viewvc/oozie/trunk/docs/src/site/twiki/CoordinatorFunctionalSpec.twiki?rev=1393905&r1=1393904&r2=1393905&view=diff
==============================================================================
--- oozie/trunk/docs/src/site/twiki/CoordinatorFunctionalSpec.twiki (original)
+++ oozie/trunk/docs/src/site/twiki/CoordinatorFunctionalSpec.twiki Thu Oct  4 03:45:33 2012
@@ -97,7 +97,7 @@ This document defines the functional spe
 
 *Dataset:* Collection of data referred to by a logical name. A dataset normally has several instances of data and each one of them can be referred individually. Each dataset instance is represented by a unique set of URIs.
 
-*Synchronous Dataset:* Synchronous dataset instances are generated at fixed time intervals and there is a dataset instance associated with each time interval. Synchronous dataset instances are identified by their nominal time. For example, in the case of a file system based dataset, the nominal time would be somewhere in the file path of the dataset instance: =hdfs://foo:9000/usr/logs/2009/04/15/23/30= .
+*Synchronous Dataset:* Synchronous dataset instances are generated at fixed time intervals and there is a dataset instance associated with each time interval. Synchronous dataset instances are identified by their nominal time. For example, in the case of a file system based dataset, the nominal time would be somewhere in the file path of the dataset instance: =hdfs://foo:8020/usr/logs/2009/04/15/23/30= .
 
 *Coordinator Action:* A coordinator action is a workflow job that is started when a set of conditions are met (input dataset instances are available).
 
@@ -454,7 +454,7 @@ IMPORTANT: The values of the EL constant
   <dataset name="logs" frequency="${coord:days(1)}"
            initial-instance="2009-02-15T08:15Z" timezone="America/Los_Angeles">
     <uri-template>
-      hdfs://foo:9000/app/logs/${market}/${YEAR}${MONTH}/${DAY}/data
+      hdfs://foo:8020/app/logs/${market}/${YEAR}${MONTH}/${DAY}/data
     </uri-template>
     <done-flag></done-flag>
   </dataset>
@@ -466,9 +466,9 @@ The dataset would resolve to the followi
 <verbatim>
   [market] will be replaced with user given property.
   
-  hdfs://foo:9000/usr/app/[market]/2009/02/15/data
-  hdfs://foo:9000/usr/app/[market]/2009/02/16/data
-  hdfs://foo:9000/usr/app/[market]/2009/02/17/data
+  hdfs://foo:8020/usr/app/[market]/2009/02/15/data
+  hdfs://foo:8020/usr/app/[market]/2009/02/16/data
+  hdfs://foo:8020/usr/app/[market]/2009/02/17/data
   ...
 </verbatim>
 
@@ -478,25 +478,25 @@ The dataset would resolve to the followi
 <verbatim>
   <dataset name="stats" frequency="${coord:months(1)}"
            initial-instance="2009-01-10T10:00Z" timezone="America/Los_Angeles">
-    <uri-template>hdfs://foo:9000/usr/app/stats/${YEAR}/${MONTH}/data</uri-template>
+    <uri-template>hdfs://foo:8020/usr/app/stats/${YEAR}/${MONTH}/data</uri-template>
   </dataset>
 </verbatim>
 
 The dataset would resolve to the following URIs:
 
 <verbatim>
-  hdfs://foo:9000/usr/app/stats/2009/01/data
-  hdfs://foo:9000/usr/app/stats/2009/02/data
-  hdfs://foo:9000/usr/app/stats/2009/03/data
+  hdfs://foo:8020/usr/app/stats/2009/01/data
+  hdfs://foo:8020/usr/app/stats/2009/02/data
+  hdfs://foo:8020/usr/app/stats/2009/03/data
   ...
 </verbatim>
 
 The dataset instances are ready once '_SUCCESS' exists in each path:
 
 <verbatim>
-  hdfs://foo:9000/usr/app/stats/2009/01/data/_SUCCESS
-  hdfs://foo:9000/usr/app/stats/2009/02/data/_SUCCESS
-  hdfs://foo:9000/usr/app/stats/2009/03/data/_SUCCESS
+  hdfs://foo:8020/usr/app/stats/2009/01/data/_SUCCESS
+  hdfs://foo:8020/usr/app/stats/2009/02/data/_SUCCESS
+  hdfs://foo:8020/usr/app/stats/2009/03/data/_SUCCESS
   ...
 </verbatim>
 
@@ -507,7 +507,7 @@ The dataset are ready until '_SUCCESS' e
   <dataset name="stats" frequency="${coord:months(3)}"
            initial-instance="2009-01-31T20:00Z" timezone="America/Los_Angeles">
     <uri-template>
-      hdfs://foo:9000/usr/app/stats/${YEAR}/${MONTH}/data
+      hdfs://foo:8020/usr/app/stats/${YEAR}/${MONTH}/data
     </uri-template>
     <done-flag>trigger.dat</done-flag>
   </dataset>
@@ -516,18 +516,18 @@ The dataset are ready until '_SUCCESS' e
 The dataset would resolve to the following URIs:
 
 <verbatim>
-  hdfs://foo:9000/usr/app/stats/2009/01/data
-  hdfs://foo:9000/usr/app/stats/2009/04/data
-  hdfs://foo:9000/usr/app/stats/2009/07/data
+  hdfs://foo:8020/usr/app/stats/2009/01/data
+  hdfs://foo:8020/usr/app/stats/2009/04/data
+  hdfs://foo:8020/usr/app/stats/2009/07/data
   ...
 </verbatim>
 
 The dataset instances are ready once 'trigger.dat' exists in each path:
 
 <verbatim>
-  hdfs://foo:9000/usr/app/stats/2009/01/data/trigger.dat
-  hdfs://foo:9000/usr/app/stats/2009/04/data/trigger.dat
-  hdfs://foo:9000/usr/app/stats/2009/07/data/trigger.dat
+  hdfs://foo:8020/usr/app/stats/2009/01/data/trigger.dat
+  hdfs://foo:8020/usr/app/stats/2009/04/data/trigger.dat
+  hdfs://foo:8020/usr/app/stats/2009/07/data/trigger.dat
   ...
 </verbatim>
 
@@ -538,7 +538,7 @@ The dataset are ready until 'trigger.dat
   <dataset name="logs" frequency="${coord:days(1)}"
            initial-instance="2009-01-01T10:30Z" timezone="America/Los_Angeles">
     <uri-template>
-      hdfs://foo:9000/usr/app/logs/${YEAR}/${MONTH}/${DAY}/data
+      hdfs://foo:8020/usr/app/logs/${YEAR}/${MONTH}/${DAY}/data
     </uri-template>
   </dataset>
 </verbatim>
@@ -546,9 +546,9 @@ The dataset are ready until 'trigger.dat
 The dataset would resolve to the following URIs:
 
 <verbatim>
-  hdfs://foo:9000/usr/app/logs/2009/01/01/data
-  hdfs://foo:9000/usr/app/logs/2009/01/02/data
-  hdfs://foo:9000/usr/app/logs/2009/01/03/data
+  hdfs://foo:8020/usr/app/logs/2009/01/01/data
+  hdfs://foo:8020/usr/app/logs/2009/01/02/data
+  hdfs://foo:8020/usr/app/logs/2009/01/03/data
   ...
 </verbatim>
 
@@ -558,7 +558,7 @@ The dataset would resolve to the followi
   <dataset name="logs" frequency="${coord:days(1)}"
            initial-instance="2009-01-01T10:30Z" timezone="America/Los_Angeles">
     <uri-template>
-      hdfs://foo:9000/usr/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}/${MINUTE}/data
+      hdfs://foo:8020/usr/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}/${MINUTE}/data
     </uri-template>
   </dataset>
 </verbatim>
@@ -566,9 +566,9 @@ The dataset would resolve to the followi
 The dataset resolves to the following URIs with fixed values for the finer precision template variables:
 
 <verbatim>
-  hdfs://foo:9000/usr/app/logs/2009/01/01/10/30/data
-  hdfs://foo:9000/usr/app/logs/2009/01/02/10/30/data
-  hdfs://foo:9000/usr/app/logs/2009/01/03/10/30/data
+  hdfs://foo:8020/usr/app/logs/2009/01/01/10/30/data
+  hdfs://foo:8020/usr/app/logs/2009/01/02/10/30/data
+  hdfs://foo:8020/usr/app/logs/2009/01/03/10/30/data
   ...
 </verbatim>
 
@@ -601,18 +601,18 @@ Dataset definitions are grouped in XML f
 <verbatim>
 <datasets>
 .
-  <include>hdfs://foo:9000/app/dataset-definitions/globallogs.xml</include>
+  <include>hdfs://foo:8020/app/dataset-definitions/globallogs.xml</include>
 .
   <dataset name="logs" frequency="${coord:hours(12)}"
            initial-instance="2009-02-15T08:15Z" timezone="Americas/Los_Angeles">
     <uri-template>
-    hdfs://foo:9000/app/logs/${market}/${YEAR}${MONTH}/${DAY}/${HOUR}/${MINUTE}/data
+    hdfs://foo:8020/app/logs/${market}/${YEAR}${MONTH}/${DAY}/${HOUR}/${MINUTE}/data
     </uri-template>
   </dataset>
 .
   <dataset name="stats" frequency="${coord:months(1)}"
            initial-instance="2009-01-10T10:00Z" timezone="Americas/Los_Angeles">
-    <uri-template>hdfs://foo:9000/usr/app/stats/${YEAR}/${MONTH}/data</uri-template>
+    <uri-template>hdfs://foo:8020/usr/app/stats/${YEAR}/${MONTH}/data</uri-template>
   </dataset>
 .
 </datasets>
@@ -880,11 +880,11 @@ The following example describes a synchr
       <datasets>
         <dataset name="logs" frequency="${coord:days(1)}"
                  initial-instance="2009-01-02T08:00Z" timezone="America/Los_Angeles">
-          <uri-template>hdfs://bar:9000/app/logs/${YEAR}${MONTH}/${DAY}/data</uri-template>
+          <uri-template>hdfs://bar:8020/app/logs/${YEAR}${MONTH}/${DAY}/data</uri-template>
         </dataset>
         <dataset name="siteAccessStats" frequency="${coord:days(1)}"
                  initial-instance="2009-01-02T08:00Z" timezone="America/Los_Angeles">
-          <uri-template>hdfs://bar:9000/app/stats/${YEAR}/${MONTH}/${DAY}/data</uri-template>
+          <uri-template>hdfs://bar:8020/app/stats/${YEAR}/${MONTH}/${DAY}/data</uri-template>
         </dataset>
       </datasets>
       <input-events>
@@ -899,7 +899,7 @@ The following example describes a synchr
       </output-events>
       <action>
         <workflow>
-          <app-path>hdfs://bar:9000/usr/joe/logsprocessor-wf</app-path>
+          <app-path>hdfs://bar:8020/usr/joe/logsprocessor-wf</app-path>
           <configuration>
             <property>
               <name>wfInput</name>
@@ -923,15 +923,15 @@ The workflow job invocation for the sing
 
 <verbatim>
   <workflow>
-    <app-path>hdfs://bar:9000/usr/joe/logsprocessor-wf</app-path>
+    <app-path>hdfs://bar:8020/usr/joe/logsprocessor-wf</app-path>
     <configuration>
       <property>
         <name>wfInput</name>
-        <value>hdfs://bar:9000/app/logs/200901/02/data</value>
+        <value>hdfs://bar:8020/app/logs/200901/02/data</value>
       </property>
       <property>
         <name>wfOutput</name>
-        <value>hdfs://bar:9000/app/stats/2009/01/02/data</value>
+        <value>hdfs://bar:8020/app/stats/2009/01/02/data</value>
       </property>
     </configuration>
   </workflow>
@@ -959,11 +959,11 @@ The coordinator application is identical
       <datasets>
         <dataset name="logs" frequency="${coord:days(1)}"
                  initial-instance="2009-01-02T08:00Z" timezone="America/Los_Angeles">
-          <uri-template>hdfs://bar:9000/app/logs/${YEAR}${MONTH}/${DAY}/data</uri-template>
+          <uri-template>hdfs://bar:8020/app/logs/${YEAR}${MONTH}/${DAY}/data</uri-template>
         </dataset>
         <dataset name="siteAccessStats" frequency="${coord:days(1)}"
                  initial-instance="2009-01-02T08:00Z" timezone="America/Los_Angeles">
-          <uri-template>hdfs://bar:9000/app/stats/${YEAR}/${MONTH}/${DAY}/data</uri-template>
+          <uri-template>hdfs://bar:8020/app/stats/${YEAR}/${MONTH}/${DAY}/data</uri-template>
         </dataset>
       </datasets>
       <input-events>
@@ -978,7 +978,7 @@ The coordinator application is identical
       </output-events>
       <action>
         <workflow>
-          <app-path>hdfs://bar:9000/usr/joe/logsprocessor-wf</app-path>
+          <app-path>hdfs://bar:8020/usr/joe/logsprocessor-wf</app-path>
           <configuration>
             <property>
               <name>wfInput</name>
@@ -1004,15 +1004,15 @@ The workflow job invocation for the firs
 
 <verbatim>
   <workflow>
-    <app-path>hdfs://bar:9000/usr/joe/logsprocessor-wf</app-path>
+    <app-path>hdfs://bar:8020/usr/joe/logsprocessor-wf</app-path>
     <configuration>
       <property>
         <name>wfInput</name>
-        <value>hdfs://bar:9000/app/logs/200901/02/data</value>
+        <value>hdfs://bar:8020/app/logs/200901/02/data</value>
       </property>
       <property>
         <name>wfOutput</name>
-        <value>hdfs://bar:9000/app/stats/2009/01/02/data</value>
+        <value>hdfs://bar:8020/app/stats/2009/01/02/data</value>
       </property>
     </configuration>
   </workflow>
@@ -1022,15 +1022,15 @@ For the second coordinator action it wou
 
 <verbatim>
   <workflow>
-    <app-path>hdfs://bar:9000/usr/joe/logsprocessor-wf</app-path>
+    <app-path>hdfs://bar:8020/usr/joe/logsprocessor-wf</app-path>
     <configuration>
       <property>
         <name>wfInput</name>
-        <value>hdfs://bar:9000/app/logs/200901/03/data</value>
+        <value>hdfs://bar:8020/app/logs/200901/03/data</value>
       </property>
       <property>
         <name>wfOutput</name>
-        <value>hdfs://bar:9000/app/stats/2009/01/03/data</value>
+        <value>hdfs://bar:8020/app/stats/2009/01/03/data</value>
       </property>
     </configuration>
   </workflow>
@@ -1056,11 +1056,11 @@ The coordinator application frequency is
       <datasets>
         <dataset name="logs" frequency="${coord:days(1)}"
                  initial-instance="2009-01-01T24:00Z" timezone="UTC">
-          <uri-template>hdfs://bar:9000/app/logs/${YEAR}${MONTH}/${DAY}</uri-template>
+          <uri-template>hdfs://bar:8020/app/logs/${YEAR}${MONTH}/${DAY}</uri-template>
         </dataset>
         <dataset name="weeklySiteAccessStats" frequency="${coord:days(7)}"
                  initial-instance="2009-01-07T24:00Z" timezone="UTC">
-          <uri-template>hdfs://bar:9000/app/weeklystats/${YEAR}/${MONTH}/${DAY}</uri-template>
+          <uri-template>hdfs://bar:8020/app/weeklystats/${YEAR}/${MONTH}/${DAY}</uri-template>
         </dataset>
       </datasets>
       <input-events>
@@ -1076,7 +1076,7 @@ The coordinator application frequency is
       </output-events>
       <action>
         <workflow>
-          <app-path>hdfs://bar:9000/usr/joe/logsprocessor-wf</app-path>
+          <app-path>hdfs://bar:8020/usr/joe/logsprocessor-wf</app-path>
           <configuration>
             <property>
               <name>wfInput</name>
@@ -1102,20 +1102,20 @@ The workflow job invocation for the firs
 
 <verbatim>
   <workflow>
-    <app-path>hdfs://bar:9000/usr/joe/logsprocessor-wf</app-path>
+    <app-path>hdfs://bar:8020/usr/joe/logsprocessor-wf</app-path>
     <configuration>
       <property>
         <name>wfInput</name>
         <value>
-               hdfs://bar:9000/app/logs/200901/01,hdfs://bar:9000/app/logs/200901/02,
-               hdfs://bar:9000/app/logs/200901/03,hdfs://bar:9000/app/logs/200901/05,
-               hdfs://bar:9000/app/logs/200901/05,hdfs://bar:9000/app/logs/200901/06,
-               hdfs://bar:9000/app/logs/200901/07
+               hdfs://bar:8020/app/logs/200901/01,hdfs://bar:8020/app/logs/200901/02,
+               hdfs://bar:8020/app/logs/200901/03,hdfs://bar:8020/app/logs/200901/05,
+               hdfs://bar:8020/app/logs/200901/05,hdfs://bar:8020/app/logs/200901/06,
+               hdfs://bar:8020/app/logs/200901/07
         </value>
       </property>
       <property>
         <name>wfOutput</name>
-        <value>hdfs://bar:9000/app/stats/2009/01/07</value>
+        <value>hdfs://bar:8020/app/stats/2009/01/07</value>
       </property>
     </configuration>
   </workflow>
@@ -1125,20 +1125,20 @@ For the second coordinator action it wou
 
 <verbatim>
   <workflow>
-    <app-path>hdfs://bar:9000/usr/joe/logsprocessor-wf</app-path>
+    <app-path>hdfs://bar:8020/usr/joe/logsprocessor-wf</app-path>
     <configuration>
       <property>
         <name>wfInput</name>
         <value>
-               hdfs://bar:9000/app/logs/200901/08,hdfs://bar:9000/app/logs/200901/09,
-               hdfs://bar:9000/app/logs/200901/10,hdfs://bar:9000/app/logs/200901/11,
-               hdfs://bar:9000/app/logs/200901/12,hdfs://bar:9000/app/logs/200901/13,
-               hdfs://bar:9000/app/logs/200901/16
+               hdfs://bar:8020/app/logs/200901/08,hdfs://bar:8020/app/logs/200901/09,
+               hdfs://bar:8020/app/logs/200901/10,hdfs://bar:8020/app/logs/200901/11,
+               hdfs://bar:8020/app/logs/200901/12,hdfs://bar:8020/app/logs/200901/13,
+               hdfs://bar:8020/app/logs/200901/16
         </value>
       </property>
       <property>
         <name>wfOutput</name>
-        <value>hdfs://bar:9000/app/stats/2009/01/16</value>
+        <value>hdfs://bar:8020/app/stats/2009/01/16</value>
       </property>
     </configuration>
   </workflow>
@@ -1175,7 +1175,7 @@ Coordinator application definition:
         <dataset name="logs" frequency="${coord:hours(1)}"
                  initial-instance="${logsInitialInstance}" timezone="${timezone}">
           <uri-template>
-            hdfs://bar:9000/app/logs/${market}/${language}/${YEAR}${MONTH}/${DAY}/${HOUR}
+            hdfs://bar:8020/app/logs/${market}/${language}/${YEAR}${MONTH}/${DAY}/${HOUR}
           </uri-template>
         </dataset>
       </datasets>
@@ -1230,7 +1230,7 @@ The previous parameterized coordinator a
         <dataset name="logs" frequency="${coord:hours(1)}"
                  initial-instance="${logsInitialInstance}" timezone="${timezone}">
           <uri-template>
-            hdfs://bar:9000/app/logs/${market}/${language}/${YEAR}${MONTH}/${DAY}/${HOUR}
+            hdfs://bar:8020/app/logs/${market}/${language}/${YEAR}${MONTH}/${DAY}/${HOUR}
           </uri-template>
         </dataset>
       </datasets>
@@ -1294,12 +1294,12 @@ Datasets Definition:
 .
   <dataset name="logs" frequency="${coord:days(1)}"
            initial-instance="2009-01-01T24:00Z" timezone="UTC">
-    <uri-template>hdfs://bar:9000/app/logs/${YEAR}${MONTH}/${DAY}</uri-template>
+    <uri-template>hdfs://bar:8020/app/logs/${YEAR}${MONTH}/${DAY}</uri-template>
   </dataset>
 .
   <dataset name="weeklySiteAccessStats" frequency="${coord:days(7)}"
            initial-instance="2009-01-07T24:00Z" timezone="UTC">
-    <uri-template>hdfs://bar:9000/app/weeklystats/${YEAR}/${MONTH}/${DAY}</uri-template>
+    <uri-template>hdfs://bar:8020/app/weeklystats/${YEAR}/${MONTH}/${DAY}</uri-template>
   </dataset>
 .
 </datasets>
@@ -1322,7 +1322,7 @@ Datasets Definition file 'datasets.xml':
 
   <dataset name="logs" frequency="${coord:hours(1)}"
            initial-instance="2009-01-01T01:00Z" timezone="UTC">
-    <uri-template>hdfs://bar:9000/app/logs/${YEAR}${MONTH}/${DAY}/${HOUR}</uri-template>
+    <uri-template>hdfs://bar:8020/app/logs/${YEAR}${MONTH}/${DAY}/${HOUR}</uri-template>
   </dataset>
 
 </datasets>
@@ -1335,7 +1335,7 @@ a. Coordinator application definition th
                     start="2009-01-01T24:00Z" end="2009-12-31T24:00Z" timezone="UTC"
                     xmlns="uri:oozie:coordinator:0.1">
       <datasets>
-      	<include>hdfs://foo:9000/app/dataset-definitions/datasets.xml</include>
+        <include>hdfs://foo:8020/app/dataset-definitions/datasets.xml</include>
       </datasets>
       <input-events>
         <data-in name="input" dataset="logs">
@@ -1362,7 +1362,7 @@ b. Coordinator application definition th
                     start="2009-01-01T24:00Z" end="2009-12-31T24:00Z" timezone="UTC"
                     xmlns="uri:oozie:coordinator:0.1">
       <datasets>
-        <include>hdfs://foo:9000/app/dataset-definitions/datasets.xml</include>
+        <include>hdfs://foo:8020/app/dataset-definitions/datasets.xml</include>
       </datasets>
       <input-events>
         <data-in name="input" dataset="logs">
@@ -1393,12 +1393,12 @@ Datasets Definition file 'datasets.xml':
 .
   <dataset name="logs" frequency="${coord:hours(1)}"
            initial-instance="2009-01-01T01:00Z" timezone="UTC">
-    <uri-template>hdfs://bar:9000/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}</uri-template>
+    <uri-template>hdfs://bar:8020/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}</uri-template>
   </dataset>
 .
   <dataset name="stats" frequency="${coord:days(1)}"
            initial-instance="2009-01-01T24:00Z" timezone="UTC">
-    <uri-template>hdfs://bar:9000/app/logs/${YEAR}/${MONTH}/${DAY}</uri-template>
+    <uri-template>hdfs://bar:8020/app/logs/${YEAR}/${MONTH}/${DAY}</uri-template>
   </dataset>
 .
 </datasets>
@@ -1411,7 +1411,7 @@ Coordinator application definition:
                     start="2009-01-01T24:00Z" end="2009-12-31T24:00Z" timezone="UTC"
                     xmlns="uri:oozie:coordinator:0.1">
       <datasets>
-        <include>hdfs://foo:9000/app/dataset-definitions/datasets.xml</include>
+        <include>hdfs://foo:8020/app/dataset-definitions/datasets.xml</include>
       </datasets>
       <input-events>
         <data-in name="input" dataset="logs">
@@ -1451,17 +1451,17 @@ Dataset definitions file 'datasets.xml':
 .
   <dataset name="15MinLogs" frequency="${coord:minutes(15)}"
            initial-instance="2009-01-01T00:15:00Z" timezone="UTC">
-    <uri-template>hdfs://bar:9000/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}/${MINUTE}</uri-template>
+    <uri-template>hdfs://bar:8020/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}/${MINUTE}</uri-template>
   </dataset>
 .
   <dataset name="1HourLogs" frequency="${coord:hours(1)}"
            initial-instance="2009-01-01T01:00:00Z" timezone="UTC">
-    <uri-template>hdfs://bar:9000/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}</uri-template>
+    <uri-template>hdfs://bar:8020/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}</uri-template>
   </dataset>
 .
   <dataset name="1DayLogs" frequency="${coord:hours(24)}"
            initial-instance="2009-01-01T24:00:00Z" timezone="UTC">
-    <uri-template>hdfs://bar:9000/app/logs/${YEAR}/${MONTH}/${DAY}</uri-template>
+    <uri-template>hdfs://bar:8020/app/logs/${YEAR}/${MONTH}/${DAY}</uri-template>
   </dataset>
 </verbatim>
 
@@ -1472,7 +1472,7 @@ Coordinator application definitions. A d
                     start="2009-01-01T01:00Z" end="2009-12-31T24:00Z" timezone="UTC"
                     xmlns="uri:oozie:coordinator:0.1">
       <datasets>
-        <include>hdfs://foo:9000/app/dataset-definitions/datasets.xml</include>
+        <include>hdfs://foo:8020/app/dataset-definitions/datasets.xml</include>
       </datasets>
       <input-events>
         <data-in name="input" dataset="15MinLogs">
@@ -1498,7 +1498,7 @@ Coordinator application definitions. A d
                     start="2009-01-01T24:00Z" end="2009-12-31T24:00Z" timezone="UTC"
                     xmlns="uri:oozie:coordinator:0.1">
       <datasets>
-        <include>hdfs://foo:9000/app/dataset-definitions/datasets.xml</include>
+        <include>hdfs://foo:8020/app/dataset-definitions/datasets.xml</include>
       </datasets>
       <input-events>
         <data-in name="input" dataset="1HourLogs">
@@ -1560,7 +1560,7 @@ Coordinator application definition:
         <dataset name="logs" frequency="${coord:hours(1)}"
                  initial-instance="${logsInitialInstance}" timezone="${timezone}">
           <uri-template>
-            hdfs://bar:9000/app/logs/${market}/${language}/${YEAR}${MONTH}/${DAY}/${HOUR}
+            hdfs://bar:8020/app/logs/${market}/${language}/${YEAR}${MONTH}/${DAY}/${HOUR}
           </uri-template>
         </dataset>
       </datasets>
@@ -1624,7 +1624,7 @@ Coordinator application definition:
         <dataset name="logs" frequency="${coord:days(1)}"
                  initial-instance="2009-01-01T24:00Z" timezone="UTC">
           <uri-template>
-            hdfs://bar:9000/app/logs/${market}/${language}/${YEAR}${MONTH}/${DAY}
+            hdfs://bar:8020/app/logs/${market}/${language}/${YEAR}${MONTH}/${DAY}
           </uri-template>
         </dataset>
       </datasets>
@@ -1689,7 +1689,7 @@ Coordinator application definition:
         <dataset name="logs" frequency="${coord:hours(1)}"
                  initial-instance="2009-01-01T01:00Z" timezone="UTC">
           <uri-template>
-            hdfs://bar:9000/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}
+            hdfs://bar:8020/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}
           </uri-template>
         </dataset>
       </datasets>
@@ -1710,23 +1710,23 @@ Coordinator application definition:
 If the dataset instances available in HDFS at the time a coordinator action is executed are:
 
 <verbatim>
-  hdfs://bar:9000/app/logs/2009/01/01
-  hdfs://bar:9000/app/logs/2009/01/02
-  hdfs://bar:9000/app/logs/2009/01/03
+  hdfs://bar:8020/app/logs/2009/01/01
+  hdfs://bar:8020/app/logs/2009/01/02
+  hdfs://bar:8020/app/logs/2009/01/03
   	(missing)
-  hdfs://bar:9000/app/logs/2009/01/05
+  hdfs://bar:8020/app/logs/2009/01/05
   (missing)
-  hdfs://bar:9000/app/logs/2009/01/07
+  hdfs://bar:8020/app/logs/2009/01/07
   (missing)
   (missing)
-  hdfs://bar:9000/app/logs/2009/01/10
+  hdfs://bar:8020/app/logs/2009/01/10
 </verbatim>
 
 Then, the dataset instances for the input events for the coordinator action will be:
 
 <verbatim>
-  hdfs://bar:9000/app/logs/2009/01/05
-  hdfs://bar:9000/app/logs/2009/01/10
+  hdfs://bar:8020/app/logs/2009/01/05
+  hdfs://bar:8020/app/logs/2009/01/10
 </verbatim>
 
 ---++++ 6.6.6. coord:future(int n, int limit) EL Function for Synchronous Datasets
@@ -1753,7 +1753,7 @@ Coordinator application definition:
         <dataset name="logs" frequency="${coord:hours(1)}"
                  initial-instance="2009-01-01T01:00Z" timezone="UTC">
           <uri-template>
-            hdfs://bar:9000/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}
+            hdfs://bar:8020/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}
           </uri-template>
         </dataset>
       </datasets>
@@ -1774,30 +1774,30 @@ Coordinator application definition:
 If the dataset instances available in HDFS at the time a coordinator action is executed are:
 
 <verbatim>
-  hdfs://bar:9000/app/logs/2009/02/01
+  hdfs://bar:8020/app/logs/2009/02/01
   (missing)
   (missing)
   (missing)
-  hdfs://bar:9000/app/logs/2009/02/04
+  hdfs://bar:8020/app/logs/2009/02/04
  (missing)
  (missing)
-  hdfs://bar:9000/app/logs/2009/02/07
+  hdfs://bar:8020/app/logs/2009/02/07
   (missing)
   (missing)
   (missing)
-  hdfs://bar:9000/app/logs/2009/02/11
+  hdfs://bar:8020/app/logs/2009/02/11
   (missing)
   (missing)
-  hdfs://bar:9000/app/logs/2009/02/14
+  hdfs://bar:8020/app/logs/2009/02/14
   (missing)
-  hdfs://bar:9000/app/logs/2009/02/16
+  hdfs://bar:8020/app/logs/2009/02/16
 </verbatim>
 
 Then, the dataset instances for the input events for the coordinator action will be:
 
 <verbatim>
-  hdfs://bar:9000/app/logs/2009/02/01
-  hdfs://bar:9000/app/logs/2009/02/07
+  hdfs://bar:8020/app/logs/2009/02/01
+  hdfs://bar:8020/app/logs/2009/02/07
 </verbatim>
 
 ---++++ 6.6.7. coord:version(int n) EL Function for Asynchronous Datasets
@@ -1824,7 +1824,7 @@ Coordinator application definition:
         <dataset name="logs" frequency="${coord:hours(1)}"
                  initial-instance="2009-01-01T00:00Z"  timezone="UTC">
           <uri-template>
-            hdfs://bar:9000/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}
+            hdfs://bar:8020/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}
           </uri-template>
         </dataset>
       </datasets>
@@ -1868,7 +1868,7 @@ Coordinator application definition:
         <dataset name="logs" frequency="${coord:hours(1)}"
                  initial-instance="2009-01-01T01:00Z" timezone="UTC">
           <uri-template>
-             hdfs://bar:9000/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}
+             hdfs://bar:8020/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}
           </uri-template>
         </dataset>
       </datasets>
@@ -1880,7 +1880,7 @@ Coordinator application definition:
       </input-events>
       <action>
         <workflow>
-          <app-path>hdfs://bar:9000/usr/joe/logsprocessor-wf</app-path>
+          <app-path>hdfs://bar:8020/usr/joe/logsprocessor-wf</app-path>
           <configuration>
             <property>
               <name>wfInput</name>
@@ -1897,11 +1897,11 @@ In this example, each coordinator action
 The =${coord:dataIn(String name)}= function enables the coordinator application to pass the URIs of all the dataset instances for the last day to the workflow job triggered by the coordinator action. For the =2009-01-02T00:00Z= run, the =${coord:dataIn('inputLogs')}= function will resolve to:
 
 <verbatim>
-  hdfs://bar:9000/app/logs/2009/01/01/01,
-  hdfs://bar:9000/app/logs/2009/01/01/02,
+  hdfs://bar:8020/app/logs/2009/01/01/01,
+  hdfs://bar:8020/app/logs/2009/01/01/02,
   ...
-  hdfs://bar:9000/app/logs/2009/01/01/23,
-  hdfs://bar:9000/app/logs/2009/02/00/00
+  hdfs://bar:8020/app/logs/2009/01/01/23,
+  hdfs://bar:8020/app/logs/2009/02/00/00
 </verbatim>
 
 The =${coord:dataIn('inputLogs')}= is used for workflow job configuration property 'wfInput' for the workflow job that will be submitted by the coordinator action on January 2nd 2009. Thus, when the workflow job gets started, the 'wfInput' workflow job configuration property will contain all the above URIs.
@@ -1923,12 +1923,12 @@ Datasets Definition file 'datasets.xml'
 .
   <dataset name="hourlyLogs" frequency="${coord:hours(1)}"
            initial-instance="2009-01-01T01:00Z" timezone="UTC">
-    <uri-template>hdfs://bar:9000/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}</uri-template>
+    <uri-template>hdfs://bar:8020/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}</uri-template>
   </dataset>
 .
   <dataset name="dailyLogs" frequency="${coord:days(1)}"
            initial-instance="2009-01-01T24:00Z" timezone="UTC">
-    <uri-template>hdfs://bar:9000/app/daily-logs/${YEAR}/${MONTH}/${DAY}</uri-template>
+    <uri-template>hdfs://bar:8020/app/daily-logs/${YEAR}/${MONTH}/${DAY}</uri-template>
   </dataset>
 </datasets>
 </verbatim>
@@ -1940,7 +1940,7 @@ Coordinator application definition:
                     start="2009-01-01T24:00Z" end="2009-12-31T24:00Z" timezone="UTC"
                     xmlns="uri:oozie:coordinator:0.1">
       <datasets>
-        <include>hdfs://foo:9000/app/dataset-definitions/datasets.xml</include>
+        <include>hdfs://foo:8020/app/dataset-definitions/datasets.xml</include>
       </datasets>
       <input-events>
         <data-in name="inputLogs" dataset="hourlyLogs">
@@ -1955,7 +1955,7 @@ Coordinator application definition:
       </output-events>
       <action>
         <workflow>
-          <app-path>hdfs://bar:9000/usr/joe/logsaggretor-wf</app-path>
+          <app-path>hdfs://bar:8020/usr/joe/logsaggretor-wf</app-path>
           <configuration>
             <property>
               <name>wfInput</name>
@@ -1976,7 +1976,7 @@ In this example, each coordinator action
 The =${coord:dataOut(String name)}= function enables the coordinator application to pass the URIs of the dataset instance that will be created by the workflow job triggered by the coordinator action. For the =2009-01-01T24:00Z= run, the =${coord:dataOut('dailyLogs')}= function will resolve to:
 
 <verbatim>
-  hdfs://bar:9000/app/logs/2009/01/02
+  hdfs://bar:8020/app/logs/2009/01/02
 </verbatim>
 
 NOTE: The use of =24:00= as the hour is useful for humans to denote the end of the day, but internally Oozie handles it as the zero hour of the next day.
@@ -2002,7 +2002,7 @@ Coordinator application definition:
      <datasets>
        <dataset name="hourlyLogs" frequency="${coord:hours(1)}"
                 initial-instance="2009-01-01T01:00Z" timezone="UTC">
-         <uri-template>hdfs://bar:9000/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}</uri-template>
+         <uri-template>hdfs://bar:8020/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}</uri-template>
        </dataset>
      </datasets>
       <input-events>
@@ -2058,7 +2058,7 @@ Coordinator application definition:
      <datasets>
        <dataset name="hourlyLogs" frequency="${coord:hours(1)}"
                 initial-instance="2011-04-01T01:00Z" timezone="UTC">
-         <uri-template>hdfs://bar:9000/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}</uri-template>
+         <uri-template>hdfs://bar:8020/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}</uri-template>
        </dataset>
      </datasets>
       <input-events>
@@ -2114,7 +2114,7 @@ For example, if baseDate is '2009-01-01T
       ......
       <action>
         <workflow>
-          <app-path>hdfs://bar:9000/usr/joe/logsaggretor-wf</app-path>
+          <app-path>hdfs://bar:8020/usr/joe/logsaggretor-wf</app-path>
           <configuration>
             <property>
               <name>nextInstance</name>
@@ -2161,7 +2161,7 @@ Coordinator application definition: A da
       <datasets>
         <dataset name="hourlyLogs" frequency="${coord:hours(1)}"
                  initial-instance="2008-12-31T19:30Z"  timezone="UTC">
-          <uri-template>hdfs://bar:9000/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}</uri-template>
+          <uri-template>hdfs://bar:8020/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}</uri-template>
         </dataset>
       </datasets>
       <input-events>
@@ -2195,7 +2195,7 @@ The following 2 use cases will be used t
     <dataset name="eastlogs" frequency="${coord:hours(1)}"
              initial-instance="2009-01-01T06:00Z" timezone="America/New_York">
       <uri-template>
-         hdfs://bar:9000/app/logs/eastcoast/${YEAR}/${MONTH}/${DAY}/${HOUR}
+         hdfs://bar:8020/app/logs/eastcoast/${YEAR}/${MONTH}/${DAY}/${HOUR}
       </uri-template>
     </dataset>
   </datasets>
@@ -2207,7 +2207,7 @@ The following 2 use cases will be used t
   </input-events>
   <action>
    <workflow>
-     <app-path>hdfs://bar:9000/usr/joe/logsaggretor-wf</app-path>
+     <app-path>hdfs://bar:8020/usr/joe/logsaggretor-wf</app-path>
      <configuration>
        <property>
          <name>wfInput</name>
@@ -2237,13 +2237,13 @@ Note that because the coordinator applic
     <dataset name="eastlogs" frequency="${coord:hours(1)}"
              initial-instance="2009-01-01T06:00Z" timezone="America/New_York">
       <uri-template>
-         hdfs://bar:9000/app/logs/eastcoast/${YEAR}/${MONTH}/${DAY}/${HOUR}
+         hdfs://bar:8020/app/logs/eastcoast/${YEAR}/${MONTH}/${DAY}/${HOUR}
       </uri-template>
     </dataset>
     <dataset name="estlogs" frequency="${coord:hours(1)}"
              initial-instance="2009-01-01T09:00Z" timezone="America/Los_Angeles">
       <uri-template>
-         hdfs://bar:9000/app/logs/westcoast/${YEAR}/${MONTH}/${DAY}/${HOUR}
+         hdfs://bar:8020/app/logs/westcoast/${YEAR}/${MONTH}/${DAY}/${HOUR}
       </uri-template>
     </dataset>
   </datasets>
@@ -2259,7 +2259,7 @@ Note that because the coordinator applic
   </input-events>
   <action>
    <workflow>
-     <app-path>hdfs://bar:9000/usr/joe/logsaggretor-wf</app-path>
+     <app-path>hdfs://bar:8020/usr/joe/logsaggretor-wf</app-path>
      <configuration>
        <property>
          <name>wfInput</name>
@@ -2287,13 +2287,13 @@ The data input range for the East coast 
     <dataset name="eastlogs" frequency="${coord:hours(1)}"
              initial-instance="2009-01-01T06:00Z" timezone="America/New_York">
       <uri-template>
-         hdfs://bar:9000/app/logs/eastcoast/${YEAR}/${MONTH}/${DAY}/${HOUR}
+         hdfs://bar:8020/app/logs/eastcoast/${YEAR}/${MONTH}/${DAY}/${HOUR}
       </uri-template>
     </dataset>
     <dataset name="europelogs" frequency="${coord:hours(1)}"
              initial-instance="2009-01-01T01:00Z" timezone="Europe/Berlin">
       <uri-template>
-         hdfs://bar:9000/app/logs/europe/${YEAR}/${MONTH}/${DAY}/${HOUR}
+         hdfs://bar:8020/app/logs/europe/${YEAR}/${MONTH}/${DAY}/${HOUR}
       </uri-template>
     </dataset>
   </datasets>
@@ -2309,7 +2309,7 @@ The data input range for the East coast 
   </input-events>
   <action>
    <workflow>
-     <app-path>hdfs://bar:9000/usr/joe/logsaggretor-wf</app-path>
+     <app-path>hdfs://bar:8020/usr/joe/logsaggretor-wf</app-path>
      <configuration>
        <property>
          <name>wfInput</name>
@@ -2410,7 +2410,7 @@ must be submitted to the Oozie coordinat
     </property>
     <property>
         <name>oozie.coord.application.path</name>
-        <value>hdfs://foo:9000/user/joe/myapps/hello-coord.xml</value>
+        <value>hdfs://foo:8020/user/joe/myapps/hello-coord.xml</value>
     </property>
     ...
 </configuration>
@@ -2435,7 +2435,7 @@ If you add *sla* tags to the Coordinator
                  initial-instance="2009-01-01T09:00Z"
                  timezone="America/Los_Angeles">
             <uri-template>
-                hdfs://bar:9000/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}/data
+                hdfs://bar:8020/app/logs/${YEAR}/${MONTH}/${DAY}/${HOUR}/data
             </uri-template>
         </dataset>
     </datasets>
@@ -2447,7 +2447,7 @@ If you add *sla* tags to the Coordinator
     </input-events>
     <action>
         <workflow>
-            <app-path>hdfs://bar:9000/usr/joe/hello-wf</app-path>
+            <app-path>hdfs://bar:8020/usr/joe/hello-wf</app-path>
             <configuration>
                 <property>
                     <name>input</name>

Modified: oozie/trunk/docs/src/site/twiki/DG_CommandLineTool.twiki
URL: http://svn.apache.org/viewvc/oozie/trunk/docs/src/site/twiki/DG_CommandLineTool.twiki?rev=1393905&r1=1393904&r2=1393905&view=diff
==============================================================================
--- oozie/trunk/docs/src/site/twiki/DG_CommandLineTool.twiki (original)
+++ oozie/trunk/docs/src/site/twiki/DG_CommandLineTool.twiki Thu Oct  4 03:45:33 2012
@@ -169,7 +169,7 @@ supported in Oozie 3.0 or later.
 Example:
 
 <verbatim>
-$ oozie job -oozie http://localhost:8080/oozie -config job.properties -submit
+$ oozie job -oozie http://localhost:11000/oozie -config job.properties -submit
 .
 job: 14-20090525161321-oozie-joe
 </verbatim>
@@ -189,7 +189,7 @@ The job will be created, but it will not
 Example:
 
 <verbatim>
-$ oozie job -oozie http://localhost:8080/oozie -start 14-20090525161321-oozie-joe
+$ oozie job -oozie http://localhost:11000/oozie -start 14-20090525161321-oozie-joe
 </verbatim>
 
 The =start= option starts a previously submitted workflow job, coordinator job or bundle job that is in =PREP= status.
@@ -202,7 +202,7 @@ bundle job will be in =RUNNING=status.
 Example:
 
 <verbatim>
-$ oozie job -oozie http://localhost:8080/oozie -config job.properties -run
+$ oozie job -oozie http://localhost:11000/oozie -config job.properties -run
 .
 job: 15-20090525161321-oozie-joe
 </verbatim>
@@ -224,7 +224,7 @@ The job will be created and it will star
 Example:
 
 <verbatim>
-$ oozie job -oozie http://localhost:8080/oozie -suspend 14-20090525161321-oozie-joe
+$ oozie job -oozie http://localhost:11000/oozie -suspend 14-20090525161321-oozie-joe
 </verbatim>
 
 The =suspend= option suspends a workflow job in =RUNNING= status.
@@ -240,7 +240,7 @@ When the bundle job is suspended, runnin
 Example:
 
 <verbatim>
-$ oozie job -oozie http://localhost:8080/oozie -resume 14-20090525161321-oozie-joe
+$ oozie job -oozie http://localhost:11000/oozie -resume 14-20090525161321-oozie-joe
 </verbatim>
 
 The =resume= option resumes a workflow job in =SUSPENDED= status.
@@ -262,7 +262,7 @@ When the bundle job is resumed, suspende
 Example:
 
 <verbatim>
-$ oozie job -oozie http://localhost:8080/oozie -kill 14-20090525161321-oozie-joe
+$ oozie job -oozie http://localhost:11000/oozie -kill 14-20090525161321-oozie-joe
 </verbatim>
 
 The =kill= option kills a workflow job in =PREP=, =SUSPENDED= or =RUNNING= status and a coordinator/bundle job in
@@ -275,7 +275,7 @@ After the command is executed the job wi
 Example:
 
 <verbatim>
-$ oozie job -oozie http://localhost:8080/oozie -change 14-20090525161321-oozie-joe -value endtime=2011-12-01T05:00Z\;concurrency=100\;2011-10-01T05:00Z
+$ oozie job -oozie http://localhost:11000/oozie -change 14-20090525161321-oozie-joe -value endtime=2011-12-01T05:00Z\;concurrency=100\;2011-10-01T05:00Z
 </verbatim>
 
 The =change= option changes a coordinator job that is not in =KILLED= status.
@@ -296,7 +296,7 @@ After the command is executed the job's 
 Example:
 
 <verbatim>
-$ oozie job -oozie http://localhost:8080/oozie -change 14-20090525161321-oozie-joe -value pausetime=2011-12-01T05:00Z
+$ oozie job -oozie http://localhost:11000/oozie -change 14-20090525161321-oozie-joe -value pausetime=2011-12-01T05:00Z
 </verbatim>
 
 The =change= option changes a bundle job that is not in =KILLED= status.
@@ -313,7 +313,7 @@ After the command is executed the job's 
 Example:
 
 <verbatim>
-$ oozie job -oozie http://localhost:8080/oozie -config job.properties -rerun 14-20090525161321-oozie-joe
+$ oozie job -oozie http://localhost:11000/oozie -config job.properties -rerun 14-20090525161321-oozie-joe
 </verbatim>
 
 The =rerun= option reruns a completed ( =SUCCEEDED=, =FAILED= or =KILLED= ) job skipping the specified nodes.
@@ -369,11 +369,11 @@ After the command is executed the rerun 
 Example:
 
 <verbatim>
-$ oozie job -oozie http://localhost:8080/oozie -info 14-20090525161321-oozie-joe
+$ oozie job -oozie http://localhost:11000/oozie -info 14-20090525161321-oozie-joe
 .
 .----------------------------------------------------------------------------------------------------------------------------------------------------------------
 Workflow Name :  map-reduce-wf
-App Path      :  hdfs://localhost:9000/user/joe/workflows/map-reduce
+App Path      :  hdfs://localhost:8020/user/joe/workflows/map-reduce
 Status        :  SUCCEEDED
 Run           :  0
 User          :  joe
@@ -406,7 +406,7 @@ Currently, the filter option can be used
 An example below shows how the =verbose= option can be used to gather action statistics information for a job:
 
 <verbatim>
-$ oozie job -oozie http://localhost:8080/oozie -info 0000001-111219170928042-oozie-para-W@mr-node -verbose
+$ oozie job -oozie http://localhost:11000/oozie -info 0000001-111219170928042-oozie-para-W@mr-node -verbose
 ID : 0000001-111219170928042-oozie-para-W@mr-node
 ------------------------------------------------------------------------------------------------------------------------------------
 Console URL       : http://localhost:50030/jobdetails.jsp?jobid=job_201112191708_0006
@@ -416,7 +416,7 @@ External ID       : job_201112191708_000
 External Status   : SUCCEEDED
 Name              : mr-node
 Retries           : 0
-Tracker URI       : localhost:9001
+Tracker URI       : localhost:8021
 Type              : map-reduce
 Started           : 2011-12-20 01:12
 Status            : OK
@@ -435,7 +435,7 @@ Note that the user can turn on/off Exter
 Example:
 
 <verbatim>
-$ oozie job -oozie http://localhost:8080/oozie -definition 14-20090525161321-oozie-joe
+$ oozie job -oozie http://localhost:11000/oozie -definition 14-20090525161321-oozie-joe
 
 <workflow-app xmlns="uri:oozie:workflow:0.2" name="sm3-segment-2413">
 	<start to="p0"/>
@@ -452,7 +452,7 @@ Example:
 
 <verbatim>
 
-$ oozie job -oozie http://localhost:8080/oozie -log 14-20090525161321-oozie-joe
+$ oozie job -oozie http://localhost:11000/oozie -log 14-20090525161321-oozie-joe
 
 </verbatim>
 
@@ -474,14 +474,14 @@ Example:
 
 <verbatim>
 
-$ oozie job -oozie http://localhost:8080/oozie -dryrun -config job.properties
+$ oozie job -oozie http://localhost:11000/oozie -dryrun -config job.properties
 ***coordJob after parsing: ***
 <coordinator-app xmlns="uri:oozie:coordinator:0.1" name="sla_coord" frequency="20"
 start="2009-03-06T010:00Z" end="2009-03-20T11:00Z" timezone="America/Los_Angeles">
   <output-events>
     <data-out name="Output" dataset="DayLogs">
       <dataset name="DayLogs" frequency="1440" initial-instance="2009-01-01T00:00Z" timezone="UTC" freq_timeunit="MINUTE" end_of_duration="NONE">
-        <uri-template>hdfs://localhost:9000/user/angeloh/coord_examples/${YEAR}/${MONTH}/${DAY}</uri-template>
+        <uri-template>hdfs://localhost:8020/user/angeloh/coord_examples/${YEAR}/${MONTH}/${DAY}</uri-template>
       </dataset>
       <instance>${coord:current(0)}</instance>
     </data-out>
@@ -498,9 +498,9 @@ coordAction instance: 1:
 start="2009-03-06T010:00Z" end="2009-03-20T11:00Z" timezone="America/Los_Angeles">
   <output-events>
     <data-out name="Output" dataset="DayLogs">
-      <uris>hdfs://localhost:9000/user/angeloh/coord_examples/2009/03/06</uris>
+      <uris>hdfs://localhost:8020/user/angeloh/coord_examples/2009/03/06</uris>
       <dataset name="DayLogs" frequency="1440" initial-instance="2009-01-01T00:00Z" timezone="UTC" freq_timeunit="MINUTE" end_of_duration="NONE">
-        <uri-template>hdfs://localhost:9000/user/angeloh/coord_examples/${YEAR}/${MONTH}/${DAY}</uri-template>
+        <uri-template>hdfs://localhost:8020/user/angeloh/coord_examples/${YEAR}/${MONTH}/${DAY}</uri-template>
       </dataset>
     </data-out>
   </output-events>
@@ -526,7 +526,7 @@ specified path must be an HDFS path.
 Example:
 
 <verbatim>
-$ oozie jobs -oozie http://localhost:8080/oozie -localtime -len 2 -filter status=RUNNING
+$ oozie jobs -oozie http://localhost:11000/oozie -localtime -len 2 -filter status=RUNNING
 .
 Job Id                          Workflow Name         Status     Run  User      Group     Created                Started                 Ended
 .----------------------------------------------------------------------------------------------------------------------------------------------------------------
@@ -567,7 +567,7 @@ name. Multiple values must be specified 
 Example:
 
 <verbatim>
-$ oozie jobs -oozie http://localhost:8080/oozie -jobtype coordinator
+$ oozie jobs -oozie http://localhost:11000/oozie -jobtype coordinator
 .
 Job ID                                                                                   App Name               Status      Freq Unit                    Started                 Next Materialized
 .----------------------------------------------------------------------------------------------------------------------------------------------------------------
@@ -586,7 +586,7 @@ The =jobtype= option specified the job t
 Example:
 
 <verbatim>
-$ oozie jobs -oozie http://localhost:8080/oozie -jobtype bundle
+$ oozie jobs -oozie http://localhost:11000/oozie -jobtype bundle
 Job ID                                   Bundle Name    Status    Kickoff             Created             User         Group
 ------------------------------------------------------------------------------------------------------------------------------------
 0000027-110322105610515-oozie-chao-B     BUNDLE-TEST    RUNNING   2012-01-15 00:24    2011-03-22 18:07    joe        users
@@ -606,7 +606,7 @@ The =jobtype= option specified the job t
 Example:
 
 <verbatim>
-$ oozie admin -oozie http://localhost:8080/oozie -status
+$ oozie admin -oozie http://localhost:11000/oozie -status
 .
 Safemode: OFF
 </verbatim>
@@ -620,7 +620,7 @@ It returns the current status of the Ooz
 Example:
 
 <verbatim>
-$ oozie admin -oozie http://localhost:8080/oozie -systemmode [NORMAL|NOWEBSERVICE|SAFEMODE]
+$ oozie admin -oozie http://localhost:11000/oozie -systemmode [NORMAL|NOWEBSERVICE|SAFEMODE]
 .
 Safemode: ON
 </verbatim>
@@ -632,7 +632,7 @@ It returns the current status of the Ooz
 Example:
 
 <verbatim>
-$ oozie admin -oozie http://localhost:8080/oozie -version
+$ oozie admin -oozie http://localhost:11000/oozie -version
 .
 Oozie server build version: 2.0.2.1-0.20.1.3092118008--
 </verbatim>
@@ -646,7 +646,7 @@ It returns the Oozie server build versio
 Example:
 
 <verbatim>
-$ oozie admin -oozie http://localhost:8080/oozie -queuedump
+$ oozie admin -oozie http://localhost:11000/oozie -queuedump
 
 [Server Queue Dump]:
 (coord_action_start,1),(coord_action_start,1),(coord_action_start,1)
@@ -683,7 +683,7 @@ It performs an XML Schema validation on 
 Example:
 
 <verbatim>
-$ oozie sla -oozie http://localhost:8080/oozie -len 1
+$ oozie sla -oozie http://localhost:11000/oozie -len 1
 .
 <sla-message>
    <event>
@@ -712,15 +712,15 @@ The return message is XML format that ca
 Example:
 
 <verbatim>
-$ oozie pig -oozie http://localhost:8080/oozie -file pigScriptFile -config job.properties -X -param_file params
+$ oozie pig -oozie http://localhost:11000/oozie -file pigScriptFile -config job.properties -X -param_file params
 .
 job: 14-20090525161321-oozie-joe-W
 .
 $cat job.properties
-fs.default.name=hdfs://localhost:9000
+fs.default.name=hdfs://localhost:8020
 mapreduce.jobtracker.kerberos.principal=ccc
 dfs.namenode.kerberos.principal=ddd
-oozie.libpath=hdfs://localhost:9000/user/oozie/pig/lib/
+oozie.libpath=hdfs://localhost:8020/user/oozie/pig/lib/
 </verbatim>
 
 The parameters for the job must be provided in a Java Properties file (.properties). jobtracker, namenode, libpath must be
@@ -766,7 +766,7 @@ and =jobs= sub-commands
 Example:
 
 <verbatim>
-$ oozie mapreduce -oozie http://localhost:8080/oozie -config job.properties
+$ oozie mapreduce -oozie http://localhost:11000/oozie -config job.properties
 </verbatim>
 
 The parameters must be in the Java Properties file (.properties). This file must be specified for a map-reduce job.

Modified: oozie/trunk/docs/src/site/twiki/DG_Examples.twiki
URL: http://svn.apache.org/viewvc/oozie/trunk/docs/src/site/twiki/DG_Examples.twiki?rev=1393905&r1=1393904&r2=1393905&view=diff
==============================================================================
--- oozie/trunk/docs/src/site/twiki/DG_Examples.twiki (original)
+++ oozie/trunk/docs/src/site/twiki/DG_Examples.twiki Thu Oct  4 03:45:33 2012
@@ -29,7 +29,7 @@ For the Streaming and Pig example, the [
 
 Add Oozie =bin/= to the environment PATH.
 
-The examples assume the JobTracker is =localhost:9001= and the NameNode is =hdfs://localhost:9000=. If the actual
+The examples assume the JobTracker is =localhost:8021= and the NameNode is =hdfs://localhost:8020=. If the actual
 values are different, the job properties files in the examples directory must be edited to the correct values.
 
 The example applications are under the examples/app directory, one directory per example. The directory contains the
@@ -45,7 +45,7 @@ The examples create output under the =ex
 *How to run an example application:*
 
 <verbatim>
-$ oozie job -oozie http://localhost:8080/oozie -config examples/apps/map-reduce/job.properties -run
+$ oozie job -oozie http://localhost:11000/oozie -config examples/apps/map-reduce/job.properties -run
 .
 job: 14-20090525161321-oozie-tucu
 </verbatim>
@@ -53,11 +53,11 @@ job: 14-20090525161321-oozie-tucu
 Check the workflow job status:
 
 <verbatim>
-$ oozie job -oozie http://localhost:8080/oozie -info 14-20090525161321-oozie-tucu
+$ oozie job -oozie http://localhost:11000/oozie -info 14-20090525161321-oozie-tucu
 .
 .----------------------------------------------------------------------------------------------------------------------------------------------------------------
 Workflow Name :  map-reduce-wf
-App Path      :  hdfs://localhost:9000/user/tucu/examples/apps/map-reduce
+App Path      :  hdfs://localhost:8020/user/tucu/examples/apps/map-reduce
 Status        :  SUCCEEDED
 Run           :  0
 User          :  tucu
@@ -73,13 +73,13 @@ mr-node                 map-reduce  OK  
 .----------------------------------------------------------------------------------------------------------------------------------------------------------------
 </verbatim>
 
-To check the workflow job status via the Oozie web console, with a browser go to =http://localhost:8080/oozie=.
+To check the workflow job status via the Oozie web console, with a browser go to =http://localhost:11000/oozie=.
 
 To avoid having to provide the =-oozie= option with the Oozie URL with every =oozie= command, set =OOZIE_URL= env 
 variable to the Oozie URL in the shell environment. For example:
 
 <verbatim>
-$ export OOZIE_URL="http://localhost:8080/oozie"
+$ export OOZIE_URL="http://localhost:11000/oozie"
 $
 $ oozie job -info 14-20090525161321-oozie-tucu
 </verbatim>
@@ -101,14 +101,14 @@ import java.util.Properties;
     ...
 .
     // get a OozieClient for local Oozie
-    OozieClient wc = new OozieClient("http://bar:8080/oozie");
+    OozieClient wc = new OozieClient("http://bar:11000/oozie");
 .
     // create a workflow job configuration and set the workflow application path
     Properties conf = wc.createConfiguration();
-    conf.setProperty(OozieClient.APP_PATH, "hdfs://foo:9000/usr/tucu/my-wf-app");
+    conf.setProperty(OozieClient.APP_PATH, "hdfs://foo:8020/usr/tucu/my-wf-app");
 .
     // setting workflow parameters
-    conf.setProperty("jobTracker", "foo:9001");
+    conf.setProperty("jobTracker", "foo:8021");
     conf.setProperty("inputDir", "/usr/tucu/inputdir");
     conf.setProperty("outputDir", "/usr/tucu/outputdir");
     ...
@@ -156,10 +156,10 @@ import java.util.Properties;
 .
     // create a workflow job configuration and set the workflow application path
     Properties conf = wc.createConfiguration();
-    conf.setProperty(OozieClient.APP_PATH, "hdfs://foo:9000/usr/tucu/my-wf-app");
+    conf.setProperty(OozieClient.APP_PATH, "hdfs://foo:8020/usr/tucu/my-wf-app");
 .
     // setting workflow parameters
-    conf.setProperty("jobTracker", "foo:9001");
+    conf.setProperty("jobTracker", "foo:8021");
     conf.setProperty("inputDir", "/usr/tucu/inputdir");
     conf.setProperty("outputDir", "/usr/tucu/outputdir");
     ...
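
The DG_Examples hunks above touch only a few lines inside a larger Java
snippet. For context, a minimal self-contained sketch of that client flow,
assembled from the fragments shown (the foo and bar hosts and the paths are
the documentation's placeholders; the polling loop is an assumption based on
the standard OozieClient API and is not part of this diff):

    import java.util.Properties;
    import org.apache.oozie.client.OozieClient;
    import org.apache.oozie.client.WorkflowJob;

    public class WorkflowSubmitter {
        public static void main(String[] args) throws Exception {
            // get an OozieClient for the Oozie server on its default port
            OozieClient wc = new OozieClient("http://bar:11000/oozie");

            // create a workflow job configuration and set the workflow application path
            Properties conf = wc.createConfiguration();
            conf.setProperty(OozieClient.APP_PATH, "hdfs://foo:8020/usr/tucu/my-wf-app");

            // setting workflow parameters, using the default JobTracker port
            conf.setProperty("jobTracker", "foo:8021");
            conf.setProperty("inputDir", "/usr/tucu/inputdir");
            conf.setProperty("outputDir", "/usr/tucu/outputdir");

            // submit and start the workflow job
            String jobId = wc.run(conf);
            System.out.println("Workflow job submitted: " + jobId);

            // poll the server until the workflow job finishes (assumed usage)
            while (wc.getJobInfo(jobId).getStatus() == WorkflowJob.Status.RUNNING) {
                Thread.sleep(10 * 1000);
            }
            System.out.println("Workflow job completed: " + wc.getJobInfo(jobId).getStatus());
        }
    }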

Modified: oozie/trunk/docs/src/site/twiki/DG_HiveActionExtension.twiki
URL: http://svn.apache.org/viewvc/oozie/trunk/docs/src/site/twiki/DG_HiveActionExtension.twiki?rev=1393905&r1=1393904&r2=1393905&view=diff
==============================================================================
--- oozie/trunk/docs/src/site/twiki/DG_HiveActionExtension.twiki (original)
+++ oozie/trunk/docs/src/site/twiki/DG_HiveActionExtension.twiki Thu Oct  4 03:45:33 2012
@@ -108,8 +108,8 @@ expressions.
     ...
     <action name="myfirsthivejob">
         <hive xmlns="uri:oozie:hive-action:0.2">
-            <job-tracker>foo:9001</job-tracker>
-            <name-node>bar:9000</name-node>
+            <job-tracker>foo:8021</job-tracker>
+            <name-node>bar:8020</name-node>
             <prepare>
                 <delete path="${jobOutput}"/>
             </prepare>

Modified: oozie/trunk/docs/src/site/twiki/DG_ShellActionExtension.twiki
URL: http://svn.apache.org/viewvc/oozie/trunk/docs/src/site/twiki/DG_ShellActionExtension.twiki?rev=1393905&r1=1393904&r2=1393905&view=diff
==============================================================================
--- oozie/trunk/docs/src/site/twiki/DG_ShellActionExtension.twiki (original)
+++ oozie/trunk/docs/src/site/twiki/DG_ShellActionExtension.twiki Thu Oct  4 03:45:33 2012
@@ -159,7 +159,7 @@ How to run any shell script or perl scri
 The corresponding job properties file used to submit the Oozie job could be as follows:
 
 <verbatim>
-oozie.wf.application.path=hdfs://localhost:9000/user/kamrul/workflows/script
+oozie.wf.application.path=hdfs://localhost:8020/user/kamrul/workflows/script
 
 #Execute is expected to be in the Workflow directory.
 #Shell Script to run
@@ -169,8 +169,8 @@ EXEC=script.sh
 #Perl script
 #EXEC=script.pl
 
-jobTracker=localhost:9001
-nameNode=hdfs://localhost:9000
+jobTracker=localhost:8021
+nameNode=hdfs://localhost:8020
 queueName=default
 
 </verbatim>
@@ -209,13 +209,13 @@ How to run any java program bundles in a
 The corresponding job properties file used to submit the Oozie job could be as follows:
 
 <verbatim>
-oozie.wf.application.path=hdfs://localhost:9000/user/kamrul/workflows/script
+oozie.wf.application.path=hdfs://localhost:8020/user/kamrul/workflows/script
 
 #Hello.jar file is expected to be in the Workflow directory.
 EXEC=Hello.jar
 
-jobTracker=localhost:9001
-nameNode=hdfs://localhost:9000
+jobTracker=localhost:8021
+nameNode=hdfs://localhost:8020
 queueName=default
 </verbatim>
 

Modified: oozie/trunk/docs/src/site/twiki/DG_SqoopActionExtension.twiki
URL: http://svn.apache.org/viewvc/oozie/trunk/docs/src/site/twiki/DG_SqoopActionExtension.twiki?rev=1393905&r1=1393904&r2=1393905&view=diff
==============================================================================
--- oozie/trunk/docs/src/site/twiki/DG_SqoopActionExtension.twiki (original)
+++ oozie/trunk/docs/src/site/twiki/DG_SqoopActionExtension.twiki Thu Oct  4 03:45:33 2012
@@ -112,8 +112,8 @@ Using the =command= element:
     ...
     <action name="myfirsthivejob">
         <sqoop xmlns="uri:oozie:sqoop-action:0.2">
-            <job-traker>foo:9001</job-tracker>
-            <name-node>bar:9000</name-node>
+            <job-tracker>foo:8021</job-tracker>
+            <name-node>bar:8020</name-node>
             <prepare>
                 <delete path="${jobOutput}"/>
             </prepare>
@@ -123,7 +123,7 @@ Using the =command= element:
                     <value>true</value>
                 </property>
             </configuration>
-            <command>import  --connect jdbc:hsqldb:file:db.hsqldb --table TT --target-dir hdfs://localhost:9000/user/tucu/foo -m 1</command>
+            <command>import  --connect jdbc:hsqldb:file:db.hsqldb --table TT --target-dir hdfs://localhost:8020/user/tucu/foo -m 1</command>
         </sqoop>
         <ok to="myotherjob"/>
         <error to="errorcleanup"/>
@@ -139,8 +139,8 @@ The same Sqoop action using =arg= elemen
     ...
     <action name="myfirsthivejob">
         <sqoop xmlns="uri:oozie:sqoop-action:0.2">
-            <job-traker>foo:9001</job-tracker>
-            <name-node>bar:9000</name-node>
+            <job-tracker>foo:8021</job-tracker>
+            <name-node>bar:8020</name-node>
             <prepare>
                 <delete path="${jobOutput}"/>
             </prepare>
@@ -156,7 +156,7 @@ The same Sqoop action using =arg= elemen
             <arg>--table</arg>
             <arg>TT</arg>
             <arg>--target-dir</arg>
-            <arg>hdfs://localhost:9000/user/tucu/foo</arg>
+            <arg>hdfs://localhost:8020/user/tucu/foo</arg>
             <arg>-m</arg>
             <arg>1</arg>
         </sqoop>

Modified: oozie/trunk/docs/src/site/twiki/DG_SshActionExtension.twiki
URL: http://svn.apache.org/viewvc/oozie/trunk/docs/src/site/twiki/DG_SshActionExtension.twiki?rev=1393905&r1=1393904&r2=1393905&view=diff
==============================================================================
--- oozie/trunk/docs/src/site/twiki/DG_SshActionExtension.twiki (original)
+++ oozie/trunk/docs/src/site/twiki/DG_SshActionExtension.twiki Thu Oct  4 03:45:33 2012
@@ -72,7 +72,7 @@ The configuration of the =ssh= action ca
             <host>foo@bar.com</host>
             <command>uploaddata</command>
             <args>jdbc:derby://bar.com:1527/myDB</args>
-            <args>hdfs://foobar.com:9000/usr/tucu/myData</args>
+            <args>hdfs://foobar.com:8020/usr/tucu/myData</args>
         </ssh>
         <ok to="myotherjob"/>
         <error to="errorcleanup"/>
@@ -82,7 +82,7 @@ The configuration of the =ssh= action ca
 </verbatim>
 
 In the above example, the =uploaddata= shell command is executed with two arguments, =jdbc:derby://bar.com:1527/myDB=
-and =hdfs://foobar.com:9000/usr/tucu/myData=.
+and =hdfs://foobar.com:8020/usr/tucu/myData=.
 
 The =uploaddata= shell command must be available on the remote host and in the command path.
 

Modified: oozie/trunk/docs/src/site/twiki/ENG_Building.twiki
URL: http://svn.apache.org/viewvc/oozie/trunk/docs/src/site/twiki/ENG_Building.twiki?rev=1393905&r1=1393904&r2=1393905&view=diff
==============================================================================
--- oozie/trunk/docs/src/site/twiki/ENG_Building.twiki (original)
+++ oozie/trunk/docs/src/site/twiki/ENG_Building.twiki Thu Oct  4 03:45:33 2012
@@ -135,10 +135,10 @@ testcases.
 *oozie.test.hadoop.minicluster*= : indicates if Hadoop minicluster should be started for testcases, default value 'true'
 
 *oozie.test.job.tracker*= : indicates the URI of the JobTracker when using a Hadoop cluster for testing, default value
-'localhost:9001'
+'localhost:8021'
 
 *oozie.test.name.node*= : indicates the URI of the NameNode when using a Hadoop cluster for testing, default value
-'hdfs://localhost:9000'
+'hdfs://localhost:8020'
 
 *oozie.test.hadoop.security*= : indicates the type of Hadoop authentication for testing, valid values are 'simple' or
 'kerberos', default value 'simple'

Modified: oozie/trunk/docs/src/site/twiki/WebServicesAPI.twiki
URL: http://svn.apache.org/viewvc/oozie/trunk/docs/src/site/twiki/WebServicesAPI.twiki?rev=1393905&r1=1393904&r2=1393905&view=diff
==============================================================================
--- oozie/trunk/docs/src/site/twiki/WebServicesAPI.twiki (original)
+++ oozie/trunk/docs/src/site/twiki/WebServicesAPI.twiki Thu Oct  4 03:45:33 2012
@@ -369,7 +369,7 @@ Content-Type: application/xml;charset=UT
     </property>
     <property>
         <name>oozie.wf.application.path</name>
-        <value>hdfs://foo:9000/user/bansalm/myapp/</value>
+        <value>hdfs://foo:8020/user/bansalm/myapp/</value>
     </property>
     ...
 </configuration>
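
The same XML configuration can be POSTed programmatically. A minimal Java sketch, assuming the configuration above is
saved in a local =config.xml= file and Oozie runs at the default URL (=?action=start= creates the job and starts it
immediately):

<verbatim>
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class RestSubmit {
    public static void main(String[] args) throws Exception {
        byte[] xml = Files.readAllBytes(Paths.get("config.xml"));

        URL url = new URL("http://localhost:11000/oozie/v1/jobs?action=start");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/xml;charset=UTF-8");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(xml);
        }

        // on success Oozie replies with HTTP 201 CREATED and a JSON body holding the job id
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
</verbatim>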
@@ -437,7 +437,7 @@ Content-Type: application/xml;charset=UT
     </property>
     <property>
         <name>oozie.wf.application.path</name>
-        <value>hdfs://foo:9000/user/tucu/myapp/</value>
+        <value>hdfs://foo:8020/user/tucu/myapp/</value>
     </property>
     <property>
         <name>oozie.wf.rerun.skip.nodes</name>
@@ -529,7 +529,7 @@ Content-Type: application/json;charset=U
       status: "OK",
       externalId: "job-123-200903101010",
       externalStatus: "SUCCEEDED",
-      trackerUri: "foo:9001",
+      trackerUri: "foo:8021",
       consoleUrl: "http://foo:50040/jobdetailshistory.jsp?jobId=...",
       transition: "reporter",
       data: null,
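
The job-info document shown above can be retrieved with a plain GET. A minimal sketch, using a hypothetical job id:

<verbatim>
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestJobInfo {
    public static void main(String[] args) throws Exception {
        String jobId = "0000001-120924163000000-oozie-tucu-W";  // hypothetical job id
        URL url = new URL("http://localhost:11000/oozie/v1/job/" + jobId + "?show=info");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        // the response body is a JSON job document like the one shown above
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}
</verbatim>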

Modified: oozie/trunk/docs/src/site/twiki/WorkflowFunctionalSpec.twiki
URL: http://svn.apache.org/viewvc/oozie/trunk/docs/src/site/twiki/WorkflowFunctionalSpec.twiki?rev=1393905&r1=1393904&r2=1393905&view=diff
==============================================================================
--- oozie/trunk/docs/src/site/twiki/WorkflowFunctionalSpec.twiki (original)
+++ oozie/trunk/docs/src/site/twiki/WorkflowFunctionalSpec.twiki Thu Oct  4 03:45:33 2012
@@ -436,8 +436,8 @@ fork arrive to the join node.
     </fork>
     <action name="firstparallejob">
         <map-reduce>
-            <job-tracker>foo:9001</job-tracker>
-            <name-node>bar:9000</name-node>
+            <job-tracker>foo:8021</job-tracker>
+            <name-node>bar:8020</name-node>
             <job-xml>job1.xml</job-xml>
         </map-reduce>
         <ok to="joining"/>
@@ -445,8 +445,8 @@ fork arrive to the join node.
     </action>
     <action name="secondparalleljob">
         <map-reduce>
-            <job-tracker>foo:9001</job-tracker>
-            <name-node>bar:9000</name-node>
+            <job-tracker>foo:8021</job-tracker>
+            <name-node>bar:8020</name-node>
             <job-xml>job2.xml</job-xml>
         </map-reduce>
         <ok to="joining"/>
@@ -685,10 +685,10 @@ The =mapper= and =reducer= process for s
     ...
     <action name="myfirstHadoopJob">
         <map-reduce>
-            <job-tracker>foo:9001</job-tracker>
-            <name-node>bar:9000</name-node>
+            <job-tracker>foo:8021</job-tracker>
+            <name-node>bar:8020</name-node>
             <prepare>
-                <delete path="hdfs://foo:9000/usr/tucu/output-data"/>
+                <delete path="hdfs://foo:8020/usr/tucu/output-data"/>
             </prepare>
             <job-xml>/myfirstjob.xml</job-xml>
             <configuration>
@@ -727,8 +727,8 @@ the workflow job configuration when crea
     ...
     <action name="firstjob">
         <map-reduce>
-            <job-tracker>foo:9001</job-tracker>
-            <name-node>bar:9000</name-node>
+            <job-tracker>foo:8021</job-tracker>
+            <name-node>bar:8020</name-node>
             <prepare>
                 <delete path="${output}"/>
             </prepare>
@@ -768,8 +768,8 @@ the workflow job configuration when crea
     ...
     <action name="firstjob">
         <map-reduce>
-            <job-tracker>foo:9001</job-tracker>
-            <name-node>bar:9000</name-node>
+            <job-tracker>foo:8021</job-tracker>
+            <name-node>bar:8020</name-node>
             <prepare>
                 <delete path="${output}"/>
             </prepare>
@@ -942,8 +942,8 @@ All the above elements can be parameteri
     ...
     <action name="myfirstpigjob">
         <pig>
-            <job-tracker>foo:9001</job-tracker>
-            <name-node>bar:9000</name-node>
+            <job-tracker>foo:8021</job-tracker>
+            <name-node>bar:8020</name-node>
             <prepare>
                 <delete path="${jobOutput}"/>
             </prepare>
@@ -978,8 +978,8 @@ All the above elements can be parameteri
     ...
     <action name="myfirstpigjob">
         <pig>
-            <job-tracker>foo:9001</job-tracker>
-            <name-node>bar:9000</name-node>
+            <job-tracker>foo:8021</job-tracker>
+            <name-node>bar:8020</name-node>
             <prepare>
                 <delete path="${jobOutput}"/>
             </prepare>
@@ -1070,7 +1070,7 @@ If relative paths are used it will be re
     ...
     <action name="hdfscommands">
          <fs>
-            <delete path='hdfs://foo:9000/usr/tucu/temp-data'/>
+            <delete path='hdfs://foo:8020/usr/tucu/temp-data'/>
             <mkdir path='archives/${wf:id()}'/>
             <move source='${jobInput}' target='archives/${wf:id()}/processed-input'/>
             <chmod path='${jobOutput}' permissions='-rwxrw-rw-' dir-files='true'/>
@@ -1103,7 +1103,7 @@ by any =job-xml= elements.
     ...
     <action name="hdfscommands">
         <fs>
-           <name-node>hdfs://foo:9000</name-node>
+           <name-node>hdfs://foo:8020</name-node>
            <job-xml>fs-info.xml</job-xml>
            <configuration>
              <property>
@@ -1183,7 +1183,7 @@ The configuration of the =ssh= action ca
             <host>foo@bar.com</host>
             <command>uploaddata</command>
             <args>jdbc:derby://bar.com:1527/myDB</args>
-            <args>hdfs://foobar.com:9000/usr/tucu/myData</args>
+            <args>hdfs://foobar.com:8020/usr/tucu/myData</args>
         </ssh>
         <ok to="myotherjob"/>
         <error to="errorcleanup"/>
@@ -1193,7 +1193,7 @@ The configuration of the =ssh= action ca
 </verbatim>
 
 In the above example, the =uploaddata= shell command is executed with two arguments, =jdbc:derby://bar.com:1527/myDB=
-and =hdfs://foobar.com:9000/usr/tucu/myData=.
+and =hdfs://foobar.com:8020/usr/tucu/myData=.
 
 The =uploaddata= shell command must be available on the remote host and in the command path.
 
@@ -1265,7 +1265,7 @@ The configuration of the =sub-workflow= 
 </verbatim>
 
 In the above example, the workflow definition with the name =child-wf= will be run on the Oozie instance at
- =.http://myhost:8080/oozie=. The specified workflow application must be already deployed on the target Oozie instance.
+ =http://myhost:11000/oozie=. The specified workflow application must already be deployed on the target Oozie instance.
 
 A configuration parameter, =input.dir=, is passed as a job property to the child workflow job.
 
@@ -1370,8 +1370,8 @@ All the above elements can be parameteri
     ...
     <action name="myfirstjavajob">
         <java>
-            <job-tracker>foo:9001</job-tracker>
-            <name-node>bar:9000</name-node>
+            <job-tracker>foo:8021</job-tracker>
+            <name-node>bar:8020</name-node>
             <prepare>
                 <delete path="${jobOutput}"/>
             </prepare>
@@ -1965,7 +1965,7 @@ An example is shown below.
 
 ---++++ 4.2.7 HDFS EL Functions
 
-For all the functions in this section the path must include the FS URI. For example =hdfs://foo:9000/user/tucu=.
+For all the functions in this section the path must include the FS URI. For example =hdfs://foo:8020/user/tucu=.
 
 *boolean fs:exists(String path)*
 

Modified: oozie/trunk/release-log.txt
URL: http://svn.apache.org/viewvc/oozie/trunk/release-log.txt?rev=1393905&r1=1393904&r2=1393905&view=diff
==============================================================================
--- oozie/trunk/release-log.txt (original)
+++ oozie/trunk/release-log.txt Thu Oct  4 03:45:33 2012
@@ -1,6 +1,7 @@
 -- Oozie 3.4.0 release (trunk - unreleased)
 
-OOZIE-669 Remove oozie-start.sh, oozie-stop.sh & oozie-run.sh scripts, leaving oozied.sh (rkanter via tucu)
+OOZIE-1009 Documentation pages should use default ports for Oozie/JT/NN (tucu)
+OOZIE-669 Deprecate oozie-start.sh, oozie-stop.sh & oozie-run.sh scripts (rkanter via tucu)
 OOZIE-1005 Tests from OOZIE-994 use wrong condition in waitFor (rkanter via virag)
 OOZIE-992 Add overall status to test-patch messages and add some color-coding for negative results (tucu)
 OOZIE-1003 TestOozieCLI.testSubmitDoAs() should disable anonymous request (tucu)