You are viewing a plain text version of this content. The canonical link for it is here.
Posted to commits@falcon.apache.org by sr...@apache.org on 2014/07/05 16:55:59 UTC

svn commit: r1608028 [2/4] - in /incubator/falcon: site/ site/docs/ site/docs/restapi/ site/wiki/ trunk/general/src/site/

Modified: incubator/falcon/site/docs/FalconCLI.html
URL: http://svn.apache.org/viewvc/incubator/falcon/site/docs/FalconCLI.html?rev=1608028&r1=1608027&r2=1608028&view=diff
==============================================================================
--- incubator/falcon/site/docs/FalconCLI.html (original)
+++ incubator/falcon/site/docs/FalconCLI.html Sat Jul  5 14:55:57 2014
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at Feb 6, 2014
+ | Generated by Apache Maven Doxia at 2014-07-05
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20140206" />
+    <meta name="Date-Revision-yyyymmdd" content="20140705" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Falcon - FalconCLI</title>
     <link rel="stylesheet" href="../css/apache-maven-fluido-1.3.0.min.css" />
@@ -153,6 +153,9 @@
                   
                       <li>      <a href="../docs/HiveIntegration.html"  title="Hive Integration">Hive Integration</a>
 </li>
+                  
+                      <li>      <a href="../docs/Security.html"  title="Security">Security</a>
+</li>
                           </ul>
       </li>
                 <li class="dropdown">
@@ -236,7 +239,7 @@
         
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 2014-02-06</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 2014-07-05</li> 
             
                             </ul>
       </div>
@@ -245,7 +248,106 @@
                         
         <div id="bodyColumn" >
                                   
-            <div class="section"><h2>FalconCLI<a name="FalconCLI"></a></h2><p>FalconCLI is a interface between user and Falcon. It is a command line utility provided by Falcon. FalconCLI supports Entity Management, Instance Management and Admin operations.There is a set of web services that are used by FalconCLI to interact with Falcon.</p></div><div class="section"><h3>Entity Management Operations<a name="Entity_Management_Operations"></a></h3></div><div class="section"><h4>Submit<a name="Submit"></a></h4><p>Submit option is used to set up entity definition.</p><p>Example:  $FALCON_HOME/bin/falcon entity -submit -type cluster -file /cluster/definition.xml</p><p>Note: The url option in the above and all subsequent commands is optional. If not mentioned it will be picked from client.properties file. If the option is not provided and also not set in client.properties, Falcon CLI will fail.</p></div><div class="section"><h4>Schedule<a name="Schedule"></a></h4><p>Once submitted, an enti
 ty can be scheduled using schedule option. Process and feed can only be scheduled.</p><p>Usage: $FALCON_HOME/bin/falcon entity  -type [process|feed] -name &lt;&lt;name&gt;&gt; -schedule</p><p>Example: $FALCON_HOME/bin/falcon entity  -type process -name sampleProcess -schedule</p></div><div class="section"><h4>Suspend<a name="Suspend"></a></h4><p>Suspend on an entity results in suspension of the oozie bundle that was scheduled earlier through the schedule function. No further instances are executed on a suspended entity. Only schedulable entities(process/feed) can be suspended.</p><p>Usage: $FALCON_HOME/bin/falcon entity  -type [feed|process] -name &lt;&lt;name&gt;&gt; -suspend</p></div><div class="section"><h4>Resume<a name="Resume"></a></h4><p>Puts a suspended process/feed back to active, which in turn resumes applicable oozie bundle.</p><p>Usage:  $FALCON_HOME/bin/falcon entity  -type [feed|process] -name &lt;&lt;name&gt;&gt; -resume</p></div><div class="section"><h4>Delete<a name
 ="Delete"></a></h4><p>Delete removes the submitted entity definition for the specified entity and put it into the archive.</p><p>Usage: $FALCON_HOME/bin/falcon entity  -type [cluster|feed|process] -name &lt;&lt;name&gt;&gt; -delete</p></div><div class="section"><h4>List<a name="List"></a></h4><p>Entities of a particular type can be listed with list sub-command.</p><p>Usage: $FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -list</p></div><div class="section"><h4>Update<a name="Update"></a></h4><p>Update operation allows an already submitted/scheduled entity to be updated. Cluster update is currently not allowed.</p><p>Usage: $FALCON_HOME/bin/falcon entity  -type [feed|process] -name &lt;&lt;name&gt;&gt; -update [-effective &lt;&lt;effective time&gt;&gt;]</p></div><div class="section"><h4>Status<a name="Status"></a></h4><p>Status returns the current status of the entity.</p><p>Usage: $FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -name &lt;&lt;name&gt;&gt; -sta
 tus</p></div><div class="section"><h4>Dependency<a name="Dependency"></a></h4><p>With the use of dependency option, we can list all the entities on which the specified entity is dependent. For example for a feed, dependency return the cluster name and for process it returns all the input feeds, output feeds and cluster names.</p><p>Usage: $FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -name &lt;&lt;name&gt;&gt; -dependency</p></div><div class="section"><h4>Definition<a name="Definition"></a></h4><p>Definition option returns the entity definition submitted earlier during submit step.</p><p>Usage: $FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -name &lt;&lt;name&gt;&gt; -definition</p></div><div class="section"><h3>Instance Management Options<a name="Instance_Management_Options"></a></h3></div><div class="section"><h4>Kill<a name="Kill"></a></h4><p>Kill sub-command is used to kill all the instances of the specified process whose nominal time is between the gi
 ven start time and end time.</p><p>Note:  1. For all the instance management sub-commands, if end time is not specified, Falcon will perform the actions on all the instances whose instance time falls after the start time.</p><p>2. The start time and end time needs to be specified in TZ format.  Example:   01 Jan 2012 01:00  =&gt; 2012-01-01T01:00Z</p><p>3. Process name is compulsory parameter for each instance management command.</p><p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -kill -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div><div class="section"><h4>Suspend<a name="Suspend"></a></h4><p>Suspend is used to suspend a instance or instances  for the given process. This option pauses the parent workflow at the state, which it was in at the time of execution of this command.</p><p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -sus
 pend -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div><div class="section"><h4>Continue<a name="Continue"></a></h4><p>Continue option is used to continue the failed workflow instance. This option is valid only for process instances in terminal state, i.e. SUCCEDDED, KILLED or FAILED.</p><p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -re-run -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div><div class="section"><h4>Rerun<a name="Rerun"></a></h4><p>Rerun option is used to rerun instances of a given process. This option is valid only for process instances in terminal state, i.e. SUCCEDDED, KILLED or FAILED. Optionally, you can specify the properties to override.</p><p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -re-run -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&qu
 ot; [-file &lt;&lt;properties file&gt;&gt;]</p></div><div class="section"><h4>Resume<a name="Resume"></a></h4><p>Resume option is used to resume any instance that  is in suspended state.</p><p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -resume -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div><div class="section"><h4>Status<a name="Status"></a></h4><p>Status option via CLI can be used to get the status of a single or multiple instances.  If the instance is not yet materialized but is within the process validity range, WAITING is returned as the state. Along with the status of the instance time is also returned. Log location gives the oozie workflow url If the instance is in WAITING state, missing dependencies are listed</p><p>Example : Suppose a process has 3 instance, one has succeeded,one is in running state and other one is waiting, the expected output is:</p><p>{&quot;status&quot;:&
 quot;SUCCEEDED&quot;,&quot;message&quot;:&quot;getStatus is successful&quot;,&quot;instances&quot;:[{&quot;instance&quot;:&quot;2012-05-07T05:02Z&quot;,&quot;status&quot;:&quot;SUCCEEDED&quot;,&quot;logFile&quot;:&quot;http://oozie-dashboard-url&quot;},{&quot;instance&quot;:&quot;2012-05-07T05:07Z&quot;,&quot;status&quot;:&quot;RUNNING&quot;,&quot;logFile&quot;:&quot;http://oozie-dashboard-url&quot;}, {&quot;instance&quot;:&quot;2010-01-02T11:05Z&quot;,&quot;status&quot;:&quot;WAITING&quot;}]</p><p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -status -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div><div class="section"><h4>Summary<a name="Summary"></a></h4><p>Summary option via CLI can be used to get the consolidated status of the instances between the specified time period. Each status along with the corresponding instance count are listed for each of the applicable colos. The unschedul
 ed instances between the specified time period are included as UNSCHEDULED in the output to provide more clarity.</p><p>Example : Suppose a process has 3 instance, one has succeeded,one is in running state and other one is waiting, the expected output is:</p><p>{&quot;status&quot;:&quot;SUCCEEDED&quot;,&quot;message&quot;:&quot;getSummary is successful&quot;, &quot;cluster&quot;: &lt;&lt;name&gt;&gt; [{&quot;SUCCEEDED&quot;:&quot;1&quot;}, {&quot;WAITING&quot;:&quot;1&quot;}, {&quot;RUNNING&quot;:&quot;1&quot;}]}</p><p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -summary -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div><div class="section"><h4>Running<a name="Running"></a></h4><p>Running option provides all the running instances of the mentioned process.</p><p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -running</p></div><div cla
 ss="section"><h4>Logs<a name="Logs"></a></h4><p>Get logs for instance actions</p><p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -logs -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; [-end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;] [-runid &lt;&lt;runid&gt;&gt;]</p></div><div class="section"><h3>Admin Options<a name="Admin_Options"></a></h3></div><div class="section"><h4>Help<a name="Help"></a></h4><p>Usage: $FALCON_HOME/bin/falcon admin -version</p></div><div class="section"><h4>Version<a name="Version"></a></h4><p>Version returns the current verion of Falcon installed. Usage: $FALCON_HOME/bin/falcon admin -help</p></div>
+            <div class="section">
+<h2>FalconCLI<a name="FalconCLI"></a></h2>
+<p>FalconCLI is a command line utility provided by Falcon that serves as the interface between the user and Falcon. It supports Entity Management, Instance Management and Admin operations, and uses a set of Falcon web services to interact with the server.</p></div>
+<div class="section">
+<h3>Entity Management Operations<a name="Entity_Management_Operations"></a></h3></div>
+<div class="section">
+<h4>Submit<a name="Submit"></a></h4>
+<p>The submit option is used to set up an entity definition.</p>
+<p>Example: $FALCON_HOME/bin/falcon entity -submit -type cluster -file /cluster/definition.xml</p>
+<p>Note: The url option in the above and all subsequent commands is optional. If not specified, it is picked up from the client.properties file. If the option is neither provided nor set in client.properties, Falcon CLI will fail.</p></div>
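+<p>For example, the url can also be passed explicitly (the endpoint below is hypothetical; substitute your own Falcon server host and port):</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon entity -submit -type cluster -file /cluster/definition.xml -url http://falcon-server:15000
+</pre></div>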
+<div class="section">
+<h4>Schedule<a name="Schedule"></a></h4>
+<p>Once submitted, an entity can be scheduled using the schedule option. Only process and feed entities can be scheduled.</p>
+<p>Usage: $FALCON_HOME/bin/falcon entity  -type [process|feed] -name &lt;&lt;name&gt;&gt; -schedule</p>
+<p>Example: $FALCON_HOME/bin/falcon entity  -type process -name sampleProcess -schedule</p></div>
+<div class="section">
+<h4>Suspend<a name="Suspend"></a></h4>
+<p>Suspending an entity suspends the oozie bundle that was scheduled earlier through the schedule function. No further instances are executed on a suspended entity. Only schedulable entities (process/feed) can be suspended.</p>
+<p>Usage: $FALCON_HOME/bin/falcon entity  -type [feed|process] -name &lt;&lt;name&gt;&gt; -suspend</p></div>
+<div class="section">
+<h4>Resume<a name="Resume"></a></h4>
+<p>Puts a suspended process/feed back into the active state, which in turn resumes the applicable oozie bundle.</p>
+<p>Usage:  $FALCON_HOME/bin/falcon entity  -type [feed|process] -name &lt;&lt;name&gt;&gt; -resume</p></div>
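+<p>For example, suspending and then resuming the sampleProcess entity from the schedule example above:</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon entity -type process -name sampleProcess -suspend
+$FALCON_HOME/bin/falcon entity -type process -name sampleProcess -resume
+</pre></div>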
+<div class="section">
+<h4>Delete<a name="Delete"></a></h4>
+<p>Delete removes the submitted entity definition for the specified entity and puts it into the archive.</p>
+<p>Usage: $FALCON_HOME/bin/falcon entity  -type [cluster|feed|process] -name &lt;&lt;name&gt;&gt; -delete</p></div>
+<div class="section">
+<h4>List<a name="List"></a></h4>
+<p>Entities of a particular type can be listed with the list sub-command.</p>
+<p>Usage: $FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -list</p></div>
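+<p>Example (listing all process entities):</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon entity -type process -list
+</pre></div>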
+<div class="section">
+<h4>Update<a name="Update"></a></h4>
+<p>The update operation allows an already submitted/scheduled entity to be updated. Cluster update is currently not allowed.</p>
+<p>Usage: $FALCON_HOME/bin/falcon entity  -type [feed|process] -name &lt;&lt;name&gt;&gt; -update [-effective &lt;&lt;effective time&gt;&gt;]</p></div>
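+<p>For example (the effective time is hypothetical and follows the TZ format used elsewhere in this document):</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon entity -type process -name sampleProcess -update -effective 2014-01-01T00:00Z
+</pre></div>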
+<div class="section">
+<h4>Status<a name="Status"></a></h4>
+<p>Status returns the current status of the entity.</p>
+<p>Usage: $FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -name &lt;&lt;name&gt;&gt; -status</p></div>
+<div class="section">
+<h4>Dependency<a name="Dependency"></a></h4>
+<p>The dependency option lists all the entities on which the specified entity depends. For example, for a feed it returns the cluster name, and for a process it returns all the input feeds, output feeds and cluster names.</p>
+<p>Usage: $FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -name &lt;&lt;name&gt;&gt; -dependency</p></div>
+<div class="section">
+<h4>Definition<a name="Definition"></a></h4>
+<p>The definition option returns the entity definition submitted earlier during the submit step.</p>
+<p>Usage: $FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -name &lt;&lt;name&gt;&gt; -definition</p></div>
+<div class="section">
+<h3>Instance Management Options<a name="Instance_Management_Options"></a></h3></div>
+<div class="section">
+<h4>Kill<a name="Kill"></a></h4>
+<p>The kill sub-command kills all the instances of the specified process whose nominal time is between the given start time and end time.</p>
+<p>Note: 1. For all the instance management sub-commands, if the end time is not specified, Falcon will perform the actions on all the instances whose instance time falls after the start time.</p>
+<p>2. The start time and end time need to be specified in TZ format. Example: 01 Jan 2012 01:00 =&gt; 2012-01-01T01:00Z</p>
+<p>3. The process name is a compulsory parameter for each instance management command.</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -kill -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div>
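+<p>For example, killing all sampleProcess instances whose nominal time falls in a four-hour window (times follow the TZ format described in the note above):</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon instance -type process -name sampleProcess -kill -start &quot;2012-01-01T01:00Z&quot; -end &quot;2012-01-01T05:00Z&quot;
+</pre></div>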
+<div class="section">
+<h4>Suspend<a name="Suspend"></a></h4>
+<p>Suspend is used to suspend an instance or instances of the given process. This option pauses the parent workflow at the state it was in at the time this command was executed.</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -suspend -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div>
+<div class="section">
+<h4>Continue<a name="Continue"></a></h4>
+<p>The continue option is used to continue a failed workflow instance. This option is valid only for process instances in a terminal state, i.e. SUCCEEDED, KILLED or FAILED.</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -continue -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div>
+<div class="section">
+<h4>Rerun<a name="Rerun"></a></h4>
+<p>The rerun option is used to rerun instances of a given process. This option is valid only for process instances in a terminal state, i.e. SUCCEEDED, KILLED or FAILED. Optionally, you can specify the properties to override.</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -re-run -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; [-file &lt;&lt;properties file&gt;&gt;]</p></div>
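+<p>For example, rerunning an instance with overridden properties from a hypothetical properties file:</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon instance -type process -name sampleProcess -re-run -start &quot;2012-01-01T01:00Z&quot; -end &quot;2012-01-01T02:00Z&quot; -file /tmp/override.properties
+</pre></div>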
+<div class="section">
+<h4>Resume<a name="Resume"></a></h4>
+<p>The resume option is used to resume any instance that is in the suspended state.</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -resume -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div>
+<div class="section">
+<h4>Status<a name="Status"></a></h4>
+<p>The status option can be used via the CLI to get the status of a single instance or multiple instances. If an instance is not yet materialized but is within the process validity range, WAITING is returned as its state. Along with the status, the instance time is also returned. The log location gives the oozie workflow url. If the instance is in the WAITING state, its missing dependencies are listed.</p>
+<p>Example: Suppose a process has 3 instances, where one has succeeded, one is running and the other is waiting; the expected output is:</p>
+<p>{&quot;status&quot;:&quot;SUCCEEDED&quot;,&quot;message&quot;:&quot;getStatus is successful&quot;,&quot;instances&quot;:[{&quot;instance&quot;:&quot;2012-05-07T05:02Z&quot;,&quot;status&quot;:&quot;SUCCEEDED&quot;,&quot;logFile&quot;:&quot;http://oozie-dashboard-url&quot;},{&quot;instance&quot;:&quot;2012-05-07T05:07Z&quot;,&quot;status&quot;:&quot;RUNNING&quot;,&quot;logFile&quot;:&quot;http://oozie-dashboard-url&quot;}, {&quot;instance&quot;:&quot;2010-01-02T11:05Z&quot;,&quot;status&quot;:&quot;WAITING&quot;}]}</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -status -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div>
+<div class="section">
+<h4>Summary<a name="Summary"></a></h4>
+<p>The summary option can be used via the CLI to get the consolidated status of the instances within the specified time period. Each status, along with the corresponding instance count, is listed for each of the applicable colos. Instances that are unscheduled within the specified time period are reported as UNSCHEDULED in the output for clarity.</p>
+<p>Example: Suppose a process has 3 instances, where one has succeeded, one is running and the other is waiting; the expected output is:</p>
+<p>{&quot;status&quot;:&quot;SUCCEEDED&quot;,&quot;message&quot;:&quot;getSummary is successful&quot;, &quot;cluster&quot;: &lt;&lt;name&gt;&gt; [{&quot;SUCCEEDED&quot;:&quot;1&quot;}, {&quot;WAITING&quot;:&quot;1&quot;}, {&quot;RUNNING&quot;:&quot;1&quot;}]}</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -summary -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div>
+<div class="section">
+<h4>Running<a name="Running"></a></h4>
+<p>The running option lists all the currently running instances of the specified process.</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -running</p></div>
+<div class="section">
+<h4>Logs<a name="Logs"></a></h4>
+<p>Gets the logs for instance actions.</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -logs -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; [-end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;] [-runid &lt;&lt;runid&gt;&gt;]</p></div>
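+<p>For example, fetching the logs of the first run of an instance (the runid value is hypothetical):</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon instance -type process -name sampleProcess -logs -start &quot;2012-01-01T01:00Z&quot; -end &quot;2012-01-01T02:00Z&quot; -runid 0
+</pre></div>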
+<div class="section">
+<h3>Admin Options<a name="Admin_Options"></a></h3></div>
+<div class="section">
+<h4>Help<a name="Help"></a></h4>
+<p>Usage: $FALCON_HOME/bin/falcon admin -help</p></div>
+<div class="section">
+<h4>Version<a name="Version"></a></h4>
+<p>Version returns the current version of Falcon installed.</p>
+<p>Usage: $FALCON_HOME/bin/falcon admin -version</p></div>
                   </div>
           </div>
 

Modified: incubator/falcon/site/docs/GettingStarted.html
URL: http://svn.apache.org/viewvc/incubator/falcon/site/docs/GettingStarted.html?rev=1608028&r1=1608027&r2=1608028&view=diff
==============================================================================
--- incubator/falcon/site/docs/GettingStarted.html (original)
+++ incubator/falcon/site/docs/GettingStarted.html Sat Jul  5 14:55:57 2014
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at Feb 6, 2014
+ | Generated by Apache Maven Doxia at 2014-07-05
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20140206" />
+    <meta name="Date-Revision-yyyymmdd" content="20140705" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Falcon - Apache Falcon - Data management and processing platform</title>
     <link rel="stylesheet" href="../css/apache-maven-fluido-1.3.0.min.css" />
@@ -153,6 +153,9 @@
                   
                       <li>      <a href="../docs/HiveIntegration.html"  title="Hive Integration">Hive Integration</a>
 </li>
+                  
+                      <li>      <a href="../docs/Security.html"  title="Security">Security</a>
+</li>
                           </ul>
       </li>
                 <li class="dropdown">
@@ -236,7 +239,7 @@
         
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 2014-02-06</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 2014-07-05</li> 
             
                             </ul>
       </div>
@@ -245,7 +248,34 @@
                         
         <div id="bodyColumn" >
                                   
-            <div class="section"><h2>Apache Falcon - Data management and processing platform<a name="Apache_Falcon_-_Data_management_and_processing_platform"></a></h2><p>Apache Falcon is a feed processing and feed management system aimed at making it easier for end consumers to onboard their feed processing and feed management on hadoop clusters.</p></div><div class="section"><h3>Why?<a name="Why"></a></h3><p></p><ul><li>Establishes relationship between various data and processing elements on a Hadoop environment</li></ul><p></p><ul><li>Feed management services such as feed retention, replications across clusters, archival etc</li></ul><p></p><ul><li>Easy to onboard new workflows/pipelines, with support for late data handling, retry policies</li></ul><p></p><ul><li>Integration with metastore/catalog</li></ul><p></p><ul><li>Provide notification to end customer based on availability of feed groups</li></ul>(logical group of related feeds, which are likely to be used together)<p></p><u
 l><li>Enables use cases for local processing in colo and global aggregations</li></ul></div><div class="section"><h2>Getting Started<a name="Getting_Started"></a></h2><p>Start with these simple steps to install an falcon instance <a href="./InstallationSteps.html">Simple setup</a>. Also refer to Falcon architecture and documentation in <a href="./FalconArchitecture.html">Documentation</a>.</p><p><a href="./OnBoarding.html">On boarding</a> describes steps to on-board a pipeline to Falcon. It also gives a sample pipeline for reference. <a href="./EntitySpecification.html">Entity Specification</a> gives complete details of all Falcon entities.</p><p><a href="./FalconCLI.html">Falcon CLI</a> describes the various options for the command line utility provided by Falcon.</p></div>
+            <div class="section">
+<h2>Apache Falcon - Data management and processing platform<a name="Apache_Falcon_-_Data_management_and_processing_platform"></a></h2>
+<p>Apache Falcon is a feed processing and feed management system aimed at making it easier for end consumers to onboard their feed processing and feed management on Hadoop clusters.</p></div>
+<div class="section">
+<h3>Why?<a name="Why"></a></h3>
+<ul>
+<li>Establishes relationships between various data and processing elements on a Hadoop environment</li>
+<li>Feed management services such as feed retention, replication across clusters, archival etc.</li>
+<li>Easy onboarding of new workflows/pipelines, with support for late data handling and retry policies</li>
+<li>Integration with metastore/catalog</li>
+<li>Provides notification to end customers based on availability of feed groups (logical groups of related feeds, which are likely to be used together)</li>
+<li>Enables use cases for local processing in colo and global aggregations</li></ul></div>
+<div class="section">
+<h2>Getting Started<a name="Getting_Started"></a></h2>
+<p>Start with these simple steps to install a Falcon instance: <a href="./InstallationSteps.html">Simple setup</a>. Also refer to the Falcon architecture and documentation in <a href="./FalconArchitecture.html">Documentation</a>.</p>
+<p><a href="./OnBoarding.html">On boarding</a> describes steps to on-board a pipeline to Falcon. It also gives a sample pipeline for reference. <a href="./EntitySpecification.html">Entity Specification</a> gives complete details of all Falcon entities.</p>
+<p><a href="./FalconCLI.html">Falcon CLI</a> describes the various options for the command line utility provided by Falcon.</p></div>
                   </div>
           </div>
 

Modified: incubator/falcon/site/docs/HiveIntegration.html
URL: http://svn.apache.org/viewvc/incubator/falcon/site/docs/HiveIntegration.html?rev=1608028&r1=1608027&r2=1608028&view=diff
==============================================================================
--- incubator/falcon/site/docs/HiveIntegration.html (original)
+++ incubator/falcon/site/docs/HiveIntegration.html Sat Jul  5 14:55:57 2014
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at Feb 6, 2014
+ | Generated by Apache Maven Doxia at 2014-07-05
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20140206" />
+    <meta name="Date-Revision-yyyymmdd" content="20140705" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Falcon - Hive Integration</title>
     <link rel="stylesheet" href="../css/apache-maven-fluido-1.3.0.min.css" />
@@ -153,6 +153,9 @@
                   
                       <li>      <a href="../docs/HiveIntegration.html"  title="Hive Integration">Hive Integration</a>
 </li>
+                  
+                      <li>      <a href="../docs/Security.html"  title="Security">Security</a>
+</li>
                           </ul>
       </li>
                 <li class="dropdown">
@@ -236,7 +239,7 @@
         
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 2014-02-06</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 2014-07-05</li> 
             
                             </ul>
       </div>
@@ -245,18 +248,122 @@
                         
         <div id="bodyColumn" >
                                   
-            <div class="section"><h2>Hive Integration<a name="Hive_Integration"></a></h2></div><div class="section"><h3>Overview<a name="Overview"></a></h3><p>Falcon provides data management functions for feeds declaratively. It allows users to represent feed locations as time-based partition directories on HDFS containing files.</p><p>Hive provides a simple and familiar database like tabular model of data management to its users, which are backed by HDFS. It supports two classes of tables, managed tables and external tables.</p><p>Falcon allows users to represent feed location as Hive tables. Falcon supports both managed and external tables and provide data management services for tables such as replication, eviction, archival, etc. Falcon will notify HCatalog as a side effect of either acquiring, replicating or evicting a data set instance and adds the missing capability of HCatalog table replication.</p><p>In the near future, Falcon will allow users to express pipeline processing
  in Hive scripts apart from Pig and Oozie workflows.</p></div><div class="section"><h3>Assumptions<a name="Assumptions"></a></h3><p></p><ul><li>Date is a mandatory first-level partition for Hive tables<ul><li>Data availability triggers are based on date pattern in Oozie</li></ul></li><li>Tables must be created in Hive prior to adding it as a Feed in Falcon.<ul><li>Duplicating this in Falcon will create confusion on the real source of truth. Also propagating schema changes</li></ul></li></ul>between systems is a hard problem.<ul><li>Falcon does not know about the encoding of the data and data should be in HCatalog supported format.</li></ul></div><div class="section"><h3>Configuration<a name="Configuration"></a></h3><p>Falcon provides a system level option to enable Hive integration. Falcon must be configured with an implementation for the catalog registry. The default implementation for Hive is shipped with Falcon.</p><div class="source"><pre class="prettyprint">
+            <div class="section">
+<h2>Hive Integration<a name="Hive_Integration"></a></h2></div>
+<div class="section">
+<h3>Overview<a name="Overview"></a></h3>
+<p>Falcon provides data management functions for feeds declaratively. It allows users to represent feed locations as time-based partition directories on HDFS containing files.</p>
+<p>Hive provides its users with a simple and familiar database-like tabular model of data management, backed by HDFS. It supports two classes of tables: managed tables and external tables.</p>
+<p>Falcon allows users to represent feed locations as Hive tables. Falcon supports both managed and external tables and provides data management services for tables, such as replication, eviction, archival, etc. Falcon will notify HCatalog as a side effect of either acquiring, replicating or evicting a data set instance, and adds the missing capability of HCatalog table replication.</p>
+<p>In the near future, Falcon will allow users to express pipeline processing in Hive scripts apart from Pig and Oozie workflows.</p></div>
+<div class="section">
+<h3>Assumptions<a name="Assumptions"></a></h3>
+<ul>
+<li>Date is a mandatory first-level partition for Hive tables
+<ul>
+<li>Data availability triggers are based on the date pattern in Oozie</li></ul></li>
+<li>Tables must be created in Hive prior to adding them as a Feed in Falcon.
+<ul>
+<li>Duplicating this in Falcon would create confusion about the real source of truth. Also, propagating schema changes between systems is a hard problem.</li></ul></li>
+<li>Falcon does not know about the encoding of the data; the data should be in an HCatalog-supported format.</li></ul></div>
+<div class="section">
+<h3>Configuration<a name="Configuration"></a></h3>
+<p>Falcon provides a system level option to enable Hive integration. Falcon must be configured with an implementation for the catalog registry. The default implementation for Hive is shipped with Falcon.</p>
+<div class="source">
+<pre>
 catalog.service.impl=org.apache.falcon.catalog.HiveCatalogService
 
-</pre></div></div><div class="section"><h3>Incompatible changes<a name="Incompatible_changes"></a></h3><p>Falcon depends heavily on data-availability triggers for scheduling Falcon workflows. Oozie must support data-availability triggers based on HCatalog partition availability. This is only available in oozie 4.x.</p><p>Hence, Falcon for Hive support needs Oozie 4.x.</p></div><div class="section"><h3>Oozie Shared Library setup<a name="Oozie_Shared_Library_setup"></a></h3><p>Falcon post Hive integration depends heavily on the <a class="externalLink" href="http://oozie.apache.org/docs/4.0.0/WorkflowFunctionalSpec.html#a17_HDFS_Share_Libraries_for_Workflow_Applications_since_Oozie_2.3">shared library feature of Oozie</a>. Since the sheer number of jars for HCatalog, Pig and Hive are in the many 10s in numbers, its quite daunting to redistribute the dependent jars from Falcon.</p><p><a class="externalLink" href="http://oozie.apache.org/docs/4.0.0/DG_QuickStart.html#Oozie_Share_Lib_Inst
 allation">This is a one time effort in Oozie setup and is quite straightforward.</a></p></div><div class="section"><h3>Approach<a name="Approach"></a></h3></div><div class="section"><h4>Entity Changes<a name="Entity_Changes"></a></h4><p></p><ul><li>Cluster DSL will have an additional registry-interface section, specifying the endpoint for the</li></ul>HCatalog server. If this is absent, no HCatalog publication will be done from Falcon for this cluster.<div class="source"><pre class="prettyprint">thrift://hcatalog-server:port
-</pre></div><p></p><ul><li>Feed DSL will allow users to specify the URI (location) for HCatalog tables as:</li></ul><div class="source"><pre class="prettyprint">catalog:database_name:table_name#partitions(key=value?)*
-</pre></div><p></p><ul><li>Failure to publish to HCatalog will be retried (configurable # of retires) with back off. Permanent failures</li></ul>after all the retries are exhausted will fail the Falcon workflow</div><div class="section"><h4>Eviction<a name="Eviction"></a></h4><p></p><ul><li>Falcon will construct DDL statements to filter candidate partitions eligible for eviction drop partitions</li><li>Falcon will construct DDL statements to drop the eligible partitions</li><li>Additionally, Falcon will nuke the data on HDFS for external tables</li></ul></div><div class="section"><h4>Replication<a name="Replication"></a></h4><p></p><ul><li>Falcon will use HCatalog (Hive) API to export the data for a given table and the partition,</li></ul>which will result in a data collection that includes metadata on the data's storage format, the schema, how the data is sorted, what table the data came from, and values of any partition keys from that table.<ul><li>Falcon will use <a href="./DistC
 p.html">DistCp</a> tool to copy the exported data collection into the secondary cluster into a staging</li></ul>directory used by Falcon.<ul><li>Falcon will then import the data into HCatalog (Hive) using the HCatalog (Hive) API. If the specified table does</li></ul>not yet exist, Falcon will create it, using the information in the imported metadata to set defaults for the table such as schema, storage format, etc.<ul><li>The partition is not complete and hence not visible to users until all the data is committed on the secondary</li></ul>cluster, (no dirty reads)<ul><li>Data collection is staged by Falcon and retries for copy continues from where it left off.</li><li>Failure to register with Hive will be retired. After all the attempts are exhausted,</li></ul>the data will be cleaned up by Falcon.</div><div class="section"><h4>Security<a name="Security"></a></h4><p>The user owns all data managed by Falcon. Falcon runs as the user who submitted the feed. Falcon will authenticate wit
 h HCatalog as the end user who owns the entity and the data.</p><p>For Hive managed tables, the table may be owned by the end user or &#xe2;&#x80;&#x9c;hive&#xe2;&#x80;&#x9d;. For &#xe2;&#x80;&#x9c;hive&#xe2;&#x80;&#x9d; owned tables, user will have to configure the feed as &#xe2;&#x80;&#x9c;hive&#xe2;&#x80;&#x9d;.</p></div><div class="section"><h3>Load on HCatalog from Falcon<a name="Load_on_HCatalog_from_Falcon"></a></h3><p>It generally depends on the frequency of the feeds configured in Falcon and how often data is ingested, replicated, or processed.</p></div><div class="section"><h3>User Impact<a name="User_Impact"></a></h3><p></p><ul><li>There should not be any impact to user due to this integration</li><li>Falcon will be fully backwards compatible</li><li>Users have a choice to either choose storage based on files on HDFS as they do today or use HCatalog for</li></ul>accessing the data in tables</div><div class="section"><h3>Known Limitations<a name="Known_Limitations"></a></h
 3></div><div class="section"><h4>Oozie<a name="Oozie"></a></h4><p></p><ul><li>Falcon with Hadoop 1.x requires copying guava jars manually to sharelib in oozie. Hadoop 2.x ships this.</li><li>hcatalog-pig-adapter needs to be copied manually to oozie sharelib.</li></ul><div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h3>Incompatible changes<a name="Incompatible_changes"></a></h3>
+<p>Falcon depends heavily on data-availability triggers for scheduling Falcon workflows. Oozie must support data-availability triggers based on HCatalog partition availability. This is only available in Oozie 4.x.</p>
+<p>Hence, Falcon requires Oozie 4.x for Hive support.</p></div>
+<div class="section">
+<h3>Oozie Shared Library setup<a name="Oozie_Shared_Library_setup"></a></h3>
+<p>With Hive integration, Falcon depends heavily on the <a class="externalLink" href="http://oozie.apache.org/docs/4.0.0/WorkflowFunctionalSpec.html#a17_HDFS_Share_Libraries_for_Workflow_Applications_since_Oozie_2.3">shared library feature of Oozie</a>. Since the jars for HCatalog, Pig and Hive number in the many tens, it is quite daunting to redistribute the dependent jars from Falcon.</p>
+<p><a class="externalLink" href="http://oozie.apache.org/docs/4.0.0/DG_QuickStart.html#Oozie_Share_Lib_Installation">This is a one time effort in Oozie setup and is quite straightforward.</a></p></div>
+<div class="section">
+<h3>Approach<a name="Approach"></a></h3></div>
+<div class="section">
+<h4>Entity Changes<a name="Entity_Changes"></a></h4>
+<ul>
+<li>Cluster DSL will have an additional registry-interface section, specifying the endpoint for the HCatalog server. If this is absent, no HCatalog publication will be done from Falcon for this cluster.</li></ul>
+<div class="source">
+<pre>thrift://hcatalog-server:port
+</pre></div>
+<p></p>
+<ul>
+<li>Feed DSL will allow users to specify the URI (location) for HCatalog tables as:</li></ul>
+<div class="source">
+<pre>catalog:database_name:table_name#partitions(key=value?)*
+</pre></div>
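+<p>For example, a hypothetical feed over a clicks table partitioned by date could specify:</p>
+<div class="source">
+<pre>catalog:logs_db:clicks#ds=${YEAR}-${MONTH}-${DAY}
+</pre></div>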
+<ul>
+<li>Failure to publish to HCatalog will be retried (configurable number of retries) with back-off. Permanent failures after all the retries are exhausted will fail the Falcon workflow.</li></ul></div>
+<div class="section">
+<h4>Eviction<a name="Eviction"></a></h4>
+<ul>
+<li>Falcon will construct DDL statements to filter candidate partitions eligible for eviction</li>
+<li>Falcon will construct DDL statements to drop the eligible partitions</li>
+<li>Additionally, Falcon will nuke the data on HDFS for external tables</li></ul></div>
+<div class="section">
+<h4>Replication<a name="Replication"></a></h4>
+<ul>
+<li>Falcon will use the HCatalog (Hive) API to export the data for a given table and partition, which will result in a data collection that includes metadata on the data's storage format, the schema, how the data is sorted, what table the data came from, and values of any partition keys from that table.</li>
+<li>Falcon will use the <a href="./DistCp.html">DistCp</a> tool to copy the exported data collection into a staging directory used by Falcon on the secondary cluster.</li>
+<li>Falcon will then import the data into HCatalog (Hive) using the HCatalog (Hive) API. If the specified table does not yet exist, Falcon will create it, using the information in the imported metadata to set defaults for the table such as schema, storage format, etc.</li>
+<li>The partition is not complete, and hence not visible to users, until all the data is committed on the secondary cluster (no dirty reads).</li>
+<li>Data collection is staged by Falcon and retries for the copy continue from where it left off.</li>
+<li>Failure to register with Hive will be retried. After all the attempts are exhausted, the data will be cleaned up by Falcon.</li></ul></div>
+<div class="section">
+<h4>Security<a name="Security"></a></h4>
+<p>The user owns all data managed by Falcon. Falcon runs as the user who submitted the feed. Falcon will authenticate with HCatalog as the end user who owns the entity and the data.</p>
+<p>For Hive managed tables, the table may be owned by the end user or &quot;hive&quot;. For &quot;hive&quot; owned tables, the user will have to configure the feed as &quot;hive&quot;.</p></div>
+<div class="section">
+<h3>Load on HCatalog from Falcon<a name="Load_on_HCatalog_from_Falcon"></a></h3>
+<p>It generally depends on the frequency of the feeds configured in Falcon and how often data is ingested, replicated, or processed.</p></div>
+<div class="section">
+<h3>User Impact<a name="User_Impact"></a></h3>
+<ul>
+<li>There should not be any impact to users due to this integration</li>
+<li>Falcon will be fully backwards compatible</li>
+<li>Users have a choice of either storage based on files on HDFS, as they do today, or HCatalog for accessing the data in tables</li></ul></div>
+<div class="section">
+<h3>Known Limitations<a name="Known_Limitations"></a></h3></div>
+<div class="section">
+<h4>Oozie<a name="Oozie"></a></h4>
+<p></p>
+<ul>
+<li>Falcon with Hadoop 1.x requires copying guava jars manually to sharelib in oozie. Hadoop 2.x ships this.</li>
+<li>hcatalog-pig-adapter needs to be copied manually to oozie sharelib.</li></ul>
+<div class="source">
+<pre>
 bin/hadoop dfs -copyFromLocal $LFS/share/lib/hcatalog/hcatalog-pig-adapter-0.5.0-incubating.jar share/lib/hcatalog
 
-</pre></div></div><div class="section"><h4>Hive<a name="Hive"></a></h4><p></p><ul><li><a class="externalLink" href="https://issues.apache.org/jira/browse/HIVE-5550">Hive table import fails for tables created with default text and sequence file formats using HCatalog API</a></li></ul>For some arcane reason, hive substitutes the output format for text and sequence to be prefixed with Hive. Hive table import fails since it compares against the input and output formats of the source table and they are different. Say, a table was created with out specifying the file format, it defaults to:<div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h4>Hive<a name="Hive"></a></h4>
+<ul>
+<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HIVE-5550">Hive table import fails for tables created with default text and sequence file formats using HCatalog API</a></li></ul>For some arcane reason, Hive substitutes the output format for text and sequence to be prefixed with Hive. Hive table import fails since it compares the input and output formats of the source table and they are different. Say a table was created without specifying the file format; it defaults to:
+<div class="source">
+<pre>
 fileFormat=TextFile, inputformat=org.apache.hadoop.mapred.TextInputFormat, outputformat=org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat
 
-</pre></div><p>But, when hive fetches the table from the metastore, it replaces the output format with org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat and the comparison between source and target table fails.</p><div class="source"><pre class="prettyprint">
+</pre></div>
+<p>But when Hive fetches the table from the metastore, it replaces the output format with org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, and the comparison between source and target table fails.</p>
+<div class="source">
+<pre>
 org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer#checkTable
       // check IF/OF/Serde
       String existingifc = table.getInputFormatClass().getName();
@@ -270,7 +377,16 @@ org.apache.hadoop.hive.ql.parse.ImportSe
                 .getMsg(&quot; Table inputformat/outputformats do not match&quot;));
       }
 
-</pre></div></div><div class="section"><h3>Hive Examples<a name="Hive_Examples"></a></h3><p>Following is an example entity configuration for lifecycle management functions for tables in Hive.</p></div><div class="section"><h4>Hive Table Lifecycle Management - Replication and Retention<a name="Hive_Table_Lifecycle_Management_-_Replication_and_Retention"></a></h4></div><div class="section"><h5>Primary Cluster<a name="Primary_Cluster"></a></h5><div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h3>Hive Examples<a name="Hive_Examples"></a></h3>
+<p>Following is an example entity configuration for lifecycle management functions for tables in Hive.</p></div>
+<div class="section">
+<h4>Hive Table Lifecycle Management - Replication and Retention<a name="Hive_Table_Lifecycle_Management_-_Replication_and_Retention"></a></h4></div>
+<div class="section">
+<h5>Primary Cluster<a name="Primary_Cluster"></a></h5>
+<div class="source">
+<pre>
 &lt;?xml version=&quot;1.0&quot;?&gt;
 &lt;!--
     Primary cluster configuration for demo vm
@@ -299,7 +415,11 @@ org.apache.hadoop.hive.ql.parse.ImportSe
     &lt;/locations&gt;
 &lt;/cluster&gt;
 
-</pre></div></div><div class="section"><h5>BCP Cluster<a name="BCP_Cluster"></a></h5><div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h5>BCP Cluster<a name="BCP_Cluster"></a></h5>
+<div class="source">
+<pre>
 &lt;?xml version=&quot;1.0&quot;?&gt;
 &lt;!--
     BCP cluster configuration for demo vm
@@ -328,7 +448,11 @@ org.apache.hadoop.hive.ql.parse.ImportSe
     &lt;/locations&gt;
 &lt;/cluster&gt;
 
-</pre></div></div><div class="section"><h5>Feed with replication and eviction policy<a name="Feed_with_replication_and_eviction_policy"></a></h5><div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h5>Feed with replication and eviction policy<a name="Feed_with_replication_and_eviction_policy"></a></h5>
+<div class="source">
+<pre>
 &lt;?xml version=&quot;1.0&quot;?&gt;
 &lt;!--
     Replicating Hourly customer table from primary to secondary cluster.
@@ -357,7 +481,16 @@ org.apache.hadoop.hive.ql.parse.ImportSe
     &lt;schema location=&quot;&quot; provider=&quot;hcatalog&quot;/&gt;
 &lt;/feed&gt;
 
-</pre></div></div><div class="section"><h4>Hive Table used in Processing Pipelines<a name="Hive_Table_used_in_Processing_Pipelines"></a></h4></div><div class="section"><h5>Primary Cluster<a name="Primary_Cluster"></a></h5><p>The cluster definition from the lifecycle example can be used.</p></div><div class="section"><h5>Input Feed<a name="Input_Feed"></a></h5><div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h4>Hive Table used in Processing Pipelines<a name="Hive_Table_used_in_Processing_Pipelines"></a></h4></div>
+<div class="section">
+<h5>Primary Cluster<a name="Primary_Cluster"></a></h5>
+<p>The cluster definition from the lifecycle example can be used.</p></div>
+<div class="section">
+<h5>Input Feed<a name="Input_Feed"></a></h5>
+<div class="source">
+<pre>
 &lt;?xml version=&quot;1.0&quot;?&gt;
 &lt;feed description=&quot;clicks log table &quot; name=&quot;input-table&quot; xmlns=&quot;uri:falcon:feed:0.1&quot;&gt;
     &lt;groups&gt;online,bi&lt;/groups&gt;
@@ -377,7 +510,11 @@ org.apache.hadoop.hive.ql.parse.ImportSe
     &lt;schema location=&quot;/schema/clicks&quot; provider=&quot;protobuf&quot;/&gt;
 &lt;/feed&gt;
 
-</pre></div></div><div class="section"><h5>Output Feed<a name="Output_Feed"></a></h5><div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h5>Output Feed<a name="Output_Feed"></a></h5>
+<div class="source">
+<pre>
 &lt;?xml version=&quot;1.0&quot;?&gt;
 &lt;feed description=&quot;clicks log identity table&quot; name=&quot;output-table&quot; xmlns=&quot;uri:falcon:feed:0.1&quot;&gt;
     &lt;groups&gt;online,bi&lt;/groups&gt;
@@ -397,7 +534,11 @@ org.apache.hadoop.hive.ql.parse.ImportSe
     &lt;schema location=&quot;/schema/clicks&quot; provider=&quot;protobuf&quot;/&gt;
 &lt;/feed&gt;
 
-</pre></div></div><div class="section"><h5>Process<a name="Process"></a></h5><div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h5>Process<a name="Process"></a></h5>
+<div class="source">
+<pre>
 &lt;?xml version=&quot;1.0&quot;?&gt;
 &lt;process name=&quot;##processName##&quot; xmlns=&quot;uri:falcon:process:0.1&quot;&gt;
     &lt;clusters&gt;
@@ -428,7 +569,11 @@ org.apache.hadoop.hive.ql.parse.ImportSe
     &lt;retry policy=&quot;periodic&quot; delay=&quot;minutes(10)&quot; attempts=&quot;3&quot;/&gt;
 &lt;/process&gt;
 
-</pre></div></div><div class="section"><h5>Pig Script<a name="Pig_Script"></a></h5><div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h5>Pig Script<a name="Pig_Script"></a></h5>
+<div class="source">
+<pre>
 A = load '$input_database.$input_table' using org.apache.hcatalog.pig.HCatLoader();
 B = FILTER A BY $input_filter;
 C = foreach B generate id, value;

Modified: incubator/falcon/site/docs/InstallationSteps.html
URL: http://svn.apache.org/viewvc/incubator/falcon/site/docs/InstallationSteps.html?rev=1608028&r1=1608027&r2=1608028&view=diff
==============================================================================
--- incubator/falcon/site/docs/InstallationSteps.html (original)
+++ incubator/falcon/site/docs/InstallationSteps.html Sat Jul  5 14:55:57 2014
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at Feb 6, 2014
+ | Generated by Apache Maven Doxia at 2014-07-05
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20140206" />
+    <meta name="Date-Revision-yyyymmdd" content="20140705" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Falcon - Building & Installing Falcon</title>
     <link rel="stylesheet" href="../css/apache-maven-fluido-1.3.0.min.css" />
@@ -153,6 +153,9 @@
                   
                       <li>      <a href="../docs/HiveIntegration.html"  title="Hive Integration">Hive Integration</a>
 </li>
+                  
+                      <li>      <a href="../docs/Security.html"  title="Security">Security</a>
+</li>
                           </ul>
       </li>
                 <li class="dropdown">
@@ -236,7 +239,7 @@
         
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 2014-02-06</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 2014-07-05</li> 
             
                             </ul>
       </div>
@@ -245,7 +248,12 @@
                         
         <div id="bodyColumn" >
                                   
-            <div class="section"><h3>Building &amp; Installing Falcon<a name="Building__Installing_Falcon"></a></h3></div><div class="section"><h4>Building Falcon<a name="Building_Falcon"></a></h4><div class="source"><pre class="prettyprint">
+            <div class="section">
+<h3>Building &amp; Installing Falcon<a name="Building__Installing_Falcon"></a></h3></div>
+<div class="section">
+<h4>Building Falcon<a name="Building_Falcon"></a></h4>
+<div class="source">
+<pre>
 git clone https://git-wip-us.apache.org/repos/asf/incubator-falcon.git falcon
 
 cd falcon
@@ -257,13 +265,21 @@ export MAVEN_OPTS=&quot;-Xmx1024m -XX:Ma
 [optionally -Doozie.version=&lt;&lt;oozie version&gt;&gt; can be appended to build with a specific version of oozie. Oozie versions &gt;= oozie-3.2.0-incubating are supported]
 
 
-</pre></div><p>Once the build successfully completes, artifacts can be packaged for deployment. The package can be built in embedded or distributed mode.</p><p><b>Embedded Mode</b></p><div class="source"><pre class="prettyprint">
+</pre></div>
+<p>Once the build successfully completes, artifacts can be packaged for deployment. The package can be built in embedded or distributed mode.</p>
+<p><b>Embedded Mode</b></p>
+<div class="source">
+<pre>
 
 mvn clean assembly:assembly -DskipTests -DskipCheck=true [For hadoop 1]
 mvn clean assembly:assembly -DskipTests -DskipCheck=true -P hadoop-2 [For hadoop 2]
 
 
-</pre></div><p>Tar can be found in {project dir}/target/falcon-${project.version}-bin.tar.gz</p><p>Tar is structured as follows</p><div class="source"><pre class="prettyprint">
+</pre></div>
+<p>The tar can be found in {project dir}/target/falcon-${project.version}-bin.tar.gz</p>
+<p>The tar is structured as follows:</p>
+<div class="source">
+<pre>
 
 |- bin
    |- falcon
@@ -291,13 +307,20 @@ mvn clean assembly:assembly -DskipTests 
 |- DISCLAIMER.txt
 |- CHANGES.txt
 
-</pre></div><p><b>Distributed Mode</b></p><div class="source"><pre class="prettyprint">
+</pre></div>
+<p><b>Distributed Mode</b></p>
+<div class="source">
+<pre>
 
 mvn clean assembly:assembly -DskipTests -DskipCheck=true -Pdistributed,hadoop-1 [For hadoop 1]
 mvn clean assembly:assembly -DskipTests -DskipCheck=true -Pdistributed,hadoop-2 [For hadoop 2]
 
 
-</pre></div><p>Tar can be found in {project dir}/target/falcon-distributed-${project.version}-server.tar.gz</p><p>Tar is structured as follows</p><div class="source"><pre class="prettyprint">
+</pre></div>
+<p>The tar can be found in {project dir}/target/falcon-distributed-${project.version}-server.tar.gz</p>
+<p>The tar is structured as follows:</p>
+<div class="source">
+<pre>
 
 |- bin
    |- falcon
@@ -328,11 +351,21 @@ mvn clean assembly:assembly -DskipTests 
 |- DISCLAIMER.txt
 |- CHANGES.txt
 
-</pre></div></div><div class="section"><h4>Installing &amp; running Falcon<a name="Installing__running_Falcon"></a></h4><p><b>Installing falcon</b></p><div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h4>Installing &amp; running Falcon<a name="Installing__running_Falcon"></a></h4>
+<p><b>Installing falcon</b></p>
+<div class="source">
+<pre>
 tar -xzvf {falcon package}
 cd falcon-distributed-${project.version} or falcon-${project.version}
 
-</pre></div><p><b>Configuring Falcon</b></p><p>By default config directory used by falcon is {package dir}/conf. To override this set environment variable FALCON_CONF to the path of the conf dir.</p><p>falcon-env.sh has been added to the falcon conf. This file can be used to set various environment variables that you need for you services. In addition you can set any other environment variables you might need. This file will be sourced by falcon scripts before any commands are executed. The following environment variables are available to set.</p><div class="source"><pre class="prettyprint">
+</pre></div>
+<p><b>Configuring Falcon</b></p>
+<p>By default, the config directory used by falcon is {package dir}/conf. To override this, set the environment variable FALCON_CONF to the path of the conf dir.</p>
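+<p>For example (the path is hypothetical):</p>
+<div class="source">
+<pre>
+export FALCON_CONF=/etc/falcon/conf
+</pre></div>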
+<p>falcon-env.sh has been added to the falcon conf directory. This file can be used to set the environment variables required by the falcon services, as well as any other environment variables you might need. It is sourced by the falcon scripts before any commands are executed. The following environment variables are available to set:</p>
+<div class="source">
+<pre>
 # The java implementation to use. If JAVA_HOME is not found we expect java and jar to be in path
 #export JAVA_HOME=
 
@@ -372,13 +405,30 @@ cd falcon-distributed-${project.version}
 # Where do you want to expand the war file. By Default it is in /server/webapp dir under the base install dir.
 #export FALCON_EXPANDED_WEBAPP_DIR=
 
-</pre></div><p><b>Starting Falcon Server</b></p><div class="source"><pre class="prettyprint">
+</pre></div>
+<p><b>Starting Falcon Server</b></p>
+<div class="source">
+<pre>
 bin/falcon-start [-port &lt;port&gt;]
 
-</pre></div><p>By default,  * falcon server starts at port 15000. To change the port, use -port option * falcon server starts embedded active mq. To control this behaviour, set the following system properties using -D option in environment variable FALCON_OPTS:</p><ul><li>falcon.embeddedmq=&lt;true/false&gt; - Should server start embedded active mq, default true</li><li>falcon.emeddedmq.port=&lt;port&gt; - Port for embedded active mq, default 61616</li><li>falcon.embeddedmq.data=&lt;path&gt; - Data path for embedded active mq, default {package dir}/logs/data</li></ul>* falcon server starts with conf from {package dir}/conf. To override this (to use the same conf with multiple falcon upgrades), set environment variable FALCON_CONF to the path of conf dir<p><b><i>Adding Extension Libraries</i></b> Library extensions allows users to add custom libraries to entity lifecycles such as feed retention, feed replication and process execution. This is useful for usecases such as adding filesy
 stem extensions. To enable this, add the following configs to startup.properties: *.libext.paths=&lt;paths to be added to all entity lifecycles&gt; *.libext.feed.paths=&lt;paths to be added to all feed lifecycles&gt; *.libext.feed.retentions.paths=&lt;paths to be added to feed retention workflow&gt; *.libext.feed.replication.paths=&lt;paths to be added to feed replication workflow&gt; *.libext.process.paths=&lt;paths to be added to process workflow&gt;</p><p>The configured jars are added to falcon classpath and the corresponding workflows</p><p><b>Starting Prism</b></p><div class="source"><pre class="prettyprint">
+</pre></div>
+<p>By default:</p>
+<ul>
+<li>falcon server starts at port 15000. To change the port, use the -port option.</li>
+<li>falcon server starts embedded active mq. To control this behaviour, set the following system properties using the -D option in the environment variable FALCON_OPTS:
+<ul>
+<li>falcon.embeddedmq=&lt;true/false&gt; - Should the server start embedded active mq, default true</li>
+<li>falcon.emeddedmq.port=&lt;port&gt; - Port for embedded active mq, default 61616</li>
+<li>falcon.embeddedmq.data=&lt;path&gt; - Data path for embedded active mq, default {package dir}/logs/data</li></ul></li>
+<li>falcon server starts with conf from {package dir}/conf. To override this (to use the same conf with multiple falcon upgrades), set the environment variable FALCON_CONF to the path of the conf dir.</li></ul>
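+<p>For example, to start the server on a different port with the embedded active mq disabled (a minimal sketch; the port value is illustrative):</p>
+<div class="source">
+<pre>
+# illustrative values; adjust to your environment
+export FALCON_OPTS=&quot;-Dfalcon.embeddedmq=false&quot;
+bin/falcon-start -port 15443
+</pre></div>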
+<p><b><i>Adding Extension Libraries</i></b></p>
+<p>Library extensions allow users to add custom libraries to entity lifecycles such as feed retention, feed replication and process execution. This is useful for use cases such as adding filesystem extensions. To enable this, add the following configs to startup.properties:</p>
+<ul>
+<li>*.libext.paths=&lt;paths to be added to all entity lifecycles&gt;</li>
+<li>*.libext.feed.paths=&lt;paths to be added to all feed lifecycles&gt;</li>
+<li>*.libext.feed.retentions.paths=&lt;paths to be added to feed retention workflow&gt;</li>
+<li>*.libext.feed.replication.paths=&lt;paths to be added to feed replication workflow&gt;</li>
+<li>*.libext.process.paths=&lt;paths to be added to process workflow&gt;</li></ul>
+<p>The configured jars are added to the falcon classpath and to the corresponding workflows.</p>
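+<p>For example, a startup.properties fragment that adds custom jars to all entity lifecycles (a sketch; the HDFS path is illustrative):</p>
+<div class="source">
+<pre>
+# hypothetical HDFS path holding the custom jars
+*.libext.paths=hdfs://localhost:8020/projects/falcon/libext
+</pre></div>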
+<p><b>Starting Prism</b></p>
+<div class="source">
+<pre>
 bin/prism-start [-port &lt;port&gt;]
 
-</pre></div><p>By default,  * falcon server starts at port 16000. To change the port, use -port option * prism starts with conf from {package dir}/conf. To override this (to use the same conf with multiple prism upgrades), set environment variable FALCON_CONF to the path of conf dir</p><p><b>Using Falcon</b></p><div class="source"><pre class="prettyprint">
+</pre></div>
+<p>By default:</p>
+<ul>
+<li>prism starts at port 16000. To change the port, use the -port option.</li>
+<li>prism starts with conf from {package dir}/conf. To override this (to use the same conf with multiple prism upgrades), set the environment variable FALCON_CONF to the path of the conf dir.</li></ul>
+<p><b>Using Falcon</b></p>
+<div class="source">
+<pre>
 bin/falcon admin -version
 Falcon server build version: {Version:&quot;0.3-SNAPSHOT-rd7e2be9afa2a5dc96acd1ec9e325f39c6b2f17f7&quot;,Mode:&quot;embedded&quot;}
 
@@ -387,13 +437,25 @@ Falcon server build version: {Version:&q
 bin/falcon help
 (for more details about falcon cli usage)
 
-</pre></div><p><b>Dashboard</b></p><p>Once falcon / prism is started, you can view the status of falcon entities using the Web-based dashboard. The web UI works in both distributed and embedded mode. You can open your browser at the corresponding port to use the web UI.</p><p><b>Stopping Falcon Server</b></p><div class="source"><pre class="prettyprint">
+</pre></div>
+<p><b>Dashboard</b></p>
+<p>Once falcon / prism is started, you can view the status of falcon entities using the web-based dashboard. The web UI works in both distributed and embedded mode; open your browser at the corresponding port to use it.</p>
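+<p>For example, with the default ports and an embedded-mode server on localhost, the dashboard would be at:</p>
+<div class="source">
+<pre>
+http://localhost:15000/
+</pre></div>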
+<p><b>Stopping Falcon Server</b></p>
+<div class="source">
+<pre>
 bin/falcon-stop
 
-</pre></div><p><b>Stopping Prism</b></p><div class="source"><pre class="prettyprint">
+</pre></div>
+<p><b>Stopping Prism</b></p>
+<div class="source">
+<pre>
 bin/prism-stop
 
-</pre></div></div><div class="section"><h4>Preparing Oozie and Falcon packages for deployment<a name="Preparing_Oozie_and_Falcon_packages_for_deployment"></a></h4><div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h4>Preparing Oozie and Falcon packages for deployment<a name="Preparing_Oozie_and_Falcon_packages_for_deployment"></a></h4>
+<div class="source">
+<pre>
 cd &lt;&lt;project home&gt;&gt;
 src/bin/package.sh &lt;&lt;hadoop-version&gt;&gt; &lt;&lt;oozie-version&gt;&gt;
 
@@ -401,27 +463,49 @@ src/bin/package.sh &lt;&lt;hadoop-versio
 &gt;&gt; Falcon package is available in &lt;&lt;falcon home&gt;&gt;/target/falcon-&lt;&lt;version&gt;&gt;-bin.tar.gz
 &gt;&gt; Oozie package is available in &lt;&lt;falcon home&gt;&gt;/target/oozie-3.3.2-distro.tar.gz
 
-</pre></div></div><div class="section"><h4>Running Examples using embedded package<a name="Running_Examples_using_embedded_package"></a></h4><div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h4>Running Examples using embedded package<a name="Running_Examples_using_embedded_package"></a></h4>
+<div class="source">
+<pre>
 bin/falcon-start
 
-</pre></div><p>Make sure the hadoop and oozie endpoints are according to your setup in examples/entity/standalone-cluster.xml</p><div class="source"><pre class="prettyprint">
+</pre></div>
+<p>Make sure the hadoop and oozie endpoints in examples/entity/standalone-cluster.xml match your setup.</p>
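+<p>These endpoints live in the interfaces section of the cluster definition; the sketch below assumes a local single-node setup, and the endpoints and versions are illustrative and must be replaced with your own:</p>
+<div class="source">
+<pre>
+&lt;!-- illustrative endpoints only --&gt;
+&lt;interface type=&quot;write&quot; endpoint=&quot;hdfs://localhost:8020&quot; version=&quot;1.1.2&quot; /&gt;
+&lt;interface type=&quot;workflow&quot; endpoint=&quot;http://localhost:11000/oozie/&quot; version=&quot;3.3.2&quot; /&gt;
+</pre></div>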
+<div class="source">
+<pre>
 bin/falcon entity -submit -type cluster -file examples/entity/standalone-cluster.xml
 
-</pre></div><p>Submit input and output feeds:</p><div class="source"><pre class="prettyprint">
+</pre></div>
+<p>Submit input and output feeds:</p>
+<div class="source">
+<pre>
 bin/falcon entity -submit -type feed -file examples/entity/in-feed.xml
 bin/falcon entity -submit -type feed -file examples/entity/out-feed.xml
 
-</pre></div><p>Set-up workflow for the process:</p><div class="source"><pre class="prettyprint">
+</pre></div>
+<p>Set up the workflow for the process:</p>
+<div class="source">
+<pre>
 hadoop fs -put examples/app /
 
-</pre></div><p>Submit and schedule the process:</p><div class="source"><pre class="prettyprint">
+</pre></div>
+<p>Submit and schedule the process:</p>
+<div class="source">
+<pre>
 bin/falcon entity -submitAndSchedule -type process -file examples/entity/oozie-mr-process.xml
 bin/falcon entity -submitAndSchedule -type process -file examples/entity/pig-process.xml
 
-</pre></div><p>Generate input data:</p><div class="source"><pre class="prettyprint">
+</pre></div>
+<p>Generate input data:</p>
+<div class="source">
+<pre>
 examples/data/generate.sh &lt;&lt;hdfs endpoint&gt;&gt;
 
-</pre></div><p>Get status of instances:</p><div class="source"><pre class="prettyprint">
+</pre></div>
+<p>Get status of instances:</p>
+<div class="source">
+<pre>
 bin/falcon instance -status -type process -name oozie-mr-process -start 2013-11-15T00:05Z -end 2013-11-15T01:00Z
 
 </pre></div></div>

Modified: incubator/falcon/site/docs/OnBoarding.html
URL: http://svn.apache.org/viewvc/incubator/falcon/site/docs/OnBoarding.html?rev=1608028&r1=1608027&r2=1608028&view=diff
==============================================================================
--- incubator/falcon/site/docs/OnBoarding.html (original)
+++ incubator/falcon/site/docs/OnBoarding.html Sat Jul  5 14:55:57 2014
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at Feb 6, 2014
+ | Generated by Apache Maven Doxia at 2014-07-05
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20140206" />
+    <meta name="Date-Revision-yyyymmdd" content="20140705" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Falcon - Contents</title>
     <link rel="stylesheet" href="../css/apache-maven-fluido-1.3.0.min.css" />
@@ -153,6 +153,9 @@
                   
                       <li>      <a href="../docs/HiveIntegration.html"  title="Hive Integration">Hive Integration</a>
 </li>
+                  
+                      <li>      <a href="../docs/Security.html"  title="Security">Security</a>
+</li>
                           </ul>
       </li>
                 <li class="dropdown">
@@ -236,7 +239,7 @@
         
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 2014-02-06</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 2014-07-05</li> 
             
                             </ul>
       </div>
@@ -245,7 +248,31 @@
                         
         <div id="bodyColumn" >
                                   
-            <div class="section"><h3>Contents<a name="Contents"></a></h3><p></p><ul><li><a href="#Onboarding Steps">Onboarding Steps</a></li><li><a href="#Sample Pipeline">Sample Pipeline</a></li><li><a href="./HiveIntegration.html">Hive Examples</a></li></ul></div><div class="section"><h4>Onboarding Steps<a name="Onboarding_Steps"></a></h4><p></p><ul><li>Create cluster definition for the cluster, specifying name node, job tracker, workflow engine endpoint, messaging endpoint. Refer to <a href="./EntitySpecification.html">cluster definition</a> for details.</li><li>Create Feed definitions for each of the input and output specifying frequency, data path, ownership. Refer to <a href="./EntitySpecification.html">feed definition</a> for details.</li><li>Create Process definition for your job. Process defines configuration for the workflow job. Important attributes are frequency, inputs/outputs and workflow path. Refer to <a href="./EntitySpecification.html">process definition</a> for pr
 ocess details.</li><li>Define workflow for your job using the workflow engine(only oozie is supported as of now). Refer <a class="externalLink" href="http://incubator.apache.org/oozie/docs/3.1.3/docs/WorkflowFunctionalSpec.html">Oozie Workflow Specification</a>. The libraries required for the workflow should be available in lib folder in workflow path.</li><li>Set-up workflow definition, libraries and referenced scripts on hadoop.</li><li>Submit cluster definition</li><li>Submit and schedule feed and process definitions</li></ul></div><div class="section"><h4>Sample Pipeline<a name="Sample_Pipeline"></a></h4></div><div class="section"><h5>Cluster   <a name="Cluster"></a></h5><p>Cluster definition that contains end points for name node, job tracker, oozie and jms server:</p><div class="source"><pre class="prettyprint">
+            <div class="section">
+<h3>Contents<a name="Contents"></a></h3>
+<p></p>
+<ul>
+<li><a href="#Onboarding Steps">Onboarding Steps</a></li>
+<li><a href="#Sample Pipeline">Sample Pipeline</a></li>
+<li><a href="./HiveIntegration.html">Hive Examples</a></li></ul></div>
+<div class="section">
+<h4>Onboarding Steps<a name="Onboarding_Steps"></a></h4>
+<p></p>
+<ul>
+<li>Create a cluster definition for the cluster, specifying the name node, job tracker, workflow engine endpoint and messaging endpoint. Refer to <a href="./EntitySpecification.html">cluster definition</a> for details.</li>
+<li>Create feed definitions for each of the inputs and outputs, specifying frequency, data path and ownership. Refer to <a href="./EntitySpecification.html">feed definition</a> for details.</li>
+<li>Create a process definition for your job. The process defines the configuration for the workflow job; important attributes are frequency, inputs/outputs and workflow path. Refer to <a href="./EntitySpecification.html">process definition</a> for details.</li>
+<li>Define the workflow for your job using the workflow engine (only oozie is supported as of now). Refer to the <a class="externalLink" href="http://incubator.apache.org/oozie/docs/3.1.3/docs/WorkflowFunctionalSpec.html">Oozie Workflow Specification</a>. The libraries required for the workflow should be available in the lib folder under the workflow path.</li>
+<li>Set up the workflow definition, libraries and referenced scripts on hadoop.</li>
+<li>Submit the cluster definition.</li>
+<li>Submit and schedule the feed and process definitions.</li></ul></div>
+<div class="section">
+<h4>Sample Pipeline<a name="Sample_Pipeline"></a></h4></div>
+<div class="section">
+<h5>Cluster   <a name="Cluster"></a></h5>
+<p>Cluster definition that contains the endpoints for the name node, job tracker, oozie and jms server:</p>
+<div class="source">
+<pre>
 &lt;?xml version=&quot;1.0&quot;?&gt;
 &lt;!--
     Cluster configuration
@@ -271,7 +298,12 @@
     &lt;/locations&gt;
 &lt;/cluster&gt;
 
-</pre></div></div><div class="section"><h5>Input Feed<a name="Input_Feed"></a></h5><p>Hourly feed that defines feed path, frequency, ownership and validity:</p><div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h5>Input Feed<a name="Input_Feed"></a></h5>
+<p>Hourly feed that defines feed path, frequency, ownership and validity:</p>
+<div class="source">
+<pre>
 &lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
 &lt;!--
     Hourly sample input data
@@ -303,7 +335,12 @@
     &lt;schema location=&quot;/none&quot; provider=&quot;none&quot; /&gt;
 &lt;/feed&gt;
 
-</pre></div></div><div class="section"><h5>Output Feed<a name="Output_Feed"></a></h5><p>Daily feed that defines feed path, frequency, ownership and validity:</p><div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h5>Output Feed<a name="Output_Feed"></a></h5>
+<p>Daily feed that defines feed path, frequency, ownership and validity:</p>
+<div class="source">
+<pre>
 &lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
 &lt;!--
     Daily sample output data
@@ -335,7 +372,12 @@ xmlns:xsi=&quot;http://www.w3.org/2001/X
     &lt;schema location=&quot;/none&quot; provider=&quot;none&quot; /&gt;
 &lt;/feed&gt;
 
-</pre></div></div><div class="section"><h5>Process<a name="Process"></a></h5><p>Sample process which runs daily at 6th hour on corp cluster. It takes one input - <a href="./SampleInput.html">SampleInput</a> for the previous day(24 instances). It generates one output - <a href="./SampleOutput.html">SampleOutput</a> for previous day. The workflow is defined at /projects/bootcamp/workflow/workflow.xml. Any libraries available for the workflow should be at /projects/bootcamp/workflow/lib. The process also defines properties queueName, ssh.host, and fileTimestamp which are passed to the workflow. In addition, Falcon exposes the following properties to the workflow: nameNode, jobTracker(hadoop properties), input and output(Input/Output properties).</p><div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h5>Process<a name="Process"></a></h5>
+<p>Sample process that runs daily at the 6th hour on the corp cluster. It takes one input - <a href="./SampleInput.html">SampleInput</a> - for the previous day (24 instances) and generates one output - <a href="./SampleOutput.html">SampleOutput</a> - for the previous day. The workflow is defined at /projects/bootcamp/workflow/workflow.xml. Any libraries required by the workflow should be at /projects/bootcamp/workflow/lib. The process also defines the properties queueName, ssh.host and fileTimestamp, which are passed to the workflow. In addition, Falcon exposes the following properties to the workflow: nameNode and jobTracker (hadoop properties), and input and output (Input/Output properties).</p>
+<div class="source">
+<pre>
 &lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
 &lt;!--
     Daily sample process. Runs at 6th hour every day. Input - last day's hourly data. Generates output for yesterday
@@ -370,7 +412,16 @@ xmlns:xsi=&quot;http://www.w3.org/2001/X
     &lt;/late-process&gt;
 &lt;/process&gt;
 
-</pre></div></div><div class="section"><h5>Oozie Workflow<a name="Oozie_Workflow"></a></h5><p>The sample user workflow contains 3 actions:</p><ul><li>Pig action - Executes pig script /projects/bootcamp/workflow/script.pig</li><li>concatenator - Java action that concatenates part files and generates a single file</li><li>file upload - ssh action that gets the concatenated file from hadoop and sends the file to a remote host</li></ul><div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h5>Oozie Workflow<a name="Oozie_Workflow"></a></h5>
+<p>The sample user workflow contains 3 actions:</p>
+<ul>
+<li>Pig action - Executes pig script /projects/bootcamp/workflow/script.pig</li>
+<li>concatenator - Java action that concatenates part files and generates a single file</li>
+<li>file upload - ssh action that gets the concatenated file from hadoop and sends the file to a remote host</li></ul>
+<div class="source">
+<pre>
 &lt;workflow-app xmlns=&quot;uri:oozie:workflow:0.2&quot; name=&quot;sample-wf&quot;&gt;
         &lt;start to=&quot;pig&quot; /&gt;
 
@@ -449,7 +500,12 @@ xmlns:xsi=&quot;http://www.w3.org/2001/X
         &lt;end name=&quot;end&quot; /&gt;
 &lt;/workflow-app&gt;
 
-</pre></div></div><div class="section"><h5>File Upload Script<a name="File_Upload_Script"></a></h5><p>The script gets the file from hadoop, rsyncs the file to /tmp on remote host and deletes the file from hadoop</p><div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h5>File Upload Script<a name="File_Upload_Script"></a></h5>
+<p>The script gets the file from hadoop, rsyncs it to /tmp on the remote host and deletes the file from hadoop.</p>
+<div class="source">
+<pre>
 #!/bin/bash
 
 trap 'echo &quot;output=$?&quot;; exit $?' ERR INT TERM

Modified: incubator/falcon/site/docs/Security.html
URL: http://svn.apache.org/viewvc/incubator/falcon/site/docs/Security.html?rev=1608028&r1=1608027&r2=1608028&view=diff
==============================================================================
--- incubator/falcon/site/docs/Security.html (original)
+++ incubator/falcon/site/docs/Security.html Sat Jul  5 14:55:57 2014
@@ -153,6 +153,9 @@
                   
                       <li>      <a href="../docs/HiveIntegration.html"  title="Hive Integration">Hive Integration</a>
 </li>
+                  
+                      <li>      <a href="../docs/Security.html"  title="Security">Security</a>
+</li>
                           </ul>
       </li>
                 <li class="dropdown">

Modified: incubator/falcon/site/docs/restapi/AdminConfig.html
URL: http://svn.apache.org/viewvc/incubator/falcon/site/docs/restapi/AdminConfig.html?rev=1608028&r1=1608027&r2=1608028&view=diff
==============================================================================
--- incubator/falcon/site/docs/restapi/AdminConfig.html (original)
+++ incubator/falcon/site/docs/restapi/AdminConfig.html Sat Jul  5 14:55:57 2014
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at Feb 6, 2014
+ | Generated by Apache Maven Doxia at 2014-07-05
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20140206" />
+    <meta name="Date-Revision-yyyymmdd" content="20140705" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Falcon - GET /api/admin/config/:config-type</title>
     <link rel="stylesheet" href="../../css/apache-maven-fluido-1.3.0.min.css" />
@@ -153,6 +153,9 @@
                   
                       <li>      <a href="../../docs/HiveIntegration.html"  title="Hive Integration">Hive Integration</a>
 </li>
+                  
+                      <li>      <a href="../../docs/Security.html"  title="Security">Security</a>
+</li>
                           </ul>
       </li>
                 <li class="dropdown">
@@ -236,7 +239,7 @@
         
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 2014-02-06</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 2014-07-05</li> 
             
                             </ul>
       </div>
@@ -245,11 +248,39 @@
                         
         <div id="bodyColumn" >
                                   
-            <div class="section"><h3>GET /api/admin/config/:config-type<a name="GET_apiadminconfig:config-type"></a></h3><p></p><ul><li><a href="#Description">Description</a></li><li><a href="#Parameters">Parameters</a></li><li><a href="#Results">Results</a></li><li><a href="#Examples">Examples</a></li></ul></div><div class="section"><h3>Description<a name="Description"></a></h3><p>Get configuration information of the falcon server.</p></div><div class="section"><h3>Parameters<a name="Parameters"></a></h3><p></p><ul><li>:config-type can be build, deploy, startup or runtime</li></ul></div><div class="section"><h3>Results<a name="Results"></a></h3><p>Configuration information of the server.</p></div><div class="section"><h3>Examples<a name="Examples"></a></h3></div><div class="section"><h4>Rest Call<a name="Rest_Call"></a></h4><div class="source"><pre class="prettyprint">
+            <div class="section">
+<h3>GET /api/admin/config/:config-type<a name="GET_apiadminconfig:config-type"></a></h3>
+<p></p>
+<ul>
+<li><a href="#Description">Description</a></li>
+<li><a href="#Parameters">Parameters</a></li>
+<li><a href="#Results">Results</a></li>
+<li><a href="#Examples">Examples</a></li></ul></div>
+<div class="section">
+<h3>Description<a name="Description"></a></h3>
+<p>Get configuration information of the falcon server.</p></div>
+<div class="section">
+<h3>Parameters<a name="Parameters"></a></h3>
+<p></p>
+<ul>
+<li>:config-type can be build, deploy, startup or runtime</li></ul></div>
+<div class="section">
+<h3>Results<a name="Results"></a></h3>
+<p>Configuration information of the server.</p></div>
+<div class="section">
+<h3>Examples<a name="Examples"></a></h3></div>
+<div class="section">
+<h4>Rest Call<a name="Rest_Call"></a></h4>
+<div class="source">
+<pre>
 GET http://localhost:15000/api/admin/config/deploy
 Remote-User: rgautam
 
-</pre></div></div><div class="section"><h4>Result<a name="Result"></a></h4><div class="source"><pre class="prettyprint">
+</pre></div></div>
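+<p>The same call can be issued with curl (a sketch; assumes the server runs on the default port with the Remote-User header shown above):</p>
+<div class="source">
+<pre>
+curl -H &quot;Remote-User: rgautam&quot; http://localhost:15000/api/admin/config/deploy
+</pre></div>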
+<div class="section">
+<h4>Result<a name="Result"></a></h4>
+<div class="source">
+<pre>
 {
     &quot;properties&quot;: [
         {

Modified: incubator/falcon/site/docs/restapi/AdminStack.html
URL: http://svn.apache.org/viewvc/incubator/falcon/site/docs/restapi/AdminStack.html?rev=1608028&r1=1608027&r2=1608028&view=diff
==============================================================================
--- incubator/falcon/site/docs/restapi/AdminStack.html (original)
+++ incubator/falcon/site/docs/restapi/AdminStack.html Sat Jul  5 14:55:57 2014
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at Feb 6, 2014
+ | Generated by Apache Maven Doxia at 2014-07-05
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20140206" />
+    <meta name="Date-Revision-yyyymmdd" content="20140705" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Falcon - GET /api/admin/stack</title>
     <link rel="stylesheet" href="../../css/apache-maven-fluido-1.3.0.min.css" />
@@ -153,6 +153,9 @@
                   
                       <li>      <a href="../../docs/HiveIntegration.html"  title="Hive Integration">Hive Integration</a>
 </li>
+                  
+                      <li>      <a href="../../docs/Security.html"  title="Security">Security</a>
+</li>
                           </ul>
       </li>
                 <li class="dropdown">
@@ -236,7 +239,7 @@
         
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 2014-02-06</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 2014-07-05</li> 
             
                             </ul>
       </div>
@@ -245,11 +248,37 @@
                         
         <div id="bodyColumn" >
                                   
-            <div class="section"><h3>GET /api/admin/stack<a name="GET_apiadminstack"></a></h3><p></p><ul><li><a href="#Description">Description</a></li><li><a href="#Parameters">Parameters</a></li><li><a href="#Results">Results</a></li><li><a href="#Examples">Examples</a></li></ul></div><div class="section"><h3>Description<a name="Description"></a></h3><p>Get stack trace of the falcon server.</p></div><div class="section"><h3>Parameters<a name="Parameters"></a></h3><p>None.</p></div><div class="section"><h3>Results<a name="Results"></a></h3><p>Stack trace of the server.</p></div><div class="section"><h3>Examples<a name="Examples"></a></h3></div><div class="section"><h4>Rest Call<a name="Rest_Call"></a></h4><div class="source"><pre class="prettyprint">
+            <div class="section">
+<h3>GET /api/admin/stack<a name="GET_apiadminstack"></a></h3>
+<p></p>
+<ul>
+<li><a href="#Description">Description</a></li>
+<li><a href="#Parameters">Parameters</a></li>
+<li><a href="#Results">Results</a></li>
+<li><a href="#Examples">Examples</a></li></ul></div>
+<div class="section">
+<h3>Description<a name="Description"></a></h3>
+<p>Get stack trace of the falcon server.</p></div>
+<div class="section">
+<h3>Parameters<a name="Parameters"></a></h3>
+<p>None.</p></div>
+<div class="section">
+<h3>Results<a name="Results"></a></h3>
+<p>Stack trace of the server.</p></div>
+<div class="section">
+<h3>Examples<a name="Examples"></a></h3></div>
+<div class="section">
+<h4>Rest Call<a name="Rest_Call"></a></h4>
+<div class="source">
+<pre>
 GET http://localhost:15000/api/admin/stack
 Remote-User: rgautam
 
-</pre></div></div><div class="section"><h4>Result<a name="Result"></a></h4><div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h4>Result<a name="Result"></a></h4>
+<div class="source">
+<pre>
 Reference Handler
 State: WAITING
 java.lang.Object.wait(Native Method)

Modified: incubator/falcon/site/docs/restapi/AdminVersion.html
URL: http://svn.apache.org/viewvc/incubator/falcon/site/docs/restapi/AdminVersion.html?rev=1608028&r1=1608027&r2=1608028&view=diff
==============================================================================
--- incubator/falcon/site/docs/restapi/AdminVersion.html (original)
+++ incubator/falcon/site/docs/restapi/AdminVersion.html Sat Jul  5 14:55:57 2014
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at Feb 6, 2014
+ | Generated by Apache Maven Doxia at 2014-07-05
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20140206" />
+    <meta name="Date-Revision-yyyymmdd" content="20140705" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Falcon - GET /api/admin/version</title>
     <link rel="stylesheet" href="../../css/apache-maven-fluido-1.3.0.min.css" />
@@ -153,6 +153,9 @@
                   
                       <li>      <a href="../../docs/HiveIntegration.html"  title="Hive Integration">Hive Integration</a>
 </li>
+                  
+                      <li>      <a href="../../docs/Security.html"  title="Security">Security</a>
+</li>
                           </ul>
       </li>
                 <li class="dropdown">
@@ -236,7 +239,7 @@
         
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 2014-02-06</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 2014-07-05</li> 
             
                             </ul>
       </div>
@@ -245,11 +248,37 @@
                         
         <div id="bodyColumn" >
                                   
-            <div class="section"><h3>GET /api/admin/version<a name="GET_apiadminversion"></a></h3><p></p><ul><li><a href="#Description">Description</a></li><li><a href="#Parameters">Parameters</a></li><li><a href="#Results">Results</a></li><li><a href="#Examples">Examples</a></li></ul></div><div class="section"><h3>Description<a name="Description"></a></h3><p>Get version of the falcon server.</p></div><div class="section"><h3>Parameters<a name="Parameters"></a></h3><p>None.</p></div><div class="section"><h3>Results<a name="Results"></a></h3><p>Version of the server.</p></div><div class="section"><h3>Examples<a name="Examples"></a></h3></div><div class="section"><h4>Rest Call<a name="Rest_Call"></a></h4><div class="source"><pre class="prettyprint">
+            <div class="section">
+<h3>GET /api/admin/version<a name="GET_apiadminversion"></a></h3>
+<p></p>
+<ul>
+<li><a href="#Description">Description</a></li>
+<li><a href="#Parameters">Parameters</a></li>
+<li><a href="#Results">Results</a></li>
+<li><a href="#Examples">Examples</a></li></ul></div>
+<div class="section">
+<h3>Description<a name="Description"></a></h3>
+<p>Get version of the falcon server.</p></div>
+<div class="section">
+<h3>Parameters<a name="Parameters"></a></h3>
+<p>None.</p></div>
+<div class="section">
+<h3>Results<a name="Results"></a></h3>
+<p>Version of the server.</p></div>
+<div class="section">
+<h3>Examples<a name="Examples"></a></h3></div>
+<div class="section">
+<h4>Rest Call<a name="Rest_Call"></a></h4>
+<div class="source">
+<pre>
 GET http://localhost:15000/api/admin/version
 Remote-User: rgautam
 
-</pre></div></div><div class="section"><h4>Result<a name="Result"></a></h4><div class="source"><pre class="prettyprint">
+</pre></div></div>
+<div class="section">
+<h4>Result<a name="Result"></a></h4>
+<div class="source">
+<pre>
 {
     &quot;properties&quot;:[
         {