Posted to commits@falcon.apache.org by ba...@apache.org on 2016/08/08 23:16:19 UTC

[45/49] falcon git commit: FALCON-2006 Update documentation on site for 0.10 release

http://git-wip-us.apache.org/repos/asf/falcon/blob/4612c3f7/content/0.10/ImportExport.html
----------------------------------------------------------------------
diff --git a/content/0.10/ImportExport.html b/content/0.10/ImportExport.html
new file mode 100644
index 0000000..e359cfb
--- /dev/null
+++ b/content/0.10/ImportExport.html
@@ -0,0 +1,293 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2016-08-08
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20160808" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Falcon Data Import and Export</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( '.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                                                <img src="images/falcon-logo.png"  alt="Apache Falcon" width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Falcon Data Import and Export</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 2016-08-08</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.10</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h2>Falcon Data Import and Export<a name="Falcon_Data_Import_and_Export"></a></h2></div>
+<div class="section">
+<h3>Overview<a name="Overview"></a></h3>
+<p>Falcon provides constructs to periodically bring raw data from external data sources (like databases, drop boxes etc) onto Hadoop and push derived data computed on Hadoop onto external data sources.</p>
+<p>As of this release, Falcon only supports relational databases (e.g. Oracle, MySQL) via JDBC as external data sources. Future releases will add support for other external data sources.</p></div>
+<div class="section">
+<h3>Prerequisites<a name="Prerequisites"></a></h3>
+<p>Following are the prerequisites to import external data from and export to databases.</p>
+<p></p>
+<ul>
+<li><b>Sqoop 1.4.6+</b></li>
+<li><b>Oozie 4.2.0+</b></li>
+<li><b>Appropriate database connector</b></li></ul>
+<p><b>Note:</b> Falcon uses Sqoop for import/export operations. Sqoop requires the appropriate database driver to connect to the relational database. Please refer to the Sqoop documentation for any Sqoop related questions. Please make sure the database driver jar is copied into the Oozie share lib for Sqoop.</p>
+<div class="source">
+<pre>
+For example, in order to import and export with MySQL, please make sure the latest MySQL connector
+mysql-connector-java-5.1.31.jar is copied into oozie's Sqoop share lib
+
+/user/oozie/share/lib/{lib-dir}/sqoop/mysql-connector-java-5.1.31.jar
+
+where {lib-dir} value varies in oozie deployments.
+
+
+</pre></div></div>
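+<p>As an illustration only, the copy could be done along the following lines; the Oozie URL (http://oozie-host:11000/oozie) and the lib_&lt;timestamp&gt; directory name are assumptions that vary per deployment:</p>
+<div class="source">
+<pre>
+# copy the JDBC driver into the Sqoop share lib directory on HDFS
+hadoop fs -put mysql-connector-java-5.1.31.jar /user/oozie/share/lib/lib_20150721010816/sqoop/
+
+# ask Oozie to pick up the updated share lib and verify the jar is listed
+oozie admin -oozie http://oozie-host:11000/oozie -sharelibupdate
+oozie admin -oozie http://oozie-host:11000/oozie -shareliblist sqoop
+
+</pre></div>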
+<div class="section">
+<h3>Usage<a name="Usage"></a></h3></div>
+<div class="section">
+<h4>Entity Definition and Setup<a name="Entity_Definition_and_Setup"></a></h4>
+<p></p>
+<ul>
+<li><b>Datasource Entity</b></li></ul>Datasource entity abstracts connection and credential details to external data sources. The Datasource entity supports read and write interfaces with specific credentials. The default credential will be used if the read or write interface does not have its own credentials. In general, the Datasource entity will be defined by the system administrator. Please refer to the datasource XSD for more details.
+<p>The following example defines a Datasource entity for a MySQL database. The import operation will use the read interface with url &quot;jdbc:mysql://dbhost/test&quot;, user name &quot;import_usr&quot; and password text &quot;sqoop&quot;, whereas the export operation will use the write interface with url &quot;jdbc:mysql://dbhost/test&quot;, user name &quot;export_usr&quot; and password specified in an HDFS file at the location &quot;/user/ambari-qa/password-store/password_write_user&quot;.</p>
+<p>The default credential specified will be used if either the read or write interface does not provide its own credentials. The default credential specifies the password using the password alias feature available via the hadoop credential functionality. Users can create a password alias using the &quot;hadoop credential -create &lt;alias&gt; -provider &lt;provider-path&gt;&quot; command, where &lt;alias&gt; is a string and &lt;provider-path&gt; is an HDFS jceks file. During runtime, the specified alias will be used to look up the password stored encrypted in the jceks file on HDFS specified under the providerPath element.</p>
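+<p>A minimal sketch of creating such an alias for the default credential in the example below; the provider URI form jceks://hdfs@namenode:8020/... is an assumption matching the sample providerPath:</p>
+<div class="source">
+<pre>
+# create the alias; the command prompts for the password and stores it encrypted in the jceks file
+hadoop credential create sqoop.password.alias -provider jceks://hdfs@namenode:8020/user/ambari-qa/sqoop_password.jceks
+
+# list the aliases stored in the provider to verify
+hadoop credential list -provider jceks://hdfs@namenode:8020/user/ambari-qa/sqoop_password.jceks
+
+</pre></div>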
+<p>The available read and write interfaces enable database administrators to segregate read and write workloads.</p>
+<div class="source">
+<pre>
+
+      File: mysql-database.xml
+
+      &lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
+      &lt;datasource colo=&quot;west-coast&quot; description=&quot;MySQL database on west coast&quot; type=&quot;mysql&quot; name=&quot;mysql-db&quot; xmlns=&quot;uri:falcon:datasource:0.1&quot;&gt;
+          &lt;tags&gt;owner=foobar@ambari.apache.org, consumer=phoe@ambari.apache.org&lt;/tags&gt;
+          &lt;interfaces&gt;
+              &lt;!-- ***** read interface ***** --&gt;
+              &lt;interface type=&quot;readonly&quot; endpoint=&quot;jdbc:mysql://dbhost/test&quot;&gt;
+                  &lt;credential type=&quot;password-text&quot;&gt;
+                      &lt;userName&gt;import_usr&lt;/userName&gt;
+                      &lt;passwordText&gt;sqoop&lt;/passwordText&gt;
+                  &lt;/credential&gt;
+              &lt;/interface&gt;
+
+              &lt;!-- ***** write interface ***** --&gt;
+              &lt;interface type=&quot;write&quot;  endpoint=&quot;jdbc:mysql://dbhost/test&quot;&gt;
+                  &lt;credential type=&quot;password-file&quot;&gt;
+                      &lt;userName&gt;export_usr&lt;/userName&gt;
+                      &lt;passwordFile&gt;/user/ambari-qa/password-store/password_write_user&lt;/passwordFile&gt;
+                  &lt;/credential&gt;
+              &lt;/interface&gt;
+
+              &lt;!-- *** default credential *** --&gt;
+              &lt;credential type=&quot;password-alias&quot;&gt;
+                &lt;userName&gt;sqoop2_user&lt;/userName&gt;
+                &lt;passwordAlias&gt;
+                    &lt;alias&gt;sqoop.password.alias&lt;/alias&gt;
+                    &lt;providerPath&gt;hdfs://namenode:8020/user/ambari-qa/sqoop_password.jceks&lt;/providerPath&gt;
+                &lt;/passwordAlias&gt;
+              &lt;/credential&gt;
+
+          &lt;/interfaces&gt;
+
+          &lt;driver&gt;
+              &lt;clazz&gt;com.mysql.jdbc.Driver&lt;/clazz&gt;
+              &lt;jar&gt;/user/oozie/share/lib/lib_20150721010816/sqoop/mysql-connector-java-5.1.31.jar&lt;/jar&gt;
+          &lt;/driver&gt;
+      &lt;/datasource&gt;
+      
+</pre></div>
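+<p>Before submitting the Datasource entity, connectivity can optionally be sanity-checked with Sqoop directly; a sketch using the read interface credentials from the example above (this step is outside Falcon):</p>
+<div class="source">
+<pre>
+# verify that Sqoop can reach the database with the read credentials
+sqoop eval --connect jdbc:mysql://dbhost/test --username import_usr --password sqoop --query &quot;SELECT 1&quot;
+
+</pre></div>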
+<p></p>
+<ul>
+<li><b>Feed Entity</b></li></ul>Feed entity now enables users to define IMPORT and EXPORT policies in addition to RETENTION and REPLICATION. The IMPORT and EXPORT policies will refer to an already defined Datasource entity for connection and credential details and take a table name from the policy to operate on. Please refer to the feed entity XSD for details.
+<p>The following example defines a Feed entity with IMPORT and EXPORT policies. Both the IMPORT and EXPORT operations refer to the datasource entity &quot;mysql-db&quot;. The IMPORT operation will use the read interface and credentials while the EXPORT operation will use the write interface and credentials. A feed instance is created every hour since the frequency of the Feed is hours(1), and the Feed instances are deleted after 90 days because of the retention policy.</p>
+<div class="source">
+<pre>
+
+      File: customer_email_feed.xml
+
+      &lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
+      &lt;!--
+       A feed representing Hourly customer email data retained for 90 days
+       --&gt;
+      &lt;feed description=&quot;Raw customer email feed&quot; name=&quot;customer_feed&quot; xmlns=&quot;uri:falcon:feed:0.1&quot;&gt;
+          &lt;tags&gt;externalSystem=USWestEmailServers,classification=secure&lt;/tags&gt;
+          &lt;groups&gt;DataImportPipeline&lt;/groups&gt;
+          &lt;frequency&gt;hours(1)&lt;/frequency&gt;
+          &lt;late-arrival cut-off=&quot;hours(4)&quot;/&gt;
+          &lt;clusters&gt;
+              &lt;cluster name=&quot;primaryCluster&quot; type=&quot;source&quot;&gt;
+                  &lt;validity start=&quot;2015-12-15T00:00Z&quot; end=&quot;2016-03-31T00:00Z&quot;/&gt;
+                  &lt;retention limit=&quot;days(90)&quot; action=&quot;delete&quot;/&gt;
+                  &lt;import&gt;
+                      &lt;source name=&quot;mysql-db&quot; tableName=&quot;simple&quot;&gt;
+                          &lt;extract type=&quot;full&quot;&gt;
+                              &lt;mergepolicy&gt;snapshot&lt;/mergepolicy&gt;
+                          &lt;/extract&gt;
+                          &lt;fields&gt;
+                              &lt;includes&gt;
+                                  &lt;field&gt;id&lt;/field&gt;
+                                  &lt;field&gt;name&lt;/field&gt;
+                              &lt;/includes&gt;
+                          &lt;/fields&gt;
+                      &lt;/source&gt;
+                      &lt;arguments&gt;
+                          &lt;argument name=&quot;--split-by&quot; value=&quot;id&quot;/&gt;
+                          &lt;argument name=&quot;--num-mappers&quot; value=&quot;2&quot;/&gt;
+                      &lt;/arguments&gt;
+                  &lt;/import&gt;
+                  &lt;export&gt;
+                        &lt;target name=&quot;mysql-db&quot; tableName=&quot;simple_export&quot;&gt;
+                            &lt;load type=&quot;insert&quot;/&gt;
+                            &lt;fields&gt;
+                              &lt;includes&gt;
+                                &lt;field&gt;id&lt;/field&gt;
+                                &lt;field&gt;name&lt;/field&gt;
+                              &lt;/includes&gt;
+                            &lt;/fields&gt;
+                        &lt;/target&gt;
+                        &lt;arguments&gt;
+                             &lt;argument name=&quot;--update-key&quot; value=&quot;id&quot;/&gt;
+                        &lt;/arguments&gt;
+                    &lt;/export&gt;
+              &lt;/cluster&gt;
+          &lt;/clusters&gt;
+
+          &lt;locations&gt;
+              &lt;location type=&quot;data&quot; path=&quot;/user/ambari-qa/falcon/demo/primary/importfeed/${YEAR}-${MONTH}-${DAY}-${HOUR}-${MINUTE}&quot;/&gt;
+              &lt;location type=&quot;stats&quot; path=&quot;/none&quot;/&gt;
+              &lt;location type=&quot;meta&quot; path=&quot;/none&quot;/&gt;
+          &lt;/locations&gt;
+
+          &lt;ACL owner=&quot;ambari-qa&quot; group=&quot;users&quot; permission=&quot;0755&quot;/&gt;
+          &lt;schema location=&quot;/none&quot; provider=&quot;none&quot;/&gt;
+
+      &lt;/feed&gt;
+      
+</pre></div>
+<p></p>
+<ul>
+<li><b>Import policy</b></li></ul>The import policy uses the datasource entity specified in the &quot;source&quot; to connect to the database. The tableName      specified should exist in the source datasource.
+<p>The extraction type specifies whether to pull data from the external datasource in &quot;full&quot; every time or &quot;incrementally&quot;. The mergepolicy specifies how to organize (snapshot or append, i.e. time series partitions) the data on Hadoop. The valid combinations are:</p>
+<ul>
+<li>[full, snapshot] - data is extracted in full and dumped into the feed instance location.</li>
+<li>[incremental, append] - data is extracted incrementally using the key specified in the <b>deltacolumn</b> and added as a partition to the feed instance location.</li>
+<li>[incremental, snapshot] - data is extracted incrementally and merged with already existing data on Hadoop to produce one latest feed instance. <b>This feature is not currently supported.</b> The use case for this feature is to efficiently import very large dimension tables that have updates and inserts onto Hadoop and make them available as a snapshot with the latest updates to consumers.</li></ul>
+<p>The following example defines an incremental extraction with append organization:</p>
+<div class="source">
+<pre>
+           &lt;import&gt;
+                &lt;source name=&quot;mysql-db&quot; tableName=&quot;simple&quot;&gt;
+                    &lt;extract type=&quot;incremental&quot;&gt;
+                        &lt;deltacolumn&gt;modified_time&lt;/deltacolumn&gt;
+                        &lt;mergepolicy&gt;append&lt;/mergepolicy&gt;
+                    &lt;/extract&gt;
+                    &lt;fields&gt;
+                        &lt;includes&gt;
+                            &lt;field&gt;id&lt;/field&gt;
+                            &lt;field&gt;name&lt;/field&gt;
+                        &lt;/includes&gt;
+                    &lt;/fields&gt;
+                &lt;/source&gt;
+                &lt;arguments&gt;
+                    &lt;argument name=&quot;--split-by&quot; value=&quot;id&quot;/&gt;
+                    &lt;argument name=&quot;--num-mappers&quot; value=&quot;2&quot;/&gt;
+                &lt;/arguments&gt;
+            &lt;/import&gt;
+        
+</pre></div>
+<p>The fields option controls which fields get imported. By default, all fields are imported. The &quot;includes&quot; option imports only the fields specified. The &quot;excludes&quot; option imports all fields other than those specified.</p>
+<p>The arguments section allows passing any extra arguments needed for fine-grained control of the underlying implementation -- in this case, Sqoop.</p>
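+<p>Roughly, the incremental import policy above maps onto a Sqoop invocation along the following lines. This is an illustrative sketch only: the target directory and the --last-value are assumptions, and the actual command generated by Falcon and Oozie may differ.</p>
+<div class="source">
+<pre>
+sqoop import --connect jdbc:mysql://dbhost/test --username import_usr --password sqoop \
+    --table simple --columns &quot;id,name&quot; \
+    --split-by id --num-mappers 2 \
+    --check-column modified_time --incremental append --last-value &quot;2016-01-01 00:00:00&quot; \
+    --target-dir /user/ambari-qa/falcon/demo/primary/importfeed/2016-01-01-00-00
+
+</pre></div>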
+<p></p>
+<ul>
+<li><b>Export policy</b></li></ul>The export policy, like import, uses the datasource for connecting to the database. The load type specifies whether to insert or only update data in the external table. The fields option behaves the same way as in the import policy. The tableName specified should exist in the external datasource.</div>
+<div class="section">
+<h4>Operation<a name="Operation"></a></h4>
+<p>Once the Datasource and Feed entities with import and export policies are defined, users can submit and schedule the import and export operations via the CLI and REST API as below:</p>
+<div class="source">
+<pre>
+
+    ## submit the mysql-db datasource defined in the file mysql_datasource.xml
+    falcon entity -submit -type datasource -file mysql_datasource.xml
+
+    ## submit the customer_feed specified in the customer_email_feed.xml
+    falcon entity -submit -type feed -file customer_email_feed.xml
+
+    ## schedule the customer_feed
+    falcon entity -schedule -type feed -name customer_feed
+
+   
+</pre></div>
+<p>Falcon will create the corresponding Oozie bundles with coordinators and workflows for the import and export operations.</p></div>
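+<p>Once scheduled, import and export runs can be tracked like any other feed instances; a sketch using the instance API (the time window is illustrative):</p>
+<div class="source">
+<pre>
+    ## check the status of feed instances for a time window
+    falcon instance -type feed -name customer_feed -status -start 2015-12-15T00:00Z -end 2015-12-16T00:00Z
+
+    ## list the currently running instances
+    falcon instance -type feed -name customer_feed -running
+
+</pre></div>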
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    2013-2016
+                        <a href="http://www.apache.org">Apache Software Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4612c3f7/content/0.10/InstallationSteps.html
----------------------------------------------------------------------
diff --git a/content/0.10/InstallationSteps.html b/content/0.10/InstallationSteps.html
new file mode 100644
index 0000000..4d5293a
--- /dev/null
+++ b/content/0.10/InstallationSteps.html
@@ -0,0 +1,153 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2016-08-08
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20160808" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Building &amp; Installing Falcon</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( '.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                                                <img src="images/falcon-logo.png"  alt="Apache Falcon" width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Building &amp; Installing Falcon</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 2016-08-08</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.10</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h2>Building &amp; Installing Falcon<a name="Building__Installing_Falcon"></a></h2></div>
+<div class="section">
+<h3>Building Falcon<a name="Building_Falcon"></a></h3></div>
+<div class="section">
+<h4>Prerequisites<a name="Prerequisites"></a></h4>
+<p></p>
+<ul>
+<li>JDK 1.7/1.8</li>
+<li>Maven 3.2.x</li></ul></div>
+<div class="section">
+<h4>Step 1 - Clone the Falcon repository<a name="Step_1_-_Clone_the_Falcon_repository"></a></h4>
+<div class="source">
+<pre>
+$git clone https://git-wip-us.apache.org/repos/asf/falcon.git falcon
+
+</pre></div></div>
+<div class="section">
+<h4>Step 2 - Build Falcon<a name="Step_2_-_Build_Falcon"></a></h4>
+<div class="source">
+<pre>
+$ cd falcon
+$ export MAVEN_OPTS=&quot;-Xmx1024m -XX:MaxPermSize=256m -noverify&quot;
+$ mvn clean install 
+
+
+</pre></div>
+<p>It builds and installs the package into the local repository, for use as a dependency in other projects locally.</p>
+<p>[optionally -Dhadoop.version=&lt;&lt;hadoop.version&gt;&gt; can be appended to build for a specific version of hadoop]</p>
+<p><b>Note 1:</b> Falcon drops support for Hadoop-1 and only supports Hadoop-2 from Falcon 0.6 onwards. Falcon builds with JDK 1.7 using the -noverify option.</p>
+<p><b>Note 2:</b> To compile Falcon with add-on extensions, append additional profiles to the build command using the syntax -P&lt;&lt;profile1,profile2&gt;&gt;. For the Hive Mirroring extension, use profile &quot;hivedr&quot;; Hive &gt;= 1.2.0 and Oozie &gt;= 4.2.0 are required. For the HDFS Snapshot mirroring extension, use profile &quot;hdfs-snapshot-mirroring&quot;; Hadoop &gt;= 2.7.0 is required. For ADF integration, use profile &quot;adf&quot;.</p></div>
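+<p>For example, a build against a specific Hadoop version with the Hive Mirroring extension enabled might look like the following (the version number is illustrative):</p>
+<div class="source">
+<pre>
+$ mvn clean install -Dhadoop.version=2.7.1 -Phivedr
+
+$ mvn clean install -Phdfs-snapshot-mirroring,adf
+
+</pre></div>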
+<div class="section">
+<h4>Step 3 - Package and Deploy Falcon<a name="Step_3_-_Package_and_Deploy_Falcon"></a></h4>
+<p>Once the build successfully completes, artifacts can be packaged for deployment using the assembly plugin. The Assembly Plugin for Maven is primarily intended to allow users to aggregate the project output along with its dependencies, modules, site documentation, and other files into a single distributable archive. There are two basic ways in which you can deploy Falcon - Embedded mode(also known as Stand Alone Mode) and Distributed mode. Your next steps will vary based on the mode in which you want to deploy Falcon.</p>
+<p><b>NOTE</b> : Oozie is being extended by Falcon (particularly on el-extensions) and hence the need for Falcon to build &amp; re-package Oozie, so that users of Falcon can work with the right Oozie setup. Though Oozie is packaged by Falcon, it needs to be deployed separately by the administrator and is not auto deployed along with Falcon.</p></div>
+<div class="section">
+<h5>Embedded/Stand Alone Mode<a name="EmbeddedStand_Alone_Mode"></a></h5>
+<p>Embedded mode is useful when the Hadoop jobs and relevant data processing involve only one Hadoop cluster. In this mode  there is a single Falcon server that contacts the scheduler to schedule jobs on Hadoop. All the process/feed requests  like submit, schedule, suspend, kill etc. are sent to this server. For running Falcon in this mode one should use the  Falcon which has been built using standalone option. You can find the instructions for Embedded mode setup  <a href="./Embedded-mode.html">here</a>.</p></div>
+<div class="section">
+<h5>Distributed Mode<a name="Distributed_Mode"></a></h5>
+<p>Distributed mode is for multiple (colos) instances of Hadoop clusters, and multiple workflow schedulers to handle them. In this mode Falcon has 2 components: Prism and Server(s). Both Prism and Server(s) have their own config locations (startup and runtime properties). In this mode Prism acts as a contact point for Falcon servers. While all commands are available through Prism, only read and instance APIs are available through Server. You can find the instructions for Distributed Mode setup <a href="./Distributed-mode.html">here</a>.</p></div>
+<div class="section">
+<h4>Preparing Oozie and Falcon packages for deployment<a name="Preparing_Oozie_and_Falcon_packages_for_deployment"></a></h4>
+<div class="source">
+<pre>
+$cd &lt;&lt;project home&gt;&gt;
+$src/bin/package.sh &lt;&lt;hadoop-version&gt;&gt; &lt;&lt;oozie-version&gt;&gt;
+
+&gt;&gt; ex. src/bin/package.sh 1.1.2 4.0.1 or src/bin/package.sh 0.20.2-cdh3u5 4.0.1
+&gt;&gt; ex. src/bin/package.sh 2.5.0 4.0.0
+&gt;&gt; Falcon package is available in &lt;&lt;falcon home&gt;&gt;/target/apache-falcon-&lt;&lt;version&gt;&gt;-bin.tar.gz
+&gt;&gt; Oozie package is available in &lt;&lt;falcon home&gt;&gt;/target/oozie-4.0.1-distro.tar.gz
+&gt;&gt; IMPORTANT:  You need to download the je-5.0.73 version from http://download.oracle.com/otn/berkeley-db/je-5.0.73.zip and extract je-5.0.73 under the Falcon webapp directory or provision an HBase cluster for use as Falcon graphdb backend DB.
+    Depending on the Graphdb backend choice, update the startup.properties appropriately.
+
+</pre></div>
+<p><b>NOTE:</b> If you have a separate Apache Oozie installation, you will need to follow some additional steps (a command sketch follows this list):</p>
+<ol style="list-style-type: decimal">
+<li>Once you have setup the Falcon Server, copy libraries under {falcon-server-dir}/oozie/libext/ to {oozie-install-dir}/libext.</li>
+<li>Modify Oozie's configuration file. Copy all Falcon related properties from {falcon-server-dir}/oozie/conf/oozie-site.xml to {oozie-install-dir}/conf/oozie-site.xml</li>
+<li>Restart oozie:
+<ol style="list-style-type: decimal">
+<li>cd {oozie-install-dir}</li>
+<li>sudo -u oozie ./bin/oozie-stop.sh</li>
+<li>sudo -u oozie ./bin/oozie-setup.sh prepare-war</li>
+<li>sudo -u oozie ./bin/oozie-start.sh</li></ol></li></ol></div>
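+<p>A sketch of the steps above as commands, assuming the {falcon-server-dir} and {oozie-install-dir} placeholders used in the list:</p>
+<div class="source">
+<pre>
+# 1. copy Falcon's Oozie extension libraries into Oozie's libext
+cp {falcon-server-dir}/oozie/libext/*.jar {oozie-install-dir}/libext/
+
+# 2. merge the Falcon related properties from
+#    {falcon-server-dir}/oozie/conf/oozie-site.xml into {oozie-install-dir}/conf/oozie-site.xml
+
+# 3. rebuild the Oozie war and restart Oozie
+cd {oozie-install-dir}
+sudo -u oozie ./bin/oozie-stop.sh
+sudo -u oozie ./bin/oozie-setup.sh prepare-war
+sudo -u oozie ./bin/oozie-start.sh
+
+</pre></div>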
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    2013-2016
+                        <a href="http://www.apache.org">Apache Software Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4612c3f7/content/0.10/MigrationInstructions.html
----------------------------------------------------------------------
diff --git a/content/0.10/MigrationInstructions.html b/content/0.10/MigrationInstructions.html
new file mode 100644
index 0000000..66e0f38
--- /dev/null
+++ b/content/0.10/MigrationInstructions.html
@@ -0,0 +1,113 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2016-08-08
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20160808" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Migration Instructions</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( '.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                                                <img src="images/falcon-logo.png"  alt="Apache Falcon" width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Migration Instructions</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 2016-08-08</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.10</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h2>Migration Instructions<a name="Migration_Instructions"></a></h2></div>
+<div class="section">
+<h3>Migrate from 0.9 to 0.10<a name="Migrate_from_0.9_to_0.10"></a></h3>
+<p>FALCON-1333 (Instance Search feature) requires Falcon to use titan-berkeleyje version 0.5.4 to support indexing. Up until version 0.9, Falcon used titan-berkeleyje-jre6 version 0.4.2. A GraphDB created by version 0.4.2 cannot be read by version 0.5.4. The solution is to migrate the GraphDB to be compatible with the Falcon 0.10 release. Please make sure that no Falcon server is running while performing the migration.</p></div>
+<div class="section">
+<h4>1. Install Falcon 0.10<a name="a1._Install_Falcon_0.10"></a></h4>
+<p>Install Falcon 0.10 by following the <a href="./InstallationSteps.html">Installation Steps</a>. Do not start the falcon server yet. The tool to migrate graphDB is packaged with 0.10 Falcon server in falcon-common-0.10.jar.</p></div>
+<div class="section">
+<h4>2. Export GraphDB to JSON file using Falcon 0.9<a name="a2._Export_GraphDB_to_JSON_file_using_Falcon_0.9"></a></h4>
+<p>Please run the following command to generate the JSON file.</p>
+<div class="source">
+<pre>
+ $FALCON_HOME/bin/graphdbutil.sh export &lt;&lt;java_home&gt;&gt; &lt;&lt;hadoop_home&gt;&gt; &lt;&lt;falcon_0.9_home&gt;&gt; &lt;&lt;path_to_falcon-common-0.10.jar&gt;&gt; /jsonFile/dir/
+
+</pre></div>
+<p>This command will create /jsonFile/dir/instanceMetadata.json</p></div>
+<div class="section">
+<h4>3. Import GraphDB from JSON file using Falcon 0.10<a name="a3._Import_GraphDB_from_JSON_file_using_Falcon_0.10"></a></h4>
+<p>Please run the following command to import the graphDB from the JSON file. The location of the graphDB will be based on the property &quot;*.falcon.graph.storage.directory&quot; set in the startup.properties file.</p>
+<div class="source">
+<pre>
+  $FALCON_HOME/bin/graphdbutil.sh import &lt;&lt;java_home&gt;&gt; &lt;&lt;hadoop_home&gt;&gt; &lt;&lt;falcon_0.10_home&gt;&gt; &lt;&lt;path_to_falcon-common-0.10.jar&gt;&gt; /jsonFile/dir/
+
+</pre></div>
+<p>This command will import from /jsonFile/dir/instanceMetadata.json. Now start the Falcon 0.10 server.</p></div>
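+<p>Putting the steps together, the overall ordering is: stop the 0.9 server, export, import, then start the 0.10 server. A sketch follows; the falcon-stop/falcon-start invocations and the home placeholders are illustrative:</p>
+<div class="source">
+<pre>
+# make sure no Falcon server is running during the migration
+&lt;&lt;falcon_0.9_home&gt;&gt;/bin/falcon-stop
+
+# step 2: export with Falcon 0.9, step 3: import with Falcon 0.10 (commands as above)
+
+# start the new server once the import completes
+&lt;&lt;falcon_0.10_home&gt;&gt;/bin/falcon-start
+
+</pre></div>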
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    2013-2016
+                        <a href="http://www.apache.org">Apache Software Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4612c3f7/content/0.10/OnBoarding.html
----------------------------------------------------------------------
diff --git a/content/0.10/OnBoarding.html b/content/0.10/OnBoarding.html
new file mode 100644
index 0000000..7d89d5f
--- /dev/null
+++ b/content/0.10/OnBoarding.html
@@ -0,0 +1,368 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2016-08-08
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20160808" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Contents</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( '.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                                                <img src="images/falcon-logo.png"  alt="Apache Falcon" width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Contents</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 2016-08-08</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.10</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h3>Contents<a name="Contents"></a></h3>
+<p></p>
+<ul>
+<li><a href="#Onboarding Steps">Onboarding Steps</a></li>
+<li><a href="#Sample Pipeline">Sample Pipeline</a></li>
+<li><a href="./HiveIntegration.html">Hive Examples</a></li></ul></div>
+<div class="section">
+<h4>Onboarding Steps<a name="Onboarding_Steps"></a></h4>
+<p></p>
+<ul>
+<li>Create cluster definition for the cluster, specifying name node, job tracker, workflow engine endpoint, messaging endpoint. Refer to <a href="./EntitySpecification.html">cluster definition</a> for details.</li>
+<li>Create Feed definitions for each of the input and output specifying frequency, data path, ownership. Refer to <a href="./EntitySpecification.html">feed definition</a> for details.</li>
+<li>Create Process definition for your job. Process defines configuration for the workflow job. Important attributes are frequency, inputs/outputs and workflow path. Refer to <a href="./EntitySpecification.html">process definition</a> for process details.</li>
+<li>Define a workflow for your job using the workflow engine (only Oozie is supported as of now). Refer to the <a class="externalLink" href="http://oozie.apache.org/docs/3.1.3-incubating/WorkflowFunctionalSpec.html">Oozie Workflow Specification</a>. The libraries required for the workflow should be available in the lib folder in the workflow path.</li>
+<li>Set-up workflow definition, libraries and referenced scripts on hadoop.</li>
+<li>Submit cluster definition</li>
+<li>Submit and schedule feed and process definitions (example CLI commands are shown below)</li></ul></div>
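+<p>A sketch of the submit and schedule steps using the Falcon CLI; the entity file names are assumptions, and the entity names match the sample definitions below:</p>
+<div class="source">
+<pre>
+falcon entity -submit -type cluster -file corp-cluster.xml
+falcon entity -submit -type feed -file sample-input-feed.xml
+falcon entity -submit -type feed -file sample-output-feed.xml
+falcon entity -submit -type process -file sample-process.xml
+
+falcon entity -schedule -type feed -name SampleInput
+falcon entity -schedule -type feed -name SampleOutput
+falcon entity -schedule -type process -name SampleProcess
+
+</pre></div>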
+<div class="section">
+<h4>Sample Pipeline<a name="Sample_Pipeline"></a></h4></div>
+<div class="section">
+<h5>Cluster   <a name="Cluster"></a></h5>
+<p>Cluster definition that contains end points for the name node, job tracker, Oozie and JMS server. The cluster locations MUST be created prior to submitting a cluster entity to Falcon: <b>staging</b> must have 777 permissions and its parent dirs must have execute permissions, and <b>working</b> must have 755 permissions and its parent dirs must have execute permissions.</p>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot;?&gt;
+&lt;!--
+    Cluster configuration
+  --&gt;
+&lt;cluster colo=&quot;ua2&quot; description=&quot;&quot; name=&quot;corp&quot; xmlns=&quot;uri:falcon:cluster:0.1&quot;
+    xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt;    
+    &lt;interfaces&gt;
+        &lt;interface type=&quot;readonly&quot; endpoint=&quot;hftp://name-node.com:50070&quot; version=&quot;2.5.0&quot; /&gt;
+
+        &lt;interface type=&quot;write&quot; endpoint=&quot;hdfs://name-node.com:54310&quot; version=&quot;2.5.0&quot; /&gt;
+
+        &lt;interface type=&quot;execute&quot; endpoint=&quot;job-tracker:54311&quot; version=&quot;2.5.0&quot; /&gt;
+
+        &lt;interface type=&quot;workflow&quot; endpoint=&quot;http://oozie.com:11000/oozie/&quot; version=&quot;4.0.1&quot; /&gt;
+
+        &lt;interface type=&quot;messaging&quot; endpoint=&quot;tcp://jms-server.com:61616?daemon=true&quot; version=&quot;5.1.6&quot; /&gt;
+    &lt;/interfaces&gt;
+
+    &lt;locations&gt;
+        &lt;location name=&quot;staging&quot; path=&quot;/projects/falcon/staging&quot; /&gt;
+        &lt;location name=&quot;temp&quot; path=&quot;/tmp&quot; /&gt;
+        &lt;location name=&quot;working&quot; path=&quot;/projects/falcon/working&quot; /&gt;
+    &lt;/locations&gt;
+&lt;/cluster&gt;
+
+</pre></div></div>
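+<p>A sketch of creating the staging and working locations with the permissions described above; the paths are taken from the sample cluster, while running these as the HDFS superuser and owning them as a falcon service user are assumptions:</p>
+<div class="source">
+<pre>
+hadoop fs -mkdir -p /projects/falcon/staging /projects/falcon/working
+hadoop fs -chmod 777 /projects/falcon/staging
+hadoop fs -chmod 755 /projects/falcon/working
+hadoop fs -chown -R falcon /projects/falcon
+
+</pre></div>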
+<div class="section">
+<h5>Input Feed<a name="Input_Feed"></a></h5>
+<p>Hourly feed that defines feed path, frequency, ownership and validity:</p>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
+&lt;!--
+    Hourly sample input data
+  --&gt;
+
+&lt;feed description=&quot;sample input data&quot; name=&quot;SampleInput&quot; xmlns=&quot;uri:falcon:feed:0.1&quot;
+    xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt;
+    &lt;groups&gt;group&lt;/groups&gt;
+
+    &lt;frequency&gt;hours(1)&lt;/frequency&gt;
+
+    &lt;late-arrival cut-off=&quot;hours(6)&quot; /&gt;
+
+    &lt;clusters&gt;
+        &lt;cluster name=&quot;corp&quot; type=&quot;source&quot;&gt;
+            &lt;validity start=&quot;2009-01-01T00:00Z&quot; end=&quot;2099-12-31T00:00Z&quot; timezone=&quot;UTC&quot; /&gt;
+            &lt;retention limit=&quot;months(24)&quot; action=&quot;delete&quot; /&gt;
+        &lt;/cluster&gt;
+    &lt;/clusters&gt;
+
+    &lt;locations&gt;
+        &lt;location type=&quot;data&quot; path=&quot;/projects/bootcamp/data/${YEAR}-${MONTH}-${DAY}-${HOUR}/SampleInput&quot; /&gt;
+        &lt;location type=&quot;stats&quot; path=&quot;/projects/bootcamp/stats/SampleInput&quot; /&gt;
+        &lt;location type=&quot;meta&quot; path=&quot;/projects/bootcamp/meta/SampleInput&quot; /&gt;
+    &lt;/locations&gt;
+
+    &lt;ACL owner=&quot;suser&quot; group=&quot;users&quot; permission=&quot;0755&quot; /&gt;
+
+    &lt;schema location=&quot;/none&quot; provider=&quot;none&quot; /&gt;
+&lt;/feed&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>Output Feed<a name="Output_Feed"></a></h5>
+<p>Daily feed that defines feed path, frequency, ownership and validity:</p>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
+&lt;!--
+    Daily sample output data
+  --&gt;
+
+&lt;feed description=&quot;sample output data&quot; name=&quot;SampleOutput&quot; xmlns=&quot;uri:falcon:feed:0.1&quot;
+xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt;
+    &lt;groups&gt;group&lt;/groups&gt;
+
+    &lt;frequency&gt;days(1)&lt;/frequency&gt;
+
+    &lt;late-arrival cut-off=&quot;hours(6)&quot; /&gt;
+
+    &lt;clusters&gt;
+        &lt;cluster name=&quot;corp&quot; type=&quot;source&quot;&gt;
+            &lt;validity start=&quot;2009-01-01T00:00Z&quot; end=&quot;2099-12-31T00:00Z&quot; timezone=&quot;UTC&quot; /&gt;
+            &lt;retention limit=&quot;months(24)&quot; action=&quot;delete&quot; /&gt;
+        &lt;/cluster&gt;
+    &lt;/clusters&gt;
+
+    &lt;locations&gt;
+        &lt;location type=&quot;data&quot; path=&quot;/projects/bootcamp/output/${YEAR}-${MONTH}-${DAY}/SampleOutput&quot; /&gt;
+        &lt;location type=&quot;stats&quot; path=&quot;/projects/bootcamp/stats/SampleOutput&quot; /&gt;
+        &lt;location type=&quot;meta&quot; path=&quot;/projects/bootcamp/meta/SampleOutput&quot; /&gt;
+    &lt;/locations&gt;
+
+    &lt;ACL owner=&quot;suser&quot; group=&quot;users&quot; permission=&quot;0755&quot; /&gt;
+
+    &lt;schema location=&quot;/none&quot; provider=&quot;none&quot; /&gt;
+&lt;/feed&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>Process<a name="Process"></a></h5>
+<p>Sample process which runs daily at the 6th hour on the corp cluster. It takes one input - SampleInput for the previous day (24 instances). It generates one output - SampleOutput for the previous day. The workflow is defined at /projects/bootcamp/workflow/workflow.xml. Any libraries available for the workflow should be at /projects/bootcamp/workflow/lib. The process also defines properties queueName, ssh.host, and fileTimestamp which are passed to the workflow. In addition, Falcon exposes the following properties to the workflow: nameNode, jobTracker (hadoop properties), input and output (Input/Output properties).</p>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
+&lt;!--
+    Daily sample process. Runs at 6th hour every day. Input - last day's hourly data. Generates output for yesterday
+ --&gt;
+&lt;process name=&quot;SampleProcess&quot;&gt;
+    &lt;cluster name=&quot;corp&quot; /&gt;
+
+    &lt;frequency&gt;days(1)&lt;/frequency&gt;
+
+    &lt;validity start=&quot;2012-04-03T06:00Z&quot; end=&quot;2022-12-30T00:00Z&quot; timezone=&quot;UTC&quot; /&gt;
+
+    &lt;inputs&gt;
+        &lt;input name=&quot;input&quot; feed=&quot;SampleInput&quot; start=&quot;yesterday(0,0)&quot; end=&quot;today(-1,0)&quot; /&gt;
+    &lt;/inputs&gt;
+
+    &lt;outputs&gt;
+            &lt;output name=&quot;output&quot; feed=&quot;SampleOutput&quot; instance=&quot;yesterday(0,0)&quot; /&gt;
+    &lt;/outputs&gt;
+
+    &lt;properties&gt;
+        &lt;property name=&quot;queueName&quot; value=&quot;reports&quot; /&gt;
+        &lt;property name=&quot;ssh.host&quot; value=&quot;host.com&quot; /&gt;
+        &lt;property name=&quot;fileTimestamp&quot; value=&quot;${coord:formatTime(coord:nominalTime(), 'yyyy-MM-dd')}&quot; /&gt;
+    &lt;/properties&gt;
+
+    &lt;workflow engine=&quot;oozie&quot; path=&quot;/projects/bootcamp/workflow&quot; /&gt;
+
+    &lt;retry policy=&quot;periodic&quot; delay=&quot;minutes(5)&quot; attempts=&quot;3&quot; /&gt;
+    
+    &lt;late-process policy=&quot;exp-backoff&quot; delay=&quot;hours(1)&quot;&gt;
+        &lt;late-input input=&quot;input&quot; workflow-path=&quot;/projects/bootcamp/workflow/lateinput&quot; /&gt;
+    &lt;/late-process&gt;
+&lt;/process&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>Oozie Workflow<a name="Oozie_Workflow"></a></h5>
+<p>The sample user workflow contains 3 actions:</p>
+<ul>
+<li>Pig action - Executes pig script /projects/bootcamp/workflow/script.pig</li>
+<li>concatenator - Java action that concatenates part files and generates a single file</li>
+<li>file upload - ssh action that gets the concatenated file from hadoop and sends the file to a remote host</li></ul>
+<div class="source">
+<pre>
+&lt;workflow-app xmlns=&quot;uri:oozie:workflow:0.2&quot; name=&quot;sample-wf&quot;&gt;
+        &lt;start to=&quot;pig&quot; /&gt;
+
+        &lt;action name=&quot;pig&quot;&gt;
+                &lt;pig&gt;
+                        &lt;job-tracker&gt;${jobTracker}&lt;/job-tracker&gt;
+                        &lt;name-node&gt;${nameNode}&lt;/name-node&gt;
+                        &lt;prepare&gt;
+                                &lt;delete path=&quot;${output}&quot;/&gt;
+                        &lt;/prepare&gt;
+                        &lt;configuration&gt;
+                                &lt;property&gt;
+                                        &lt;name&gt;mapred.job.queue.name&lt;/name&gt;
+                                        &lt;value&gt;${queueName}&lt;/value&gt;
+                                &lt;/property&gt;
+                                &lt;property&gt;
+                                        &lt;name&gt;mapreduce.fileoutputcommitter.marksuccessfuljobs&lt;/name&gt;
+                                        &lt;value&gt;true&lt;/value&gt;
+                                &lt;/property&gt;
+                        &lt;/configuration&gt;
+                        &lt;script&gt;${nameNode}/projects/bootcamp/workflow/script.pig&lt;/script&gt;
+                        &lt;param&gt;input=${input}&lt;/param&gt;
+                        &lt;param&gt;output=${output}&lt;/param&gt;
+                        &lt;file&gt;lib/dependent.jar&lt;/file&gt;
+                &lt;/pig&gt;
+                &lt;ok to=&quot;concatenator&quot; /&gt;
+                &lt;error to=&quot;fail&quot; /&gt;
+        &lt;/action&gt;
+
+        &lt;action name=&quot;concatenator&quot;&gt;
+                &lt;java&gt;
+                        &lt;job-tracker&gt;${jobTracker}&lt;/job-tracker&gt;
+                        &lt;name-node&gt;${nameNode}&lt;/name-node&gt;
+                        &lt;prepare&gt;
+                                &lt;delete path=&quot;${nameNode}/projects/bootcamp/concat/data-${fileTimestamp}.csv&quot;/&gt;
+                        &lt;/prepare&gt;
+                        &lt;configuration&gt;
+                                &lt;property&gt;
+                                        &lt;name&gt;mapred.job.queue.name&lt;/name&gt;
+                                        &lt;value&gt;${queueName}&lt;/value&gt;
+                                &lt;/property&gt;
+                        &lt;/configuration&gt;
+                        &lt;main-class&gt;com.wf.Concatenator&lt;/main-class&gt;
+                        &lt;arg&gt;${output}&lt;/arg&gt;
+                        &lt;arg&gt;${nameNode}/projects/bootcamp/concat/data-${fileTimestamp}.csv&lt;/arg&gt;
+                &lt;/java&gt;
+                &lt;ok to=&quot;fileupload&quot; /&gt;
+                &lt;error to=&quot;fail&quot;/&gt;
+        &lt;/action&gt;
+                        
+        &lt;action name=&quot;fileupload&quot;&gt;
+                &lt;ssh&gt;
+                        &lt;host&gt;localhost&lt;/host&gt;
+                        &lt;command&gt;/tmp/fileupload.sh&lt;/command&gt;
+                        &lt;args&gt;${nameNode}/projects/bootcamp/concat/data-${fileTimestamp}.csv&lt;/args&gt;
+                        &lt;args&gt;${wf:conf(&quot;ssh.host&quot;)}&lt;/args&gt;
+                        &lt;capture-output/&gt;
+                &lt;/ssh&gt;
+                &lt;ok to=&quot;fileUploadDecision&quot; /&gt;
+                &lt;error to=&quot;fail&quot;/&gt;
+        &lt;/action&gt;
+
+        &lt;decision name=&quot;fileUploadDecision&quot;&gt;
+                &lt;switch&gt;
+                        &lt;case to=&quot;end&quot;&gt;
+                                ${wf:actionData('fileupload')['output'] == '0'}
+                        &lt;/case&gt;
+                        &lt;default to=&quot;fail&quot;/&gt;
+                &lt;/switch&gt;
+        &lt;/decision&gt;
+
+        &lt;kill name=&quot;fail&quot;&gt;
+                &lt;message&gt;Workflow failed, error message[${wf:errorMessage(wf:lastErrorNode())}]&lt;/message&gt;
+        &lt;/kill&gt;
+
+        &lt;end name=&quot;end&quot; /&gt;
+&lt;/workflow-app&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>File Upload Script<a name="File_Upload_Script"></a></h5>
+<p>The script gets the file from Hadoop, rsyncs it to /tmp on the remote host and deletes the file from Hadoop.</p>
+<div class="source">
+<pre>
+#!/bin/bash
+
+trap 'echo &quot;output=$?&quot;; exit $?' ERR INT TERM
+
+echo &quot;Arguments: $@&quot;
+# the workflow's ssh action passes two arguments: the HDFS source file and the destination host
+SRCFILE=$1
+DESTHOST=$2
+
+FILENAME=`basename $SRCFILE`
+rm -f /tmp/$FILENAME
+hadoop fs -copyToLocal $SRCFILE /tmp/
+echo &quot;Copied $SRCFILE to /tmp&quot;
+
+rsync -ztv --rsh=ssh --stats /tmp/$FILENAME $DESTHOST:/tmp
+echo &quot;rsynced $FILENAME to $DESTHOST:/tmp&quot;
+
+hadoop fs -rmr $SRCFILE
+echo &quot;Deleted $SRCFILE&quot;
+
+rm -f /tmp/$FILENAME
+echo &quot;output=0&quot;
+
+</pre></div></div>
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    2013-2016
+                        <a href="http://www.apache.org">Apache Software Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4612c3f7/content/0.10/Operability.html
----------------------------------------------------------------------
diff --git a/content/0.10/Operability.html b/content/0.10/Operability.html
new file mode 100644
index 0000000..a3c2949
--- /dev/null
+++ b/content/0.10/Operability.html
@@ -0,0 +1,274 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2016-08-08
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20160808" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Operationalizing Falcon</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( '.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                                                <img src="images/falcon-logo.png"  alt="Apache Falcon" width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Operationalizing Falcon</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 2016-08-08</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.10</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h2>Operationalizing Falcon<a name="Operationalizing_Falcon"></a></h2></div>
+<div class="section">
+<h3>Overview<a name="Overview"></a></h3>
+<p>Apache Falcon provides various tools to operationalize Falcon consisting of Alerts for unrecoverable errors, Audits of user actions, Metrics, and Notifications. They are detailed below.</p>
+<p><b>Lineage</b></p>
+<p>Currently, Lineage has no way to access or restore information about entity instances created during the time lineage was disabled. Information about entities, however, is preserved and bootstrapped when lineage is enabled. If you have to reset the graph db, you can delete the graph db files as specified in the startup.properties and restart Falcon. Please note: you will lose all the information about the instances if you delete the graph db.</p></div>
+<div class="section">
+<h3>Monitoring<a name="Monitoring"></a></h3>
+<p>Falcon provides monitoring of various events by capturing metrics of those events. The metric numbers can then be used to monitor performance and health of the Falcon system and the entire processing pipelines.</p>
+<p>Falcon also exposes <a class="externalLink" href="https://github.com/thinkaurelius/titan/wiki/Titan-Performance-and-Monitoring">metrics for titandb</a></p>
+<p>Users can view the logs of these events in the metric.log file; by default this file is created under the ${user.dir}/logs/ directory. Users may also extend the Falcon monitoring framework to send events to systems like Mondemand/lwes by implementing the org.apache.falcon.plugin.MonitoringPlugin interface.</p>
+<p>The following events are captured by Falcon for logging the metrics:</p>
+<ol style="list-style-type: decimal">
+<li>New cluster definitions posted to Falcon (success &amp; failures)</li>
+<li>New feed definition posted to Falcon (success &amp; failures)</li>
+<li>New process definition posted to Falcon (success &amp; failures)</li>
+<li>Process update events (success &amp; failures)</li>
+<li>Feed update events (success &amp; failures)</li>
+<li>Cluster update events (success &amp; failures)</li>
+<li>Process suspend events (success &amp; failures)</li>
+<li>Feed suspend events (success &amp; failures)</li>
+<li>Process resume events (success &amp; failures)</li>
+<li>Feed resume events (success &amp; failures)</li>
+<li>Process remove events (success &amp; failures)</li>
+<li>Feed remove events (success &amp; failures)</li>
+<li>Cluster remove events (success &amp; failures)</li>
+<li>Process instance kill events (success &amp; failures)</li>
+<li>Process instance re-run events (success &amp; failures)</li>
+<li>Process instance generation events</li>
+<li>Process instance failure events</li>
+<li>Process instance auto-retry events</li>
+<li>Process instance retry exhaust events</li>
+<li>Feed instance deletion event</li>
+<li>Feed instance deletion failure event (no retries)</li>
+<li>Feed instance replication event</li>
+<li>Feed instance replication failure event</li>
+<li>Feed instance replication auto-retry event</li>
+<li>Feed instance replication retry exhaust event</li>
+<li>Feed instance late arrival event</li>
+<li>Feed instance post cut-off arrival event</li>
+<li>Process re-run due to late feed event</li>
+<li>Transaction rollback failed event</li></ol>
+<p>The metric logged for an event has the following properties:</p>
+<ol style="list-style-type: decimal">
+<li>Action - Name of the event.</li>
+<li>Dimensions - A list of name/value pairs of various attributes for a given action.</li>
+<li>Status- Status of an action FAILED/SUCCEEDED.</li>
+<li>Time-taken - Time taken in nanoseconds for a given action.</li></ol>
+<p>An example for an event logged for a submit of a new process definition:</p>
+<p>2012-05-04 12:23:34,026 {Action:submit, Dimensions:{entityType=process}, Status: SUCCEEDED, Time-taken:97087000 ns}</p>
+<p>Users may parse the metric.log or capture these events from custom monitoring frameworks and can plot various graphs or send alerts according to their requirements.</p></div>
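+<p>As one illustration of parsing metric.log with standard shell tools (the log location and the line format are as described above; the commands are an assumption, not part of Falcon):</p>
+<div class="source">
+<pre>
+# count FAILED events per action name
+grep &quot;Status: FAILED&quot; logs/metric.log | sed 's/.*Action:\([^,]*\),.*/\1/' | sort | uniq -c
+
+</pre></div>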
+<div class="section">
+<h3>Notifications<a name="Notifications"></a></h3>
+<p>Falcon has two types of notifications - System and User notifications.</p></div>
+<div class="section">
+<h4>System notifications<a name="System_notifications"></a></h4>
+<p>The System notifications are internally generated and used by Falcon to monitor the Falcon-orchestrated workflow jobs. By default, Falcon starts an embedded ActiveMQ JMS server on the Falcon machine on port 61616 as a daemon. Alternatively, users can configure Falcon to use an existing JMS server instead of starting an embedded instance with the following two steps:</p>
+<p></p>
+<ul>
+<li>Set the broker.url property in startup.properties as shown below</li></ul>
+<div class="source">
+<pre>
+   *.broker.url=tcp://jms-server-host:61616
+
+</pre></div>
+<p></p>
+<ul>
+<li>Set the system property falcon.embeddedmq to false as shown below</li></ul>
+<div class="source">
+<pre>
+   &lt;FALCON-INSTALL-DIR&gt;/bin/falcon-start -Dfalcon.embeddedmq=false
+
+</pre></div>
+<p>Falcon uses FALCON.ENTITY.TOPIC to publish system notifications. This topic and the Map Message fields are internal and could change between releases.</p></div>
+<div class="section">
+<h4>User notifications<a name="User_notifications"></a></h4>
+<p>Falcon, in addition to the FALCON.ENTITY.TOPIC, also creates a JMS topic for every process/feed that is scheduled in Falcon, as part of User notifications. To enable User notifications, the broker URL and the implementation class of the JMS engine need to be specified in the cluster definition associated with the feed/process. Users may register consumers on the required topic to check the availability or status of feed instances. The User notification JMS broker instance can be the same as the System notification broker or a different one.</p>
+<p>The name of the JMS topic is the same as the process/feed name. Falcon sends a map message to the JMS topic for every feed instance that is created/deleted/replicated/imported/exported. The JMS Map Message sent to a topic has the following fields:</p>
+<p></p>
+<ol style="list-style-type: decimal">
+<li>cluster - name of the current cluster the feed/process is dependent on.</li>
+<li>entityType - type of the entity (feed or process).</li>
+<li>entityName - name of the entity.</li>
+<li>nominalTime - instance time (or data date).</li>
+<li>operation - operation like generate, delete, replicate, import, export.</li>
+<li>feedNames - name of the feeds which are generated/replicated/deleted/imported/exported.</li>
+<li>feedInstancePaths - comma separated feed instance paths.</li>
+<li>workflowId - current workflow-id of the instance.</li>
+<li>workflowUser - user who owns the feed instance (i.e., partition).</li>
+<li>runId - current run-id of the instance.</li>
+<li>status - status of the user workflow instance.</li>
+<li>timeStamp - current timestamp.</li>
+<li>logDir - log dir where lineage can be recorded.</li></ol>
+<p>The JMS messages are automatically purged after a certain period (default: 3 days) by the Falcon JMS housekeeping service. The TTL (time-to-live) for JMS messages can be configured in Falcon's startup.properties file (see the sketch below).</p>
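+<p>A minimal illustration of such a configuration, assuming the broker.ttlInMins property name used by the default startup.properties, sets the TTL to 3 days (4320 minutes):</p>
+<div class="source">
+<pre>
+   # TTL for JMS notification messages: 3 days = 3 * 24 * 60 minutes (property name assumed from the default startup.properties)
+   *.broker.ttlInMins=4320
+
+</pre></div>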
+<p>The following example shows how to enable and read user notification by connecting to the JMS broker.</p>
+<p>First, specify the JMS broker url in the cluster definition XML as shown below.</p>
+<div class="source">
+<pre>
+
+&lt;?xml version=&quot;1.0&quot;?&gt;
+&lt;!-- filename : primaryCluster.xml --&gt;
+&lt;cluster colo=&quot;USWestOregon&quot; description=&quot;oregonHadoopCluster&quot; name=&quot;primaryCluster&quot; xmlns=&quot;uri:falcon:cluster:0.1&quot;&gt;
+    &lt;interfaces&gt;
+        ...
+        ...
+        &lt;interface type=&quot;messaging&quot; endpoint=&quot;tcp://user-jms-broker-host:61616?daemon=true&quot; version=&quot;5.1.6&quot; /&gt;
+        ...
+    &lt;/interfaces&gt;
+&lt;/cluster&gt;
+
+
+</pre></div>
+<p>Next, use a JMS consumer (example below in Java) to read messages from the topic named FALCON.&lt;feed-or-process-name&gt;.</p>
+<div class="source">
+<pre>
+import org.apache.activemq.ActiveMQConnectionFactory;
+import org.apache.activemq.command.ActiveMQMapMessage;
+import javax.jms.ConnectionFactory;
+import javax.jms.Connection;
+import javax.jms.MessageConsumer;
+import javax.jms.Topic;
+import javax.jms.Session;
+import javax.jms.TopicSession;
+
+public class FalconUserJMSClient {
+    public static void main(String[] args)throws Exception {
+        // Note: specify the JMS broker URL
+        String brokerUrl = &quot;tcp://localhost:61616&quot;;
+
+        ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(brokerUrl);
+        Connection connection = connectionFactory.createConnection();
+        connection.setClientID(&quot;Falcon User JMS Consumer&quot;);
+        TopicSession session = (TopicSession) connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
+        try {
+
+            // Note: the topic name for the feed will be FALCON.&lt;feed-name&gt;
+            Topic falconTopic = session.createTopic(&quot;FALCON.feed-sample&quot;);
+            MessageConsumer consumer = session.createConsumer(falconTopic);
+            connection.start();
+            while (true) {
+                ActiveMQMapMessage msg = (ActiveMQMapMessage) consumer.receive();
+                System.out.println(&quot;cluster             : &quot; + msg.getString(&quot;cluster&quot;));
+                System.out.println(&quot;entityType          : &quot; + msg.getString(&quot;entityType&quot;));
+                System.out.println(&quot;entityName          : &quot; + msg.getString(&quot;entityName&quot;));
+                System.out.println(&quot;nominalTime         : &quot; + msg.getString(&quot;nominalTime&quot;));
+                System.out.println(&quot;operation           : &quot; + msg.getString(&quot;operation&quot;));
+
+                System.out.println(&quot;feedNames           : &quot; + msg.getString(&quot;feedNames&quot;));
+                System.out.println(&quot;feedInstancePaths   : &quot; + msg.getString(&quot;feedInstancePaths&quot;));
+
+                System.out.println(&quot;workflowId          : &quot; + msg.getString(&quot;workflowId&quot;));
+                System.out.println(&quot;workflowUser        : &quot; + msg.getString(&quot;workflowUser&quot;));
+                System.out.println(&quot;runId               : &quot; + msg.getString(&quot;runId&quot;));
+                System.out.println(&quot;status              : &quot; + msg.getString(&quot;status&quot;));
+                System.out.println(&quot;timeStamp           : &quot; + msg.getString(&quot;timeStamp&quot;));
+                System.out.println(&quot;logDir              : &quot; + msg.getString(&quot;logDir&quot;));
+
+                System.out.println(&quot;brokerUrl           : &quot; + msg.getString(&quot;brokerUrl&quot;));
+                System.out.println(&quot;brokerImplClass     : &quot; + msg.getString(&quot;brokerImplClass&quot;));
+                System.out.println(&quot;logFile             : &quot; + msg.getString(&quot;logFile&quot;));
+                System.out.println(&quot;topicName           : &quot; + msg.getString(&quot;topicName&quot;));
+                System.out.println(&quot;brokerTTL           : &quot; + msg.getString(&quot;brokerTTL&quot;));
+            }
+        } finally {
+            if (session != null) {
+                session.close();
+            }
+            if (connection != null) {
+                connection.close();
+            }
+        }
+    }
+}
+
+</pre></div></div>
+<div class="section">
+<h3>Alerts<a name="Alerts"></a></h3>
+<p>Falcon generates alerts for unrecoverable errors into a log file by default. Users can view these alerts in the alerts.log file; by default this file is created under the ${user.dir}/logs/ directory.</p>
+<p>Users may also extend the Falcon Alerting plugin to send events to systems like Nagios, etc. by implementing the org.apache.falcon.plugin.AlertingPlugin interface.</p></div>
+<div class="section">
+<h3>Audits<a name="Audits"></a></h3>
+<p>Falcon audits all user activity and captures it into a log file by default. Users can view these audits in the audit.log file; by default this file is created under the ${user.dir}/logs/ directory.</p>
+<p>Users may also extend the Falcon Audit plugin to send audits to systems like Apache Argus, etc. by implementing the org.apache.falcon.plugin.AuditingPlugin interface.</p></div>
+<div class="section">
+<h3>Metrics Collection In Graphite<a name="Metrics_Collection_In_Graphite"></a></h3>
+<p>Falcon supports sending metrics to Graphite. More details can be found in <a href="./GraphiteMetricCollection.html">Graphite Metric Collection</a>.</p></div>
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    2013-2016
+                        <a href="http://www.apache.org">Apache Software Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4612c3f7/content/0.10/PrismSetup.png
----------------------------------------------------------------------
diff --git a/content/0.10/PrismSetup.png b/content/0.10/PrismSetup.png
new file mode 100644
index 0000000..b0dc9a5
Binary files /dev/null and b/content/0.10/PrismSetup.png differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4612c3f7/content/0.10/ProcessSchedule.png
----------------------------------------------------------------------
diff --git a/content/0.10/ProcessSchedule.png b/content/0.10/ProcessSchedule.png
new file mode 100644
index 0000000..a7dd788
Binary files /dev/null and b/content/0.10/ProcessSchedule.png differ