Posted to commits@falcon.apache.org by ba...@apache.org on 2016/08/08 23:16:20 UTC

[46/49] falcon git commit: FALCON-2006 Update documentation on site for 0.10 release

http://git-wip-us.apache.org/repos/asf/falcon/blob/4612c3f7/content/0.10/FalconNativeScheduler.html
----------------------------------------------------------------------
diff --git a/content/0.10/FalconNativeScheduler.html b/content/0.10/FalconNativeScheduler.html
new file mode 100644
index 0000000..5402744
--- /dev/null
+++ b/content/0.10/FalconNativeScheduler.html
@@ -0,0 +1,330 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2016-08-08
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20160808" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Falcon Native Scheduler</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( '.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                                                <img src="images/falcon-logo.png"  alt="Apache Falcon" width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Falcon Native Scheduler</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 2016-08-08</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.10</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h2>Falcon Native Scheduler<a name="Falcon_Native_Scheduler"></a></h2></div>
+<div class="section">
+<h3>Overview<a name="Overview"></a></h3>
+<p>Falcon has been using Oozie as its scheduling engine.  While Oozie works reasonably well, there are scenarios where Oozie scheduling is proving to be a limiting factor. In its current form, Falcon relies on Oozie for both scheduling and workflow execution, which limits scheduling to time-based/cron-based scheduling with additional gating conditions on data availability. It also restricts datasets to being periodic in nature. In order to offer better scheduling capabilities, Falcon comes with its own native scheduler.</p></div>
+<div class="section">
+<h3>Capabilities<a name="Capabilities"></a></h3>
+<p>The native scheduler will offer the capabilities offered by the Oozie coordinator and more. The native scheduler will be built and released over the next few releases of Falcon, giving users an opportunity to use it and provide feedback.</p>
+<p>Currently, the native scheduler offers the following capabilities:</p>
+<ol style="list-style-type: decimal">
+<li>Submit and schedule a Falcon process that runs periodically (without data dependency) - It could be a Pig script, an Oozie workflow, or a Hive script (all the engine types currently supported).</li>
+<li>Monitor/Query/Modify the scheduled process - All applicable entity APIs and instance APIs should work as they do now.</li></ol>
+<p><b>NOTE: Execution order is FIFO. LIFO and LAST_ONLY are not supported yet.</b></p>
+<p>In the near future, Falcon scheduler will provide feature parity with Oozie scheduler and in subsequent releases will provide the following features:</p>
+<ul>
+<li>Periodic, cron-based, calendar-based scheduling.</li>
+<li>Data availability based scheduling.</li>
+<li>External trigger/notification based scheduling.</li>
+<li>Support for periodic/a-periodic datasets.</li>
+<li>Support for optional/mandatory datasets. Option to specify minimum/maximum/exactly-N instances of data to consume.</li>
+<li>Handle dependencies across entities during re-run.</li></ul></div>
+<div class="section">
+<h3>Configuring Native Scheduler<a name="Configuring_Native_Scheduler"></a></h3>
+<p>You can enable the native scheduler by making changes to <b><i>$FALCON_HOME/conf/startup.properties</i></b> as follows. You will need to restart the Falcon Server for the changes to take effect.</p>
+<div class="source">
+<pre>
+*.dag.engine.impl=org.apache.falcon.workflow.engine.OozieDAGEngine
+*.application.services=org.apache.falcon.security.AuthenticationInitializationService,\
+                        org.apache.falcon.workflow.WorkflowJobEndNotificationService, \
+                        org.apache.falcon.service.ProcessSubscriberService,\
+                        org.apache.falcon.service.FeedSLAMonitoringService,\
+                        org.apache.falcon.service.LifecyclePolicyMap,\
+                        org.apache.falcon.service.FalconJPAService,\
+                        org.apache.falcon.entity.store.ConfigurationStore,\
+                        org.apache.falcon.rerun.service.RetryService,\
+                        org.apache.falcon.rerun.service.LateRunService,\
+                        org.apache.falcon.metadata.MetadataMappingService,\
+                        org.apache.falcon.service.LogCleanupService,\
+                        org.apache.falcon.service.GroupsService,\
+                        org.apache.falcon.service.ProxyUserService,\
+                        org.apache.falcon.notification.service.impl.JobCompletionService,\
+                        org.apache.falcon.notification.service.impl.SchedulerService,\
+                        org.apache.falcon.notification.service.impl.AlarmService,\
+                        org.apache.falcon.notification.service.impl.DataAvailabilityService,\
+                        org.apache.falcon.execution.FalconExecutionService
+
+</pre></div></div>
+<div class="section">
+<h4>Making the Native Scheduler the default scheduler<a name="Making_the_Native_Scheduler_the_default_scheduler"></a></h4>
+<p>To ensure backward compatibility, even when the native scheduler is enabled, the default scheduler is still Oozie. This means users will be scheduling entities on the Oozie scheduler by default, and will need to explicitly specify the scheduler as native if they wish to schedule entities using the native scheduler.</p>
+<p><a href="#Scheduling_new_entities_on_Native_Scheduler">This section</a> has more details on how to schedule on either of the schedulers.</p>
+<p>If you wish to make the Falcon Native Scheduler your default scheduler and remove Oozie as the scheduler, set the following property in <b><i>$FALCON_HOME/conf/startup.properties</i></b></p>
+<div class="source">
+<pre>
+## If you wish to use Falcon native scheduler as your default scheduler, set the workflow engine to FalconWorkflowEngine instead of OozieWorkflowEngine. ##
+*.workflow.engine.impl=org.apache.falcon.workflow.engine.FalconWorkflowEngine
+
+</pre></div></div>
+<div class="section">
+<h4>Configuring the state store for Native Scheduler<a name="Configuring_the_state_store_for_Native_Scheduler"></a></h4>
+<p>You can configure the state store by making changes to <b><i>$FALCON_HOME/conf/statestore.properties</i></b> as follows. You will need to restart the Falcon Server for the changes to take effect.</p>
+<p>The Falcon Server needs to maintain the state of the entities and instances in a persistent store for the system to be recoverable. Since Prism only federates, it does not need to maintain any state information. The following properties need to be set in statestore.properties of the Falcon Servers:</p>
+<div class="source">
+<pre>
+######### StateStore Properties #####
+*.falcon.state.store.impl=org.apache.falcon.state.store.jdbc.JDBCStateStore
+*.falcon.statestore.jdbc.driver=org.apache.derby.jdbc.EmbeddedDriver
+*.falcon.statestore.jdbc.url=jdbc:derby:data/falcon.db
+# StateStore credentials file where username,password and other properties can be stored securely.
+# Set this credentials file permission 400 and make sure user who starts falcon should only have read permission.
+# Give Absolute path to credentials file along with file name or put in classpath with file name statestore.credentials.
+# Credentials file should be present either in given location or class path, otherwise falcon won't start.
+*.falcon.statestore.credentials.file=
+*.falcon.statestore.jdbc.username=sa
+*.falcon.statestore.jdbc.password=
+*.falcon.statestore.connection.data.source=org.apache.commons.dbcp.BasicDataSource
+# Maximum number of active connections that can be allocated from this pool at the same time.
+*.falcon.statestore.pool.max.active.conn=10
+*.falcon.statestore.connection.properties=
+# Indicates the interval (in milliseconds) between eviction runs.
+*.falcon.statestore.validate.db.connection.eviction.interval=300000
+## The number of objects to examine during each run of the idle object evictor thread.
+*.falcon.statestore.validate.db.connection.eviction.num=10
+## Creates Falcon DB.
+## If set to true, it creates the DB schema if it does not exist. If the DB schema exists is a NOP.
+## If set to false, it does not create the DB schema. If the DB schema does not exist it fails start up.
+*.falcon.statestore.create.db.schema=true
+
+</pre></div>
+<p>The <i>*.falcon.statestore.jdbc.url</i> property in statestore.properties determines the DB and data location. All other properties are common across RDBMSs.</p>
+<p><b>NOTE : Although multiple Falcon Servers can share a DB (not applicable for Derby DB), it is recommended that you have different DBs for different Falcon Servers for better performance.</b></p>
+<p>You will need to create the state DB and tables before starting the Falcon Server. To create tables, a tool comes bundled with the Falcon installation. You can use the <i>falcon-db.sh</i> script to create tables in the DB. The script needs to be run only for Falcon Servers and can be run by any user that has execute permission on the script. The script picks up the DB connection details from <b><i>$FALCON_HOME/conf/statestore.properties</i></b>. Ensure that you have granted the right privileges to the user mentioned in statestore.properties, so the tables can be created.</p>
+<p>You can use the help command to get details on the sub-commands supported:</p>
+<div class="source">
+<pre>
+./bin/falcon-db.sh help
+Hadoop home is set, adding libraries from '/Users/pallavi.rao/falcon/hadoop-2.6.0/bin/hadoop classpath' into falcon classpath
+usage: 
+      Falcon DB initialization tool currently supports Derby DB/ Mysql
+
+      falcondb help : Display usage for all commands or specified command
+
+      falcondb version : Show Falcon DB version information
+
+      falcondb create &lt;OPTIONS&gt; : Create Falcon DB schema
+                      -run             Confirmation option regarding DB schema creation/upgrade
+                      -sqlfile &lt;arg&gt;   Generate SQL script instead of creating/upgrading the DB
+                                       schema
+
+      falcondb upgrade &lt;OPTIONS&gt; : Upgrade Falcon DB schema
+                       -run             Confirmation option regarding DB schema creation/upgrade
+                       -sqlfile &lt;arg&gt;   Generate SQL script instead of creating/upgrading the DB
+                                        schema
+
+
+</pre></div>
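+<p>For example, based on the usage above, you can first generate the SQL script for review and then create the schema (a sketch; the tool reads the connection details from statestore.properties, and the script path is illustrative):</p>
+<div class="source">
+<pre>
+# Generate the SQL script instead of applying it directly
+./bin/falcon-db.sh create -sqlfile /tmp/falcon-create.sql
+
+# Create the Falcon DB schema
+./bin/falcon-db.sh create -run
+
+</pre></div>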
+<p>Currently, MySQL, PostgreSQL and Derby are supported as state stores. We may extend support to other DBs in the future. Falcon has been tested against MySQL v5.5 and PostgreSQL v9.5. If you are using MySQL, ensure you also copy mysql-connector-java-&lt;version&gt;.jar under <b><i>$FALCON_HOME/server/webapp/falcon/WEB-INF/lib</i></b> and <b><i>$FALCON_HOME/client/lib</i></b>.</p></div>
+<div class="section">
+<h5>Using Derby as the State Store<a name="Using_Derby_as_the_State_Store"></a></h5>
+<p>Using Derby is ideal for QA and staging setups. Falcon comes bundled with a Derby connector, and no explicit setup is required (although you can set it up) in terms of creating the DB or tables. For example,</p>
+<div class="source">
+<pre> *.falcon.statestore.jdbc.url=jdbc:derby:data/falcon.db;create=true 
+</pre></div>
+<p>tells Falcon to use the Derby JDBC connector, with the data directory $FALCON_HOME/data/ and DB name 'falcon'. If <i>create=true</i> is specified, you will not need to create the DB up front; a database will be created if it does not exist.</p></div>
+<div class="section">
+<h5>Using MySQL as the State Store<a name="Using_MySQL_as_the_State_Store"></a></h5>
+<p>The jdbc.url property in statestore.properties determines the DB and data location. For example,</p>
+<div class="source">
+<pre> *.falcon.statestore.jdbc.url=jdbc:mysql://localhost:3306/falcon 
+</pre></div>
+<p>tells Falcon to use the MySQL JDBC connector, accessible at localhost:3306, with DB name 'falcon'.</p>
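+<p>A minimal MySQL configuration in statestore.properties might look as follows (a sketch; the DB name and credentials are illustrative, adjust them for your environment):</p>
+<div class="source">
+<pre>
+*.falcon.state.store.impl=org.apache.falcon.state.store.jdbc.JDBCStateStore
+*.falcon.statestore.jdbc.driver=com.mysql.jdbc.Driver
+*.falcon.statestore.jdbc.url=jdbc:mysql://localhost:3306/falcon
+*.falcon.statestore.jdbc.username=falcon_user
+*.falcon.statestore.jdbc.password=falcon_password
+
+</pre></div>
+</div>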
+<div class="section">
+<h3>Scheduling new entities on Native Scheduler<a name="Scheduling_new_entities_on_Native_Scheduler"></a></h3>
+<p>To schedule an entity (currently only process is supported) using the native scheduler, you need to specify the scheduler in the schedule command as shown below:</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon entity -type process -name &lt;process name&gt; -schedule -properties falcon.scheduler:native
+
+</pre></div>
+<p>If Oozie is configured as the default scheduler, you can skip the scheduler option or explicitly set it to <i>oozie</i>, as shown below:</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon entity -type process -name &lt;process name&gt; -schedule
+OR
+$FALCON_HOME/bin/falcon entity -type process -name &lt;process name&gt; -schedule -properties falcon.scheduler:oozie
+
+</pre></div>
+<p>If the native scheduler is configured as the default scheduler, you can omit the scheduler option, as shown below:</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon entity -type process -name &lt;process name&gt; -schedule 
+
+</pre></div></div>
+<div class="section">
+<h3>Migrating entities from Oozie Scheduler to Native Scheduler<a name="Migrating_entities_from_Oozie_Scheduler_to_Native_Scheduler"></a></h3>
+<p>Currently, users will have to delete and re-create entities in order to move them across schedulers. Attempting to schedule an already scheduled entity on a different scheduler will result in an error. Note that the history of instances prior to scheduling on the native scheduler will not be available via the instance APIs. However, users can retrieve that information using the metadata APIs. The native scheduler must be enabled before migrating entities to it.</p>
+<p><a href="#Configuring_Native_Scheduler">Configuring Native Scheduler</a> has more details on how to enable native scheduler.</p></div>
+<div class="section">
+<h4>Migrating from Oozie to Native Scheduler<a name="Migrating_from_Oozie_to_Native_Scheduler"></a></h4>
+<p></p>
+<ul>
+<li>Delete the entity (process).</li></ul>
+<div class="source">
+<pre>$FALCON_HOME/bin/falcon entity -type process -name &lt;process name&gt; -delete 
+</pre></div>
+<p></p>
+<ul>
+<li>Submit the entity (process) with start time from where the Oozie scheduler left off.</li></ul>
+<div class="source">
+<pre>$FALCON_HOME/bin/falcon entity -type process -submit &lt;path to process xml&gt; 
+</pre></div>
+<p></p>
+<ul>
+<li>Schedule the entity on native scheduler.</li></ul>
+<div class="source">
+<pre> $FALCON_HOME/bin/falcon entity -type process -name &lt;process name&gt; -schedule -properties falcon.scheduler:native 
+</pre></div>
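+<p>For example, migrating a hypothetical process named <i>sample-process</i> (the name and XML path are illustrative):</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon entity -type process -name sample-process -delete
+# Edit sample-process.xml so that its validity start picks up where the Oozie scheduler left off
+$FALCON_HOME/bin/falcon entity -type process -submit -file /path/to/sample-process.xml
+$FALCON_HOME/bin/falcon entity -type process -name sample-process -schedule -properties falcon.scheduler:native
+
+</pre></div>
+</div>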
+<div class="section">
+<h4>Reverting to Oozie from Native Scheduler<a name="Reverting_to_Oozie_from_Native_Scheduler"></a></h4>
+<p></p>
+<ul>
+<li>Delete the entity (process).</li></ul>
+<div class="source">
+<pre>$FALCON_HOME/bin/falcon entity -type process -name &lt;process name&gt; -delete 
+</pre></div>
+<p></p>
+<ul>
+<li>Submit the entity (process) with start time from where the Native scheduler left off.</li></ul>
+<div class="source">
+<pre>$FALCON_HOME/bin/falcon entity -type process -submit &lt;path to process xml&gt; 
+</pre></div>
+<p></p>
+<ul>
+<li>Schedule the entity on the default scheduler (Oozie).</li></ul>
+<div class="source">
+<pre> $FALCON_HOME/bin/falcon entity -type process -name &lt;process name&gt; -schedule 
+</pre></div></div>
+<div class="section">
+<h4>Differences in API responses between Oozie and Native Scheduler<a name="Differences_in_API_responses_between_Oozie_and_Native_Scheduler"></a></h4>
+<p>Most API responses are similar whether the entity is scheduled via Oozie or via the native scheduler. However, there are a few exceptions, which are listed below.</p></div>
+<div class="section">
+<h5>Rerun API<a name="Rerun_API"></a></h5>
+<p>When a user performs a rerun using Oozie scheduler, Falcon directly reruns the workflow on Oozie and the instance will be moved to 'RUNNING'.</p>
+<p>Example response:</p>
+<div class="source">
+<pre>
+$ falcon instance -rerun processMerlinOozie -start 2016-01-08T12:13Z -end 2016-01-08T12:15Z
+Consolidated Status: SUCCEEDED
+
+Instances:
+Instance		Cluster		SourceCluster		Status		Start		End		Details					Log
+-----------------------------------------------------------------------------------------------
+2016-01-08T12:13Z	ProcessMultipleClustersTest-corp-9706f068	-	RUNNING	2016-01-08T13:03Z	2016-01-08T13:03Z	-	http://8RPCG32.corp.inmobi.com:11000/oozie?job=0001811-160104160825636-oozie-oozi-W
+2016-01-08T12:13Z	ProcessMultipleClustersTest-corp-0b270a1d	-	RUNNING	2016-01-08T13:03Z	2016-01-08T13:03Z	-	http://lda01:11000/oozie?job=0002247-160104115615658-oozie-oozi-W
+
+Additional Information:
+Response: ua1/RERUN
+ua2/RERUN
+Request Id: ua1/871377866@qtp-630572412-35 - 7190c4c8-bacb-4639-8d48-c9e639f544da
+ua2/1554129706@qtp-536122141-13 - bc18127b-1bf8-4ea1-99e6-b1f10ba3a441
+
+</pre></div>
+<p>However, when a user performs a rerun on the native scheduler, the instance is scheduled again. This is done intentionally so as not to violate the limit on the number of instances running in parallel.  Hence, the user will see the status of the instance as 'READY'.</p>
+<p>Example response:</p>
+<div class="source">
+<pre>
+$ falcon instance -rerun ProcessMultipleClustersTest-agregator-coord16-8f55f59b -start 2016-01-08T12:13Z -end 2016-01-08T12:15Z
+Consolidated Status: SUCCEEDED
+
+Instances:
+Instance		Cluster		SourceCluster		Status		Start		End		Details					Log
+-----------------------------------------------------------------------------------------------
+2016-01-08T12:13Z	ProcessMultipleClustersTest-corp-9706f068	-	READY	2016-01-08T13:03Z	2016-01-08T13:03Z	-	http://8RPCG32.corp.inmobi.com:11000/oozie?job=0001812-160104160825636-oozie-oozi-W
+
+2016-01-08T12:13Z	ProcessMultipleClustersTest-corp-0b270a1d	-	READY	2016-01-08T13:03Z	2016-01-08T13:03Z	-	http://lda01:11000/oozie?job=0002248-160104115615658-oozie-oozi-W
+
+Additional Information:
+Response: ua1/RERUN
+ua2/RERUN
+Request Id: ua1/871377866@qtp-630572412-35 - 8d118d4d-c0ef-4335-a9af-10364498ec4f
+ua2/1554129706@qtp-536122141-13 - c2a3fc50-8b05-47ce-9c85-ca432b96d923
+
+</pre></div></div>
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    2013-2016
+                        <a href="http://www.apache.org">Apache Software Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4612c3f7/content/0.10/FeedSLAMonitoring.html
----------------------------------------------------------------------
diff --git a/content/0.10/FeedSLAMonitoring.html b/content/0.10/FeedSLAMonitoring.html
new file mode 100644
index 0000000..aa57f26
--- /dev/null
+++ b/content/0.10/FeedSLAMonitoring.html
@@ -0,0 +1,109 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2016-08-08
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20160808" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Falcon Feed SLA Monitoring</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( '.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                                                <img src="images/falcon-logo.png"  alt="Apache Falcon" width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Falcon Feed SLA Monitoring</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 2016-08-08</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.10</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h3>Falcon Feed SLA Monitoring<a name="Falcon_Feed_SLA_Monitoring"></a></h3>
+<p>Feed SLA monitoring allows you to monitor feeds. It keeps track of the running instances of a feed and stores them in the database.</p>
+<p>The Feed SLA monitoring service requires FalconJPAService to be up. The following values need to be set in startup.properties to run FeedSLAMonitoring:</p>
+<div class="source">
+<pre>
+*.application.services=org.apache.falcon.state.store.service.FalconJPAService,\
+                       org.apache.falcon.service.FeedSLAMonitoringService
+
+</pre></div>
+<p>These properties are required for FalconJPAService in statestore.properties:</p>
+<p></p>
+<ul>
+<li><b>falcon.state.store.impl</b> - org.apache.falcon.state.store.jdbc.JDBCStateStore</li>
+<li><b>falcon.statestore.jdbc.driver</b> - org.apache.derby.jdbc.EmbeddedDriver</li>
+<li><b>falcon.statestore.jdbc.url</b> - jdbc:derby:target/test-data/data.db;create=true</li>
+<li><b>falcon.statestore.connection.data.source</b> - org.apache.commons.dbcp.BasicDataSource</li>
+<li><b>falcon.statestore.pool.max.active.conn</b> - 10 (maximum number of active connections that can be allocated from this pool at the same time)</li>
+<li><b>falcon.statestore.connection.properties</b> -</li>
+<li><b>falcon.statestore.validate.db.connection.eviction.interval</b> - 300000 (indicates the interval, in milliseconds, between eviction runs)</li>
+<li><b>falcon.statestore.validate.db.connection.eviction.num</b> - 10 (the number of objects to examine during each run of the idle object evictor thread)</li>
+<li><b>falcon.statestore.create.db.schema</b> - false (if set to true, the DB schema is created if it does not exist and it is a NOP if it does; if set to false, the schema is not created and startup fails if it does not exist)</li></ul>
+<p>Note: Since <i>falcon.statestore.create.db.schema</i> is set to false here, the schema has to be created manually in production before Falcon is started for the first time.</p>
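+<p>A minimal sketch of creating the schema manually with the bundled tool (see the <i>falcon-db.sh</i> documentation in the Falcon Native Scheduler page for details; the command reads connection details from statestore.properties):</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon-db.sh create -run
+
+</pre></div>
+</div>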
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    2013-2016
+                        <a href="http://www.apache.org">Apache Software Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4612c3f7/content/0.10/FeedSchedule.png
----------------------------------------------------------------------
diff --git a/content/0.10/FeedSchedule.png b/content/0.10/FeedSchedule.png
new file mode 100644
index 0000000..105c6b1
Binary files /dev/null and b/content/0.10/FeedSchedule.png differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4612c3f7/content/0.10/GraphiteMetricCollection.html
----------------------------------------------------------------------
diff --git a/content/0.10/GraphiteMetricCollection.html b/content/0.10/GraphiteMetricCollection.html
new file mode 100644
index 0000000..f729213
--- /dev/null
+++ b/content/0.10/GraphiteMetricCollection.html
@@ -0,0 +1,101 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2016-08-08
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20160808" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Graphite Metric Collection</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( '.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                                                <img src="images/falcon-logo.png"  alt="Apache Falcon" width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Graphite Metric Collection</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 2016-08-08</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.10</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h3>Graphite Metric Collection<a name="Graphite_Metric_Collection"></a></h3>
+<p>Graphite metric collection currently allows collecting the following metrics at the process level:</p>
+<ol style="list-style-type: decimal">
+<li>Processing time the process spent in the running state, in seconds (workflow_end_time - workflow_start_time)</li>
+<li>Wait time the process spent in the waiting/ready state, in seconds (workflow_start_time - workflow_nominal_time)</li>
+<li>Number of failed instances of a process</li></ol>
+<p>To send data to Graphite, we need to initialize the MetricNotificationService in startup.properties:</p>
+<div class="source">
+<pre>
+*.application.services= org.apache.falcon.metrics.MetricNotificationService,
+
+</pre></div>
+<p>Add the following Graphite properties for the graphiteNotificationPlugin:</p>
+<div class="source">
+<pre>
+*.falcon.graphite.hostname=localhost
+*.falcon.graphite.port=2003
+*.falcon.graphite.frequency=1
+*.falcon.graphite.prefix=falcon
+
+</pre></div>
+<p>The <i>*.falcon.graphite.frequency</i> property is in seconds, and all times sent to Graphite are in seconds.</p></div>
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    2013-2016
+                        <a href="http://www.apache.org">Apache Software Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4612c3f7/content/0.10/HDFSMirroring.html
----------------------------------------------------------------------
diff --git a/content/0.10/HDFSMirroring.html b/content/0.10/HDFSMirroring.html
new file mode 100644
index 0000000..1fd68fe
--- /dev/null
+++ b/content/0.10/HDFSMirroring.html
@@ -0,0 +1,118 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2016-08-08
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20160808" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - HDFS mirroring Extension</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( '.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                                                <img src="images/falcon-logo.png"  alt="Apache Falcon" width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">HDFS mirroring Extension</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 2016-08-08</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.10</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h2>HDFS mirroring Extension<a name="HDFS_mirroring_Extension"></a></h2></div>
+<div class="section">
+<h3>Overview<a name="Overview"></a></h3>
+<p>Falcon supports an HDFS mirroring extension to replicate data from a source cluster to a destination cluster. This extension implements replication of arbitrary directories on HDFS and piggybacks on the replication solution in Falcon, which uses the <a href="./DistCp.html">DistCp</a> tool. It also allows users to replicate data from on-premise to cloud, either Azure WASB or S3.</p></div>
+<div class="section">
+<h3>Use Case<a name="Use_Case"></a></h3>
+<p></p>
+<ul>
+<li>Copy directories between HDFS clusters, without dated partitions</li>
+<li>Archive directories from HDFS to cloud storage, e.g. S3 or Azure WASB</li></ul></div>
+<div class="section">
+<h3>Limitations<a name="Limitations"></a></h3>
+<p>As the data volume and number of files grow, this can get inefficient.</p></div>
+<div class="section">
+<h3>Usage<a name="Usage"></a></h3></div>
+<div class="section">
+<h4>Setup source and destination clusters<a name="Setup_source_and_destination_clusters"></a></h4>
+<div class="source">
+<pre>
+    $FALCON_HOME/bin/falcon entity -submit -type cluster -file /cluster/definition.xml
+   
+</pre></div></div>
+<div class="section">
+<h4>HDFS mirroring extension properties<a name="HDFS_mirroring_extension_properties"></a></h4>
+<p>Extension artifacts are expected to be installed on HDFS at the path specified by &quot;extension.store.uri&quot; in the startup properties. The hdfs-mirroring-properties.json file located at &quot;&lt;extension.store.uri&gt;/hdfs-mirroring/META/hdfs-mirroring-properties.json&quot; lists all the required and optional parameters/arguments for scheduling the HDFS mirroring job.</p>
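+<p>A minimal sketch of such a properties file, assuming names analogous to the snapshot-based mirroring sample elsewhere in this documentation (the paths, cluster names and values are illustrative; the authoritative list is in hdfs-mirroring-properties.json):</p>
+<div class="source">
+<pre>
+   ## Job properties
+   jobName=hdfs-mirroring-test
+   jobClusterName=backupCluster
+   jobValidityStart=2016-01-01T00:00Z
+   jobValidityEnd=2016-04-01T00:00Z
+   jobFrequency=hours(1)
+
+   ## Source and target information
+   sourceDir=/apps/falcon/mirror/source/
+   sourceCluster=primaryCluster
+   targetDir=/apps/falcon/mirror/target/
+   targetCluster=backupCluster
+
+   ## Distcp properties
+   distcpMaxMaps=1
+   distcpMapBandwidth=100
+
+</pre></div>
+</div>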
+<div class="section">
+<h4>Submit and schedule HDFS mirroring extension<a name="Submit_and_schedule_HDFS_mirroring_extension"></a></h4>
+<div class="source">
+<pre>
+    $FALCON_HOME/bin/falcon extension -submitAndSchedule -extensionName hdfs-mirroring -file /process/definition.xml
+   
+</pre></div>
+<p>Please refer to the <a href="./Falconcli/FalconCLI.html">Falcon CLI</a> and <a href="./Restapi/ResourceList.html">REST API</a> documentation for more details on usage of the CLI and REST APIs.</p></div>
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    2013-2016
+                        <a href="http://www.apache.org">Apache Software Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4612c3f7/content/0.10/HdfsSnapshotMirroring.html
----------------------------------------------------------------------
diff --git a/content/0.10/HdfsSnapshotMirroring.html b/content/0.10/HdfsSnapshotMirroring.html
new file mode 100644
index 0000000..7a21088
--- /dev/null
+++ b/content/0.10/HdfsSnapshotMirroring.html
@@ -0,0 +1,181 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2016-08-08
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20160808" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - HDFS Snapshot based Mirroring</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( '.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                                                <img src="images/falcon-logo.png"  alt="Apache Falcon" width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">HDFS Snapshot based Mirroring</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 2016-08-08</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.10</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h2>HDFS Snapshot based Mirroring<a name="HDFS_Snapshot_based_Mirroring"></a></h2></div>
+<div class="section">
+<h3>Overview<a name="Overview"></a></h3>
+<p>HDFS snapshots are very cost effective to create (the cost is O(1), excluding iNode lookup time). Once created, it is very efficient to find modifications relative to a snapshot and to copy over these modifications for disaster recovery (DR). This makes for cost-effective HDFS mirroring.</p></div>
+<div class="section">
+<h3>Prerequisites<a name="Prerequisites"></a></h3>
+<p>Following are the prerequisites to use HDFS snapshot based mirroring.</p>
+<p></p>
+<ul>
+<li>Hadoop version 2.7.0 or higher.</li>
+<li>The user submitting and scheduling the Falcon snapshot based mirroring job should have permission to create and manage snapshots on both the source and target directories.</li></ul></div>
+<div class="section">
+<h3>Use Case<a name="Use_Case"></a></h3>
+<p>Create and manage snapshots on source/target directories. Mirror data from source to target for disaster recovery using these snapshots. Perform retention on the snapshots created on source and target.</p></div>
+<div class="section">
+<h3>Usage<a name="Usage"></a></h3></div>
+<div class="section">
+<h4>Setup<a name="Setup"></a></h4>
+<p></p>
+<ul>
+<li>Submit a source cluster and target cluster entities to Falcon.</li></ul>
+<div class="source">
+<pre>
+    $FALCON_HOME/bin/falcon entity -submit -type cluster -file source-cluster-definition.xml
+    $FALCON_HOME/bin/falcon entity -submit -type cluster -file target-cluster-definition.xml
+   
+</pre></div>
+<p></p>
+<ul>
+<li>Ensure that the source directory on the source cluster and the target directory on the target cluster exist.</li>
+<li>Ensure that these directories are snapshottable by the user submitting the extension; an example follows this list. You can find more <a class="externalLink" href="https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html">information on snapshots here</a>.</li></ul>
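+<p>For example, an HDFS administrator can make the directories snapshottable as follows (the paths are illustrative):</p>
+<div class="source">
+<pre>
+hdfs dfsadmin -allowSnapshot /apps/falcon/snapshots/source/
+hdfs dfsadmin -allowSnapshot /apps/falcon/snapshots/target/
+
+</pre></div>
+</div>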
+<div class="section">
+<h4>HDFS Snapshot based mirroring extension properties<a name="HDFS_Snapshot_based_mirroring_extension_properties"></a></h4>
+<p>Extension artifacts are expected to be installed on HDFS at the path specified by &quot;extension.store.uri&quot; in the startup properties. The hdfs-snapshot-mirroring-properties.json file located at &quot;&lt;extension.store.uri&gt;/hdfs-snapshot-mirroring/META/hdfs-snapshot-mirroring-properties.json&quot; lists all the required and optional parameters/arguments for scheduling the mirroring job.</p>
+<p>Here is a sample set of properties:</p>
+<div class="source">
+<pre>
+   ## Job Properties
+   jobName=hdfs-snapshot-test
+   jobClusterName=backupCluster
+   jobValidityStart=2016-01-01T00:00Z
+   jobValidityEnd=2016-04-01T00:00Z
+   jobFrequency=hours(12)
+   jobTimezone=UTC
+   jobTags=consumer=consumer@xyz.com
+   jobRetryPolicy=periodic
+   jobRetryDelay=minutes(30)
+   jobRetryAttempts=3
+
+   ## Job owner
+   jobAclOwner=ambari-qa
+   jobAclGroup=users
+   jobAclPermission=*
+
+   ## Source information
+   sourceCluster=primaryCluster
+   sourceSnapshotDir=/apps/falcon/snapshots/source/
+   sourceSnapshotRetentionPolicy=delete
+   sourceSnapshotRetentionAgeLimit=days(15)
+   sourceSnapshotRetentionNumber=10
+
+   ## Target information
+   targetCluster=backupCluster
+   targetSnapshotDir=/apps/falcon/snapshots/target/
+   targetSnapshotRetentionPolicy=delete
+   targetSnapshotRetentionAgeLimit=months(6)
+   targetSnapshotRetentionNumber=20
+
+   ## Distcp properties
+   distcpMaxMaps=1
+   distcpMapBandwidth=100
+   tdeEncryptionEnabled=false
+   
+</pre></div>
+<p>The above properties ensure the Falcon HDFS snapshot based mirroring extension does the following every 12 hours:</p>
+<ul>
+<li>Create snapshot on dir /apps/falcon/snapshots/source/ on primaryCluster.</li>
+<li>DistCP data from /apps/falcon/snapshots/source/ on primaryCluster to /apps/falcon/snapshots/target/ on backupCluster.</li>
+<li>Create snapshot on dir /apps/falcon/snapshots/target/ on backupCluster.</li>
+<li>Perform retention job on source and target.
+<ul>
+<li>Maintain at least N latest snapshots and delete all other snapshots older than specified age limit.</li>
+<li>Today, only &quot;delete&quot; policy is supported for snapshot retention.</li></ul></li></ul>
+<p><b>Note:</b> When TDE encryption is enabled on the source/target directories, DistCp ignores the snapshots and treats it as regular replication. While users may not get the performance benefit of snapshot based DistCp, the extension is still useful for creating and maintaining snapshots.</p></div>
+<div class="section">
+<h4>Submit and schedule HDFS snapshot mirroring extension<a name="Submit_and_schedule_HDFS_snapshot_mirroring_extension"></a></h4>
+<p>Users can submit the extension using the CLI or REST API. The CLI command looks as follows:</p>
+<div class="source">
+<pre>
+    $FALCON_HOME/bin/falcon extension -submitAndSchedule -extensionName hdfs-snapshot-mirroring -file properties-file.txt
+   
+</pre></div>
+<p>Please refer to the <a href="./Falconcli/FalconCLI.html">Falcon CLI</a> and <a href="./Restapi/ResourceList.html">REST API</a> documentation for more details on usage of the CLI and REST APIs.</p></div>
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    2013-2016
+                        <a href="http://www.apache.org">Apache Software Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4612c3f7/content/0.10/HiveIntegration.html
----------------------------------------------------------------------
diff --git a/content/0.10/HiveIntegration.html b/content/0.10/HiveIntegration.html
new file mode 100644
index 0000000..c8f571c
--- /dev/null
+++ b/content/0.10/HiveIntegration.html
@@ -0,0 +1,453 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2016-08-08
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20160808" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Hive Integration</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( '.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                                                <img src="images/falcon-logo.png"  alt="Apache Falcon" width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Hive Integration</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 2016-08-08</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.10</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h2>Hive Integration<a name="Hive_Integration"></a></h2></div>
+<div class="section">
+<h3>Overview<a name="Overview"></a></h3>
+<p>Falcon provides data management functions for feeds declaratively. It allows users to represent feed locations as time-based partition directories on HDFS containing files.</p>
+<p>Hive provides a simple and familiar database-like tabular model of data management to its users, which is backed by HDFS. It supports two classes of tables, managed tables and external tables.</p>
+<p>Falcon allows users to represent feed locations as Hive tables. Falcon supports both managed and external tables and provides data management services for tables, such as replication, eviction, archival, etc. Falcon will notify HCatalog as a side effect of acquiring, replicating or evicting a data set instance, and adds the missing capability of HCatalog table replication.</p>
+<p>In the near future, Falcon will allow users to express pipeline processing in Hive scripts apart from Pig and Oozie workflows.</p></div>
+<div class="section">
+<h3>Assumptions<a name="Assumptions"></a></h3>
+<p></p>
+<ul>
+<li>Date is a mandatory first-level partition for Hive tables
+<ul>
+<li>Data availability triggers are based on the date pattern in Oozie</li></ul></li>
+<li>Tables must be created in Hive prior to adding them as a Feed in Falcon.
+<ul>
+<li>Duplicating this in Falcon would create confusion on the real source of truth. Also, propagating schema changes between systems is a hard problem.</li></ul></li>
+<li>Falcon does not know about the encoding of the data; the data should be in an HCatalog supported format.</li></ul></div>
+<div class="section">
+<h3>Configuration<a name="Configuration"></a></h3>
+<p>Falcon provides a system level option to enable Hive integration. Falcon must be configured with an implementation for the catalog registry. The default implementation for Hive is shipped with Falcon.</p>
+<div class="source">
+<pre>
+catalog.service.impl=org.apache.falcon.catalog.HiveCatalogService
+
+</pre></div></div>
+<div class="section">
+<h3>Incompatible changes<a name="Incompatible_changes"></a></h3>
+<p>Falcon depends heavily on data-availability triggers for scheduling Falcon workflows. Oozie must support data-availability triggers based on HCatalog partition availability. This is only available in Oozie 4.x.</p>
+<p>Hence, Falcon for Hive support needs Oozie 4.x.</p></div>
+<div class="section">
+<h3>Oozie Shared Library setup<a name="Oozie_Shared_Library_setup"></a></h3>
+<p>Falcon, post Hive integration, depends heavily on the <a class="externalLink" href="http://oozie.apache.org/docs/4.0.1/WorkflowFunctionalSpec.html#a17_HDFS_Share_Libraries_for_Workflow_Applications_since_Oozie_2.3">shared library feature of Oozie</a>. Since the sheer number of jars for HCatalog, Pig and Hive runs into the many tens, it is quite daunting to redistribute the dependent jars from Falcon.</p>
+<p><a class="externalLink" href="http://oozie.apache.org/docs/4.0.1/DG_QuickStart.html#Oozie_Share_Lib_Installation">This is a one time effort in Oozie setup and is quite straightforward.</a></p></div>
+<div class="section">
+<h3>Approach<a name="Approach"></a></h3></div>
+<div class="section">
+<h4>Entity Changes<a name="Entity_Changes"></a></h4>
+<p></p>
+<ul>
+<li>Cluster DSL will have an additional registry-interface section, specifying the endpoint for the HCatalog server. If this is absent, no HCatalog publication will be done from Falcon for this cluster.</li></ul>
+<div class="source">
+<pre>thrift://hcatalog-server:port
+</pre></div>
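+<p>In the cluster entity XML, this would appear as a registry interface entry; a sketch (the endpoint host, port and version are illustrative):</p>
+<div class="source">
+<pre>&lt;interface type=&quot;registry&quot; endpoint=&quot;thrift://hcatalog-server:9083&quot; version=&quot;0.11.0&quot;/&gt;
+</pre></div>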
+<p></p>
+<ul>
+<li>Feed DSL will allow users to specify the URI (location) for HCatalog tables as:</li></ul>
+<div class="source">
+<pre>catalog:database_name:table_name#partitions(key=value?)*
+</pre></div>
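+<p>For example, a hypothetical feed stored in table <i>summary_table</i> of database <i>falcon_db</i>, partitioned by date, could be specified as:</p>
+<div class="source">
+<pre>catalog:falcon_db:summary_table#ds=${YEAR}-${MONTH}-${DAY}
+</pre></div>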
+<p></p>
+<ul>
+<li>Failure to publish to HCatalog will be retried (configurable # of retries) with back off. Permanent failures after all the retries are exhausted will fail the Falcon workflow.</li></ul></div>
+<div class="section">
+<h4>Eviction<a name="Eviction"></a></h4>
+<p></p>
+<ul>
+<li>Falcon will construct DDL statements to filter candidate partitions eligible for eviction (a sketch of such a statement follows this list)</li>
+<li>Falcon will construct DDL statements to drop the eligible partitions</li>
+<li>Additionally, Falcon will nuke the data on HDFS for external tables</li></ul>
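+<p>A minimal sketch of the kind of DDL statement Falcon constructs (the table name and partition value are illustrative):</p>
+<div class="source">
+<pre>
+-- drop a partition that has aged past the retention limit
+ALTER TABLE summary_table DROP PARTITION (ds='2016-01-01');
+
+</pre></div>
+</div>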
+<div class="section">
+<h4>Replication<a name="Replication"></a></h4>
+<p></p>
+<ul>
+<li>Falcon will use the HCatalog (Hive) API to export the data for a given table and partition, which will result in a data collection that includes metadata on the data's storage format, the schema, how the data is sorted, what table the data came from, and values of any partition keys from that table.</li>
+<li>Falcon will use the distcp tool to copy the exported data collection into the secondary cluster, into a staging directory used by Falcon.</li>
+<li>Falcon will then import the data into HCatalog (Hive) using the HCatalog (Hive) API. If the specified table does not yet exist, Falcon will create it, using the information in the imported metadata to set defaults for the table such as schema, storage format, etc.</li>
+<li>The partition is not complete and hence not visible to users until all the data is committed on the secondary cluster (no dirty reads).</li>
+<li>Data collection is staged by Falcon, and retries for the copy continue from where they left off.</li>
+<li>Failure to register with Hive will be retried. After all the attempts are exhausted, the data will be cleaned up by Falcon.</li></ul></div>
+<div class="section">
+<h4>Security<a name="Security"></a></h4>
+<p>The user owns all data managed by Falcon. Falcon runs as the user who submitted the feed. Falcon will authenticate with HCatalog as the end user who owns the entity and the data.</p>
+<p>For Hive managed tables, the table may be owned by the end user or &quot;hive&quot;. For &quot;hive&quot; owned tables, the user will have to configure the feed as &quot;hive&quot;.</p></div>
+<div class="section">
+<h3>Load on HCatalog from Falcon<a name="Load_on_HCatalog_from_Falcon"></a></h3>
+<p>It generally depends on the frequency of the feeds configured in Falcon and how often data is ingested, replicated, or processed.</p></div>
+<div class="section">
+<h3>User Impact<a name="User_Impact"></a></h3>
+<p></p>
+<ul>
+<li>There should not be any impact to users due to this integration</li>
+<li>Falcon will be fully backwards compatible</li>
+<li>Users have a choice to either keep storage based on files on HDFS as they do today, or use HCatalog for accessing the data in tables</li></ul></div>
+<div class="section">
+<h3>Known Limitations<a name="Known_Limitations"></a></h3></div>
+<div class="section">
+<h4>Oozie<a name="Oozie"></a></h4>
+<p></p>
+<ul>
+<li>Falcon with Hadoop 1.x requires copying guava jars manually to the sharelib in Oozie. Hadoop 2.x ships these.</li>
+<li>hcatalog-pig-adapter needs to be copied manually to the Oozie sharelib:</li></ul>
+<div class="source">
+<pre>
+bin/hadoop dfs -copyFromLocal $LFS/share/lib/hcatalog/hcatalog-pig-adapter-0.5.0-incubating.jar share/lib/hcatalog
+
+</pre></div>
+<p></p>
+<ul>
+<li>Oozie 4.x with Hadoop 2.x</li></ul>
+<p>Replication jobs are submitted to Oozie on the destination cluster. Oozie runs a table export job on the RM of the source cluster. The Oozie server on the target cluster must be configured with the source Hadoop configs, else jobs fail on secure and non-secure clusters with errors like the one below:</p>
+<div class="source">
+<pre>
+org.apache.hadoop.security.token.SecretManager$InvalidToken: Password not found for ApplicationAttempt appattempt_1395965672651_0010_000002
+
+</pre></div>
+<p>Make sure all Oozie servers that Falcon talks to have the Hadoop configs configured in oozie-site.xml:</p>
+<div class="source">
+<pre>
+&lt;property&gt;
+      &lt;name&gt;oozie.service.HadoopAccessorService.hadoop.configurations&lt;/name&gt;
+      &lt;value&gt;*=/etc/hadoop/conf,arpit-new-falcon-1.cs1cloud.internal:8020=/etc/hadoop-1,arpit-new-falcon-1.cs1cloud.internal:8032=/etc/hadoop-1,arpit-new-falcon-2.cs1cloud.internal:8020=/etc/hadoop-2,arpit-new-falcon-2.cs1cloud.internal:8032=/etc/hadoop-2,arpit-new-falcon-5.cs1cloud.internal:8020=/etc/hadoop-3,arpit-new-falcon-5.cs1cloud.internal:8032=/etc/hadoop-3&lt;/value&gt;
+      &lt;description&gt;
+          Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the HOST:PORT of
+          the Hadoop service (JobTracker, HDFS). The wildcard '*' configuration is
+          used when there is no exact match for an authority. The HADOOP_CONF_DIR contains
+          the relevant Hadoop *-site.xml files. If the path is relative, it is looked up within
+          the Oozie configuration directory; though the path can be absolute (i.e. to point
+          to Hadoop client conf/ directories in the local filesystem).
+      &lt;/description&gt;
+    &lt;/property&gt;
+
+</pre></div></div>
+<div class="section">
+<h4>Hive<a name="Hive"></a></h4>
+<p></p>
+<ul>
+<li>Dated Partitions</li></ul>Falcon does not work well when a table partition contains multiple dated columns; it only works with a single dated partition. This is being tracked in FALCON-357, which is a limitation in Oozie.
+<div class="source">
+<pre>
+catalog:default:table4#year=${YEAR};month=${MONTH};day=${DAY};hour=${HOUR};minute=${MINUTE}
+
+</pre></div>
+<p></p>
+<ul>
+<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HIVE-5550">Hive table import fails for tables created with default text and sequence file formats using HCatalog API</a></li></ul>For some arcane reason, hive substitutes the output format for text and sequence to be prefixed with Hive. Hive table import fails since it compares against the input and output formats of the source table and they are different. Say, a table was created with out specifying the file format, it defaults to:
+<div class="source">
+<pre>
+fileFormat=TextFile, inputformat=org.apache.hadoop.mapred.TextInputFormat, outputformat=org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat
+
+</pre></div>
+<p>But when Hive fetches the table from the metastore, it replaces the output format with org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, and the comparison between the source and target tables fails.</p>
+<div class="source">
+<pre>
+org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer#checkTable
+      // check IF/OF/Serde
+      String existingifc = table.getInputFormatClass().getName();
+      String importedifc = tableDesc.getInputFormat();
+      String existingofc = table.getOutputFormatClass().getName();
+      String importedofc = tableDesc.getOutputFormat();
+      if ((!existingifc.equals(importedifc))
+          || (!existingofc.equals(importedofc))) {
+        throw new SemanticException(
+            ErrorMsg.INCOMPATIBLE_SCHEMA
+                .getMsg(&quot; Table inputformat/outputformats do not match&quot;));
+      }
+
+</pre></div>
+<p>The above is not an issue with Hive 0.13.</p></div>
+<div class="section">
+<h3>Hive Examples<a name="Hive_Examples"></a></h3>
+<p>Following are example entity configurations for lifecycle management functions for tables in Hive.</p></div>
+<div class="section">
+<h4>Hive Table Lifecycle Management - Replication and Retention<a name="Hive_Table_Lifecycle_Management_-_Replication_and_Retention"></a></h4></div>
+<div class="section">
+<h5>Primary Cluster<a name="Primary_Cluster"></a></h5>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot;?&gt;
+&lt;!--
+    Primary cluster configuration for demo vm
+  --&gt;
+&lt;cluster colo=&quot;west-coast&quot; description=&quot;Primary Cluster&quot;
+         name=&quot;primary-cluster&quot;
+         xmlns=&quot;uri:falcon:cluster:0.1&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt;
+    &lt;interfaces&gt;
+        &lt;interface type=&quot;readonly&quot; endpoint=&quot;hftp://localhost:10070&quot;
+                   version=&quot;1.1.1&quot; /&gt;
+        &lt;interface type=&quot;write&quot; endpoint=&quot;hdfs://localhost:10020&quot;
+                   version=&quot;1.1.1&quot; /&gt;
+        &lt;interface type=&quot;execute&quot; endpoint=&quot;localhost:10300&quot;
+                   version=&quot;1.1.1&quot; /&gt;
+        &lt;interface type=&quot;workflow&quot; endpoint=&quot;http://localhost:11010/oozie/&quot;
+                   version=&quot;4.0.1&quot; /&gt;
+        &lt;interface type=&quot;registry&quot; endpoint=&quot;thrift://localhost:19083&quot;
+                   version=&quot;0.11.0&quot; /&gt;
+        &lt;interface type=&quot;messaging&quot; endpoint=&quot;tcp://localhost:61616?daemon=true&quot;
+                   version=&quot;5.4.3&quot; /&gt;
+    &lt;/interfaces&gt;
+    &lt;locations&gt;
+        &lt;location name=&quot;staging&quot; path=&quot;/apps/falcon/staging&quot; /&gt;
+        &lt;location name=&quot;temp&quot; path=&quot;/tmp&quot; /&gt;
+        &lt;location name=&quot;working&quot; path=&quot;/apps/falcon/working&quot; /&gt;
+    &lt;/locations&gt;
+&lt;/cluster&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>BCP Cluster<a name="BCP_Cluster"></a></h5>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot;?&gt;
+&lt;!--
+    BCP cluster configuration for demo vm
+  --&gt;
+&lt;cluster colo=&quot;east-coast&quot; description=&quot;BCP Cluster&quot;
+         name=&quot;bcp-cluster&quot;
+         xmlns=&quot;uri:falcon:cluster:0.1&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt;
+    &lt;interfaces&gt;
+        &lt;interface type=&quot;readonly&quot; endpoint=&quot;hftp://localhost:20070&quot;
+                   version=&quot;1.1.1&quot; /&gt;
+        &lt;interface type=&quot;write&quot; endpoint=&quot;hdfs://localhost:20020&quot;
+                   version=&quot;1.1.1&quot; /&gt;
+        &lt;interface type=&quot;execute&quot; endpoint=&quot;localhost:20300&quot;
+                   version=&quot;1.1.1&quot; /&gt;
+        &lt;interface type=&quot;workflow&quot; endpoint=&quot;http://localhost:11020/oozie/&quot;
+                   version=&quot;4.0.1&quot; /&gt;
+        &lt;interface type=&quot;registry&quot; endpoint=&quot;thrift://localhost:29083&quot;
+                   version=&quot;0.11.0&quot; /&gt;
+        &lt;interface type=&quot;messaging&quot; endpoint=&quot;tcp://localhost:61616?daemon=true&quot;
+                   version=&quot;5.4.3&quot; /&gt;
+    &lt;/interfaces&gt;
+    &lt;locations&gt;
+        &lt;location name=&quot;staging&quot; path=&quot;/apps/falcon/staging&quot; /&gt;
+        &lt;location name=&quot;temp&quot; path=&quot;/tmp&quot; /&gt;
+        &lt;location name=&quot;working&quot; path=&quot;/apps/falcon/working&quot; /&gt;
+    &lt;/locations&gt;
+&lt;/cluster&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>Feed with replication and eviction policy<a name="Feed_with_replication_and_eviction_policy"></a></h5>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot;?&gt;
+&lt;!--
+    Replicating Hourly customer table from primary to secondary cluster.
+  --&gt;
+&lt;feed description=&quot;Replicating customer table feed&quot; name=&quot;customer-table-replicating-feed&quot;
+      xmlns=&quot;uri:falcon:feed:0.1&quot;&gt;
+    &lt;frequency&gt;hours(1)&lt;/frequency&gt;
+    &lt;timezone&gt;UTC&lt;/timezone&gt;
+
+    &lt;clusters&gt;
+        &lt;cluster name=&quot;primary-cluster&quot; type=&quot;source&quot;&gt;
+            &lt;validity start=&quot;2013-09-24T00:00Z&quot; end=&quot;2013-10-26T00:00Z&quot;/&gt;
+            &lt;retention limit=&quot;hours(2)&quot; action=&quot;delete&quot;/&gt;
+        &lt;/cluster&gt;
+        &lt;cluster name=&quot;bcp-cluster&quot; type=&quot;target&quot;&gt;
+            &lt;validity start=&quot;2013-09-24T00:00Z&quot; end=&quot;2013-10-26T00:00Z&quot;/&gt;
+            &lt;retention limit=&quot;days(30)&quot; action=&quot;delete&quot;/&gt;
+
+            &lt;table uri=&quot;catalog:tgt_demo_db:customer_bcp#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}&quot; /&gt;
+        &lt;/cluster&gt;
+    &lt;/clusters&gt;
+
+    &lt;table uri=&quot;catalog:src_demo_db:customer_raw#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}&quot; /&gt;
+
+    &lt;ACL owner=&quot;seetharam&quot; group=&quot;users&quot; permission=&quot;0755&quot;/&gt;
+    &lt;schema location=&quot;&quot; provider=&quot;hcatalog&quot;/&gt;
+&lt;/feed&gt;
+
+</pre></div></div>
+<div class="section">
+<h4>Hive Table used in Processing Pipelines<a name="Hive_Table_used_in_Processing_Pipelines"></a></h4></div>
+<div class="section">
+<h5>Primary Cluster<a name="Primary_Cluster"></a></h5>
+<p>The cluster definition from the lifecycle example can be used.</p></div>
+<div class="section">
+<h5>Input Feed<a name="Input_Feed"></a></h5>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot;?&gt;
+&lt;feed description=&quot;clicks log table &quot; name=&quot;input-table&quot; xmlns=&quot;uri:falcon:feed:0.1&quot;&gt;
+    &lt;groups&gt;online,bi&lt;/groups&gt;
+    &lt;frequency&gt;hours(1)&lt;/frequency&gt;
+    &lt;timezone&gt;UTC&lt;/timezone&gt;
+
+    &lt;clusters&gt;
+        &lt;cluster name=&quot;##cluster##&quot; type=&quot;source&quot;&gt;
+            &lt;validity start=&quot;2010-01-01T00:00Z&quot; end=&quot;2012-04-21T00:00Z&quot;/&gt;
+            &lt;retention limit=&quot;hours(24)&quot; action=&quot;delete&quot;/&gt;
+        &lt;/cluster&gt;
+    &lt;/clusters&gt;
+
+    &lt;table uri=&quot;catalog:falcon_db:input_table#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}&quot; /&gt;
+
+    &lt;ACL owner=&quot;testuser&quot; group=&quot;group&quot; permission=&quot;0755&quot;/&gt;
+    &lt;schema location=&quot;/schema/clicks&quot; provider=&quot;protobuf&quot;/&gt;
+&lt;/feed&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>Output Feed<a name="Output_Feed"></a></h5>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot;?&gt;
+&lt;feed description=&quot;clicks log identity table&quot; name=&quot;output-table&quot; xmlns=&quot;uri:falcon:feed:0.1&quot;&gt;
+    &lt;groups&gt;online,bi&lt;/groups&gt;
+    &lt;frequency&gt;hours(1)&lt;/frequency&gt;
+    &lt;timezone&gt;UTC&lt;/timezone&gt;
+
+    &lt;clusters&gt;
+        &lt;cluster name=&quot;##cluster##&quot; type=&quot;source&quot;&gt;
+            &lt;validity start=&quot;2010-01-01T00:00Z&quot; end=&quot;2012-04-21T00:00Z&quot;/&gt;
+            &lt;retention limit=&quot;hours(24)&quot; action=&quot;delete&quot;/&gt;
+        &lt;/cluster&gt;
+    &lt;/clusters&gt;
+
+    &lt;table uri=&quot;catalog:falcon_db:output_table#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}&quot; /&gt;
+
+    &lt;ACL owner=&quot;testuser&quot; group=&quot;group&quot; permission=&quot;0755&quot;/&gt;
+    &lt;schema location=&quot;/schema/clicks&quot; provider=&quot;protobuf&quot;/&gt;
+&lt;/feed&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>Process<a name="Process"></a></h5>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot;?&gt;
+&lt;process name=&quot;##processName##&quot; xmlns=&quot;uri:falcon:process:0.1&quot;&gt;
+    &lt;clusters&gt;
+        &lt;cluster name=&quot;##cluster##&quot;&gt;
+            &lt;validity end=&quot;2012-04-22T00:00Z&quot; start=&quot;2012-04-21T00:00Z&quot;/&gt;
+        &lt;/cluster&gt;
+    &lt;/clusters&gt;
+
+    &lt;parallel&gt;1&lt;/parallel&gt;
+    &lt;order&gt;FIFO&lt;/order&gt;
+    &lt;frequency&gt;days(1)&lt;/frequency&gt;
+    &lt;timezone&gt;UTC&lt;/timezone&gt;
+
+    &lt;inputs&gt;
+        &lt;input end=&quot;today(0,0)&quot; start=&quot;today(0,0)&quot; feed=&quot;input-table&quot; name=&quot;input&quot;/&gt;
+    &lt;/inputs&gt;
+
+    &lt;outputs&gt;
+        &lt;output instance=&quot;now(0,0)&quot; feed=&quot;output-table&quot; name=&quot;output&quot;/&gt;
+    &lt;/outputs&gt;
+
+    &lt;properties&gt;
+        &lt;property name=&quot;blah&quot; value=&quot;blah&quot;/&gt;
+    &lt;/properties&gt;
+
+    &lt;workflow engine=&quot;pig&quot; path=&quot;/falcon/test/apps/pig/table-id.pig&quot;/&gt;
+
+    &lt;retry policy=&quot;periodic&quot; delay=&quot;minutes(10)&quot; attempts=&quot;3&quot;/&gt;
+&lt;/process&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>Pig Script<a name="Pig_Script"></a></h5>
+<div class="source">
+<pre>
+A = load '$input_database.$input_table' using org.apache.hcatalog.pig.HCatLoader();
+B = FILTER A BY $input_filter;
+C = foreach B generate id, value;
+store C into '$output_database.$output_table' USING org.apache.hcatalog.pig.HCatStorer('$output_dataout_partitions');
+
+</pre></div></div>
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    2013-2016
+                        <a href="http://www.apache.org">Apache Software Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4612c3f7/content/0.10/HiveMirroring.html
----------------------------------------------------------------------
diff --git a/content/0.10/HiveMirroring.html b/content/0.10/HiveMirroring.html
new file mode 100644
index 0000000..8df4bf2
--- /dev/null
+++ b/content/0.10/HiveMirroring.html
@@ -0,0 +1,148 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2016-08-08
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20160808" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Hive Mirroring</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( '.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                                                <img src="images/falcon-logo.png"  alt="Apache Falcon" width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Hive Mirroring</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 2016-08-08</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.10</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h2>Hive Mirroring<a name="Hive_Mirroring"></a></h2></div>
+<div class="section">
+<h3>Overview<a name="Overview"></a></h3>
+<p>Falcon provides a feature to replicate Hive metadata and data events from a source cluster to a destination cluster. This is supported for both secure and unsecure clusters through Falcon extensions.</p></div>
+<div class="section">
+<h3>Prerequisites<a name="Prerequisites"></a></h3>
+<p>Following are the prerequisites to use Hive Mirroring:</p>
+<p></p>
+<ul>
+<li><b>Hive 1.2.0+</b></li>
+<li><b>Oozie 4.2.0+</b></li></ul>
+<p><b>Note:</b> Set the following properties in hive-site.xml on both the source and destination Hive clusters to replicate Hive events:</p>
+<div class="source">
+<pre>
+    &lt;property&gt;
+        &lt;name&gt;hive.metastore.event.listeners&lt;/name&gt;
+        &lt;value&gt;org.apache.hive.hcatalog.listener.DbNotificationListener&lt;/value&gt;
+        &lt;description&gt;event listeners that are notified of any metastore changes&lt;/description&gt;
+    &lt;/property&gt;
+
+    &lt;property&gt;
+        &lt;name&gt;hive.metastore.dml.events&lt;/name&gt;
+        &lt;value&gt;true&lt;/value&gt;
+    &lt;/property&gt;
+
+</pre></div></div>
+<div class="section">
+<h3>Use Case<a name="Use_Case"></a></h3>
+<p>Replicate data/metadata of Hive DB &amp; table from source to target cluster</p></div>
+<div class="section">
+<h3>Limitations<a name="Limitations"></a></h3>
+<p>Currently Hive does not support replication events for create database, roles, views, offline tables, direct HDFS writes without registering with the metastore, and database/table name mapping. Hence the Hive mirroring extension cannot be used to replicate the above-mentioned events between warehouses.</p></div>
+<div class="section">
+<h3>Usage<a name="Usage"></a></h3></div>
+<div class="section">
+<h4>Bootstrap<a name="Bootstrap"></a></h4>
+<p>Perform initial bootstrap of Table and Database from source cluster to destination cluster</p>
+<ul>
+<li><b>Database Bootstrap</b></li></ul>For bootstrapping DB replication, the destination DB should first be created. This step is expected, since DB replication definitions can be set up by users only on pre-existing DBs. Second, export all tables in the source DB and import them in the destination DB, as described in Table Bootstrap.
+<p></p>
+<ul>
+<li><b>Table Bootstrap</b></li></ul>For bootstrapping table replication, after having turned on the DbNotificationListener on the source DB, perform an export of the table, distcp the export over to the destination warehouse and do an import over there. Check the <a class="externalLink" href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ImportExport">Hive Export-Import</a> documentation for syntax details and examples. This will set up the destination table so that the events on the source cluster that modify the table will then be replicated. A sketch of these steps follows.
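+<p>A minimal sketch of the table bootstrap steps above; the database name, table name, staging path and NameNode endpoints are illustrative, not prescribed by Falcon:</p>
+<div class="source">
+<pre>
+# On the source cluster: export the table (metadata and data) to a staging directory
+hive -e &quot;EXPORT TABLE sales_db.customer TO '/apps/falcon/staging/customer_export';&quot;
+
+# Copy the export over to the destination warehouse
+hadoop distcp hdfs://source-nn:8020/apps/falcon/staging/customer_export \
+              hdfs://target-nn:8020/apps/falcon/staging/customer_export
+
+# On the destination cluster: import the table
+hive -e &quot;IMPORT TABLE sales_db.customer FROM '/apps/falcon/staging/customer_export';&quot;
+
+</pre></div></div>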
+<div class="section">
+<h4>Setup source and destination clusters<a name="Setup_source_and_destination_clusters"></a></h4>
+<div class="source">
+<pre>
+    $FALCON_HOME/bin/falcon entity -submit -type cluster -file /cluster/definition.xml
+   
+</pre></div></div>
+<div class="section">
+<h4>Hive mirroring extension properties<a name="Hive_mirroring_extension_properties"></a></h4>
+<p>Extension artifacts are expected to be installed on HDFS at the path specified by &quot;extension.store.uri&quot; in the startup properties. The hive-mirroring-properties.json file located at &quot;&lt;extension.store.uri&gt;/hive-mirroring/META/hive-mirroring-properties.json&quot; lists all the required and optional parameters/arguments for scheduling a Hive mirroring job.</p>
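+<p>For instance, assuming &quot;extension.store.uri&quot; points at /apps/falcon/extensions (an illustrative value), the parameter list can be inspected with:</p>
+<div class="source">
+<pre>
+hadoop fs -cat /apps/falcon/extensions/hive-mirroring/META/hive-mirroring-properties.json
+
+</pre></div></div>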
+<div class="section">
+<h4>Submit and schedule Hive mirroring extension<a name="Submit_and_schedule_Hive_mirroring_extension"></a></h4>
+<div class="source">
+<pre>
+    $FALCON_HOME/bin/falcon extension -submitAndSchedule -extensionName hive-mirroring -file /process/definition.xml
+   
+</pre></div>
+<p>Please refer to the <a href="./Falconcli/FalconCLI.html">Falcon CLI</a> and <a href="./Restapi/ResourceList.html">REST API</a> documentation for more details on the usage of the CLI and REST APIs.</p></div>
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    2013-2016
+                        <a href="http://www.apache.org">Apache Software Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>