Posted to commits@falcon.apache.org by pa...@apache.org on 2016/03/10 10:48:32 UTC

[1/6] falcon git commit: Deleting accidental check-in of trunk/release/master

Repository: falcon
Updated Branches:
  refs/heads/asf-site 31b1d7e6a -> 4e4b8457d


http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/InstanceSuspend.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/InstanceSuspend.twiki b/trunk/releases/master/src/site/twiki/restapi/InstanceSuspend.twiki
deleted file mode 100644
index 2ba8663..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/InstanceSuspend.twiki
+++ /dev/null
@@ -1,44 +0,0 @@
----++  POST /api/instance/suspend/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Suspend instances of an entity.
-
----++ Parameters
-   * :entity-type can either be a feed or a process.
-   * :entity-name is name of the entity.
-   * start is the start time of the instance(s) that you want to refer to
-   * end is the end time of the instance(s) that you want to refer to
-   * lifecycle <optional param> can be Eviction/Replication(default) for feed and Execution(default) for process.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Results of the suspend command.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-POST http://localhost:15000/api/instance/suspend/process/SampleProcess?colo=*&start=2012-04-03T07:00Z&end=2014-04-03T07:00Z&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "instances": [
-        {
-            "details": "",
-            "endTime": "2013-10-21T15:15:01-07:00",
-            "startTime": "2013-10-21T15:14:32-07:00",
-            "cluster": "primary-cluster",
-            "logFile": "http:\/\/localhost:11000\/oozie?job=0000070-131021115933395-oozie-rgau-W",
-            "status": "SUCCEEDED",
-            "instance": "2012-04-03T07:00Z"
-        }
-    ],
-    "requestId": "default\/ff07e45b-b6da-4f47-ae96-9182bd8a7e53\n",
-    "message": "default\/SUSPEND\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/MetadataList.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/MetadataList.twiki b/trunk/releases/master/src/site/twiki/restapi/MetadataList.twiki
deleted file mode 100644
index 98abf46..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/MetadataList.twiki
+++ /dev/null
@@ -1,31 +0,0 @@
----++  GET api/metadata/discovery/:type/list
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get all dimensions of specified type.
-
----++ Parameters
-   * :type Valid dimension types are cluster_entity, feed_entity, process_entity, user, colo, tags, groups, pipelines
-   * cluster <optional query param> Show dimensions related to this cluster.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
-
----++ Results
-List of dimensions that match requested type [and cluster].
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/metadata/discovery/process_entity/list?cluster=primary-cluster&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "results": ["sampleIngestProcess","testProcess","anotherProcess"],
-    "totalSize": 3
-}
-</verbatim>
-

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/MetadataRelations.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/MetadataRelations.twiki b/trunk/releases/master/src/site/twiki/restapi/MetadataRelations.twiki
deleted file mode 100644
index b29fd2a..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/MetadataRelations.twiki
+++ /dev/null
@@ -1,46 +0,0 @@
----++  GET api/metadata/discovery/:dimension-type/:dimension-name/relations
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get all relations of a specific dimension.
-
----++ Parameters
-   * :type Valid dimension types are cluster_entity, feed_entity, process_entity, user, colo, tags, groups, pipelines
-   * :name Name of the dimension.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Get all relations of a specific dimension.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/metadata/discovery/process_entity/sample-process/relations?doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "timestamp":"2014-09-09T01:31Z",
-    "userWorkflowEngine":"pig",
-    "name":"sample-process",
-    "type":"PROCESS_ENTITY",
-    "userWorkflowName":"imp-click-join-workflow",
-    "version":"1.0.9",
-    "inVertices":[
-        {"name":"clicks-feed","type":"FEED_ENTITY","label":"input"},
-        {"name":"impression-feed","type":"FEED_ENTITY","label":"input"},
-        {"name":"sample-process\/2014-01-01T01:00Z","type":"PROCESS_INSTANCE","label":"instance-of"}
-    ],
-    "outVertices":[
-        {"name":"Critical","type":"TAGS","label":"classified-as"},
-        {"name":"testPipeline","type":"PIPELINES","label":"pipeline"},
-        {"name":"primary-cluster","type":"CLUSTER_ENTITY","label":"runs-on"},
-        {"name":"imp-click-join2","type":"FEED_ENTITY","label":"output"},
-        {"name":"imp-click-join1","type":"FEED_ENTITY","label":"output"},
-        {"name":"falcon-user","type":"USER","label":"owned-by"}
-    ]
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/ResourceList.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/ResourceList.twiki b/trunk/releases/master/src/site/twiki/restapi/ResourceList.twiki
deleted file mode 100644
index 34c2c6f..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/ResourceList.twiki
+++ /dev/null
@@ -1,93 +0,0 @@
----+ RESTful Resources
-
----++ Resource List
-   * <a href="#REST_Call_on_Entity_Resource">REST Call on Entity Resource</a>
-   * <a href="#REST_Call_on_Feed_and_Process_Instances">REST Call on Feed/Process Instances</a>
-   * <a href="#REST_Call_on_Admin_Resource">REST Call on Admin Resource</a>
-   * <a href="#REST_Call_on_Lineage_Graph">REST Call on Lineage Graph Resource</a>
-   * <a href="#REST_Call_on_Metadata_Resource">REST Call on Metadata Resource</a>
-
----++ Authentication
-
-When security is off (Pseudo/Simple), the authenticated user is the username specified in the user.name query
-parameter. If the user.name parameter is not set, the server may either set the authenticated user to a default web
-user, if there is any, or return an error response.
-
-When security is on (kerberos), authentication is performed by Kerberos SPNEGO.
-
-Below are examples using the curl command tool.
-
-Authentication when security is off (Pseudo/Simple):
-<verbatim>
-curl -i "http://<HOST>:<PORT>/<PATH>?[user.name=<USER>&]<PARAM>=..."
-</verbatim>
-
-Authentication using Kerberos SPNEGO when security is on:
-<verbatim>
-curl -i --negotiate -u : "http://<HOST>:<PORT>/<PATH>?<PARAM>=..."
-</verbatim>
-
-See also: [[../Security.twiki][Security in Falcon]]
-
-The current version of the REST API documentation is also hosted on the Falcon Server and Prism Server (in distributed mode) at http://<HOST>:<PORT>/docs
-
----++ REST Call on Admin Resource
-
-| *Call Type* | *Resource*                                     | *Description*                               |
-| GET         | [[AdminStack][api/admin/stack]]                | Get stack of the server                     |
-| GET         | [[AdminVersion][api/admin/version]]            | Get version of the server                   |
-| GET         | [[AdminConfig][api/admin/config/:config-type]] | Get configuration information of the server |
-
----++ REST Call on Entity Resource
-
-| *Call Type* | *Resource*                                                                  | *Description*                      |
-| POST        | [[EntityValidate][api/entities/validate/:entity-type]]                      | Validate the entity                |
-| POST        | [[EntitySubmit][api/entities/submit/:entity-type]]                          | Submit the entity                  |
-| POST        | [[EntityUpdate][api/entities/update/:entity-type/:entity-name]]             | Update the entity                  |
-| POST        | [[EntitySubmitAndSchedule][api/entities/submitAndSchedule/:entity-type]]    | Submit & Schedule the entity       |
-| POST        | [[EntitySchedule][api/entities/schedule/:entity-type/:entity-name]]         | Schedule the entity                |
-| POST        | [[EntitySuspend][api/entities/suspend/:entity-type/:entity-name]]           | Suspend the entity                 |
-| POST        | [[EntityResume][api/entities/resume/:entity-type/:entity-name]]             | Resume the entity                  |
-| DELETE      | [[EntityDelete][api/entities/delete/:entity-type/:entity-name]]             | Delete the entity                  |
-| GET         | [[EntityStatus][api/entities/status/:entity-type/:entity-name]]             | Get the status of the entity       |
-| GET         | [[EntityDefinition][api/entities/definition/:entity-type/:entity-name]]     | Get the definition of the entity   |
-| GET         | [[EntityList][api/entities/list/:entity-type]]                              | Get the list of entities           |
-| GET         | [[EntitySummary][api/entities/summary/:entity-type/:cluster]]               | Get instance summary of all entities |
-| GET         | [[EntityDependencies][api/entities/dependencies/:entity-type/:entity-name]] | Get the dependencies of the entity |
-| GET         | [[FeedSLA][api/entities/sla-alert/:entity-type]]                            | Get pending feed instances which missed sla |
-| GET         | [[FeedLookup][api/entities/lookup/feed/]]                                   | Get feed for given path            |
-
----++ REST Call on Feed and Process Instances
-
-| *Call Type* | *Resource*                                                                  | *Description*                |
-| GET         | [[InstanceRunning][api/instance/running/:entity-type/:entity-name]]         | List of running instances.   |
-| GET         | [[InstanceParams][api/instance/params/:entity-type/:entity-name]]           | List of entity instances along with their workflow params.   |
-| GET         | [[InstanceList][api/instance/list/:entity-type/:entity-name]]               | List of instances   |
-| GET         | [[InstanceStatus][api/instance/status/:entity-type/:entity-name]]           | Status of a given instance   |
-| POST        | [[InstanceKill][api/instance/kill/:entity-type/:entity-name]]               | Kill a given instance        |
-| POST        | [[InstanceSuspend][api/instance/suspend/:entity-type/:entity-name]]         | Suspend a running instance   |
-| POST        | [[InstanceResume][api/instance/resume/:entity-type/:entity-name]]           | Resume a given instance      |
-| POST        | [[InstanceRerun][api/instance/rerun/:entity-type/:entity-name]]             | Rerun a given instance       |
-| GET         | [[InstanceLogs][api/instance/logs/:entity-type/:entity-name]]               | Get logs of a given instance |
| GET         | [[Triage][api/instance/triage/:entity-type/:entity-name]]                   | Triage an instance to see its stuck lineage |
-| GET         | [[InstanceSummary][api/instance/summary/:entity-type/:entity-name]]         | Return summary of instances for an entity |
-| GET         | [[InstanceDependency][api/instance/dependencies/:entity-type/:entity-name]] | Return dependent instances for a given instance |
-
----++ REST Call on Metadata Lineage Resource
-
-| *Call Type* | *Resource*                                                                             | *Description*                                                                 |
-| GET         | [[Graph][api/metadata/lineage/serialize]]                                              | dump the graph                                                                |
-| GET         | [[AllVertices][api/metadata/lineage/vertices/all]]                                     | get all vertices                                                              |
-| GET         | [[Vertices][api/metadata/lineage/vertices?key=:key&value=:value]]                      | get all vertices for a key index                                              |
-| GET         | [[Vertex][api/metadata/lineage/vertices/:id]]                                          | get the vertex with the specified id                                          |
-| GET         | [[VertexProperties][api/metadata/lineage/vertices/properties/:id?relationships=:true]] | get the properties of the vertex with the specified id                        |
-| GET         | [[AdjacentVertices][api/metadata/lineage/vertices/:id/:direction]]                     | get the adjacent vertices or edges of the vertex with the specified direction |
-| GET         | [[AllEdges][api/metadata/lineage/edges/all]]                                           | get all edges                                                                 |
-| GET         | [[Edge][api/metadata/lineage/edges/:id]]                                               | get the edge with the specified id                                            |
-| GET         | [[EntityLineage][api/metadata/lineage/entities?pipeline=:name]]                        | Get lineage graph for processes and feeds in the specified pipeline           |
-
----++ REST Call on Metadata Discovery Resource
-
-| *Call Type* | *Resource*                                                                                     | *Description*                                                                 |
-| GET         | [[MetadataList][api/metadata/discovery/:dimension-type/list]]                                  | list of dimensions  |
| GET         | [[MetadataRelations][api/metadata/discovery/:dimension-type/:dimension-name/relations]]       | Return all relations of a dimension |

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/Triage.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/Triage.twiki b/trunk/releases/master/src/site/twiki/restapi/Triage.twiki
deleted file mode 100644
index 9ff95c8..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/Triage.twiki
+++ /dev/null
@@ -1,45 +0,0 @@
----++  GET api/instance/triage/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Given a feed/process instance, this command traces its ancestors to find which of them have failed. It is useful when a
-lot of instances are failing in a pipeline, as it then finds the root cause of the pipeline being stuck.
-
-
----++ Parameters
-   * :entity-type type of entity(feed/process).
-   * :entity-name name of the feed/process.
-   * :start instance time of the entity instance.
-   * :colo <optional param> name of the colo on which you want to triage
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-It returns a JSON graph.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/instance/triage/feed/my-feed?start=2015-03-02T00:00Z&colo=local&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "vertices": ["(FEED) my-feed (2015-03-02T00:00Z) [Unavailable]", "(PROCESS) producer-process (2015-03-01T10:00Z) [TIMEDOUT]", "(FEED) input-feed-for-producer (2015-03-01T00:00Z) [Available]"],
-    "edges":
-    [
-        {
-         "from"  : "(PROCESS) producer-process (2015-03-01T10:00Z) [TIMEDOUT]",
-         "to"    : "(FEED) my-feed (2015-03-02T00:00Z) [Unavailable]",
-         "label" : "produces"
-        },
-        {
-         "from"  : "(FEED) input-feed-for-producer (2015-03-01T00:00Z) [Available]",
-         "to"    : "(PROCESS) producer-process (2015-03-01T10:00Z) [TIMEDOUT]",
-         "label" : "consumed by"
-        }
-    ]
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/Vertex.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/Vertex.twiki b/trunk/releases/master/src/site/twiki/restapi/Vertex.twiki
deleted file mode 100644
index 82f5bfb..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/Vertex.twiki
+++ /dev/null
@@ -1,36 +0,0 @@
----++  GET api/metadata/lineage/vertices/:id
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Gets the vertex with specified id.
-
----++ Parameters
-   * :id is the unique id of the vertex.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Vertex with the specified id.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/metadata/lineage/vertices/4?doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "results": [
-        {
-            "timestamp":"2014-04-21T20:55Z",
-            "name":"sampleIngestProcess",
-            "type":"process-instance",
-            "version":"2.0.0",
-            "_id":4,
-            "_type":"vertex"
-        }
-    ]
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/VertexProperties.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/VertexProperties.twiki b/trunk/releases/master/src/site/twiki/restapi/VertexProperties.twiki
deleted file mode 100644
index 11c64b5..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/VertexProperties.twiki
+++ /dev/null
@@ -1,34 +0,0 @@
----++  GET api/metadata/lineage/vertices/properties/:id?relationships=:true
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Gets the properties of the vertex with specified id.
-
----++ Parameters
-   * :id is the unique id of the vertex.
-   * :relationships has a default value of false. Pass true if relationships should be fetched.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
- Properties associated with the specified vertex.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/metadata/lineage/vertices/properties/40004?relationships=true&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "results":
-        {
-            "timestamp":"2014-04-25T22:20Z",
-            "name":"local",
-            "type":"cluster-entity"
-        },
-    "totalSize":3
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/Vertices.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/Vertices.twiki b/trunk/releases/master/src/site/twiki/restapi/Vertices.twiki
deleted file mode 100644
index 643e6e9..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/Vertices.twiki
+++ /dev/null
@@ -1,38 +0,0 @@
----++  GET api/metadata/lineage/vertices?key=:key&value=:value
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get all vertices for a key index given the specified value.
-
----++ Parameters
-   * :key is the key to be matched.
-   * :value is the associated value of the key.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-All vertices matching given property key and a value.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/metadata/lineage/vertices?key=name&value=sampleIngestProcess&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "results": [
-        {
-            "timestamp":"2014-04-21T20:55Z",
-            "name":"sampleIngestProcess",
-            "type":"process-instance",
-            "version":"2.0.0",
-            "_id":4,
-            "_type":"vertex"
-        }
-    ],
-    "totalSize": 1
-}
-</verbatim>
\ No newline at end of file


[4/6] falcon git commit: Deleting accidental check-in of trunk/release/master

Posted by pa...@apache.org.
http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/FalconEmailNotification.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/FalconEmailNotification.twiki b/trunk/releases/master/src/site/twiki/FalconEmailNotification.twiki
deleted file mode 100644
index 25abdd2..0000000
--- a/trunk/releases/master/src/site/twiki/FalconEmailNotification.twiki
+++ /dev/null
@@ -1,29 +0,0 @@
----++Falcon Email Notification
-
-Falcon Email notification allows sending email notifications when scheduled feed/process instances complete.
-Email notification in feed/process entity can be defined as follows:
-<verbatim>
-<process name="[process name]">
-    ...
-    <notification type="email" to="bob@xyz.com,tom@xyz.com"/>
-    ...
-</process>
-</verbatim>
-
-   *  *type*    - specifies the type of notification. *Note:* Currently only the "email" notification type is supported.
-   *  *to*  - specifies the address to send notifications to; multiple recipients may be provided as a comma-separated list.
-
-
-Falcon email notification requires some SMTP server configuration to be defined in startup.properties. The following values
-are looked up (a sample snippet follows the list):
-   * *falcon.email.smtp.host*   - The host where the email action may find the SMTP server (localhost by default).
-   * *falcon.email.smtp.port*   - The port to connect to for the SMTP server (25 by default).
-   * *falcon.email.from.address*    - The from address to be used for mailing all emails (falcon@localhost by default).
-   * *falcon.email.smtp.auth*   - Boolean property that specifies whether authentication is to be done (false by default).
-   * *falcon.email.smtp.user*   - If authentication is enabled, the username to login as (empty by default).
-   * *falcon.email.smtp.password*   - If authentication is enabled, the username's password (empty by default).
-
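-For illustration, a minimal startup.properties snippet using the properties above might look like the following; the host name and from-address are placeholder values, not taken from this document:
-<verbatim>
-falcon.email.smtp.host=smtp.example.com
-falcon.email.smtp.port=25
-falcon.email.from.address=falcon@example.com
-falcon.email.smtp.auth=false
-</verbatim>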
-
-
-Also ensure that the email notification plugin is enabled in startup.properties to send email notifications:
-   * *monitoring.plugins*   - org.apache.falcon.plugin.EmailNotificationPlugin,org.apache.falcon.plugin.DefaultMonitoringPlugin
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/FalconNativeScheduler.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/FalconNativeScheduler.twiki b/trunk/releases/master/src/site/twiki/FalconNativeScheduler.twiki
deleted file mode 100644
index 9ffc5e9..0000000
--- a/trunk/releases/master/src/site/twiki/FalconNativeScheduler.twiki
+++ /dev/null
@@ -1,213 +0,0 @@
----+ Falcon Native Scheduler
-
----++ Overview
-Falcon has been using Oozie as its scheduling engine.  While the use of Oozie works reasonably well, there are scenarios where Oozie scheduling is proving to be a limiting factor. In its current form, Falcon relies on Oozie for both scheduling and workflow execution, which limits scheduling to time-based/cron-based scheduling with additional gating conditions on data availability. It also requires datasets to be periodic in nature. In order to offer better scheduling capabilities, Falcon comes with its own native scheduler.
-
----++ Capabilities
-The native scheduler will offer the capabilities offered by Oozie co-ordinator and more. The native scheduler will be built and released over the next few releases of Falcon giving users an opportunity to use it and provide feedback.
-
-Currently, the native scheduler offers the following capabilities:
-   1. Submit and schedule a Falcon process that runs periodically (without data dependency) - It could be a Pig script, an Oozie workflow or Hive (all the engine types currently supported).
-   1. Monitor/Query/Modify the scheduled process - All applicable entity APIs and instance APIs should work as they do now.
-
-*NOTE: Execution order is FIFO. LIFO and LAST_ONLY are not supported yet.*
-
-In the near future, Falcon scheduler will provide feature parity with Oozie scheduler and in subsequent releases will provide the following features:
-   * Periodic, cron-based, calendar-based scheduling.
-   * Data availability based scheduling.
-   * External trigger/notification based scheduling.
-   * Support for periodic/a-periodic datasets.
-   * Support for optional/mandatory datasets. Option to specify minimum/maximum/exactly-N instances of data to consume.
-   * Handle dependencies across entities during re-run.
-
----++ Configuring Native Scheduler
-You can enable native scheduler by making changes to __$FALCON_HOME/conf/startup.properties__ as follows. You will need to restart Falcon Server for the changes to take effect.
-<verbatim>
-*.dag.engine.impl=org.apache.falcon.workflow.engine.OozieDAGEngine
-*.application.services=org.apache.falcon.security.AuthenticationInitializationService,\
-                        org.apache.falcon.workflow.WorkflowJobEndNotificationService, \
-                        org.apache.falcon.service.ProcessSubscriberService,\
-                        org.apache.falcon.service.FeedSLAMonitoringService,\
-                        org.apache.falcon.service.LifecyclePolicyMap,\
-                        org.apache.falcon.state.store.service.FalconJPAService,\
-                        org.apache.falcon.entity.store.ConfigurationStore,\
-                        org.apache.falcon.rerun.service.RetryService,\
-                        org.apache.falcon.rerun.service.LateRunService,\
-                        org.apache.falcon.metadata.MetadataMappingService,\
-                        org.apache.falcon.service.LogCleanupService,\
-                        org.apache.falcon.service.GroupsService,\
-                        org.apache.falcon.service.ProxyUserService,\
-                        org.apache.falcon.notification.service.impl.JobCompletionService,\
-                        org.apache.falcon.notification.service.impl.SchedulerService,\
-                        org.apache.falcon.notification.service.impl.AlarmService,\
-                        org.apache.falcon.notification.service.impl.DataAvailabilityService,\
-                        org.apache.falcon.execution.FalconExecutionService
-</verbatim>
-
----+++ Making the Native Scheduler the default scheduler
-To ensure backward compatibility, even when the native scheduler is enabled, the default scheduler is still Oozie. This means users will be scheduling entities on the Oozie scheduler by default. They will need to explicitly specify the scheduler as native if they wish to schedule entities using the native scheduler.
-
-<a href="#Scheduling_new_entities_on_Native_Scheduler">This section</a> has more details on how to schedule on either of the schedulers. 
-
-If you wish to make the Falcon Native Scheduler your default scheduler and remove Oozie as the scheduler, set the following property in __$FALCON_HOME/conf/startup.properties__
-<verbatim>
-## If you wish to use Falcon native scheduler as your default scheduler, set the workflow engine to FalconWorkflowEngine instead of OozieWorkflowEngine. ##
-*.workflow.engine.impl=org.apache.falcon.workflow.engine.FalconWorkflowEngine
-</verbatim>
-
----+++ Configuring the state store for Native Scheduler
-You can configure the state store by making changes to __$FALCON_HOME/conf/statestore.properties__ as follows. You will need to restart the Falcon Server for the changes to take effect.
-
-The Falcon Server needs to maintain the state of entities and instances in a persistent store for the system to be recoverable. Since Prism only federates, it does not need to maintain any state information. The following properties need to be set in statestore.properties of Falcon Servers:
-<verbatim>
-######### StateStore Properties #####
-*.falcon.state.store.impl=org.apache.falcon.state.store.jdbc.JDBCStateStore
-*.falcon.statestore.jdbc.driver=org.apache.derby.jdbc.EmbeddedDriver
-*.falcon.statestore.jdbc.url=jdbc:derby:data/falcon.db
-# StateStore credentials file where username,password and other properties can be stored securely.
-# Set this credentials file permission 400 and make sure user who starts falcon should only have read permission.
-# Give Absolute path to credentials file along with file name or put in classpath with file name statestore.credentials.
-# Credentials file should be present either in given location or class path, otherwise falcon won't start.
-*.falcon.statestore.credentials.file=
-*.falcon.statestore.jdbc.username=sa
-*.falcon.statestore.jdbc.password=
-*.falcon.statestore.connection.data.source=org.apache.commons.dbcp.BasicDataSource
-# Maximum number of active connections that can be allocated from this pool at the same time.
-*.falcon.statestore.pool.max.active.conn=10
-*.falcon.statestore.connection.properties=
-# Indicates the interval (in milliseconds) between eviction runs.
-*.falcon.statestore.validate.db.connection.eviction.interval=300000
-## The number of objects to examine during each run of the idle object evictor thread.
-*.falcon.statestore.validate.db.connection.eviction.num=10
-## Creates Falcon DB.
-## If set to true, it creates the DB schema if it does not exist. If the DB schema exists is a NOP.
-## If set to false, it does not create the DB schema. If the DB schema does not exist it fails start up.
-*.falcon.statestore.create.db.schema=true
-</verbatim> 
-
-The _*.falcon.statestore.jdbc.url_ property in statestore.properties determines the DB and data location. All other properties are common across RDBMS.
-
-*NOTE : Although multiple Falcon Servers can share a DB (not applicable for Derby DB), it is recommended that you have different DBs for different Falcon Servers for better performance.*
-
-You will need to create the state DB and tables before starting the Falcon Server. A tool for creating the tables comes bundled with the Falcon installation. You can use the _falcon-db.sh_ script to create tables in the DB. The script needs to be run only for Falcon Servers and can be run by any user that has execute permission on the script. The script picks up the DB connection details from __$FALCON_HOME/conf/statestore.properties__. Ensure that you have granted the right privileges to the user mentioned in _statestore.properties_, so the tables can be created.
-
-You can use the help command to get details on the sub-commands supported:
-<verbatim>
-./bin/falcon-db.sh help
-Hadoop home is set, adding libraries from '/Users/pallavi.rao/falcon/hadoop-2.6.0/bin/hadoop classpath' into falcon classpath
-usage: 
-      Falcon DB initialization tool currently supports Derby DB/ Mysql
-
-      falcondb help : Display usage for all commands or specified command
-
-      falcondb version : Show Falcon DB version information
-
-      falcondb create <OPTIONS> : Create Falcon DB schema
-                      -run             Confirmation option regarding DB schema creation/upgrade
-                      -sqlfile <arg>   Generate SQL script instead of creating/upgrading the DB
-                                       schema
-
-      falcondb upgrade <OPTIONS> : Upgrade Falcon DB schema
-                       -run             Confirmation option regarding DB schema creation/upgrade
-                       -sqlfile <arg>   Generate SQL script instead of creating/upgrading the DB
-                                        schema
-
-</verbatim>
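-For example, based on the usage above, the schema could be created in place (rather than generating a SQL script) with the _create_ sub-command; this is a sketch and assumes the connection details in statestore.properties are already in place:
-<verbatim>
-./bin/falcon-db.sh create -run
-</verbatim>
-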
-Currently, MySQL, PostgreSQL and Derby are supported as state stores. We may extend support to other DBs in the future. Falcon has been tested against MySQL v5.5 and PostgreSQL v9.5. If you are using MySQL, ensure you also copy mysql-connector-java-<version>.jar under __$FALCON_HOME/server/webapp/falcon/WEB-INF/lib__ and __$FALCON_HOME/client/lib__.
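-For example, the copy could be done as follows; the jar name keeps the <version> placeholder and the exact connector version depends on your environment:
-<verbatim>
-cp mysql-connector-java-<version>.jar $FALCON_HOME/server/webapp/falcon/WEB-INF/lib/
-cp mysql-connector-java-<version>.jar $FALCON_HOME/client/lib/
-</verbatim>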
-
----++++ Using Derby as the State Store
-Using Derby is ideal for QA and staging setup. Falcon comes bundled with a Derby connector and no explicit setup is required (although you can set it up) in terms of creating the DB or tables.
-For example,
- <verbatim> *.falcon.statestore.jdbc.url=jdbc:derby:data/falcon.db;create=true </verbatim>
-
- tells Falcon to use the Derby JDBC connector, with the data directory $FALCON_HOME/data/ and DB name 'falcon'. If _create=true_ is specified, you will not need to create a DB up front; a database will be created if it does not exist.
-
----++++ Using MySQL as the State Store
-The jdbc.url property in statestore.properties determines the DB and data location.
-For example,
- <verbatim> *.falcon.statestore.jdbc.url=jdbc:mysql://localhost:3306/falcon </verbatim>
-
- tells Falcon to use the MySQL JDBC connector, accessible at localhost:3306, with DB name 'falcon'.
-
----++ Scheduling new entities on Native Scheduler
-To schedule an entity (currently only process is supported) using the native scheduler, you need to specify the scheduler in the schedule command as shown below:
-<verbatim>
-$FALCON_HOME/bin/falcon entity -type process -name <process name> -schedule -properties falcon.scheduler:native
-</verbatim>
-
-If Oozie is configured as the default scheduler, you can skip the scheduler option or explicitly set it to _oozie_, as shown below:
-<verbatim>
-$FALCON_HOME/bin/falcon entity -type process -name <process name> -schedule
-OR
-$FALCON_HOME/bin/falcon entity -type process -name <process name> -schedule -properties falcon.scheduler:oozie
-</verbatim>
-
-If the native scheduler is configured as the default scheduler, then, you can omit the scheduler option, as shown below:
-<verbatim>
-$FALCON_HOME/bin/falcon entity -type process -name <process name> -schedule 
-</verbatim>
-
----++ Migrating entities from Oozie Scheduler to Native Scheduler
-Currently, users will have to delete and re-create entities in order to move across schedulers. Attempting to schedule an already scheduled entity on a different scheduler will result in an error. Note that the history of instances prior to scheduling on the native scheduler will not be available via the instance APIs. However, users can retrieve that information using the metadata APIs. The native scheduler must be enabled before migrating entities to it.
-
-<a href="#Configuring_Native_Scheduler">Configuring Native Scheduler</a> has more details on how to enable native scheduler.
-
----+++ Migrating from Oozie to Native Scheduler
-   * Delete the entity (process). 
-<verbatim>$FALCON_HOME/bin/falcon entity -type process -name <process name> -delete </verbatim>
-   * Submit the entity (process) with start time from where the Oozie scheduler left off. 
-<verbatim>$FALCON_HOME/bin/falcon entity -type process -submit <path to process xml> </verbatim>
-   * Schedule the entity on native scheduler. 
-<verbatim> $FALCON_HOME/bin/falcon entity -type process -name <process name> -schedule -properties falcon.scheduler:native </verbatim>
-
----+++ Reverting to Oozie from Native Scheduler
-   * Delete the entity (process). 
-<verbatim>$FALCON_HOME/bin/falcon entity -type process -name <process name> -delete </verbatim>
-   * Submit the entity (process) with start time from where the Native scheduler left off. 
-<verbatim>$FALCON_HOME/bin/falcon entity -type process -submit <path to process xml> </verbatim>
-   * Schedule the entity on the default scheduler (Oozie).
- <verbatim> $FALCON_HOME/bin/falcon entity -type process -name <process name> -schedule </verbatim>
-
----+++ Differences in API responses between Oozie and Native Scheduler
-Most API responses are similar whether the entity is scheduled via Oozie or via Native scheduler. However, there are a few exceptions and those are listed below.
----++++ Rerun API
-When a user performs a rerun using Oozie scheduler, Falcon directly reruns the workflow on Oozie and the instance will be moved to 'RUNNING'.
-
-Example response:
-<verbatim>
-$ falcon instance -rerun processMerlinOozie -start 2016-01-08T12:13Z -end 2016-01-08T12:15Z
-Consolidated Status: SUCCEEDED
-
-Instances:
-Instance		Cluster		SourceCluster		Status		Start		End		Details					Log
------------------------------------------------------------------------------------------------
-2016-01-08T12:13Z	ProcessMultipleClustersTest-corp-9706f068	-	RUNNING	2016-01-08T13:03Z	2016-01-08T13:03Z	-	http://8RPCG32.corp.inmobi.com:11000/oozie?job=0001811-160104160825636-oozie-oozi-W
-2016-01-08T12:13Z	ProcessMultipleClustersTest-corp-0b270a1d	-	RUNNING	2016-01-08T13:03Z	2016-01-08T13:03Z	-	http://lda01:11000/oozie?job=0002247-160104115615658-oozie-oozi-W
-
-Additional Information:
-Response: ua1/RERUN
-ua2/RERUN
-Request Id: ua1/871377866@qtp-630572412-35 - 7190c4c8-bacb-4639-8d48-c9e639f544da
-ua2/1554129706@qtp-536122141-13 - bc18127b-1bf8-4ea1-99e6-b1f10ba3a441
-</verbatim>
-
-However, when a user performs a rerun on the native scheduler, the instance is scheduled again. This is done intentionally so as not to violate the limit on the number of instances running in parallel. Hence, the user will see the status of the instance as 'READY'.
-
-Example response:
-<verbatim>
-$ falcon instance -rerun ProcessMultipleClustersTest-agregator-coord16-8f55f59b -start 2016-01-08T12:13Z -end 2016-01-08T12:15Z
-Consolidated Status: SUCCEEDED
-
-Instances:
-Instance		Cluster		SourceCluster		Status		Start		End		Details					Log
------------------------------------------------------------------------------------------------
-2016-01-08T12:13Z	ProcessMultipleClustersTest-corp-9706f068	-	READY	2016-01-08T13:03Z	2016-01-08T13:03Z	-	http://8RPCG32.corp.inmobi.com:11000/oozie?job=0001812-160104160825636-oozie-oozi-W
-
-2016-01-08T12:13Z	ProcessMultipleClustersTest-corp-0b270a1d	-	READY	2016-01-08T13:03Z	2016-01-08T13:03Z	-	http://lda01:11000/oozie?job=0002248-160104115615658-oozie-oozi-W
-
-Additional Information:
-Response: ua1/RERUN
-ua2/RERUN
-Request Id: ua1/871377866@qtp-630572412-35 - 8d118d4d-c0ef-4335-a9af-10364498ec4f
-ua2/1554129706@qtp-536122141-13 - c2a3fc50-8b05-47ce-9c85-ca432b96d923
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/HDFSDR.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/HDFSDR.twiki b/trunk/releases/master/src/site/twiki/HDFSDR.twiki
deleted file mode 100644
index 1c1e3f5..0000000
--- a/trunk/releases/master/src/site/twiki/HDFSDR.twiki
+++ /dev/null
@@ -1,34 +0,0 @@
----+ HDFS DR Recipe
----++ Overview
-Falcon supports an HDFS DR recipe to replicate data from a source cluster to a destination cluster.
-
----++ Usage
----+++ Setup cluster definition.
-   <verbatim>
-    $FALCON_HOME/bin/falcon entity -submit -type cluster -file /cluster/definition.xml
-   </verbatim>
-
----+++ Update recipes properties
-   Copy the HDFS replication recipe properties, workflow and template files from $FALCON_HOME/data-mirroring/hdfs-replication to an accessible
-   directory path or to the recipe directory path (*falcon.recipe.path=<recipe directory path>*). *"falcon.recipe.path"* must be specified
-   in the Falcon client.properties (an illustrative entry is shown below). Then update the copied recipe properties file with the attributes
-   required to replicate data from the source cluster to the destination cluster for HDFS DR.
-
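-   As an illustration, the client.properties entry could look like the following; the directory path is a placeholder, not a path mandated by Falcon:
-   <verbatim>
-    falcon.recipe.path=/apps/falcon/recipes/hdfs-replication
-   </verbatim>
-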
----+++ Submit HDFS DR recipe
-
-   After updating the recipe properties file with required attributes in directory path or in falcon.recipe.path,
-   there are two ways of submitting the HDFS DR recipe:
-
-   * 1. Specify Falcon recipe properties file through recipe command line.
-   <verbatim>
-    $FALCON_HOME/bin/falcon recipe -name hdfs-replication -operation HDFS_REPLICATION
-    -properties /cluster/hdfs-replication.properties
-   </verbatim>
-
-   * 2. Use Falcon recipe path specified in Falcon conf client.properties .
-   <verbatim>
-    $FALCON_HOME/bin/falcon recipe -name hdfs-replication -operation HDFS_REPLICATION
-   </verbatim>
-
-
-*Note:* The recipe properties file, workflow file and template file names must match the recipe name; the name must be unique, and the files must be in the same directory.

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/HiveDR.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/HiveDR.twiki b/trunk/releases/master/src/site/twiki/HiveDR.twiki
deleted file mode 100644
index a8f6aee..0000000
--- a/trunk/releases/master/src/site/twiki/HiveDR.twiki
+++ /dev/null
@@ -1,74 +0,0 @@
----+Hive Disaster Recovery
-
-
----++Overview
-Falcon provides a feature to replicate Hive metadata and data events from a source cluster
-to a destination cluster. This is supported for both secure and unsecure clusters through Falcon recipes.
-
-
----++Prerequisites
-The following are the prerequisites to use Hive DR:
-
-   * *Hive 1.2.0+*
-   * *Oozie 4.2.0+*
-
-*Note:* Set following properties in hive-site.xml for replicating the Hive events on source and destination Hive cluster:
-<verbatim>
-    <property>
-        <name>hive.metastore.event.listeners</name>
-        <value>org.apache.hive.hcatalog.listener.DbNotificationListener</value>
-        <description>event listeners that are notified of any metastore changes</description>
-    </property>
-
-    <property>
-        <name>hive.metastore.dml.events</name>
-        <value>true</value>
-    </property>
-</verbatim>
-
----++ Usage
----+++ Bootstrap
-   Perform initial bootstrap of Table and Database from source cluster to destination cluster
-   * *Database Bootstrap*
-     For bootstrapping DB replication, the destination DB should be created first. This step is expected,
-     since DB replication definitions can be set up by users only on pre-existing DBs. Second, export all tables in
-     the source DB and import them into the destination DB, as described in Table Bootstrap.
-
-   * *Table Bootstrap*
-     For bootstrapping table replication, after having turned on the !DbNotificationListener
-     on the source DB, perform an export of the table, distcp the export over to the destination
-     warehouse and do an import over there (a sketch of this flow is shown after this list). Check the following [[https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ImportExport][Hive Export-Import]] for syntax details
-     and examples.
-     This will set up the destination table so that the events on the source cluster that modify the table
-     will then be replicated.
-
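-A minimal sketch of the table bootstrap flow described above; the table name sales_db.orders, the staging path and the NameNode addresses are hypothetical, and the authoritative syntax is in the Hive Export-Import wiki linked above:
-<verbatim>
-# On the source cluster: export the table to a staging location
-hive -e "EXPORT TABLE sales_db.orders TO '/apps/falcon/staging/orders_export'"
-
-# Copy the exported data to the destination warehouse staging area
-hadoop distcp hdfs://source-nn:8020/apps/falcon/staging/orders_export hdfs://dest-nn:8020/apps/falcon/staging/orders_export
-
-# On the destination cluster: import the table
-hive -e "IMPORT TABLE sales_db.orders FROM '/apps/falcon/staging/orders_export'"
-</verbatim>
-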
----+++ Setup cluster definition
-   <verbatim>
-    $FALCON_HOME/bin/falcon entity -submit -type cluster -file /cluster/definition.xml
-   </verbatim>
-
----+++ Update recipes properties
-   Copy the Hive DR recipe properties, workflow and template files from $FALCON_HOME/data-mirroring/hive-disaster-recovery to an accessible
-   directory path or to the recipe directory path (*falcon.recipe.path=<recipe directory path>*). *"falcon.recipe.path"* must be specified
-   in the Falcon client.properties. Then update the copied recipe properties file with the attributes required to replicate metadata and data
-   from the source cluster to the destination cluster for Hive DR.
-
----+++ Submit Hive DR recipe
-   After updating the recipe properties file with required attributes in directory path or in falcon.recipe.path,
-   there are two ways of submitting the Hive DR recipe:
-
-   * 1. Specify Falcon recipe properties file through recipe command line.
-   <verbatim>
-       $FALCON_HOME/bin/falcon recipe -name hive-disaster-recovery -operation HIVE_DISASTER_RECOVERY
-       -properties /cluster/hive-disaster-recovery.properties
-   </verbatim>
-
-   * 2. Use Falcon recipe path specified in Falcon conf client.properties .
-   <verbatim>
-       $FALCON_HOME/bin/falcon recipe -name hive-disaster-recovery -operation HIVE_DISASTER_RECOVERY
-   </verbatim>
-
-
-*Note:*
-   * The recipe properties file, workflow file and template file names must match the recipe name; the name must be unique, and the files must be in the same directory.
-   * If Kerberos security is enabled on the cluster, use the secure templates for Hive DR from $FALCON_HOME/data-mirroring/hive-disaster-recovery .

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/HiveIntegration.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/HiveIntegration.twiki b/trunk/releases/master/src/site/twiki/HiveIntegration.twiki
deleted file mode 100644
index 688305d..0000000
--- a/trunk/releases/master/src/site/twiki/HiveIntegration.twiki
+++ /dev/null
@@ -1,372 +0,0 @@
----+ Hive Integration
-
----++ Overview
-Falcon provides data management functions for feeds declaratively. It allows users to represent feed locations as
-time-based partition directories on HDFS containing files.
-
-Hive provides a simple and familiar database-like tabular model of data management to its users,
-backed by HDFS. It supports two classes of tables: managed tables and external tables.
-
-Falcon allows users to represent feed locations as Hive tables. Falcon supports both managed and external tables
-and provides data management services for tables, such as replication, eviction, archival, etc. Falcon notifies
-HCatalog as a side effect of acquiring, replicating or evicting a data set instance, and adds the
-missing capability of HCatalog table replication.
-
-In the near future, Falcon will allow users to express pipeline processing in Hive scripts
-apart from Pig and Oozie workflows.
-
-
----++ Assumptions
-   * Date is a mandatory first-level partition for Hive tables
-      * Data availability triggers are based on date pattern in Oozie
-   * Tables must be created in Hive prior to adding them as a feed in Falcon.
-      * Duplicating this in Falcon will create confusion on the real source of truth. Also propagating schema changes
-    between systems is a hard problem.
-   * Falcon does not know about the encoding of the data and data should be in HCatalog supported format.
-
----++ Configuration
-Falcon provides a system level option to enable Hive integration. Falcon must be configured with an implementation
-for the catalog registry. The default implementation for Hive is shipped with Falcon.
-
-<verbatim>
-catalog.service.impl=org.apache.falcon.catalog.HiveCatalogService
-</verbatim>
-
-
----++ Incompatible changes
-Falcon depends heavily on data-availability triggers for scheduling Falcon workflows. Oozie must support
-data-availability triggers based on HCatalog partition availability. This is only available in oozie 4.x.
-
-Hence, Falcon for Hive support needs Oozie 4.x.
-
-
----++ Oozie Shared Library setup
-Falcon post Hive integration depends heavily on the [[http://oozie.apache.org/docs/4.0.1/WorkflowFunctionalSpec.html#a17_HDFS_Share_Libraries_for_Workflow_Applications_since_Oozie_2.3][shared library feature of Oozie]].
-Since the number of jars for HCatalog, Pig and Hive runs into the many tens, it is quite daunting to
-redistribute the dependent jars from Falcon.
-
-[[http://oozie.apache.org/docs/4.0.1/DG_QuickStart.html#Oozie_Share_Lib_Installation][This is a one time effort in Oozie setup and is quite straightforward.]]
-
-
----++ Approach
-
----+++ Entity Changes
-
-   * Cluster DSL will have an additional registry-interface section, specifying the endpoint for the
-HCatalog server. If this is absent, no HCatalog publication will be done from Falcon for this cluster.
-      <verbatim>thrift://hcatalog-server:port</verbatim>
-   * Feed DSL will allow users to specify the URI (location) for HCatalog tables as:
-      <verbatim>catalog:database_name:table_name#partitions(key=value?)*</verbatim>
-   * Failure to publish to HCatalog will be retried (configurable # of retries) with back off. Permanent failures
-   after all the retries are exhausted will fail the Falcon workflow.
-
----+++ Eviction
-
-   * Falcon will construct DDL statements to filter candidate partitions eligible for eviction
-   * Falcon will construct DDL statements to drop the eligible partitions (an illustrative statement is sketched after this list)
-   * Additionally, Falcon will nuke the data on HDFS for external tables
-
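-As an illustration only, a drop statement for the partition layout used in the feed example later in this document might look like the following; the table name and partition value are hypothetical:
-<verbatim>
-ALTER TABLE customer_raw DROP PARTITION (ds='2013-09-24-00');
-</verbatim>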
-
----+++ Replication
-
-   * Falcon will use HCatalog (Hive) API to export the data for a given table and the partition,
-which will result in a data collection that includes metadata on the data's storage format, the schema,
-how the data is sorted, what table the data came from, and values of any partition keys from that table.
-   * Falcon will use the distcp tool to copy the exported data collection into the secondary cluster into a staging
-directory used by Falcon.
-   * Falcon will then import the data into HCatalog (Hive) using the HCatalog (Hive) API. If the specified table does
-not yet exist, Falcon will create it, using the information in the imported metadata to set defaults for the
-table such as schema, storage format, etc.
-   * The partition is not complete and hence not visible to users until all the data is committed on the secondary
-cluster (no dirty reads).
-   * The data collection is staged by Falcon and retries for the copy continue from where they left off.
-   * Failure to register with Hive will be retried. After all the attempts are exhausted,
-the data will be cleaned up by Falcon.
-
-
----+++ Security
-The user owns all data managed by Falcon. Falcon runs as the user who submitted the feed. Falcon will authenticate
-with HCatalog as the end user who owns the entity and the data.
-
-For Hive managed tables, the table may be owned by the end user or “hive”. For “hive” owned tables,
-the user will have to configure the feed as “hive”.
-
-
----++ Load on HCatalog from Falcon
-It generally depends on the frequency of the feeds configured in Falcon and how often data is ingested, replicated,
-or processed.
-
-
----++ User Impact
-   * There should not be any impact to users due to this integration
-   * Falcon will be fully backwards compatible 
-   * Users have a choice to either choose storage based on files on HDFS as they do today or use HCatalog for
-accessing the data in tables
-
-
----++ Known Limitations
-
----+++ Oozie
-
-   * Falcon with Hadoop 1.x requires copying guava jars manually to sharelib in oozie. Hadoop 2.x ships this.
-   * hcatalog-pig-adapter needs to be copied manually to oozie sharelib.
-<verbatim>
-bin/hadoop dfs -copyFromLocal $LFS/share/lib/hcatalog/hcatalog-pig-adapter-0.5.0-incubating.jar share/lib/hcatalog
-</verbatim>
-   * Oozie 4.x with Hadoop-2.x
-Replication jobs are submitted to Oozie on the destination cluster. Oozie runs a table export job
-on the RM on the source cluster. The Oozie server on the target cluster must be configured with the source Hadoop
-configs, else jobs fail on secure and non-secure clusters with errors such as the one below:
-<verbatim>
-org.apache.hadoop.security.token.SecretManager$InvalidToken: Password not found for ApplicationAttempt appattempt_1395965672651_0010_000002
-</verbatim>
-
-Make sure all Oozie servers that Falcon talks to have the Hadoop configs configured in oozie-site.xml
-<verbatim>
-<property>
-      <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
-      <value>*=/etc/hadoop/conf,arpit-new-falcon-1.cs1cloud.internal:8020=/etc/hadoop-1,arpit-new-falcon-1.cs1cloud.internal:8032=/etc/hadoop-1,arpit-new-falcon-2.cs1cloud.internal:8020=/etc/hadoop-2,arpit-new-falcon-2.cs1cloud.internal:8032=/etc/hadoop-2,arpit-new-falcon-5.cs1cloud.internal:8020=/etc/hadoop-3,arpit-new-falcon-5.cs1cloud.internal:8032=/etc/hadoop-3</value>
-      <description>
-          Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the HOST:PORT of
-          the Hadoop service (JobTracker, HDFS). The wildcard '*' configuration is
-          used when there is no exact match for an authority. The HADOOP_CONF_DIR contains
-          the relevant Hadoop *-site.xml files. If the path is relative is looked within
-          the Oozie configuration directory; though the path can be absolute (i.e. to point
-          to Hadoop client conf/ directories in the local filesystem.
-      </description>
-    </property>
-</verbatim>
-
----+++ Hive
-
-   * Dated Partitions
-Falcon does not work well when a table partition contains multiple dated columns. Falcon only works
-with a single dated partition. This is being tracked in FALCON-357, which is a limitation in Oozie.
-<verbatim>
-catalog:default:table4#year=${YEAR};month=${MONTH};day=${DAY};hour=${HOUR};minute=${MINUTE}
-</verbatim>
-
-   * [[https://issues.apache.org/jira/browse/HIVE-5550][Hive table import fails for tables created with default text and sequence file formats using HCatalog API]]
-For some arcane reason, Hive substitutes the output format for text and sequence to be prefixed with Hive.
-Hive table import fails since it compares against the input and output formats of the source table and they are
-different. Say a table was created without specifying the file format; it defaults to:
-<verbatim>
-fileFormat=TextFile, inputformat=org.apache.hadoop.mapred.TextInputFormat, outputformat=org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat
-</verbatim>
-
-But when Hive fetches the table from the metastore, it replaces the output format with org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
-and the comparison between source and target tables fails.
-<verbatim>
-org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer#checkTable
-      // check IF/OF/Serde
-      String existingifc = table.getInputFormatClass().getName();
-      String importedifc = tableDesc.getInputFormat();
-      String existingofc = table.getOutputFormatClass().getName();
-      String importedofc = tableDesc.getOutputFormat();
-      if ((!existingifc.equals(importedifc))
-          || (!existingofc.equals(importedofc))) {
-        throw new SemanticException(
-            ErrorMsg.INCOMPATIBLE_SCHEMA
-                .getMsg(" Table inputformat/outputformats do not match"));
-      }
-</verbatim>
-The above is not an issue with Hive 0.13.
-
----++ Hive Examples
-Following is an example entity configuration for lifecycle management functions for tables in Hive.
-
----+++ Hive Table Lifecycle Management - Replication and Retention
-
----++++ Primary Cluster
-
-<verbatim>
-<?xml version="1.0"?>
-<!--
-    Primary cluster configuration for demo vm
-  -->
-<cluster colo="west-coast" description="Primary Cluster"
-         name="primary-cluster"
-         xmlns="uri:falcon:cluster:0.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
-    <interfaces>
-        <interface type="readonly" endpoint="hftp://localhost:10070"
-                   version="1.1.1" />
-        <interface type="write" endpoint="hdfs://localhost:10020"
-                   version="1.1.1" />
-        <interface type="execute" endpoint="localhost:10300"
-                   version="1.1.1" />
-        <interface type="workflow" endpoint="http://localhost:11010/oozie/"
-                   version="4.0.1" />
-        <interface type="registry" endpoint="thrift://localhost:19083"
-                   version="0.11.0" />
-        <interface type="messaging" endpoint="tcp://localhost:61616?daemon=true"
-                   version="5.4.3" />
-    </interfaces>
-    <locations>
-        <location name="staging" path="/apps/falcon/staging" />
-        <location name="temp" path="/tmp" />
-        <location name="working" path="/apps/falcon/working" />
-    </locations>
-</cluster>
-</verbatim>
-
----++++ BCP Cluster
-
-<verbatim>
-<?xml version="1.0"?>
-<!--
-    BCP cluster configuration for demo vm
-  -->
-<cluster colo="east-coast" description="BCP Cluster"
-         name="bcp-cluster"
-         xmlns="uri:falcon:cluster:0.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
-    <interfaces>
-        <interface type="readonly" endpoint="hftp://localhost:20070"
-                   version="1.1.1" />
-        <interface type="write" endpoint="hdfs://localhost:20020"
-                   version="1.1.1" />
-        <interface type="execute" endpoint="localhost:20300"
-                   version="1.1.1" />
-        <interface type="workflow" endpoint="http://localhost:11020/oozie/"
-                   version="4.0.1" />
-        <interface type="registry" endpoint="thrift://localhost:29083"
-                   version="0.11.0" />
-        <interface type="messaging" endpoint="tcp://localhost:61616?daemon=true"
-                   version="5.4.3" />
-    </interfaces>
-    <locations>
-        <location name="staging" path="/apps/falcon/staging" />
-        <location name="temp" path="/tmp" />
-        <location name="working" path="/apps/falcon/working" />
-    </locations>
-</cluster>
-</verbatim>
-
----++++ Feed with replication and eviction policy
-
-<verbatim>
-<?xml version="1.0"?>
-<!--
-    Replicating Hourly customer table from primary to secondary cluster.
-  -->
-<feed description="Replicating customer table feed" name="customer-table-replicating-feed"
-      xmlns="uri:falcon:feed:0.1">
-    <frequency>hours(1)</frequency>
-    <timezone>UTC</timezone>
-
-    <clusters>
-        <cluster name="primary-cluster" type="source">
-            <validity start="2013-09-24T00:00Z" end="2013-10-26T00:00Z"/>
-            <retention limit="hours(2)" action="delete"/>
-        </cluster>
-        <cluster name="bcp-cluster" type="target">
-            <validity start="2013-09-24T00:00Z" end="2013-10-26T00:00Z"/>
-            <retention limit="days(30)" action="delete"/>
-
-            <table uri="catalog:tgt_demo_db:customer_bcp#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
-        </cluster>
-    </clusters>
-
-    <table uri="catalog:src_demo_db:customer_raw#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
-
-    <ACL owner="seetharam" group="users" permission="0755"/>
-    <schema location="" provider="hcatalog"/>
-</feed>
-</verbatim>
-
-
----+++ Hive Table used in Processing Pipelines
-
----++++ Primary Cluster
-The cluster definition from the lifecycle example can be used.
-
----++++ Input Feed
-
-<verbatim>
-<?xml version="1.0"?>
-<feed description="clicks log table " name="input-table" xmlns="uri:falcon:feed:0.1">
-    <groups>online,bi</groups>
-    <frequency>hours(1)</frequency>
-    <timezone>UTC</timezone>
-
-    <clusters>
-        <cluster name="##cluster##" type="source">
-            <validity start="2010-01-01T00:00Z" end="2012-04-21T00:00Z"/>
-            <retention limit="hours(24)" action="delete"/>
-        </cluster>
-    </clusters>
-
-    <table uri="catalog:falcon_db:input_table#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
-
-    <ACL owner="testuser" group="group" permission="0x755"/>
-    <schema location="/schema/clicks" provider="protobuf"/>
-</feed>
-</verbatim>
-
-
----++++ Output Feed
-
-<verbatim>
-<?xml version="1.0"?>
-<feed description="clicks log identity table" name="output-table" xmlns="uri:falcon:feed:0.1">
-    <groups>online,bi</groups>
-    <frequency>hours(1)</frequency>
-    <timezone>UTC</timezone>
-
-    <clusters>
-        <cluster name="##cluster##" type="source">
-            <validity start="2010-01-01T00:00Z" end="2012-04-21T00:00Z"/>
-            <retention limit="hours(24)" action="delete"/>
-        </cluster>
-    </clusters>
-
-    <table uri="catalog:falcon_db:output_table#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
-
-    <ACL owner="testuser" group="group" permission="0x755"/>
-    <schema location="/schema/clicks" provider="protobuf"/>
-</feed>
-</verbatim>
-
-
----++++ Process
-
-<verbatim>
-<?xml version="1.0"?>
-<process name="##processName##" xmlns="uri:falcon:process:0.1">
-    <clusters>
-        <cluster name="##cluster##">
-            <validity end="2012-04-22T00:00Z" start="2012-04-21T00:00Z"/>
-        </cluster>
-    </clusters>
-
-    <parallel>1</parallel>
-    <order>FIFO</order>
-    <frequency>days(1)</frequency>
-    <timezone>UTC</timezone>
-
-    <inputs>
-        <input end="today(0,0)" start="today(0,0)" feed="input-table" name="input"/>
-    </inputs>
-
-    <outputs>
-        <output instance="now(0,0)" feed="output-table" name="output"/>
-    </outputs>
-
-    <properties>
-        <property name="blah" value="blah"/>
-    </properties>
-
-    <workflow engine="pig" path="/falcon/test/apps/pig/table-id.pig"/>
-
-    <retry policy="periodic" delay="minutes(10)" attempts="3"/>
-</process>
-</verbatim>
-
-
----++++ Pig Script
-
-<verbatim>
-A = load '$input_database.$input_table' using org.apache.hcatalog.pig.HCatLoader();
-B = FILTER A BY $input_filter;
-C = foreach B generate id, value;
-store C into '$output_database.$output_table' USING org.apache.hcatalog.pig.HCatStorer('$output_dataout_partitions');
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/ImportExport.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/ImportExport.twiki b/trunk/releases/master/src/site/twiki/ImportExport.twiki
deleted file mode 100644
index b0ce7ff..0000000
--- a/trunk/releases/master/src/site/twiki/ImportExport.twiki
+++ /dev/null
@@ -1,242 +0,0 @@
----+Falcon Data Import and Export
-
-
----++Overview
-
-Falcon provides constructs to periodically bring raw data from external data sources (like databases, drop boxes etc)
-onto Hadoop and push derived data computed on Hadoop onto external data sources.
-
-As of this release, Falcon only supports relational databases (e.g. Oracle, MySQL etc.) via JDBC as an external data source.
-Future releases will add support for other external data sources.
-
-
----++Prerequisites
-
-The following are the prerequisites for importing data from, and exporting data to, external databases.
-
-   * *Sqoop 1.4.6+*
-   * *Oozie 4.2.0+*
-   * *Appropriate database connector*
-
-
-*Note:* Falcon uses Sqoop for import/export operations. Sqoop requires the appropriate database driver to connect to
-the relational database. Please refer to the Sqoop documentation for any Sqoop related questions, and make sure
-the database driver jar is copied into the Oozie share lib for Sqoop.
-
-<verbatim>
-For example, in order to import and export with MySQL, please make sure the latest MySQL connector
-mysql-connector-java-5.1.31.jar is copied into Oozie's Sqoop share lib:
-
-/user/oozie/share/lib/{lib-dir}/sqoop/mysql-connector-java-5.1.31.jar
-
-where the {lib-dir} value varies across Oozie deployments.
-
-</verbatim>
-
----++ Usage
----+++ Entity Definition and Setup
-   * *Datasource Entity*
-      Datasource entity abstracts connection and credential details to external data sources. The Datasource entity
-      supports read and write interfaces with specific credentials. The default credential will be used if the read
-      or write interface does not have its own credentials. In general, the Datasource entity will be defined by the
-      system administrator. Please refer to the datasource XSD for more details.
-
-      The following example defines a Datasource entity for a MySQL database. The import operation will use
-      the read interface with url "jdbc:mysql://dbhost/test", user name "import_usr" and password text "sqoop".
-      Whereas the export operation will use the write interface with url "jdbc:mysql://dbhost/test" with user
-      name "export_usr" and password specified in a HDFS file at the location "/user/ambari-qa/password-store/password_write_user".
-
-      The default credential specified will be used if either the read or write interface does not provide its own
-      credentials. The default credential specifies the password using password alias feature available via hadoop credential
-      functionality. Users can create a password alias using the "hadoop credential -create <alias> -provider
-      <provider-path>" command, where <alias> is a string and <provider-path> is a HDFS jceks file. During runtime,
-      the specified alias will be used to look up the password stored encrypted in the jceks hdfs file specified under
-      the providerPath element.
-
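-      For illustration, an alias matching the default credential below might be created with a command along these
-      lines (a sketch; the exact credential-command syntax depends on the Hadoop version in use):
-
-      <verbatim>
-      hadoop credential create sqoop.password.alias -provider jceks://hdfs@namenode:8020/user/ambari-qa/sqoop_password.jceks
-      </verbatim>
-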
-      The available read and write interfaces enable database administrators to segregate read and write workloads.
-
-      <verbatim>
-
-      File: mysql-database.xml
-
-      <?xml version="1.0" encoding="UTF-8"?>
-      <datasource colo="west-coast" description="MySQL database on west coast" type="mysql" name="mysql-db" xmlns="uri:falcon:datasource:0.1">
-          <tags>owner=foobar@ambari.apache.org, consumer=phoe@ambari.apache.org</tags>
-          <interfaces>
-              <!-- ***** read interface ***** -->
-              <interface type="readonly" endpoint="jdbc:mysql://dbhost/test">
-                  <credential type="password-text">
-                      <userName>import_usr</userName>
-                      <passwordText>sqoop</passwordText>
-                  </credential>
-              </interface>
-
-              <!-- ***** write interface ***** -->
-              <interface type="write"  endpoint="jdbc:mysql://dbhost/test">
-                  <credential type="password-file">
-                      <userName>export_usr</userName>
-                      <passwordFile>/user/ambari-qa/password-store/password_write_user</passwordFile>
-                  </credential>
-              </interface>
-
-              <!-- *** default credential *** -->
-              <credential type="password-alias">
-                <userName>sqoop2_user</userName>
-                <passwordAlias>
-                    <alias>sqoop.password.alias</alias>
-                    <providerPath>hdfs://namenode:8020/user/ambari-qa/sqoop_password.jceks</providerPath>
-                </passwordAlias>
-              </credential>
-
-          </interfaces>
-
-          <driver>
-              <clazz>com.mysql.jdbc.Driver</clazz>
-              <jar>/user/oozie/share/lib/lib_20150721010816/sqoop/mysql-connector-java-5.1.31</jar>
-          </driver>
-      </datasource>
-      </verbatim>
-
-   * *Feed  Entity*
-      The Feed entity now enables users to define IMPORT and EXPORT policies in addition to RETENTION and REPLICATION.
-      The IMPORT and EXPORT policies refer to an already defined Datasource entity for connection and credential
-      details and take a table name from the policy to operate on. Please refer to the feed entity XSD for details.
-
-      The following example defines a Feed entity with IMPORT and EXPORT policies. Both the IMPORT and EXPORT operations
-      refer to a datasource entity "mysql-db". The IMPORT operation will use the read interface and credentials while
-      the EXPORT operation will use the write interface and credentials. A feed instance is created every hour
-      since the frequency of the Feed is hours(1), and the Feed instances are deleted after 90 days because of the
-      retention policy.
-
-
-      <verbatim>
-
-      File: customer_email_feed.xml
-
-      <?xml version="1.0" encoding="UTF-8"?>
-      <!--
-       A feed representing Hourly customer email data retained for 90 days
-       -->
-      <feed description="Raw customer email feed" name="customer_feed" xmlns="uri:falcon:feed:0.1">
-          <tags>externalSystem=USWestEmailServers,classification=secure</tags>
-          <groups>DataImportPipeline</groups>
-          <frequency>hours(1)</frequency>
-          <late-arrival cut-off="hours(4)"/>
-          <clusters>
-              <cluster name="primaryCluster" type="source">
-                  <validity start="2015-12-15T00:00Z" end="2016-03-31T00:00Z"/>
-                  <retention limit="days(90)" action="delete"/>
-                  <import>
-                      <source name="mysql-db" tableName="simple">
-                          <extract type="full">
-                              <mergepolicy>snapshot</mergepolicy>
-                          </extract>
-                          <fields>
-                              <includes>
-                                  <field>id</field>
-                                  <field>name</field>
-                              </includes>
-                          </fields>
-                      </source>
-                      <arguments>
-                          <argument name="--split-by" value="id"/>
-                          <argument name="--num-mappers" value="2"/>
-                      </arguments>
-                  </import>
-                  <export>
-                        <target name="mysql-db" tableName="simple_export">
-                            <load type="insert"/>
-                            <fields>
-                              <includes>
-                                <field>id</field>
-                                <field>name</field>
-                              </includes>
-                            </fields>
-                        </target>
-                        <arguments>
-                             <argument name="--update-key" value="id"/>
-                        </arguments>
-                    </export>
-              </cluster>
-          </clusters>
-
-          <locations>
-              <location type="data" path="/user/ambari-qa/falcon/demo/primary/importfeed/${YEAR}-${MONTH}-${DAY}-${HOUR}-${MINUTE}"/>
-              <location type="stats" path="/none"/>
-              <location type="meta" path="/none"/>
-          </locations>
-
-          <ACL owner="ambari-qa" group="users" permission="0755"/>
-          <schema location="/none" provider="none"/>
-
-      </feed>
-      </verbatim>
-
-   * *Import policy*
-     The import policy uses the datasource entity specified in the "source" to connect to the database. The tableName
-     specified should exist in the source datasource.
-
-     The extraction type specifies whether to pull data from the external datasource "full" every time or "incrementally".
-     The mergepolicy specifies how to organize the data on Hadoop (snapshot or append, i.e. time series partitions).
-     The valid combinations are:
-      * [full,snapshot] - data is extracted in full and dumped into the feed instance location.
-      * [incremental, append] - data is extracted incrementally using the key specified in the *deltacolumn*
-        and added as a partition to the feed instance location.
-      * [incremental, snapshot] - data is extracted incrementally and merged with already existing data on hadoop to
-        produce one latest feed instance. *This feature is not currently supported*. The use case for this feature is
-        to efficiently import very large dimension tables that have updates and inserts onto Hadoop and make them available
-        as a snapshot with the latest updates to consumers.
-
-      The following example defines an incremental extraction with append organization:
-
-      <verbatim>
-           <import>
-                <source name="mysql-db" tableName="simple">
-                    <extract type="incremental">
-                        <deltacolumn>modified_time</deltacolumn>
-                        <mergepolicy>append</mergepolicy>
-                    </extract>
-                    <fields>
-                        <includes>
-                            <field>id</field>
-                            <field>name</field>
-                        </includes>
-                    </fields>
-                </source>
-                <arguments>
-                    <argument name="--split-by" value="id"/>
-                    <argument name="--num-mappers" value="2"/>
-                </arguments>
-            </import>
-        </verbatim>
-
-
-     The fields option controls which fields get imported. By default, all fields are imported. The "includes" option
-     imports only the fields specified. The "excludes" option imports all fields other than those specified.
-
-     The arguments section allows passing in any extra arguments needed for fine control over the underlying implementation --
-     in this case, Sqoop.
-
-   * *Export policy*
-     The export policy, like import, uses the datasource for connecting to the database. The load type specifies whether to insert
-     or only update data in the external table. The fields option behaves the same way as in the import policy.
-     The tableName specified should exist in the external datasource.
-
----+++ Operation
-   Once the Datasource and Feed entities with import and export policies are defined, users can submit and schedule
-   the import and export operations via the CLI and REST API as below:
-
-   <verbatim>
-
-    ## submit the mysql-db datasource defined in the file mysql_datasource.xml
-    falcon entity -submit -type datasource -file mysql_datasource.xml
-
-    ## submit the customer_feed specified in the customer_email_feed.xml
-    falcon entity -submit -type feed -file customer_email_feed.xml
-
-    ## schedule the customer_feed
-    falcon entity -schedule -type feed -name customer_feed
-
-   </verbatim>
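-
-   The same operations can also be performed through the Falcon REST API. The calls below are a sketch (host, port and
-   exact endpoints should be checked against the Falcon REST API documentation), with the entity XML sent as the request body:
-
-   <verbatim>
-
-    ## submit the mysql-db datasource (body: contents of mysql_datasource.xml)
-    POST http://localhost:15000/api/entities/submit/datasource
-
-    ## submit the customer_feed (body: contents of customer_email_feed.xml)
-    POST http://localhost:15000/api/entities/submit/feed
-
-    ## schedule the customer_feed
-    POST http://localhost:15000/api/entities/schedule/feed/customer_feed
-
-   </verbatim>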
-
-   Falcon will create the corresponding Oozie bundles with coordinators and workflows for the import and export operations.

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/InstallationSteps.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/InstallationSteps.twiki b/trunk/releases/master/src/site/twiki/InstallationSteps.twiki
deleted file mode 100644
index a5ee2cc..0000000
--- a/trunk/releases/master/src/site/twiki/InstallationSteps.twiki
+++ /dev/null
@@ -1,87 +0,0 @@
----+Building & Installing Falcon
-
-
----++Building Falcon
-
----+++Prerequisites
-
-   * JDK 1.7/1.8
-   * Maven 3.2.x
-
-
-
----+++Step 1 - Clone the Falcon repository
-
-<verbatim>
-$git clone https://git-wip-us.apache.org/repos/asf/falcon.git falcon
-</verbatim>
-
-
----+++Step 2 - Build Falcon
-
-<verbatim>
-$cd falcon
-$export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=256m -noverify" && mvn clean install
-</verbatim>
-It builds and installs the package into the local repository, for use as a dependency in other projects locally.
-
-Optionally, "-Dhadoop.version=<<hadoop.version>>" can be appended to build for a specific version of Hadoop, and
-"-Doozie.version=<<oozie version>>" can be appended to build with a specific version of Oozie (Oozie versions >= 4 are supported).
-
-*NOTE:* Falcon drops support for Hadoop-1 and only supports Hadoop-2 from Falcon 0.6 onwards.
-
-*NOTE:* Falcon builds with JDK 1.7/1.8 using the -noverify option. To compile Falcon with Hive replication,
-"-P hadoop-2,hivedr" can optionally be appended; this requires Hive >= 1.2.0 and Oozie >= 4.2.0 to be available.
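-
-For example, to build against a specific Hadoop version with the Hive replication profile enabled (the version number
-below is only illustrative):
-<verbatim>
-$export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=256m -noverify" && mvn clean install -Dhadoop.version=2.7.1 -P hadoop-2,hivedr
-</verbatim>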
-
-
-
----+++Step 3 - Package and Deploy Falcon
-
-Once the build successfully completes, artifacts can be packaged for deployment using the assembly plugin. The Assembly
-Plugin for Maven is primarily intended to allow users to aggregate the project output along with its dependencies,
-modules, site documentation, and other files into a single distributable archive. There are two basic ways in which you
-can deploy Falcon - Embedded mode (also known as Stand Alone Mode) and Distributed mode. Your next steps will vary based
-on the mode in which you want to deploy Falcon.
-
-*NOTE* : Falcon extends Oozie (particularly its EL extensions), which is why Falcon builds and
-re-packages Oozie, so that users of Falcon can work with the right Oozie setup. Though Oozie is packaged by Falcon, it
-needs to be deployed separately by the administrator and is not auto-deployed along with Falcon.
-
-
----++++Embedded/Stand Alone Mode
-Embedded mode is useful when the Hadoop jobs and relevant data processing involve only one Hadoop cluster. In this mode
- there is a single Falcon server that contacts the scheduler to schedule jobs on Hadoop. All the process/feed requests
- like submit, schedule, suspend, kill etc. are sent to this server. To run Falcon in this mode, one should use the
- Falcon package built with the standalone option. You can find the instructions for Embedded mode setup
- [[Embedded-mode][here]].
-
-
----++++Distributed Mode
-Distributed mode is for multiple instances of Hadoop clusters (across colos), and multiple workflow schedulers to handle them.
-In this mode Falcon has 2 components: Prism and Server(s). Both Prism and Server(s) have their own config
-locations (startup and runtime properties). In this mode Prism acts as the contact point for Falcon servers. While
- all commands are available through Prism, only the read and instance APIs are available through the Server. You can find the
- instructions for Distributed Mode setup [[Distributed-mode][here]].
-
-
-
----+++Preparing Oozie and Falcon packages for deployment
-<verbatim>
-$cd <<project home>>
-$src/bin/package.sh <<hadoop-version>> <<oozie-version>>
-
->> ex. src/bin/package.sh 1.1.2 4.0.1 or src/bin/package.sh 0.20.2-cdh3u5 4.0.1
->> ex. src/bin/package.sh 2.5.0 4.0.0
->> Falcon package is available in <<falcon home>>/target/apache-falcon-<<version>>-bin.tar.gz
->> Oozie package is available in <<falcon home>>/target/oozie-4.0.1-distro.tar.gz
-</verbatim>
-
-*NOTE:* If you have a separate Apache Oozie installation, you will need to follow some additional steps:
-   1. Once you have setup the Falcon Server, copy libraries under {falcon-server-dir}/oozie/libext/ to {oozie-install-dir}/libext.
-   1. Modify Oozie's configuration file. Copy all Falcon related properties from {falcon-server-dir}/oozie/conf/oozie-site.xml to {oozie-install-dir}/conf/oozie-site.xml
-   1. Restart oozie:
-      1. cd {oozie-install-dir}
-      1. sudo -u oozie ./bin/oozie-stop.sh
-      1. sudo -u oozie ./bin/oozie-setup.sh prepare-war
-      1. sudo -u oozie ./bin/oozie-start.sh

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/LICENSE.txt
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/LICENSE.txt b/trunk/releases/master/src/site/twiki/LICENSE.txt
deleted file mode 100644
index d3b580f..0000000
--- a/trunk/releases/master/src/site/twiki/LICENSE.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-All files in this directory and subdirectories are under Apache License Version 2.0.
-The reason is that the Maven Doxia plugin that converts twiki to html does not have
-a commenting-out feature.

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/MigrationInstructions.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/MigrationInstructions.twiki b/trunk/releases/master/src/site/twiki/MigrationInstructions.twiki
deleted file mode 100644
index 7c0e027..0000000
--- a/trunk/releases/master/src/site/twiki/MigrationInstructions.twiki
+++ /dev/null
@@ -1,15 +0,0 @@
----+ Migration Instructions
-
----++ Migrate from 0.5-incubating to 0.6-incubating
-
-This is a placeholder wiki for migration instructions from Falcon 0.5-incubating to 0.6-incubating.
-
----+++ Update Entities
-
----+++ Change cluster dir permissions
-
----+++ Enable/Disable TLS
-
----+++ Authorization
-
-

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/OnBoarding.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/OnBoarding.twiki b/trunk/releases/master/src/site/twiki/OnBoarding.twiki
deleted file mode 100644
index 8b02150..0000000
--- a/trunk/releases/master/src/site/twiki/OnBoarding.twiki
+++ /dev/null
@@ -1,269 +0,0 @@
----++ Contents
-   * <a href="#Onboarding Steps">Onboarding Steps</a>
-   * <a href="#Sample Pipeline">Sample Pipeline</a>
-   * [[HiveIntegration][Hive Examples]]
-
----+++ Onboarding Steps
-   * Create cluster definition for the cluster, specifying name node, job tracker, workflow engine endpoint, messaging endpoint. Refer to [[EntitySpecification][cluster definition]] for details.
-   * Create Feed definitions for each of the input and output specifying frequency, data path, ownership. Refer to [[EntitySpecification][feed definition]] for details.
-   * Create Process definition for your job. Process defines configuration for the workflow job. Important attributes are frequency, inputs/outputs and workflow path. Refer to [[EntitySpecification][process definition]] for process details.
-   * Define the workflow for your job using the workflow engine (only Oozie is supported as of now). Refer to the [[http://oozie.apache.org/docs/3.1.3-incubating/WorkflowFunctionalSpec.html][Oozie Workflow Specification]]. The libraries required for the workflow should be available in the lib folder under the workflow path.
-   * Set-up workflow definition, libraries and referenced scripts on hadoop. 
-   * Submit cluster definition
-   * Submit and schedule feed and process definitions
-   
-
----+++ Sample Pipeline
----++++ Cluster   
-Cluster definition that contains end points for name node, job tracker, oozie and jms server:
-The cluster locations MUST be created prior to submitting a cluster entity to Falcon:
-   * *staging* must have 777 permissions and the parent dirs must have execute permissions
-   * *working* must have 755 permissions and the parent dirs must have execute permissions
-
-<verbatim>
-<?xml version="1.0"?>
-<!--
-    Cluster configuration
-  -->
-<cluster colo="ua2" description="" name="corp" xmlns="uri:falcon:cluster:0.1"
-    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">    
-    <interfaces>
-        <interface type="readonly" endpoint="hftp://name-node.com:50070" version="2.5.0" />
-
-        <interface type="write" endpoint="hdfs://name-node.com:54310" version="2.5.0" />
-
-        <interface type="execute" endpoint="job-tracker:54311" version="2.5.0" />
-
-        <interface type="workflow" endpoint="http://oozie.com:11000/oozie/" version="4.0.1" />
-
-        <interface type="messaging" endpoint="tcp://jms-server.com:61616?daemon=true" version="5.1.6" />
-    </interfaces>
-
-    <locations>
-        <location name="staging" path="/projects/falcon/staging" />
-        <location name="temp" path="/tmp" />
-        <location name="working" path="/projects/falcon/working" />
-    </locations>
-</cluster>
-</verbatim>
-   
----++++ Input Feed
-Hourly feed that defines feed path, frequency, ownership and validity:
-<verbatim>
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-    Hourly sample input data
-  -->
-
-<feed description="sample input data" name="SampleInput" xmlns="uri:falcon:feed:0.1"
-    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
-    <groups>group</groups>
-
-    <frequency>hours(1)</frequency>
-
-    <late-arrival cut-off="hours(6)" />
-
-    <clusters>
-        <cluster name="corp" type="source">
-            <validity start="2009-01-01T00:00Z" end="2099-12-31T00:00Z" timezone="UTC" />
-            <retention limit="months(24)" action="delete" />
-        </cluster>
-    </clusters>
-
-    <locations>
-        <location type="data" path="/projects/bootcamp/data/${YEAR}-${MONTH}-${DAY}-${HOUR}/SampleInput" />
-        <location type="stats" path="/projects/bootcamp/stats/SampleInput" />
-        <location type="meta" path="/projects/bootcamp/meta/SampleInput" />
-    </locations>
-
-    <ACL owner="suser" group="users" permission="0755" />
-
-    <schema location="/none" provider="none" />
-</feed>
-</verbatim>
-
----++++ Output Feed
-Daily feed that defines feed path, frequency, ownership and validity:
-<verbatim>
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-    Daily sample output data
-  -->
-
-<feed description="sample output data" name="SampleOutput" xmlns="uri:falcon:feed:0.1"
-xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
-    <groups>group</groups>
-
-    <frequency>days(1)</frequency>
-
-    <late-arrival cut-off="hours(6)" />
-
-    <clusters>
-        <cluster name="corp" type="source">
-            <validity start="2009-01-01T00:00Z" end="2099-12-31T00:00Z" timezone="UTC" />
-            <retention limit="months(24)" action="delete" />
-        </cluster>
-    </clusters>
-
-    <locations>
-        <location type="data" path="/projects/bootcamp/output/${YEAR}-${MONTH}-${DAY}/SampleOutput" />
-        <location type="stats" path="/projects/bootcamp/stats/SampleOutput" />
-        <location type="meta" path="/projects/bootcamp/meta/SampleOutput" />
-    </locations>
-
-    <ACL owner="suser" group="users" permission="0755" />
-
-    <schema location="/none" provider="none" />
-</feed>
-</verbatim>
-
----++++ Process
-Sample process which runs daily at the 6th hour on the corp cluster. It takes one input - !SampleInput for the previous day (24 instances). It generates one output - !SampleOutput for the previous day. The workflow is defined at /projects/bootcamp/workflow/workflow.xml. Any libraries available for the workflow should be at /projects/bootcamp/workflow/lib. The process also defines the properties queueName, ssh.host, and fileTimestamp which are passed to the workflow. In addition, Falcon exposes the following properties to the workflow: nameNode, jobTracker (hadoop properties), input and output (Input/Output properties).
-
-<verbatim>
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-    Daily sample process. Runs at 6th hour every day. Input - last day's hourly data. Generates output for yesterday
- -->
-<process name="SampleProcess">
-    <cluster name="corp" />
-
-    <frequency>days(1)</frequency>
-
-    <validity start="2012-04-03T06:00Z" end="2022-12-30T00:00Z" timezone="UTC" />
-
-    <inputs>
-        <input name="input" feed="SampleInput" start="yesterday(0,0)" end="today(-1,0)" />
-    </inputs>
-
-    <outputs>
-            <output name="output" feed="SampleOutput" instance="yesterday(0,0)" />
-    </outputs>
-
-    <properties>
-        <property name="queueName" value="reports" />
-        <property name="ssh.host" value="host.com" />
-        <property name="fileTimestamp" value="${coord:formatTime(coord:nominalTime(), 'yyyy-MM-dd')}" />
-    </properties>
-
-    <workflow engine="oozie" path="/projects/bootcamp/workflow" />
-
-    <retry policy="periodic" delay="minutes(5)" attempts="3" />
-    
-    <late-process policy="exp-backoff" delay="hours(1)">
-        <late-input input="input" workflow-path="/projects/bootcamp/workflow/lateinput" />
-    </late-process>
-</process>
-</verbatim>
-
----++++ Oozie Workflow
-The sample user workflow contains 3 actions:
-   * Pig action - Executes pig script /projects/bootcamp/workflow/script.pig
-   * concatenator - Java action that concatenates part files and generates a single file
-   * file upload - ssh action that gets the concatenated file from hadoop and sends the file to a remote host
-   
-<verbatim>
-<workflow-app xmlns="uri:oozie:workflow:0.2" name="sample-wf">
-        <start to="pig" />
-
-        <action name="pig">
-                <pig>
-                        <job-tracker>${jobTracker}</job-tracker>
-                        <name-node>${nameNode}</name-node>
-                        <prepare>
-                                <delete path="${output}"/>
-                        </prepare>
-                        <configuration>
-                                <property>
-                                        <name>mapred.job.queue.name</name>
-                                        <value>${queueName}</value>
-                                </property>
-                                <property>
-                                        <name>mapreduce.fileoutputcommitter.marksuccessfuljobs</name>
-                                        <value>true</value>
-                                </property>
-                        </configuration>
-                        <script>${nameNode}/projects/bootcamp/workflow/script.pig</script>
-                        <param>input=${input}</param>
-                        <param>output=${output}</param>
-                        <file>lib/dependent.jar</file>
-                </pig>
-                <ok to="concatenator" />
-                <error to="fail" />
-        </action>
-
-        <action name="concatenator">
-                <java>
-                        <job-tracker>${jobTracker}</job-tracker>
-                        <name-node>${nameNode}</name-node>
-                        <prepare>
-                                <delete path="${nameNode}/projects/bootcamp/concat/data-${fileTimestamp}.csv"/>
-                        </prepare>
-                        <configuration>
-                                <property>
-                                        <name>mapred.job.queue.name</name>
-                                        <value>${queueName}</value>
-                                </property>
-                        </configuration>
-                        <main-class>com.wf.Concatenator</main-class>
-                        <arg>${output}</arg>
-                        <arg>${nameNode}/projects/bootcamp/concat/data-${fileTimestamp}.csv</arg>
-                </java>
-                <ok to="fileupload" />
-                <error to="fail"/>
-        </action>
-                        
-        <action name="fileupload">
-                <ssh>
-                        <host>localhost</host>
-                        <command>/tmp/fileupload.sh</command>
-                        <args>${nameNode}/projects/bootcamp/concat/data-${fileTimestamp}.csv</args>
-                        <args>${wf:conf("ssh.host")}</args>
-                        <capture-output/>
-                </ssh>
-                <ok to="fileUploadDecision" />
-                <error to="fail"/>
-        </action>
-
-        <decision name="fileUploadDecision">
-                <switch>
-                        <case to="end">
-                                ${wf:actionData('fileupload')['output'] == '0'}
-                        </case>
-                        <default to="fail"/>
-                </switch>
-        </decision>
-
-        <kill name="fail">
-                <message>Workflow failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
-        </kill>
-
-        <end name="end" />
-</workflow-app>
-</verbatim>
-
----++++ File Upload Script
-The script gets the file from Hadoop, rsyncs it to /tmp on the remote host and deletes the file from Hadoop.
-<verbatim>
-#!/bin/bash
-
-trap 'echo "output=$?"; exit $?' ERR INT TERM
-
-echo "Arguments: $@"
-SRCFILE=$1
-DESTHOST=$3
-
-FILENAME=`basename $SRCFILE`
-rm -f /tmp/$FILENAME
-hadoop fs -copyToLocal $SRCFILE /tmp/
-echo "Copied $SRCFILE to /tmp"
-
-rsync -ztv --rsh=ssh --stats /tmp/$FILENAME $DESTHOST:/tmp
-echo "rsynced $FILENAME to $DESTUSER@$DESTHOST:$DESTFILE"
-
-hadoop fs -rmr $SRCFILE
-echo "Deleted $SRCFILE"
-
-rm -f /tmp/$FILENAME
-echo "output=0"
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/Operability.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/Operability.twiki b/trunk/releases/master/src/site/twiki/Operability.twiki
deleted file mode 100644
index 05850c1..0000000
--- a/trunk/releases/master/src/site/twiki/Operability.twiki
+++ /dev/null
@@ -1,110 +0,0 @@
----+ Operationalizing Falcon
-
----++ Overview
-
-Apache Falcon provides various tools to operationalize Falcon consisting of Alerts for
-unrecoverable errors, Audits of user actions, Metrics, and Notifications. They are detailed below.
-
----++ Lineage
-
-Currently Lineage has no way to access or restore information about entity instances created during the time lineage
-was disabled. Information about entities, however, is preserved and bootstrapped when lineage is enabled. If you have to
-reset the graph db, you can delete the graph db files as specified in startup.properties and restart Falcon.
-Please note: you will lose all information about the instances if you delete the graph db.
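-
-For reference, the graph db storage location is typically controlled by properties of the following form in
-startup.properties (property names and values shown here should be verified against your installation):
-<verbatim>
-*.falcon.graph.storage.backend=berkeleyje
-*.falcon.graph.storage.directory=<local path to the graph db files>
-</verbatim>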
-
----++ Monitoring
-
-Falcon provides monitoring of various events by capturing metrics of those events.
-The metric numbers can then be used to monitor performance and health of the Falcon system and
-the entire processing pipelines.
-
-Falcon also exposes [[https://github.com/thinkaurelius/titan/wiki/Titan-Performance-and-Monitoring][metrics for titandb]]
-
-Users can view the logs of these events in the metric.log file; by default this file is created
-under the ${user.dir}/logs/ directory. Users may also extend the Falcon monitoring framework to send
-events to systems like Mondemand/lwes by implementing the org.apache.falcon.plugin.MonitoringPlugin
-interface.
-
-The following events are captured by Falcon for logging the metrics:
-   1. New cluster definitions posted to Falcon (success & failures)
-   1. New feed definition posted to Falcon (success & failures)
-   1. New process definition posted to Falcon (success & failures)
-   1. Process update events (success & failures)
-   1. Feed update events (success & failures)
-   1. Cluster update events (success & failures)
-   1. Process suspend events (success & failures)
-   1. Feed suspend events (success & failures)
-   1. Process resume events (success & failures)
-   1. Feed resume events (success & failures)
-   1. Process remove events (success & failures)
-   1. Feed remove events (success & failures)
-   1. Cluster remove events (success & failures)
-   1. Process instance kill events (success & failures)
-   1. Process instance re-run events (success & failures)
-   1. Process instance generation events
-   1. Process instance failure events
-   1. Process instance auto-retry events
-   1. Process instance retry exhaust events
-   1. Feed instance deletion event
-   1. Feed instance deletion failure event (no retries)
-   1. Feed instance replication event
-   1. Feed instance replication failure event
-   1. Feed instance replication auto-retry event
-   1. Feed instance replication retry exhaust event
-   1. Feed instance late arrival event
-   1. Feed instance post cut-off arrival event
-   1. Process re-run due to late feed event
-   1. Transaction rollback failed event
-
-The metric logged for an event has the following properties:
-   1. Action - Name of the event.
-   1. Dimensions - A list of name/value pairs of various attributes for a given action.
-   1. Status - Status of an action: FAILED/SUCCEEDED.
-   1. Time-taken - Time taken in nanoseconds for a given action.
-
-An example for an event logged for a submit of a new process definition:
-
-   2012-05-04 12:23:34,026 {Action:submit, Dimensions:{entityType=process}, Status: SUCCEEDED, Time-taken:97087000 ns}
-
-Users may parse the metric.log or capture these events from custom monitoring frameworks and can plot various graphs
-or send alerts according to their requirements.
-
-
----++ Notifications
-
-Falcon creates a JMS topic for every process/feed that is scheduled in Falcon.
-The implementation class and the broker url of the JMS engine are read from the dependent cluster's definition.
-Users may register consumers on the required topic to check the availability or status of feed instances.
-
-For a given process that is scheduled, the name of the topic is same as the process name.
-Falcon sends a Map message for every feed produced by the instance of a process to the JMS topic.
-The JMS !MapMessage sent to a topic has the following properties:
-entityName, feedNames, feedInstancePath, workflowId, runId, nominalTime, timeStamp, brokerUrl, brokerImplClass, entityType, operation, logFile, topicName, status, brokerTTL;
-
-For a given feed that is scheduled, the name of the topic is same as the feed name.
-Falcon sends a map message for every feed instance that is deleted/archived/replicated depending upon the retention policy set in the feed definition.
-The JMS !MapMessage sent to a topic has the following properties:
-entityName, feedNames, feedInstancePath, workflowId, runId, nominalTime, timeStamp, brokerUrl, brokerImplClass, entityType, operation, logFile, topicName, status, brokerTTL;
-
-The JMS messages are automatically purged after a certain period (default 3 days) by the Falcon JMS house-keeping service. The TTL (time-to-live) for JMS messages
-can be configured in Falcon's startup.properties file.
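-
-For example, the message TTL is typically controlled by a property of the following form in startup.properties (the
-property name and default should be verified against your Falcon version):
-<verbatim>
-## TTL for JMS messages, in minutes (4320 minutes = 3 days)
-*.broker.ttlInMins=4320
-</verbatim>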
-
-
----++ Alerts
-
-Falcon generates alerts for unrecoverable errors into a log file by default.
-Users can view these alerts in the alerts.log file; by default this file is created
-under the ${user.dir}/logs/ directory.
-
-Users may also extend the Falcon Alerting plugin to send events to systems like Nagios, etc. by
-extending org.apache.falcon.plugin.AlertingPlugin interface.
-
-
----++ Audits
-
-Falcon audits all user activity and captures them into a log file by default.
-Users can view these audits in the audit.log file; by default this file is created
-under the ${user.dir}/logs/ directory.
-
-Users may also extend the Falcon Audit plugin to send audits to systems like Apache Argus, etc. by
-extending org.apache.falcon.plugin.AuditingPlugin interface.

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/Recipes.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/Recipes.twiki b/trunk/releases/master/src/site/twiki/Recipes.twiki
deleted file mode 100644
index b5faa1e..0000000
--- a/trunk/releases/master/src/site/twiki/Recipes.twiki
+++ /dev/null
@@ -1,85 +0,0 @@
----+ Falcon Recipes
-
----++ Overview
-
-A Falcon recipe is a static process template with a parameterized workflow to realize a specific use case. Recipes are
-defined in user space. Recipes do not have support for update or lifecycle management.
-
-For example:
-
-   * Replicating directories from one HDFS cluster to another (not timed partitions)
-   * Replicating hive metadata (database, table, views, etc.)
-   * Replicating between HDFS and Hive - either way
-   * Data masking etc.
-
----++ Proposal
-
-Falcon provides a Process abstraction that encapsulates the configuration for a user workflow with scheduling
-controls. All recipes can be modeled as a Process within Falcon which executes the user workflow periodically. The
-process and its associated workflow are parameterized. The user provides a properties file with name/value pairs
-that are substituted by Falcon before scheduling it. Falcon translates these recipes into a process entity by
-replacing the parameters in the workflow definition.
-
----++ Falcon CLI recipe support
-
-Falcon CLI functionality to support recipes has been added.
-[[falconcli/FalconCLI][Falcon CLI]] Recipe command usage is defined here.
-
-The CLI accepts the recipe option with a recipe name and an optional tool, and does the following (an example invocation follows this list):
-   * Validates the options; the name option is mandatory, while the tool option is optional and should be provided only if the user wants to override the base recipe tool
-   * Looks for <name>-workflow.xml, <name>-template.xml and <name>.properties file in the path specified by falcon.recipe.path in client.properties. If files cannot be found then Falcon CLI will fail
-   * Invokes a Tool to substitute the properties in the templated process for the recipe. By default invokes base tool if tool option is not passed. Tool is responsible for generating process entity at the path specified by FalconCLI
-   * Validates the generated entity
-   * Submit and schedule this entity
-   * Generated process entity files are stored in the tmp directory
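-
-For illustration, assuming a sample recipe named hdfs-replication is available under the configured falcon.recipe.path,
-an invocation might look like this (see the [[falconcli/FalconCLI][Falcon CLI]] documentation for the exact options):
-<verbatim>
-falcon recipe -name hdfs-replication
-</verbatim>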
-
----++ Base Recipe tool
-
-Falcon provides a base tool that recipes can override. Base Recipe tool does the following:
-   * Expects the recipe template file path, recipe properties file path and the path where the process entity to be submitted should be generated. Validates these arguments
-   * Validates that the artifacts, i.e. the workflow and/or lib files specified in the recipe template, exist on the local filesystem or HDFS at the specified path, else returns an error
-   * Copies the artifacts if they exist on the local filesystem
-      * If the workflow is on the local FS then falcon.recipe.workflow.path in the recipe property file is mandatory for it to be copied to HDFS. If the templated process requires custom libs, the falcon.recipe.workflow.lib.path property is mandatory for them to be copied from the local FS to HDFS. The recipe tool will copy the local artifacts only if these properties are set in the properties file
-   * Looks for the pattern ##[A-Za-z0-9_.]*## in the templated process and substitutes it with the properties. The process entity generated after the substitution is written to the empty file passed by FalconCLI
-
----++ Recipe template file format
-
-   * Any templatized string should be in the format ##[A-Za-z0-9_.]*##.
-   * There should be a corresponding entry in the recipe properties file "falcon.recipe.<templatized-string> = <value to be substituted>"
-
-<verbatim>
-Example: If the entry in recipe template is <workflow name="##workflow.name##"> there should be a corresponding entry in the recipe properties file falcon.recipe.workflow.name=hdfs-dr-workflow
-</verbatim>
-
----++ Recipe properties file format
-
-   * Regular key value pair properties file
-   * Property key should be prefixed by "falcon.recipe."
-
-<verbatim>
-Example: falcon.recipe.workflow.name=hdfs-dr-workflow
-Recipe template will have <workflow name="##workflow.name##">. Recipe tool will look for the pattern ##workflow.name##
-and replace it with the property value "hdfs-dr-workflow". Substituted template will have <workflow name="hdfs-dr-workflow">
-</verbatim>
-
----++ Metrics
-HDFS DR and Hive DR recipes capture replication metrics like TIMETAKEN, BYTESCOPIED, COPY (number of files copied) for an
-instance and populate them to the GraphDB.
-
----++ Managing the scheduled recipe process
-   * Scheduled recipe process is similar to regular process
-      * List : falcon entity -type process -name <recipe-process-name> -list
-      * Status : falcon entity -type process -name <recipe-process-name> -status
-      * Delete : falcon entity -type process -name <recipe-process-name> -delete
-
----++ Sample recipes
-
-   * Sample recipes are published in addons/recipes
-
----++ Types of recipes
-   * [[HDFSDR][HDFS Recipe]]
-   * [[HiveDR][HiveDR Recipe]]
-
----++ Packaging
-
-   * There is no packaging for recipes at this time, but it will be added soon.


[2/6] falcon git commit: Deleting accidental check-in of trunk/release/master

Posted by pa...@apache.org.
http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/EntityDependencies.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/EntityDependencies.twiki b/trunk/releases/master/src/site/twiki/restapi/EntityDependencies.twiki
deleted file mode 100644
index 864b084..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/EntityDependencies.twiki
+++ /dev/null
@@ -1,43 +0,0 @@
----++  GET /api/entities/dependencies/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get dependencies of the entity.
-
----++ Parameters
-   * :entity-type can be cluster, feed or process.
-   * :entity-name is name of the entity.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Dependencies of the entity.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/entities/dependencies/process/SampleProcess?doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "entity": [
-        {
-            "name": "SampleInput",
-            "type": "feed",
-            "tag": [Input]
-        },
-        {
-            "name": "SampleOutput",
-            "type": "feed"
-            "tag": [Output]
-        },
-        {
-            "name": "primary-cluster",
-            "type": "cluster"
-        }
-    ]
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/EntityLineage.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/EntityLineage.twiki b/trunk/releases/master/src/site/twiki/restapi/EntityLineage.twiki
deleted file mode 100644
index f2258f2..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/EntityLineage.twiki
+++ /dev/null
@@ -1,40 +0,0 @@
----++  GET api/metadata/lineage/entities?pipeline=:pipeline
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-It returns the graph depicting the relationship between the various processes and feeds in a given pipeline.
-
----++ Parameters
-   * :pipeline is the name of the pipeline
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-It returns a JSON graph.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/metadata/lineage/entities?pipeline=my-pipeline&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "vertices": ["my-minutely-process", "my-hourly-process"],
-    "edges":
-    [
-        {
-         "from"  : "my-minutely-process",
-         "to"    : "my-hourly-process",
-         "label" : "my-minutely-feed"
-        },
-        {
-         "from"  : "my-hourly-process",
-         "to"    : "my-minutely-process",
-         "label" : "my-hourly-feedback"
-        }
-    ]
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/EntityList.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/EntityList.twiki b/trunk/releases/master/src/site/twiki/restapi/EntityList.twiki
deleted file mode 100644
index 2c2a734..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/EntityList.twiki
+++ /dev/null
@@ -1,164 +0,0 @@
----++  GET /api/entities/list/:entity-type?fields=:fields
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get list of the entities.
-
----++ Parameters
-   * :entity-type Comma-separated entity types. Can be empty. Valid entity types are cluster, feed or process.
-   * fields <optional param> Fields of entity that the user wants to view, separated by commas.
-      * Valid options are STATUS, TAGS, PIPELINES, CLUSTERS.
-   * nameseq <optional param> Subsequence of entity name. Not case sensitive.
-      * The entity name needs to contain all the characters in the subsequence in the same order.
-      * Example 1: "sample1" will match the entity named "SampleFeed1-2".
-      * Example 2: "mhs" will match the entity named "New-My-Hourly-Summary".
-   * tagkeys <optional param> Keywords in tags, separated by comma. Not case sensitive.
-      * The returned entities will have tags that match all the tag keywords.
-   * filterBy <optional param> Filter results by list of field:value pairs. Example: filterBy=STATUS:RUNNING,PIPELINES:clickLogs
-      * Supported filter fields are NAME, STATUS, PIPELINES, CLUSTER.
-      * Query will do an AND among filterBy fields.
-   * tags <optional param> Return list of entities that have specified tags, separated by a comma. Query will do AND on tag values.
-      * Example: tags=consumer=consumer@xyz.com,owner=producer@xyz.com
-   * orderBy <optional param> Field by which results should be ordered.
-      * Supports ordering by "name".
-   * sortOrder <optional param> Valid options are "asc" and "desc"
-   * offset <optional param> Show results from the offset, used for pagination. Defaults to 0.
-   * numResults <optional param> Number of results to show per request, used for pagination. Only integers > 0 are valid, Default is 10.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-   * Note:
-      * We have two filtering parameters for entity tags: "tags" and "tagkeys". "tags" does an exact match in key=value fashion, while "tagkeys" finds all the entities with the given key as a substring in the tags. The "tagkeys" filter is intended for users who do not remember the exact tag but only some keywords in it; it also saves users the time of typing long tags.
-      * The returned entities will match all the filtering criteria.
-
----++ Results
-Total number of results and a list of entities.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/entities/list/feed
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "totalResults":"2”,
-    "entity": [
-        {
-            "name": "SampleOutput",
-            "type": "feed"
-        },
-        {
-            "name": "SampleInput",
-            "type": "feed"
-        }
-    ]
-}
-</verbatim>
-
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/entities/list
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "totalResults":"4”,
-    "entity": [
-        {
-            "name"  : "SampleCluster1",
-            "type"  : "cluster"
-        },
-        {
-            "name"  : "SampleOutput",
-            "type"  : "feed"
-        },
-        {
-            "name"  : "SampleInput",
-            "type"  : "feed"
-        },
-        {
-            "name"  : "SampleProcess1",
-            "type"  : "process"
-        }
-    ]
-}
-</verbatim>
-
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/entities/list/feed?fields=status
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "totalResults":"2”,
-    "entity": [
-        {
-            "name"  : "SampleOutput",
-            "type"  : "feed",
-            "status": "RUNNING"
-        },
-        {
-            "name": "SampleInput",
-            "type": "feed",
-            "status": "RUNNING"
-        }
-    ]
-}
-</verbatim>
-
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/entities/list/process?filterBy=STATUS:RUNNING,PIPELINES:dataReplication&fields=status,pipelines,tags&tags=consumer=consumer@xyz.com&orderBy=name&offset=2&numResults=2
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "totalResults":"10”,
-    "entity": [
-        {
-            "name"  : "SampleProcess1",
-            "type"  : "process",
-            "status": "RUNNING",
-            "pipelines": "dataReplication",
-            "tags": "consumer=consumer@xyz.com"
-        },
-        {
-            "name": "SampleProcess3",
-            "type": "process",
-            "status": "RUNNING",
-            "pipelines": "dataReplication",
-            "tags": "consumer=consumer@xyz.com,owner=producer@xyz.com"
-        }
-    ]
-}
-</verbatim>
-
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/entities/list/feed,process?nameseq=samplebill&tagkeys=billing,healthcare&numResults=2&offset=1&fields=status,clusters,tags&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "totalResults":"4”,
-    "entity”:[
-        {
-            "type":"FEED”,
-            "name":"SampleUSHealthBill”,
-            "status":"SUBMITTED”,
-            "tags”: {"tag":["related=ushealthcare","department=billingDepartment"]},
-            "clusters": {"cluster":["SampleCluster1","primaryCluster”]}
-        },
-        {
-            "type":"PROCESS”,
-            "name":"SampleHealthBill”,
-            "status":"SUBMITTED”,
-            "tags”: {"tag":["related=healthcare","department=billingDepartment"]},
-            "clusters": {"cluster":"primaryCluster”}
-        }
-    ]
-}
-</verbatim>
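-
-The listing API is easy to drive from a small client script. The sketch below is illustrative only (it is not part of Falcon); it assumes the local server used in the examples above and Python with the _requests_ library, and the commented-out user.name parameter is just a placeholder for deployments that need it.
-<verbatim>
-# Sketch: list feeds and processes whose tag keys contain "billing" or "healthcare",
-# requesting extra fields and paginating the results.
-import requests
-
-FALCON = "http://localhost:15000/api"      # server from the examples above
-
-params = {
-    "fields": "status,clusters,tags",      # extra fields to include per entity
-    "tagkeys": "billing,healthcare",       # substring match on tag keys
-    # "tags": "owner=producer@xyz.com",    # exact key=value match instead
-    "numResults": 2,                       # page size
-    "offset": 0,                           # starting offset for pagination
-    # "user.name": "joe",                  # uncomment if your deployment requires it
-}
-resp = requests.get(FALCON + "/entities/list/feed,process", params=params)
-resp.raise_for_status()
-result = resp.json()
-print("total:", result["totalResults"])
-for entity in result.get("entity", []):
-    print(entity["type"], entity["name"], entity.get("status"))
-</verbatim>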

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/EntityResume.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/EntityResume.twiki b/trunk/releases/master/src/site/twiki/restapi/EntityResume.twiki
deleted file mode 100644
index d0bbe41..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/EntityResume.twiki
+++ /dev/null
@@ -1,30 +0,0 @@
----++  POST /api/entities/resume/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Resume a suspended entity.
-
----++ Parameters
-   * :entity-type can either be a feed or a process.
-   * :entity-name is name of the entity.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Result of the resume command.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-POST http://localhost:15000/api/entities/resume/process/SampleProcess?doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "requestId": "default\/106582a9-130f-4903-8b8f-f95d7b286c30\n",
-    "message": "default\/SampleProcess(process) resumed successfully\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/EntitySchedule.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/EntitySchedule.twiki b/trunk/releases/master/src/site/twiki/restapi/EntitySchedule.twiki
deleted file mode 100644
index 0dede9b..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/EntitySchedule.twiki
+++ /dev/null
@@ -1,100 +0,0 @@
----++  POST /api/entities/schedule/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Schedule an entity.
-
----++ Parameters
-   * :entity-type can either be a feed or a process.
-   * :entity-name is name of the entity.
-   * skipDryRun <optional query param> Falcon skips the Oozie dry run when the value is set to true.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-   * properties <optional query param> A comma-separated set of key1:val1,...,keyN:valN pairs that will be available to the entity in the coordinator configuration. These values do not override properties with the same name that are predefined in the entity specification. For example, to change the scheduler used for scheduling the entity, set the property _falcon.scheduler_ in the properties parameter to _native_ for the Falcon scheduler or to _oozie_ for the Oozie scheduler.
-
-
----++ Results
-Result of the schedule command.
-
----++ Examples
----+++ Oozie Workflow
-<verbatim>
-<workflow-app xmlns="uri:oozie:workflow:0.4" name="aggregator-wf">
-  <start to="aggregator" />
-  <action name="aggregator">
-    <java>
-      <job-tracker>${jobTracker}</job-tracker>
-      <name-node>${nameNode}</name-node>
-      <configuration>
-        <property>
-          <name>mapred.job.queue.name</name>
-          <value>${queueName}</value>
-        </property>
-      </configuration>
-      <main-class>com.company.hadoop.AggregatorJob</main-class>
-      <java-opts>-Dframework.instrumentation.host=${instrumentationServer}</java-opts>
-      <arg>--input.path=${inputBasePath}</arg>
-      <arg>--output.path=${outputBasePath}</arg>
-    </java>
-    <ok to="end" />
-    <error to="fail" />
-  </action>
-  <kill name="fail">
-    <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
-  </kill>
-  <end name="end" />
-</workflow-app>
-</verbatim>
----+++ Submitted Process
-<verbatim>
-<?xml version="1.0" encoding="UTF-8"?>
-<!-- Daily sample process. Runs at 6th hour every day. Input - last day's hourly data. Generates output for yesterday -->
-<process xmlns="uri:falcon:process:0.1" name="SampleProcess" >
-    <clusters>
-      <cluster name="primary-cluster">
-        <validity start="2012-04-03T06:00Z" end="2022-12-30T00:00Z" />
-      </cluster>
-    </clusters>
-
-    <parallel>1</parallel>
-    <order>FIFO</order>
-    <frequency>hours(1)</frequency>
-
-    <inputs>
-        <input name="input" feed="SampleInput" start="yesterday(0,0)" end="today(-1,0)" />
-    </inputs>
-
-    <outputs>
-        <output name="output" feed="SampleOutput" instance="yesterday(0,0)" />
-    </outputs>
-
-    <properties>
-        <property name="queueName" value="default" />
-        <property name="ssh.host" value="localhost" />
-        <property name="fileTimestamp" value="${coord:formatTime(coord:nominalTime(), 'yyyy-MM-dd')}" />
-        <property name="instrumentationServer" value="${coord:conf('instrumentation.host')}" />
-    </properties>
-
-    <workflow engine="oozie" path="/examples/apps/aggregator" />
-    <retry policy="exp-backoff" delay="minutes(5)" attempts="3" />
-    
-    <late-process policy="exp-backoff" delay="hours(1)">
-        <late-input input="input" workflow-path="/projects/bootcamp/workflow/lateinput" />
-    </late-process>
-</process>
-</verbatim>
----+++ Rest Call
-<verbatim>
-POST http://localhost:15000/api/entities/schedule/process/SampleProcess?skipDryRun=false&doAs=joe&properties=instrumentation.host:instrumentation.localdomain
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "requestId": "default\/ee735c95-98bd-41b8-a705-2e78bcfcdcd9\n",
-    "message": "default\/SampleProcess(process) scheduled successfully\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>
----+++ Notes
-In this example, the value of _framework.instrumentation.host_ in the Oozie workflow will be _instrumentation.localdomain_, the value passed in the _properties_ parameter when the process was scheduled.
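-
-A minimal client sketch of the same call (for illustration only; it assumes the local server from the example above and the Python _requests_ library):
-<verbatim>
-# Sketch: schedule SampleProcess and hand a property through to the coordinator configuration.
-import requests
-
-FALCON = "http://localhost:15000/api"
-
-params = {
-    "skipDryRun": "false",
-    "properties": "instrumentation.host:instrumentation.localdomain",
-    # "user.name": "joe",                  # only if your deployment requires it
-}
-resp = requests.post(FALCON + "/entities/schedule/process/SampleProcess", params=params)
-result = resp.json()
-print(result["status"], result["message"])
-</verbatim>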

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/EntityStatus.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/EntityStatus.twiki b/trunk/releases/master/src/site/twiki/restapi/EntityStatus.twiki
deleted file mode 100644
index 188019d..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/EntityStatus.twiki
+++ /dev/null
@@ -1,30 +0,0 @@
----++  GET /api/entities/status/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get status of the entity.
-
----++ Parameters
-   * :entity-type can be cluster, feed or process.
-   * :entity-name is name of the entity.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Status of the entity.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/entities/status/process/SampleProcess?doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "requestId": "default\/4d35b382-852a-4bc7-9972-b9db3493322a\n",
-    "message": "default\/SUBMITTED\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/EntitySubmit.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/EntitySubmit.twiki b/trunk/releases/master/src/site/twiki/restapi/EntitySubmit.twiki
deleted file mode 100644
index a8dc9d7..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/EntitySubmit.twiki
+++ /dev/null
@@ -1,105 +0,0 @@
----++ POST  api/entities/submit/:entity-type
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Submit the given entity.
-
----++ Parameters
-   * :entity-type can be cluster, feed or process.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Result of the submission.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-POST http://localhost:15000/api/entities/submit/feed
-<?xml version="1.0" encoding="UTF-8"?>
-<!-- Hourly sample input data -->
-
-<feed description="sample input data"
-      name="SampleInput" xmlns="uri:falcon:feed:0.1"
-      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
-    <groups>group</groups>
-    <frequency>hours(1)</frequency>
-    <late-arrival cut-off="hours(6)" />
-    <clusters>
-        <cluster name="primary-cluster" type="source">
-            <!--validity start="2009-01-01T00:00Z" end="2099-12-31T00:00Z" timezone="UTC" /-->
-            <validity start="2009-01-01T00:00Z" end="2099-12-31T00:00Z" />
-            <retention limit="months(24)" action="delete" />
-        </cluster>
-    </clusters>
-
-    <locations>
-        <location type="data" path="/projects/bootcamp/data/${YEAR}-${MONTH}-${DAY}-${HOUR}/SampleInput" />
-        <location type="stats" path="/projects/bootcamp/stats/SampleInput" />
-        <location type="meta" path="/projects/bootcamp/meta/SampleInput" />
-    </locations>
-
-    <ACL owner="suser" group="users" permission="0755" />
-
-    <schema location="/none" provider="none" />
-</feed>
-</verbatim>
-
----+++ Result
-<verbatim>
-{
-    "requestId": "default\/d72a41f7-6420-487b-8199-62d66e492e35\n",
-    "message": "default\/Submit successful (feed) SampleInput\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>
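-
-The entity definition is sent as the request body. The sketch below is not part of the original page; it shows one way to post the XML with the Python _requests_ library, where the file name and the text/xml content type are assumptions for this example.
-<verbatim>
-# Sketch: submit the feed definition above by posting the entity XML as the request body.
-import requests
-
-FALCON = "http://localhost:15000/api"
-
-with open("SampleInput-feed.xml", "rb") as f:      # hypothetical file holding the <feed> XML above
-    feed_xml = f.read()
-
-resp = requests.post(
-    FALCON + "/entities/submit/feed",
-    data=feed_xml,
-    headers={"Content-Type": "text/xml"},          # assumption: the server accepts an XML body
-)
-print(resp.json())
-</verbatim>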
-
----+++ Rest Call
-<verbatim>
-POST http://localhost:15000/api/entities/submit/process?doAs=joe
-<?xml version="1.0" encoding="UTF-8"?>
-<!-- Daily sample process. Runs at 6th hour every day. Input - last day's hourly data. Generates output for yesterday -->
-<process xmlns="uri:falcon:process:0.1" name="SampleProcess" >
-    <clusters>
-      <cluster name="primary-cluster">
-	<validity start="2012-04-03T06:00Z" end="2022-12-30T00:00Z" />
-      </cluster>
-    </clusters>
-
-    <parallel>1</parallel>
-    <order>FIFO</order>
-    <frequency>hours(1)</frequency>
-
-    <inputs>
-        <input name="input" feed="SampleInput" start="yesterday(0,0)" end="today(-1,0)" />
-    </inputs>
-
-    <outputs>
-        <output name="output" feed="SampleOutput" instance="yesterday(0,0)" />
-    </outputs>
-
-    <properties>
-        <property name="queueName" value="default" />
-        <property name="ssh.host" value="localhost" />
-        <property name="fileTimestamp" value="${coord:formatTime(coord:nominalTime(), 'yyyy-MM-dd')}" />
-    </properties>
-
-    <workflow engine="oozie" path="/examples/apps/aggregator" />
-    <retry policy="exp-backoff" delay="minutes(5)" attempts="3" />
-    
-    <late-process policy="exp-backoff" delay="hours(1)">
-        <late-input input="input" workflow-path="/projects/bootcamp/workflow/lateinput" />
-    </late-process>
-</process>
-</verbatim>
-
----+++ Result
-<verbatim>
-{
-    "requestId": "default\/e5cc8230-f356-4566-9b65-536abdff8aa3\n",
-    "message": "default\/Submit successful (process) SampleProcess\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/EntitySubmitAndSchedule.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/EntitySubmitAndSchedule.twiki b/trunk/releases/master/src/site/twiki/restapi/EntitySubmitAndSchedule.twiki
deleted file mode 100644
index 3cc23e9..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/EntitySubmitAndSchedule.twiki
+++ /dev/null
@@ -1,64 +0,0 @@
----++  POST /api/entities/submitAndSchedule/:entity-type
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Submits and schedules an entity.
-
----++ Parameters
-   * :entity-type can either be a feed or a process.
-   * skipDryRun <optional query param> Falcon skips the Oozie dry run when the value is set to true.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Result of the submit and schedule command.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-POST http://localhost:15000/api/entities/submitAndSchedule/process?skipDryRun=false&doAs=joe
-<?xml version="1.0" encoding="UTF-8"?>
-<!-- Daily sample process. Runs at 6th hour every day. Input - last day's hourly data. Generates output for yesterday -->
-<process xmlns="uri:falcon:process:0.1" name="SampleProcess" >
-    <clusters>
-      <cluster name="primary-cluster">
-	<validity start="2012-04-03T06:00Z" end="2022-12-30T00:00Z" />
-      </cluster>
-    </clusters>
-
-    <parallel>1</parallel>
-    <order>FIFO</order>
-    <frequency>hours(1)</frequency>
-
-    <inputs>
-        <input name="input" feed="SampleInput" start="yesterday(0,0)" end="today(-1,0)" />
-    </inputs>
-
-    <outputs>
-        <output name="output" feed="SampleOutput" instance="yesterday(0,0)" />
-    </outputs>
-
-    <properties>
-        <property name="queueName" value="default" />
-        <property name="ssh.host" value="localhost" />
-        <property name="fileTimestamp" value="${coord:formatTime(coord:nominalTime(), 'yyyy-MM-dd')}" />
-    </properties>
-
-    <workflow engine="oozie" path="/examples/apps/aggregator" />
-    <retry policy="exp-backoff" delay="minutes(5)" attempts="3" />
-    
-    <late-process policy="exp-backoff" delay="hours(1)">
-        <late-input input="input" workflow-path="/projects/bootcamp/workflow/lateinput" />
-    </late-process>
-</process>
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "requestId": "schedule\/default\/b5b40931-175b-4b15-8f2b-02ef2e66f06b\n\nsubmit\/default\/b5b40931-175b-4b15-8f2b-02ef2e66f06b\n\n",
-    "message": "schedule\/default\/SampleProcess(process) scheduled successfully\n\nsubmit\/default\/Submit successful (process) SampleProcess\n\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/EntitySummary.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/EntitySummary.twiki b/trunk/releases/master/src/site/twiki/restapi/EntitySummary.twiki
deleted file mode 100644
index 763c2a7..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/EntitySummary.twiki
+++ /dev/null
@@ -1,74 +0,0 @@
----++  GET /api/entities/summary/:entity-type
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Given an entity type and cluster, get the list of entities along with a summary of the N most recent instances of each entity.
-
----++ Parameters
-   * :entity-type Valid options are feed or process.
-   * cluster Show entities that belong to this cluster.
-   * start <optional param> Show entity summaries from this date. Date format is yyyy-MM-dd'T'HH:mm'Z'.
-      * By default, it is set to (end - 2 days).
-   * end <optional param> Show entity summary up to this date. Date format is yyyy-MM-dd'T'HH:mm'Z'.
-      * Default is set to now.
-   * fields <optional param> Fields of entity that the user wants to view, separated by commas.
-      * Valid options are STATUS, TAGS, PIPELINES.
-   * filterBy <optional param> Filter results by list of field:value pairs. Example: filterBy=STATUS:RUNNING,PIPELINES:clickLogs
-      * Supported filter fields are NAME, STATUS, PIPELINES, CLUSTER.
-      * Query will do an AND among filterBy fields.
-   * tags <optional param> Return list of entities that have specified tags, separated by a comma. Query will do AND on tag values.
-      * Example: tags=consumer=consumer@xyz.com,owner=producer@xyz.com
-   * orderBy <optional param> Field by which results should be ordered.
-      * Supports ordering by "name".
-   * sortOrder <optional param> Valid options are "asc" and "desc"
-   * offset <optional param> Show results from the offset, used for pagination. Defaults to 0.
-   * numResults <optional param> Number of results to show per request, used for pagination. Only integers > 0 are valid; default is 10.
-   * numInstances <optional param> Number of recent instances to show per entity. Only integers > 0 are valid; default is 7.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Show entities along with summary of N instances for each entity.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/entities/summary/feed?cluster=primary-cluster&filterBy=STATUS:RUNNING&fields=status&tags=consumer=consumer@xyz.com&orderBy=name&offset=0&numResults=1&numInstances=2&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "entitySummary": [
-        {
-            "name"  : "SampleOutput",
-            "type"  : "feed",
-            "status": "RUNNING",
-            "instances": [
-            {
-                "details": "",
-                "endTime": "2013-10-21T14:40:26-07:00",
-                "startTime": "2013-10-21T14:39:56-07:00",
-                "cluster": "primary-cluster",
-                "logFile": "http:\/\/localhost:11000\/oozie?job=0000070-131021115933395-oozie-rgau-W",
-                "status": "RUNNING",
-                "instance": "2012-04-03T07:00Z"
-            },
-            {
-                "details": "",
-                "endTime": "2013-10-21T14:42:27-07:00",
-                "startTime": "2013-10-21T14:41:57-07:00",
-                "cluster": "primary-cluster",
-                "logFile": "http:\/\/localhost:11000\/oozie?job=0000070-131021115933397-oozie-rgau-W",
-                "status": "RUNNING",
-                "instance": "2012-04-03T08:00Z"
-            }
-            ]
-        }
-    ],
-    "requestId": "default\/e15bb378-d09f-4911-9df2-5334a45153d2\n",
-    "message": "default\/STATUS\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>
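-
-An illustrative client sketch (same assumptions as the other sketches on these pages: the local server from the example and Python _requests_) that pulls the summary and walks the per-entity instance lists:
-<verbatim>
-# Sketch: summarize running feeds on primary-cluster and print their two most recent instances.
-import requests
-
-FALCON = "http://localhost:15000/api"
-
-params = {
-    "cluster": "primary-cluster",
-    "filterBy": "STATUS:RUNNING",
-    "fields": "status",
-    "numInstances": 2,
-    "orderBy": "name",
-}
-summary = requests.get(FALCON + "/entities/summary/feed", params=params).json()
-for entity in summary.get("entitySummary", []):
-    print(entity["name"], entity["status"])
-    for inst in entity.get("instances", []):
-        print("   ", inst["instance"], inst["status"])
-</verbatim>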

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/EntitySuspend.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/EntitySuspend.twiki b/trunk/releases/master/src/site/twiki/restapi/EntitySuspend.twiki
deleted file mode 100644
index b322b27..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/EntitySuspend.twiki
+++ /dev/null
@@ -1,30 +0,0 @@
----++  POST /api/entities/suspend/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Suspend an entity.
-
----++ Parameters
-   * :entity-type can either be a feed or a process.
-   * :entity-name is name of the entity.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Status of the entity.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-POST http://localhost:15000/api/entities/suspend/process/SampleProcess?doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "requestId": "default\/fe5f2b6c-1f2e-49fc-af3a-342079f0b46b\n",
-    "message": "default\/SampleProcess(process) suspended successfully\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/EntityTouch.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/EntityTouch.twiki b/trunk/releases/master/src/site/twiki/restapi/EntityTouch.twiki
deleted file mode 100644
index 5b58ce2..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/EntityTouch.twiki
+++ /dev/null
@@ -1,31 +0,0 @@
----++ POST  api/entities/touch/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Force-updates the entity.
-
----++ Parameters
-   * :entity-type can be feed or process.
-   * :entity-name is name of the feed or process.
-   * skipDryRun <optional query param> Falcon skips the Oozie dry run when the value is set to true.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Result of the touch command.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-POST http://localhost:15000/api/entities/touch/process/SampleProcess?skipDryRun=true&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "requestId": "touch\/default\/d6aaa328-6836-4818-a212-515bb43d8b86\n\n",
-    "message": "touch\/default\/SampleProcess updated successfully\n\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/EntityUpdate.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/EntityUpdate.twiki b/trunk/releases/master/src/site/twiki/restapi/EntityUpdate.twiki
deleted file mode 100644
index 46b01fc..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/EntityUpdate.twiki
+++ /dev/null
@@ -1,66 +0,0 @@
----++ POST  api/entities/update/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Updates the submitted entity.
-
----++ Parameters
-   * :entity-type can be feed or process.
-   * :entity-name is name of the feed or process.
-   * skipDryRun <optional query param> Falcon skips the Oozie dry run when the value is set to true.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Result of the update command.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-POST http://localhost:15000/api/entities/update/process/SampleProcess?skipDryRun=false&doAs=joe
-<?xml version="1.0" encoding="UTF-8"?>
-<!-- Daily sample process. Runs at 6th hour every day. Input - last day's hourly data. Generates output for yesterday -->
-<process xmlns="uri:falcon:process:0.1" name="SampleProcess" >
-    <clusters>
-      <cluster name="primary-cluster">
-	<validity start="2012-04-03T06:00Z" end="2022-12-30T00:00Z" />
-      </cluster>
-    </clusters>
-
-    <parallel>1</parallel>
-    <order>FIFO</order>
-    <frequency>hours(1)</frequency>
-
-    <inputs>
-        <input name="input" feed="SampleInput" start="yesterday(0,0)" end="today(-1,0)" />
-    </inputs>
-
-    <outputs>
-        <output name="output" feed="SampleOutput" instance="yesterday(0,0)" />
-    </outputs>
-
-    <properties>
-        <property name="queueName" value="default" />
-        <property name="ssh.host" value="localhost" />
-        <property name="fileTimestamp" value="${coord:formatTime(coord:nominalTime(), 'yyyy-MM-dd')}" />
-    </properties>
-
-    <workflow engine="oozie" path="/examples/apps/aggregator" />
-    <retry policy="exp-backoff" delay="minutes(5)" attempts="3" />
-    
-    <late-process policy="exp-backoff" delay="hours(1)">
-        <late-input input="input" workflow-path="/projects/bootcamp/workflow/lateinput" />
-    </late-process>
-</process>
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "requestId": "update\/default\/d6aaa328-6836-4818-a212-515bb43d8b86\n\n",
-    "message": "update\/default\/SampleProcess updated successfully\n\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>
-

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/EntityValidate.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/EntityValidate.twiki b/trunk/releases/master/src/site/twiki/restapi/EntityValidate.twiki
deleted file mode 100644
index 054b083..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/EntityValidate.twiki
+++ /dev/null
@@ -1,170 +0,0 @@
----++ POST api/entities/validate/:entity-type
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Validates the submitted entity.
-
----++ Parameters
-   * :entity-type can be cluster, feed or process.
-   * skipDryRun <optional query param> Falcon skips the Oozie dry run when the value is set to true.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Result of the validation.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-POST http://localhost:15000/api/entities/validate/cluster
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<cluster xmlns="uri:falcon:cluster:0.1" name="primary-cluster" description="Primary Cluster" colo="west-coast">
-    <interfaces>
-        <interface type="readonly" endpoint="hftp://localhost:50070" version="1.1.1"/>
-        <interface type="write" endpoint="hdfs://localhost:9000" version="1.1.1"/>
-        <interface type="execute" endpoint="localhost:9001" version="1.1.1"/>
-        <interface type="workflow" endpoint="http://localhost:11000/oozie/" version="4.0.0"/>
-        <interface type="messaging" endpoint="tcp://localhost:61616?daemon=true" version="5.4.3"/>
-    </interfaces>
-    <locations>
-        <location name="staging" path="/apps/falcon/staging"/>
-        <location name="temp" path="/tmp"/>
-        <location name="working" path="/apps/falcon/working"/>
-    </locations>
-</cluster>
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "requestId": "dd3f6c3a-a6f1-4c50-97fb-3f9a3f698e10",
-    "message": "Validated successfully (CLUSTER) primary-cluster",
-    "status": "SUCCEEDED"
-}
-</verbatim>
-
----+++ Rest Call
-<verbatim>
-POST http://localhost:15000/api/entities/validate/feed?skipDryRun=true
-<?xml version="1.0" encoding="UTF-8"?>
-<!-- Hourly sample input data -->
-
-<feed description="sample input data"
-      name="SampleInput" xmlns="uri:falcon:feed:0.1"
-      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
-    <groups>group</groups>
-    <frequency>hours(1)</frequency>
-    <late-arrival cut-off="hours(6)" />
-    <clusters>
-        <cluster name="primary-cluster" type="source">
-            <!--validity start="2009-01-01T00:00Z" end="2099-12-31T00:00Z" timezone="UTC" /-->
-            <validity start="2009-01-01T00:00Z" end="2099-12-31T00:00Z" />
-            <retention limit="months(24)" action="delete" />
-        </cluster>
-    </clusters>
-
-    <locations>
-        <location type="data" path="/projects/bootcamp/data/${YEAR}-${MONTH}-${DAY}-${HOUR}/SampleInput" />
-        <location type="stats" path="/projects/bootcamp/stats/SampleInput" />
-        <location type="meta" path="/projects/bootcamp/meta/SampleInput" />
-    </locations>
-
-    <ACL owner="suser" group="users" permission="0755" />
-
-    <schema location="/none" provider="none" />
-</feed>
-</verbatim>
-
----+++ Result
-<verbatim>
-{
-    "requestId": "c85b190e-e653-493a-a863-d62de9c2e3b0",
-    "message": "Validated successfully (FEED) SampleInput",
-    "status": "SUCCEEDED"
-}
-</verbatim>
-
----+++ Rest Call
-<verbatim>
-POST http://localhost:15000/api/entities/validate/feed
-<?xml version="1.0" encoding="UTF-8"?>
-<!-- Daily sample output data -->
-
-<feed description="sample output data" name="SampleOutput" xmlns="uri:falcon:feed:0.1"
-xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
-    <groups>group</groups>
-    <frequency>hours(1)</frequency>
-    <late-arrival cut-off="hours(6)" />
-    <clusters>
-        <cluster name="primary-cluster" type="source">
-            <!--validity start="2009-01-01T00:00Z" end="2099-12-31T00:00Z" timezone="UTC" /-->
-            <validity start="2009-01-01T00:00Z" end="2099-12-31T00:00Z" />
-            <retention limit="months(24)" action="delete" />
-        </cluster>
-    </clusters>
-    <locations>
-        <location type="data" path="/projects/bootcamp/output/${YEAR}-${MONTH}-${DAY}-${HOUR}/SampleOutput" />
-        <location type="stats" path="/projects/bootcamp/stats/SampleOutput" />
-        <location type="meta" path="/projects/bootcamp/meta/SampleOutput" />
-    </locations>
-    <ACL owner="suser" group="users" permission="0755" />
-    <schema location="/none" provider="none" />
-</feed>
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "requestId": "60781732-460e-4c6c-ba86-a75fae574b05",
-    "message": "Validated successfully (FEED) SampleOutput",
-    "status": "SUCCEEDED"
-}
-</verbatim>
-
----+++ Rest Call
-<verbatim>
-POST http://localhost:15000/api/entities/validate/process?skipDryRun=false&doAs=joe
-<?xml version="1.0" encoding="UTF-8"?>
-<!-- Daily sample process. Runs at 6th hour every day. Input - last day's hourly data. Generates output for yesterday -->
-<process xmlns="uri:falcon:process:0.1" name="SampleProcess" >
-    <clusters>
-      <cluster name="primary-cluster">
-	<validity start="2012-04-03T06:00Z" end="2022-12-30T00:00Z" />
-      </cluster>
-    </clusters>
-
-    <parallel>1</parallel>
-    <order>FIFO</order>
-    <frequency>hours(1)</frequency>
-
-    <inputs>
-        <input name="input" feed="SampleInput" start="yesterday(0,0)" end="today(-1,0)" />
-    </inputs>
-
-    <outputs>
-        <output name="output" feed="SampleOutput" instance="yesterday(0,0)" />
-    </outputs>
-
-    <properties>
-        <property name="queueName" value="default" />
-        <property name="ssh.host" value="localhost" />
-        <property name="fileTimestamp" value="${coord:formatTime(coord:nominalTime(), 'yyyy-MM-dd')}" />
-    </properties>
-
-    <workflow engine="oozie" path="/examples/apps/aggregator" />
-    <retry policy="exp-backoff" delay="minutes(5)" attempts="3" />
-    
-    <late-process policy="exp-backoff" delay="hours(1)">
-        <late-input input="input" workflow-path="/projects/bootcamp/workflow/lateinput" />
-    </late-process>
-</process>
-</verbatim>
-
----+++ Result
-<verbatim>
-{
-    "requestId": "e4a965c6-c7a2-41d9-ba08-2e77f1c43f57",
-    "message": "Validated successfully (PROCESS) SampleProcess",
-    "status": "SUCCEEDED"
-}
-</verbatim>
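-
-Validation is a cheap way to catch schema or dependency problems before submitting. A sketch of that flow, for illustration only (the file name and content type are assumptions):
-<verbatim>
-# Sketch: validate a process definition and stop if the server rejects it.
-import requests
-
-FALCON = "http://localhost:15000/api"
-
-with open("SampleProcess.xml", "rb") as f:         # hypothetical file holding the <process> XML above
-    process_xml = f.read()
-
-resp = requests.post(
-    FALCON + "/entities/validate/process",
-    params={"skipDryRun": "false"},
-    data=process_xml,
-    headers={"Content-Type": "text/xml"},          # assumption about the accepted content type
-)
-body = resp.json()
-if body.get("status") != "SUCCEEDED":
-    raise RuntimeError("validation failed: " + body.get("message", ""))
-print(body["message"])
-</verbatim>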

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/FeedInstanceListing.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/FeedInstanceListing.twiki b/trunk/releases/master/src/site/twiki/restapi/FeedInstanceListing.twiki
deleted file mode 100644
index 03f3c57..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/FeedInstanceListing.twiki
+++ /dev/null
@@ -1,46 +0,0 @@
----++ GET /api/instance/listing/feed/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get Falcon feed instance availability.
-
----++ Parameters
-   * :entity-name Name of the entity.
-   * start <optional param> Show instances from this date. Date format is yyyy-MM-dd'T'HH:mm'Z'.
-      * By default, it is set to (end - (10 * entityFrequency)).
-   * end <optional param> Show instances up to this date. Date format is yyyy-MM-dd'T'HH:mm'Z'.
-      * Default is set to now.
-   * colo <optional param> Colo on which the query should be run.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Feed instance availability status
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/instance/listing/feed/SampleFeed?colo=*&start=2012-04-03T07:00Z&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "instances": [
-        {
-            "size": "450231212222",
-            "creationTime": "1236679827365",
-            "cluster": "primary-cluster",
-            "uri": "/data/SampleFeed/2012-04-03",
-            "status": "AVAILABLE",
-            "instance": "2012-04-03T07:00Z"
-        }
-    ],
-    "requestId": "default\/3527038e-8334-4e50-8173-76c4fa430d0b\n",
-    "message": "default\/STATUS\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>
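-
-A small sketch (illustrative only, with the same server and library assumptions as the other sketches) that reports which instances in a window are not yet AVAILABLE:
-<verbatim>
-# Sketch: check feed instance availability for SampleFeed over a time window.
-import requests
-
-FALCON = "http://localhost:15000/api"
-
-params = {"colo": "*", "start": "2012-04-03T07:00Z", "end": "2012-04-04T07:00Z"}
-listing = requests.get(FALCON + "/instance/listing/feed/SampleFeed", params=params).json()
-missing = [i["instance"] for i in listing.get("instances", [])
-           if i.get("status") != "AVAILABLE"]
-print("instances not yet available:", missing or "none")
-</verbatim>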
-
-

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/FeedLookup.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/FeedLookup.twiki b/trunk/releases/master/src/site/twiki/restapi/FeedLookup.twiki
deleted file mode 100644
index 053182b..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/FeedLookup.twiki
+++ /dev/null
@@ -1,37 +0,0 @@
----++  GET api/entities/lookup/feed
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Given a path on HDFS, look up the feed(s) that the path belongs to, i.e. a reverse lookup from data location to feed.
-
----++ Parameters
-    * path Path of the instance for which you want to determine the feed, e.g. /data/project1/2014/10/10/23/.
-    The path has to be the complete path; a partial path cannot be used.
-    * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Returns the name of each matching feed along with the location type (data/stats/meta) and the cluster on which the given path belongs to that feed.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/entities/lookup/feed?path=/data/project1/2014/10/10/23&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "feeds":
-    [
-        {
-           "feedName": "My-Feed1",
-           "locationType": "DATA",
-           "clusterName": "My-cluster1"
-        },
-        {
-           "feedName": "My-Feed2",
-           "locationType": "DATA",
-           "clusterName": "My-cluster2"
-        }
-    ]
-}
-</verbatim>
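-
-An illustrative reverse-lookup sketch (assuming the local server from the example above and Python _requests_):
-<verbatim>
-# Sketch: find out which feed(s) a concrete HDFS path belongs to.
-import requests
-
-FALCON = "http://localhost:15000/api"
-
-resp = requests.get(FALCON + "/entities/lookup/feed",
-                    params={"path": "/data/project1/2014/10/10/23"})
-for feed in resp.json().get("feeds", []):
-    print(feed["feedName"], feed["locationType"], feed["clusterName"])
-</verbatim>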

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/FeedSLA.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/FeedSLA.twiki b/trunk/releases/master/src/site/twiki/restapi/FeedSLA.twiki
deleted file mode 100644
index 8760976..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/FeedSLA.twiki
+++ /dev/null
@@ -1,56 +0,0 @@
----++ GET /api/entities/sla-alert/:entity-type
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-<verbatim>
-Since: 0.8
-</verbatim>
-This command lists all the feed instances which have missed their SLA and are still not available. A feed instance that
-missed its SLA but is now available is not reported in the results. The purpose of this API is alerting, so it
-does not return feed instances which missed their SLA but are available, as they do not require any action.
-
----++ Parameters
-   * :entity-type Only valid option is feed.
-   * entity-name <optional param> parameter to restrict results for a particular feed using feed's name.
-   * start <mandatory param> Start of the time window for nominal instances, inclusive.
-   * end <mandatory param> End of the time window for nominal instances to be considered; defaults to the current time if not specified.
-   * colo <optional param> name of the colo
-
-
----++ Results
-Pending feed instances which missed SLA.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/entities/sla-alert/feed?colo=*&start=2012-04-03T07:00Z
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "status":"SUCCEEDED",
-    "message":"default/Success!\n",
-    "requestId":"default/885720178@qtp-495452957-6 - f6e82e9b-d23f-466b-82df-4fb8293ce9cf\n",
-    "instances":[
-            {"cluster":"local","entityName":"out","entityType":"FEED","instanceTime":"2015-09-26T17:33:00+05:30","tags":"Missed SLA High"},
-            {"cluster":"local","entityName":"out","entityType":"FEED","instanceTime":"2015-09-26T17:29:00+05:30","tags":"Missed SLA High"},
-            {"cluster":"local","entityName":"out","entityType":"FEED","instanceTime":"2015-09-26T17:35:00+05:30","tags":"Missed SLA Low"},
-            {"cluster":"local","entityName":"out","entityType":"FEED","instanceTime":"2015-09-26T17:30:00+05:30","tags":"Missed SLA High"},
-            {"cluster":"local","entityName":"out","entityType":"FEED","instanceTime":"2015-09-26T17:34:00+05:30","tags":"Missed SLA High"},
-            {"cluster":"local","entityName":"out","entityType":"FEED","instanceTime":"2015-09-26T17:31:00+05:30","tags":"Missed SLA High"},
-            {"cluster":"local","entityName":"out","entityType":"FEED","instanceTime":"2015-09-26T17:32:00+05:30","tags":"Missed SLA High"}
-    ]
-}
-</verbatim>
-
-If there are no pending instances that have missed their SLA, the response looks like this:
-<verbatim>
-{
-    "status":"SUCCEEDED",
-    "message":"default/Success!\n",
-    "requestId":"default/979808239@qtp-1243851750-3 - 8c7396c0-efe2-43e9-9aea-7ae6afea5fd6\n"
-}
-</verbatim>
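-
-Because the API only returns instances that still need attention, it fits naturally into a periodic alerting job. A sketch of such a poller, for illustration only (same server and library assumptions as the other sketches):
-<verbatim>
-# Sketch: poll the sla-alert endpoint and print an alert for each pending instance.
-import requests
-
-FALCON = "http://localhost:15000/api"
-
-params = {"colo": "*", "start": "2012-04-03T07:00Z"}
-report = requests.get(FALCON + "/entities/sla-alert/feed", params=params).json()
-for inst in report.get("instances", []):           # key is absent when nothing is pending
-    print("ALERT:", inst["entityName"], inst["instanceTime"], inst["tags"])
-</verbatim>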

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/Graph.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/Graph.twiki b/trunk/releases/master/src/site/twiki/restapi/Graph.twiki
deleted file mode 100644
index db58d2e..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/Graph.twiki
+++ /dev/null
@@ -1,22 +0,0 @@
----++  GET api/metadata/lineage/serialize
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Dump the graph.
-
----++ Parameters
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Serialize graph to a file configured using *.falcon.graph.serialize.path in Custom startup.properties.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/metadata/lineage/serialize?doAs=joe
-</verbatim>
----+++ Result
-None.

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/InstanceDependencies.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/InstanceDependencies.twiki b/trunk/releases/master/src/site/twiki/restapi/InstanceDependencies.twiki
deleted file mode 100644
index 5641757..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/InstanceDependencies.twiki
+++ /dev/null
@@ -1,49 +0,0 @@
----++ GET /api/instance/dependencies/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get dependent instances for a particular instance.
-
----++ Parameters
-   * :entity-type Valid options are feed or process.
-   * :entity-name Name of the entity
-   * instanceTime <mandatory param> time of the given instance
-   * colo <optional param> name of the colo
-
-
----++ Results
-Dependent instances for the specified instance
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/instance/dependencies/feed/myFeed?colo=*&instanceTime=2012-04-03T07:00Z
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "status": "SUCCEEDED",
-    "message": "default/Success!\n",
-    "dependencies": [
-        {
-            "cluster": "local",
-            "entityName": "consumer-process",
-            "entityType": "PROCESS",
-            "instanceTime": "2014-12-18T00:00Z",
-            "tags": "Input"
-        },
-        {
-            "cluster": "local",
-            "entityName": "producer-process",
-            "entityType": "PROCESS",
-            "instanceTime": "2014-12-18T00:00Z",
-            "tags": "Output"
-        }
-    ],
-    "requestId": "default/1405883107@qtp-1501726962-6-0c2e690f-546b-47b0-a5ee-0365d4522a31\n"
-}
-</verbatim>
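-
-An illustrative sketch that fetches the producers and consumers of one feed instance (same server and library assumptions as the other sketches):
-<verbatim>
-# Sketch: list the instances that produce or consume a given feed instance.
-import requests
-
-FALCON = "http://localhost:15000/api"
-
-params = {"colo": "*", "instanceTime": "2012-04-03T07:00Z"}
-deps = requests.get(FALCON + "/instance/dependencies/feed/myFeed", params=params).json()
-for dep in deps.get("dependencies", []):
-    print(dep["tags"], dep["entityType"], dep["entityName"], dep["instanceTime"])
-</verbatim>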
-

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/InstanceKill.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/InstanceKill.twiki b/trunk/releases/master/src/site/twiki/restapi/InstanceKill.twiki
deleted file mode 100644
index eb22945..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/InstanceKill.twiki
+++ /dev/null
@@ -1,44 +0,0 @@
----++  POST /api/instance/kill/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Kill currently running instance(s) of an entity.
-
----++ Parameters
-   * :entity-type can either be a feed or a process.
-   * :entity-name is name of the entity.
-   * start is the start time of the instance(s) that you want to refer to
-   * end is the end time of the instance(s) that you want to refer to
-   * lifecycle <optional param> can be Eviction/Replication(default) for feed and Execution(default) for process.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Result of the kill operation.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-POST http://localhost:15000/api/instance/kill/process/SampleProcess?colo=*&start=2012-04-03T07:00Z&end=2014-04-03T07:00Z&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "instances": [
-        {
-            "details": "",
-            "endTime": "2013-10-21T15:26:59-07:00",
-            "startTime": "2013-10-21T15:19:57-07:00",
-            "cluster": "primary-cluster",
-            "logFile": "http:\/\/localhost:11000\/oozie?job=0000070-131021115933395-oozie-rgau-W",
-            "status": "KILLED",
-            "instance": "2012-04-03T07:00Z"
-        }
-    ],
-    "requestId": "default\/23b3cfee-ee22-40c0-825d-39c322587d5f\n",
-    "message": "default\/KILL\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/InstanceList.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/InstanceList.twiki b/trunk/releases/master/src/site/twiki/restapi/InstanceList.twiki
deleted file mode 100644
index 214c22f..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/InstanceList.twiki
+++ /dev/null
@@ -1,151 +0,0 @@
----++  GET /api/instance/list/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get list of all instances of a given entity.
-
----++ Parameters
-   * :entity-type Valid options are cluster, feed or process.
-   * :entity-name Name of the entity.
-   * start <optional param> Show instances from this date. Date format is yyyy-MM-dd'T'HH:mm'Z'.
-      * By default, it is set to (end - (10 * entityFrequency)).
-   * end <optional param> Show instances up to this date. Date format is yyyy-MM-dd'T'HH:mm'Z'.
-      * Default is set to now.
-   * colo <optional param> Colo on which the query should be run.
-   * lifecycle <optional param> Valid lifecycles for feed are Eviction/Replication(default) and for process is Execution(default).
-   * filterBy <optional param>  Filter results by list of field:value pairs. Example: filterBy=STATUS:RUNNING,CLUSTER:primary-cluster
-      * Supported filter fields are STATUS, CLUSTER, SOURCECLUSTER, STARTEDAFTER.
-      * Query will do an AND among filterBy fields.
-   * orderBy <optional param> Field by which results should be ordered.
-      * Supports ordering by  "status","startTime","endTime","cluster".
-   * sortOrder <optional param> Valid options are "asc" and "desc"
-   * offset <optional param> Show results from the offset, used for pagination. Defaults to 0.
-   * numResults <optional param> Number of results to show per request, used for pagination. Only integers > 0 are valid; default is 10.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-   * allAttempts <optional query param> To get all the attempts for corresponding instances.
-   
----++ Results
-List of instances of given entity.
-
-The possible instance statuses returned and their meanings are as follows:
-   * WAITING - The instance is waiting for the corresponding data(feed) instances to become available.
-   * READY - The instance is ready to be scheduled but is waiting for scheduling conditions to be met, for example a limit on the number of instances that can run in parallel.
-   * RUNNING - The instance is running on the workflow engine.
-   * FAILED - The instance has failed during execution.
-   * KILLED - The instance has been killed either manually or by the system.
-   * SUCCEEDED - The instance has executed successfully.
-   * SKIPPED - The instance was not executed but skipped. For example, when the execution order is LAST_ONLY, the older instances are skipped.
-   * ERROR - There was an error while executing this instance on the workflow engine.
-   * UNDEFINED - The status of the instance could not be determined.
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/instance/list/process/SampleProcess?colo=*&start=2012-04-03T07:00Z
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "instances": [
-        {
-            "details": "",
-            "endTime": "2013-10-21T14:40:26-07:00",
-            "startTime": "2013-10-21T14:39:56-07:00",
-            "cluster": "primary-cluster",
-            "logFile": "http:\/\/localhost:11000\/oozie?job=0000070-131021115933395-oozie-rgau-W",
-            "status": "SUCCEEDED",
-            "instance": "2012-04-03T07:00Z"
-        }
-    ],
-    "requestId": "default\/e15bb378-d09f-4911-9df2-5334a45153d2\n",
-    "message": "default\/STATUS\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>
-
-
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/instance/list/process/SampleProcess?colo=*&start=2012-04-03T07:00Z&filterBy=STATUS:SUCCEEDED,CLUSTER:primary-cluster&orderBy=startTime&offset=2&numResults=2&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "instances": [
-        {
-            "details": "",
-            "endTime": "2013-10-21T14:40:26-07:00",
-            "startTime": "2013-10-21T14:39:56-07:00",
-            "cluster": "primary-cluster",
-            "logFile": "http:\/\/localhost:11000\/oozie?job=0000070-131021115933395-oozie-rgau-W",
-            "status": "SUCCEEDED",
-            "instance": "2012-04-03T07:00Z"
-        },
-        {
-            "details": "",
-            "endTime": "2013-10-21T14:42:26-07:00",
-            "startTime": "2013-10-21T14:41:56-07:00",
-            "cluster": "primary-cluster",
-            "logFile": "http:\/\/localhost:11000\/oozie?job=0000070-131021115933397-oozie-rgau-W",
-            "status": "SUCCEEDED",
-            "instance": "2012-04-03T08:00Z"
-        }
-    ],
-
-    "requestId": "default\/e15bb378-d09f-4911-9df2-5334a45153d2\n",
-    "message": "default\/STATUS\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>
-
-
----+++ Rest Call
-<verbatim>
-GET https://localhost:15443/api/instance/status/process/oozie-mr-process?user.name=narayan&start=2013-11-15T00:05Z&end=2013-11-15T01:00Z&colo=*&offset=0&allAttempts=true
-</verbatim>
----+++ Result
-<verbatim>
-{
-   "status":"SUCCEEDED",
-   "message":"default/STATUS\n",
-   "requestId":"default/942519651@qtp-1386909980-16 - 5b11a8ba-402b-4cc7-969c-256e0ed18ae2\n",
-   "instances":[
-      {
-         "instance":"2013-11-15T00:05Z",
-         "status":"SUCCEEDED",
-         "logFile":"http://IM1948-X0:11000/oozie?job=0000010-160106121750678-oozie-oozi-W",
-         "cluster":"local",
-         "startTime":"2016-01-06T12:39:22+05:30",
-         "endTime":"2016-01-06T12:40:05+05:30",
-         "runId":0,
-         "details":"",
-         "actions":[
-            {
-               "action":"mr-node",
-               "status":"SUCCEEDED",
-               "logFile":"http://localhost:8088/proxy/application_1452062826344_0010/"
-            }
-         ]
-      },
-      {
-         "instance":"2013-11-15T00:05Z",
-         "status":"SUCCEEDED",
-         "logFile":"http://IM1948-X0:11000/oozie?job=0000011-160106121750678-oozie-oozi-W",
-         "cluster":"local",
-         "startTime":"2016-01-06T12:40:27+05:30",
-         "endTime":"2016-01-06T12:41:05+05:30",
-         "runId":0,
-         "details":"",
-         "actions":[
-            {
-               "action":"mr-node",
-               "status":"SUCCEEDED",
-               "logFile":"http://localhost:8088/proxy/application_1452062826344_0012/"
-            }
-         ]
-      }
-   ]
-}
-</verbatim>
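-
-The offset and numResults parameters make it straightforward to page through long instance histories. An illustrative pagination sketch follows (same server and library assumptions as the other sketches; the single-result case is normalised defensively):
-<verbatim>
-# Sketch: page through all SUCCEEDED instances of SampleProcess, one page at a time.
-import requests
-
-FALCON = "http://localhost:15000/api"
-PAGE = 10
-
-offset = 0
-while True:
-    params = {
-        "colo": "*",
-        "start": "2012-04-03T07:00Z",
-        "filterBy": "STATUS:SUCCEEDED",
-        "orderBy": "startTime",
-        "offset": offset,
-        "numResults": PAGE,
-    }
-    page = requests.get(FALCON + "/instance/list/process/SampleProcess", params=params).json()
-    instances = page.get("instances", [])
-    if isinstance(instances, dict):                # defensive: a single result may come back as an object
-        instances = [instances]
-    if not instances:
-        break
-    for inst in instances:
-        print(inst["instance"], inst["status"], inst["cluster"])
-    offset += len(instances)
-</verbatim>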

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/InstanceLogs.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/InstanceLogs.twiki b/trunk/releases/master/src/site/twiki/restapi/InstanceLogs.twiki
deleted file mode 100644
index 1e1c98d..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/InstanceLogs.twiki
+++ /dev/null
@@ -1,113 +0,0 @@
----++ GET /api/instance/logs/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get log of a specific instance of an entity.
-
----++ Parameters
-   * :entity-type Valid options are cluster, feed or process.
-   * :entity-name Name of the entity.
-   * start <optional param> Show instances from this date. Date format is yyyy-MM-dd'T'HH:mm'Z'.
-      * By default, it is set to (end - (10 * entityFrequency)).
-   * end <optional param> Show instances up to this date. Date format is yyyy-MM-dd'T'HH:mm'Z'.
-      * Default is set to now.
-   * colo <optional param> Colo on which the query should be run.
-   * runId <optional param> Run Id.
-   * lifecycle <optional param> Valid lifecycles for feed are Eviction/Replication(default) and for process is Execution(default).
-   * filterBy <optional param>  Filter results by list of field:value pairs. Example: filterBy=STATUS:RUNNING,CLUSTER:primary-cluster
-      * Supported filter fields are STATUS, CLUSTER, SOURCECLUSTER, STARTEDAFTER.
-      * Query will do an AND among filterBy fields.
-   * orderBy <optional param> Field by which results should be ordered.
-      * Supports ordering by "status","startTime","endTime","cluster".
-   * sortOrder <optional param> Valid options are "asc" and "desc"
-   * offset <optional param> Show results from the offset, used for pagination. Defaults to 0.
-   * numResults <optional param> Number of results to show per request, used for pagination. Only integers > 0 are valid; default is 10.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Log of specified instance.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/instance/logs/process/SampleProcess?colo=*&start=2012-04-03T07:00Z
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "instances": [
-        {
-            "actions": [
-                {
-                    "logFile": "http:\/\/localhost:50070\/data\/apps\/falcon\/staging\/falcon\/workflows\/process\/SampleProcess\/logs\/job-2012-04-03-07-00\/000\/pig_SUCCEEDED.log",
-                    "status": "SUCCEEDED",
-                    "action": "pig"
-                }
-            ],
-            "details": "",
-            "endTime": "2013-10-21T14:40:26-07:00",
-            "startTime": "2013-10-21T14:39:56-07:00",
-            "cluster": "primary-cluster",
-            "logFile": "http:\/\/localhost:50070\/data\/apps\/falcon\/staging\/falcon\/workflows\/process\/SampleProcess\/logs\/job-2012-04-03-07-00\/000\/oozie.log",
-            "status": "SUCCEEDED",
-            "instance": "2012-04-03T07:00Z"
-        }
-    ],
-    "requestId": "default\/3527038e-8334-4e50-8173-76c4fa430d0b\n",
-    "message": "default\/STATUS\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>
-
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/instance/logs/process/SampleProcess?colo=*&start=2012-04-03T07:00Z&filterBy=STATUS:SUCCEEDED,CLUSTER:primary-cluster&orderBy=startTime&offset=2&numResults=2&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "instances": [
-        {
-            "actions": [
-                {
-                    "logFile": "http:\/\/localhost:50070\/data\/apps\/falcon\/staging\/falcon\/workflows\/process\/SampleProcess\/logs\/job-2012-04-03-07-00\/000\/pig_SUCCEEDED.log",
-                    "status": "SUCCEEDED",
-                    "action": "pig"
-                }
-            ],
-            "details": "",
-            "endTime": "2013-10-21T14:40:26-07:00",
-            "startTime": "2013-10-21T14:39:56-07:00",
-            "cluster": "primary-cluster",
-            "logFile": "http:\/\/localhost:50070\/data\/apps\/falcon\/staging\/falcon\/workflows\/process\/SampleProcess\/logs\/job-2012-04-03-07-00\/000\/oozie.log",
-            "status": "SUCCEEDED",
-            "instance": "2012-04-03T07:00Z"
-        },
-        {
-            "actions": [
-                {
-                    "logFile": "http:\/\/localhost:50070\/data\/apps\/falcon\/staging\/falcon\/workflows\/process\/SampleProcess\/logs\/job-2012-04-03-07-00\/001\/pig_SUCCEEDED.log",
-                    "status": "SUCCEEDED",
-                    "action": "pig"
-                }
-            ],
-            "details": "",
-            "endTime": "2013-10-21T14:42:27-07:00",
-            "startTime": "2013-10-21T14:41:57-07:00",
-            "cluster": "primary-cluster",
-            "logFile": "http:\/\/localhost:50070\/data\/apps\/falcon\/staging\/falcon\/workflows\/process\/SampleProcess\/logs\/job-2012-04-03-07-00\/001\/oozie.log",
-            "status": "SUCCEEDED",
-            "instance": "2012-04-03T08:00Z"
-        }
-    ],
-    "requestId": "default\/3527038e-8334-4e50-8173-76c4fa430d0b\n",
-    "message": "default\/STATUS\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>
-
-
-

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/InstanceParams.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/InstanceParams.twiki b/trunk/releases/master/src/site/twiki/restapi/InstanceParams.twiki
deleted file mode 100644
index 7a340a5..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/InstanceParams.twiki
+++ /dev/null
@@ -1,83 +0,0 @@
----++  GET /api/instance/params/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get the params passed to the workflow for an instance of feed/process.
-
----++ Parameters
-   * :entity-type Valid options are cluster, feed or process.
-   * :entity-name Name of the entity.
-   * start should be the nominal time of the instance for which you want the params to be returned
-   * colo <optional param> Colo on which the query should be run.
-   * lifecycle <optional param> Valid lifecycles for feed are Eviction/Replication(default) and for process is Execution(default).
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
-
----++ Results
-Params passed to the workflow for the specified instance.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://userqa.user.com:16000/api/instance/params/process/Sample-Process?start=2014-10-01T11:00Z&colo=*&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "status": "SUCCEEDED",
-    "message": "ua1/PARAMS\n",
-    "requestId": "ua1/807e9fe6-ba60-490e-b720-f8dc8b92063e\n",
-    "instances": [
-        {
-            "instance": "2014-10-01T11:00Z",
-            "status": "RUNNING",
-            "logFile": "http://spyke.user.com:11000/oozie?job=0000211-141117203201940-oozie-oozi-W",
-            "cluster": "sample-cluster",
-            "startTime": "2014-11-19T19:46:29+08:00",
-            "details": "",
-            "actions": [
-                {
-                    "action": "succeeded-post-processing",
-                    "status": "RUNNING",
-                    "logFile": "http://spyke.user.com:50030/jobdetails.jsp?jobid=job_201411071450_1052"
-                }
-            ],
-            "params": {
-                "entry": {"key": "jobTracker", "value": "10.16.114.113:8021"},
-                "entry":{"key":"falconInputNames","value":"IGNORE"},
-                "entry":{"key":"shouldRecord","value":"false"},
-                "entry":{"key":"timeStamp","value":"2014-11-19-11-46"},
-                "entry":{"key":"falconInPaths","value":"IGNORE"},
-                "entry":{"key":"broker.url","value":"tcp://localhost:61616"},
-                "entry":{"key":"feedNames","value":"NONE"},
-                "entry":{"key":"falcon.libpath","value":"/path/falcon/sample/lib"},
-                "entry":{"key":"ENTITY_PATH","value":"/path/falcon/staging/falcon/workflows/process/Sample-Process/9506be19980e0e6fdb709e1baffff_1416397585511/DEFAULT"},
-                "entry":{"key":"entityType","value":"process"},
-                "entry":{"key":"nominalTime","value":"2014-10-01-11-00"},
-                "entry":{"key":"feedInstancePaths","value":"IGNORE"},
-                "entry":{"key":"oozie.bundle.application.path","value":"hdfs://10.16.104.13:8020/path/falcon/staging/falcon/workflows/process/Sample-Process/9506be19980e0e669709e1baffff_1416397585511"},
-                "entry":{"key":"logDir","value":"hdfs://10.16.104.13:8020/path/falcon/staging/falcon/workflows/process/Sample-Process/logs"},
-                "entry":{"key":"userWorkflowEngine","value":"oozie"},
-                "entry":{"key":"broker.ttlInMins","value":"4320"},
-                "entry":{"key":"oozie.use.system.libpath","value":"true"},
-                "entry":{"key":"queueName","value":"reports"},
-                "entry":{"key":"falconDataOperation","value":"GENERATE"},
-                "entry":{"key":"oozie.wf.external.id","value":"Sample-Process/DEFAULT/2014-10-01T11:00Z"},
-                "entry":{"key":"workflowEngineUrl","value":"http://10.11.100.10:11000/oozie/"},
-                "entry":{"key":"userBrokerImplClass","value":"org.apache.activemq.ActiveMQConnectionFactory"},
-                "entry":{"key":"ENTITY_NAME","value":"FALCON_PROCESS_DEFAULT_Sample-Process"},
-                "entry":{"key":"broker.impl.class","value":"org.apache.activemq.ActiveMQConnectionFactory"},
-                "entry":{"key":"userWorkflowName","value":"Sample-workflow"},
-                "entry":{"key":"entityName","value":"Sample-Process"},
-                "entry":{"key":"srcClusterName","value":"NA"},
-                "entry":{"key":"userBrokerUrl","value":"tcp://localhost:61616?daemon=true"},
-                "entry":{"key":"falconInputFeeds","value":"NONE"},
-                "entry":{"key":"user.name","value":"sampleuser"},
-                "entry":{"key":"threedaysback","value":"2014-09-28"},
-                "entry":{"key":"userWorkflowVersion","value":"1.0"}
-            }
-        }
-    ]
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/InstanceRerun.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/InstanceRerun.twiki b/trunk/releases/master/src/site/twiki/restapi/InstanceRerun.twiki
deleted file mode 100644
index eef0e1a..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/InstanceRerun.twiki
+++ /dev/null
@@ -1,66 +0,0 @@
----++  POST /api/instance/rerun/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Rerun instances of an entity. On issuing a rerun, by default the execution resumes from the last failed node in the workflow.
-
----++ Parameters
-   * :entity-type can either be a feed or a process.
-   * :entity-name is name of the entity.
-   * start is the start time of the instance that you want to refer to
-   * end is the end time of the instance that you want to refer to
-   * lifecycle <optional param> can be Eviction/Replication(default) for feed and Execution(default) for process.
-   * force <optional param> can be used to forcefully rerun the entire instance.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Results of the rerun command.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-POST http://localhost:15000/api/instance/rerun/process/SampleProcess?colo=*&start=2013-04-03T07:00Z&end=2014-04-03T07:00Z
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "instances": [
-        {
-            "details": "",
-            "startTime": "2013-10-21T15:10:47-07:00",
-            "cluster": "primary-cluster",
-            "logFile": "http:\/\/localhost:11000\/oozie?job=0000070-131021115933395-oozie-rgau-W",
-            "status": "RUNNING",
-            "instance": "2012-04-03T07:00Z"
-        }
-    ],
-    "requestId": "default\/7a3582bd-608c-45a7-9b74-1837b51ba6d5\n",
-    "message": "default\/RERUN\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>
-
-<verbatim>
-POST http://localhost:15000/api/instance/rerun/process/SampleProcess?colo=*&start=2013-04-03T07:00Z&end=2014-04-03T07:00Z&force=true&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "instances": [
-        {
-            "details": "",
-            "startTime": "2013-10-21T15:10:47-07:00",
-            "cluster": "primary-cluster",
-            "logFile": "http:\/\/localhost:11000\/oozie?job=0000070-131021115933395-oozie-rgau-W",
-            "status": "RUNNING",
-            "instance": "2012-04-03T07:00Z"
-        }
-    ],
-    "requestId": "default\/7a3582bd-608c-45a7-9b74-1837b51ba6d5\n",
-    "message": "default\/RERUN\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/InstanceResume.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/InstanceResume.twiki b/trunk/releases/master/src/site/twiki/restapi/InstanceResume.twiki
deleted file mode 100644
index 1254785..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/InstanceResume.twiki
+++ /dev/null
@@ -1,43 +0,0 @@
----++  POST /api/instance/resume/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Resume suspended instances of an entity.
-
----++ Parameters
-   * :entity-type can either be a feed or a process.
-   * :entity-name is name of the entity.
-   * start is the start time of the instance(s) that you want to refer to
-   * end is the end time of the instance(s) that you want to refer to
-   * lifecycle <optional param> can be Eviction/Replication(default) for feed and Execution(default) for process.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Results of the resume command.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-POST http://localhost:15000/api/instance/resume/process/SampleProcess?colo=*&start=2012-04-03T07:00Z&end=2014-04-03T07:00Z&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "instances": [
-        {
-            "details": "",
-            "startTime": "2013-10-21T15:19:57-07:00",
-            "cluster": "primary-cluster",
-            "logFile": "http:\/\/localhost:11000\/oozie?job=0000070-131021115933395-oozie-rgau-W",
-            "status": "RUNNING",
-            "instance": "2012-04-03T07:00Z"
-        }
-    ],
-    "requestId": "default\/e88ff2e0-2af7-4829-a360-f92e95be2981\n",
-    "message": "default\/RESUME\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/InstanceRunning.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/InstanceRunning.twiki b/trunk/releases/master/src/site/twiki/restapi/InstanceRunning.twiki
deleted file mode 100644
index 3d1cabc..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/InstanceRunning.twiki
+++ /dev/null
@@ -1,84 +0,0 @@
----++  GET /api/instance/running/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get a list of instances currently running for a given entity.
-
----++ Parameters
-   * :entity-type Valid options are cluster, feed or process.
-   * :entity-name Name of the entity.
-   * colo <optional param> Colo on which the query should be run.
-   * lifecycle <optional param> Valid lifecycles for feed are Eviction/Replication(default) and for process is Execution(default).
-   * filterBy <optional param>  Filter results by list of field:value pairs. Example: filterBy=CLUSTER:primary-cluster
-      * Supported filter fields are CLUSTER, SOURCECLUSTER, STARTEDAFTER.
-      * Query will do an AND among filterBy fields.
-   * orderBy <optional param> Field by which results should be ordered.
-      * Supports ordering by "status","startTime","endTime","cluster".
-   * sortOrder <optional param> Valid options are "asc" and "desc"
-   * offset <optional param> Show results from the offset, used for pagination. Defaults to 0.
-   * numResults <optional param> Number of results to show per request, used for pagination. Only integers > 0 are valid, Default is 10.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-List of instances currently running.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/instance/running/process/SampleProcess?colo=*
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "instances": [
-        {
-            "startTime": "2013-10-21T14:39:28-07:00",
-            "cluster": "primary-cluster",
-            "status": "RUNNING",
-            "instance": "2012-04-03T06:00Z"
-        }
-    ],
-    "requestId": "default\/12e9a7d4-3b4f-4a76-b471-c8f3786a62a0\n",
-    "message": "default\/Running Instances\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>
-
-
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/instance/running/process/SampleProcess?colo=*&start=2012-04-03T07:00Z&filterBy=CLUSTER:primary-cluster&orderBy=startTime&offset=2&numResults=2&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "instances": [
-        {
-            "details": "",
-            "endTime": "2013-10-21T14:40:26-07:00",
-            "startTime": "2013-10-21T14:39:56-07:00",
-            "cluster": "primary-cluster",
-            "logFile": "http:\/\/localhost:11000\/oozie?job=0000070-131021115933395-oozie-rgau-W",
-            "status": "RUNNING",
-            "instance": "2012-04-03T07:00Z"
-        },
-        {
-            "details": "",
-            "endTime": "2013-10-21T14:42:27-07:00",
-            "startTime": "2013-10-21T14:41:57-07:00",
-            "cluster": "primary-cluster",
-            "logFile": "http:\/\/localhost:11000\/oozie?job=0000070-131021115933397-oozie-rgau-W",
-            "status": "RUNNING",
-            "instance": "2012-04-03T08:00Z"
-        }
-    ],
-    "requestId": "default\/e15bb378-d09f-4911-9df2-5334a45153d2\n",
-    "message": "default\/STATUS\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>
-

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/InstanceStatus.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/InstanceStatus.twiki b/trunk/releases/master/src/site/twiki/restapi/InstanceStatus.twiki
deleted file mode 100644
index 2b7b643..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/InstanceStatus.twiki
+++ /dev/null
@@ -1,98 +0,0 @@
----++  GET /api/instance/status/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get status of a specific instance of an entity.
-
----++ Parameters
-   * :entity-type Valid options are cluster, feed or process.
-   * :entity-name Name of the entity.
-   * start <optional param> Show instances from this date. Date format is yyyy-MM-dd'T'HH:mm'Z'.
-      * By default, it is set to (end - (10 * entityFrequency)).
-   * end <optional param> Show instances up to this date. Date format is yyyy-MM-dd'T'HH:mm'Z'.
-      * Default is set to now.
-   * colo <optional param> Colo on which the query should be run.
-   * lifecycle <optional param> Valid lifecycles for feed are Eviction/Replication(default) and for process is Execution(default).
-   * filterBy <optional param>  Filter results by list of field:value pairs. Example: filterBy=STATUS:RUNNING,CLUSTER:primary-cluster
-      * Supported filter fields are STATUS, CLUSTER, SOURCECLUSTER, STARTEDAFTER.
-      * Query will do an AND among filterBy fields.
-   * orderBy <optional param> Field by which results should be ordered.
-      * Supports ordering by "status","startTime","endTime","cluster".
-   * sortOrder <optional param> Valid options are "asc" and "desc"
-   * offset <optional param> Show results from the offset, used for pagination. Defaults to 0.
-   * numResults <optional param> Number of results to show per request, used for pagination. Only integers > 0 are valid, Default is 10.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-   * allAttempts <optional query param> To get all the attempts for corresponding instances.
-   
----++ Results
-Status of the specified instance along with job urls for all actions of user workflow and non-succeeded actions of the main-workflow.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET https://localhost:15443/api/instance/status/process/WordCount?start=2014-11-04T16:00Z&colo=*
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "instances": [
-        {
-            "details": "",
-            "endTime": "2014-11-05T16:08:10+05:30",
-            "startTime": "2014-11-05T16:07:29+05:30",
-            "cluster": "local",
-            "logFile": "http:\/\/localhost:11000\/oozie?job=0000011-141105155430303-oozie-oozi-W",
-            "status": "SUCCEEDED",
-            "instance": "2014-11-04T16:00Z",
-            "actions": [
-                {
-                    "action": "wordcount-mr",
-                    "status": "SUCCEEDED",
-                    "logFile": "http:\/\/localhost:50030\/jobdetails.jsp?jobid=job_201411051553_0005"
-                }
-            ]
-        }
-    ],
-    "requestId": "default\/b9fc3cba-1b46-4d1f-8196-52c795ea3580\n",
-    "message": "default\/STATUS\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>
-
-
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/instance/status/process/SampleProcess?colo=*&start=2012-04-03T07:00Z&filterBy=STATUS:SUCCEEDED,CLUSTER:primary-cluster&orderBy=startTime&offset=2&numResults=2&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "instances": [
-        {
-            "details": "",
-            "endTime": "2013-10-21T14:40:26-07:00",
-            "startTime": "2013-10-21T14:39:56-07:00",
-            "cluster": "primary-cluster",
-            "logFile": "http:\/\/localhost:11000\/oozie?job=0000070-131021115933395-oozie-rgau-W",
-            "status": "SUCCEEDED",
-            "instance": "2012-04-03T07:00Z"
-        },
-        {
-            "details": "",
-            "endTime": "2013-10-21T14:42:26-07:00",
-            "startTime": "2013-10-21T14:41:56-07:00",
-            "cluster": "primary-cluster",
-            "logFile": "http:\/\/localhost:11000\/oozie?job=0000070-131021115933397-oozie-rgau-W",
-            "status": "SUCCEEDED",
-            "instance": "2012-04-03T08:00Z"
-        }
-    ],
-    "requestId": "default\/e15bb378-d09f-4911-9df2-5334a45153d2\n",
-    "message": "default\/STATUS\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/InstanceSummary.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/InstanceSummary.twiki b/trunk/releases/master/src/site/twiki/restapi/InstanceSummary.twiki
deleted file mode 100644
index 0e1ffee..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/InstanceSummary.twiki
+++ /dev/null
@@ -1,114 +0,0 @@
----++  GET /api/instance/summary/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get summary of instance/instances of an entity.
-
----++ Parameters
-   * :entity-type Valid options are feed or process.
-   * :entity-name Name of the entity.
-   * start <optional param> Show instances from this date. Date format is yyyy-MM-dd'T'HH:mm'Z'.
-       * By default, it is set to (end - (10 * entityFrequency)).
-   * end <optional param> Show instances up to this date. Date format is yyyy-MM-dd'T'HH:mm'Z'.
-       * Default is set to now.
-   * colo <optional param> Colo on which the query should be run.
-   * lifecycle <optional param> Valid lifecycles for feed are Eviction/Replication(default) and for process is Execution(default).
-   * filterBy <optional param>  Filter results by list of field:value pairs.
-   Example1: filterBy=STATUS:RUNNING,CLUSTER:primary-cluster
-   Example2: filterBy=Status:RUNNING,Status:KILLED
-       * Supported filter fields are STATUS, CLUSTER
-       * Query will do an AND among filterBy fields.
-   * orderBy <optional param> Field by which results should be ordered.
-       * Supports ordering by "cluster".
-   * sortOrder <optional param> Valid options are "asc" and "desc"
-   Example: orderBy=cluster sortOrder=asc
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Summary of the instances over the specified time range
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/instance/summary/process/WordCount?colo=*&start=2014-01-21T13:00Z&end=2014-01-21T16:00Z
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "status":"SUCCEEDED",
-    "message":"default/SUMMARY\n",
-    "requestId":"default/c344567b-da73-44d5-bcd4-bf456524934c\n",
-    "instancesSummary":
-        {
-            "cluster":"local",
-            "map":
-                {
-                    "entry":
-                        {
-                            "key":"SUCCEEDED",
-                            "value":"value"
-                        }
-                }
-        }
-}
-</verbatim>
-
----+++ Rest Call
-<verbatim>
-GET https://localhost:16443/api/instance/summary/process/WordCount?filterBy=Status:KILLED,Status:RUNNING&start=2015-06-24T16:00Z&end=2015-06-24T23:00Z&colo=*
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "status":"SUCCEEDED",
-    "message":"local/SUMMARY\n",
-    "requestId":"local/1246061948@qtp-1059149611-5 - 34d8c3bb-f461-4fd5-87cd-402c9c6b1ed2\n",
-    "instancesSummary":[
-        {
-            "cluster":"local",
-            "map":{
-                "entry":{
-                    "key":"RUNNING",
-                    "value":"1"
-                },
-                "entry":{
-                    "key":"KILLED",
-                    "value":"1"
-                }
-            }
-        }
-    ]
-}
-</verbatim>
-
----+++ Rest Call
-<verbatim>
-GET https://localhost:16443/api/instance/summary/process/WordCount?orderBy=cluster&sortOrder=asc&start=2015-06-24T16:00Z&end=2015-06-24T23:00Z&colo=*&doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "status":"SUCCEEDED",
-    "message":"local/SUMMARY\n",
-    "requestId":"local/1246061948@qtp-1059149611-5 - 42e2040d-6b6e-4bfd-a090-83db5ed1a429\n",
-    "instancesSummary":[
-        {
-            "cluster":"local",
-            "map":{
-                "entry":{
-                    "key":"SUCCEEDED",
-                    "value":"6"
-                },
-                "entry":{
-                    "key":"KILLED",
-                    "value":"1"
-                }
-            }
-        }
-    ]
-}
-</verbatim>


[3/6] falcon git commit: Deleting accidental check-in of trunk/release/master

Posted by pa...@apache.org.
http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/Security.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/Security.twiki b/trunk/releases/master/src/site/twiki/Security.twiki
deleted file mode 100644
index 8955bdc..0000000
--- a/trunk/releases/master/src/site/twiki/Security.twiki
+++ /dev/null
@@ -1,387 +0,0 @@
----+ Securing Falcon
-
----++ Overview
-
-Apache Falcon enforces authentication and authorization which are detailed below. Falcon also
-provides transport level security ensuring data confidentiality and integrity.
-
-
----++ Authentication (User Identity)
-
-Apache Falcon enforces authentication on protected resources. Once authentication has been established it sets a
-signed HTTP Cookie that contains an authentication token with the user name, user principal,
-authentication type and expiration time.
-
-It does so by using [[http://hadoop.apache.org/docs/current/hadoop-auth/index.html][Hadoop Auth]].
-Hadoop Auth is a Java library consisting of client and server components that enable Kerberos SPNEGO authentication
-for HTTP. Hadoop Auth also supports additional authentication mechanisms on the client and the server side via two
-simple interfaces.
-
-
----+++ Authentication Methods
-
-Falcon supports two authentication methods out of the box: simple and Kerberos.
-
----++++ Pseudo/Simple Authentication
-
-Falcon authenticates the user by simply trusting the value of the query string parameter 'user.name'. This is the
-default mode Falcon is configured with.
-
----++++ Kerberos Authentication
-
-Falcon uses HTTP Kerberos SPNEGO to authenticate the user.
-
-
----++ Authorization
-
-Falcon also enforces authorization on Entities using ACLs (Access Control Lists). ACLs are useful
-for implementing permission requirements and provide a way to set different permissions for
-specific users or named groups.
-
-By default, support for authorization is disabled and can be enabled in startup.properties.
-
----+++ ACLs in Entity
-
-All entities now have an ACL, which must be present if authorization is enabled. Only the owner who
-created the entity is allowed to update or delete it.
-
-An entity has ACLs (Access Control Lists) that are useful for implementing permission requirements
-and provide a way to set different permissions for specific users or named groups.
-<verbatim>
-    <ACL owner="test-user" group="test-group" permission="*"/>
-</verbatim>
-ACL indicates the Access Control List for this entity.
-owner is the owner of this entity.
-group is the group that has read access.
-permission indicates rwx permissions, which are not enforced at this time.
-
----+++ Super-User
-
-The super-user is the user with the same identity as the Falcon process itself. Loosely, if you
-started Falcon, then you are the super-user. The super-user can do anything, in that permission
-checks never fail for the super-user. There is no persistent notion of who the super-user is;
-when Falcon is started, the process identity determines the super-user for that run. The Falcon
-super-user does not have to be the super-user of the Falcon host, nor is it necessary that all
-clusters have the same super-user. Also, an experimenter running Falcon on a personal workstation
-conveniently becomes that installation's super-user without any configuration.
-
-Falcon also allows users to configure a super user group and allows users belonging to this
-group to be a super user.
-
-ACL owner and group must be valid even if the authenticated user is a super-user.
-
----+++ Group Memberships
-
-Once a user has been authenticated and a username has been determined, the list of groups is
-determined by a group mapping service, configured by the hadoop.security.group.mapping property
-in Hadoop. The default implementation, org.apache.hadoop.security.ShellBasedUnixGroupsMapping,
-will shell out to the Unix bash -c groups command to resolve a list of groups for a user.
-
-Note that Falcon stores the user and group of an Entity as strings; there is no
-conversion from user and group identity numbers as is conventional in Unix.
-
-The only limitation is that a user cannot add a group to an ACL that they do not belong to.
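-
-As an illustration only, the following is a minimal sketch of the Hadoop-side setting that selects the default
-group mapping implementation (the property belongs to Hadoop's core-site.xml, not to Falcon's startup.properties):
-
-<verbatim>
-<!-- illustrative sketch: Hadoop's default group mapping, configured in core-site.xml -->
-<property>
-    <name>hadoop.security.group.mapping</name>
-    <value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value>
-</property>
-</verbatim>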
-
----+++ Authorization Provider
-
-Falcon provides a pluggable provider interface for Authorization. It also ships with a default
-implementation that enforces the following authorization policy.
-
----++++ Entity and Instance Management Operations Policy
-
-   * All Entity and Instance operations are authorized for the users who created them (owners) and for users with matching group memberships
-   * References to entities within a feed or process are allowed without enforcing permissions
-
-Any Feed or Process can refer to a Cluster entity not owned by the Feed or Process owner. Any Process can refer to a Feed entity not owned by the Process owner.
-
-The authorization is enforced in the following way:
-
-   * if admin resource,
-      * If authenticated user name matches the admin users configuration
-      * Else if groups of the authenticated user matches the admin groups configuration
-      * Else authorization exception is thrown
-   * Else if entities or instance resource
-      * If the authenticated user matches the owner in ACL for the entity
-      * Else if the groups of the authenticated user matches the group in ACL for the entity
-      * Else authorization exception is thrown
-   * Else if lineage resource
-      * All have read-only permissions, reason being folks should be able to examine the dependency and allow reuse
-
-To authenticate a user for REST API calls, append "user.name=<username>" to the query string.
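-
-For example, a minimal sketch (assuming simple authentication, the default port, and an illustrative user "joe")
-of listing process entities:
-
-<verbatim>
-GET http://localhost:15000/api/entities/list/process?user.name=joe
-</verbatim>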
-
-*operations on Entity Resource*
-
-| *Resource*                                                                          | *Description*                      | *Authorization* |
-| [[restapi/EntityValidate][api/entities/validate/:entity-type]]                      | Validate the entity                | Owner/Group     |
-| [[restapi/EntitySubmit][api/entities/submit/:entity-type]]                          | Submit the entity                  | Owner/Group     |
-| [[restapi/EntityUpdate][api/entities/update/:entity-type/:entity-name]]             | Update the entity                  | Owner/Group     |
-| [[restapi/EntitySubmitAndSchedule][api/entities/submitAndSchedule/:entity-type]]    | Submit & Schedule the entity       | Owner/Group     |
-| [[restapi/EntitySchedule][api/entities/schedule/:entity-type/:entity-name]]         | Schedule the entity                | Owner/Group     |
-| [[restapi/EntitySuspend][api/entities/suspend/:entity-type/:entity-name]]           | Suspend the entity                 | Owner/Group     |
-| [[restapi/EntityResume][api/entities/resume/:entity-type/:entity-name]]             | Resume the entity                  | Owner/Group     |
-| [[restapi/EntityDelete][api/entities/delete/:entity-type/:entity-name]]             | Delete the entity                  | Owner/Group     |
-| [[restapi/EntityStatus][api/entities/status/:entity-type/:entity-name]]             | Get the status of the entity       | Owner/Group     |
-| [[restapi/EntityDefinition][api/entities/definition/:entity-type/:entity-name]]     | Get the definition of the entity   | Owner/Group     |
-| [[restapi/EntityList][api/entities/list/:entity-type?fields=:fields]]               | Get the list of entities           | Owner/Group     |
-| [[restapi/EntityDependencies][api/entities/dependencies/:entity-type/:entity-name]] | Get the dependencies of the entity | Owner/Group     |
-
-*REST Call on Feed and Process Instances*
-
-| *Resource*                                                                  | *Description*                | *Authorization* |
-| [[restapi/InstanceRunning][api/instance/running/:entity-type/:entity-name]] | List of running instances.   | Owner/Group     |
-| [[restapi/InstanceStatus][api/instance/status/:entity-type/:entity-name]]   | Status of a given instance   | Owner/Group     |
-| [[restapi/InstanceKill][api/instance/kill/:entity-type/:entity-name]]       | Kill a given instance        | Owner/Group     |
-| [[restapi/InstanceSuspend][api/instance/suspend/:entity-type/:entity-name]] | Suspend a running instance   | Owner/Group     |
-| [[restapi/InstanceResume][api/instance/resume/:entity-type/:entity-name]]   | Resume a given instance      | Owner/Group     |
-| [[restapi/InstanceRerun][api/instance/rerun/:entity-type/:entity-name]]     | Rerun a given instance       | Owner/Group     |
-| [[InstanceLogs][api/instance/logs/:entity-type/:entity-name]]               | Get logs of a given instance | Owner/Group     |
-
----++++ Admin Resources Policy
-
-Only users belonging to admin users or groups have access to this resource. Admin membership is
-determined by a static configuration parameter.
-
-| *Resource*                                             | *Description*                               | *Authorization*  |
-| [[restapi/AdminVersion][api/admin/version]]            | Get version of the server                   | No restriction   |
-| [[restapi/AdminStack][api/admin/stack]]                | Get stack of the server                     | Admin User/Group |
-| [[restapi/AdminConfig][api/admin/config/:config-type]] | Get configuration information of the server | Admin User/Group |
-
-
----++++ Lineage Resource Policy
-
-Lineage is read-only and hence all users can look at lineage for their respective entities.
-*Note:* This gap will be fixed in a later release.
-
-
----++ Authentication Configuration
-
-Following is the Server Side Configuration Setup for Authentication.
-
----+++ Common Configuration Parameters
-
-<verbatim>
-# Authentication type must be specified: simple|kerberos
-*.falcon.authentication.type=kerberos
-</verbatim>
-
----+++ Kerberos Configuration
-
-<verbatim>
-##### Service Configuration
-
-# Indicates the Kerberos principal to be used in Falcon Service.
-*.falcon.service.authentication.kerberos.principal=falcon/_HOST@EXAMPLE.COM
-
-# Location of the keytab file with the credentials for the Service principal.
-*.falcon.service.authentication.kerberos.keytab=/etc/security/keytabs/falcon.service.keytab
-
-# name node principal to talk to config store
-*.dfs.namenode.kerberos.principal=nn/_HOST@EXAMPLE.COM
-
-# Indicates how long (in seconds) falcon authentication token is valid before it has to be renewed.
-*.falcon.service.authentication.token.validity=86400
-
-##### SPNEGO Configuration
-
-# Authentication type must be specified: simple|kerberos|<class>
-# org.apache.falcon.security.RemoteUserInHeaderBasedAuthenticationHandler can be used for backwards compatibility
-*.falcon.http.authentication.type=kerberos
-
-# Indicates how long (in seconds) an authentication token is valid before it has to be renewed.
-*.falcon.http.authentication.token.validity=36000
-
-# The signature secret for signing the authentication tokens.
-*.falcon.http.authentication.signature.secret=falcon
-
-# The domain to use for the HTTP cookie that stores the authentication token.
-*.falcon.http.authentication.cookie.domain=
-
-# Indicates if anonymous requests are allowed when using 'simple' authentication.
-*.falcon.http.authentication.simple.anonymous.allowed=true
-
-# Indicates the Kerberos principal to be used for HTTP endpoint.
-# The principal MUST start with 'HTTP/' as per Kerberos HTTP SPNEGO specification.
-*.falcon.http.authentication.kerberos.principal=HTTP/_HOST@EXAMPLE.COM
-
-# Location of the keytab file with the credentials for the HTTP principal.
-*.falcon.http.authentication.kerberos.keytab=/etc/security/keytabs/spnego.service.keytab
-
-# The Kerberos name rules are used to resolve Kerberos principal names; refer to Hadoop's KerberosName for more details.
-*.falcon.http.authentication.kerberos.name.rules=DEFAULT
-
-# Comma separated list of black listed users
-*.falcon.http.authentication.blacklisted.users=
-
-# Increase Jetty request buffer size to accommodate the generated Kerberos token
-*.falcon.jetty.request.buffer.size=16192
-</verbatim>
-
----+++ Pseudo/Simple Configuration
-
-<verbatim>
-##### SPNEGO Configuration
-
-# Authentication type must be specified: simple|kerberos|<class>
-# org.apache.falcon.security.RemoteUserInHeaderBasedAuthenticationHandler can be used for backwards compatibility
-*.falcon.http.authentication.type=simple
-
-# Indicates how long (in seconds) an authentication token is valid before it has to be renewed.
-*.falcon.http.authentication.token.validity=36000
-
-# The signature secret for signing the authentication tokens.
-*.falcon.http.authentication.signature.secret=falcon
-
-# The domain to use for the HTTP cookie that stores the authentication token.
-*.falcon.http.authentication.cookie.domain=
-
-# Indicates if anonymous requests are allowed when using 'simple' authentication.
-*.falcon.http.authentication.simple.anonymous.allowed=true
-
-# Comma separated list of black listed users
-*.falcon.http.authentication.blacklisted.users=
-</verbatim>
-
----++ Authorization Configuration
-
----+++ Enabling Authorization
-By default, support for authorization is disabled and specifying ACLs in entities is optional.
-To enable support for authorization, set falcon.security.authorization.enabled to true in the
-startup configuration.
-
-<verbatim>
-# Authorization Enabled flag: false|true
-*.falcon.security.authorization.enabled=true
-</verbatim>
-
----+++ Authorization Provider
-
-Falcon provides a basic bundled implementation for Authorization, org.apache.falcon.security.DefaultAuthorizationProvider.
-This can be overridden by custom implementations in the startup configuration.
-
-<verbatim>
-# Authorization Provider Fully Qualified Class Name
-*.falcon.security.authorization.provider=org.apache.falcon.security.DefaultAuthorizationProvider
-</verbatim>
-
----+++ Super User Group
-
-Super user group is determined by the configuration:
-
-<verbatim>
-# The name of the group of super-users
-*.falcon.security.authorization.superusergroup=falcon
-</verbatim>
-
----+++ Admin Membership
-
-Administrative users are determined by the configuration:
-
-<verbatim>
-# Admin Users, comma separated users
-*.falcon.security.authorization.admin.users=falcon,ambari-qa,seetharam
-</verbatim>
-
-Administrative groups are determined by the configuration:
-
-<verbatim>
-# Admin Group Membership, comma separated users
-*.falcon.security.authorization.admin.groups=falcon,testgroup,staff
-</verbatim>
-
-
----++ SSL
-
-Falcon provides transport level security ensuring data confidentiality and integrity. This is
-enabled by default for communicating over HTTP between the client and the server.
-
----+++ SSL Configuration
-
-<verbatim>
-*.falcon.enableTLS=true
-*.keystore.file=/path/to/keystore/file
-*.keystore.password=password
-</verbatim>
-
----+++ Distributed Falcon Setup
-
-Falcon should be configured to communicate with Prism over TLS in secure mode. It's not enabled by default.
-
-
----++ Changes to ownership and permissions of directories managed by Falcon
-
-| *Directory*              | *Location*                                                        | *Owner* | *Permissions* |
-| Configuration Store      | ${config.store.uri}                                               | falcon  | 700           |
-| Cluster Staging Location | ${cluster.staging-location}                                       | falcon  | 777           |
-| Cluster Working Location | ${cluster.working-location}                                       | falcon  | 755           |
-| Shared libs              | {cluster.working}/{lib,libext}                                    | falcon  | 755           |
-| Oozie coord/bundle XMLs  | ${cluster.staging-location}/workflows/{entity}/{entity-name}      | $user   | cluster umask |
-| App logs                 | ${cluster.staging-location}/workflows/{entity}/{entity-name}/logs | $user   | cluster umask |
-
-*Note:* The cluster staging and working locations MUST be created prior to
-submitting a cluster entity to Falcon. Also note that the parent directories must have execute
-permissions.
-
-
----++ Backwards compatibility
-
----+++ Scheduled Entities
-
-Entities already scheduled with an earlier version of Falcon are not compatible with this version.
-
----+++ Falcon Clients
-
-Older Falcon clients are backwards compatible with respect to authentication; user information sent as part of the HTTP
-header, Remote-User, is still honoured when the authentication type is configured as below:
-
-<verbatim>
-*.falcon.http.authentication.type=org.apache.falcon.security.RemoteUserInHeaderBasedAuthenticationHandler
-</verbatim>
-
----+++ Blacklisted super users for authentication
-
-The user blacklist used to contain the following super users: hdfs, mapreduce, oozie, and falcon.
-The list has been externalized from code into the startup.properties file; it is now empty and needs to be
-configured explicitly in that file.
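-
-For example, a minimal sketch of re-instating the old default blacklist in startup.properties (values are
-illustrative):
-
-<verbatim>
-# Illustrative values re-instating the previous default blacklist
-*.falcon.http.authentication.blacklisted.users=hdfs,mapreduce,oozie,falcon
-</verbatim>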
-
-
----+++ Falcon Dashboard
-
-To initialize the current user for the dashboard, append the query param "user.name=<username>" to the REST API call.
-
-If a dashboard user wishes to change the current user, they should do the following:
-   * delete the hadoop.auth cookie from the browser cache.
-   * append the query param "user.name=<new_user>" to the next REST API call.
-
-With the Kerberos method, the browser must support HTTP Kerberos SPNEGO.
-
-
----++ Known Limitations
-
-   * ActiveMQ topics are not secure but will be in the near future
-   * Entities already scheduled with an earlier version of Falcon are not compatible with this version, as new
-   workflow parameters (such as the user) are now required to be passed back into Falcon
-   * Use of hftp as the scheme for the read-only interface in the cluster entity [[https://issues.apache.org/jira/browse/HADOOP-10215][will not work in Oozie]].
-   The alternative is to use the webhdfs scheme instead; it has been tested with DistCp.
-
-
----++ Examples
-
----+++ Accessing the server using Falcon CLI (Java client)
-
-There is no change in the way the CLI is used. The CLI has been changed to work with the configured authentication
-method.
-
----+++ Accessing the server using curl
-
-Try accessing protected resources using curl. The protected resources are:
-
-<verbatim>
-$ kinit
-Please enter the password for venkatesh@LOCALHOST:
-
-$ curl http://localhost:15000/api/admin/version
-
-$ curl http://localhost:15000/api/admin/version?user.name=venkatesh
-
-$ curl --negotiate -u foo -b ~/cookiejar.txt -c ~/cookiejar.txt http://localhost:15000/api/admin/version
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/CommonCLI.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/CommonCLI.twiki b/trunk/releases/master/src/site/twiki/falconcli/CommonCLI.twiki
deleted file mode 100644
index fab2ed1..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/CommonCLI.twiki
+++ /dev/null
@@ -1,21 +0,0 @@
----++ Common CLI Options
-
----+++Falcon URL
-
-The optional -url option indicates the URL of the Falcon system to run the command against. If it is not provided, the URL is picked from the system environment variable FALCON_URL. If FALCON_URL is not set, it is picked from the client.properties file. If the option is not
-provided and also not set in client.properties, the Falcon CLI will fail.
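-
-Example (the URL is illustrative):
-export FALCON_URL=http://localhost:15000
-$FALCON_HOME/bin/falcon admin -version
-Or, equivalently, pass the URL explicitly:
-$FALCON_HOME/bin/falcon admin -version -url http://localhost:15000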
-
----+++Proxy user support
-
-The -doAs option allows the current user to impersonate other users when interacting with the Falcon system. The current user must be configured as a proxyuser in the Falcon system. The proxyuser configuration may restrict
-the hosts from which a user may impersonate others, as well as which groups of users can be impersonated.
-
-<a href="../FalconDocumentation.html#Proxyuser_support">Proxyuser support described here.</a>
-
----+++Debug Mode
-
-If you export FALCON_DEBUG=true then the Falcon CLI will output the Web Services API details used by any commands you execute. This is useful for debugging purposes or to see how the Falcon CLI works with the WS API.
-Alternatively, you can specify '-debug' through the CLI arguments to get the debug statements.
-
-Example:
-$FALCON_HOME/bin/falcon entity -submit -type cluster -file /cluster/definition.xml -debug
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/ContinueInstance.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/ContinueInstance.twiki b/trunk/releases/master/src/site/twiki/falconcli/ContinueInstance.twiki
deleted file mode 100644
index 304e281..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/ContinueInstance.twiki
+++ /dev/null
@@ -1,8 +0,0 @@
----+++Continue
-
-[[CommonCLI][Common CLI Options]]
-
-The continue option is used to continue a failed workflow instance. This option is valid only for process instances in a terminal state, i.e. KILLED or FAILED.
-
-Usage:
-$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -continue -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
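-
-Example (the name and times are illustrative):
-$FALCON_HOME/bin/falcon instance -type process -name SampleProcess -continue -start "2012-01-01T01:00Z" -end "2012-01-01T02:00Z"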

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/Definition.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/Definition.twiki b/trunk/releases/master/src/site/twiki/falconcli/Definition.twiki
deleted file mode 100644
index 08d46c7..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/Definition.twiki
+++ /dev/null
@@ -1,8 +0,0 @@
----+++Definition
-
-[[CommonCLI][Common CLI Options]]
-
-The definition option returns the entity definition submitted earlier during the submit step.
-
-Usage:
-$FALCON_HOME/bin/falcon entity -type [cluster|datasource|feed|process] -name <<name>> -definition
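-
-Example (the entity name is illustrative):
-$FALCON_HOME/bin/falcon entity -type process -name SampleProcess -definition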

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/DeleteEntity.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/DeleteEntity.twiki b/trunk/releases/master/src/site/twiki/falconcli/DeleteEntity.twiki
deleted file mode 100644
index f2b3080..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/DeleteEntity.twiki
+++ /dev/null
@@ -1,8 +0,0 @@
----+++Delete
-
-[[CommonCLI][Common CLI Options]]
-
-Delete removes the submitted entity definition for the specified entity and puts it into the archive.
-
-Usage:
-$FALCON_HOME/bin/falcon entity  -type [cluster|datasource|feed|process] -name <<name>> -delete

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/DependencyEntity.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/DependencyEntity.twiki b/trunk/releases/master/src/site/twiki/falconcli/DependencyEntity.twiki
deleted file mode 100644
index bdef1d7..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/DependencyEntity.twiki
+++ /dev/null
@@ -1,10 +0,0 @@
----+++Dependency
-
-[[CommonCLI][Common CLI Options]]
-
-The dependency option lists all the entities on which the specified entity is dependent.
-For example, for a feed, dependency returns the cluster name, and for a process it returns all the input feeds,
-output feeds and cluster names.
-
-Usage:
-$FALCON_HOME/bin/falcon entity -type [cluster|datasource|feed|process] -name <<name>> -dependency
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/DependencyInstance.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/DependencyInstance.twiki b/trunk/releases/master/src/site/twiki/falconcli/DependencyInstance.twiki
deleted file mode 100644
index 51508cc..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/DependencyInstance.twiki
+++ /dev/null
@@ -1,33 +0,0 @@
----+++Dependency
-Display the instances that are dependent on the given instance. For example, for a given process instance it will
-list all the input feed instances (if any) and the output feed instances (if any).
-
-An example use case of this command is as follows:
-Suppose you find out that the data in a feed instance was incorrect and you need to figure out which process instances
-consumed this feed instance so that you can reprocess them after correcting the feed instance. You can give the feed instance
-and it will tell you which process instance produced this feed and which process instances consumed this feed.
-
-NOTE:
-1. instanceTime must be a valid instanceTime, e.g. the instanceTime of a feed should be in its validity range on applicable clusters,
- and it should be in the range of instances produced by the producer process (if any)
-
-2. For processes with inputs like latest() which vary with time, the results are not guaranteed to be correct.
-
-Usage:
-$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -dependency -instanceTime "yyyy-MM-dd'T'HH:mm'Z'"
-
-For example:
-$FALCON_HOME/bin/falcon instance -dependency -type feed -name out -instanceTime 2014-12-15T00:00Z
-name: producer, type: PROCESS, cluster: local, instanceTime: 2014-12-15T00:00Z, tags: Output
-name: consumer, type: PROCESS, cluster: local, instanceTime: 2014-12-15T00:03Z, tags: Input
-name: consumer, type: PROCESS, cluster: local, instanceTime: 2014-12-15T00:04Z, tags: Input
-name: consumer, type: PROCESS, cluster: local, instanceTime: 2014-12-15T00:02Z, tags: Input
-name: consumer, type: PROCESS, cluster: local, instanceTime: 2014-12-15T00:05Z, tags: Input
-
-
-Response: default/Success!
-
-Request Id: default/1125035965@qtp-503156953-7 - 447be0ad-1d38-4dce-b438-20f3de69b172
-
-
-<a href="../Restapi/InstanceDependencies.html">Optional params described here.</a>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/EdgeMetadata.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/EdgeMetadata.twiki b/trunk/releases/master/src/site/twiki/falconcli/EdgeMetadata.twiki
deleted file mode 100644
index 477996e..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/EdgeMetadata.twiki
+++ /dev/null
@@ -1,11 +0,0 @@
----+++ Edge
-
-[[CommonCLI][Common CLI Options]]
-
-Get the edge with the specified id.
-
-Usage:
-$FALCON_HOME/bin/falcon metadata -edge -id <<id>>
-
-Example:
-$FALCON_HOME/bin/falcon metadata -edge -id Q9n-Q-5g
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/FalconCLI.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/FalconCLI.twiki b/trunk/releases/master/src/site/twiki/falconcli/FalconCLI.twiki
deleted file mode 100644
index 0c0082f..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/FalconCLI.twiki
+++ /dev/null
@@ -1,112 +0,0 @@
----+FalconCLI
-
-FalconCLI is an interface between the user and Falcon. It is a command line utility provided by Falcon. FalconCLI supports Entity Management, Instance Management and Admin operations. There is a set of web services that are used by FalconCLI to interact with Falcon.
-
----+++Types of CLI Options
-
-CLI options are classified into :
-
-   * <a href="#Common_CLI_Options">Common CLI Options</a>
-   * <a href="#Entity_Management_Commands">Entity Management Commands</a>
-   * <a href="#Instance_Management_Commands">Instance Management Commands</a>
-   * <a href="#Metadata_Commands">Metadata Commands</a>
-   * <a href="#Admin_Commands">Admin commands</a>
-   * <a href="#Recipe_Commands">Recipe commands</a>
-
-
-
------------
-
----++Common CLI Options
-
----+++Falcon URL
-
-The optional -url option indicates the URL of the Falcon system to run the command against. If it is not provided, the URL is picked from the system environment variable FALCON_URL. If FALCON_URL is not set, it is picked from the client.properties file. If the option is not
-provided and also not set in client.properties, the Falcon CLI will fail.
-
----+++Proxy user support
-
-The -doAs option allows the current user to impersonate other users when interacting with the Falcon system. The current user must be configured as a proxyuser in the Falcon system. The proxyuser configuration may restrict
-the hosts from which a user may impersonate others, as well as which groups of users can be impersonated.
-
-<a href="../FalconDocumentation.html#Proxyuser_support">Proxyuser support described here.</a>
-
----+++Debug Mode
-
-If you export FALCON_DEBUG=true then the Falcon CLI will output the Web Services API details used by any commands you execute. This is useful for debugging purposes or to see how the Falcon CLI works with the WS API.
-Alternatively, you can specify '-debug' through the CLI arguments to get the debug statements.
-Example:
-$FALCON_HOME/bin/falcon entity -submit -type cluster -file /cluster/definition.xml -debug
-
------------
-
----++Entity Management Commands
-
-| *Command*                                      | *Description*                                   |
-| [[Submit]]                                     | Submit the entity definition.                   |
-| [[Schedule]]                                   | Schedules the entity                            |
-| [[SuspendEntity][Suspend]]                     | Suspends the scheduled entity                   |
-| [[ResumeEntity][Resume]]                       | Puts a suspended entity back in action          |
-| [[DeleteEntity][Delete]]                       | Remove the submitted entity                     |
-| [[ListEntity][List]]                           | Lists the particular type of entity             |
-| [[SummaryEntity][Summary]]                     | Shows summary of the type of entity             |
-| [[UpdateEntity][Update]]                       | Update already submitted entity                 |
-| [[Touch]]                                      | Force update already submitted entity           |
-| [[StatusEntity][Status]]                       | Returns the status of the entity                |
-| [[DependencyEntity][Dependency]]               | List all the entities on which the specified entity is dependent|
-| [[Definition]]                                 | Returns the definition of the entity            |
-| [[Lookup]]                                     | Returns the feed name for a path                |
-| [[SLAAlert]]                                   | Returns the feed instances which have missed SLA|
-
-
------------
----++Instance Management Commands
-
-| *Command*                                      | *Description*                                   |
-| [[KillInstance][Kill]]                         | Kills all the instances of specified process    |
-| [[SuspendInstance][Suspend]]                   | Suspends instances of a specified process       |
-| [[ContinueInstance][Continue]]                 | Continue the failed workflow instances          |
-| [[RerunInstance][Rerun]]                       | Rerun instances of specified process            |
-| [[ResumeInstance][Resume]]                     | Resume instance of specified process from suspended state   |
-| [[StatusInstance][Status]]                     | Gets the status of instances of an entity      |
-| [[ListInstance][List]]                         | Gets single or multiple instances               |
-| [[SummaryInstance][Summary]]                   | Gets consolidated status of the instances between the specified time period    |
-| [[RunningInstance][Running]]                   | Gets running instances of the mentioned process |
-| [[FeedInstanceListing]]                        | Gets falcon feed instance availability          |
-| [[LogsInstance][Logs]]                         | Gets logs for instance                          |
-| [[LifeCycleInstance][LifeCycle]]               | Describes list of life cycles of an entity      |
-| [[TriageInstance][Triage]]                     | Traces an entity's ancestors for failure        |
-| [[ParamsInstance][Params]]                     | Displays workflow params                        |
-| [[DependencyInstance][Dependency]]             | Displays the dependent instances    |
-
------------
-
----++Metadata Commands
-
-| *Command*                                      | *Description*                                    |
-|[[LineageMetadata][Lineage]]                    | Returns the relationship between processes and feeds |
-|[[VertexMetadata][Vertex]]                      | Gets the vertex with the specified id            |
-|[[VerticesMetadata][Vertices]]                  | Gets all vertices for a key                      |
-|[[VertexEdgesMetadata][Vertex Edges]]           | Gets the adjacent vertices or edges of the vertex|
-|[[EdgeMetadata][Edge]]                          | Gets the edge with the specified id              |
-|[[ListMetadata][List]]                          | Returns a list of all dimensions of the given type |
-|[[RelationMetadata][Relations]]                 | Returns all dimensions related to the specified dimension |
-
------------
-
----++Admin Commands
-
-| *Command*                                      | *Description*                                   |
-|[[HelpAdmin][Help]]                             | Return help options                             |
-|[[VersionAdmin][Version]]                       | Return current falcon version                   |
-|[[StatusAdmin][Status]]                         | Return the status of falcon                     |
-
------------
-
----++Recipe Commands
-
-| *Command*                                      | *Description*                                   |
-|[[SubmitRecipe][Submit]]                        | Submit the specified Recipe                     |
-
-
-

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/FeedInstanceListing.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/FeedInstanceListing.twiki b/trunk/releases/master/src/site/twiki/falconcli/FeedInstanceListing.twiki
deleted file mode 100644
index aa60d49..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/FeedInstanceListing.twiki
+++ /dev/null
@@ -1,11 +0,0 @@
----+++FeedInstanceListing
-
-Get falcon feed instance availability.
-
-Usage:
-$FALCON_HOME/bin/falcon instance -type feed -name <<name>> -listing
-
-Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
--colo <<colo>>
-
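-Example: a hypothetical invocation (feed name and times are illustrative) that lists the availability of instances of a feed for one day:
-$FALCON_HOME/bin/falcon instance -type feed -name SampleInput -listing -start "2012-04-03T07:00Z" -end "2012-04-04T07:00Z"
-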
-<a href="../Restapi/FeedInstanceListing.html">Optional params described here.</a>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/HelpAdmin.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/HelpAdmin.twiki b/trunk/releases/master/src/site/twiki/falconcli/HelpAdmin.twiki
deleted file mode 100644
index 69b1378..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/HelpAdmin.twiki
+++ /dev/null
@@ -1,6 +0,0 @@
----+++Help
-
-[[CommonCLI][Common CLI Options]]
-
-Usage:
-$FALCON_HOME/bin/falcon admin -help

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/KillInstance.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/KillInstance.twiki b/trunk/releases/master/src/site/twiki/falconcli/KillInstance.twiki
deleted file mode 100644
index 623921f..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/KillInstance.twiki
+++ /dev/null
@@ -1,14 +0,0 @@
----+++Kill
-
-[[CommonCLI][Common CLI Options]]
-
-Kill sub-command is used to kill all the instances of the specified process whose nominal time is between the given start time and end time.
-
-Note:
-1. The start time and end time need to be specified in TZ format.
-Example:   01 Jan 2012 01:00  => 2012-01-01T01:00Z
-
-2. Process name is a compulsory parameter for each instance management command.
-
-Usage:
-$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -kill -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
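-
-Example: a hypothetical invocation (process name and times are illustrative) that kills all instances whose nominal time falls within the given day:
-$FALCON_HOME/bin/falcon instance -type process -name SampleProcess -kill -start "2012-01-01T01:00Z" -end "2012-01-02T01:00Z"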

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/LifeCycleInstance.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/LifeCycleInstance.twiki b/trunk/releases/master/src/site/twiki/falconcli/LifeCycleInstance.twiki
deleted file mode 100644
index bbcda55..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/LifeCycleInstance.twiki
+++ /dev/null
@@ -1,9 +0,0 @@
----+++LifeCycle
-
-[[CommonCLI][Common CLI Options]]
-
-Describes the list of life cycles of an entity; for a feed it can be replication/retention and for a process it can be execution.
-This can be used with instance management options. Default values are replication for feed and execution for process.
-
-Usage:
-$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -status -lifecycle <<lifecycletype>> -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
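-
-Example: a hypothetical invocation (feed name and times are illustrative) that gets the status of the replication life cycle of a feed:
-$FALCON_HOME/bin/falcon instance -type feed -name SampleInput -status -lifecycle replication -start "2012-04-03T07:00Z" -end "2012-04-04T07:00Z"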

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/LineageMetadata.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/LineageMetadata.twiki b/trunk/releases/master/src/site/twiki/falconcli/LineageMetadata.twiki
deleted file mode 100644
index e668e03..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/LineageMetadata.twiki
+++ /dev/null
@@ -1,12 +0,0 @@
----+++Lineage
-
-
-Returns the relationship between processes and feeds in a given pipeline in [[http://www.graphviz.org/content/dot-language/][dot]] format.
-You can use the output and view a graphical representation of DAG using an online graphviz viewer like [[http://www.webgraphviz.com/][this]].
-
-Usage:
-
-$FALCON_HOME/bin/falcon metadata -lineage -pipeline my-pipeline
-
-pipeline is a mandatory option.
-

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/ListEntity.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/ListEntity.twiki b/trunk/releases/master/src/site/twiki/falconcli/ListEntity.twiki
deleted file mode 100644
index 0047c1b..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/ListEntity.twiki
+++ /dev/null
@@ -1,17 +0,0 @@
----+++List
-
-[[CommonCLI][Common CLI Options]]
-
-Entities of a particular type can be listed with list sub-command.
-
-Usage:
-$FALCON_HOME/bin/falcon entity -list
-
-Optional Args : -fields <<field1,field2>>
--type <<[cluster|datasource|feed|process],[cluster|datasource|feed|process]>>
--nameseq <<namesubsequence>> -tagkeys <<tagkeyword1,tagkeyword2>>
--filterBy <<field1:value1,field2:value2>> -tags <<tagkey=tagvalue,tagkey=tagvalue>>
--orderBy <<field>> -sortOrder <<sortOrder>> -offset 0 -numResults 10
-
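-Example: a hypothetical invocation that lists all feed and process entities:
-$FALCON_HOME/bin/falcon entity -list -type feed,process
-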
-<a href="../Restapi/EntityList.html">Optional params described here.</a>
-

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/ListInstance.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/ListInstance.twiki b/trunk/releases/master/src/site/twiki/falconcli/ListInstance.twiki
deleted file mode 100644
index 1203629..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/ListInstance.twiki
+++ /dev/null
@@ -1,20 +0,0 @@
----+++List
-
-[[CommonCLI][Common CLI Options]]
-
-List option via CLI can be used to get single or multiple instances. If the instance is not yet materialized but is within the process validity range, WAITING is returned as the state. The instance time is also returned. Log location gives the oozie workflow URL.
-If the instance is in the WAITING state, missing dependencies are listed.
-
-Example: Suppose a process has 3 instances: one has succeeded, one is in running state and the other is waiting. The expected output is:
-
-{"status":"SUCCEEDED","message":"getStatus is successful","instances":[{"instance":"2012-05-07T05:02Z","status":"SUCCEEDED","logFile":"http://oozie-dashboard-url"},{"instance":"2012-05-07T05:07Z","status":"RUNNING","logFile":"http://oozie-dashboard-url"}, {"instance":"2010-01-02T11:05Z","status":"WAITING"}]}
-
-Usage:
-$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -list
-
-Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
--colo <<colo>> -lifecycle <<lifecycles>>
--filterBy <<field1:value1,field2:value2>> -orderBy field -sortOrder <<sortOrder>> -offset 0 -numResults 10
--allAttempts To get all the attempts for corresponding instances
-
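-Example: a hypothetical invocation (process name and times are illustrative):
-$FALCON_HOME/bin/falcon instance -type process -name SampleProcess -list -start "2012-05-07T05:00Z" -end "2012-05-07T06:00Z"
-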
-<a href="../Restapi/InstanceList.html">Optional params described here.</a>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/ListMetadata.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/ListMetadata.twiki b/trunk/releases/master/src/site/twiki/falconcli/ListMetadata.twiki
deleted file mode 100644
index 8adea21..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/ListMetadata.twiki
+++ /dev/null
@@ -1,13 +0,0 @@
----+++ List
-
-[[CommonCLI][Common CLI Options]]
-
-Lists all dimensions of a given type. If the user provides the optional param cluster, only the dimensions related to that cluster are listed.
-Usage:
-$FALCON_HOME/bin/falcon metadata -list -type [cluster_entity|datasource_entity|feed_entity|process_entity|user|colo|tags|groups|pipelines]
-
-Optional Args : -cluster <<cluster name>>
-
-Example:
-$FALCON_HOME/bin/falcon metadata -list -type process_entity -cluster primary-cluster
-$FALCON_HOME/bin/falcon metadata -list -type tags

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/LogsInstance.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/LogsInstance.twiki b/trunk/releases/master/src/site/twiki/falconcli/LogsInstance.twiki
deleted file mode 100644
index ac40ec0..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/LogsInstance.twiki
+++ /dev/null
@@ -1,14 +0,0 @@
----+++Logs
-
-[[CommonCLI][Common CLI Options]]
-
-Get logs for instance actions
-
-Usage:
-$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -logs
-
-Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" -runid <<runid>>
--colo <<colo>> -lifecycle <<lifecycles>>
--filterBy <<field1:value1,field2:value2>> -orderBy field -sortOrder <<sortOrder>> -offset 0 -numResults 10
-
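-Example: a hypothetical invocation (process name and times are illustrative) that fetches logs for instances of a process:
-$FALCON_HOME/bin/falcon instance -type process -name SampleProcess -logs -start "2012-04-03T07:00Z" -end "2012-04-03T08:00Z"
-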
-<a href="../Restapi/InstanceLogs.html">Optional params described here.</a>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/Lookup.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/Lookup.twiki b/trunk/releases/master/src/site/twiki/falconcli/Lookup.twiki
deleted file mode 100644
index a9d9c4e..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/Lookup.twiki
+++ /dev/null
@@ -1,12 +0,0 @@
----+++Lookup
-
-[[CommonCLI][Common CLI Options]]
-
-Lookup option tells you which feed a given path belongs to. This can be useful in several scenarios. For example, you would generally want a single definition for common feeds like metadata that share the same location;
-otherwise it can result in problems (different retention durations can result in surprises for one team). If you want to check whether there are multiple definitions of the same metadata, you can pick
-an instance of it and run it through the lookup command as shown below.
-
-Usage:
-$FALCON_HOME/bin/falcon entity -type feed -lookup -path /data/projects/my-hourly/2014/10/10/23/
-
-If you have multiple feeds with location as /data/projects/my-hourly/${YEAR}/${MONTH}/${DAY}/${HOUR} then this command will return all of them.

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/ParamsInstance.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/ParamsInstance.twiki b/trunk/releases/master/src/site/twiki/falconcli/ParamsInstance.twiki
deleted file mode 100644
index 9f217ba..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/ParamsInstance.twiki
+++ /dev/null
@@ -1,8 +0,0 @@
----+++Params
-
-[[CommonCLI][Common CLI Options]]
-
-Displays the workflow params of a given instance. The start time is treated as the nominal time of that instance; the end time is not considered.
-
-Usage:
-$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -params -start "yyyy-MM-dd'T'HH:mm'Z'"
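-
-Example: a hypothetical invocation (process name and nominal time are illustrative):
-$FALCON_HOME/bin/falcon instance -type process -name SampleProcess -params -start "2012-04-03T07:00Z"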

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/RelationMetadata.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/RelationMetadata.twiki b/trunk/releases/master/src/site/twiki/falconcli/RelationMetadata.twiki
deleted file mode 100644
index e9bc970..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/RelationMetadata.twiki
+++ /dev/null
@@ -1,10 +0,0 @@
----+++ Relations
-
-[[CommonCLI][Common CLI Options]]
-
-Lists all dimensions related to the specified dimension, identified by dimension type and dimension name.
-Usage:
-$FALCON_HOME/bin/falcon metadata -relations -type [cluster_entity|feed_entity|process_entity|user|colo|tags|groups|pipelines] -name <<Dimension Name>>
-
-Example:
-$FALCON_HOME/bin/falcon metadata -relations -type process_entity -name sample-process
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/RerunInstance.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/RerunInstance.twiki b/trunk/releases/master/src/site/twiki/falconcli/RerunInstance.twiki
deleted file mode 100644
index aac844c..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/RerunInstance.twiki
+++ /dev/null
@@ -1,10 +0,0 @@
----+++Rerun
-
-[[CommonCLI][Common CLI Options]]
-
-Rerun option is used to rerun instances of a given process. On issuing a rerun, by default the execution resumes from the last failed node in the workflow. This option is valid only for process instances in terminal state, i.e. SUCCEEDED, KILLED or FAILED.
-If one wants to forcefully rerun the entire workflow, -force should be passed along with -rerun.
-Additionally, you can also specify properties to override via a properties file.
-
-Usage:
-$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -rerun -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" [-force] [-file <<properties file>>]
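-
-Example: a hypothetical invocation (process name and times are illustrative) that forcefully reruns the entire workflow of the instances in the given window:
-$FALCON_HOME/bin/falcon instance -type process -name SampleProcess -rerun -start "2012-04-03T07:00Z" -end "2012-04-03T08:00Z" -force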

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/ResumeEntity.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/ResumeEntity.twiki b/trunk/releases/master/src/site/twiki/falconcli/ResumeEntity.twiki
deleted file mode 100644
index 39be411..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/ResumeEntity.twiki
+++ /dev/null
@@ -1,8 +0,0 @@
----+++Resume
-
-[[CommonCLI][Common CLI Options]]
-
-Puts a suspended process/feed back to active, which in turn resumes the applicable oozie bundle.
-
-Usage:
- $FALCON_HOME/bin/falcon entity  -type [feed|process] -name <<name>> -resume

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/ResumeInstance.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/ResumeInstance.twiki b/trunk/releases/master/src/site/twiki/falconcli/ResumeInstance.twiki
deleted file mode 100644
index 3790f47..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/ResumeInstance.twiki
+++ /dev/null
@@ -1,8 +0,0 @@
----+++Resume
-
-[[CommonCLI][Common CLI Options]]
-
-Resume option is used to resume any instance that is in the suspended state.
-
-Usage:
-$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -resume -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
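-
-Example: a hypothetical invocation (process name and times are illustrative):
-$FALCON_HOME/bin/falcon instance -type process -name SampleProcess -resume -start "2012-04-03T07:00Z" -end "2012-04-03T08:00Z"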

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/RunningInstance.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/RunningInstance.twiki b/trunk/releases/master/src/site/twiki/falconcli/RunningInstance.twiki
deleted file mode 100644
index f269358..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/RunningInstance.twiki
+++ /dev/null
@@ -1,13 +0,0 @@
----+++Running
-
-[[CommonCLI][Common CLI Options]]
-
-Running option provides all the running instances of the mentioned process.
-
-Usage:
-$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -running
-
-Optional Args : -colo <<colo>> -lifecycle <<lifecycles>>
--filterBy <<field1:value1,field2:value2>> -orderBy <<field>> -sortOrder <<sortOrder>> -offset 0 -numResults 10
-
-<a href="../Restapi/InstanceRunning.html">Optional params described here.</a>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/SLAAlert.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/SLAAlert.twiki b/trunk/releases/master/src/site/twiki/falconcli/SLAAlert.twiki
deleted file mode 100644
index e5270fa..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/SLAAlert.twiki
+++ /dev/null
@@ -1,49 +0,0 @@
----+++SLAAlert
-
-[[CommonCLI][Common CLI Options]]
-
-<verbatim>
-Since: 0.8
-</verbatim>
-
-This command lists all the feed instances which have missed their SLA and are still not available. If a feed instance missed
-its SLA but is now available, it will not be reported in the results. The purpose of this API is alerting, and hence it
-doesn't return feed instances which missed their SLA but are available, as they don't require any action.
-
-* Currently SLA monitoring is supported only for feeds.
-
-* Option end is optional and will default to current time if missing.
-
-* Option name is optional, if provided only instances of that feed will be considered.
-
-Usage:
-
-*Example 1*
-
-*$FALCON_HOME/bin/falcon entity -type feed -start 2014-09-05T00:00Z -slaAlert  -end 2016-05-03T00:00Z -colo local*
-
-name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T11:59Z, tags: Missed SLA High
-name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:00Z, tags: Missed SLA High
-name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:01Z, tags: Missed SLA High
-name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:02Z, tags: Missed SLA High
-name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:03Z, tags: Missed SLA High
-name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:04Z, tags: Missed SLA High
-name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:05Z, tags: Missed SLA High
-name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:06Z, tags: Missed SLA High
-name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:07Z, tags: Missed SLA High
-name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:08Z, tags: Missed SLA Low
-
-
-Response: default/Success!
-
-Request Id: default/216978070@qtp-830047511-4 - f5a6c129-ab42-4feb-a2bf-c3baed356248
-
-*Example 2*
-
-*$FALCON_HOME/bin/falcon entity -type feed -start 2014-09-05T00:00Z -slaAlert  -end 2016-05-03T00:00Z -colo local -name in*
-
-name: in, type: FEED, cluster: local, instanceTime: 2015-09-26T06:00Z, tags: Missed SLA High
-
-Response: default/Success!
-
-Request Id: default/1580107885@qtp-830047511-7 - f16cbc51-5070-4551-ad25-28f75e5e4cf2

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/Schedule.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/Schedule.twiki b/trunk/releases/master/src/site/twiki/falconcli/Schedule.twiki
deleted file mode 100644
index c4422e7..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/Schedule.twiki
+++ /dev/null
@@ -1,22 +0,0 @@
----+++Schedule
-
-[[CommonCLI][Common CLI Options]]
-
-Once submitted, an entity can be scheduled using the schedule option. Only process and feed entities can be scheduled.
-
-Usage:
-$FALCON_HOME/bin/falcon entity  -type [process|feed] -name <<name>> -schedule
-
-Optional Args :
-
--skipDryRun When this argument is specified, Falcon skips oozie dryrun.
-
--doAs <username>
-
--properties <<key1:val1,...,keyN:valN>>. Specifying 'falcon.scheduler:native' as a property will schedule the entity on the native scheduler of Falcon. Otherwise, it will default to the engine specified in startup.properties. For details on the native scheduler, refer to [[FalconNativeScheduler][Falcon Native Scheduler]]
-
-Examples:
-
- $FALCON_HOME/bin/falcon entity  -type process -name sampleProcess -schedule
-
- $FALCON_HOME/bin/falcon entity  -type process -name sampleProcess -schedule -properties falcon.scheduler:native

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/StatusAdmin.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/StatusAdmin.twiki b/trunk/releases/master/src/site/twiki/falconcli/StatusAdmin.twiki
deleted file mode 100644
index dadb8e5..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/StatusAdmin.twiki
+++ /dev/null
@@ -1,8 +0,0 @@
----+++Status
-
-[[CommonCLI][Common CLI Options]]
-
-Status returns the current state of Falcon (running or stopped).
-Usage:
-$FALCON_HOME/bin/falcon admin -status
-

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/StatusEntity.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/StatusEntity.twiki b/trunk/releases/master/src/site/twiki/falconcli/StatusEntity.twiki
deleted file mode 100644
index 56d16f0..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/StatusEntity.twiki
+++ /dev/null
@@ -1,8 +0,0 @@
----+++Status
-
-[[CommonCLI][Common CLI Options]]
-
-Status returns the current status of the entity.
-
-Usage:
-$FALCON_HOME/bin/falcon entity -type [cluster|datasource|feed|process] -name <<name>> -status
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/StatusInstance.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/StatusInstance.twiki b/trunk/releases/master/src/site/twiki/falconcli/StatusInstance.twiki
deleted file mode 100644
index 047d334..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/StatusInstance.twiki
+++ /dev/null
@@ -1,21 +0,0 @@
----+++Status
-
-[[CommonCLI][Common CLI Options]]
-
-Status option via CLI can be used to get the status of a single or multiple instances. If the instance is not yet materialized but is within the process validity range, WAITING is returned as the state. Along with the status, the instance time is also returned. Log location gives the oozie workflow URL.
-If the instance is in the WAITING state, missing dependencies are listed.
-The job URLs are populated for all actions of the user workflow and for non-succeeded actions of the main workflow, so the user need not go to the underlying scheduler to get the job URLs when debugging an issue in the job.
-
-Example: Suppose a process has 3 instances: one has succeeded, one is in running state and the other is waiting. The expected output is:
-
-{"status":"SUCCEEDED","message":"getStatus is successful","instances":[{"instance":"2012-05-07T05:02Z","status":"SUCCEEDED","logFile":"http://oozie-dashboard-url"},{"instance":"2012-05-07T05:07Z","status":"RUNNING","logFile":"http://oozie-dashboard-url"}, {"instance":"2010-01-02T11:05Z","status":"WAITING"}]}
-
-Usage:
-$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -status
-
-Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" -colo <<colo>>
--filterBy <<field1:value1,field2:value2>> -lifecycle <<lifecycles>>
--orderBy field -sortOrder <<sortOrder>> -offset 0 -numResults 10
--allAttempts To get all the attempts for corresponding instances
-
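-Example: a hypothetical invocation (process name and times are illustrative):
-$FALCON_HOME/bin/falcon instance -type process -name SampleProcess -status -start "2012-05-07T05:00Z" -end "2012-05-07T06:00Z"
-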
-<a href="../Restapi/InstanceStatus.html"> Optional params described here.</a>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/Submit.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/Submit.twiki b/trunk/releases/master/src/site/twiki/falconcli/Submit.twiki
deleted file mode 100644
index f2f7a49..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/Submit.twiki
+++ /dev/null
@@ -1,13 +0,0 @@
----+++Submit
-
-[[CommonCLI][Common CLI Options]]
-
-Submit option is used to set up the entity definition.
-
-Usage:
-$FALCON_HOME/bin/falcon entity -submit -type [cluster|datasource|feed|process] -file <entity-definition.xml>
-
-Example:
-$FALCON_HOME/bin/falcon entity -submit -type cluster -file /cluster/definition.xml
-
-Note: The url option in the above and all subsequent commands is optional. If not mentioned, it will be picked up from the client.properties file. If the option is not provided and also not set in client.properties, Falcon CLI will fail.

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/SubmitRecipe.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/SubmitRecipe.twiki b/trunk/releases/master/src/site/twiki/falconcli/SubmitRecipe.twiki
deleted file mode 100644
index d14b00d..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/SubmitRecipe.twiki
+++ /dev/null
@@ -1,17 +0,0 @@
----+++ Submit Recipe
-
-[[CommonCLI][Common CLI Options]]
-
-Submit the specified recipe.
-
-Usage:
-$FALCON_HOME/bin/falcon recipe -name <name>
-Name of the recipe. The user should have defined <name>-template.xml and <name>.properties in the path specified by falcon.recipe.path in the client.properties file. The falcon.home path is used if it is not specified in the client.properties file.
-If it is not specified in the client.properties file and the files also cannot be found at falcon.home, Falcon CLI will fail.
-
-Optional Args : -tool <recipeToolClassName>
-Falcon provides a base tool that recipes can override. If this option is not specified, the default RecipeTool is used.
-This option is required if the user defines their own recipe tool class.
-
-Example:
-$FALCON_HOME/bin/falcon recipe -name hdfs-replication
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/SummaryEntity.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/SummaryEntity.twiki b/trunk/releases/master/src/site/twiki/falconcli/SummaryEntity.twiki
deleted file mode 100644
index 800f9fc..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/SummaryEntity.twiki
+++ /dev/null
@@ -1,14 +0,0 @@
----+++Summary
-
-[[CommonCLI][Common CLI Options]]
-
-Summary of entities of a particular type and a cluster will be listed. The entity summary has the N most recent instances of each entity.
-
-Usage:
-$FALCON_HOME/bin/falcon entity -type [feed|process] -summary
-
-Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" -fields <<field1,field2>>
--filterBy <<field1:value1,field2:value2>> -tags <<tagkey=tagvalue,tagkey=tagvalue>>
--orderBy <<field>> -sortOrder <<sortOrder>> -offset 0 -numResults 10 -numInstances 7
-
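-Example: a hypothetical invocation that summarizes process entities:
-$FALCON_HOME/bin/falcon entity -type process -summary
-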
-<a href="../Restapi/EntitySummary.html">Optional params described here.</a>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/SummaryInstance.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/SummaryInstance.twiki b/trunk/releases/master/src/site/twiki/falconcli/SummaryInstance.twiki
deleted file mode 100644
index f7ca0b4..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/SummaryInstance.twiki
+++ /dev/null
@@ -1,20 +0,0 @@
----+++Summary
-
-[[CommonCLI][Common CLI Options]]
-
-Summary option via CLI can be used to get the consolidated status of the instances between the specified time period.
-Each status, along with the corresponding instance count, is listed for each of the applicable colos.
-The unscheduled instances between the specified time period are included as UNSCHEDULED in the output to provide more clarity.
-
-Example: Suppose a process has 3 instances: one has succeeded, one is in running state and the other is waiting. The expected output is:
-
-{"status":"SUCCEEDED","message":"getSummary is successful", instancesSummary:[{"cluster": <<name>> "map":[{"SUCCEEDED":"1"}, {"WAITING":"1"}, {"RUNNING":"1"}]}]}
-
-Usage:
-$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -summary
-
-Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" -colo <<colo>>
--filterBy <<field1:value1,field2:value2>> -lifecycle <<lifecycles>>
--orderBy field -sortOrder <<sortOrder>>
-
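-Example: a hypothetical invocation (process name and times are illustrative):
-$FALCON_HOME/bin/falcon instance -type process -name SampleProcess -summary -start "2012-04-03T07:00Z" -end "2012-04-10T07:00Z"
-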
-<a href="../Restapi/InstanceSummary.html">Optional params described here.</a>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/SuspendEntity.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/SuspendEntity.twiki b/trunk/releases/master/src/site/twiki/falconcli/SuspendEntity.twiki
deleted file mode 100644
index 7618e9c..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/SuspendEntity.twiki
+++ /dev/null
@@ -1,8 +0,0 @@
----+++Suspend
-
-[[CommonCLI][Common CLI Options]]
-
-Suspend on an entity results in suspension of the oozie bundle that was scheduled earlier through the schedule function. No further instances are executed on a suspended entity. Only schedulable entities (process/feed) can be suspended.
-
-Usage:
-$FALCON_HOME/bin/falcon entity  -type [feed|process] -name <<name>> -suspend
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/SuspendInstance.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/SuspendInstance.twiki b/trunk/releases/master/src/site/twiki/falconcli/SuspendInstance.twiki
deleted file mode 100644
index 221cf5c..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/SuspendInstance.twiki
+++ /dev/null
@@ -1,8 +0,0 @@
----+++Suspend
-
-[[CommonCLI][Common CLI Options]]
-
-Suspend is used to suspend an instance or instances of the given process. This option pauses the parent workflow at the state it was in at the time of execution of this command.
-
-Usage:
-$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -suspend -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
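-
-Example: a hypothetical invocation (process name and times are illustrative):
-$FALCON_HOME/bin/falcon instance -type process -name SampleProcess -suspend -start "2012-04-03T07:00Z" -end "2012-04-03T08:00Z"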

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/Touch.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/Touch.twiki b/trunk/releases/master/src/site/twiki/falconcli/Touch.twiki
deleted file mode 100644
index afbd848..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/Touch.twiki
+++ /dev/null
@@ -1,10 +0,0 @@
----+++Touch
-
-[[CommonCLI][Common CLI Options]]
-
-Force Update operation allows an already submitted/scheduled entity to be updated.
-
-Usage:
-$FALCON_HOME/bin/falcon entity  -type [feed|process] -name <<name>> -touch
-
-Optional Arg : -skipDryRun. When this argument is specified, Falcon skips oozie dryrun.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/TriageInstance.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/TriageInstance.twiki b/trunk/releases/master/src/site/twiki/falconcli/TriageInstance.twiki
deleted file mode 100644
index c2c32cd..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/TriageInstance.twiki
+++ /dev/null
@@ -1,9 +0,0 @@
----+++Triage
-
-[[CommonCLI][Common CLI Options]]
-
-Given a feed/process instance, this command traces its ancestors to find which ancestors have failed. It is useful when a
-lot of instances are failing in a pipeline, as it helps find the root cause of the pipeline being stuck.
-
-Usage:
-$FALCON_HOME/bin/falcon instance -triage -type <<feed/process>> -name <<name>> -start "yyyy-MM-dd'T'HH:mm'Z'"
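-
-Example: a hypothetical invocation (process name and nominal time are illustrative):
-$FALCON_HOME/bin/falcon instance -triage -type process -name SampleProcess -start "2012-04-03T07:00Z"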

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/UpdateEntity.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/UpdateEntity.twiki b/trunk/releases/master/src/site/twiki/falconcli/UpdateEntity.twiki
deleted file mode 100644
index ae60559..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/UpdateEntity.twiki
+++ /dev/null
@@ -1,14 +0,0 @@
----+++Update
-
-[[CommonCLI][Common CLI Options]]
-
-Update operation allows an already submitted/scheduled entity to be updated. Cluster and datasource updates are
-currently not allowed.
-
-Usage:
-$FALCON_HOME/bin/falcon entity  -type [feed|process] -name <<name>> -update -file <<path_to_file>>
-
-Optional Arg : -skipDryRun. When this argument is specified, Falcon skips oozie dryrun.
-
-Example:
-$FALCON_HOME/bin/falcon entity -type process -name hourly-reports-generator -update -file /process/definition.xml

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/VersionAdmin.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/VersionAdmin.twiki b/trunk/releases/master/src/site/twiki/falconcli/VersionAdmin.twiki
deleted file mode 100644
index 453f6a1..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/VersionAdmin.twiki
+++ /dev/null
@@ -1,7 +0,0 @@
----+++Version
-
-[[CommonCLI][Common CLI Options]]
-
-Version returns the current version of Falcon installed.
-Usage:
-$FALCON_HOME/bin/falcon admin -version

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/VertexEdgesMetadata.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/VertexEdgesMetadata.twiki b/trunk/releases/master/src/site/twiki/falconcli/VertexEdgesMetadata.twiki
deleted file mode 100644
index e9182fc..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/VertexEdgesMetadata.twiki
+++ /dev/null
@@ -1,12 +0,0 @@
----+++ Vertex Edges
-
-[[CommonCLI][Common CLI Options]]
-
-Get the adjacent vertices or edges of the vertex with the specified direction.
-
-Usage:
-$FALCON_HOME/bin/falcon metadata -edges -id <<vertex-id>> -direction <<direction>>
-
-Example:
-$FALCON_HOME/bin/falcon metadata -edges -id 4 -direction both
-$FALCON_HOME/bin/falcon metadata -edges -id 4 -direction inE

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/VertexMetadata.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/VertexMetadata.twiki b/trunk/releases/master/src/site/twiki/falconcli/VertexMetadata.twiki
deleted file mode 100644
index b2c62e8..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/VertexMetadata.twiki
+++ /dev/null
@@ -1,11 +0,0 @@
----+++ Vertex
-
-[[CommonCLI][Common CLI Options]]
-
-Get the vertex with the specified id.
-
-Usage:
-$FALCON_HOME/bin/falcon metadata -vertex -id <<id>>
-
-Example:
-$FALCON_HOME/bin/falcon metadata -vertex -id 4

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/falconcli/VerticesMetadata.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/falconcli/VerticesMetadata.twiki b/trunk/releases/master/src/site/twiki/falconcli/VerticesMetadata.twiki
deleted file mode 100644
index 1b32ad5..0000000
--- a/trunk/releases/master/src/site/twiki/falconcli/VerticesMetadata.twiki
+++ /dev/null
@@ -1,11 +0,0 @@
----+++ Vertices
-
-[[CommonCLI][Common CLI Options]]
-
-Get all vertices for a key index given the specified value.
-
-Usage:
-$FALCON_HOME/bin/falcon metadata -vertices -key <<key>> -value <<value>>
-
-Example:
-$FALCON_HOME/bin/falcon metadata -vertices -key type -value feed-instance

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/index.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/index.twiki b/trunk/releases/master/src/site/twiki/index.twiki
deleted file mode 100644
index a1ee0a3..0000000
--- a/trunk/releases/master/src/site/twiki/index.twiki
+++ /dev/null
@@ -1,43 +0,0 @@
----+ Falcon - Feed management and data processing platform
-
-Falcon is a feed processing and feed management system aimed at making it
-easier for end consumers to onboard their feed processing and feed
-management on hadoop clusters.
-
----++ Why?
-
-   * Establishes relationship between various data and processing elements on a Hadoop environment
-
-   * Feed management services such as feed retention, replications across clusters, archival etc.
-
-   * Easy to onboard new workflows/pipelines, with support for late data handling, retry policies
-
-   * Integration with metastore/catalog such as Hive/HCatalog
-
-   * Provide notification to end customer based on availability of feed groups
-     (logical group of related feeds, which are likely to be used together)
-
-   * Enables use cases for local processing in colo and global aggregations
-
-   * Captures Lineage information for feeds and processes
-
----+ Getting Started
-
-Start with these simple steps to install a Falcon instance [[InstallationSteps][Simple setup]]. Also refer
-to Falcon architecture and documentation in [[FalconDocumentation][Documentation]]. [[OnBoarding][On boarding]]
-describes steps to on-board a pipeline to Falcon. It also gives a sample pipeline for reference.
-[[EntitySpecification][Entity Specification]] gives complete details of all Falcon entities.
-
-[[falconcli/FalconCLI][Falcon CLI]] implements [[restapi/ResourceList][Falcon's RESTful API]] and
-describes various options for the command line utility provided by Falcon.
-
-Falcon provides OOTB [[HiveIntegration][lifecycle management for Tables in Hive (HCatalog)]]
-such as table replication for BCP and table eviction. Falcon also enforces
-[[Security][Security]] on protected resources and enables SSL.
-
-#LicenseInfo
----+ Licensing Information
-
-Falcon is distributed under [[http://www.apache.org/licenses/LICENSE-2.0][Apache License 2.0]].
-
-

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/AdjacentVertices.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/AdjacentVertices.twiki b/trunk/releases/master/src/site/twiki/restapi/AdjacentVertices.twiki
deleted file mode 100644
index 1e60866..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/AdjacentVertices.twiki
+++ /dev/null
@@ -1,91 +0,0 @@
----++  GET api/metadata/lineage/vertices/:id/:direction
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get a list of adjacent vertices or edges with a direction.
-
----++ Parameters
-   * :id is the id of the vertex.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-   * :direction is the direction associated with the edges.
-
-   To get the adjacent out vertices of a vertex, pass direction as out; pass in to get the adjacent in vertices,
-   and both to get both the in and out adjacent vertices. Similarly, to get the out edges of a vertex
-   pass outE, inE to get the in edges, and bothE to get both the in and out edges of the vertex.
-
-      * out  : get the adjacent out vertices of vertex
-      * in   : get the adjacent in vertices of vertex
-      * both : get the both adjacent in and out vertices of vertex
-      * outCount  : get the number of out vertices of vertex
-      * inCount   : get the number of in vertices of vertex
-      * bothCount : get the number of adjacent in and out vertices of vertex
-      * outIds  : get the identifiers of out vertices of vertex
-      * inIds   : get the identifiers of in vertices of vertex
-      * bothIds : get the identifiers of adjacent in and out vertices of vertex
-
----++ Results
-Adjacent vertices of the vertex for the specified direction.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/metadata/lineage/vertices/4/out
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "results": [
-        {
-            "timestamp":"2014-04-21T20:55Z",
-            "name":"sampleFeed",
-            "type":"feed-instance",
-            "_id":8,
-            "_type":"vertex"
-        }
-    ],
-    "totalSize":1
-}
-</verbatim>
-
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/metadata/lineage/vertices/4/bothE
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "results":[
-        {
-            "_id":"Q5V-4-5g",
-            "_type":"edge",
-            "_outV":4,
-            "_inV":8,
-            "_label":"output"
-        }
-    ],
-    "totalSize":1
-}
-</verbatim>
-
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/metadata/lineage/vertices/4/bothE?doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "results":[
-        {
-            "_id":"Q5V-4-5g",
-            "_type":"edge",
-            "_outV":4,
-            "_inV":8,
-            "_label":"output"
-        }
-    ],
-    "totalSize":1
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/AdminConfig.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/AdminConfig.twiki b/trunk/releases/master/src/site/twiki/restapi/AdminConfig.twiki
deleted file mode 100644
index 675b19e..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/AdminConfig.twiki
+++ /dev/null
@@ -1,35 +0,0 @@
----++  GET /api/admin/config/:config-type
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get configuration information of the falcon server.
-
----++ Parameters
-   * :config-type can be build, deploy, startup or runtime
-
----++ Results
-Configuration information of the server.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/admin/config/deploy
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "properties": [
-        {
-            "value": "embedded",
-            "key": "deploy.mode"
-        },
-        {
-            "value": "all",
-            "key": "domain"
-        }
-    ]
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/AdminStack.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/AdminStack.twiki b/trunk/releases/master/src/site/twiki/restapi/AdminStack.twiki
deleted file mode 100644
index 08903a2..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/AdminStack.twiki
+++ /dev/null
@@ -1,40 +0,0 @@
----++  GET /api/admin/stack
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get stack trace of the falcon server.
-
----++ Parameters
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Stack trace of the server.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/admin/stack?doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-Reference Handler
-State: WAITING
-java.lang.Object.wait(Native Method)
-java.lang.Object.wait(Object.java:485)
-java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)Finalizer
-
-...
-
-State: TIMED_WAITING
-sun.misc.Unsafe.park(Native Method)
-java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
-java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
-java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
-java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
-java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955)
-java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
-java.lang.Thread.run(Thread.java:695)
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/AdminVersion.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/AdminVersion.twiki b/trunk/releases/master/src/site/twiki/restapi/AdminVersion.twiki
deleted file mode 100644
index 7db2d8f..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/AdminVersion.twiki
+++ /dev/null
@@ -1,35 +0,0 @@
----++  GET /api/admin/version
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get version of the falcon server.
-
----++ Parameters
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Version of the server.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/admin/version?doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "properties":[
-        {
-            "key":"Version",
-            "value":"0.4-incubating-SNAPSHOT-rb47788d1112fcf949c22a3860934167237b395b0"
-        },
-        {
-            "key":"Mode",
-            "value":"embedded"
-        }
-    ]
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/AllEdges.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/AllEdges.twiki b/trunk/releases/master/src/site/twiki/restapi/AllEdges.twiki
deleted file mode 100644
index 303ac50..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/AllEdges.twiki
+++ /dev/null
@@ -1,42 +0,0 @@
----++  GET api/metadata/lineage/edges/all
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get all edges.
-
----++ Parameters
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-All edges in lineage graph.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/metadata/lineage/edges/all?doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "results": [
-        {
-            "_id":"Q5V-4-5g",
-            "_type":"edge",
-            "_outV":4,
-            "_inV":8,
-            "_label":"output"
-        },
-        {
-            "_id":"Q6t-c-5g",
-            "_type":"edge",
-            "_outV":12,
-            "_inV":16,
-            "_label":"output"
-        }
-    ],
-    "totalSize": 2
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/AllVertices.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/AllVertices.twiki b/trunk/releases/master/src/site/twiki/restapi/AllVertices.twiki
deleted file mode 100644
index d2beb48..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/AllVertices.twiki
+++ /dev/null
@@ -1,43 +0,0 @@
----++  GET api/metadata/lineage/vertices/all
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get all vertices.
-
----++ Parameters
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-All vertices in lineage graph.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/metadata/lineage/vertices/all?doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "results": [
-        {
-            "timestamp":"2014-04-21T20:55Z",
-            "name":"sampleIngestProcess\/2014-03-01T10:00Z",
-            "type":"process-instance",
-            "version":"2.0.0",
-            "_id":4,
-            "_type":"vertex"
-        },
-        {
-            "timestamp":"2014-04-21T20:55Z",
-            "name":"rawEmailFeed\/2014-03-01T10:00Z",
-            "type":"feed-instance",
-            "_id":8,
-            "_type":"vertex"
-        }
-    ],
-    "totalSize": 2
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/Edge.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/Edge.twiki b/trunk/releases/master/src/site/twiki/restapi/Edge.twiki
deleted file mode 100644
index 7c4dbe5..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/Edge.twiki
+++ /dev/null
@@ -1,34 +0,0 @@
----++  GET api/metadata/lineage/edges/:id
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Gets the edge with the specified id.
-
----++ Parameters
-   * :id is the unique id of the edge.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Edge with the specified id.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/metadata/lineage/edges/Q6t-c-5g?doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "results":
-        {
-            "_id":"Q6t-c-5g",
-            "_type":"edge",
-            "_outV":12,
-            "_inV":16,
-            "_label":"output"
-        }
-}
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/EntityDefinition.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/EntityDefinition.twiki b/trunk/releases/master/src/site/twiki/restapi/EntityDefinition.twiki
deleted file mode 100644
index 5e1165b..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/EntityDefinition.twiki
+++ /dev/null
@@ -1,53 +0,0 @@
----++  GET /api/entities/definition/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Get definition of the entity.
-
----++ Parameters
-   * :entity-type can be cluster, feed or process.
-   * :entity-name is name of the entity.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Definition of the entity.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-GET http://localhost:15000/api/entities/definition/process/SampleProcess?doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<process name="SampleProcess" xmlns="uri:falcon:process:0.1">
-  <clusters>
-    <cluster name="primary-cluster">
-      <validity start="2012-04-03T06:00Z" end="2022-12-30T00:00Z"/>
-    </cluster>
-  </clusters>
-  <parallel>1</parallel>
-  <order>FIFO</order>
-  <frequency>hours(1)</frequency>
-  <timezone>UTC</timezone>
-  <inputs>
-    <input name="input" feed="SampleInput" start="yesterday(0,0)" end="today(-1,0)"/>
-  </inputs>
-  <outputs>
-    <output name="output" feed="SampleOutput" instance="yesterday(0,0)"/>
-  </outputs>
-  <properties>
-    <property name="queueName" value="default"/>
-    <property name="ssh.host" value="localhost"/>
-    <property name="fileTimestamp" value="${coord:formatTime(coord:nominalTime(), 'yyyy-MM-dd')}"/>
-  </properties>
-  <workflow engine="oozie" path="/examples/apps/aggregator"/>
-  <retry policy="exp-backoff" delay="minutes(5)" attempts="3"/>
-  <late-process policy="exp-backoff" delay="hours(1)">
-    <late-input input="input" workflow-path="/projects/bootcamp/workflow/lateinput"/>
-  </late-process>
-</process>
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/restapi/EntityDelete.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/restapi/EntityDelete.twiki b/trunk/releases/master/src/site/twiki/restapi/EntityDelete.twiki
deleted file mode 100644
index a488943..0000000
--- a/trunk/releases/master/src/site/twiki/restapi/EntityDelete.twiki
+++ /dev/null
@@ -1,31 +0,0 @@
----++  DELETE /api/entities/delete/:entity-type/:entity-name
-   * <a href="#Description">Description</a>
-   * <a href="#Parameters">Parameters</a>
-   * <a href="#Results">Results</a>
-   * <a href="#Examples">Examples</a>
-
----++ Description
-Delete the specified entity.
-
----++ Parameters
-   * :entity-type can be cluster, feed or process.
-   * :entity-name is the name of the entity.
-   * doAs <optional query param> allows the current user to impersonate the user passed in doAs when interacting with the Falcon system.
-
----++ Results
-Results of the delete operation.
-
----++ Examples
----+++ Rest Call
-<verbatim>
-DELETE http://localhost:15000/api/entities/delete/cluster/SampleProcess?doAs=joe
-</verbatim>
----+++ Result
-<verbatim>
-{
-    "requestId": "falcon\/17ff6ca6-1c8a-459f-9ba8-8fec480e384a\n",
-    "message": "falcon\/SampleProcess(cluster) removed successfully\n",
-    "status": "SUCCEEDED"
-}
-</verbatim>
-


[6/6] falcon git commit: Deleting accidental check-in of trunk/release/master

Posted by pa...@apache.org.
Deleting accidental check-in of trunk/release/master


Project: http://git-wip-us.apache.org/repos/asf/falcon/repo
Commit: http://git-wip-us.apache.org/repos/asf/falcon/commit/4e4b8457
Tree: http://git-wip-us.apache.org/repos/asf/falcon/tree/4e4b8457
Diff: http://git-wip-us.apache.org/repos/asf/falcon/diff/4e4b8457

Branch: refs/heads/asf-site
Commit: 4e4b8457d5febe68ccc806c983a725e60b850489
Parents: 31b1d7e
Author: Pallavi Rao <pa...@inmobi.com>
Authored: Thu Mar 10 15:18:06 2016 +0530
Committer: Pallavi Rao <pa...@inmobi.com>
Committed: Thu Mar 10 15:18:06 2016 +0530

----------------------------------------------------------------------
 trunk/releases/master/pom.xml                   |  66 --
 .../master/src/site/resources/Architecture.png  | Bin 65687 -> 0 bytes
 .../src/site/resources/EntityDependency.png     | Bin 53036 -> 0 bytes
 .../master/src/site/resources/FeedSchedule.png  | Bin 84841 -> 0 bytes
 .../master/src/site/resources/PrismSetup.png    | Bin 103747 -> 0 bytes
 .../src/site/resources/ProcessSchedule.png      | Bin 85720 -> 0 bytes
 .../images/accessories-text-editor.png          | Bin 746 -> 0 bytes
 .../master/src/site/resources/images/add.gif    | Bin 397 -> 0 bytes
 .../resources/images/apache-incubator-logo.png  | Bin 4234 -> 0 bytes
 .../resources/images/apache-maven-project-2.png | Bin 33442 -> 0 bytes
 .../images/application-certificate.png          | Bin 923 -> 0 bytes
 .../src/site/resources/images/contact-new.png   | Bin 736 -> 0 bytes
 .../resources/images/document-properties.png    | Bin 577 -> 0 bytes
 .../site/resources/images/drive-harddisk.png    | Bin 700 -> 0 bytes
 .../src/site/resources/images/falcon-logo.png   | Bin 13293 -> 0 bytes
 .../master/src/site/resources/images/fix.gif    | Bin 366 -> 0 bytes
 .../site/resources/images/icon_error_sml.gif    | Bin 633 -> 0 bytes
 .../src/site/resources/images/icon_help_sml.gif | Bin 1072 -> 0 bytes
 .../src/site/resources/images/icon_info_sml.gif | Bin 638 -> 0 bytes
 .../site/resources/images/icon_success_sml.gif  | Bin 604 -> 0 bytes
 .../site/resources/images/icon_warning_sml.gif  | Bin 625 -> 0 bytes
 .../site/resources/images/image-x-generic.png   | Bin 662 -> 0 bytes
 .../resources/images/internet-web-browser.png   | Bin 1017 -> 0 bytes
 .../images/logos/build-by-maven-black.png       | Bin 2294 -> 0 bytes
 .../images/logos/build-by-maven-white.png       | Bin 2260 -> 0 bytes
 .../resources/images/logos/maven-feather.png    | Bin 3330 -> 0 bytes
 .../site/resources/images/network-server.png    | Bin 536 -> 0 bytes
 .../site/resources/images/package-x-generic.png | Bin 717 -> 0 bytes
 .../resources/images/profiles/pre-release.png   | Bin 32607 -> 0 bytes
 .../site/resources/images/profiles/retired.png  | Bin 22003 -> 0 bytes
 .../site/resources/images/profiles/sandbox.png  | Bin 33010 -> 0 bytes
 .../master/src/site/resources/images/remove.gif | Bin 607 -> 0 bytes
 .../master/src/site/resources/images/rss.png    | Bin 474 -> 0 bytes
 .../master/src/site/resources/images/update.gif | Bin 1090 -> 0 bytes
 .../src/site/resources/images/window-new.png    | Bin 583 -> 0 bytes
 trunk/releases/master/src/site/site.xml         |  62 --
 .../master/src/site/twiki/Appendix.twiki        |  55 -
 .../master/src/site/twiki/Configuration.twiki   | 122 ---
 .../src/site/twiki/Distributed-mode.twiki       | 198 ----
 .../master/src/site/twiki/Embedded-mode.twiki   | 198 ----
 .../src/site/twiki/EntitySpecification.twiki    | 996 -------------------
 .../src/site/twiki/FalconDocumentation.twiki    | 777 ---------------
 .../site/twiki/FalconEmailNotification.twiki    |  29 -
 .../src/site/twiki/FalconNativeScheduler.twiki  | 213 ----
 .../releases/master/src/site/twiki/HDFSDR.twiki |  34 -
 .../releases/master/src/site/twiki/HiveDR.twiki |  74 --
 .../master/src/site/twiki/HiveIntegration.twiki | 372 -------
 .../master/src/site/twiki/ImportExport.twiki    | 242 -----
 .../src/site/twiki/InstallationSteps.twiki      |  87 --
 .../releases/master/src/site/twiki/LICENSE.txt  |   3 -
 .../src/site/twiki/MigrationInstructions.twiki  |  15 -
 .../master/src/site/twiki/OnBoarding.twiki      | 269 -----
 .../master/src/site/twiki/Operability.twiki     | 110 --
 .../master/src/site/twiki/Recipes.twiki         |  85 --
 .../master/src/site/twiki/Security.twiki        | 387 -------
 .../src/site/twiki/falconcli/CommonCLI.twiki    |  21 -
 .../site/twiki/falconcli/ContinueInstance.twiki |   8 -
 .../src/site/twiki/falconcli/Definition.twiki   |   8 -
 .../src/site/twiki/falconcli/DeleteEntity.twiki |   8 -
 .../site/twiki/falconcli/DependencyEntity.twiki |  10 -
 .../twiki/falconcli/DependencyInstance.twiki    |  33 -
 .../src/site/twiki/falconcli/EdgeMetadata.twiki |  11 -
 .../src/site/twiki/falconcli/FalconCLI.twiki    | 112 ---
 .../twiki/falconcli/FeedInstanceListing.twiki   |  11 -
 .../src/site/twiki/falconcli/HelpAdmin.twiki    |   6 -
 .../src/site/twiki/falconcli/KillInstance.twiki |  14 -
 .../twiki/falconcli/LifeCycleInstance.twiki     |   9 -
 .../site/twiki/falconcli/LineageMetadata.twiki  |  12 -
 .../src/site/twiki/falconcli/ListEntity.twiki   |  17 -
 .../src/site/twiki/falconcli/ListInstance.twiki |  20 -
 .../src/site/twiki/falconcli/ListMetadata.twiki |  13 -
 .../src/site/twiki/falconcli/LogsInstance.twiki |  14 -
 .../src/site/twiki/falconcli/Lookup.twiki       |  12 -
 .../site/twiki/falconcli/ParamsInstance.twiki   |   8 -
 .../site/twiki/falconcli/RelationMetadata.twiki |  10 -
 .../site/twiki/falconcli/RerunInstance.twiki    |  10 -
 .../src/site/twiki/falconcli/ResumeEntity.twiki |   8 -
 .../site/twiki/falconcli/ResumeInstance.twiki   |   8 -
 .../site/twiki/falconcli/RunningInstance.twiki  |  13 -
 .../src/site/twiki/falconcli/SLAAlert.twiki     |  49 -
 .../src/site/twiki/falconcli/Schedule.twiki     |  22 -
 .../src/site/twiki/falconcli/StatusAdmin.twiki  |   8 -
 .../src/site/twiki/falconcli/StatusEntity.twiki |   8 -
 .../site/twiki/falconcli/StatusInstance.twiki   |  21 -
 .../src/site/twiki/falconcli/Submit.twiki       |  13 -
 .../src/site/twiki/falconcli/SubmitRecipe.twiki |  17 -
 .../site/twiki/falconcli/SummaryEntity.twiki    |  14 -
 .../site/twiki/falconcli/SummaryInstance.twiki  |  20 -
 .../site/twiki/falconcli/SuspendEntity.twiki    |   8 -
 .../site/twiki/falconcli/SuspendInstance.twiki  |   8 -
 .../master/src/site/twiki/falconcli/Touch.twiki |  10 -
 .../site/twiki/falconcli/TriageInstance.twiki   |   9 -
 .../src/site/twiki/falconcli/UpdateEntity.twiki |  14 -
 .../src/site/twiki/falconcli/VersionAdmin.twiki |   7 -
 .../twiki/falconcli/VertexEdgesMetadata.twiki   |  12 -
 .../site/twiki/falconcli/VertexMetadata.twiki   |  11 -
 .../site/twiki/falconcli/VerticesMetadata.twiki |  11 -
 .../releases/master/src/site/twiki/index.twiki  |  43 -
 .../site/twiki/restapi/AdjacentVertices.twiki   |  91 --
 .../src/site/twiki/restapi/AdminConfig.twiki    |  35 -
 .../src/site/twiki/restapi/AdminStack.twiki     |  40 -
 .../src/site/twiki/restapi/AdminVersion.twiki   |  35 -
 .../src/site/twiki/restapi/AllEdges.twiki       |  42 -
 .../src/site/twiki/restapi/AllVertices.twiki    |  43 -
 .../master/src/site/twiki/restapi/Edge.twiki    |  34 -
 .../site/twiki/restapi/EntityDefinition.twiki   |  53 -
 .../src/site/twiki/restapi/EntityDelete.twiki   |  31 -
 .../site/twiki/restapi/EntityDependencies.twiki |  43 -
 .../src/site/twiki/restapi/EntityLineage.twiki  |  40 -
 .../src/site/twiki/restapi/EntityList.twiki     | 164 ---
 .../src/site/twiki/restapi/EntityResume.twiki   |  30 -
 .../src/site/twiki/restapi/EntitySchedule.twiki | 100 --
 .../src/site/twiki/restapi/EntityStatus.twiki   |  30 -
 .../src/site/twiki/restapi/EntitySubmit.twiki   | 105 --
 .../twiki/restapi/EntitySubmitAndSchedule.twiki |  64 --
 .../src/site/twiki/restapi/EntitySummary.twiki  |  74 --
 .../src/site/twiki/restapi/EntitySuspend.twiki  |  30 -
 .../src/site/twiki/restapi/EntityTouch.twiki    |  31 -
 .../src/site/twiki/restapi/EntityUpdate.twiki   |  66 --
 .../src/site/twiki/restapi/EntityValidate.twiki | 170 ----
 .../twiki/restapi/FeedInstanceListing.twiki     |  46 -
 .../src/site/twiki/restapi/FeedLookup.twiki     |  37 -
 .../master/src/site/twiki/restapi/FeedSLA.twiki |  56 --
 .../master/src/site/twiki/restapi/Graph.twiki   |  22 -
 .../twiki/restapi/InstanceDependencies.twiki    |  49 -
 .../src/site/twiki/restapi/InstanceKill.twiki   |  44 -
 .../src/site/twiki/restapi/InstanceList.twiki   | 151 ---
 .../src/site/twiki/restapi/InstanceLogs.twiki   | 113 ---
 .../src/site/twiki/restapi/InstanceParams.twiki |  83 --
 .../src/site/twiki/restapi/InstanceRerun.twiki  |  66 --
 .../src/site/twiki/restapi/InstanceResume.twiki |  43 -
 .../site/twiki/restapi/InstanceRunning.twiki    |  84 --
 .../src/site/twiki/restapi/InstanceStatus.twiki |  98 --
 .../site/twiki/restapi/InstanceSummary.twiki    | 114 ---
 .../site/twiki/restapi/InstanceSuspend.twiki    |  44 -
 .../src/site/twiki/restapi/MetadataList.twiki   |  31 -
 .../site/twiki/restapi/MetadataRelations.twiki  |  46 -
 .../src/site/twiki/restapi/ResourceList.twiki   |  93 --
 .../master/src/site/twiki/restapi/Triage.twiki  |  45 -
 .../master/src/site/twiki/restapi/Vertex.twiki  |  36 -
 .../site/twiki/restapi/VertexProperties.twiki   |  34 -
 .../src/site/twiki/restapi/Vertices.twiki       |  38 -
 142 files changed, 7819 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/pom.xml
----------------------------------------------------------------------
diff --git a/trunk/releases/master/pom.xml b/trunk/releases/master/pom.xml
deleted file mode 100644
index dfa3758..0000000
--- a/trunk/releases/master/pom.xml
+++ /dev/null
@@ -1,66 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-       http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
-    <modelVersion>4.0.0</modelVersion>
-    <parent>
-        <groupId>org.apache.falcon</groupId>
-        <artifactId>falcon-website-releases</artifactId>
-        <version>0.2</version>
-    </parent>
-    <artifactId>falcon-website-master</artifactId>
-    <version>master</version>
-    <packaging>pom</packaging>
-
-    <name>Apache Falcon - Documentation vmaster</name>
-
-    <build>
-        <plugins>
-            <plugin>
-                <groupId>org.apache.maven.plugins</groupId>
-                <artifactId>maven-site-plugin</artifactId>
-                <version>3.3</version>
-                <dependencies>
-                    <dependency>
-                        <groupId>org.apache.maven.doxia</groupId>
-                        <artifactId>doxia-module-twiki</artifactId>
-                        <version>1.3</version>
-                    </dependency>
-                    <dependency>
-                        <groupId>org.apache.maven.wagon</groupId>
-                        <artifactId>wagon-ssh-external</artifactId>
-                        <version>2.6</version>
-                    </dependency>
-                </dependencies>
-                <executions>
-                    <execution>
-                        <goals>
-                            <goal>site</goal>
-                        </goals>
-                        <phase>prepare-package</phase>
-                    </execution>
-                </executions>
-                <configuration>
-                    <outputDirectory>../../../site/master</outputDirectory>
-                </configuration>
-            </plugin>
-        </plugins>
-    </build>
-
-</project>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/Architecture.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/Architecture.png b/trunk/releases/master/src/site/resources/Architecture.png
deleted file mode 100644
index 0378b49..0000000
Binary files a/trunk/releases/master/src/site/resources/Architecture.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/EntityDependency.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/EntityDependency.png b/trunk/releases/master/src/site/resources/EntityDependency.png
deleted file mode 100644
index 9f11870..0000000
Binary files a/trunk/releases/master/src/site/resources/EntityDependency.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/FeedSchedule.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/FeedSchedule.png b/trunk/releases/master/src/site/resources/FeedSchedule.png
deleted file mode 100644
index 105c6b1..0000000
Binary files a/trunk/releases/master/src/site/resources/FeedSchedule.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/PrismSetup.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/PrismSetup.png b/trunk/releases/master/src/site/resources/PrismSetup.png
deleted file mode 100644
index b0dc9a5..0000000
Binary files a/trunk/releases/master/src/site/resources/PrismSetup.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/ProcessSchedule.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/ProcessSchedule.png b/trunk/releases/master/src/site/resources/ProcessSchedule.png
deleted file mode 100644
index a7dd788..0000000
Binary files a/trunk/releases/master/src/site/resources/ProcessSchedule.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/accessories-text-editor.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/accessories-text-editor.png b/trunk/releases/master/src/site/resources/images/accessories-text-editor.png
deleted file mode 100644
index abc3366..0000000
Binary files a/trunk/releases/master/src/site/resources/images/accessories-text-editor.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/add.gif
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/add.gif b/trunk/releases/master/src/site/resources/images/add.gif
deleted file mode 100644
index 1cb3dbf..0000000
Binary files a/trunk/releases/master/src/site/resources/images/add.gif and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/apache-incubator-logo.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/apache-incubator-logo.png b/trunk/releases/master/src/site/resources/images/apache-incubator-logo.png
deleted file mode 100644
index 81fb31e..0000000
Binary files a/trunk/releases/master/src/site/resources/images/apache-incubator-logo.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/apache-maven-project-2.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/apache-maven-project-2.png b/trunk/releases/master/src/site/resources/images/apache-maven-project-2.png
deleted file mode 100644
index 6c096ec..0000000
Binary files a/trunk/releases/master/src/site/resources/images/apache-maven-project-2.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/application-certificate.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/application-certificate.png b/trunk/releases/master/src/site/resources/images/application-certificate.png
deleted file mode 100644
index cc6aff6..0000000
Binary files a/trunk/releases/master/src/site/resources/images/application-certificate.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/contact-new.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/contact-new.png b/trunk/releases/master/src/site/resources/images/contact-new.png
deleted file mode 100644
index ebc4316..0000000
Binary files a/trunk/releases/master/src/site/resources/images/contact-new.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/document-properties.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/document-properties.png b/trunk/releases/master/src/site/resources/images/document-properties.png
deleted file mode 100644
index 34c2409..0000000
Binary files a/trunk/releases/master/src/site/resources/images/document-properties.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/drive-harddisk.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/drive-harddisk.png b/trunk/releases/master/src/site/resources/images/drive-harddisk.png
deleted file mode 100644
index d7ce475..0000000
Binary files a/trunk/releases/master/src/site/resources/images/drive-harddisk.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/falcon-logo.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/falcon-logo.png b/trunk/releases/master/src/site/resources/images/falcon-logo.png
deleted file mode 100644
index 0a9f6cf..0000000
Binary files a/trunk/releases/master/src/site/resources/images/falcon-logo.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/fix.gif
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/fix.gif b/trunk/releases/master/src/site/resources/images/fix.gif
deleted file mode 100644
index b7eb3dc..0000000
Binary files a/trunk/releases/master/src/site/resources/images/fix.gif and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/icon_error_sml.gif
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/icon_error_sml.gif b/trunk/releases/master/src/site/resources/images/icon_error_sml.gif
deleted file mode 100644
index 12e9a01..0000000
Binary files a/trunk/releases/master/src/site/resources/images/icon_error_sml.gif and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/icon_help_sml.gif
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/icon_help_sml.gif b/trunk/releases/master/src/site/resources/images/icon_help_sml.gif
deleted file mode 100644
index aaf20e6..0000000
Binary files a/trunk/releases/master/src/site/resources/images/icon_help_sml.gif and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/icon_info_sml.gif
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/icon_info_sml.gif b/trunk/releases/master/src/site/resources/images/icon_info_sml.gif
deleted file mode 100644
index b776326..0000000
Binary files a/trunk/releases/master/src/site/resources/images/icon_info_sml.gif and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/icon_success_sml.gif
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/icon_success_sml.gif b/trunk/releases/master/src/site/resources/images/icon_success_sml.gif
deleted file mode 100644
index 0a19527..0000000
Binary files a/trunk/releases/master/src/site/resources/images/icon_success_sml.gif and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/icon_warning_sml.gif
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/icon_warning_sml.gif b/trunk/releases/master/src/site/resources/images/icon_warning_sml.gif
deleted file mode 100644
index ac6ad6a..0000000
Binary files a/trunk/releases/master/src/site/resources/images/icon_warning_sml.gif and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/image-x-generic.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/image-x-generic.png b/trunk/releases/master/src/site/resources/images/image-x-generic.png
deleted file mode 100644
index ab49efb..0000000
Binary files a/trunk/releases/master/src/site/resources/images/image-x-generic.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/internet-web-browser.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/internet-web-browser.png b/trunk/releases/master/src/site/resources/images/internet-web-browser.png
deleted file mode 100644
index 307d6ac..0000000
Binary files a/trunk/releases/master/src/site/resources/images/internet-web-browser.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/logos/build-by-maven-black.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/logos/build-by-maven-black.png b/trunk/releases/master/src/site/resources/images/logos/build-by-maven-black.png
deleted file mode 100644
index 919fd0f..0000000
Binary files a/trunk/releases/master/src/site/resources/images/logos/build-by-maven-black.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/logos/build-by-maven-white.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/logos/build-by-maven-white.png b/trunk/releases/master/src/site/resources/images/logos/build-by-maven-white.png
deleted file mode 100644
index 7d44c9c..0000000
Binary files a/trunk/releases/master/src/site/resources/images/logos/build-by-maven-white.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/logos/maven-feather.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/logos/maven-feather.png b/trunk/releases/master/src/site/resources/images/logos/maven-feather.png
deleted file mode 100644
index b5ada83..0000000
Binary files a/trunk/releases/master/src/site/resources/images/logos/maven-feather.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/network-server.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/network-server.png b/trunk/releases/master/src/site/resources/images/network-server.png
deleted file mode 100644
index 1d12e19..0000000
Binary files a/trunk/releases/master/src/site/resources/images/network-server.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/package-x-generic.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/package-x-generic.png b/trunk/releases/master/src/site/resources/images/package-x-generic.png
deleted file mode 100644
index 8b7e9e6..0000000
Binary files a/trunk/releases/master/src/site/resources/images/package-x-generic.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/profiles/pre-release.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/profiles/pre-release.png b/trunk/releases/master/src/site/resources/images/profiles/pre-release.png
deleted file mode 100644
index d448e85..0000000
Binary files a/trunk/releases/master/src/site/resources/images/profiles/pre-release.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/profiles/retired.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/profiles/retired.png b/trunk/releases/master/src/site/resources/images/profiles/retired.png
deleted file mode 100644
index f89f6a2..0000000
Binary files a/trunk/releases/master/src/site/resources/images/profiles/retired.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/profiles/sandbox.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/profiles/sandbox.png b/trunk/releases/master/src/site/resources/images/profiles/sandbox.png
deleted file mode 100644
index f88b362..0000000
Binary files a/trunk/releases/master/src/site/resources/images/profiles/sandbox.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/remove.gif
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/remove.gif b/trunk/releases/master/src/site/resources/images/remove.gif
deleted file mode 100644
index fc65631..0000000
Binary files a/trunk/releases/master/src/site/resources/images/remove.gif and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/rss.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/rss.png b/trunk/releases/master/src/site/resources/images/rss.png
deleted file mode 100644
index a9850ee..0000000
Binary files a/trunk/releases/master/src/site/resources/images/rss.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/update.gif
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/update.gif b/trunk/releases/master/src/site/resources/images/update.gif
deleted file mode 100644
index b2a6d0b..0000000
Binary files a/trunk/releases/master/src/site/resources/images/update.gif and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/resources/images/window-new.png
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/resources/images/window-new.png b/trunk/releases/master/src/site/resources/images/window-new.png
deleted file mode 100644
index 0e12ef9..0000000
Binary files a/trunk/releases/master/src/site/resources/images/window-new.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/site.xml
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/site.xml b/trunk/releases/master/src/site/site.xml
deleted file mode 100644
index aeb7a5e..0000000
--- a/trunk/releases/master/src/site/site.xml
+++ /dev/null
@@ -1,62 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-  
-       http://www.apache.org/licenses/LICENSE-2.0
-  
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<project name="Falcon" xmlns="http://maven.apache.org/DECORATION/1.3.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/DECORATION/1.3.0 http://maven.apache.org/xsd/decoration-1.3.0.xsd">
-
-    <skin>
-        <groupId>org.apache.maven.skins</groupId>
-        <artifactId>maven-fluido-skin</artifactId>
-        <version>1.3.0</version>
-    </skin>
-
-    <custom>
-        <fluidoSkin>
-            <project>Apache Falcon</project>
-            <sideBarEnabled>false</sideBarEnabled>
-        </fluidoSkin>
-    </custom>
-
-    <bannerLeft>
-        <name>Apache Falcon</name>
-        <src>./images/falcon-logo.png</src>
-        <width>200px</width>
-        <height>45px</height>
-    </bannerLeft>
-
-    <publishDate position="right"/>
-    <version position="right"/>
-
-    <body>
-        <head>
-            <script type="text/javascript">
-                $( document ).ready( function() { $( '.carousel' ).carousel( { interval: 3500 } ) } );
-            </script>
-        </head>
-
-        <breadcrumbs position="left">
-            <item name="Falcon" title="Apache Falcon" href="index.html"/>
-        </breadcrumbs>
-
-        <footer>
-            © 2011-2012 The Apache Software Foundation. Apache Falcon, Falcon, Apache, the Apache feather logo,
-            and the Apache Falcon project logo are trademarks of The Apache Software Foundation.
-        </footer>
-    </body>
-</project>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/Appendix.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/Appendix.twiki b/trunk/releases/master/src/site/twiki/Appendix.twiki
deleted file mode 100644
index e3752fb..0000000
--- a/trunk/releases/master/src/site/twiki/Appendix.twiki
+++ /dev/null
@@ -1,55 +0,0 @@
----+ Compatibility
-
----++ 0.6-incubating Version Compatibility Matrix
-
-   * Hadoop 2.5.0 and above
-   * Oozie  4.0.0 and above
-   * Hive 0.11.0 and above
-   * HCatalog 0.11.0 and above
-   * Active MQ 5.4.3 and above
-   * Titan 0.4.2 and above but below 0.5
-
-
----++ 0.6-incubating Tested Compatibility
-
-   * Hadoop 2.6.0
-   * Oozie  4.1.0
-   * Hive 0.14.0
-   * HCatalog 0.14.0
-   * Active MQ 5.4.3
-   * Titan 0.4.2
-   * Java 1.6, Java 1.7
-
-   Note : Oozie versions below 4.1.0 are not compatible with Java 1.7
-
----++ 0.6-incubating Release Notes
-
-Major additions are listed below. Refer to CHANGES.txt for detailed issues addressed in this release.
-
-   * Security - Authorization, SSL
-   * Lineage - More complete with better API
-   * Recipes
-   * Usability improvements - Dry run, entity summary, Pagination, etc.
-   * Operability - Alerts, Audits, etc.
-   * Refactoring - Messaging, Orchestration of workflows, etc.
-   * Extension points for developers
-   * Many bug fixes
-
-
----++ 0.6-incubating Upgrade Instructions
-
-Please follow these instructions when upgrading from an older release.
-
----+++ Upgrading from 0.5-incubating
-
-0.6-incubating is backwards *incompatible* with 0.5-incubating. It is recommended that users do not
-migrate from 0.5 to 0.6. However, if a user must migrate from 0.5-incubating to 0.6-incubating,
-they should [[https://cwiki.apache.org/confluence/display/FALCON/Index][follow these instructions]]
-
----+++ Upgrading from 0.4-incubating
-
-It is not possible to upgrade to 0.6-incubating from 0.4-incubating.
-
----+++ Upgrading from 0.3-incubating
-
-It is not possible to upgrade to 0.6-incubating from 0.3-incubating.

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/Configuration.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/Configuration.twiki b/trunk/releases/master/src/site/twiki/Configuration.twiki
deleted file mode 100644
index 0df094f..0000000
--- a/trunk/releases/master/src/site/twiki/Configuration.twiki
+++ /dev/null
@@ -1,122 +0,0 @@
----+Configuring Falcon
-
-By default, the config directory used by Falcon is {package dir}/conf. To override this (to use the same conf with multiple
-Falcon upgrades), set the environment variable FALCON_CONF to the path of the conf dir.
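-
-For example, a minimal sketch of pointing Falcon at a shared conf directory (the path is only a placeholder):
-<verbatim>
-export FALCON_CONF=/path/to/shared/falcon/conf
-</verbatim>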
-
-falcon-env.sh has been added to the falcon conf. This file can be used to set various environment variables that you
-need for your services.
-In addition you can set any other environment variables you might need. This file will be sourced by falcon scripts
-before any commands are executed. The following environment variables are available to set.
-
-<verbatim>
-# The java implementation to use. If JAVA_HOME is not found we expect java and jar to be in path
-#export JAVA_HOME=
-
-# any additional java opts you want to set. This will apply to both client and server operations
-#export FALCON_OPTS=
-
-# any additional java opts that you want to set for client only
-#export FALCON_CLIENT_OPTS=
-
-# java heap size we want to set for the client. Default is 1024MB
-#export FALCON_CLIENT_HEAP=
-
-# any additional opts you want to set for prism service.
-#export FALCON_PRISM_OPTS=
-
-# java heap size we want to set for the prism service. Default is 1024MB
-#export FALCON_PRISM_HEAP=
-
-# any additional opts you want to set for falcon service.
-#export FALCON_SERVER_OPTS=
-
-# java heap size we want to set for the falcon server. Default is 1024MB
-#export FALCON_SERVER_HEAP=
-
-# What is considered as the falcon home dir. Default is the base location of the installed software
-#export FALCON_HOME_DIR=
-
-# Where log files are stored. Default is logs directory under the base install location
-#export FALCON_LOG_DIR=
-
-# Where pid files are stored. Default is logs directory under the base install location
-#export FALCON_PID_DIR=
-
-# where the falcon active mq data is stored. Default is logs/data directory under the base install location
-#export FALCON_DATA_DIR=
-
-# Where do you want to expand the war file. By Default it is in /server/webapp dir under the base install dir.
-#export FALCON_EXPANDED_WEBAPP_DIR=
-</verbatim>
-
----++Advanced Configurations
-
----+++Configuring Monitoring plugin to register catalog partitions
-Falcon comes with a monitoring plugin that registers catalog partitions. This comes in really handy during migration from
- filesystem-based feeds to hcatalog-based feeds.
-This plugin enables the user to de-couple the partition registration and assume that all partitions are already on
-hcatalog even before the migration, simplifying the hcatalog migration.
-
-By default this plugin is disabled.
-To enable this plugin and leverage the feature, there are 3 pre-requisites:
-<verbatim>
-In {package dir}/conf/startup.properties, add
-*.workflow.execution.listeners=org.apache.falcon.catalog.CatalogPartitionHandler
-
-In the cluster definition, ensure registry endpoint is defined.
-Ex:
-<interface type="registry" endpoint="thrift://localhost:1109" version="0.13.3"/>
-
-In the feed definition, ensure the corresponding catalog table is mentioned in feed-properties
-Ex:
-<properties>
-    <property name="catalog.table" value="catalog:default:in_table#year={YEAR};month={MONTH};day={DAY};hour={HOUR};
-    minute={MINUTE}"/>
-</properties>
-</verbatim>
-
-*NOTE : for Mac OS users*
-<verbatim>
-If you are using a Mac OS, you will need to configure the FALCON_SERVER_OPTS (explained above).
-
-In  {package dir}/conf/falcon-env.sh uncomment the following line
-#export FALCON_SERVER_OPTS=
-
-and change it to look as below
-export FALCON_SERVER_OPTS="-Djava.awt.headless=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
-</verbatim>
-
----+++Activemq
-
-* The falcon server starts an embedded ActiveMQ broker. To control this behaviour, set the following system properties using the -D
-option in the environment variable FALCON_OPTS (see the example after the list):
-   * falcon.embeddedmq=<true/false> - Should server start embedded active mq, default true
-   * falcon.embeddedmq.port=<port> - Port for embedded active mq, default 61616
-   * falcon.embeddedmq.data=<path> - Data path for embedded active mq, default {package dir}/logs/data
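-
-A minimal sketch keeping the embedded broker enabled while overriding its port and data path (the data path is only a placeholder; the property names are the ones listed above):
-<verbatim>
-export FALCON_OPTS="-Dfalcon.embeddedmq=true -Dfalcon.embeddedmq.port=61616 -Dfalcon.embeddedmq.data=/var/falcon/mq-data"
-</verbatim>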
-
----+++Falcon System Notifications
-Some Falcon features such as late data handling, retries, metadata service, depend on JMS notifications sent when the Oozie workflow completes. These system notifications are sent as part of Falcon Post Processing action. Given that the post processing action is also a job, it is prone to failures and in case of failures, Falcon is blind to the status of the workflow. To alleviate this problem and make the notifications more reliable, you can enable Oozie's JMS notification feature and disable Falcon post-processing notification by making the following changes:
-   * In Falcon runtime.properties, set *.falcon.jms.notification.enabled to false (as shown below). This will turn off JMS notification in post-processing.
-   * Copy the notification-related properties in oozie/conf/oozie-site.xml to the oozie-site.xml of the Oozie installation. Restart Oozie so the changes get reflected.
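-
-A minimal sketch of the runtime.properties change from the first step (the file lives under your Falcon conf directory):
-<verbatim>
-# turn off JMS notification in Falcon post-processing
-*.falcon.jms.notification.enabled=false
-</verbatim>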
-
-*NOTE : If you disable Falcon post-processing JMS notification and do not enable Oozie JMS notification, features such as failure retry, late data handling and metadata service will be disabled for all entities on the server.*
-
----+++Enabling Falcon Native Scheduler
-You can either choose to schedule entities using Oozie's coordinator or using Falcon's native scheduler. To be able to schedule entities natively on Falcon, you will need to add some additional properties to <verbatim>$FALCON_HOME/conf/startup.properties</verbatim> before starting the Falcon Server. For details on the same, refer to [[FalconNativeScheduler][Falcon Native Scheduler]]
-
----+++Adding Extension Libraries
-
-Library extensions allow users to add custom libraries to entity lifecycles such as feed retention, feed replication
-and process execution. This is useful for use cases such as adding filesystem extensions. To enable this, add the
-following configs to startup.properties (a combined example is shown after the list):
-*.libext.paths=<paths to be added to all entity lifecycles>
-
-*.libext.feed.paths=<paths to be added to all feed lifecycles>
-
-*.libext.feed.retentions.paths=<paths to be added to feed retention workflow>
-
-*.libext.feed.replication.paths=<paths to be added to feed replication workflow>
-
-*.libext.process.paths=<paths to be added to process workflow>
-
-The configured jars are added to falcon classpath and the corresponding workflows.
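-
-A minimal startup.properties sketch combining these settings (the HDFS paths are only placeholders):
-<verbatim>
-*.libext.paths=hdfs:///projects/falcon/libext/common
-*.libext.feed.paths=hdfs:///projects/falcon/libext/feed
-*.libext.feed.retentions.paths=hdfs:///projects/falcon/libext/feed/retention
-*.libext.feed.replication.paths=hdfs:///projects/falcon/libext/feed/replication
-*.libext.process.paths=hdfs:///projects/falcon/libext/process
-</verbatim>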

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/Distributed-mode.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/Distributed-mode.twiki b/trunk/releases/master/src/site/twiki/Distributed-mode.twiki
deleted file mode 100644
index 34fb092..0000000
--- a/trunk/releases/master/src/site/twiki/Distributed-mode.twiki
+++ /dev/null
@@ -1,198 +0,0 @@
----+Distributed Mode
-
-
-Following are the steps needed to package and deploy Falcon in Distributed Mode. You need to complete Steps 1-3 mentioned
- [[InstallationSteps][here]] before proceeding further.
-
----++Package Falcon
-Ensure that you are in the base directory (where you cloned Falcon). Let’s call it {project dir}
-
-<verbatim>
-$mvn clean assembly:assembly -DskipTests -DskipCheck=true -Pdistributed,hadoop-2
-</verbatim>
-
-
-<verbatim>
-$ls {project dir}/distro/target/
-</verbatim>
-
-It should give an output like below :
-<verbatim>
-apache-falcon-distributed-${project.version}-server.tar.gz
-apache-falcon-distributed-${project.version}-sources.tar.gz
-archive-tmp
-maven-shared-archive-resources
-</verbatim>
-
-   * apache-falcon-distributed-${project.version}-sources.tar.gz contains source files of Falcon repo.
-
-   * apache-falcon-distributed-${project.version}-server.tar.gz package contains project artifacts along with its
-dependencies, configuration files and scripts required to deploy Falcon.
-
-
-Tar can be found in {project dir}/distro/target/apache-falcon-distributed-${project.version}-server.tar.gz. This is the tar
-used for installing Falcon. Let's call it {falcon package}.
-
-Tar is structured as follows.
-
-<verbatim>
-
-|- bin
-   |- falcon
-   |- falcon-start
-   |- falcon-stop
-   |- falcon-status
-   |- falcon-config.sh
-   |- service-start.sh
-   |- service-stop.sh
-   |- service-status.sh
-   |- prism-stop
-   |- prism-start
-   |- prism-status
-|- conf
-   |- startup.properties
-   |- runtime.properties
-   |- client.properties
-   |- prism.keystore
-   |- log4j.xml
-   |- falcon-env.sh
-|- docs
-|- client
-   |- lib (client support libs)
-|- server
-   |- webapp
-      |- falcon.war
-      |- prism.war
-|- oozie
-   |- conf
-   |- libext
-|- hadooplibs
-|- README
-|- NOTICE.txt
-|- LICENSE.txt
-|- DISCLAIMER.txt
-|- CHANGES.txt
-</verbatim>
-
-
----++Installing & running Falcon
-
----+++Installing Falcon
-
-Running Falcon in distributed mode requires bringing up both prism and server. As the name suggests, Falcon prism splits
-the requests it gets across the Falcon servers. It is a good practice to start prism and server with their corresponding
-configurations separately. Create separate directories for prism and server. Let's call them {falcon-prism-dir} and
-{falcon-server-dir} respectively.
-
-*For prism*
-<verbatim>
-$mkdir {falcon-prism-dir}
-$tar -xzvf {falcon package}
-</verbatim>
-
-*For server*
-<verbatim>
-$mkdir {falcon-server-dir}
-$tar -xzvf {falcon package}
-</verbatim>
-
-
----+++Starting Prism
-
-<verbatim>
-cd {falcon-prism-dir}/falcon-distributed-${project.version}
-bin/prism-start [-port <port>]
-</verbatim>
-
-By default,
-* prism server starts at port 16443. To change the port, use the -port option.
-
-* falcon.enableTLS can be set to true or false explicitly to enable SSL; if it is not set, a port that ends with 443 will
-automatically put prism on https://
-
-* prism starts with conf from {falcon-prism-dir}/falcon-distributed-${project.version}/conf. To override this (to use
-the same conf with multiple prism upgrades), set environment variable FALCON_CONF to the path of conf dir. You can find
-the instructions for configuring Falcon [[Configuration][here]].
-
-*Enabling prism-client*
-If prism is not started using the default port 16443, then edit the following property in
-{falcon-prism-dir}/falcon-distributed-${project.version}/conf/client.properties:
-falcon.url=http://{machine-ip}:{prism-port}/
-
-
----+++Starting Falcon Server
-
-<verbatim>
-$cd {falcon-server-dir}/falcon-distributed-${project.version}
-$bin/falcon-start [-port <port>]
-</verbatim>
-
-By default,
-* If falcon.enableTLS is set to true explicitly or not set at all, Falcon starts at port 15443 on https:// by default.
-
-* If falcon.enableTLS is set to false explicitly, Falcon starts at port 15000 on http://.
-
-* To change the port, use -port option.
-
-* If falcon.enableTLS is not set explicitly, a port that ends with 443 will automatically put Falcon on https://. Any
-other port will put Falcon on http://.
-
-* server starts with conf from {falcon-server-dir}/falcon-distributed-${project.version}/conf. To override this (to use
-the same conf with multiple server upgrades), set environment variable FALCON_CONF to the path of conf dir. You can find
- the instructions for configuring Falcon [[Configuration][here]].
-
-*Enabling server-client*
-If the server is not started using the default port 15443, then edit the following property in
-{falcon-server-dir}/falcon-distributed-${project.version}/conf/client.properties. You can find the instructions for
-configuring Falcon [[Configuration][here]].
-falcon.url=http://{machine-ip}:{server-port}/
-
-*NOTE* : https is the secure version of HTTP, the protocol over which data is sent between your browser and the website
-that you are connected to. By default Falcon runs in https mode. But user can configure it to http.
-
-
----+++Using Falcon
-
-<verbatim>
-$cd {falcon-prism-dir}/falcon-distributed-${project.version}
-$bin/falcon admin -version
-Falcon server build version: {Version:"${project.version}-SNAPSHOT-rd7e2be9afa2a5dc96acd1ec9e325f39c6b2f17f7",
-Mode:"embedded"}
-
-$bin/falcon help
-(for more details about Falcon cli usage)
-</verbatim>
-
-
----+++Dashboard
-
-Once Falcon / prism is started, you can view the status of Falcon entities using the Web-based dashboard. You can open
-your browser at the corresponding port to use the web UI.
-
-Falcon dashboard makes the REST api calls as user "falcon-dashboard". If this user does not exist on your Falcon and
-Oozie servers, please create the user.
-
-<verbatim>
-## create user.
-[root@falconhost ~] useradd -U -m falcon-dashboard -G users
-
-## verify user is created with membership in correct groups.
-[root@falconhost ~] groups falcon-dashboard
-falcon-dashboard : falcon-dashboard users
-[root@falconhost ~]
-</verbatim>
-
-
----+++Stopping Falcon Server
-
-<verbatim>
-$cd {falcon-server-dir}/falcon-distributed-${project.version}
-$bin/falcon-stop
-</verbatim>
-
----+++Stopping Falcon Prism
-
-<verbatim>
-$cd {falcon-prism-dir}/falcon-distributed-${project.version}
-$bin/prism-stop
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/Embedded-mode.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/Embedded-mode.twiki b/trunk/releases/master/src/site/twiki/Embedded-mode.twiki
deleted file mode 100644
index d5c37a1..0000000
--- a/trunk/releases/master/src/site/twiki/Embedded-mode.twiki
+++ /dev/null
@@ -1,198 +0,0 @@
----+Embedded Mode
-
-Following are the steps needed to package and deploy Falcon in Embedded Mode. You need to complete Steps 1-3 mentioned
- [[InstallationSteps][here]] before proceeding further.
-
----++Package Falcon
-Ensure that you are in the base directory (where you cloned Falcon). Let’s call it {project dir}
-
-<verbatim>
-$mvn clean assembly:assembly -DskipTests -DskipCheck=true
-</verbatim>
-
-<verbatim>
-$ls {project dir}/distro/target/
-</verbatim>
-It should give an output like below :
-<verbatim>
-apache-falcon-${project.version}-bin.tar.gz
-apache-falcon-${project.version}-sources.tar.gz
-archive-tmp
-maven-shared-archive-resources
-</verbatim>
-
-* apache-falcon-${project.version}-sources.tar.gz contains source files of Falcon repo.
-
-* apache-falcon-${project.version}-bin.tar.gz package contains project artifacts along with its dependencies,
-configuration files and scripts required to deploy Falcon.
-
-Tar can be found in {project dir}/distro/target/apache-falcon-${project.version}-bin.tar.gz
-
-Tar is structured as follows :
-
-<verbatim>
-
-|- bin
-   |- falcon
-   |- falcon-start
-   |- falcon-stop
-   |- falcon-status
-   |- falcon-config.sh
-   |- service-start.sh
-   |- service-stop.sh
-   |- service-status.sh
-|- conf
-   |- startup.properties
-   |- runtime.properties
-   |- prism.keystore
-   |- client.properties
-   |- log4j.xml
-   |- falcon-env.sh
-|- docs
-|- client
-   |- lib (client support libs)
-|- server
-   |- webapp
-      |- falcon.war
-|- data
-   |- falcon-store
-   |- graphdb
-   |- localhost
-|- examples
-   |- app
-      |- hive
-      |- oozie-mr
-      |- pig
-   |- data
-   |- entity
-      |- filesystem
-      |- hcat
-|- oozie
-   |- conf
-   |- libext
-|- logs
-|- hadooplibs
-|- README
-|- NOTICE.txt
-|- LICENSE.txt
-|- DISCLAIMER.txt
-|- CHANGES.txt
-</verbatim>
-
-
----++Installing & running Falcon
-
-Running Falcon in embedded mode requires bringing up server.
-
-<verbatim>
-$tar -xzvf {falcon package}
-$cd falcon-${project.version}
-</verbatim>
-
-
----+++Starting Falcon Server
-<verbatim>
-$cd falcon-${project.version}
-$bin/falcon-start [-port <port>]
-</verbatim>
-
-By default,
-* If falcon.enableTLS is set to true explicitly or not set at all, Falcon starts at port 15443 on https:// by default.
-
-* If falcon.enableTLS is set to false explicitly, Falcon starts at port 15000 on http://.
-
-* To change the port, use -port option.
-
-* If falcon.enableTLS is not set explicitly, a port that ends with 443 will automatically put Falcon on https://. Any
-other port will put Falcon on http://.
-
-* Server starts with conf from {falcon-server-dir}/falcon-${project.version}/conf. To override this (to use
-the same conf with multiple server upgrades), set environment variable FALCON_CONF to the path of conf dir. You can find
- the instructions for configuring Falcon [[Configuration][here]].
-
-
----+++Enabling server-client
-If the server is not started using the default port 15443, then edit the following property in
-{falcon-server-dir}/falcon-${project.version}/conf/client.properties:
-
-falcon.url=http://{machine-ip}:{server-port}/
-
-
----+++Using Falcon
-<verbatim>
-$cd falcon-${project.version}
-$bin/falcon admin -version
-Falcon server build version: {Version:"${project.version}-SNAPSHOT-rd7e2be9afa2a5dc96acd1ec9e325f39c6b2f17f7",Mode:
-"embedded",Hadoop:"${hadoop.version}"}
-
-$bin/falcon help
-(for more details about Falcon cli usage)
-</verbatim>
-
-*Note* : https is the secure version of HTTP, the protocol over which data is sent between your browser and the website
-that you are connected to. By default Falcon runs in https mode. But user can configure it to http.
-
-
----+++Dashboard
-
-Once Falcon server is started, you can view the status of Falcon entities using the Web-based dashboard. You can open
-your browser at the corresponding port to use the web UI.
-
-Falcon dashboard makes the REST api calls as user "falcon-dashboard". If this user does not exist on your Falcon and
-Oozie servers, please create the user.
-
-<verbatim>
-## create user.
-[root@falconhost ~] useradd -U -m falcon-dashboard -G users
-
-## verify user is created with membership in correct groups.
-[root@falconhost ~] groups falcon-dashboard
-falcon-dashboard : falcon-dashboard users
-[root@falconhost ~]
-</verbatim>
-
-
----++Running Examples using embedded package
-<verbatim>
-$cd falcon-${project.version}
-$bin/falcon-start
-</verbatim>
-Make sure the Hadoop and Oozie endpoints are set according to your setup in
-examples/entity/filesystem/standalone-cluster.xml.
-The cluster locations, i.e. the staging and working dirs, MUST be created prior to submitting a cluster entity to Falcon.
-*staging* must have 777 permissions and the parent dirs must have execute permissions.
-*working* must have 755 permissions and the parent dirs must have execute permissions (see the commands below).
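-
-For example, a minimal sketch of creating the two locations (the paths are placeholders; use the staging and working paths from your standalone-cluster.xml):
-<verbatim>
-$hadoop fs -mkdir -p /projects/falcon/staging /projects/falcon/working
-$hadoop fs -chmod 777 /projects/falcon/staging
-$hadoop fs -chmod 755 /projects/falcon/working
-</verbatim>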
-<verbatim>
-$bin/falcon entity -submit -type cluster -file examples/entity/filesystem/standalone-cluster.xml
-</verbatim>
-Submit input and output feeds:
-<verbatim>
-$bin/falcon entity -submit -type feed -file examples/entity/filesystem/in-feed.xml
-$bin/falcon entity -submit -type feed -file examples/entity/filesystem/out-feed.xml
-</verbatim>
-Set-up workflow for the process:
-<verbatim>
-$hadoop fs -put examples/app /
-</verbatim>
-Submit and schedule the process:
-<verbatim>
-$bin/falcon entity -submitAndSchedule -type process -file examples/entity/filesystem/oozie-mr-process.xml
-$bin/falcon entity -submitAndSchedule -type process -file examples/entity/filesystem/pig-process.xml
-</verbatim>
-Generate input data:
-<verbatim>
-$examples/data/generate.sh <<hdfs endpoint>>
-</verbatim>
-Get status of instances:
-<verbatim>
-$bin/falcon instance -status -type process -name oozie-mr-process -start 2013-11-15T00:05Z -end 2013-11-15T01:00Z
-</verbatim>
-
-HCat based example entities are in examples/entity/hcat.
-
-
----+++Stopping Falcon Server
-<verbatim>
-$cd falcon-${project.version}
-$bin/falcon-stop
-</verbatim>


[5/6] falcon git commit: Deleting accidental check-in of trunk/release/master

Posted by pa...@apache.org.
http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/EntitySpecification.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/EntitySpecification.twiki b/trunk/releases/master/src/site/twiki/EntitySpecification.twiki
deleted file mode 100644
index d08c3a3..0000000
--- a/trunk/releases/master/src/site/twiki/EntitySpecification.twiki
+++ /dev/null
@@ -1,996 +0,0 @@
----++ Contents
-   * <a href="#Cluster_Specification">Cluster Specification</a>
-   * <a href="#Feed_Specification">Feed Specification</a>
-   * <a href="#Process_Specification">Process Specification</a>
-   
----++ Cluster Specification
-The cluster XSD specification is available here:
-A cluster contains different interfaces which are used by Falcon like readonly, write, workflow and messaging.
-A cluster is referenced by feeds and processes which are on-boarded to Falcon by its name.
-
-Following are the tags defined in a cluster.xml:
-<verbatim>
-<cluster colo="gs" description="" name="corp" xmlns="uri:falcon:cluster:0.1"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
-</verbatim>
-The colo specifies the colo to which this cluster belongs, and name is the name of the cluster, which has to
-be unique.
-
-
----+++ Interfaces
-
-A cluster has various interfaces as described below:
-<verbatim>
-    <interface type="readonly" endpoint="hftp://localhost:50010" version="0.20.2" />
-</verbatim>
-A readonly interface specifies the endpoint for Hadoop's HFTP protocol, 
-this would be used in the context of feed replication.
-
-<verbatim>
-<interface type="write" endpoint="hdfs://localhost:8020" version="0.20.2" />
-</verbatim>
-A write interface specifies the interface to write to hdfs; its endpoint is the value of fs.defaultFS.
-Falcon uses this interface to write system data to hdfs and feeds referencing this cluster are written to hdfs
-using the same write interface.
-
-<verbatim>
-<interface type="execute" endpoint="localhost:8021" version="0.20.2" />
-</verbatim>
-An execute interface specifies the interface for the job tracker; its endpoint is the value of mapreduce.jobtracker.address.
-Falcon uses this interface to submit the processes as jobs on !JobTracker defined here.
-
-<verbatim>
-<interface type="workflow" endpoint="http://localhost:11000/oozie/" version="4.0" />
-</verbatim>
-A workflow interface specifies the interface for the workflow engine; an example of its endpoint is the value of OOZIE_URL.
-Falcon uses this interface to schedule the processes referencing this cluster on workflow engine defined here.
-
-<verbatim>
-<interface type="registry" endpoint="thrift://localhost:9083" version="0.11.0" />
-</verbatim>
-A registry interface specifies the interface for metadata catalog, such as Hive Metastore (or HCatalog).
-Falcon uses this interface to register/de-register partitions for a given database and table. Also,
-uses this information to schedule data availability events based on partitions in the workflow engine.
-Although Hive metastore supports both RPC and HTTP, Falcon comes with an implementation for RPC over thrift.
-
-<verbatim>
-<interface type="messaging" endpoint="tcp://localhost:61616?daemon=true" version="5.4.6" />
-</verbatim>
-A messaging interface specifies the interface for sending feed availability messages; its endpoint is the broker URL with a tcp address.
-
----+++ Locations
-
-A cluster has a list of locations defined:
-<verbatim>
-<location name="staging" path="/projects/falcon/staging" />
-<location name="working" path="/projects/falcon/working" /> <!--optional-->
-</verbatim>
-A location has a name and a path; the name is the type of location. Allowed values of name are staging, temp and working.
-Path is the hdfs path for each location.
-Falcon would use the location to do intermediate processing of entities in hdfs and hence Falcon
-should have read/write/execute permission on these locations.
-These locations MUST be created prior to submitting a cluster entity to Falcon.
-*staging* should have 777 permissions and is a mandatory location. The parent dirs must have execute permissions so multiple
-users can write to this location. *working* must have 755 permissions and is an optional location.
-If *working* is not specified, falcon creates a sub directory in the *staging* location with 755 perms.
-The parent dir for *working* must have execute permissions so multiple
-users can read from this location.
-
----+++ ACL
-
-A cluster has ACL (Access Control List) useful for implementing permission requirements
-and provide a way to set different permissions for specific users or named groups.
-<verbatim>
-    <ACL owner="test-user" group="test-group" permission="*"/>
-</verbatim>
-ACL indicates the access control list for this cluster.
-owner is the owner of this entity.
-group is the group that has read access.
-permission indicates the permission.
-
----+++ Custom Properties
-
-A cluster has a list of properties:
-A key-value pair, which are propagated to the workflow engine.
-<verbatim>
-<property name="brokerImplClass" value="org.apache.activemq.ActiveMQConnectionFactory" />
-</verbatim>
-Ideally the JMS implementation class name of the messaging engine (brokerImplClass)
-should be defined here.
-
----++ Datasource Specification
-
-The datasource entity contains the connection information required to connect to a data source such as a MySQL database.
-The datasource XSD specification is available here:
-A datasource contains read and write interfaces which are used by Falcon to import data from or export data to the
-datasource respectively. A datasource is referenced by name from the feeds that are on-boarded to Falcon.
-
-Following are the tags defined in a datasource.xml:
-
-<verbatim>
-<datasource colo="west-coast" description="Customer database on west coast" type="mysql"
- name="test-hsql-db" xmlns="uri:falcon:datasource:0.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
-</verbatim>
-
-The colo specifies the colo to which the datasource belongs, and name is the name of the datasource, which has to
-be unique.
-
----+++ Interfaces
-
-A datasource has two interfaces as described below:
-<verbatim>
-    <interface type="readonly" endpoint="jdbc:hsqldb:localhost/db"/>
-</verbatim>
-
-A readonly interface specifies the endpoint and protocol to connect to a datasource.
-This would be used in the context of import from datasource into HDFS.
-
-<verbatim>
-<interface type="write" endpoint="jdbc:hsqldb:localhost/db1">
-</verbatim>
-
-A write interface specifies the endpoint and protocol to write to the datasource.
-Falcon uses this interface to export data from hdfs to the datasource.
-
-<verbatim>
-<credential type="password-text">
-    <userName>SA</userName>
-    <passwordText></passwordText>
-</credential>
-</verbatim>
-
-
-A credential is associated with an interface (read or write) providing user name and password to authenticate
-to the datasource.
-
-<verbatim>
-<credential type="password-text">
-     <userName>SA</userName>
-     <passwordFile>hdfs-file-path</passwordText>
-</credential>
-</verbatim>
-
-The credential can be specified via a password file present in the HDFS. This file should only be accessible by
-the user.
-
----++ Feed Specification
-The Feed XSD specification is available here.
-A Feed defines various attributes of a feed like feed location, frequency, late-arrival handling and retention policies.
-A feed can be scheduled on a cluster; once a feed is scheduled, its retention and replication processes are triggered on that cluster.
-<verbatim>
-<feed description="clicks log" name="clicks" xmlns="uri:falcon:feed:0.1"
-xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
-</verbatim>
-A feed should have a unique name and this name is referenced by processes as input or output feed.
-
----+++ Storage
-Falcon introduces a new abstraction to encapsulate the storage for a given feed which can either be
-expressed as a path on the file system, File System Storage or a table in a catalog such as Hive, Catalog Storage.
-
-<verbatim>
-    <xs:choice minOccurs="1" maxOccurs="1">
-        <xs:element type="locations" name="locations"/>
-        <xs:element type="catalog-table" name="table"/>
-    </xs:choice>
-</verbatim>
-
-Feed should contain one of the two storage options. Locations on File System or Table in a Catalog.
-
----++++ File System Storage
-
-<verbatim>
-        <clusters>
-        <cluster name="test-cluster">
-            <validity start="2012-07-20T03:00Z" end="2099-07-16T00:00Z"/>
-            <retention limit="days(10)" action="delete"/>
-            <sla slaLow="hours(3)" slaHigh="hours(4)"/>
-            <locations>
-                <location type="data" path="/hdfsDataLocation/${YEAR}/${MONTH}/${DAY}/${HOUR}/${MINUTE}"/>
-                <location type="stats" path="/projects/falcon/clicksStats" />
-                <location type="meta" path="/projects/falcon/clicksMetaData" />
-            </locations>
-        </cluster>
-..... more clusters </clusters>
-</verbatim>
-A feed references a cluster by its name; before submitting a feed, all the referenced clusters should be submitted to Falcon.
-type: specifies whether the referenced cluster should be treated as a source or target for a feed. A feed can have multiple source and target clusters. If the type of a cluster is not specified then the cluster is not considered for replication.
-Validity of a feed on a cluster specifies the duration for which this feed is valid on this cluster.
-Retention specifies how long the feed is retained on this cluster and the action to be taken on the feed after the expiry of the retention period.
-The retention limit is specified by the expression frequency(times), ex: if a feed should be retained for at least 6 hours then the retention's limit="hours(6)".
-The field partitionExp contains partition tags. The number of partition tags has to be equal to the number of partitions specified in the feed schema. A partition tag can be a wildcard(*), a static string or an expression. At least one of the strings has to be an expression.
-sla specifies the sla for the feed on this cluster. This is an optional parameter and the sla can be the same as or different from the
-global sla tag (mentioned outside the clusters tag). This tag gives the user the flexibility to have
-different slas for different clusters, e.g. in case of replication. If this attribute is missing then the default global
-sla is picked from the feed definition.
-Location specifies where the feed is available on this cluster. This is an optional parameter and the path can be the same as or different from the global locations tag value (mentioned outside the clusters tag). This tag gives the user the flexibility to have the feed at different locations on different clusters. If this attribute is missing then the default global location is picked from the feed definition. Also the individual location tags data, stats, meta are optional.
-<verbatim>
- <location type="data" path="/projects/falcon/clicks" />
- <location type="stats" path="/projects/falcon/clicksStats" />
- <location type="meta" path="/projects/falcon/clicksMetaData" />
-</verbatim>
-A location tag specifies the type of location like data, meta, stats and the corresponding paths for them.
-A feed should at least define the location for type data, which specifies the HDFS path pattern where the feed is generated
-periodically. ex: type="data" path="/projects/TrafficHourly/${YEAR}-${MONTH}-${DAY}/traffic"
-The granularity of date pattern in the path should be at least that of a frequency of a feed.
-Other supported location types are stats and meta paths; if a process references a feed then the meta and stats
-paths are available as properties in the process.
-
----++++ Catalog Storage (Table)
-
-A table tag specifies the table URI in the catalog registry as:
-<verbatim>
-catalog:$database-name:$table-name#partition-key=partition-value;partition-key=partition-value;*
-</verbatim>
-
-This is modeled as a URI (similar to an ISBN URI). It does not have any reference to Hive or HCatalog. It's quite
-generic so it can be tied to other implementations of a catalog registry. The catalog implementation specified
-in the startup config provides implementation for the catalog URI.
-
-Top-level partition has to be a dated pattern and the granularity of date pattern should be at least that
-of a frequency of a feed.
-
-<verbatim>
-    <xs:complexType name="catalog-table">
-        <xs:annotation>
-            <xs:documentation>
-                catalog specifies the uri of a Hive table along with the partition spec.
-                uri="catalog:$database:$table#(partition-key=partition-value);+"
-                Example: catalog:logs-db:clicks#ds=${YEAR}-${MONTH}-${DAY}
-            </xs:documentation>
-        </xs:annotation>
-        <xs:attribute type="xs:string" name="uri" use="required"/>
-    </xs:complexType>
-</verbatim>
-
-Examples:
-<verbatim>
-<table uri="catalog:default:clicks#ds=${YEAR}-${MONTH}-${DAY}-${HOUR};region=${region}" />
-<table uri="catalog:src_demo_db:customer_raw#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
-<table uri="catalog:tgt_demo_db:customer_bcp#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
-</verbatim>
-
----+++ Partitions
-
-<verbatim>
-   <partitions>
-        <partition name="country" />
-        <partition name="cluster" />
-    </partitions>
-</verbatim>
-A feed can define multiple partitions; if a referenced cluster defines partitions then the number of partitions in the feed has to be equal to or more than the cluster partitions.
-
-*Note:* This will only apply for !FileSystem storage but not Table storage as partitions are defined and maintained in
-Hive (HCatalog) registry.
-
----+++ Groups
-
-<verbatim>
-    <groups>online,bi</groups>
-</verbatim>
-A feed specifies a list of comma separated groups. A group is a logical grouping of feeds, and a group is said to be
-available if all the feeds belonging to that group are available. The frequency of all the feeds which belong to the same group
-must be the same.
-
----+++ Availability Flags
-
-<verbatim>
-    <availabilityFlag>_SUCCESS</availabilityFlag>
-</verbatim>
-An availabilityFlag specifies the name of a file which, when present/created in a feed's data directory,
-marks the feed as available. ex: _SUCCESS. If this element is omitted then Falcon considers the presence of the feed's
-data directory as feed availability.
-
----+++ Frequency
-
-<verbatim>
-    <frequency>minutes(20)</frequency>
-</verbatim>
-A feed has a frequency which specifies how frequently this feed is generated.
-ex: it can be generated every hour, every 5 minutes, daily, weekly etc.
-Valid frequency types for a feed are minutes, hours, days, months. The frequency value should be a positive integer.
-
----+++ SLA
-<verbatim>
-    <sla slaLow="hours(40)" slaHigh="hours(44)" />
-</verbatim>
-
-A feed can have SLA and each SLA has two properties - slaLow and slaHigh. Both slaLow and slaHigh are written using
-expressions like frequency. slaLow is intended to serve for alerting for feed instances which are in danger of missing their
-availability SLAs. slaHigh is intended to serve for reporting the feeds which missed their SLAs. SLAs are relative to
-feed instance time.
-
----+++ Import
-
-<verbatim>
-<import>
-    <source name="test-hsql-db" tableName="customer">
-        <extract type="full">
-            <mergepolicy>snapshot</mergepolicy>
-         </extract>
-         <fields>
-            <includes>
-                <field>id</field>
-                <field>name</field>
-            </includes>
-         </fields>
-    </source>
-    <arguments>
-        <argument name="--split-by" value="id"/>
-        <argument name="--num-mappers" value="2"/>
-    </arguments>
-</import>
-</verbatim>
-
-A feed can have an import policy associated with it. The source name specifies the reference to the
-datasource entity from which the data will be imported to HDFS. The tableName specifies the table or topic to be
-imported from the datasource. The extract type specifies the pull mechanism (full or
-incremental extract). The full extract method extracts all the data from the datasource. The incremental extraction
-method feature implementation is in progress. The mergepolicy determines how the data is to be laid out on HDFS.
-The snapshot layout creates a snapshot of the data on HDFS using the feed's location specification. Fields is used
-to specify the projection columns. Feed import from a database uses Sqoop underneath to achieve the task. Any advanced
-Sqoop options can be specified via the arguments.
-
----+++ Late Arrival
-
-<verbatim>
-    <late-arrival cut-off="hours(6)" />
-</verbatim>
-A late-arrival specifies the cut-off period till which the feed is expected to arrive late and should be honored by processes referring to it as an input feed, by rerunning the instances in case the data arrives late within the cut-off period.
-The cut-off period is specified by the expression frequency(times), ex: if the feed can arrive late
-by up to 8 hours then the late-arrival's cut-off="hours(8)".
-
-*Note:* This will only apply for !FileSystem storage but not Table storage until a future time.
-
-
----+++ Email Notification
-
-<verbatim>
-    <notification type="email" to="bob@xyz.com"/>
-</verbatim>
-Specifying the notification element with "type" property allows users to receive email notification when a scheduled feed instance completes.
-Multiple recipients of an email can be provided as comma separated addresses with "to" property.
-To send email notification ensure that SMTP parameters are defined in Falcon startup.properties.
-Refer to [[FalconEmailNotification][Falcon Email Notification]] for more details.
-
-
----+++ ACL
-
-A feed has an ACL (Access Control List), which is useful for implementing permission requirements
-and provides a way to set different permissions for specific users or named groups.
-<verbatim>
-    <ACL owner="test-user" group="test-group" permission="*"/>
-</verbatim>
-ACL indicates the Access control list for this feed.
-owner is the owner of this entity, group is the group that has read access,
-and permission indicates the permission.
-
----+++ Custom Properties
-
-<verbatim>
-    <properties>
-        <property name="tmpFeedPath" value="tmpFeedPathValue" />
-        <property name="field2" value="value2" />
-        <property name="queueName" value="hadoopQueue"/>
-        <property name="jobPriority" value="VERY_HIGH"/>
-        <property name="timeout" value="hours(1)"/>
-        <property name="parallel" value="3"/>
-        <property name="maxMaps" value="8"/>
-        <property name="mapBandwidth" value="1"/>
-        <property name="overwrite" value="true"/>
-        <property name="ignoreErrors" value="false"/>
-        <property name="skipChecksum" value="false"/>
-        <property name="removeDeletedFiles" value="true"/>
-        <property name="preserveBlockSize" value="true"/>
-        <property name="preserveReplicationNumber" value="true"/>
-        <property name="preservePermission" value="true"/>
-        <property name="order" value="LIFO"/>
-    </properties>
-</verbatim>
-These are key-value pairs which are propagated to the workflow engine. "queueName" and "jobPriority" are special properties
-available to the user to specify the Hadoop job queue and priority; the same values are used by Falcon's launcher job.
-"timeout", "parallel" and "order" are other special properties: timeout decides the replication instance's timeout value while
-waiting for the feed instance, parallel decides the concurrent replication instances that can run at any given time, and
-order decides the execution order for replication instances like FIFO, LIFO and LAST_ONLY.
-DistCp options can be passed as custom properties, which will be propagated to the DistCp tool. "maxMaps" represents
-the maximum number of maps used during replication. "mapBandwidth" represents the bandwidth in MB/s
-used by each mapper during replication. "overwrite" represents overwriting the destination during replication.
-"ignoreErrors" represents ignoring failures that do not cause the job to fail during replication. "skipChecksum" represents
-bypassing checksum verification during replication. "removeDeletedFiles" represents deleting the files existing in the
-destination but not in the source during replication. "preserveBlockSize" represents preserving the block size during
-replication. "preserveReplicationNumber" represents preserving the replication number during replication.
-"preservePermission" represents preserving permissions during replication.
-
-
----+++ Lifecycle
-<verbatim>
-
-<lifecycle>
-    <retention-stage>
-        <frequency>hours(10)</frequency>
-        <queue>reports</queue>
-        <priority>NORMAL</priority>
-        <properties>
-            <property name="retention.policy.agebaseddelete.limit" value="hours(9)"></property>
-        </properties>
-    </retention-stage>
-</lifecycle>
-
-</verbatim>
-
-The lifecycle tag is the new way to define various stages of a feed's lifecycle. In the example above we have defined a
-retention-stage using the lifecycle tag. You may define lifecycle at the global level, at a cluster level, or both. Cluster level
-configuration takes precedence and falcon falls back to the global definition if the cluster level specification is missing.
-
-
----++++ Retention Stage
-As of now there are two ways to specify retention. One is through the <retention> tag in the cluster and another is the
-new way through <retention-stage> tag in <lifecycle> tag. If both are defined for a feed, then the lifecycle tag will be
-considered effective and falcon will ignore the <retention> tag in the cluster. If there is an invalid configuration of
-retention-stage in lifecycle tag, then falcon will *NOT* fall back to retention tag even if it is defined and will
-throw validation error.
-
-In this new method of defining retention you can specify the frequency at which the retention should occur, and you can
-also define the queue and priority parameters for retention jobs. The default behavior of retention-stage is the same as
-the existing one, which is to delete all instances corresponding to instance-time earlier than the duration provided in
-"retention.policy.agebaseddelete.limit".
-
-Property "retention.policy.agebaseddelete.limit" is a mandatory property and must contain a valid duration e.g. "hours(1)".
-Retention frequency is not a mandatory parameter. If the user doesn't specify the frequency in the retention stage then
-it doesn't fall back to the old retention policy frequency. Its default value is set to 6 hours if the feed frequency is less
-than 6 hours, else it is set to the feed frequency, as retention shouldn't be more frequent than data availability to avoid
-wastage of compute resources.
-
-In the future, we will allow more customisation through this method, such as customising how the instances to be deleted are chosen.
-
-
-
----++ Process Specification
-A process defines configuration for a workflow. A workflow is a directed acyclic graph(DAG) which defines the job for the workflow engine. A process definition defines  the configurations required to run the workflow job. For example, process defines the frequency at which the workflow should run, the clusters on which the workflow should run, the inputs and outputs for the workflow, how the workflow failures should be handled, how the late inputs should be handled and so on.  
-
-The different details of process are:
----+++ Name
-Each process is identified with a unique name.
-Syntax:
-<verbatim>
-<process name="[process name]">
-...
-</process>
-</verbatim>
-
----+++ Tags
-An optional list of comma separated tags which are used for classification of processes.
-Syntax:
-<verbatim>
-...
-    <tags>consumer=consumer@xyz.com, owner=producer@xyz.com, department=forecasting</tags>
-</verbatim>
-
----+++ Pipelines
-An optional list of comma separated words which specifies the data processing pipeline(s) to which this process belongs.
-Only letters, numbers and underscores are allowed in a pipeline string.
-Syntax:
-<verbatim>
-...
-    <pipelines>test_Pipeline, dataReplication, clickStream_pipeline</pipelines>
-</verbatim>
-
----+++ Cluster
-The cluster on which the workflow should run. A process should contain one or more clusters. The cluster definition for the cluster name gives the end points for workflow execution, name node, job tracker, messaging and so on. Each cluster in turn has a validity mentioned, which tells the times between which the job should run on that specified cluster.
-Syntax:
-<verbatim>
-<process name="[process name]">
-...
-   <clusters>
-        <cluster name="test-cluster1">
-            <validity start="2012-12-21T08:15Z" end="2100-01-01T00:00Z"/>
-        </cluster>
-        <cluster name="test-cluster2">
-            <validity start="2012-12-21T08:15Z" end="2100-01-01T00:00Z"/>
-        </cluster>
-       ....
-       ....
-    </clusters>
-
-...
-</process>
-</verbatim>
-
----+++ Parallel
-Parallel defines how many instances of the workflow can run concurrently. It should be a positive integer > 0.
-For example, parallel of 1 ensures that only one instance of the workflow can run at a time. The next instance will start only after the running instance completes.
-Syntax:
-<verbatim>
-<process name="[process name]">
-...
-   <parallel>[parallel]</parallel>
-...
-</process>
-</verbatim>
-
----+++ Order
-Order defines the order in which the ready instances are picked up. The possible values are FIFO(First In First Out), LIFO(Last In First Out), and LAST_ONLY(Last Only).
-Syntax:
-<verbatim>
-<process name="[process name]">
-...
-   <order>[order]</order>
-...
-</process>
-</verbatim>
-
----+++ Timeout
-An optional timeout specifies the maximum time an instance waits for a dataset before being killed by the workflow engine; a timeout is specified like a frequency.
-If a timeout is not specified, falcon computes a default timeout for a process based on its frequency, which is six times the frequency of the process, or 30 minutes if the computed timeout is less than 30 minutes. For example, with this rule a process whose frequency is minutes(10) would get a default timeout of 60 minutes.
-<verbatim>
-<process name="[process name]">
-...
-   <timeout>[timeunit]([frequency])</timeout>
-...
-</process>
-</verbatim>
-
----+++ Frequency
-Frequency defines how frequently the workflow job should run. For example, hours(1) defines the frequency as hourly, days(7) defines weekly frequency. The values for timeunit can be minutes/hours/days/months and the frequency number should be a positive integer > 0. 
-Syntax:
-<verbatim>
-<process name="[process name]">
-...
-   <frequency>[timeunit]([frequency])</frequency>
-...
-</process>
-</verbatim>
-
----+++ SLA
-<verbatim>
-    <sla shouldStartIn="hours(2)" shouldEndIn="hours(4)"/>
-</verbatim>
-A process can have SLA which is defined by 2 optional attributes - shouldStartIn and shouldEndIn. All the attributes
-are written using expressions like frequency. shouldStartIn is the time by which the process should have started.
-shouldEndIn is the time by which the process should have finished.
-
-
----+++ Validity
-Validity defines how long the workflow should run. It has 3 components - start time, end time and timezone. Start time and end time are timestamps defined in yyyy-MM-dd'T'HH:mm'Z' format and should always be in UTC. Timezone is used to compute the next instances starting from start time. The workflow will start at start time and end before end time specified on a given cluster. So, there will not be a workflow instance at end time.
-Syntax:
-<verbatim>
-<process name="[process name]">
-...
-   <validity start=[start time] end=[end time] timezone=[timezone]/>
-...
-</process>
-</verbatim>
-
-Examples:
-<verbatim>
-<process name="sample-process">
-...
-    <frequency>days(1)</frequency>
-    <validity start="2012-01-01T00:40Z" end="2012-04-01T00:00" timezone="UTC"/>
-...
-</process>
-</verbatim>
-The daily workflow will start on Jan 1st 2012 at 00:40 UTC, it will run once a day at 00:40 UTC, and the last instance will be on March 31st 2012 at 00:40 UTC.
-                                                                                               
-<verbatim>
-<process name="sample-process">
-...
-    <frequency>hours(1)</frequency>
-    <validity start="2012-03-11T08:40Z" end="2012-03-12T08:00" timezone="PST8PDT"/>
-...
-</process>
-</verbatim>
-The hourly workflow will start on March 11th 2012 at 00:40 PST, the next instances will be at 01:40 PST, 03:40 PDT, 04:40 PDT and so on till 23:40 PDT. So, there will be just 23 instances of the workflow for March 11th 2012 because of DST switch.
-
----+++ Inputs
-Inputs define the input data for the workflow. The workflow job will start executing only after the schedule time and when all the inputs are available. There can be 0 or more inputs and each of the inputs maps to a feed. The path and frequency of input data are picked up from the feed definition. Each input should also define start and end instances in terms of [[FalconDocumentation][EL expressions]] and can optionally specify the specific partition of input that the workflow requires. The components in the partition should be a subset of the partitions defined in the feed.
-
-For each input, Falcon will create a property with the input name that contains the comma separated list of input paths. This property can be used in workflow actions like pig scripts and so on.
-
-Syntax:
-<verbatim>
-<process name="[process name]">
-...
-    <inputs>
-        <input name=[input name] feed=[feed name] start=[start el] end=[end el] partition=[partition]/>
-        ...
-    </inputs>
-...
-</process>
-</verbatim>
-
-Example:
-<verbatim>
-<feed name="feed1">
-...
-    <partition name="isFraud"/>
-    <partition name="country"/>
-    <frequency>hours(1)</frequency>
-    <locations>
-        <location type="data" path="/projects/bootcamp/feed1/${YEAR}-${MONTH}-${DAY}-${HOUR}"/>
-        ...
-    </locations>
-...
-</feed>
-<process name="sample-process">
-...
-    <inputs>
-        <input name="input1" feed="feed1" start="today(0,0)" end="today(1,0)" partition="*/US"/>
-        ...
-    </inputs>
-...
-</process>
-</verbatim>
-The input for the workflow is an hourly feed and takes the 0th and 1st hour data of today (the day when the workflow runs).
-If the workflow is running for 2012-03-01T06:40Z, the inputs are /projects/bootcamp/feed1/2012-03-01-00/*/US and
-/projects/bootcamp/feed1/2012-03-01-01/*/US. The property for this input is
-input1=/projects/bootcamp/feed1/2012-03-01-00/*/US,/projects/bootcamp/feed1/2012-03-01-01/*/US
-
-Also, feeds with Hive table storage can be used as inputs to a process. Several parameters from inputs are passed as
-params to the user workflow or pig script.
-
-<verbatim>
-    ${wf:conf('falcon_input_database')} - database name associated with the feed for a given input
-    ${wf:conf('falcon_input_table')} - table name associated with the feed for a given input
-    ${wf:conf('falcon_input_catalog_url')} - Hive metastore URI for this input feed
-    ${wf:conf('falcon_input_partition_filter_pig')} - value of ${coord:dataInPartitionFilter('$input', 'pig')}
-    ${wf:conf('falcon_input_partition_filter_hive')} - value of ${coord:dataInPartitionFilter('$input', 'hive')}
-    ${wf:conf('falcon_input_partition_filter_java')} - value of ${coord:dataInPartitionFilter('$input', 'java')}
-</verbatim>
-
-*NOTE:* input is the name of the input configured in the process, which is input.getName().
-<verbatim><input name="input" feed="clicks-raw-table" start="yesterday(0,0)" end="yesterday(20,0)"/></verbatim>
-
-Example workflow configuration:
-
-<verbatim>
-<configuration>
-  <property>
-    <name>falcon_input_database</name>
-    <value>falcon_db</value>
-  </property>
-  <property>
-    <name>falcon_input_table</name>
-    <value>input_table</value>
-  </property>
-  <property>
-    <name>falcon_input_catalog_url</name>
-    <value>thrift://localhost:29083</value>
-  </property>
-  <property>
-    <name>falcon_input_storage_type</name>
-    <value>TABLE</value>
-  </property>
-  <property>
-    <name>feedInstancePaths</name>
-    <value>hcat://localhost:29083/falcon_db/output_table/ds=2012-04-21-00</value>
-  </property>
-  <property>
-    <name>falcon_input_partition_filter_java</name>
-    <value>(ds='2012-04-21-00')</value>
-  </property>
-  <property>
-    <name>falcon_input_partition_filter_hive</name>
-    <value>(ds='2012-04-21-00')</value>
-  </property>
-  <property>
-    <name>falcon_input_partition_filter_pig</name>
-    <value>(ds=='2012-04-21-00')</value>
-  </property>
-  ...
-</configuration>
-</verbatim>
-
-
----+++ Optional Inputs
-The user can mark one or more inputs as optional. In such cases the job does not wait on those inputs which are
-marked as optional. If they are present it considers them, otherwise it continues with the compulsory ones.
-Example:
-<verbatim>
-<feed name="feed1">
-...
-    <partition name="isFraud"/>
-    <partition name="country"/>
-    <frequency>hours(1)</frequency>
-    <locations>
-        <location type="data" path="/projects/bootcamp/feed1/${YEAR}-${MONTH}-${DAY}-${HOUR}"/>
-        ...
-    </locations>
-...
-</feed>
-<process name="sample-process">
-...
-    <inputs>
-        <input name="input1" feed="feed1" start="today(0,0)" end="today(1,0)" partition="*/US"/>
-        <input name="input2" feed="feed2" start="today(0,0)" end="today(1,0)" partition="*/UK" optional="true" />
-        ...
-    </inputs>
-...
-</process>
-</verbatim>
-
-*Note:* This is only supported for !FileSystem storage but not Table storage at this point.
-
-
----+++ Outputs
-Outputs define the output data that is generated by the workflow. A process can define 0 or more outputs. Each output is mapped to a feed and the output path is picked up from feed definition. The output instance that should be generated is specified in terms of [[FalconDocumentation][EL expression]].
-
-For each output, Falcon creates a property with the output name that contains the path of the output data. This can be used in workflows to store data in that path.
-Syntax:
-<verbatim>
-<process name="[process name]">
-...
-    <outputs>
-        <output name=[output name] feed=[feed name] instance=[instance el]/>
-        ...
-    </outputs>
-...
-</process>
-</verbatim>
-
-Example:
-<verbatim>
-<feed name="feed2">
-...
-    <frequency>days(1)</frequency>
-    <locations>
-        <location type="data" path="/projects/bootcamp/feed2/${YEAR}-${MONTH}-${DAY}"/>
-        ...
-    </locations>
-...
-</feed>
-<process name="sample-process">
-...
-    <outputs>
-        <output name="output1" feed="feed2" instance="today(0,0)"/>
-        ...
-    </outputs>
-...
-</process>
-</verbatim>
-The output of the workflow is feed instance for today. If the workflow is running for 2012-03-01T06:40Z,
-the workflow generates output /projects/bootcamp/feed2/2012-03-01. The property for this output that is available
-for workflow is: output1=/projects/bootcamp/feed2/2012-03-01
-
-Also, feeds with Hive table storage can be used as outputs to a process. Several parameters from outputs are passed as
-params to the user workflow or pig script.
-<verbatim>
-    ${wf:conf('falcon_output_database')} - database name associated with the feed for a given output
-    ${wf:conf('falcon_output_table')} - table name associated with the feed for a given output
-    ${wf:conf('falcon_output_catalog_url')} - Hive metastore URI for the given output feed
-    ${wf:conf('falcon_output_dataout_partitions')} - value of ${coord:dataOutPartitions('$output')}
-</verbatim>
-
-*NOTE:* output is the name of the output configured in the process, which is output.getName().
-<verbatim><output name="output" feed="clicks-summary-table" instance="today(0,0)"/></verbatim>
-
-Example workflow configuration:
-
-<verbatim>
-<configuration>
-  <property>
-    <name>falcon_output_database</name>
-    <value>falcon_db</value>
-  </property>
-  <property>
-    <name>falcon_output_table</name>
-    <value>output_table</value>
-  </property>
-  <property>
-    <name>falcon_output_catalog_url</name>
-    <value>thrift://localhost:29083</value>
-  </property>
-  <property>
-    <name>falcon_output_storage_type</name>
-    <value>TABLE</value>
-  </property>
-  <property>
-    <name>feedInstancePaths</name>
-    <value>hcat://localhost:29083/falcon_db/output_table/ds=2012-04-21-00</value>
-  </property>
-  <property>
-    <name>falcon_output_dataout_partitions</name>
-    <value>'ds=2012-04-21-00'</value>
-  </property>
-  ....
-</configuration>
-</verbatim>
-
----+++ Custom Properties
-The properties are key value pairs that are passed to the workflow. These properties are optional and can be used
-in workflow to parameterize the workflow.
-Syntax:
-<verbatim>
-<process name="[process name]">
-...
-    <properties>
-        <property name=[key] value=[value]/>
-        ...
-    </properties>
-...
-</process>
-</verbatim>
-
-The following are some special properties which, when present, are used by Falcon's launcher job; the same properties are also available in the workflow and can be propagated to the pig or M/R job.
-<verbatim>
-        <property name="queueName" value="hadoopQueue"/>
-        <property name="jobPriority" value="VERY_HIGH"/>
-        <!-- This property is used to turn off JMS notifications for this process. JMS notifications are enabled by default. -->
-        <property name="userJMSNotificationEnabled" value="false"/>
-</verbatim>
-
----+++ Workflow
-
-The workflow defines the workflow engine that should be used and the path to the workflow on hdfs.
-Required libraries can be specified using the lib attribute in the workflow element, as comma separated HDFS paths.
-The workflow definition on hdfs contains the actual job that should run and it should conform to
-the workflow specification of the engine specified. The libraries required by the workflow should
-be in the lib folder inside the workflow path.
-
-The properties defined in the cluster, and the cluster properties (nameNode and jobTracker), will also
-be available to the workflow.
-
-There are 3 engines supported today.
-
----++++ Oozie
-
-As part of oozie workflow engine support, users can embed an oozie workflow.
-Refer to oozie [[http://oozie.apache.org/docs/4.0.1/DG_Overview.html][workflow overview]] and
-[[http://oozie.apache.org/docs/4.0.1/WorkflowFunctionalSpec.html][workflow specification]] for details.
-
-Syntax:
-<verbatim>
-<process name="[process name]">
-...
-    <workflow engine=[workflow engine] path=[workflow path] lib=[comma separated lib paths]/>
-...
-</process>
-</verbatim>
-
-Example:
-<verbatim>
-<process name="sample-process">
-...
-    <workflow engine="oozie" path="/projects/bootcamp/workflow"/>
-...
-</process>
-</verbatim>
-
-This defines the workflow engine to be oozie and the workflow xml is defined at
-/projects/bootcamp/workflow/workflow.xml. The libraries are at /projects/bootcamp/workflow/lib.
-Libraries path can be overridden using lib attribute. e.g.: lib="/projects/bootcamp/wf/libs,/projects/bootcamp/oozie/libs" in the workflow element.
-
----++++ Pig
-
-Falcon also adds the Pig engine which enables users to embed a Pig script as a process.
-
-Example:
-<verbatim>
-<process name="sample-process">
-...
-    <workflow engine="pig" path="/projects/bootcamp/pig.script" lib="/projects/bootcamp/wf/libs,/projects/bootcamp/pig/libs"/>
-...
-</process>
-</verbatim>
-
-This defines the workflow engine to be pig and the pig script is defined at
-/projects/bootcamp/pig.script.
-
-Feeds with Hive table storage will send one more parameter apart from the general ones:
-<verbatim>$input_filter</verbatim>
-
----++++ Hive
-
-Falcon also adds the Hive engine as part of Hive Integration which enables users to embed a Hive script as a process.
-This would enable users to create materialized queries in a declarative way.
-
-Example:
-<verbatim>
-<process name="sample-process">
-...
-    <workflow engine="hive" path="/projects/bootcamp/hive-script.hql"/>
-...
-</process>
-</verbatim>
-
-This defines the workflow engine to be hive and the hive script is defined at
-/projects/bootcamp/hive-script.hql.
-
-Feeds with Hive table storage will send one more parameter apart from the general ones:
-<verbatim>$input_filter</verbatim>
-
----+++ Retry
-Retry policy defines how the workflow failures should be handled. Three retry policies are defined: periodic, exp-backoff(exponential backoff) and final. Depending on the delay and number of attempts, the workflow is re-tried after specific intervals. If user sets the onTimeout attribute to "true", retries will happen for TIMED_OUT instances.
-Syntax:
-<verbatim>
-<process name="[process name]">
-...
-    <retry policy=[retry policy] delay=[retry delay] attempts=[retry attempts] onTimeout=[retry onTimeout]/>
-...
-</process>
-</verbatim>
-
-Examples:
-<verbatim>
-<process name="sample-process">
-...
-    <retry policy="periodic" delay="minutes(10)" attempts="3" onTimeout="true"/>
-...
-</process>
-</verbatim>
-The workflow is re-tried after 10 mins, 20 mins and 30 mins. With exponential backoff, the workflow will be re-tried after 10 mins, 20 mins and 40 mins.
-
-*NOTE :* If user does a manual rerun with -force option (using the instance rerun API), then the runId will get reset and user might see more Falcon system retries than configured in the process definition.
-
-To enable retries for instances of feeds, the user will have to set the following properties in runtime.properties:
-<verbatim>
-falcon.recipe.retry.policy=periodic
-falcon.recipe.retry.delay=minutes(30)
-falcon.recipe.retry.attempts=3
-falcon.recipe.retry.onTimeout=false
-</verbatim>
----+++ Late data
-Late data handling defines how the late data should be handled. Each feed is defined with a late cut-off value which specifies the time till which late data is valid. For example, a late cut-off of hours(6) means that data for the nth hour can get delayed by up to 6 hours. The late data specification in the process defines how this late data is handled.
-
-The late data policy defines how frequently a check is done to detect late data. The policies supported are: backoff, exp-backoff(exponential backoff) and final(at feed's late cut-off). The policy along with the delay defines the interval at which the late data check is done.
-
-Late input specification for each input defines the workflow that should run when late data is detected for that input. 
-
-Syntax:
-<verbatim>
-<process name="[process name]">
-...
-    <late-process policy=[late handling policy] delay=[delay]>
-        <late-input input=[input name] workflow-path=[workflow path]/>
-        ...
-    </late-process>
-...
-</process>
-</verbatim>
-
-Example:
-<verbatim>
-<feed name="feed1">
-...
-    <frequency>hours(1)</frequency>
-    <late-arrival cut-off="hours(6)"/>
-...
-</feed>
-<process name="sample-process">
-...
-    <inputs>
-        <input name="input1" feed="feed1" start="today(0,0)" end="today(1,0)"/>
-        ...
-    </inputs>
-    <late-process policy="final">
-        <late-input input="input1" workflow-path="/projects/bootcamp/workflow/lateinput1" />
-        ...
-    </late-process>
-...
-</process>
-</verbatim>
-This late handling specifies that late data detection should run at feed's late cut-off which is 6 hours in this case. If there is late data, Falcon should run the workflow specified at /projects/bootcamp/workflow/lateinput1/workflow.xml
-
-*Note:* This is only supported for !FileSystem storage but not Table storage at this point.
-
----+++ Email Notification
-
-<verbatim>
-    <notification type="email" to="bob@@xyz.com"/>
-</verbatim>
-Specifying the notification element with "type" property allows users to receive email notification when a scheduled process instance completes.
-Multiple recipients of an email can be provided as comma separated addresses with "to" property.
-To send email notification ensure that SMTP parameters are defined in Falcon startup.properties.
-Refer to [[FalconEmailNotification][Falcon Email Notification]] for more details.
-
----+++ ACL
-
-A process has an ACL (Access Control List), which is useful for implementing permission requirements
-and provides a way to set different permissions for specific users or named groups.
-<verbatim>
-    <ACL owner="test-user" group="test-group" permission="*"/>
-</verbatim>
-ACL indicates the Access control list for this process.
-owner is the owner of this entity, group is the group that has read access,
-and permission indicates the permission.
-

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/FalconDocumentation.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/FalconDocumentation.twiki b/trunk/releases/master/src/site/twiki/FalconDocumentation.twiki
deleted file mode 100644
index 122435a..0000000
--- a/trunk/releases/master/src/site/twiki/FalconDocumentation.twiki
+++ /dev/null
@@ -1,777 +0,0 @@
----++ Contents
-   * <a href="#Architecture">Architecture</a>
-   * <a href="#Control_flow">Control flow</a>
-   * <a href="#Modes_Of_Deployment">Modes Of Deployment</a>
-   * <a href="#Entity_Management_actions">Entity Management actions</a>
-   * <a href="#Instance_Management_actions">Instance Management actions</a>
-   * <a href="#Retention">Retention</a>
-   * <a href="#Replication">Replication</a>
-   * <a href="#Cross_entity_validations">Cross entity validations</a>
-   * <a href="#Updating_process_and_feed_definition">Updating process and feed definition</a>
-   * <a href="#Handling_late_input_data">Handling late input data</a>
-   * <a href="#Idempotency">Idempotency</a>
-   * <a href="#Falcon_EL_Expressions">Falcon EL Expressions</a>
-   * <a href="#Lineage">Lineage</a>
-   * <a href="#Security">Security</a>
-   * <a href="#Recipes">Recipes</a>
-   * <a href="#Monitoring">Monitoring</a>
-   * <a href="#Email_Notification">Email Notification</a>
-   * <a href="#Backwards_Compatibility">Backwards Compatibility Instructions</a>
-   * <a href="#Proxyuser_support">Proxyuser support</a>
-   * <a href="#ImportExport">Data Import and Export</a>
-
----++ Architecture
-
----+++ Introduction
-Falcon is a feed and process management platform over hadoop. Falcon essentially transforms the user's feed
-and process configurations into repeated actions through a standard workflow engine. Falcon by itself
-doesn't do any heavy lifting. All the functions and workflow state management requirements are delegated
-to the workflow scheduler. The only thing that Falcon maintains is the dependencies and relationships between
-these entities. This is adequate to provide an integrated and seamless experience to the developers using
-the falcon platform.
-
----+++ Falcon Architecture - Overview
-<img src="Architecture.png" height="400" width="600" />
-
----+++ Scheduler
-The Falcon system has picked Oozie as the default scheduler. However, the system is open for integration with
-other schedulers. A lot of the data processing in hadoop requires scheduling to be based on both data availability
-as well as time, and Oozie currently supports these capabilities off the shelf, hence the choice.
-
-While the use of Oozie works reasonably well, there are scenarios where Oozie scheduling is proving to be a limiting factor. In its current form, Falcon relies on Oozie for both scheduling and for workflow execution, due to which the scheduling is limited to time based/cron based scheduling with additional gating conditions on data availability. Also, this imposes restrictions on datasets being periodic/cyclic in nature. In order to offer better scheduling capabilities, Falcon comes with its own native scheduler. Refer to [[FalconNativeScheduler][Falcon Native Scheduler]] for details.
-
----+++ Control flow
-Though the actual responsibility of the workflow is with the scheduler (Oozie), Falcon remains in the
-execution path by subscribing to messages that each of the workflows may generate. When Falcon generates a
-workflow in Oozie, it does so after instrumenting the workflow with additional steps which include messaging
-via JMS. The Falcon system itself subscribes to these control messages and can perform actions such as retries,
-handling late input arrival etc.
-
-
----++++ Feed Schedule flow
-<img src="FeedSchedule.png" height="400" width="600" />
-
----++++ Process Schedule flow
-<img src="ProcessSchedule.png" height="400" width="600" />
-
-
-
----++ Modes Of Deployment
-There are two basic components of a Falcon set up: Falcon Prism and Falcon Server.
-As the name suggests, Falcon Prism splits the requests it gets across the Falcon Servers. More details below:
-
----+++ Stand Alone Mode
-Stand alone mode is useful when the hadoop jobs and relevant data processing involves only one hadoop cluster.
-In this mode there is a single Falcon server that contacts Oozie to schedule jobs on Hadoop.
-All the process/feed requests like submit, schedule, suspend, kill etc. are sent to this server.
-To run falcon in this mode one should use a falcon build that has been built with the standalone option.
-
----+++ Distributed Mode
-Distributed mode is for multiple (colos) instances of hadoop clusters, and multiple workflow schedulers to handle them.
-In this mode falcon has 2 components: Prism and Server(s).
-Both Prism and servers have their own setup (runtime and startup properties) and their own config locations.
-In this mode Prism acts as a contact point for Falcon servers.
-While all commands are available through Prism, only the read and instance APIs are available through the Server.
-Below are the requests that can be sent to each of these:
-
- Prism: submit, schedule, submitAndSchedule, Suspend, Resume, Kill, instance management
- Server: schedule, suspend, resume, instance management
- 
-As observed above, submit and kill are kept exclusively as Prism operations to keep all the config stores in sync and to support the feature of idempotency.
-Requests may also be sent from prism but directed to a specific server using the option "-colo" from the CLI, or by appending the same to the web request if using the API.
-
-When a cluster is submitted, it is by default sent to all the servers configured in the prism.
-When a feed or process is submitted / scheduled, the request is only sent to the servers specified in the feed / process definition. Servers are mentioned in the feed / process via CLUSTER tags in the xml definition.
-
-Communication between prism and falcon server (for submit/update entity function) is secured over https:// using a client-certificate based auth. Prism server needs to present a valid client certificate for the falcon server to accept the action.
-
-The startup property file in both the falcon and prism servers needs to be configured with the following properties
-if TLS is enabled:
-   * keystore.file
-   * keystore.password
-
----++++ Prism Setup
-<img src="PrismSetup.png" height="400" width="600" />
- 
----+++ Configuration Store
-The configuration store is a file system based store that the Falcon system maintains, where the entity definitions
-are stored. The file system used for the configuration store can either be a local file system or an HDFS file system.
-It is recommended that the store be maintained outside of the system where Falcon is deployed. This is needed
-for handling issues relating to disk failures or other permanent failures of the system where Falcon is deployed.
-Configuration store also maintains an archive location where prior versions of the configuration or deleted
-configurations are maintained. They are never accessed by the Falcon system and they merely serve to track
-historical changes to the entity definitions.
-
----+++ Atomic Actions
-Oftentimes, when Falcon performs entity management actions, it may need to do several individual actions.
-If one of the actions were to fail, then the system could be in an inconsistent state. To avoid this, all
-individual operations performed are recorded into a transaction journal. This journal is then used to undo
-the overall user action. In some cases, it is not possible to undo the action. In such cases, Falcon attempts
-to keep the system in a consistent state.
-
----+++ Storage
-Falcon introduces a new abstraction to encapsulate the storage for a given feed which can either be
-expressed as a path on the file system, File System Storage or a table in a catalog such as Hive, Catalog Storage.
-
-<verbatim>
-    <xs:choice minOccurs="1" maxOccurs="1">
-        <xs:element type="locations" name="locations"/>
-        <xs:element type="catalog-table" name="table"/>
-    </xs:choice>
-</verbatim>
-
-Feed should contain one of the two storage options. Locations on File System or Table in a Catalog.
-
----++++ File System Storage
-
-This is expressed as a location on the file system. Location specifies where the feed is available on this cluster.
-A location tag specifies the type of location like data, meta, stats and the corresponding paths for them.
-A feed should at least define the location for type data, which specifies the HDFS path pattern where the feed is
-generated periodically. ex: type="data" path="/projects/TrafficHourly/${YEAR}-${MONTH}-${DAY}/traffic"
-The granularity of date pattern in the path should be at least that of a frequency of a feed.
-
-<verbatim>
- <location type="data" path="/projects/falcon/clicks" />
- <location type="stats" path="/projects/falcon/clicksStats" />
- <location type="meta" path="/projects/falcon/clicksMetaData" />
-</verbatim>
-
----++++ Catalog Storage (Table)
-
-A table tag specifies the table URI in the catalog registry as:
-<verbatim>
-catalog:$database-name:$table-name#partition-key=partition-value;partition-key=partition-value;*
-</verbatim>
-
-This is modeled as a URI (similar to an ISBN URI). It does not have any reference to Hive or HCatalog. It's quite
-generic so it can be tied to other implementations of a catalog registry. The catalog implementation specified
-in the startup config provides implementation for the catalog URI.
-
-Top-level partition has to be a dated pattern and the granularity of date pattern should be at least that
-of a frequency of a feed.
-
-Examples:
-<verbatim>
-<table uri="catalog:default:clicks#ds=${YEAR}-${MONTH}-${DAY}-${HOUR};region=${region}" />
-<table uri="catalog:src_demo_db:customer_raw#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
-<table uri="catalog:tgt_demo_db:customer_bcp#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
-</verbatim>
-
-
----++ Entity Management actions
-All the following operations can also be done using [[restapi/ResourceList][Falcon's RESTful API]].
-
----+++ Submit
-Entity submit action allows a new cluster/feed/process to be setup within Falcon. Submitted entity is not
-scheduled, meaning it would simply be in the configuration store within Falcon. Besides validating against
-the schema for the corresponding entity being added, the Falcon system would also perform inter-field
-validations within the configuration file and validations across dependent entities.
-
----+++ List
-List all the entities within the falcon config store for the entity type being requested. This will include
-both scheduled and submitted entity configurations.
-
----+++ Dependency
-Returns the dependencies of the requested entity. The dependency list includes both forward and backward
-dependencies (depends on & is dependent on). For example, a feed would show the processes that are dependent on the
-feed and the clusters that it depends on.
-
----+++ Schedule
-Feeds or Processes that are already submitted and present in the config store can be scheduled. Upon schedule,
-Falcon system wraps the required repeatable action as a bundle of oozie coordinators and executes them on the
-Oozie scheduler. (It is possible to extend Falcon to use an alternate workflow engine other than Oozie).
-Falcon overrides the workflow instance's external id in Oozie to reflect the process/feed and the nominal
-time. This external Id can then be used for instance management functions.
-
-The schedule copies the user specified workflow and library to a staging path, and the scheduler references the workflow
-and lib from the staging path.
-
----+++ Suspend
-This action is applicable only on a scheduled entity. It triggers suspend on the oozie bundle that was
-scheduled earlier through the schedule function. No further instances are executed on a suspended process/feed.
-
----+++ Resume
-Puts a suspended process/feed back to active, which in turn resumes applicable oozie bundle.
-
----+++ Status
-Gets the current status of the entity.
-
----+++ Definition
-Gets the current entity definition as stored in the configuration store. Please note that user documentation
-in the entity will not be retained.
-
----+++ Delete
-Delete operation on the entity removes any scheduled activity on the workflow engine, besides removing the
-entity from the falcon configuration store. Delete operation on an entity would only succeed if there are
-no dependent entities on the deleted entity.
-
----+++ Update
-Update operation allows an already submitted/scheduled entity to be updated. Cluster update is currently
-not allowed. A feed update can cause a cascading update to all the processes already scheduled. A process update triggers
-an update in falcon if the entity is updated. The following set of actions is performed in the scheduler to realize an update:
-   * Update the old scheduled entity to set the end time to "now"
-   * Schedule as per the new process/feed definition with the start time as "now"
-
----++ Instance Management actions
-
-The Instance Manager gives the user the option to control individual instances of the process based on their instance start time (start time of that instance). The start time needs to be given in standard TZ format. Example: 01 Jan 2012 01:00 => 2012-01-01T01:00Z
-
-All the instance management operations (except running) allow single instance or list of instance within a Date range to be acted on. Make sure the dates are valid. i.e. are within the start and end time of process itself. 
-
-For every query in instance management the process name is a compulsory parameter. 
-
-Parameters -start and -end are used to mention the date range within which you want the instance to be operated upon. 
-
--start: using only "-start" without "-end" will conduct the desired operation only on single instance given by date along with start.
-
--end: "-end" can only be used along with "-start" . It corresponds to the end date till which instance need to operated upon. 
-
-   * 1. *status*: -status option via CLI can be used to get the status of a single or multiple instances. If the instance is not yet materialized but is within the process validity range, WAITING is returned as the state. Along with the status of the instance, the log location is also returned.
-
-
-   * 2. *running*: -running returns all the running instances of the process. It does not take any start or end dates but simply returns all the instances in state RUNNING at that given time.
-
-   * 3. *rerun*: -rerun is the option that you will use most often from instance management. As the name suggests, this option is used to rerun a particular instance or instances of the process. The rerun option reruns all parent workflows for the instance, which in turn rerun all the sub-workflows for it. This option is valid for any instance in a terminal state, i.e. KILLED, SUCCEEDED, FAILED. The user can also set properties in the request which give options on what types of actions should be rerun, like only failed, run all etc. These properties are dependent on the workflow engine being used along with falcon.
-
-   * 4. *suspend*: -suspend is used to suspend an instance or instances of the given process. This option pauses the parent workflow in the state it was in at the time of execution of this command. This command is similar to the SUSPEND process command in functionality, the only difference being that SUSPEND process suspends all the instances whereas suspend instance suspends only that instance or the instances in the range.
-
-   * 5. *resume*: -resume option is used to resume any instance that is in suspended state. (Note: due to a bug in oozie, the resume option in some cases may not actually resume the suspended instance/instances)
-   * 6. *kill*: -kill option can be used to kill an instance or multiple instances.
-
-   * 7. *summary*: -summary option via CLI can be used to get the consolidated status of the instances between the specified time period. Each status along with the corresponding instance count is listed for each of the applicable colos.
-
-
-In all the cases where your request is syntactically correct but logically not, the instance / instances are returned with the same status as earlier. Example: trying to resume a KILLED / SUCCEEDED instance will return the instance with KILLED / SUCCEEDED, without actually performing any operation. This is so because only an instance in SUSPENDED state can be resumed. The same is valid for rerunning a SUSPENDED or RUNNING instance, etc.
-
----++ Retention
-In coherence with its feed lifecycle management philosophy, Falcon allows the user to retain data in the system
-for a specific period of time for a scheduled feed. The user can specify the retention period in the respective
-feed/data xml in the following manner for each cluster the feed can belong to :
-<verbatim>
-<clusters>
-        <cluster name="corp" type="source">
-            <validity start="2012-01-30T00:00Z" end="2013-03-31T23:59Z"
-                      timezone="UTC" />
-            <retention limit="hours(10)" action="delete" /> 
-        </cluster>
- </clusters> 
-</verbatim>
-
-The 'limit' attribute can be specified in units of minutes/hours/days/months, and a corresponding numeric value can
-be attached to it. It essentially instructs the system to retain data up to the time specified
-in the attribute, spanning backwards in time from now. Any data older than that is erased from the system. By default,
-Falcon runs retention jobs up to the cluster validity end time. This causes the instances created between the endTime
-and "endTime - retentionLimit" to be retained forever. If users do not want to retain any instances of the
-feed past the cluster validity end time, they should set the property "falcon.retention.keep.instances.beyond.validity"
-to false in runtime.properties, as sketched below.
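-
-A minimal sketch of the corresponding runtime.properties entry (the property name is taken from the text above; the *. prefix is an assumption that follows the convention used for the other runtime properties shown later in this document):
-<verbatim>
-*.falcon.retention.keep.instances.beyond.validity=false
-</verbatim>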
-
-With the integration of Hive, Falcon also provides retention for tables in Hive catalog.
-
----+++ Example:
-If the retention period is 10 hours, and the policy kicks in at time 't', the data retained by the system is essentially
-the data at or after t-10h. Any data before t-10h is removed from the system.
-
-The 'action' attribute can attain values of DELETE/ARCHIVE. Based upon the tag value, the data eligible for removal is
-either deleted/archived.
-
----+++ NOTE: Falcon 0.1/0.2 releases support Delete operation only
-
----+++ When does retention policy come into play, aka when is retention really performed?
-
-Retention policy in Falcon kicks off on the basis of the time value specified by the user. Here are the basic rules:
-
-   * If the retention policy specified is less than 24 hours: In this event, the retention policy automatically kicks off every 6 hours.
-   * If the retention policy specified is more than 24 hours: In this event, the retention policy automatically kicks off every 24 hours.
-   * As soon as a feed is successfully scheduled: the retention policy is triggered immediately regardless of the current timestamp/state of the system.
-
-Relation between feed path and retention policy: Retention policy for a particular scheduled feed applies only to the eligible feed path
-specified in the feed xml. Any other paths that do not conform to the specified feed path are left unaffected by the retention policy.
-
----++ Replication
-Falcon's feed lifecycle management also supports Feed replication across different clusters out-of-the-box.
-Multiple source clusters and target clusters can be defined in feed definition. Falcon replicates the data using
-hadoop's distcp version 2 across different clusters whenever a feed is scheduled.
-
-The frequency at which the data is replicated is governed by the frequency specified in the feed definition.
-Ideally, the feed's data path should have the same granularity as the frequency of the feed, i.e. if the frequency of the feed is hours(3), then the data path should go down to the level /${YEAR}/${MONTH}/${DAY}/${HOUR}.
-<verbatim>
-    <clusters>
-        <cluster name="sourceCluster1" type="source" partition="${cluster.name}" delay="minutes(40)">
-            <validity start="2021-11-01T00:00Z" end="2021-12-31T00:00Z"/>
-        </cluster>
-        <cluster name="sourceCluster2" type="source" partition="COUNTRY/${cluster.name}">
-            <validity start="2021-11-01T00:00Z" end="2021-12-31T00:00Z"/>
-        </cluster>
-        <cluster name="backupCluster" type="target">
-            <validity start="2011-11-01T00:00Z" end="2011-12-31T00:00Z"/>
-        </cluster>
-    </clusters>
-</verbatim>
-
-If more than one source cluster is defined, then a partition expression is compulsory; a partition can also contain a constant.
-The expression is required to avoid copying data from different source locations to the same target location.
-Also, only the data in the partition is considered for replication if it is present. The number of partitions defined in the
-cluster should be less than or equal to the number of partitions declared in the feed definition.
-
-Falcon uses a pull-based replication mechanism, meaning that in every target cluster, for a given source cluster,
-a coordinator is scheduled which pulls the data from the source cluster using distcp. So in the above example,
-2 coordinators are scheduled in backupCluster, one which pulls the data from sourceCluster1 and another
-from sourceCluster2. Also, for every feed instance which is replicated, Falcon sends a JMS message on success or
-failure of the replication instance.
-
-Replication can be scheduled with a past date; the time frame considered for replication is the minimum
-overlapping window of the start and end times of the source and target clusters. E.g. if s1 and e1 are the start and end times
-of the source cluster respectively, and s2 and e2 of the target cluster, then the coordinator is scheduled in the
-target cluster with start time max(s1,s2) and end time min(e1,e2).
-
-A feed can also optionally specify a delay for the replication instance in the cluster tag; the delay governs when the
-replication instance runs. If the frequency of the feed is hours(2) and the delay is hours(1), then the replication
-instance will run every 2 hours and replicate data with an offset of 1 hour, i.e. at 09:00 UTC the feed instance
-eligible for replication is 08:00, at 11:00 UTC the feed instance of 10:00 UTC is eligible, and so on.
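-
-A hedged sketch of the relevant feed fragments for the hours(2)/hours(1) case described above (the feed and cluster names are illustrative):
-<verbatim>
-<feed name="sample-feed" ...>
-    <frequency>hours(2)</frequency>
-    <clusters>
-        <cluster name="sourceCluster1" type="source" delay="hours(1)">
-            <validity start="2021-11-01T00:00Z" end="2021-12-31T00:00Z"/>
-        </cluster>
-        <cluster name="backupCluster" type="target">
-            <validity start="2021-11-01T00:00Z" end="2021-12-31T00:00Z"/>
-        </cluster>
-    </clusters>
-    ...
-</feed>
-</verbatim>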
-
-If it is required to capture feed replication metrics like TIMETAKEN, COPY and BYTESCOPIED, set the parameter "job.counter" to "true"
-in the feed entity's properties section. Captured metrics from the instance will be populated to the GraphDB for display on the UI.
-
-*Example:*
-<verbatim>
-<properties>
-        <property name="job.counter" value="true" />
-</properties>
-</verbatim>
-
----+++ Where is the feed path defined for File System Storage?
-
-It's defined in the feed xml within the location tag.
-
-*Example:*
-<verbatim>
-<locations>
-        <location type="data" path="/retention/testFolders/${YEAR}-${MONTH}-${DAY}" />
-</locations>
-</verbatim>
-
-Now, if the above path contains folders in the following fashion:
-
-/retention/testFolders/${YEAR}-${MONTH}-${DAY}
-/retention/testFolders/${YEAR}-${MONTH}/someFolder
-
-The feed retention policy would only act on the former and not the latter.
-
-Users may choose to override the feed path specific to a cluster, so every cluster
-may have a different feed path.
-*Example:*
-<verbatim>
-<clusters>
-        <cluster name="testCluster" type="source">
-            <validity start="2011-11-01T00:00Z" end="2011-12-31T00:00Z"/>
-       		<locations>
-        		<location type="data" path="/projects/falcon/clicks/${YEAR}-${MONTH}-${DAY}" />
-        		<location type="stats" path="/projects/falcon/clicksStats/${YEAR}-${MONTH}-${DAY}" />
-        		<location type="meta" path="/projects/falcon/clicksMetaData/${YEAR}-${MONTH}-${DAY}" />
-    		</locations>
-        </cluster>
-    </clusters>
-</verbatim>
-
----+++ Hive Table Replication
-
-With the integration of Hive, Falcon adds table replication of Hive catalog tables. Replication will be triggered
-for a partition when the partition is complete at the source.
-
-   * Falcon will use HCatalog (Hive) API to export the data for a given table and the partition,
-which will result in a data collection that includes metadata on the data's storage format, the schema,
-how the data is sorted, what table the data came from, and values of any partition keys from that table.
-   * Falcon will use the distcp tool to copy the exported data collection into a staging
-directory used by Falcon on the secondary cluster.
-   * Falcon will then import the data into HCatalog (Hive) using the HCatalog (Hive) API. If the specified table does
-not yet exist, Falcon will create it, using the information in the imported metadata to set defaults for the table
-such as schema, storage format, etc.
-   * The partition is not complete, and hence not visible to users, until all the data is committed on the secondary
-cluster (no dirty reads).
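-
-As a hedged illustration, a Hive-backed feed declares table storage in its definition instead of file-system locations; a minimal sketch (the database and table names are illustrative):
-<verbatim>
-<table uri="catalog:falcon_db:click_table#ds=${YEAR}-${MONTH}-${DAY}" />
-</verbatim>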
-
-
----+++ Archival as Replication
-
-Falcon allows users to archive data from on-premise to cloud, either Azure WASB or S3.
-It uses the underlying replication mechanism for archiving data from source to target. The archival URI is
-specified as the overridden location for the target cluster.
-
-*Example:*
-<verbatim>
-    <clusters>
-        <cluster name="on-premise-cluster" type="source">
-            <validity start="2021-11-01T00:00Z" end="2021-12-31T00:00Z"/>
-        </cluster>
-        <cluster name="cloud-cluster" type="target">
-            <validity start="2011-11-01T00:00Z" end="2011-12-31T00:00Z"/>
-            <locations>
-                <location type="data"
-                          path="wasb://test@blah.blob.core.windows.net/data/${YEAR}-${MONTH}-${DAY}-${HOUR}"/>
-            </locations>
-        </cluster>
-    </clusters>
-</verbatim>
-
----+++ Relation between feed's retention limit and feed's late arrival cut off period:
-
-For obvious reasons, Falcon has a validation that ensures that the user
-always specifies the feed retention limit to be more than the feed's allowed late arrival period.
-If this rule is violated, the feed submission call itself throws back an error.
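-
-As a hedged illustration, the two settings this validation compares both live in the feed definition; a minimal sketch where the retention limit (hours(24)) exceeds the late-arrival cut-off (hours(6)):
-<verbatim>
-<late-arrival cut-off="hours(6)"/>
-...
-<clusters>
-    <cluster name="corp" type="source">
-        <validity start="2012-01-30T00:00Z" end="2013-03-31T23:59Z" timezone="UTC"/>
-        <retention limit="hours(24)" action="delete"/>
-    </cluster>
-</clusters>
-</verbatim>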
-
-
----++ Cross entity validations
-
-
----+++ Entity Dependencies in a nutshell
-<img src="EntityDependency.png" height="50" width="300" />
-
-
-The above schematic shows the dependencies between entities in Falcon. The arrow in above diagram
-points from a dependency to the dependent. 
-
-
-Let's just get one simple rule stated here, which we will keep referring to time and again while
-talking about entities: a dependency in the system cannot be removed unless all its dependents are
-removed first. This holds true for all transitive dependencies as well.
-
-Now, let's follow it up with a simple illustration of a Falcon job:
-
-Let's consider a process P that refers to feed F1 as an input feed, and generates feed F2 as an
-output feed. These feeds/processes are supposed to be associated with a cluster C1.
-
-The order of submission of this job would be in the following order:
-
-C1->F1/F2(in any order)->P
-
-The order of removal of this job from the system is in the exact opposite order, i.e.:
-
-P->F1/F2(in any order)->C1
-
-Please note that there might be multiple processes referring to a particular feed, or a single feed belonging
-to multiple clusters. In that event, none of the dependencies can be removed unless ALL of their dependents
-are removed first. Attempting to do so will result in an error message and a 400 Bad Request response.
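-
-A hedged sketch of this ordering via the CLI, assuming the cluster, feed and process definitions live in C1.xml, F1.xml, F2.xml and P.xml (the file and entity names are illustrative):
-<verbatim>
-# submit in dependency order
-falcon entity -type cluster -file C1.xml -submit
-falcon entity -type feed -file F1.xml -submit
-falcon entity -type feed -file F2.xml -submit
-falcon entity -type process -file P.xml -submit
-
-# remove in the exact opposite order
-falcon entity -type process -name P -delete
-falcon entity -type feed -name F1 -delete
-falcon entity -type feed -name F2 -delete
-falcon entity -type cluster -name C1 -delete
-</verbatim>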
-
-
----+++ Other cross validations between entities in Falcon system
-
-*Cluster-Feed Cross validations:*
-
-   * The cluster(s) referenced by a feed (inside the <clusters> tag) should be present in the system at the time
-of submission. Any exception to this results in a feed submission failure. Note that a feed might be referring
-to more than a single cluster. The identifier for the same is the 'name' attribute of the individual cluster.
-
-*Example:*
-
-*Feed XML:*
-   
-<verbatim>
-   <clusters>
-        <cluster name="corp" type="source">
-            <validity start="2009-01-01T00:00Z" end="2012-12-31T23:59Z"
-                      timezone="UTC" />
-            <retention limit="months(6)" action="delete" />
-        </cluster>
-    </clusters>
-</verbatim>
-
-*Cluster corp's XML:*
-
-<verbatim>
-<cluster colo="gs" description="" name="corp" xmlns="uri:falcon:cluster:0.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
-</verbatim>
-
-*Cluster-Process Cross validations:*
-
-
-   * In a similar relationship to that of a feed and a cluster, a process also refers to the relevant cluster by the
-'name' attribute. Any exception results in a process submission failure.
-
-
----+++ Example:
----+++ Process XML:
-<verbatim>
-<process name="agregator-coord16">
-    <cluster name="corp"/>....
-</verbatim>
----+++ Cluster corp's XML:
-<verbatim>
-<cluster colo="gs" description="" name="corp" xmlns="uri:falcon:cluster:0.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
-</verbatim>
-
-*Feed-Process Cross Validations:*
-
-
-1. The process <input> and feeds designated as input feeds for the job:
-
- For every feed referenced in the <input> tag in a process definition, the following rules are applied
-when the process is due for submission:
-
-   * The feed having a value associated with the 'feed' attribute in input tag should be present in
-the system. The corresponding attribute in the feed definition is the 'name' attribute in the <feed> tag.
-
-*Example:*
-
-*Process xml:*
-
-<verbatim>
-<input end-instance="now(0,20)" start-instance="now(0,-60)"
-feed="raaw-logs16" name="inputData"/>
-</verbatim>
-
-*Feed xml:*
-<verbatim>
-<feed description="clicks log" name="raw-logs16"....
-</verbatim>
-
-   
-   * The time interpretation of the corresponding tags indicating the start and end instances for a
-particular input feed in the process xml should lie well within the time span of the period specified in the
-<validity> tag of the particular feed.
-
-*Example:*
-
-1. In the following scenario, process submission will result in an error:
-
-*Process XML:*
-<verbatim>
-<input end-instance="now(0,20)" start-instance="now(0,-60)"
-   feed="raw-logs16" name="inputData"/>
-</verbatim>
-*Feed XML:*
-<verbatim>
-<validity start="2009-01-01T00:00Z" end="2009-12-31T23:59Z".....
-</verbatim>
-Explanation: The process timelines for the feed range over an 80 minute interval [-60m, +20m] from
-the current timestamp (which let's assume is 'today', as per the 'now' directive). However, the feed validity
-is a 1 year period in 2009, which makes it anachronistic.
-
-2. The following example would work just fine:
-
-*Process XML:*
-<verbatim>
-<input end-instance="now(0,20)" start-instance="now(0,-60)"
-   feed="raaw-logs16" name="inputData"/>
-</verbatim>
-*Feed XML:*
-<verbatim>
-<validity start="2009-01-01T00:00Z" end="2012-12-31T23:59Z" .......
-</verbatim>
-since, at the time of writing this document (03/03/2012), the feed validity is able to encapsulate the process
-input's start and end instances.
-
-
-Failure to follow any of the above rules would result in a process submission failure.
-
-*NOTE:* Even though the above check ensures that the timelines are not anachronistic, if the input data is not
-present in the system for the specified time period, the process can be submitted and scheduled, but all instances
-created would remain in a WAITING state until data is actually provided in the cluster.
-
-
-
----++ Updating process and feed definition
-Any changes in a feed/process can be done by updating its definition. After the update, any new workflows which are to be scheduled after the update call will pick up the new changes. The feed/process name and start time can't be updated. Updating a process triggers an update to the workflow that is triggered in the workflow engine. Updating a feed updates the feed's workflows like retention, replication etc. and also updates the processes that reference the feed.
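-
-A hedged sketch of an update via the CLI, assuming a process named SampleProcess whose revised definition is in updated-process.xml (both names are illustrative):
-<verbatim>
-falcon entity -type process -name SampleProcess -file updated-process.xml -update
-</verbatim>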
-
-
----++ Handling late input data
-Falcon system can handle late arrival of input data and appropriately re-trigger processing for the affected
-instance. From the perspective of late handling, there are two main configuration parameters that are central: the
-late-arrival cut-off and the late-inputs section in the feed and process entity definitions. These configurations govern
-how and when the late processing happens. In the current implementation (Oozie based) the late handling is very
-simple and basic. The Falcon system looks at all dependent input feeds for a process and computes the max late
-cut-off period. It then uses a scheduled messaging framework, like the one available in Apache ActiveMQ or Java's !DelayQueue, to schedule a message with a cut-off period. After the cut-off period the message is dequeued and Falcon checks for changes in the feed data, which is recorded in HDFS in a latedata file by Falcon's "record-size" action. If it detects any changes, the workflow is rerun with the new set of feed data.
-
-*Example:*
-For a process entity, the late rerun policy can be configured in the process definition.
-Falcon supports 3 policies: periodic, exp-backoff and final.
-Delay specifies how often the feed data should be checked for changes; one also needs to
-explicitly set, in late-input, the feed names which need to be checked for late data.
-<verbatim>
-  <late-process policy="exp-backoff" delay="hours(1)">
-        <late-input input="impression" workflow-path="hdfs://impression/late/workflow" />
-        <late-input input="clicks" workflow-path="hdfs://clicks/late/workflow" />
-   </late-process>
-</verbatim>
-
-*NOTE:* Feeds configured with table storage do not support late input data handling at this point. This will be
-made available in the near future.
-
-For a feed entity replication job, the default late data handling policy can be configured in the runtime.properties file.
-Since these are runtime properties, they will take effect for all replication jobs completed subsequent to the change.
-<verbatim>
-  # Default configs to handle replication for late arriving feeds.
-  *.feed.late.allowed=true
-  *.feed.late.frequency=hours(3)
-  *.feed.late.policy=exp-backoff
-</verbatim>
-
-
----++ Idempotency
-All the operations in Falcon are idempotent. That is, if you make the same request to the Falcon server / prism again, you will get a SUCCESSFUL return if it was SUCCESSFUL in the first attempt. For example, you submit a new process / feed and get a SUCCESSFUL message in return. Now if you run the same command / API request on the same entity, you will again get a SUCCESSFUL message. The same is true for other operations like schedule, kill, suspend and resume.
-Idempotency also takes care of the condition when a request is sent through prism and fails on one or more servers. For example, prism is configured to send requests to 3 servers. First the user sends a request to SUBMIT a process on all 3 of them, and receives a SUCCESSFUL response from all of them. Then, due to some issue, one of the servers goes down, and the user sends a request to schedule the submitted process. This time he will receive a response with PARTIAL status and a FAILURE message from the server that has gone down. If the user checks, he will find the process started and running on the 2 SUCCESSFUL servers. Once the issue with the server is resolved and it is brought back up, sending the SCHEDULE request again through prism will result in a SUCCESSFUL response from prism as well as all three servers, but this time the PROCESS will be SCHEDULED only on the server which had failed earlier; the other two will keep running as before.
- 
-
----++ Falcon EL Expressions
-
-
-Falcon expression language can be used in a process definition for giving the start and end instances for various feeds.
-
-Before going into how to use Falcon EL expressions, it is necessary to understand what instance and instance start time refer to with respect to Falcon.
-
-Let's consider a part of a process definition below:
-
-<verbatim>
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<process name="testProcess">
-    <clusters>
-        <cluster name="corp">
-            <validity start="2010-01-02T01:00Z" end="2011-01-03T03:00Z" />
-        </cluster>
-    </clusters>
-   <parallel>2</parallel>
-   <order>LIFO</order>
-   <timeout>hours(3)</timeout>
-   <frequency>minutes(30)</frequency>
-
-  <inputs>
- <input end-instance="now(0,20)" start-instance="now(0,-60)"
-			feed="input-log" name="inputData"/>
- </inputs>
-<outputs>
-	<output instance="now(0,0)" feed="output-log"
-		name="outputData" />
-</outputs>
-...
-...
-...
-...
-</process>
-</verbatim>
-
-
-The above definition says that the process will start on the 2nd of Jan 2010 at 1 am and will end on the 3rd of Jan 2011 at 3 am on cluster corp. Also, the process will start a user-defined workflow (which we will call an instance) every 30 mins.
-
-This means that starting at 2010-01-02T01:00Z, every 30 mins an instance will start and run the user-defined workflow. Now if this workflow needs some input data and produces some output, the user needs to give that in the <inputs> and <outputs> tags.
-Since the inputs that the process takes can be distributed over a wide range, we set the limits by giving a "start" and "end" instance for the input. The output is only one location, so only a single instance is given.
-The timeout specifies how long a given instance should wait for input data before being terminated by the workflow engine.
-
-Coming back to instance start time: since an instance will start every 30 mins starting at 2010-01-02T01:00Z, the time it is scheduled to start is called its instance time. For example, the first few instance times for the above example are:
-
-
-<pre>Instance Number      instance start Time</pre>
-
-<pre>1			 2010-01-02T01:00Z</pre>
-<pre>2			 2010-01-02T01:30Z</pre>
-<pre>3			 2010-01-02T02:00Z</pre>
-<pre>4			 2010-01-02T02:30Z</pre>
-<pre>.				.</pre>
-<pre>.				.</pre>
-<pre>.				.</pre>
-<pre>.				.</pre>
-
-Now let's go into how to use the expression language. The only thing to keep in mind is that all EL evaluations are done based on the start time of that instance, and every instance will have different inputs/outputs based on the feed instances given in the process definition.
-
-All the parameters in the various ELs can take positive, zero or negative values. Positive values indicate so many units in the future, zero means the base time the EL has been resolved to, and negative values indicate the corresponding units in the past.
-
-__Note: if no instance is created at the resolved time, then the instance immediately before it is considered.__
-
-Falcon currently supports the following ELs (a worked resolution example follows the list):
-
-
-   * 1. *now(hours,minutes)*: now refers to the instance start time. The hours and minutes given are in reference to the start time of the instance. For example, now(-2,40) corresponds to the feed instance at -2 hr and +40 minutes, i.e. the feed instance 80 mins before the instance start time. If the user had given now(0,-80) it would have corresponded to the same.
-   * 2. *today(hours,minutes)*: the hours and minutes given in this EL correspond to the instance from the start of the day of the instance start time. I.e. if the instance start is at 2010-01-02T01:30Z, then today(-3,-20) will mean the instance created at 2010-01-01T20:40Z and today(3,20) will correspond to 2010-01-02T03:20Z.
-
-   * 3. *yesterday(hours,minutes)*: As the name suggests, the EL yesterday picks up feed instances with respect to the start of the day yesterday. Hours and minutes are added to the 00 hours starting yesterday. Example: yesterday(24,30) will actually correspond to 00:30 am of today; for 2010-01-02T01:30Z this would mean the 2010-01-02T00:30Z feed.
-
-   * 4. *currentMonth(day,hour,minute)*: currentMonth takes the reference to the start of the month with respect to the instance start time. One thing to keep in mind is that day is added to the first day of the month, so the value of day is the number of days you want to add to the first day of the month. For example: for instance start time 2010-01-12T01:30Z, the EL currentMonth(3,2,40) will correspond to the feed created at 2010-01-04T02:40Z, and currentMonth(0,0,0) will mean 2010-01-01T00:00Z.
-
-   * 5. *lastMonth(day,hour,minute)*: The parameters for lastMonth are the same as for currentMonth, the only difference being that the reference is shifted one month back. For instance start 2010-01-12T01:30Z, lastMonth(2,3,30) will correspond to the feed instance at 2009-12-03T03:30Z.
-
-   * 6. *currentYear(month,day,hour,minute)*: The month, day, hour and minute in the parameters are added with reference to the start of the year of the instance start time. For our example start time 2010-01-02T01:30Z, the reference will go back to 2010-01-01T00:00Z. Also, similar to days, months are added to the 1st month, that is Jan. So currentYear(0,2,2,20) will mean 2010-01-03T02:20Z while currentYear(11,2,2,20) will mean 2010-12-03T02:20Z.
-
-
-   * 7. *lastYear(month,day,hour,minute)*: This is exactly similar to currentYear in usage, the only difference being that the start reference is taken as the start of the previous year. For example: lastYear(4,2,2,20) will correspond to the feed instance created at 2009-05-03T02:20Z and lastYear(12,2,2,20) will correspond to the feed at 2010-01-03T02:20Z.
-   
-   * 8. *latest(number of latest instance)*: This will simply make your input consider the latest available instance(s) of the feed given as parameter. For example: latest(0) will consider the last available instance of the feed, whereas latest(-1) will consider the second last available instance and latest(-3) will consider the 4th last available instance.
-   
-   * 9.	*currentWeek(weekDayName,hour,minute)*: This is similar to currentMonth in the sense that it returns a relative time with respect to the instance start time, considering the day name provided as input as the start of the week. The day names can be one of SUN, MON, TUE, WED, THU, FRI, SAT.
-
-   * 10. *lastWeek(weekDayName,hour,minute)*: This is typically 7 days less than what the currentWeek returns for similar parameters.
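-
-As a hedged worked example of how these ELs resolve, assume the process above (instance time 2010-01-02T01:30Z) reads a feed whose instances are materialized every 30 minutes on the hour and half-hour; then, per the note above that a resolved time with no instance falls back to the instance immediately before it:
-<verbatim>
-now(0,-60)  -> 2010-01-02T00:30Z   (60 minutes before the instance time)
-now(0,20)   -> 2010-01-02T01:50Z, which falls back to the feed instance at 2010-01-02T01:30Z
-today(0,0)  -> 2010-01-02T00:00Z   (start of the instance's day)
-</verbatim>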
-
-
----++ Lineage
-
-Falcon adds the ability to capture lineage for both entities and their associated instances. It
-also captures the metadata tags associated with each of the entities as relationships. The
-following relationships are captured:
-
-   * owner of entities - User
-   * data classification tags
-   * groups defined in feeds
-   * Relationships between entities
-      * Clusters associated with Feed and Process entity
-      * Input and Output feeds for a Process
-   * Instances refer to corresponding entities
-
-Lineage is exposed in 3 ways:
-
-   * REST API
-   * CLI
-   * Dashboard - Interactive lineage for Process instances
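-
-As a hedged illustration of the CLI option above, the metadata sub-command can be used to explore lineage; a minimal sketch (the pipeline name is illustrative, and exact flags may vary by release):
-<verbatim>
-falcon metadata -list -type process_entity
-falcon metadata -lineage -pipeline sample-pipeline
-</verbatim>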
-
-This feature is enabled by default but can be disabled by removing the following configuration:
-<verbatim>
-config name: *.application.services
-config value: org.apache.falcon.metadata.MetadataMappingService
-</verbatim>
-
-Lineage is only captured for Process executions. A future release will capture lineage for
-lifecycle policies such as replication and retention.
-
----++Security
-
-Security is detailed in [[Security][Security]].
-
----++ Recipes
-
-Recipes is detailed in [[Recipes][Recipes]].
-
----++ Monitoring
-
-Monitoring and Operationalizing Falcon is detailed in [[Operability][Operability]].
-
----++ Email Notification
-Notification for instance completion in Falcon is defined in [[FalconEmailNotification][Falcon Email Notification]].
-
----++ Backwards Compatibility
-
-Backwards compatibility instructions are [[Compatibility][detailed here.]]
-
----++ Proxyuser support
-Falcon supports impersonation or proxyuser functionality (identical to Hadoop proxyuser capabilities and conceptually
-similar to Unix 'sudo').
-
-Proxyuser enables Falcon clients to submit entities on behalf of other users. Falcon will utilize Hadoop core's hadoop-auth
-module to implement this functionality.
-
-Because proxyuser is a powerful capability, Falcon provides the following restriction capabilities (similar to Hadoop):
-
-   * Proxyuser is an explicit configuration on a per-proxyuser basis.
-   * A proxyuser can be restricted to impersonating other users only from a set of hosts.
-   * A proxyuser can be restricted to impersonating only users belonging to a set of groups.
-
-There are 2 configuration properties needed in runtime properties to set up a proxyuser:
-   * falcon.service.ProxyUserService.proxyuser.#USER#.hosts: hosts from where the user #USER# can impersonate other users.
-   * falcon.service.ProxyUserService.proxyuser.#USER#.groups: groups the users being impersonated by user #USER# must belong to.
-
-If these configurations are not present, impersonation will not be allowed and the connection will fail. If more lax security is preferred,
-the wildcard value * may be used to allow impersonation from any host or of any user, although this is recommended only for testing/development.
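-
-A hedged sketch of the corresponding runtime.properties entries, assuming a hypothetical proxy user named 'falconadmin' (the user, hosts and group are illustrative; the *. prefix is an assumption that follows the convention used for the other runtime properties in this document):
-<verbatim>
-*.falcon.service.ProxyUserService.proxyuser.falconadmin.hosts=gateway1.example.com,gateway2.example.com
-*.falcon.service.ProxyUserService.proxyuser.falconadmin.groups=etl-users
-</verbatim>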
-
--doAs option via the CLI, or the doAs query parameter when using the API, can be used to enable impersonation.
-
----++ ImportExport
-
-Data Import and Export is detailed in [[ImportExport][Data Import and Export]].
-
-
-