Posted to commits@falcon.apache.org by sr...@apache.org on 2014/09/12 11:43:51 UTC

svn commit: r1624488 [6/7] - in /incubator/falcon: site/ site/0.3-incubating/ site/0.4-incubating/ site/docs/ site/docs/restapi/ trunk/ trunk/general/src/site/twiki/docs/ trunk/general/src/site/twiki/docs/restapi/

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/FalconArchitecture.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/FalconArchitecture.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/FalconArchitecture.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/FalconArchitecture.twiki Fri Sep 12 09:43:48 2014
@@ -12,23 +12,25 @@
    * <a href="#Idempotency">Idempotency</a>
    * <a href="#Alerting_and_Monitoring">Alerting and Monitoring</a>
    * <a href="#Falcon_EL_Expressions">Falcon EL Expressions</a>
+   * <a href="#Lineage">Lineage</a>
+   * <a href="#Security">Security</a>
 
 ---++ Architecture
 ---+++ Introduction
Falcon is a feed and process management platform over hadoop. Falcon essentially transforms users' feed
-and process configurations into repeated actions through a standard workflow engine (Apache Oozie). Falcon
-by itself doesn't do any heavy lifting. All the functions and workflow state management requirements are
-delegated to the workflow scheduler. The only thing that Falcon maintains is the dependencies and relationship
-between these entities. This is adequate to provide integrated and seamless experience to the developers using
+and process configurations into repeated actions through a standard workflow engine. Falcon by itself
+doesn't do any heavy lifting. All the functions and workflow state management requirements are delegated
+to the workflow scheduler. The only thing that Falcon maintains is the dependencies and relationship between
+these entities. This is adequate to provide an integrated and seamless experience to the developers using
 the falcon platform.
 
 ---+++ Falcon Architecture - Overview
 <img src="../images/Architecture.png" height="400" width="600" />
 
 ---+++ Scheduler
-Falcon system has picked Apache Oozie as the default scheduler. However the system is open for integration with
+Falcon system has picked Oozie as the default scheduler. However the system is open for integration with
other schedulers. A lot of the data processing in hadoop requires scheduling to be based on both data availability
-as well as time. Apache Oozie currently supports these capabilities off the shelf and hence the choice.
+as well as time. Oozie currently supports these capabilities off the shelf and hence the choice.
 
 ---+++ Control flow
 Though the actual responsibility of the workflow is with the scheduler (Oozie), Falcon remains in the
@@ -54,7 +56,7 @@ As the name suggests Falcon Prism splits
Standalone mode is useful when the hadoop jobs and relevant data processing involve only one hadoop cluster. In this mode there is a single Falcon server that contacts oozie to schedule jobs on Hadoop. All the process / feed requests like submit, schedule, suspend and kill are sent to this server only. For running in this mode one should use the falcon build meant for standalone mode, or build using the standalone option if building from source code.
 
 ---+++ Distributed Mode
-Distributed mode is the mode which you might me using most of the time. This is for orgs which have multiple instances of hadoop clusters, and multiple workflow schedulers to handle them. Here we have 2 components: Prism and Server. Both Prism and server have there own setup (runtime and startup properties) and there config locations. 
+Distributed mode is the mode which you might be using most of the time. This is for organisations which have multiple instances of hadoop clusters, and multiple workflow schedulers to handle them. Here we have 2 components: Prism and Server. Both Prism and server have their own setup (runtime and startup properties) and their own config locations.
 In this mode Prism acts as a contact point for Falcon servers. Below are the requests that can be sent to prism and server in this mode:
 
  Prism: submit, schedule, submitAndSchedule, Suspend, Resume, Kill, instance management
@@ -66,6 +68,12 @@ Request may also be sent from prism but 
When a cluster is submitted, it is by default sent to all the servers configured in the prism.
When a feed is SUBMITTED / SCHEDULED, the request is only sent to the servers specified in the feed / process definitions. Servers are mentioned in the feed / process via CLUSTER tags in the xml definition.
 
+Communication between the prism and falcon server (for the submit/update entity functions) is secured over https:// using client-certificate based auth. The prism server needs to present a valid client certificate for the falcon server to accept the action.
+
+The startup property file in both the falcon & prism servers needs to be configured with the following properties if TLS is enabled:
+* keystore.file
+* keystore.password
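+
+A minimal sketch of these entries in startup.properties (the path and password are placeholders; the *. prefix
+follows the convention used for other properties in the file):
+<verbatim>
+*.keystore.file=/etc/falcon/conf/prism.keystore
+*.keystore.password=falcon-keystore-password
+</verbatim>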
+
 ---++++ Prism Setup
 <img src="../images/PrismSetup.png" height="400" width="600" />
  
@@ -85,7 +93,57 @@ individual operations performed are reco
 the overall user action. In some cases, it is not possible to undo the action. In such cases, Falcon attempts
to keep the system in a consistent state.
 
+---+++ Storage
+Falcon introduces a new abstraction to encapsulate the storage for a given feed, which can either be
+expressed as a path on the file system (File System Storage) or as a table in a catalog such as Hive (Catalog Storage).
+
+<verbatim>
+    <xs:choice minOccurs="1" maxOccurs="1">
+        <xs:element type="locations" name="locations"/>
+        <xs:element type="catalog-table" name="table"/>
+    </xs:choice>
+</verbatim>
+
+A feed should contain one of the two storage options: locations on the file system or a table in a catalog.
+
+---++++ File System Storage
+
+This is expressed as a location on the file system. The location specifies where the feed is available on this cluster.
+A location tag specifies the type of location, like data, meta or stats, and the corresponding paths for them.
+A feed should at least define the location for type data, which specifies the HDFS path pattern where the feed is
+generated periodically. ex: type="data" path="/projects/TrafficHourly/${YEAR}-${MONTH}-${DAY}/traffic"
+The granularity of the date pattern in the path should be at least that of the feed's frequency.
+
+<verbatim>
+ <location type="data" path="/projects/falcon/clicks" />
+ <location type="stats" path="/projects/falcon/clicksStats" />
+ <location type="meta" path="/projects/falcon/clicksMetaData" />
+</verbatim>
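+
+For example, an hourly feed might declare a dated data location as follows (the path is illustrative):
+<verbatim>
+ <location type="data" path="/projects/falcon/clicks/${YEAR}/${MONTH}/${DAY}/${HOUR}" />
+</verbatim>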
+
+---++++ Catalog Storage (Table)
+
+A table tag specifies the table URI in the catalog registry as:
+<verbatim>
+catalog:$database-name:$table-name#partition-key=partition-value;partition-key=partition-value;*
+</verbatim>
+
+This is modeled as a URI (similar to an ISBN URI). It does not have any reference to Hive or HCatalog. It's quite
+generic so it can be tied to other implementations of a catalog registry. The catalog implementation specified
+in the startup config provides the implementation for the catalog URI.
+
+The top-level partition has to be a dated pattern and the granularity of the date pattern should be at least that
+of the feed's frequency.
+
+Examples:
+<verbatim>
+<table uri="catalog:default:clicks#ds=${YEAR}-${MONTH}-${DAY}-${HOUR};region=${region}" />
+<table uri="catalog:src_demo_db:customer_raw#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
+<table uri="catalog:tgt_demo_db:customer_bcp#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
+</verbatim>
+
+
 ---++ Entity Management actions
+All the following operations can also be performed using [[restapi/ResourceList][Falcon's RESTful API]].
 
 ---+++ Submit
 Entity submit action allows a new cluster/feed/process to be setup within Falcon. Submitted entity is not
@@ -109,6 +167,9 @@ Oozie scheduler. (It is possible to exte
 Falcon overrides the workflow instance's external id in Oozie to reflect the process/feed and the nominal
 time. This external Id can then be used for instance management functions.
 
+The schedule operation copies the user-specified workflow and library to a staging path, and the scheduler references
+the workflow and lib from the staging path.
+
 ---+++ Suspend
 This action is applicable only on scheduled entity. This triggers suspend on the oozie bundle that was
 scheduled earlier through the schedule function. No further instances are executed on a suspended process/feed.
@@ -130,14 +191,18 @@ no dependent entities on the deleted ent
 
 ---+++ Update
 Update operation allows an already submitted/scheduled entity to be updated. Cluster update is currently
-not allowed. Feed update can cause cascading update to all the processes already scheduled. The following
-set of actions are performed in Oozie to realize an update.
-
-   * Suspend the previously scheduled Oozie coordinator. This is prevent any new action from being triggered.
+not allowed. Feed update can cause a cascading update to all the processes already scheduled. Process update triggers an
+update in falcon if the entity definition or the user-specified workflow/lib is updated. The following set of actions is
+performed in Oozie to realize an update:
+   * Suspend the previously scheduled Oozie coordinator. This is to prevent any new action from being triggered.
    * Update the coordinator to set the end time to "now"
    * Resume the suspended coordinators
    * Schedule as per the new process/feed definition with the start time as "now"
 
+Update optionally takes an effective time as a parameter, which is used as the end time of the previously scheduled
+coordinator. So the updated configuration will be effective from the given timestamp.
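+
+For example, a sketch of an update with an effective time via the CLI (the entity name, the timestamp and the
+-effective flag are assumptions for illustration):
+<verbatim>
+$FALCON_HOME/bin/falcon entity -type process -name sample-process -update -effective 2014-01-01T00:00Z
+</verbatim>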
+
+
 ---++ Instance Management actions
 
 
@@ -163,14 +228,16 @@ Parameters -start and -end are used to m
   * 4. *suspend*: -suspend is used to suspend an instance or instances for the given process. This option pauses the parent workflow at the state which it was in at the time of execution of this command. This command is similar to the SUSPEND process command in functionality, the only difference being that SUSPEND process suspends all the instances whereas suspend instance suspends only that instance or the instances in the range.
 
   * 5.	*resume*: -resume option is used to resume any instance that is in suspended state. (Note: due to a bug in oozie, the -resume option in some cases may not actually resume the suspended instance/instances)
-   * 6. *kill*: -kill option can be used to kill an instance or multiple instances 
+   * 6. *kill*: -kill option can be used to kill an instance or multiple instances
+
+   * 7. *summary*: -summary option via CLI can be used to get the consolidated status of the instances in the specified time period. Each status, along with the corresponding instance count, is listed for each of the applicable colos.
 
 
In all the cases where your request is syntactically correct but logically not, the instance / instances are returned with the same status as earlier. Example: trying to resume a KILLED / SUCCEEDED instance will return the instance with KILLED / SUCCEEDED status, without actually performing any operation. This is so because only an instance in SUSPENDED state can be resumed. The same holds for rerunning a SUSPENDED or RUNNING instance, etc.
 
 ---++ Retention
In coherence with its feed lifecycle management philosophy, Falcon allows the user to retain data in the system
-for a specific period of time for a scheduled feed. The user can specify the retention period in the respective 
+for a specific period of time for a scheduled feed. The user can specify the retention period in the respective
feed/data xml in the following manner for each cluster the feed can belong to:
 <verbatim>
 <clusters>
@@ -186,6 +253,8 @@ The 'limit' attribute can be specified i
be attached to it. It essentially instructs the system to retain data spanning backwards in time from the current
moment to the time specified in the attribute. Any data beyond the limit (past/future) is erased from the system.
 
+With the integration of Hive, Falcon also provides retention for tables in the Hive catalog.
+
 ---+++ Example:
 If retention period is 10 hours, and the policy kicks in at time 't', the data retained by system is essentially the
one falling in between [t-10h, t]. Any data in the boundaries [-∞, t-10h) and (t, ∞] is removed from the system.
@@ -228,18 +297,27 @@ Ideally, the feeds data path should have
 </verbatim>
 
If more than 1 source cluster is defined, then the partition expression is compulsory; a partition can also have a constant.
-The expression is required to avoid copying data from different source location to the same target location, also only the data in the partition is considered for replication if it is present. The partitions defined in the cluster should be less than or equal to the number of partition declared in the feed definition.
-
-Falcon uses pull based replication mechanism, meaning in every target cluster, for a given source cluster, a coordinator is scheduled which pulls the data using distcp from source cluster. So in the above example, 2 coordinators are scheduled in backupCluster, one which pulls the data from sourceCluster1 and another from sourceCluster2.
-Also, for every feed instance which is replicated Falcon sends a JMS message on success or failure of replication instance.
-
-Replication can be scheduled with the past date, the time frame considered for replication is the minimum overlapping window of start and end time of source and target cluster, ex: if s1 and e1 is the start and end time of source cluster respectively,
-and s2 and e2 of target cluster, then the coordinator is scheduled in target cluster with start time max(s1,s2) and min(e1,e2).
-
-A feed can also optionally specify the delay for replication instance in the cluster tag, the delay governs the replication instance delays. If the frequency of the feed is hours(2) and delay is hours(1), then the replication instance will run every 2 hours and replicates data with an offset of 1 hour, i.e. at
-09:00 UTC, feed instance which is eligible for replication is 08:00; and 11:00 UTC, feed instance of 10:00 UTC is eligible and so on.
+The expression is required to avoid copying data from different source locations to the same target location. Also,
+only the data in the partition is considered for replication if it is present. The partitions defined in the
+cluster should be fewer than or equal to the number of partitions declared in the feed definition.
+
+Falcon uses a pull-based replication mechanism, meaning that in every target cluster, for a given source cluster,
+a coordinator is scheduled which pulls the data from the source cluster using distcp. So in the above example,
+2 coordinators are scheduled in backupCluster, one which pulls the data from sourceCluster1 and another
+from sourceCluster2. Also, for every feed instance which is replicated, Falcon sends a JMS message on success or
+failure of the replication instance.
+
+Replication can be scheduled with a past date; the time frame considered for replication is the minimum
+overlapping window of the start and end times of the source and target clusters. For example, if s1 and e1 are the
+start and end times of the source cluster, and s2 and e2 those of the target cluster, then the coordinator is
+scheduled in the target cluster with start time max(s1,s2) and end time min(e1,e2).
+
+A feed can also optionally specify a delay for replication instances in the cluster tag; the delay governs when each
+replication instance runs. If the frequency of the feed is hours(2) and the delay is hours(1), then the replication
+instance will run every 2 hours and replicate data with an offset of 1 hour, i.e. at 09:00 UTC the feed instance
+eligible for replication is the 08:00 one; at 11:00 UTC the 10:00 UTC feed instance is eligible, and so on.
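+
+A sketch of how the delay might be declared in the feed's cluster tag (the cluster name and validity are illustrative):
+<verbatim>
+<cluster name="backupCluster" type="target" delay="hours(1)">
+    <validity start="2010-01-01T00:00Z" end="2020-01-01T00:00Z"/>
+</cluster>
+</verbatim>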
 
----+++ Where is the feed path defined?
+---+++ Where is the feed path defined for File System Storage?
 
 It's defined in the feed xml within the location tag.
 
@@ -273,6 +351,22 @@ may have a different feed path.
     </clusters>
 </verbatim>
 
+---+++ Hive Table Replication
+
+With the integration of Hive, Falcon adds table replication of Hive catalog tables. Replication will be triggered
+for a partition when the partition is complete at the source.
+
+   * Falcon will use HCatalog (Hive) API to export the data for a given table and the partition,
+which will result in a data collection that includes metadata on the data's storage format, the schema,
+how the data is sorted, what table the data came from, and values of any partition keys from that table.
+   * Falcon will use the DistCp tool to copy the exported data collection to a staging
+directory on the secondary cluster that is used by Falcon.
+   * Falcon will then import the data into HCatalog (Hive) using the HCatalog (Hive) API. If the specified table does
+not yet exist, Falcon will create it, using the information in the imported metadata to set defaults for the table
+such as schema, storage format, etc.
+   * The partition is not complete and hence not visible to users until all the data is committed on the secondary
+cluster (no dirty reads).
+
 
 ---+++ Relation between feed's retention limit and feed's late arrival cut off period:
 
@@ -382,12 +476,12 @@ feed="raaw-logs16" name="inputData"/>
 
 *Feed xml:*
 <verbatim>
-<feed description="clicks log" name="raaw-logs16"....
+<feed description="clicks log" name="raw-logs16"....
 </verbatim>
 
    
     * The time interpretation for corresponding tags indicating the start and end instances for a
-particular input feed in the process xml should lie well within the timespan of the period specified in
+particular input feed in the process xml should lie well within the time span of the period specified in
 <validity> tag of the particular feed.
 
 *Example:*
@@ -397,7 +491,7 @@ particular input feed in the process xml
 *Process XML:*
 <verbatim>
 <input end-instance="now(0,20)" start-instance="now(0,-60)"
-   feed="raaw-logs16" name="inputData"/>
+   feed="raw-logs16" name="inputData"/>
 </verbatim>
 *Feed XML:*
 <verbatim>
@@ -440,7 +534,7 @@ instance. From the perspective of late h
 and late-inputs section in feed and process entity definition that are central. These configurations govern
 how and when the late processing happens. In the current implementation (oozie based) the late handling is very
 simple and basic. The falcon system looks at all dependent input feeds for a process and computes the max late
-cut-off period. Then it uses a scheduled messaging framework, like the one available in Apache ActiveMQ or Java's DelayQueue to schedule a message with a cut-off period, then after a cut-off period the message is dequeued and Falcon checks for changes in the feed data which is recorded in HDFS in latedata file by falcons "record-size" action, if it detects any changes then the workflow will be rerun with the new set of feed data.
+cut-off period. Then it uses a scheduled messaging framework, like the one available in Apache ActiveMQ or Java's !DelayQueue, to schedule a message with a cut-off period. After the cut-off period the message is dequeued and Falcon checks for changes in the feed data, which is recorded in HDFS in a latedata file by falcon's "record-size" action. If it detects any changes, the workflow is rerun with the new set of feed data.
 
 *Example:*
 The late rerun policy can be configured in the process definition.
@@ -454,6 +548,9 @@ explicitly set the feed names in late-in
    </late-process>
 </verbatim>
 
+*NOTE:* Feeds configured with table storage do not support late input data handling at this point. This will be
+made available in the near future.
+
 ---++ Idempotency
All the operations in Falcon are idempotent. That is, if you make the same request to the falcon server / prism again, you will get a SUCCESSFUL return if it was SUCCESSFUL in the first attempt. For example, you submit a new process / feed and get a SUCCESSFUL message in return. Now if you run the same command / api request on the same entity you will again get a SUCCESSFUL message. The same is true for other operations like schedule, kill, suspend and resume.
Idempotency also takes care of the condition when a request is sent through prism and fails on one or more servers. For example, prism is configured to send requests to 3 servers. First the user sends a request to SUBMIT a process on all 3 of them, and receives a SUCCESSFUL response from all of them. Then due to some issue one of the servers goes down, and the user sends a request to schedule the submitted process. This time he will receive a response with PARTIAL status and a FAILURE message from the server that has gone down. If the user checks, he will find the process started and running on the 2 SUCCESSFUL servers. Now the issue with the server is figured out and it is brought up. Sending the SCHEDULE request again through prism will result in a SUCCESSFUL response from prism as well as the three servers, but this time the PROCESS will be SCHEDULED only on the server which had failed earlier and the other two will keep running as before.
@@ -502,7 +599,7 @@ The metric logged for an event has the f
    1. Action - Name of the event.
    2. Dimensions - A list of name/value pairs of various attributes for a given action.
    3. Status- Status of an action FAILED/SUCCEEDED.
-   4. Time-taken - Time taken in nano seconds for a given action.
+   4. Time-taken - Time taken in nanoseconds for a given action.
 
An example of an event logged for the submit of a new process definition:
 
@@ -518,12 +615,12 @@ Users may register consumers on the requ
  
For a given process that is scheduled, the name of the topic is the same as the process name.
 Falcon sends a Map message for every feed produced by the instance of a process to the JMS topic.
-The JMS MapMessage sent to a topic has the following properties:
+The JMS !MapMessage sent to a topic has the following properties:
 entityName, feedNames, feedInstancePath, workflowId, runId, nominalTime, timeStamp, brokerUrl, brokerImplClass, entityType, operation, logFile, topicName, status, brokerTTL;
 
For a given feed that is scheduled, the name of the topic is the same as the feed name.
 Falcon sends a map message for every feed instance that is deleted/archived/replicated depending upon the retention policy set in the feed definition.
-The JMS MapMessage sent to a topic has the following properties:
+The JMS !MapMessage sent to a topic has the following properties:
 entityName, feedNames, feedInstancePath, workflowId, runId, nominalTime, timeStamp, brokerUrl, brokerImplClass, entityType, operation, logFile, topicName, status, brokerTTL;
 
The JMS messages are automatically purged after a certain period (default 3 days) by the Falcon JMS house-keeping service. TTL (Time-to-live) for JMS message
@@ -601,15 +698,53 @@ Falcon currently support following ELs:
 
   * 3.	*yesterday(hours,minutes)*: As the name suggests, EL yesterday picks up feed instances with respect to the start of day yesterday. Hours and minutes are added to the 00 hours starting yesterday. Example: yesterday(24,30) will actually correspond to 00:30 am of today; for 2010-01-02T01:30Z this would mean the 2010-01-02T00:30Z feed.
 
   * 4.	*currentMonth(day,hour,minute)*: Current month takes the reference to the start of the month with respect to the instance start time. One thing to keep in mind is that day is added to the first day of the month. So the value of day is the number of days you want to add to the first day of the month. For example: for instance start time 2010-01-12T01:30Z, EL currentMonth(3,2,40) will correspond to the feed created at 2010-01-04T02:40Z and currentMonth(0,0,0) will mean 2010-01-01T00:00Z.
 
   * 5.	*lastMonth(day,hour,minute)*: Parameters for lastMonth are the same as for currentMonth, the only difference being that the reference is shifted one month back. For instance start 2010-01-12T01:30Z, lastMonth(2,3,30) will correspond to the feed instance at 2009-12-03T03:30Z
 
-   * 6.	*currentYear(month,day,hour,minute)*: The month,day,hour, minutes in the pareamter are added with reference to the start of year of instance start time. For our exmple start time 2010-01-02:00:30 reference will go back to 2010-01-01:T00:00Z. Also similar to days, months are added to the 1st month that Jan. So currentYear(0,2,2,20) will mean 2010-01-03T02:20Z while currentYear(11,2,2,20) will mean 2010-12-03T02:20Z
+   * 6.	*currentYear(month,day,hour,minute)*: The month, day, hour and minutes in the parameter are added with reference to the start of the year of the instance start time. For our example start time 2010-01-02T00:30Z the reference will go back to 2010-01-01T00:00Z. Also, similar to days, months are added to the 1st month, that is Jan. So currentYear(0,2,2,20) will mean 2010-01-03T02:20Z while currentYear(11,2,2,20) will mean 2010-12-03T02:20Z
 
 
-   * 7.	*lastYear(month,day,hour,minute)*: This is exactly similary to currentYear in usage> only difference being start reference is taken to start of previous year. For example: lastYear(4,2,2,20) will corrospond to feed insatnce created at 2009-05-03T02:20Z and lastYear(12,2,2,20) will corrospond to feed at 2010-01-03T02:20Z.
+   * 7.	*lastYear(month,day,hour,minute)*: This is exactly similar to currentYear in usage; the only difference being that the start reference is taken to the start of the previous year. For example: lastYear(4,2,2,20) will correspond to the feed instance created at 2009-05-03T02:20Z and lastYear(12,2,2,20) will correspond to the feed at 2010-01-03T02:20Z.
    
   * 8. *latest(number of latest instance)*: This will simply make your input consider the latest available instance of the feed given by the parameter. For example: latest(0) will consider the last available instance of the feed, whereas latest(-1) will consider the second last available instance and latest(-3) will consider the 4th last available instance.
    
+   * 9.	*currentWeek(weekDayName,hour,minute)*: This is similar to currentMonth in the sense that it returns a relative time with respect to the instance start time, considering the day name provided as input as the start of the week. The day names can be one of SUN, MON, TUE, WED, THU, FRI, SAT.
+
+   * 10. *lastWeek(weekDayName,hour,minute)*: This is typically 7 days less than what the currentWeek returns for similar parameters.
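+
+A worked sketch of the week ELs (the instance time is illustrative): for instance start time 2010-01-12T01:30Z,
+which falls on a Tuesday, currentWeek(SUN,0,0) would evaluate to 2010-01-10T00:00Z (the most recent Sunday), and
+lastWeek(SUN,0,0) to 2010-01-03T00:00Z, exactly 7 days earlier.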
+
+
+---++ Lineage
+
+Falcon adds the ability to capture lineage for both entities and their associated instances. It
+also captures the metadata tags associated with each of the entities as relationships. The
+following relationships are captured:
+
+   * owner of entities - User
+   * data classification tags
+   * groups defined in feeds
+   * Relationships between entities
+      * Clusters associated with Feed and Process entity
+      * Input and Output feeds for a Process
+   * Instances refer to corresponding entities
+
+Lineage is exposed in 3 ways:
+
+   * REST API
+   * CLI
+   * Dashboard - Interactive lineage for Process instances
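+
+As a sketch, the lineage REST API can be queried as follows (the host, port, vertex id and the user.name parameter
+for simple auth are illustrative):
+<verbatim>
+curl "https://falcon-server:15443/api/graphs/lineage/vertices/4/out?user.name=falcon"
+</verbatim>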
+
+This feature is enabled by default but can be disabled by removing the following from the startup configuration:
+<verbatim>
+config name: *.application.services
+config value: org.apache.falcon.metadata.MetadataMappingService
+</verbatim>
+
+Lineage is only captured for Process executions. A future release will capture lineage for
+lifecycle policies such as replication and retention.
+
+---++ Security
 
+Security is detailed in [[Security][Security]].

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/FalconCLI.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/FalconCLI.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/FalconCLI.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/FalconCLI.twiki Fri Sep 12 09:43:48 2014
@@ -25,7 +25,7 @@ $FALCON_HOME/bin/falcon entity  -type pr
 
 ---+++Suspend
 
-Suspend on an entity results in suspension of the oozie bundle that was scheduled earlier through the schedule function. No further instances are executed on a suspended entity. Only schedulable entities(process/feed) can be suspended.
+Suspend on an entity results in suspension of the oozie bundle that was scheduled earlier through the schedule function. No further instances are executed on a suspended entity. Only schedulable entities (process/feed) can be suspended.
 
 Usage:
 $FALCON_HOME/bin/falcon entity  -type [feed|process] -name <<name>> -suspend
@@ -51,6 +51,24 @@ Entities of a particular type can be lis
 Usage:
 $FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -list
 
+Optional Args : -fields <<field1,field2>> -filterBy <<field1:value1,field2:value2>> -tags <<tagkey=tagvalue,tagkey=tagvalue>>
+-orderBy <<field>> -sortOrder <<sortOrder>> -offset 0 -numResults 10
+
+<a href="./Restapi/EntityList.html">Optional params described here.</a>
+
+---+++Summary
+
+Summary of entities of a particular type and cluster will be listed. The entity summary includes the N most recent instances of each entity.
+
+Usage:
+$FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -summary
+
+Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" -fields <<field1,field2>>
+-filterBy <<field1:value1,field2:value2>> -tags <<tagkey=tagvalue,tagkey=tagvalue>>
+-orderBy <<field>> -sortOrder <<sortOrder>> -offset 0 -numResults 10 -numInstances 7
+
+<a href="./Restapi/EntitySummary.html">Optional params described here.</a>
+
 ---+++Update
 
 Update operation allows an already submitted/scheduled entity to be updated. Cluster update is currently
@@ -135,7 +153,31 @@ Example : Suppose a process has 3 instan
 {"status":"SUCCEEDED","message":"getStatus is successful","instances":[{"instance":"2012-05-07T05:02Z","status":"SUCCEEDED","logFile":"http://oozie-dashboard-url"},{"instance":"2012-05-07T05:07Z","status":"RUNNING","logFile":"http://oozie-dashboard-url"}, {"instance":"2010-01-02T11:05Z","status":"WAITING"}] 
 
 Usage:
-$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -status -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -status
+
+Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" -colo <<colo>>
+-filterBy <<field1:value1,field2:value2>> -lifecycle <<lifecycles>>
+-orderBy field -sortOrder <<sortOrder>> -offset 0 -numResults 10
+
+<a href="./Restapi/InstanceStatus.html"> Optional params described here.</a>
+
+---+++List
+
+List option via CLI can be used to get single or multiple instances. If the instance is not yet materialized but is within the process validity range, WAITING is returned as the state. Instance time is also returned. Log location gives the oozie workflow url.
+If the instance is in WAITING state, missing dependencies are listed.
+
+Example : Suppose a process has 3 instances: one has succeeded, one is in running state and the other one is waiting; the expected output is:
+
+{"status":"SUCCEEDED","message":"getStatus is successful","instances":[{"instance":"2012-05-07T05:02Z","status":"SUCCEEDED","logFile":"http://oozie-dashboard-url"},{"instance":"2012-05-07T05:07Z","status":"RUNNING","logFile":"http://oozie-dashboard-url"}, {"instance":"2010-01-02T11:05Z","status":"WAITING"}]
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -list
+
+Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
+-colo <<colo>> -lifecycle <<lifecycles>>
+-filterBy <<field1:value1,field2:value2>> -orderBy field -sortOrder <<sortOrder>> -offset 0 -numResults 10
+
+<a href="./Restapi/InstanceList.html">Optional params described here.</a>
 
 ---+++Summary
 
@@ -148,7 +190,12 @@ Example : Suppose a process has 3 instan
 {"status":"SUCCEEDED","message":"getSummary is successful", "cluster": <<name>> [{"SUCCEEDED":"1"}, {"WAITING":"1"}, {"RUNNING":"1"}]}
 
 Usage:
-$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -summary -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -summary
+
+Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
+-colo <<colo>> -lifecycle <<lifecycles>>
+
+<a href="./Restapi/InstanceSummary.html">Optional params described here.</a>
 
 ---+++Running
 
@@ -157,12 +204,82 @@ Running option provides all the running 
 Usage:
 $FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -running
 
+Optional Args : -colo <<colo>> -lifecycle <<lifecycles>>
+-filterBy <<field1:value1,field2:value2>> -orderBy <<field>> -sortOrder <<sortOrder>> -offset 0 -numResults 10
+
+<a href="./Restapi/InstanceRunning.html">Optional params described here.</a>
+
 ---+++Logs
 
 Get logs for instance actions
 
 Usage:
-$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -logs -start "yyyy-MM-dd'T'HH:mm'Z'" [-end "yyyy-MM-dd'T'HH:mm'Z'"] [-runid <<runid>>]
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -logs
+
+Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" -runid <<runid>>
+-colo <<colo>> -lifecycle <<lifecycles>>
+-filterBy <<field1:value1,field2:value2>> -orderBy field -sortOrder <<sortOrder>> -offset 0 -numResults 10
+
+<a href="./Restapi/InstanceLogs.html">Optional params described here.</a>
+
+---+++LifeCycle
+
+Describes the list of life cycles of an entity. For a feed it can be replication/retention and for a process it can be execution.
+This can be used with instance management options. Default values are replication for feed and execution for process.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -status -lifecycle <<lifecycletype>> -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
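+
+Example (a sketch that checks the replication lifecycle of a feed; the feed name and time window are illustrative):
+$FALCON_HOME/bin/falcon instance -type feed -name sample-feed -status -lifecycle replication -start "2014-01-01T00:00Z" -end "2014-01-02T00:00Z"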
+
+---+++Params
+
+Displays the workflow params of a given instance. The start time is considered the nominal time of the instance.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -params -start "yyyy-MM-dd'T'HH:mm'Z'"
+
+
+---++ Graphs Options
+
+---+++ Vertex
+
+Get the vertex with the specified id.
+
+Usage:
+$FALCON_HOME/bin/falcon graph -vertex -id <<id>>
+
+Example:
+$FALCON_HOME/bin/falcon graph -vertex -id 4
+
+---+++ Vertices
+
+Get all vertices for a key index given the specified value.
+
+Usage:
+$FALCON_HOME/bin/falcon graph -vertices -key <<key>> -value <<value>>
+
+Example:
+$FALCON_HOME/bin/falcon graph -vertices -key type -value feed-instance
+
+---+++ Vertex Edges
+
+Get the adjacent vertices or edges of the vertex with the specified direction.
+
+Usage:
+$FALCON_HOME/bin/falcon graph -edges -id <<vertex-id>> -direction <<direction>>
+
+Example:
+$FALCON_HOME/bin/falcon graph -edges -id 4 -direction both
+$FALCON_HOME/bin/falcon graph -edges -id 4 -direction inE
+
+---+++ Edge
+
+Get the edge with the specified id.
+
+Usage:
+$FALCON_HOME/bin/falcon graph -edge -id <<id>>
+
+Example:
+$FALCON_HOME/bin/falcon graph -edge -id Q9n-Q-5g
 
 
 ---++Admin Options
@@ -170,10 +287,16 @@ $FALCON_HOME/bin/falcon instance -type <
 ---+++Help
 
 Usage:
-$FALCON_HOME/bin/falcon admin -version
+$FALCON_HOME/bin/falcon admin -help
 
 ---+++Version
 
-Version returns the current verion of Falcon installed.
+Version returns the current version of Falcon installed.
 Usage:
-$FALCON_HOME/bin/falcon admin -help
+$FALCON_HOME/bin/falcon admin -version
+
+---+++Status
+
+Status returns the current state of Falcon (running or stopped).
+Usage:
+$FALCON_HOME/bin/falcon admin -status

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/HiveIntegration.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/HiveIntegration.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/HiveIntegration.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/HiveIntegration.twiki Fri Sep 12 09:43:48 2014
@@ -72,7 +72,7 @@ HCatalog server. If this is absent, no H
    * Falcon will use HCatalog (Hive) API to export the data for a given table and the partition,
 which will result in a data collection that includes metadata on the data's storage format, the schema,
 how the data is sorted, what table the data came from, and values of any partition keys from that table.
-   * Falcon will use DistCp tool to copy the exported data collection into the secondary cluster into a staging
+   * Falcon will use the DistCp tool to copy the exported data collection to the secondary cluster, into a staging
 directory used by Falcon.
    * Falcon will then import the data into HCatalog (Hive) using the HCatalog (Hive) API. If the specified table does
 not yet exist, Falcon will create it, using the information in the imported metadata to set defaults for the
@@ -113,9 +113,39 @@ accessing the data in tables
 <verbatim>
 bin/hadoop dfs -copyFromLocal $LFS/share/lib/hcatalog/hcatalog-pig-adapter-0.5.0-incubating.jar share/lib/hcatalog
 </verbatim>
+   * Oozie 4.x with Hadoop-2.x
+Replication jobs are submitted to oozie on the destination cluster. Oozie runs a table export job
+on the RM of the source cluster. The oozie server on the target cluster must be configured with the source hadoop
+configs, else jobs fail on both secure and non-secure clusters with errors like the one below:
+<verbatim>
+org.apache.hadoop.security.token.SecretManager$InvalidToken: Password not found for ApplicationAttempt appattempt_1395965672651_0010_000002
+</verbatim>
+
+Make sure all oozie servers that falcon talks to have the hadoop configs configured in oozie-site.xml:
+<verbatim>
+<property>
+      <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
+      <value>*=/etc/hadoop/conf,arpit-new-falcon-1.cs1cloud.internal:8020=/etc/hadoop-1,arpit-new-falcon-1.cs1cloud.internal:8032=/etc/hadoop-1,arpit-new-falcon-2.cs1cloud.internal:8020=/etc/hadoop-2,arpit-new-falcon-2.cs1cloud.internal:8032=/etc/hadoop-2,arpit-new-falcon-5.cs1cloud.internal:8020=/etc/hadoop-3,arpit-new-falcon-5.cs1cloud.internal:8032=/etc/hadoop-3</value>
+      <description>
+          Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the HOST:PORT of
+          the Hadoop service (JobTracker, HDFS). The wildcard '*' configuration is
+          used when there is no exact match for an authority. The HADOOP_CONF_DIR contains
+          the relevant Hadoop *-site.xml files. If the path is relative, it is looked up within
+          the Oozie configuration directory; though the path can be absolute (i.e. to point
+          to Hadoop client conf/ directories in the local filesystem).
+      </description>
+    </property>
+</verbatim>
 
 ---+++ Hive
 
+   * Dated Partitions
+Falcon does not work well when a table partition contains multiple dated columns. Falcon only works
+with a single dated partition. This is being tracked in FALCON-357, which is a limitation in Oozie.
+<verbatim>
+catalog:default:table4#year=${YEAR};month=${MONTH};day=${DAY};hour=${HOUR};minute=${MINUTE}
+</verbatim>
+
    * [[https://issues.apache.org/jira/browse/HIVE-5550][Hive table import fails for tables created with default text and sequence file formats using HCatalog API]]
 For some arcane reason, hive substitutes the output format for text and sequence to be prefixed with Hive.
 Hive table import fails since it compares against the input and output formats of the source table and they are
@@ -140,7 +170,7 @@ org.apache.hadoop.hive.ql.parse.ImportSe
                 .getMsg(" Table inputformat/outputformats do not match"));
       }
 </verbatim>
-
+The above is not an issue with Hive 0.13.
 
 ---++ Hive Examples
 Following is an example entity configuration for lifecycle management functions for tables in Hive.

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/InstallationSteps.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/InstallationSteps.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/InstallationSteps.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/InstallationSteps.twiki Fri Sep 12 09:43:48 2014
@@ -166,10 +166,11 @@ bin/falcon-start [-port <port>]
 </verbatim>
 
 By default, 
-* falcon server starts at port 15000. To change the port, use -port option
+* falcon server starts at port 15443 (https). To change the port, use the -port option
+   * falcon.enableTLS can be set to true or false explicitly to enable or disable SSL; if it is not set, a port that ends with 443 will automatically put falcon on https://
 * falcon server starts embedded active mq. To control this behaviour, set the following system properties using -D option in environment variable FALCON_OPTS:
    * falcon.embeddedmq=<true/false> - Should server start embedded active mq, default true
-   * falcon.emeddedmq.port=<port> - Port for embedded active mq, default 61616
+   * falcon.embeddedmq.port=<port> - Port for embedded active mq, default 61616
    * falcon.embeddedmq.data=<path> - Data path for embedded active mq, default {package dir}/logs/data
 * falcon server starts with conf from {package dir}/conf. To override this (to use the same conf with multiple falcon upgrades), set environment variable FALCON_CONF to the path of conf dir
 
@@ -190,7 +191,8 @@ bin/prism-start [-port <port>]
 </verbatim>
 
 By default, 
-* falcon server starts at port 16000. To change the port, use -port option
+* prism server starts at port 16443. To change the port, use the -port option
+   * falcon.enableTLS can be set to true or false explicitly to enable or disable SSL; if it is not set, a port that ends with 443 will automatically put prism on https://
 * prism starts with conf from {package dir}/conf. To override this (to use the same conf with multiple prism upgrades), set environment variable FALCON_CONF to the path of conf dir
 
 *Using Falcon*
@@ -232,14 +234,14 @@ src/bin/package.sh <<hadoop-version>> <<
 <verbatim>
 bin/falcon-start
 </verbatim>
-Make sure the hadoop and oozie endpoints are according to your setup in examples/entity/standalone-cluster.xml
+Make sure the hadoop and oozie endpoints are set according to your setup in examples/entity/filesystem/standalone-cluster.xml
 <verbatim>
-bin/falcon entity -submit -type cluster -file examples/entity/standalone-cluster.xml
+bin/falcon entity -submit -type cluster -file examples/entity/filesystem/standalone-cluster.xml
 </verbatim>
 Submit input and output feeds:
 <verbatim>
-bin/falcon entity -submit -type feed -file examples/entity/in-feed.xml
-bin/falcon entity -submit -type feed -file examples/entity/out-feed.xml
+bin/falcon entity -submit -type feed -file examples/entity/filesystem/in-feed.xml
+bin/falcon entity -submit -type feed -file examples/entity/filesystem/out-feed.xml
 </verbatim>
 Set-up workflow for the process:
 <verbatim>
@@ -247,8 +249,8 @@ hadoop fs -put examples/app /
 </verbatim>
 Submit and schedule the process:
 <verbatim>
-bin/falcon entity -submitAndSchedule -type process -file examples/entity/oozie-mr-process.xml
-bin/falcon entity -submitAndSchedule -type process -file examples/entity/pig-process.xml
+bin/falcon entity -submitAndSchedule -type process -file examples/entity/filesystem/oozie-mr-process.xml
+bin/falcon entity -submitAndSchedule -type process -file examples/entity/filesystem/pig-process.xml
 </verbatim>
 Generate input data:
 <verbatim>
@@ -259,5 +261,6 @@ Get status of instances:
 bin/falcon instance -status -type process -name oozie-mr-process -start 2013-11-15T00:05Z -end 2013-11-15T01:00Z
 </verbatim>
 
+HCat based example entities are in examples/entity/hcat.
 
 

Added: incubator/falcon/trunk/general/src/site/twiki/docs/LICENSE.txt
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/LICENSE.txt?rev=1624488&view=auto
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/LICENSE.txt (added)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/LICENSE.txt Fri Sep 12 09:43:48 2014
@@ -0,0 +1,3 @@
+All files in this directory and subdirectories are under Apache License Version 2.0.
+The reason is that the Maven Doxia plugin that converts twiki to html does not have
+a commenting-out feature.

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/OnBoarding.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/OnBoarding.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/OnBoarding.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/OnBoarding.twiki Fri Sep 12 09:43:48 2014
@@ -7,7 +7,7 @@
    * Create cluster definition for the cluster, specifying name node, job tracker, workflow engine endpoint, messaging endpoint. Refer to [[EntitySpecification][cluster definition]] for details.
    * Create Feed definitions for each of the input and output specifying frequency, data path, ownership. Refer to [[EntitySpecification][feed definition]] for details.
    * Create Process definition for your job. Process defines configuration for the workflow job. Important attributes are frequency, inputs/outputs and workflow path. Refer to [[EntitySpecification][process definition]] for process details.
-   * Define workflow for your job using the workflow engine(only oozie is supported as of now). Refer [[http://incubator.apache.org/oozie/docs/3.1.3/docs/WorkflowFunctionalSpec.html][Oozie Workflow Specification]]. The libraries required for the workflow should be available in lib folder in workflow path.
+   * Define the workflow for your job using the workflow engine (only oozie is supported as of now). Refer to [[http://oozie.apache.org/docs/3.1.3-incubating/WorkflowFunctionalSpec.html][Oozie Workflow Specification]]. The libraries required for the workflow should be available in the lib folder in the workflow path.
    * Set-up workflow definition, libraries and referenced scripts on hadoop. 
    * Submit cluster definition
    * Submit and schedule feed and process definitions
@@ -114,7 +114,7 @@ xmlns:xsi="http://www.w3.org/2001/XMLSch
 </verbatim>
 
 ---++++ Process
-Sample process which runs daily at 6th hour on corp cluster. It takes one input - SampleInput for the previous day(24 instances). It generates one output - SampleOutput for previous day. The workflow is defined at /projects/bootcamp/workflow/workflow.xml. Any libraries available for the workflow should be at /projects/bootcamp/workflow/lib. The process also defines properties queueName, ssh.host, and fileTimestamp which are passed to the workflow. In addition, Falcon exposes the following properties to the workflow: nameNode, jobTracker(hadoop properties), input and output(Input/Output properties).
+Sample process which runs daily at the 6th hour on the corp cluster. It takes one input - !SampleInput for the previous day (24 instances). It generates one output - !SampleOutput for the previous day. The workflow is defined at /projects/bootcamp/workflow/workflow.xml. Any libraries available for the workflow should be at /projects/bootcamp/workflow/lib. The process also defines properties queueName, ssh.host, and fileTimestamp which are passed to the workflow. In addition, Falcon exposes the following properties to the workflow: nameNode, jobTracker (hadoop properties), input and output (Input/Output properties).
 
 <verbatim>
 <?xml version="1.0" encoding="UTF-8"?>

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/Security.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/Security.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/Security.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/Security.twiki Fri Sep 12 09:43:48 2014
@@ -2,6 +2,12 @@
 
 ---++ Overview
 
+Apache Falcon enforces authentication and authorization, which are detailed below. Falcon also
+provides transport-level security ensuring data confidentiality and integrity.
+
+
+---++ Authentication (User Identity)
+
 Apache Falcon enforces authentication on protected resources. Once authentication has been established it sets a
 signed HTTP Cookie that contains an authentication token with the user name, user principal,
 authentication type and expiration time.
@@ -12,20 +18,140 @@ for HTTP. Hadoop Auth also supports addi
 simple interfaces.
 
 
----++ Authentication Methods
+---+++ Authentication Methods
 
It supports 2 authentication methods out of the box: simple and kerberos.
 
----+++ Pseudo/Simple Authentication
+---++++ Pseudo/Simple Authentication
 
 Falcon authenticates the user by simply trusting the value of the query string parameter 'user.name'. This is the
 default mode Falcon is configured with.
 
----+++ Kerberos Authentication
+---++++ Kerberos Authentication
 
 Falcon uses HTTP Kerberos SPNEGO to authenticate the user.
 
----++ Server Side Configuration Setup
+
+---++ Authorization
+
+Falcon also enforces authorization on Entities using ACLs (Access Control Lists). ACLs are useful
+for implementing permission requirements and provide a way to set different permissions for
+specific users or named groups.
+
+By default, support for authorization is disabled and can be enabled in startup.properties.
+
+---+++ ACLs in Entity
+
+All Entities now have an ACL, which needs to be present if authorization is enabled. Only the owner who
+created the entity will be allowed to update or delete it.
+
+An entity has ACLs (Access Control Lists) that are useful for implementing permission requirements
+and provide a way to set different permissions for specific users or named groups.
+<verbatim>
+    <ACL owner="test-user" group="test-group" permission="*"/>
+</verbatim>
+ACL indicates the Access Control List for this entity:
+owner is the owner of this entity,
+group is the group that has read access, and
+permission indicates the rwx permissions, which are not enforced at this time.
+
+---+++ Super-User
+
+The super-user is the user with the same identity as the falcon process itself. Loosely, if you
+started falcon, then you are the super-user. The super-user can do anything, in that
+permission checks never fail for the super-user. There is no persistent notion of who the
+super-user was; when falcon is started, the process identity determines who is the super-user
+for now. The Falcon super-user does not have to be the super-user of the falcon host, nor is it
+necessary that all clusters have the same super-user. Also, an experimenter running Falcon on a
+personal workstation conveniently becomes that installation's super-user without any configuration.
+
+Falcon also allows users to configure a super user group and allows users belonging to this
+group to be a super user.
+
+---+++ Group Memberships
+
+Once a user has been authenticated and a username has been determined, the list of groups is
+determined by a group mapping service, configured by the hadoop.security.group.mapping property
+in Hadoop. The default implementation, org.apache.hadoop.security.ShellBasedUnixGroupsMapping,
+will shell out to the Unix bash -c groups command to resolve a list of groups for a user.
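+
+For reference, a sketch of the corresponding Hadoop configuration (this is the Hadoop default, shown here only for
+illustration):
+<verbatim>
+<property>
+    <name>hadoop.security.group.mapping</name>
+    <value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value>
+</property>
+</verbatim>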
+
+Note that Falcon stores the user and group of an Entity as strings; there is no
+conversion from user and group identity numbers as is conventional in Unix.
+
+---+++ Authorization Provider
+
+Falcon provides a plugin-able provider interface for Authorization. It also ships with a default
+implementation that enforces the following authorization policy.
+
+---++++ Entity and Instance Management Operations Policy
+
+   * All Entity and Instance operations are authorized for the users who created them, owners, and users
+with group memberships
+   * References to entities within a feed or process are allowed without enforcing permissions:
+any Feed or Process can refer to a Cluster entity not owned by the Feed or Process owner, and
+any Process can refer to a Feed entity not owned by the Process owner
+
+The authorization is enforced in the following way:
+
+<verbatim>
+if admin resource,
+     if the authenticated user name matches the admin users configuration
+     Else if the groups of the authenticated user match the admin groups configuration
+     Else an authorization exception is thrown
+Else if entities or instance resource
+     if the authenticated user matches the owner in the ACL for the entity
+     Else if the groups of the authenticated user match the group in the ACL for the entity
+     Else an authorization exception is thrown
+Else if lineage resource
+     All have read-only permissions, the reason being that folks should be able to examine the
+     dependencies and allow reuse
+</verbatim>
+
+
+*Operations on Entity Resource*
+
+| *Resource*                                                                          | *Description*                      | *Authorization* |
+| [[restapi/EntityValidate][api/entities/validate/:entity-type]]                      | Validate the entity                | Owner/Group     |
+| [[restapi/EntitySubmit][api/entities/submit/:entity-type]]                          | Submit the entity                  | Owner/Group     |
+| [[restapi/EntityUpdate][api/entities/update/:entity-type/:entity-name]]             | Update the entity                  | Owner/Group     |
+| [[restapi/EntitySubmitAndSchedule][api/entities/submitAndSchedule/:entity-type]]    | Submit & Schedule the entity       | Owner/Group     |
+| [[restapi/EntitySchedule][api/entities/schedule/:entity-type/:entity-name]]         | Schedule the entity                | Owner/Group     |
+| [[restapi/EntitySuspend][api/entities/suspend/:entity-type/:entity-name]]           | Suspend the entity                 | Owner/Group     |
+| [[restapi/EntityResume][api/entities/resume/:entity-type/:entity-name]]             | Resume the entity                  | Owner/Group     |
+| [[restapi/EntityDelete][api/entities/delete/:entity-type/:entity-name]]             | Delete the entity                  | Owner/Group     |
+| [[restapi/EntityStatus][api/entities/status/:entity-type/:entity-name]]             | Get the status of the entity       | Owner/Group     |
+| [[restapi/EntityDefinition][api/entities/definition/:entity-type/:entity-name]]     | Get the definition of the entity   | Owner/Group     |
+| [[restapi/EntityList][api/entities/list/:entity-type?fields=:fields]]               | Get the list of entities           | Owner/Group     |
+| [[restapi/EntityDependencies][api/entities/dependencies/:entity-type/:entity-name]] | Get the dependencies of the entity | Owner/Group     |
+
+*REST Call on Feed and Process Instances*
+
+| *Resource*                                                                  | *Description*                | *Authorization* |
+| [[restapi/InstanceRunning][api/instance/running/:entity-type/:entity-name]] | List of running instances.   | Owner/Group     |
+| [[restapi/InstanceStatus][api/instance/status/:entity-type/:entity-name]]   | Status of a given instance   | Owner/Group     |
+| [[restapi/InstanceKill][api/instance/kill/:entity-type/:entity-name]]       | Kill a given instance        | Owner/Group     |
+| [[restapi/InstanceSuspend][api/instance/suspend/:entity-type/:entity-name]] | Suspend a running instance   | Owner/Group     |
+| [[restapi/InstanceResume][api/instance/resume/:entity-type/:entity-name]]   | Resume a given instance      | Owner/Group     |
+| [[restapi/InstanceRerun][api/instance/rerun/:entity-type/:entity-name]]     | Rerun a given instance       | Owner/Group     |
+| [[InstanceLogs][api/instance/logs/:entity-type/:entity-name]]               | Get logs of a given instance | Owner/Group     |
+
+---++++ Admin Resources Policy
+
+Only users belonging to admin users or groups have access to this resource. Admin membership is
+determined by a static configuration parameter.
+
+| *Resource*                                             | *Description*                               | *Authorization*  |
+| [[restapi/AdminVersion][api/admin/version]]            | Get version of the server                   | No restriction   |
+| [[restapi/AdminStack][api/admin/stack]]                | Get stack of the server                     | Admin User/Group |
+| [[restapi/AdminConfig][api/admin/config/:config-type]] | Get configuration information of the server | Admin User/Group |
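+
+For example, any authenticated user can fetch the version of the server, while the stack and
+config resources require admin membership (host and port are illustrative):
+<verbatim>
+GET http://localhost:15000/api/admin/version
+</verbatim>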
+
+
+---++++ Lineage Resource Policy
+
+Lineage is read-only and hence all users can look at lineage for their respective entities.
+
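+For example, any authenticated user may issue read-only lineage queries such as the
+following (host and port are illustrative):
+<verbatim>
+GET http://localhost:15000/api/graphs/lineage/vertices/all
+</verbatim>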
+
+---++ Authentication Configuration
+
+The following is the server-side configuration setup for authentication.
 
 ---+++ Common Configuration Parameters
 
@@ -105,6 +231,59 @@ Falcon uses HTTP Kerberos SPNEGO to auth
 *.falcon.http.authentication.blacklisted.users=
 </verbatim>
 
+---++ Authorization Configuration
+
+---+++ Enabling Authorization
+By default, support for authorization is disabled and specifying ACLs in entities is optional.
+To enable support for authorization, set falcon.security.authorization.enabled to true in the
+startup configuration.
+
+<verbatim>
+# Authorization Enabled flag: false|true
+*.falcon.security.authorization.enabled=true
+</verbatim>
+
+---+++ Authorization Provider
+
+Falcon ships with a basic authorization implementation,
+org.apache.falcon.security.DefaultAuthorizationProvider.
+This can be overridden by a custom implementation in the startup configuration.
+
+<verbatim>
+# Authorization Provider Fully Qualified Class Name
+*.falcon.security.authorization.provider=org.apache.falcon.security.DefaultAuthorizationProvider
+</verbatim>
+
+---+++ Super User Group
+
+Super user group is determined by the configuration:
+
+<verbatim>
+# The name of the group of super-users
+*.falcon.security.authorization.superusergroup=falcon
+</verbatim>
+
+---+++ Admin Membership
+
+Administrative users are determined by the configuration:
+
+<verbatim>
+# Admin Users, comma separated users
+*.falcon.security.authorization.admin.users=falcon,ambari-qa,seetharam
+</verbatim>
+
+Administrative groups are determined by the configuration:
+
+<verbatim>
+# Admin Group Membership, comma separated groups
+*.falcon.security.authorization.admin.groups=falcon,testgroup,staff
+</verbatim>
+
+
+---++ SSL
+
+Falcon provides transport-level security, ensuring data confidentiality and integrity. This is
+enabled by default for communication over HTTP between the client and the server.
+
 ---+++ SSL Configuration
 
 <verbatim>

Added: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AdjacentVertices.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AdjacentVertices.twiki?rev=1624488&view=auto
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AdjacentVertices.twiki (added)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AdjacentVertices.twiki Fri Sep 12 09:43:48 2014
@@ -0,0 +1,70 @@
+---++  GET api/graphs/lineage/vertices/:id/:direction
+   * <a href="#Description">Description</a>
+   * <a href="#Parameters">Parameters</a>
+   * <a href="#Results">Results</a>
+   * <a href="#Examples">Examples</a>
+
+---++ Description
+Get a list of adjacent vertices or edges with a direction.
+
+---++ Parameters
+   * :id is the id of the vertex.
+   * :direction is the direction associated with the edges.
+
+   To get the adjacent out vertices of a vertex, pass the direction as out; pass in for the
+   adjacent in vertices, and both for both in and out adjacent vertices. Similarly, pass outE
+   for the out edges of a vertex, inE for the in edges, and bothE for both in and out edges.
+
+      * out  : get the adjacent out vertices of vertex
+      * in   : get the adjacent in vertices of vertex
+      * both : get the both adjacent in and out vertices of vertex
+      * outCount  : get the number of out vertices of vertex
+      * inCount   : get the number of in vertices of vertex
+      * bothCount : get the number of adjacent in and out vertices of vertex
+      * outIds  : get the identifiers of out vertices of vertex
+      * inIds   : get the identifiers of in vertices of vertex
+      * bothIds : get the identifiers of adjacent in and out vertices of vertex
+
+---++ Results
+Adjacent vertices of the vertex for the specified direction.
+
+---++ Examples
+---+++ Rest Call
+<verbatim>
+GET http://localhost:15000/api/graphs/lineage/vertices/4/out
+</verbatim>
+---+++ Result
+<verbatim>
+{
+    "results": [
+        {
+            "timestamp":"2014-04-21T20:55Z",
+            "name":"sampleFeed",
+            "type":"feed-instance",
+            "_id":8,
+            "_type":"vertex"
+        }
+    ],
+    "totalSize":1}
+}
+</verbatim>
+
+---+++ Rest Call
+<verbatim>
+GET http://localhost:15000/api/graphs/lineage/vertices/4/bothE
+</verbatim>
+---+++ Result
+<verbatim>
+{
+    "results":[
+        {
+            "_id":"Q5V-4-5g",
+            "_type":"edge",
+            "_outV":4,
+            "_inV":8,
+            "_label":"output"
+        }
+    ],
+    "totalSize":1
+}
+</verbatim>
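+
+---+++ Rest Call
+A hedged sketch of one of the count variants; the exact response shape may vary by version:
+<verbatim>
+GET http://localhost:15000/api/graphs/lineage/vertices/4/outCount
+</verbatim>
+---+++ Result
+<verbatim>
+{
+    "totalSize": 1
+}
+</verbatim>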

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AdminConfig.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AdminConfig.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AdminConfig.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AdminConfig.twiki Fri Sep 12 09:43:48 2014
@@ -17,7 +17,6 @@ Configuration information of the server.
 ---+++ Rest Call
 <verbatim>
 GET http://localhost:15000/api/admin/config/deploy
-Remote-User: rgautam
 </verbatim>
 ---+++ Result
 <verbatim>

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AdminStack.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AdminStack.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AdminStack.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AdminStack.twiki Fri Sep 12 09:43:48 2014
@@ -16,7 +16,6 @@ Stack trace of the server.
 ---+++ Rest Call
 <verbatim>
 GET http://localhost:15000/api/admin/stack
-Remote-User: rgautam
 </verbatim>
 ---+++ Result
 <verbatim>

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AdminVersion.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AdminVersion.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AdminVersion.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AdminVersion.twiki Fri Sep 12 09:43:48 2014
@@ -16,7 +16,6 @@ Version of the server.
 ---+++ Rest Call
 <verbatim>
 GET http://localhost:15000/api/admin/version
-Remote-User: rgautam
 </verbatim>
 ---+++ Result
 <verbatim>

Added: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AllEdges.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AllEdges.twiki?rev=1624488&view=auto
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AllEdges.twiki (added)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AllEdges.twiki Fri Sep 12 09:43:48 2014
@@ -0,0 +1,42 @@
+---++  GET api/graphs/lineage/edges/all
+   * <a href="#Description">Description</a>
+   * <a href="#Parameters">Parameters</a>
+   * <a href="#Results">Results</a>
+   * <a href="#Examples">Examples</a>
+
+---++ Description
+Get all edges.
+
+---++ Parameters
+None.
+
+---++ Results
+All edges in the lineage graph.
+
+---++ Examples
+---+++ Rest Call
+<verbatim>
+GET http://localhost:15000/api/graphs/lineage/edges/all
+</verbatim>
+---+++ Result
+<verbatim>
+{
+    "results": [
+        {
+            "_id":"Q5V-4-5g",
+            "_type":"edge",
+            "_outV":4,
+            "_inV":8,
+            "_label":"output"
+        },
+        {
+            "_id":"Q6t-c-5g",
+            "_type":"edge",
+            "_outV":12,
+            "_inV":16,
+            "_label":"output"
+        }
+    ],
+    "totalSize": 2
+}
+</verbatim>

Added: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AllVertices.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AllVertices.twiki?rev=1624488&view=auto
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AllVertices.twiki (added)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/AllVertices.twiki Fri Sep 12 09:43:48 2014
@@ -0,0 +1,43 @@
+---++  GET api/graphs/lineage/vertices/all
+   * <a href="#Description">Description</a>
+   * <a href="#Parameters">Parameters</a>
+   * <a href="#Results">Results</a>
+   * <a href="#Examples">Examples</a>
+
+---++ Description
+Get all vertices.
+
+---++ Parameters
+None.
+
+---++ Results
+All vertices in the lineage graph.
+
+---++ Examples
+---+++ Rest Call
+<verbatim>
+GET http://localhost:15000/api/graphs/lineage/vertices/all
+</verbatim>
+---+++ Result
+<verbatim>
+{
+    "results": [
+        {
+            "timestamp":"2014-04-21T20:55Z",
+            "name":"sampleIngestProcess\/2014-03-01T10:00Z",
+            "type":"process-instance",
+            "version":"2.0.0",
+            "_id":4,
+            "_type":"vertex"
+        },
+        {
+            "timestamp":"2014-04-21T20:55Z",
+            "name":"rawEmailFeed\/2014-03-01T10:00Z",
+            "type":"feed-instance",
+            "_id":8,
+            "_type":"vertex"
+        }
+    ],
+    "totalSize": 2
+}
+</verbatim>

Added: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/Edge.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/Edge.twiki?rev=1624488&view=auto
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/Edge.twiki (added)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/Edge.twiki Fri Sep 12 09:43:48 2014
@@ -0,0 +1,33 @@
+---++  GET api/graphs/lineage/edges/:id
+   * <a href="#Description">Description</a>
+   * <a href="#Parameters">Parameters</a>
+   * <a href="#Results">Results</a>
+   * <a href="#Examples">Examples</a>
+
+---++ Description
+Gets the edge with the specified id.
+
+---++ Parameters
+   * :id is the unique id of the edge.
+
+---++ Results
+Edge with the specified id.
+
+---++ Examples
+---+++ Rest Call
+<verbatim>
+GET http://localhost:15000/api/graphs/lineage/edges/Q6t-c-5g
+</verbatim>
+---+++ Result
+<verbatim>
+{
+    "results":
+        {
+            "_id":"Q6t-c-5g",
+            "_type":"edge",
+            "_outV":12,
+            "_inV":16,
+            "_label":"output"
+        }
+}
+</verbatim>

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityDefinition.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityDefinition.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityDefinition.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityDefinition.twiki Fri Sep 12 09:43:48 2014
@@ -18,7 +18,6 @@ Definition of the entity.
 ---+++ Rest Call
 <verbatim>
 GET http://localhost:15000/api/entities/definition/process/SampleProcess
-Remote-User: rgautam
 </verbatim>
 ---+++ Result
 <verbatim>

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityDelete.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityDelete.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityDelete.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityDelete.twiki Fri Sep 12 09:43:48 2014
@@ -18,7 +18,6 @@ Results of the delete operation.
 ---+++ Rest Call
 <verbatim>
 DELETE http://localhost:15000/api/entities/delete/cluster/SampleProcess
-Remote-User: rgautam
 </verbatim>
 ---+++ Result
 <verbatim>

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityDependencies.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityDependencies.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityDependencies.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityDependencies.twiki Fri Sep 12 09:43:48 2014
@@ -18,7 +18,6 @@ Dependenciess of the entity.
 ---+++ Rest Call
 <verbatim>
 GET http://localhost:15000/api/entities/dependencies/process/SampleProcess
-Remote-User: rgautam
 </verbatim>
 ---+++ Result
 <verbatim>
@@ -26,11 +25,13 @@ Remote-User: rgautam
     "entity": [
         {
             "name": "SampleInput",
-            "type": "feed"
+            "type": "feed",
+            "tag": [Input]
         },
         {
             "name": "SampleOutput",
             "type": "feed"
+            "tag": [Output]
         },
         {
             "name": "primary-cluster",

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityList.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityList.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityList.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityList.twiki Fri Sep 12 09:43:48 2014
@@ -8,9 +8,19 @@
 Get list of the entities.
 
 ---++ Parameters
-   * :entity-type can be cluster, feed or process.
-   * :fields (optional) additional fields that the client are interested in, separated by commas.
-     Currently falcon only support status as a valid field.
+   * :entity-type Valid options are cluster, feed or process.
+   * fields <optional param> Fields of entity that the user wants to view, separated by commas.
+      * Valid options are STATUS, TAGS, PIPELINES.
+   * filterBy <optional param> Filter results by list of field:value pairs. Example: filterBy=STATUS:RUNNING,PIPELINES:clickLogs
+      * Supported filter fields are NAME, STATUS, PIPELINES, CLUSTER.
+      * Query will do an AND among filterBy fields.
+   * tags <optional param> Return list of entities that have specified tags, separated by a comma. Query will do AND on tag values.
+      * Example: tags=consumer=consumer@xyz.com,owner=producer@xyz.com
+   * orderBy <optional param> Field by which results should be ordered.
+      * Supports ordering by "name".
+   * sortOrder <optional param> Valid options are "asc" and "desc".
+   * offset <optional param> Show results from the offset, used for pagination. Defaults to 0.
+   * numResults <optional param> Number of results to show per request, used for pagination. Only integers > 0 are valid; default is 10.
 
 ---++ Results
 List of the entities.
@@ -19,7 +29,6 @@ List of the entities.
 ---+++ Rest Call
 <verbatim>
 GET http://localhost:15000/api/entities/list/feed
-Remote-User: rgautam
 </verbatim>
 ---+++ Result
 <verbatim>
@@ -40,7 +49,6 @@ Remote-User: rgautam
 ---+++ Rest Call
 <verbatim>
 GET http://localhost:15000/api/entities/list/feed?fields=status
-Remote-User: rgautam
 </verbatim>
 ---+++ Result
 <verbatim>
@@ -60,3 +68,28 @@ Remote-User: rgautam
 }
 </verbatim>
 
+---+++ Rest Call
+<verbatim>
+GET http://localhost:15000/api/entities/list/process?filterBy=STATUS:RUNNING,PIPELINES:dataReplication&fields=status,pipelines,tags&tags=consumer=consumer@xyz.com&orderBy=name&offset=2&numResults=2
+</verbatim>
+---+++ Result
+<verbatim>
+{
+    "entity": [
+        {
+            "name"  : "SampleProcess1",
+            "type"  : "process",
+            "status": "RUNNING",
+            "pipelines": "dataReplication",
+            "tags": "consumer=consumer@xyz.com"
+        },
+        {
+            "name": "SampleProcess3",
+            "type": "process",
+            "status": "RUNNING",
+            "pipelines": "dataReplication",
+            "tags": "consumer=consumer@xyz.com,owner=producer@xyz.com"
+        }
+    ]
+}
+</verbatim>

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityResume.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityResume.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityResume.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityResume.twiki Fri Sep 12 09:43:48 2014
@@ -18,7 +18,6 @@ Result of the resume command.
 ---+++ Rest Call
 <verbatim>
 POST http://localhost:15000/api/entities/resume/process/SampleProcess
-Remote-User: rgautam
 </verbatim>
 ---+++ Result
 <verbatim>

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySchedule.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySchedule.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySchedule.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySchedule.twiki Fri Sep 12 09:43:48 2014
@@ -18,7 +18,6 @@ Result of the schedule command.
 ---+++ Rest Call
 <verbatim>
 POST http://localhost:15000/api/entities/schedule/process/SampleProcess
-Remote-User: rgautam
 </verbatim>
 ---+++ Result
 <verbatim>

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityStatus.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityStatus.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityStatus.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityStatus.twiki Fri Sep 12 09:43:48 2014
@@ -18,7 +18,6 @@ Status of the entity.
 ---+++ Rest Call
 <verbatim>
 GET http://localhost:15000/api/entities/status/process/SampleProcess
-Remote-User: rgautam
 </verbatim>
 ---+++ Result
 <verbatim>

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySubmit.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySubmit.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySubmit.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySubmit.twiki Fri Sep 12 09:43:48 2014
@@ -17,7 +17,6 @@ Result of the submission.
 ---+++ Rest Call
 <verbatim>
 POST http://localhost:15000/api/entities/submit/feed
-Remote-User: rgautam
 <?xml version="1.0" encoding="UTF-8"?>
 <!-- Hourly sample input data -->
 
@@ -59,7 +58,6 @@ Remote-User: rgautam
 ---+++ Rest Call
 <verbatim>
 POST http://localhost:15000/api/entities/submit/process
-Remote-User: rgautam
 <?xml version="1.0" encoding="UTF-8"?>
 <!-- Daily sample process. Runs at 6th hour every day. Input - last day's hourly data. Generates output for yesterday -->
 <process xmlns="uri:falcon:process:0.1" name="SampleProcess" >
@@ -103,4 +101,4 @@ Remote-User: rgautam
     "message": "default\/Submit successful (process) SampleProcess\n",
     "status": "SUCCEEDED"
 }
-</verbatim>
\ No newline at end of file
+</verbatim>

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySubmitAndSchedule.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySubmitAndSchedule.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySubmitAndSchedule.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySubmitAndSchedule.twiki Fri Sep 12 09:43:48 2014
@@ -17,7 +17,6 @@ Result of the submit and schedule comman
 ---+++ Rest Call
 <verbatim>
 POST http://localhost:15000/api/entities/submitAndSchedule/process
-Remote-User: rgautam
 <?xml version="1.0" encoding="UTF-8"?>
 <!-- Daily sample process. Runs at 6th hour every day. Input - last day's hourly data. Generates output for yesterday -->
 <process xmlns="uri:falcon:process:0.1" name="SampleProcess" >

Added: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySummary.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySummary.twiki?rev=1624488&view=auto
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySummary.twiki (added)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySummary.twiki Fri Sep 12 09:43:48 2014
@@ -0,0 +1,73 @@
+---++  GET /api/entities/summary/:entity-type/:cluster
+   * <a href="#Description">Description</a>
+   * <a href="#Parameters">Parameters</a>
+   * <a href="#Results">Results</a>
+   * <a href="#Examples">Examples</a>
+
+---++ Description
+Given an entity type and a cluster, get the list of entities along with a summary of the N most recent instances of each entity.
+
+---++ Parameters
+   * :entity-type Valid options are cluster, feed or process.
+   * :cluster Show entities that belong to this cluster.
+   * start <optional param> Show entity summaries from this date. Date format is yyyy-MM-dd'T'HH:mm'Z'.
+      * By default, it is set to (end - 2 days).
+   * end <optional param> Show entity summary up to this date. Date format is yyyy-MM-dd'T'HH:mm'Z'.
+      * Default is set to now.
+   * fields <optional param> Fields of entity that the user wants to view, separated by commas.
+      * Valid options are STATUS, TAGS, PIPELINES.
+   * filterBy <optional param> Filter results by list of field:value pairs. Example: filterBy=STATUS:RUNNING,PIPELINES:clickLogs
+      * Supported filter fields are NAME, STATUS, PIPELINES, CLUSTER.
+      * Query will do an AND among filterBy fields.
+   * tags <optional param> Return list of entities that have specified tags, separated by a comma. Query will do AND on tag values.
+      * Example: tags=consumer=consumer@xyz.com,owner=producer@xyz.com
+   * orderBy <optional param> Field by which results should be ordered.
+      * Supports ordering by "name".
+   * sortOrder <optional param> Valid options are "asc" and "desc".
+   * offset <optional param> Show results from the offset, used for pagination. Defaults to 0.
+   * numResults <optional param> Number of results to show per request, used for pagination. Only integers > 0 are valid; default is 10.
+   * numInstances <optional param> Number of recent instances to show per entity. Only integers > 0 are valid; default is 7.
+
+---++ Results
+Entities along with a summary of the N most recent instances of each entity.
+
+---++ Examples
+---+++ Rest Call
+<verbatim>
+GET http://localhost:15000/api/entities/summary/feed/primary-cluster?filterBy=STATUS:RUNNING&fields=status&tags=consumer=consumer@xyz.com&orderBy=name&offset=0&numResults=1&numInstances=2
+</verbatim>
+---+++ Result
+<verbatim>
+{
+    "entitySummary": [
+        {
+            "name"  : "SampleOutput",
+            "type"  : "feed",
+            "status": "RUNNING",
+            "instances": [
+            {
+                "details": "",
+                "endTime": "2013-10-21T14:40:26-07:00",
+                "startTime": "2013-10-21T14:39:56-07:00",
+                "cluster": "primary-cluster",
+                "logFile": "http:\/\/localhost:11000\/oozie?job=0000070-131021115933395-oozie-rgau-W",
+                "status": "RUNNING",
+                "instance": "2012-04-03T07:00Z"
+            },
+            {
+                "details": "",
+                "endTime": "2013-10-21T14:42:27-07:00",
+                "startTime": "2013-10-21T14:41:57-07:00",
+                "cluster": "primary-cluster",
+                "logFile": "http:\/\/localhost:11000\/oozie?job=0000070-131021115933397-oozie-rgau-W",
+                "status": "RUNNING",
+                "instance": "2012-04-03T08:00Z"
+            }
+            ]
+        }
+    ],
+    "requestId": "default\/e15bb378-d09f-4911-9df2-5334a45153d2\n",
+    "message": "default\/STATUS\n",
+    "status": "SUCCEEDED"
+}
+</verbatim>

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySuspend.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySuspend.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySuspend.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntitySuspend.twiki Fri Sep 12 09:43:48 2014
@@ -18,7 +18,6 @@ Status of the entity.
 ---+++ Rest Call
 <verbatim>
 POST http://localhost:15000/api/entities/suspend/process/SampleProcess
-Remote-User: rgautam
 </verbatim>
 ---+++ Result
 <verbatim>

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityUpdate.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityUpdate.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityUpdate.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityUpdate.twiki Fri Sep 12 09:43:48 2014
@@ -19,7 +19,6 @@ Result of the validation.
 ---+++ Rest Call
 <verbatim>
 POST http://localhost:15000/api/entities/update/process/SampleProcess?effective=2014-01-01T00:00Z
-Remote-User: rgautam
 <?xml version="1.0" encoding="UTF-8"?>
 <!-- Daily sample process. Runs at 6th hour every day. Input - last day's hourly data. Generates output for yesterday -->
 <process xmlns="uri:falcon:process:0.1" name="SampleProcess" >

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityValidate.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityValidate.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityValidate.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/EntityValidate.twiki Fri Sep 12 09:43:48 2014
@@ -17,7 +17,6 @@ Result of the validation.
 ---+++ Rest Call
 <verbatim>
 POST http://localhost:15000/api/entities/validate/cluster
-Remote-User: rgautam
 <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
 <cluster xmlns="uri:falcon:cluster:0.1" name="primary-cluster" description="Primary Cluster" colo="west-coast">
     <interfaces>
@@ -46,7 +45,6 @@ Remote-User: rgautam
 ---+++ Rest Call
 <verbatim>
 POST http://localhost:15000/api/entities/validate/feed
-Remote-User: rgautam
 <?xml version="1.0" encoding="UTF-8"?>
 <!-- Hourly sample input data -->
 
@@ -88,7 +86,6 @@ Remote-User: rgautam
 ---+++ Rest Call
 <verbatim>
 POST http://localhost:15000/api/entities/validate/feed
-Remote-User: rgautam
 <?xml version="1.0" encoding="UTF-8"?>
 <!-- Daily sample output data -->
 
@@ -125,7 +122,6 @@ xmlns:xsi="http://www.w3.org/2001/XMLSch
 ---+++ Rest Call
 <verbatim>
 POST http://localhost:15000/api/entities/validate/process
-Remote-User: rgautam
 <?xml version="1.0" encoding="UTF-8"?>
 <!-- Daily sample process. Runs at 6th hour every day. Input - last day's hourly data. Generates output for yesterday -->
 <process xmlns="uri:falcon:process:0.1" name="SampleProcess" >
@@ -169,4 +165,4 @@ Remote-User: rgautam
     "message": "Validated successfully (PROCESS) SampleProcess",
     "status": "SUCCEEDED"
 }
-</verbatim>
\ No newline at end of file
+</verbatim>

Added: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/Graph.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/Graph.twiki?rev=1624488&view=auto
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/Graph.twiki (added)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/Graph.twiki Fri Sep 12 09:43:48 2014
@@ -0,0 +1,22 @@
+---++  GET api/graphs/lineage/serialize
+   * <a href="#Description">Description</a>
+   * <a href="#Parameters">Parameters</a>
+   * <a href="#Results">Results</a>
+   * <a href="#Examples">Examples</a>
+
+---++ Description
+Dump the graph.
+
+---++ Parameters
+None.
+
+---++ Results
+Serializes the lineage graph to the file configured via *.falcon.graph.serialize.path in the custom startup.properties.
+
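+A hypothetical startup.properties entry (the path value is illustrative):
+<verbatim>
+# Location on the local file system for the serialized lineage graph
+*.falcon.graph.serialize.path=/var/lib/falcon/data/lineage
+</verbatim>
+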
+---++ Examples
+---+++ Rest Call
+<verbatim>
+GET http://localhost:15000/api/graphs/lineage/serialize
+</verbatim>
+---+++ Result
+None.

Modified: incubator/falcon/trunk/general/src/site/twiki/docs/restapi/InstanceKill.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/twiki/docs/restapi/InstanceKill.twiki?rev=1624488&r1=1624487&r2=1624488&view=diff
==============================================================================
--- incubator/falcon/trunk/general/src/site/twiki/docs/restapi/InstanceKill.twiki (original)
+++ incubator/falcon/trunk/general/src/site/twiki/docs/restapi/InstanceKill.twiki Fri Sep 12 09:43:48 2014
@@ -11,6 +11,7 @@ Kill a currently running instance.
    * :entity-type can either be a feed or a process.
    * :entity-name is name of the entity.
    * start start time of the entity.
+   * lifecycle <optional param> For feeds, valid options are Eviction and Replication (default); for processes, Execution (default). See the sketch below.
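+
+   A hedged sketch of killing only the eviction lifecycle of a feed instance (entity name,
+   times, host and port are illustrative):
+<verbatim>
+POST http://localhost:15000/api/instance/kill/feed/SampleFeed?colo=*&start=2012-04-03T07:00Z&lifecycle=Eviction
+</verbatim>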
 
 ---++ Results
 Result of the kill operation.
@@ -19,7 +20,6 @@ Result of the kill operation.
 ---+++ Rest Call
 <verbatim>
 POST http://localhost:15000/api/instance/kill/process/SampleProcess?colo=*&start=2012-04-03T07:00Z
-Remote-User: rgautam
 </verbatim>
 ---+++ Result
 <verbatim>