Posted to commits@falcon.apache.org by sr...@apache.org on 2013/08/20 19:04:55 UTC

svn commit: r1515884 [1/2] - in /incubator/falcon/trunk: ./ general/ general/src/ general/src/site/ general/src/site/twiki/docs/ general/src/site/twiki/wiki/ releases/ releases/0.3-incubating/ releases/0.3-incubating/src/ releases/0.3-incubating/src/si...

Author: sriksun
Date: Tue Aug 20 17:04:54 2013
New Revision: 1515884

URL: http://svn.apache.org/r1515884
Log:
Updated site with release details for 0.3-incubating

Added:
    incubator/falcon/trunk/general/
    incubator/falcon/trunk/general/pom.xml
    incubator/falcon/trunk/general/src/
      - copied from r1515558, incubator/falcon/trunk/src/
    incubator/falcon/trunk/general/src/site/site.xml
      - copied, changed from r1515562, incubator/falcon/trunk/src/site/site.xml
    incubator/falcon/trunk/general/src/site/twiki/docs/FalconArchitecture.twiki
      - copied unchanged from r1515562, incubator/falcon/trunk/src/site/twiki/docs/FalconArchitecture.twiki
    incubator/falcon/trunk/releases/
    incubator/falcon/trunk/releases/0.3-incubating/
    incubator/falcon/trunk/releases/0.3-incubating/pom.xml
    incubator/falcon/trunk/releases/0.3-incubating/src/
    incubator/falcon/trunk/releases/0.3-incubating/src/site/
    incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/
    incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/
    incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/Architecture.png   (with props)
    incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/EntityDependency.png   (with props)
    incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/FeedSchedule.png   (with props)
    incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/PrismSetup.png   (with props)
    incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/ProcessSchedule.png   (with props)
    incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/apache-incubator-logo.png   (with props)
    incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/falcon-logo.png   (with props)
    incubator/falcon/trunk/releases/0.3-incubating/src/site/site.xml
    incubator/falcon/trunk/releases/0.3-incubating/src/site/twiki/
    incubator/falcon/trunk/releases/0.3-incubating/src/site/twiki/docs/
    incubator/falcon/trunk/releases/0.3-incubating/src/site/twiki/docs/EntitySpecification.twiki
    incubator/falcon/trunk/releases/0.3-incubating/src/site/twiki/docs/FalconArchitecture.twiki
    incubator/falcon/trunk/releases/0.3-incubating/src/site/twiki/docs/FalconCLI.twiki
    incubator/falcon/trunk/releases/0.3-incubating/src/site/twiki/docs/InstallationSteps.twiki
    incubator/falcon/trunk/releases/0.3-incubating/src/site/twiki/docs/OnBoarding.twiki
    incubator/falcon/trunk/releases/0.3-incubating/src/site/twiki/index.twiki
    incubator/falcon/trunk/releases/pom.xml
Removed:
    incubator/falcon/trunk/general/src/site/twiki/wiki/IRCChannel.twiki
    incubator/falcon/trunk/general/src/site/twiki/wiki/PoweredBy.twiki
    incubator/falcon/trunk/general/src/site/twiki/wiki/Roadmap.twiki
    incubator/falcon/trunk/src/
Modified:
    incubator/falcon/trunk/pom.xml

Added: incubator/falcon/trunk/general/pom.xml
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/pom.xml?rev=1515884&view=auto
==============================================================================
--- incubator/falcon/trunk/general/pom.xml (added)
+++ incubator/falcon/trunk/general/pom.xml Tue Aug 20 17:04:54 2013
@@ -0,0 +1,114 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <groupId>org.apache.falcon</groupId>
+        <artifactId>falcon-website</artifactId>
+        <version>0.4-SNAPSHOT</version>
+    </parent>
+    <artifactId>falcon-website-general</artifactId>
+    <version>0.4-SNAPSHOT</version>
+    <packaging>war</packaging>
+
+    <name>Apache Falcon - General</name>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-site-plugin</artifactId>
+                <version>3.2</version>
+                <dependencies>
+                    <dependency>
+                        <groupId>org.apache.maven.doxia</groupId>
+                        <artifactId>doxia-module-twiki</artifactId>
+                        <version>1.3</version>
+                    </dependency>
+                </dependencies>
+                <executions>
+                    <execution>
+                        <goals>
+                            <goal>site</goal>
+                        </goals>
+                        <phase>prepare-package</phase>
+                    </execution>
+                </executions>
+                <configuration>
+                    <outputDirectory>../../site</outputDirectory>
+                    <reportPlugins>
+                        <plugin>
+                            <groupId>org.apache.maven.plugins</groupId>
+                            <artifactId>maven-project-info-reports-plugin</artifactId>
+                            <version>2.3</version>
+                            <reportSets>
+                                <reportSet>
+                                    <reports>
+                                        <report>index</report>
+                                        <report>project-team</report>
+                                        <report>mailing-list</report>
+                                        <report>issue-tracking</report>
+                                        <report>license</report>
+                                        <report>scm</report>
+                                    </reports>
+                                </reportSet>
+                            </reportSets>
+                            <configuration>
+                                <dependencyDetailsEnabled>false</dependencyDetailsEnabled>
+                                <dependencyLocationsEnabled>false</dependencyLocationsEnabled>
+                            </configuration>
+                        </plugin>
+                        <plugin>
+                            <groupId>org.apache.maven.plugins</groupId>
+                            <artifactId>maven-javadoc-plugin</artifactId>
+                            <version>2.7</version>
+                        </plugin>
+                        <plugin>
+                            <groupId>org.apache.maven.plugins</groupId>
+                            <artifactId>maven-jxr-plugin</artifactId>
+                            <version>2.1</version>
+                            <configuration>
+                                <aggregate>true</aggregate>
+                            </configuration>
+                        </plugin>
+                    </reportPlugins>
+                </configuration>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-war-plugin</artifactId>
+                <version>2.3</version>
+                <configuration>
+                    <webResources>
+                        <resource>
+                            <directory>src/site/resources</directory>
+                            <targetPath>pages</targetPath>
+                        </resource>
+                        <resource>
+                            <directory>target/site</directory>
+                            <targetPath>pages</targetPath>
+                        </resource>
+                    </webResources>
+                </configuration>
+            </plugin>
+        </plugins>
+    </build>
+
+</project>

Copied: incubator/falcon/trunk/general/src/site/site.xml (from r1515562, incubator/falcon/trunk/src/site/site.xml)
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/general/src/site/site.xml?p2=incubator/falcon/trunk/general/src/site/site.xml&p1=incubator/falcon/trunk/src/site/site.xml&r1=1515562&r2=1515884&rev=1515884&view=diff
==============================================================================
--- incubator/falcon/trunk/src/site/site.xml (original)
+++ incubator/falcon/trunk/general/src/site/site.xml Tue Aug 20 17:04:54 2013
@@ -45,7 +45,7 @@
     <bannerLeft>
         <name>Falcon</name>
         <src>./images/falcon-logo.png</src>
-        <href>http://falcon.incubator.apache.org</href>
+        <href>http://falcon.incubator.apache.org/index.html</href>
         <width>200px</width>
         <height>45px</height>
     </bannerLeft>
@@ -66,7 +66,7 @@
             </script>
         </head>
 
-        <breadcrumbs position="left">
+        <breadcrumbs>
             <item name="Apache" href="http://www.apache.org"/>
             <item name="Falcon" title="Apache Falcon" href="index.html"/>
         </breadcrumbs>
@@ -94,12 +94,13 @@
         </menu>
 
         <menu name="Releases">
-            <item name="0.3.0 (coming soon)" href=""/>
+            <item name="0.3-incubating" href="http://www.apache.org/dist/incubator/falcon"/>
             <item name="Roadmap" href="https://cwiki.apache.org/confluence/display/FALCON/Roadmap"/>
         </menu>
 
         <menu name="Documentation">
-            <item name="0.3.0 (coming soon)" href=""/>
+            <item name="current" href="./docs/GettingStarted.html"/>
+            <item name="0.3-incubating" href="./0.3-incubating/index.html"/>
         </menu>
 
         <menu name="Resources">

Modified: incubator/falcon/trunk/pom.xml
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/pom.xml?rev=1515884&r1=1515883&r2=1515884&view=diff
==============================================================================
--- incubator/falcon/trunk/pom.xml (original)
+++ incubator/falcon/trunk/pom.xml Tue Aug 20 17:04:54 2013
@@ -21,8 +21,8 @@
     <modelVersion>4.0.0</modelVersion>
     <groupId>org.apache.falcon</groupId>
     <artifactId>falcon-website</artifactId>
-    <version>0.3-SNAPSHOT</version>
-    <packaging>war</packaging>
+    <version>0.4-SNAPSHOT</version>
+    <packaging>pom</packaging>
 
     <name>Apache Falcon</name>
     <description>Apache Falcon is a data management platform for Hadoop.</description>
@@ -184,86 +184,10 @@
         </developer>
     </developers>
 
-    <build>
-        <plugins>
-            <plugin>
-                <groupId>org.apache.maven.plugins</groupId>
-                <artifactId>maven-site-plugin</artifactId>
-                <version>3.2</version>
-                <dependencies>
-                    <dependency>
-                        <groupId>org.apache.maven.doxia</groupId>
-                        <artifactId>doxia-module-twiki</artifactId>
-                        <version>1.3</version>
-                    </dependency>
-                </dependencies>
-                <executions>
-                    <execution>
-                        <goals>
-                            <goal>site</goal>
-                        </goals>
-                        <phase>prepare-package</phase>
-                    </execution>
-                </executions>
-                <configuration>
-                    <outputDirectory>../site</outputDirectory>
-                    <reportPlugins>
-                        <plugin>
-                            <groupId>org.apache.maven.plugins</groupId>
-                            <artifactId>maven-project-info-reports-plugin</artifactId>
-                            <version>2.3</version>
-                            <reportSets>
-                                <reportSet>
-                                    <reports>
-                                        <report>index</report>
-                                        <report>project-team</report>
-                                        <report>mailing-list</report>
-                                        <report>issue-tracking</report>
-                                        <report>license</report>
-                                        <report>scm</report>
-                                    </reports>
-                                </reportSet>
-                            </reportSets>
-                            <configuration>
-                                <dependencyDetailsEnabled>false</dependencyDetailsEnabled>
-                                <dependencyLocationsEnabled>false</dependencyLocationsEnabled>
-                            </configuration>
-                        </plugin>
-                        <plugin>
-                            <groupId>org.apache.maven.plugins</groupId>
-                            <artifactId>maven-javadoc-plugin</artifactId>
-                            <version>2.7</version>
-                        </plugin>
-                        <plugin>
-                            <groupId>org.apache.maven.plugins</groupId>
-                            <artifactId>maven-jxr-plugin</artifactId>
-                            <version>2.1</version>
-                            <configuration>
-                                <aggregate>true</aggregate>
-                            </configuration>
-                        </plugin>
-                    </reportPlugins>
-                </configuration>
-            </plugin>
-            <plugin>
-                <groupId>org.apache.maven.plugins</groupId>
-                <artifactId>maven-war-plugin</artifactId>
-                <version>2.3</version>
-                <configuration>
-                    <webResources>
-                        <resource>
-                            <directory>src/site/resources</directory>
-                            <targetPath>pages</targetPath>
-                        </resource>
-                        <resource>
-                            <directory>target/site</directory>
-                            <targetPath>pages</targetPath>
-                        </resource>
-                    </webResources>
-                </configuration>
-            </plugin>
-        </plugins>
-    </build>
+    <modules>
+        <module>general</module>
+        <module>releases</module>
+    </modules>
 
     <distributionManagement>
         <site>

Added: incubator/falcon/trunk/releases/0.3-incubating/pom.xml
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/releases/0.3-incubating/pom.xml?rev=1515884&view=auto
==============================================================================
--- incubator/falcon/trunk/releases/0.3-incubating/pom.xml (added)
+++ incubator/falcon/trunk/releases/0.3-incubating/pom.xml Tue Aug 20 17:04:54 2013
@@ -0,0 +1,61 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+  
+       http://www.apache.org/licenses/LICENSE-2.0
+  
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <groupId>org.apache.falcon</groupId>
+        <artifactId>falcon-website-releases</artifactId>
+        <version>0.1</version>
+    </parent>
+    <artifactId>falcon-website-0.3-incubating</artifactId>
+    <version>0.3-incubating</version>
+    <packaging>pom</packaging>
+
+    <name>Apache Falcon - Documentation</name>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-site-plugin</artifactId>
+                <version>3.2</version>
+                <dependencies>
+                    <dependency>
+                        <groupId>org.apache.maven.doxia</groupId>
+                        <artifactId>doxia-module-twiki</artifactId>
+                        <version>1.3</version>
+                    </dependency>
+                </dependencies>
+                <executions>
+                    <execution>
+                        <goals>
+                            <goal>site</goal>
+                        </goals>
+                        <phase>prepare-package</phase>
+                    </execution>
+                </executions>
+                <configuration>
+                    <outputDirectory>../../../site/0.3-incubating</outputDirectory>
+                </configuration>
+            </plugin>
+        </plugins>
+    </build>
+
+</project>

Added: incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/Architecture.png
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/Architecture.png?rev=1515884&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/Architecture.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/EntityDependency.png
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/EntityDependency.png?rev=1515884&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/EntityDependency.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/FeedSchedule.png
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/FeedSchedule.png?rev=1515884&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/FeedSchedule.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/PrismSetup.png
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/PrismSetup.png?rev=1515884&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/PrismSetup.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/ProcessSchedule.png
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/ProcessSchedule.png?rev=1515884&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/ProcessSchedule.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/apache-incubator-logo.png
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/apache-incubator-logo.png?rev=1515884&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/apache-incubator-logo.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/falcon-logo.png
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/falcon-logo.png?rev=1515884&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/falcon/trunk/releases/0.3-incubating/src/site/resources/images/falcon-logo.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: incubator/falcon/trunk/releases/0.3-incubating/src/site/site.xml
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/releases/0.3-incubating/src/site/site.xml?rev=1515884&view=auto
==============================================================================
--- incubator/falcon/trunk/releases/0.3-incubating/src/site/site.xml (added)
+++ incubator/falcon/trunk/releases/0.3-incubating/src/site/site.xml Tue Aug 20 17:04:54 2013
@@ -0,0 +1,71 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+  
+       http://www.apache.org/licenses/LICENSE-2.0
+  
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<project name="Falcon" xmlns="http://maven.apache.org/DECORATION/1.3.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/DECORATION/1.3.0 http://maven.apache.org/xsd/decoration-1.3.0.xsd">
+
+    <skin>
+        <groupId>org.apache.maven.skins</groupId>
+        <artifactId>maven-fluido-skin</artifactId>
+        <version>1.3.0</version>
+    </skin>
+
+    <custom>
+        <fluidoSkin>
+            <project>Apache Falcon</project>
+            <topBarEnabled>false</topBarEnabled>
+            <sideBarEnabled>false</sideBarEnabled>
+        </fluidoSkin>
+    </custom>
+
+    <bannerLeft>
+        <name>Falcon</name>
+        <src>./images/falcon-logo.png</src>
+        <href>http://falcon.incubator.apache.org/index.html</href>
+        <width>200px</width>
+        <height>45px</height>
+    </bannerLeft>
+
+    <bannerRight>
+        <name>Apache Incubator</name>
+        <src>./images/apache-incubator-logo.png</src>
+        <href>http://incubator.apache.org</href>
+    </bannerRight>
+
+    <publishDate position="none"/>
+    <version position="none"/>
+
+    <body>
+        <links>
+            <item name="0.3-incubating" title="0.3-incubating"
+                  href="http://www.apache.org/dist/incubator/falcon" position="none"/>
+            <item name="Released: 2013-08-15" title="Released: 2013-08-15"
+                  href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12314429&amp;version=12324423" position="none"/>
+        </links>
+
+        <breadcrumbs>
+            <item name="Home" title="Apache Falcon" href="index.html" position="none"/>
+        </breadcrumbs>
+
+        <footer>
+            © 2011-2012 The Apache Software Foundation. Apache Falcon, Falcon, Apache, the Apache feather logo,
+            and the Apache Falcon project logo are trademarks of The Apache Software Foundation.
+        </footer>
+    </body>
+</project>

Added: incubator/falcon/trunk/releases/0.3-incubating/src/site/twiki/docs/EntitySpecification.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/releases/0.3-incubating/src/site/twiki/docs/EntitySpecification.twiki?rev=1515884&view=auto
==============================================================================
--- incubator/falcon/trunk/releases/0.3-incubating/src/site/twiki/docs/EntitySpecification.twiki (added)
+++ incubator/falcon/trunk/releases/0.3-incubating/src/site/twiki/docs/EntitySpecification.twiki Tue Aug 20 17:04:54 2013
@@ -0,0 +1,481 @@
+---++ Contents
+   * <a href="#Cluster_Specification">Cluster Specification</a>
+   * <a href="#Feed_Specification">Feed Specification</a>
+   * <a href="#Process_Specification">Process Specification</a>
+   
+---++ Cluster Specification
+The cluster XSD specification is available here:
+A cluster defines the different interfaces used by Falcon, such as readonly, write, workflow and messaging.
+Feeds and processes that are on-boarded to Falcon reference a cluster by its name.
+
+The following tags are defined in a cluster.xml:
+<verbatim>
+<cluster colo="gs" description="" name="corp" xmlns="uri:falcon:cluster:0.1"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+</verbatim>
+The colo specifies the colo to which this cluster belongs, and name is the name of the cluster, which has to
+be unique.
+
+
+A cluster has various interfaces, as described below:
+<verbatim>
+    <interface type="readonly" endpoint="hftp://localhost:50010" version="0.20.2" />
+</verbatim>
+A readonly interface specifies the endpoint for Hadoop's HFTP protocol;
+it is used in the context of feed replication.
+
+<verbatim>
+<interface type="write" endpoint="hdfs://localhost:8020" version="0.20.2" />
+</verbatim>
+A write interface specifies the interface for writing to HDFS; its endpoint is the value of fs.default.name.
+Falcon uses this interface to write system data to HDFS, and feeds referencing this cluster are written to HDFS
+using the same write interface.
+
+<verbatim>
+<interface type="execute" endpoint="localhost:8021" version="0.20.2" />
+</verbatim>
+An execute interface specifies the interface for the job tracker; its endpoint is the value of mapred.job.tracker.
+Falcon uses this interface to submit the processes as jobs on the JobTracker defined here.
+
+<verbatim>
+<interface type="workflow" endpoint="http://localhost:11000/oozie/" version="3.1" />
+</verbatim>
+A workflow interface specifies the interface for the workflow engine; an example of its endpoint is the value of OOZIE_URL.
+Falcon uses this interface to schedule the processes referencing this cluster on the workflow engine defined here.
+
+<verbatim>
+<interface type="messaging" endpoint="tcp://localhost:61616?daemon=true" version="5.4.6" />
+</verbatim>
+A messaging interface specifies the interface for sending feed availability messages; its endpoint is the broker URL with a TCP address.
+
+A cluster has a list of locations defined:
+<verbatim>
+<location name="staging" path="/projects/falcon/staging" />
+</verbatim>
+A location has a name and a path; the name is the type of location, such as staging, temp or working,
+and the path is the HDFS path for that location.
+Falcon uses these locations for intermediate processing of entities in HDFS and hence
+should have read/write/execute permission on them.
+
+A cluster has a list of properties:
+key-value pairs that are propagated to the workflow engine.
+<verbatim>
+<property name="brokerImplClass" value="org.apache.activemq.ActiveMQConnectionFactory" />
+</verbatim>
+Ideally, the JMS implementation class name of the messaging engine (brokerImplClass)
+should be defined here.
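+
+Putting the fragments above together, a minimal cluster definition might look like the sketch below; the wrapper elements, endpoints and paths are illustrative assumptions, and the cluster XSD remains the authoritative reference:
+<verbatim>
+<!-- Illustrative sketch only: wrapper elements and values are assembled from the
+     fragments above; consult the cluster XSD for the authoritative structure. -->
+<cluster colo="gs" description="" name="corp" xmlns="uri:falcon:cluster:0.1"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+    <interfaces>
+        <interface type="readonly" endpoint="hftp://localhost:50010" version="0.20.2" />
+        <interface type="write" endpoint="hdfs://localhost:8020" version="0.20.2" />
+        <interface type="execute" endpoint="localhost:8021" version="0.20.2" />
+        <interface type="workflow" endpoint="http://localhost:11000/oozie/" version="3.1" />
+        <interface type="messaging" endpoint="tcp://localhost:61616?daemon=true" version="5.4.6" />
+    </interfaces>
+    <locations>
+        <!-- paths for temp and working are assumed values -->
+        <location name="staging" path="/projects/falcon/staging" />
+        <location name="temp" path="/tmp" />
+        <location name="working" path="/projects/falcon/working" />
+    </locations>
+    <properties>
+        <property name="brokerImplClass" value="org.apache.activemq.ActiveMQConnectionFactory" />
+    </properties>
+</cluster>
+</verbatim>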
+
+---++ Feed Specification
+The Feed XSD specification is available here.
+A feed defines various attributes of a feed, such as feed location, frequency, late-arrival handling
+and retention policies.
+A feed can be scheduled on a cluster; once a feed is scheduled, its retention and replication processes are triggered on that cluster.
+<verbatim>
+<feed description="clicks log" name="clicks" xmlns="uri:falcon:feed:0.1"
+xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+</verbatim>
+A feed should have a unique name, and this name is referenced by processes as an input or output feed.
+
+<verbatim>
+   <partitions>
+        <partition name="country" />
+        <partition name="cluster" />
+    </partitions>
+</verbatim>
+A feed can define multiple partitions; if a referenced cluster defines partitions, then the number of partitions in the feed has to be equal to or more than the number of cluster partitions.
+
+<verbatim>
+    <groups>online,bi</groups>
+</verbatim>
+A feed specifies a comma-separated list of groups. A group is a logical grouping of feeds, and a group is said to be
+available if all the feeds belonging to that group are available. The frequency of all the feeds which belong to the same group
+must be the same.
+
+<verbatim>
+    <availabilityFlag>_SUCCESS</availabilityFlag>
+</verbatim>
+An availabilityFlag specifies the name of a file which, when present/created in a feed's data directory,
+marks the feed as available, e.g. _SUCCESS. If this element is omitted, Falcon considers the presence of the feed's
+data directory itself as feed availability.
+
+<verbatim>
+    <frequency>minutes(20)</frequency>
+</verbatim>
+A feed has a frequency which specifies how often this feed is generated,
+e.g. it can be generated every hour, every 5 minutes, daily, weekly, etc.
+Valid frequency types for a feed are minutes, hours, days and months. The values can be negative, zero or positive.
+
+<verbatim>
+    <late-arrival cut-off="hours(6)" />
+</verbatim>
+A late-arrival specifies the cut-off period till which the feed is expected to arrive late; it should be honored by processes referring to it as an input feed by rerunning the instances in case the data arrives late within the cut-off period.
+The cut-off period is specified by the expression frequency(times), e.g. if the feed can arrive late
+by up to 8 hours then the late-arrival cut-off="hours(8)"
+
+<verbatim>
+        <clusters>
+        <cluster name="test-cluster">
+            <validity start="2012-07-20T03:00Z" end="2099-07-16T00:00Z"/>
+            <retention limit="days(10)" action="delete"/>
+            <locations>
+                <location type="data" path="/hdfsDataLocation/${YEAR}/${MONTH}/${DAY}/${HOUR}/${MINUTE}"/>
+                <location type="stats" path="/projects/falcon/clicksStats" />
+                <location type="meta" path="/projects/falcon/clicksMetaData" />
+            </locations>
+        </cluster>
+..... more clusters </clusters>
+</verbatim>
+A feed references a cluster by its name; before submitting a feed, all the referenced clusters should be submitted to Falcon.
+type: specifies whether the referenced cluster should be treated as a source or target for a feed. A feed can have multiple source and target clusters. If the type of cluster is not specified then the cluster is not considered for replication.
+Validity of a feed on a cluster specifies the duration for which this feed is valid on this cluster.
+Retention specifies how long the feed is retained on this cluster and the action to be taken on the feed after the expiry of the retention period.
+The retention limit is specified by the expression frequency(times), e.g. if the feed should be retained for at least 6 hours then retention's limit="hours(6)".
+The field partitionExp contains partition tags. The number of partition tags has to be equal to the number of partitions specified in the feed schema. A partition tag can be a wildcard (*), a static string or an expression. At least one of the strings has to be an expression.
+Location specifies where the feed is available on this cluster. This is an optional parameter and the path can be the same as or different from the global locations tag value (which is mentioned outside the clusters tag). This tag gives the user the flexibility to have the feed at different locations on different clusters. If this attribute is missing then the default global location is picked from the feed definition. Also, the individual location tags data, stats and meta are optional.
+
+<verbatim>
+ <location type="data" path="/projects/falcon/clicks" />
+ <location type="stats" path="/projects/falcon/clicksStats" />
+ <location type="meta" path="/projects/falcon/clicksMetaData" />
+</verbatim>
+A location tag specifies the type of location, like data, meta or stats, and the corresponding paths for them.
+A feed should at least define the location for type data, which specifies the HDFS path pattern where the feed is generated
+periodically, e.g. type="data" path="/projects/TrafficHourly/${YEAR}-${MONTH}-${DAY}/traffic".
+The granularity of the date pattern in the path should be at least that of the frequency of the feed.
+The other location types which are supported are stats and meta; if a process references a feed, then the meta and stats
+paths are available as properties in the process.
+
+<verbatim>
+    <properties>
+        <property name="tmpFeedPath" value="tmpFeedPathValue" />
+        <property name="field2" value="value2" />
+        <property name="queueName" value="hadoopQueue"/>
+        <property name="jobPriority" value="VERY_HIGH"/>
+    </properties>
+</verbatim>
+Key-value pairs that are propagated to the workflow engine. "queueName" and "jobPriority" are special properties available to the user to specify the Hadoop job queue and priority; the same values are used by Falcon's launcher job.
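+
+As a sketch, the fragments above combine into a feed definition along the following lines; names, paths and values are illustrative, and elements required by the feed XSD but not discussed above (for example ACL and schema) are omitted here:
+<verbatim>
+<!-- Partial, illustrative sketch assembled from the fragments above;
+     the feed XSD is the authoritative reference. -->
+<feed description="clicks log" name="clicks" xmlns="uri:falcon:feed:0.1"
+xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+    <partitions>
+        <partition name="country" />
+        <partition name="cluster" />
+    </partitions>
+    <groups>online,bi</groups>
+    <availabilityFlag>_SUCCESS</availabilityFlag>
+    <frequency>minutes(20)</frequency>
+    <late-arrival cut-off="hours(6)" />
+
+    <clusters>
+        <cluster name="test-cluster">
+            <validity start="2012-07-20T03:00Z" end="2099-07-16T00:00Z"/>
+            <retention limit="days(10)" action="delete"/>
+        </cluster>
+    </clusters>
+
+    <locations>
+        <!-- illustrative data path with a date pattern matching the minutes(20) frequency -->
+        <location type="data" path="/projects/falcon/clicks/${YEAR}-${MONTH}-${DAY}-${HOUR}-${MINUTE}" />
+        <location type="stats" path="/projects/falcon/clicksStats" />
+        <location type="meta" path="/projects/falcon/clicksMetaData" />
+    </locations>
+
+    <properties>
+        <property name="queueName" value="hadoopQueue"/>
+        <property name="jobPriority" value="VERY_HIGH"/>
+    </properties>
+</feed>
+</verbatim>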
+ 
+---++ Process Specification
+A process defines the configuration for a workflow. A workflow is a directed acyclic graph (DAG) which defines the job for the workflow engine. A process definition defines the configurations required to run the workflow job. For example, a process defines the frequency at which the workflow should run, the clusters on which the workflow should run, the inputs and outputs for the workflow, how workflow failures should be handled, how late inputs should be handled, and so on.
+
+The different details of a process are:
+---++++ Name
+Each process is identified with a unique name.
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+</process>
+</verbatim>
+
+---++++ Cluster
+The cluster on which the workflow should run. A process should contain one or more clusters. The cluster definition for the cluster name gives the endpoints for workflow execution: name node, job tracker, messaging and so on. Each cluster in turn has a validity mentioned, which tells the times between which the job should run on that specified cluster.
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+   <clusters>
+        <cluster name="test-cluster1">
+            <validity start="2012-12-21T08:15Z" end="2100-01-01T00:00Z"/>
+        </cluster>
+        <cluster name="test-cluster2">
+            <validity start="2012-12-21T08:15Z" end="2100-01-01T00:00Z"/>
+        </cluster>
+       ....
+       ....
+    </clusters>
+
+...
+</process>
+</verbatim>
+
+---++++ Parallel
+Parallel defines how many instances of the workflow can run concurrently. It should be a positive integer > 0. For example, a concurrency of 1 ensures that only one instance of the workflow can run at a time. The next instance will start only after the running instance completes.
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+   <concurrency>[concurrency]</concurrency>
+...
+</process>
+</verbatim>
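+
+Example (illustrative, using the concurrency element shown in the syntax above): a process that allows at most two instances to run at a time would declare:
+<verbatim>
+<process name="sample-process">
+...
+    <!-- at most two instances run concurrently -->
+    <concurrency>2</concurrency>
+...
+</process>
+</verbatim>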
+
+---++++ Order
+Order defines the order in which the ready instances are picked up. The possible values are FIFO (First In First Out), LIFO (Last In First Out), and ONLYLAST (Last Only).
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+   <order>[order]</order>
+...
+</process>
+</verbatim>
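+
+Example (illustrative): to process the oldest ready instance first, declare:
+<verbatim>
+<process name="sample-process">
+...
+    <!-- oldest ready instance is picked up first -->
+    <order>FIFO</order>
+...
+</process>
+</verbatim>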
+
+---++++ Timeout
+An optional timeout specifies the maximum time an instance waits for a dataset before being killed by the workflow engine; a timeout is specified like a frequency.
+If a timeout is not specified, Falcon computes a default timeout for a process based on its frequency: six times the frequency of the process, or 30 minutes if the computed timeout is less than 30 minutes.
+<verbatim>
+<process name="[process name]">
+...
+   <timeout>[timeunit]([frequency])</timeout>
+...
+</process>
+</verbatim>
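+
+Example (illustrative): a process with frequency minutes(10) and no explicit timeout gets a default timeout of 60 minutes (six times its frequency, which is above the 30-minute floor). To wait at most two hours instead, one could specify:
+<verbatim>
+<process name="sample-process">
+...
+    <!-- wait at most two hours for input data before the instance is killed -->
+    <timeout>hours(2)</timeout>
+...
+</process>
+</verbatim>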
+
+---++++ Frequency
+Frequency defines how frequently the workflow job should run. For example, hours(1) defines the frequency as hourly, days(7) defines weekly frequency. The values for timeunit can be minutes/hours/days/months and the frequency number should be a positive integer > 0. 
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+   <frequency>[timeunit]([frequency])</frequency>
+...
+</process>
+</verbatim>
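+
+Example (illustrative): a process that runs every four hours would declare:
+<verbatim>
+<process name="sample-process">
+...
+    <frequency>hours(4)</frequency>
+...
+</process>
+</verbatim>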
+
+---++++ Validity
+Validity defines how long the workflow should run. It has 3 components - start time, end time and timezone. Start time and end time are timestamps defined in yyyy-MM-dd'T'HH:mm'Z' format and should always be in UTC. Timezone is used to compute the next instances starting from start time. The workflow will start at start time and end before end time specified on a given cluster. So, there will not be a workflow instance at end time.
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+   <validity start=[start time] end=[end time] timezone=[timezone]/>
+...
+</process>
+</verbatim>
+
+Examples:
+<verbatim>
+<process name="sample-process">
+...
+    <frequency>days(1)</frequency>
+    <validity start="2012-01-01T00:40Z" end="2012-04-01T00:00" timezone="UTC"/>
+...
+</process>
+</verbatim>
+The daily workflow will start on Jan 1st 2012 at 00:40 UTC and run once every day at 00:40 UTC; the last instance will run on March 31st 2012 at 00:40 UTC.
+                                                                                               
+<verbatim>
+<process name="sample-process">
+...
+    <frequency>hours(1)</frequency>
+    <validity start="2012-03-11T08:40Z" end="2012-03-12T08:00" timezone="PST8PDT"/>
+...
+</process>
+</verbatim>
+The hourly workflow will start on March 11th 2012 at 00:40 PST; the next instances will be at 01:40 PST, 03:40 PDT, 04:40 PDT and so on till 23:40 PDT. So there will be just 23 instances of the workflow for March 11th 2012 because of the DST switch.
+
+---++++ Inputs
+Inputs define the input data for the workflow. The workflow job will start executing only after the schedule time and when all the inputs are available. There can be 0 or more inputs, and each input maps to a feed. The path and frequency of input data are picked up from the feed definition. Each input should also define start and end instances in terms of [[FalconDocumentation][EL expressions]] and can optionally specify the specific partition of the input that the workflow requires. The components in the partition should be a subset of the partitions defined in the feed.
+
+For each input, Falcon will create a property with the input name that contains the comma-separated list of input paths. This property can be used in workflow actions like pig scripts and so on.
+
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+    <inputs>
+        <input name=[input name] feed=[feed name] start=[start el] end=[end el] partition=[partition]/>
+        ...
+    </inputs>
+...
+</process>
+</verbatim>
+
+Example:
+<verbatim>
+<feed name="feed1">
+...
+    <partition name="isFraud"/>
+    <partition name="country"/>
+    <frequency>hours(1)</frequency>
+    <locations>
+        <location type="data" path="/projects/bootcamp/feed1/${YEAR}-${MONTH}-${DAY}-${HOUR}"/>
+        ...
+    </locations>
+...
+</feed>
+<process name="sample-process">
+...
+    <inputs>
+        <input name="input1" feed="feed1" start="today(0,0)" end="today(1,0)" partition="*/US"/>
+        ...
+    </inputs>
+...
+</process>
+</verbatim>
+The input for the workflow is an hourly feed and takes the 0th and 1st hour data of today (the day when the workflow runs). If the workflow is running for 2012-03-01T06:40Z, the inputs are /projects/bootcamp/feed1/2012-03-01-00/*/US and /projects/bootcamp/feed1/2012-03-01-01/*/US. The property for this input is
+input1=/projects/bootcamp/feed1/2012-03-01-00/*/US,/projects/bootcamp/feed1/2012-03-01-01/*/US
+
+---++++ Optional Inputs
+A user can mark one or more inputs as optional. In such cases the job does not wait on the inputs that are marked optional; if they are present it considers them, otherwise it continues with the compulsory ones.
+Example:
+<verbatim>
+<feed name="feed1">
+...
+    <partition name="isFraud"/>
+    <partition name="country"/>
+    <frequency>hours(1)</frequency>
+    <locations>
+        <location type="data" path="/projects/bootcamp/feed1/${YEAR}-${MONTH}-${DAY}-${HOUR}"/>
+        ...
+    </locations>
+...
+</feed>
+<process name="sample-process">
+...
+    <inputs>
+        <input name="input1" feed="feed1" start="today(0,0)" end="today(1,0)" partition="*/US"/>
+        <input name="input2" feed="feed2" start="today(0,0)" end="today(1,0)" partition="*/UK" optional="true" />
+        ...
+    </inputs>
+...
+</process>
+</verbatim>
+
+
+---++++ Outputs
+Outputs define the output data that is generated by the workflow. A process can define 0 or more outputs. Each output is mapped to a feed, and the output path is picked up from the feed definition. The output instance that should be generated is specified in terms of an [[FalconDocumentation][EL expression]].
+
+For each output, Falcon creates a property with the output name that contains the path of the output data. This can be used in the workflow to store data at that path.
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+    <outputs>
+        <output name=[output name] feed=[feed name] instance=[instance el]/>
+        ...
+    </outputs>
+...
+</process>
+</verbatim>
+
+Example:
+<verbatim>
+<feed name="feed2">
+...
+    <frequency>days(1)</frequency>
+    <locations>
+        <location type="data" path="/projects/bootcamp/feed2/${YEAR}-${MONTH}-${DAY}"/>
+        ...
+    </locations>
+...
+</feed>
+<process name="sample-process">
+...
+    <outputs>
+        <output name="output1" feed="feed2" instance="today(0,0)"/>
+        ...
+    </outputs>
+...
+</process>
+</verbatim>
+The output of the workflow is the feed instance for today. If the workflow is running for 2012-03-01T06:40Z, the workflow generates the output /projects/bootcamp/feed2/2012-03-01. The property for this output that is available to the workflow is:
+output1=/projects/bootcamp/feed2/2012-03-01
+
+---++++ Properties
+The properties are key-value pairs that are passed to the workflow. These properties are optional and can be used to parameterize the workflow.
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+    <properties>
+        <property name=[key] value=[value]/>
+        ...
+    </properties>
+...
+</process>
+</verbatim>
+
+queueName and jobPriority are special properties which, when present, are used by Falcon's launcher job; the same properties are also available in the workflow and can be propagated to the Pig or M/R job.
+<verbatim>
+        <property name="queueName" value="hadoopQueue"/>
+        <property name="jobPriority" value="VERY_HIGH"/>
+</verbatim>
+---++++ Workflow
+The workflow defines the workflow engine that should be used and the path to the workflow on HDFS. The workflow definition on HDFS contains the actual job that should run and it should conform to the workflow specification of the engine specified. The libraries required by the workflow should be in the lib folder inside the workflow path.
+
+The properties defined in the cluster and the cluster properties (nameNode and jobTracker) will also be available to the workflow.
+
+As of now, only the Oozie workflow engine is supported. Refer to the Oozie [[http://incubator.apache.org/oozie/overview.html][workflow overview]] and [[http://incubator.apache.org/oozie/docs/3.1.3/docs/WorkflowFunctionalSpec.html][workflow specification]] for details.
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+    <workflow engine=[workflow engine] path=[workflow path]/>
+...
+</process>
+</verbatim>
+
+Example:
+<verbatim>
+<process name="sample-process">
+...
+    <workflow engine="oozie" path="/projects/bootcamp/workflow"/>
+...
+</process>
+</verbatim>
+This defines the workflow engine to be Oozie, and the workflow XML is defined at /projects/bootcamp/workflow/workflow.xml. The libraries are at /projects/bootcamp/workflow/lib.
+
+---++++ Retry
+The retry policy defines how workflow failures should be handled. Two retry policies are defined: backoff and exp-backoff (exponential backoff). Depending on the delay and number of attempts, the workflow is retried after specific intervals.
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+    <retry policy=[retry policy] delay=[retry delay] attempts=[retry attempts]/>
+...
+</process>
+</verbatim>
+
+Examples:
+<verbatim>
+<process name="sample-process">
+...
+    <retry policy="backoff" delay="minutes(10)" attempts="3"/>
+...
+</process>
+</verbatim>
+The workflow is re-tried after 10 mins, 20 mins and 30 mins. With exponential backoff, the workflow will be re-tried after 10 mins, 20 mins and 40 mins.
+
+---++++ Late data
+Late data handling defines how late data should be handled. Each feed is defined with a late cut-off value which specifies the time till which late data is valid. For example, a late cut-off of hours(6) means that data for the nth hour can get delayed by up to 6 hours. The late data specification in the process defines how this late data is handled.
+
+The late data policy defines how frequently a check is done to detect late data. The policies supported are: backoff, exp-backoff (exponential backoff) and final (at the feed's late cut-off). The policy along with the delay defines the interval at which the late data check is done.
+
+Late input specification for each input defines the workflow that should run when late data is detected for that input. 
+
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+    <late-process policy=[late handling policy] delay=[delay]>
+        <late-input input=[input name] workflow-path=[workflow path]/>
+        ...
+    </late-process>
+...
+</process>
+</verbatim>
+
+Example:
+<verbatim>
+<feed name="feed1">
+...
+    <frequency>hours(1)</frequency>
+    <late-arrival cut-off="hours(6)"/>
+...
+</feed>
+<process name="sample-process">
+...
+    <inputs>
+        <input name="input1" feed="feed1" start="today(0,0)" end="today(1,0)"/>
+        ...
+    </inputs>
+    <late-process policy="final">
+        <late-input input="input1" workflow-path="/projects/bootcamp/workflow/lateinput1" />
+        ...
+    </late-process>
+...
+</process>
+</verbatim>
+This late handling specifies that late data detection should run at the feed's late cut-off, which is 6 hours in this case. If there is late data, Falcon should run the workflow specified at /projects/bootcamp/workflow/lateinput1/workflow.xml
\ No newline at end of file

Added: incubator/falcon/trunk/releases/0.3-incubating/src/site/twiki/docs/FalconArchitecture.twiki
URL: http://svn.apache.org/viewvc/incubator/falcon/trunk/releases/0.3-incubating/src/site/twiki/docs/FalconArchitecture.twiki?rev=1515884&view=auto
==============================================================================
--- incubator/falcon/trunk/releases/0.3-incubating/src/site/twiki/docs/FalconArchitecture.twiki (added)
+++ incubator/falcon/trunk/releases/0.3-incubating/src/site/twiki/docs/FalconArchitecture.twiki Tue Aug 20 17:04:54 2013
@@ -0,0 +1,615 @@
+---++ Contents
+   * <a href="#Architecture">Architecture</a>
+   * <a href="#Control_flow">Control flow</a>
+   * <a href="#Modes_Of_Deployment">Modes Of Deployment</a>
+   * <a href="#Entity_Management_actions">Entity Management actions</a>
+   * <a href="#Instance_Management_actions">Instance Management actions</a>
+   * <a href="#Retention">Retention</a>
+   * <a href="#Replication">Replication</a>
+   * <a href="#Cross_entity_validations">Cross entity validations</a>
+   * <a href="#Updating_process_and_feed_definition">Updating process and feed definition</a>
+   * <a href="#Handling_late_input_data">Handling late input data</a>
+   * <a href="#Idempotency">Idempotency</a>
+   * <a href="#Alerting_and_Monitoring">Alerting and Monitoring</a>
+   * <a href="#Falcon_EL_Expressions">Falcon EL Expressions</a>
+
+---++ Architecture
+---+++ Introduction
+Falcon is a feed and process management platform over Hadoop. Falcon essentially transforms users' feed
+and process configurations into repeated actions through a standard workflow engine (Apache Oozie). Falcon
+by itself doesn't do any heavy lifting. All the functions and workflow state management requirements are
+delegated to the workflow scheduler. The only thing that Falcon maintains is the dependencies and relationships
+between these entities. This is adequate to provide an integrated and seamless experience to developers using
+the Falcon platform.
+
+---+++ Falcon Architecture - Overview
+<img src="../images/Architecture.png" height="400" width="600" />
+
+---+++ Scheduler
+The Falcon system has picked Apache Oozie as the default scheduler. However, the system is open to integration with
+other schedulers. A lot of the data processing in Hadoop requires scheduling to be based on both data availability
+as well as time. Apache Oozie currently supports these capabilities off the shelf, hence the choice.
+
+---+++ Control flow
+Though the actual responsibility of the workflow is with the scheduler (Oozie), Falcon remains in the
+execution path by subscribing to messages that each of the workflows may generate. When Falcon generates a
+workflow in Oozie, it does so after instrumenting the workflow with additional steps, which include messaging
+via JMS. The Falcon system itself subscribes to these control messages and can perform actions such as retries,
+handling late input arrival, etc.
+
+
+---++++ Feed Schedule flow
+<img src="../images/FeedSchedule.png" height="400" width="600" />
+
+---++++ Process Schedule flow
+<img src="../images/ProcessSchedule.png" height="400" width="600" />
+
+
+
+---++ Modes Of Deployment
+There are two basic components of a Falcon setup: Falcon Prism and Falcon Server.
+As the name suggests, Falcon Prism splits the requests it gets across the Falcon Servers. More details below:
+
+---+++ Stand Alone Mode
+Stand alone mode is useful when the Hadoop jobs and relevant data processing involve only one Hadoop cluster. In this mode there is a single Falcon server that contacts Oozie to schedule jobs on Hadoop. All the process/feed requests like submit, schedule, suspend and kill are sent to this server only. To run in this mode one should use the Falcon build for standalone mode, or build using the standalone option if building from source.
+
+---+++ Distributed Mode
+Distributed mode is the mode which you might be using most of the time. This is for organizations which have multiple Hadoop clusters, and multiple workflow schedulers to handle them. Here we have 2 components: Prism and Server. Both Prism and Server have their own setup (runtime and startup properties) and their own config locations.
+In this mode Prism acts as a contact point for the Falcon servers. Below are the requests that can be sent to Prism and Server in this mode:
+
+ Prism: submit, schedule, submitAndSchedule, Suspend, Resume, Kill, instance management
+ Server: schedule, suspend, resume, instance management
+ 
+As observed above, submit and kill are kept exclusively as Prism operations to keep all the config stores in sync and to support the idempotency feature.
+Requests may also be sent from Prism but directed to a specific server using the "-colo" option from the CLI, or by appending the same to the web request if using the API.
+
+When a cluster is submitted it is by default sent to all the servers configured in the Prism.
+When a feed or process is submitted/scheduled, the request is only sent to the servers specified in the feed/process definition. Servers are referenced in the feed/process via CLUSTER tags in the XML definition.
+
+---++++ Prism Setup
+<img src="../images/PrismSetup.png" height="400" width="600" />
+ 
+---+++ Configuration Store
+The configuration store is a file-system-based store that the Falcon system maintains, where the entity definitions
+are stored. The file system used for the configuration store can either be a local file system or HDFS.
+It is recommended that the store be maintained outside of the system where Falcon is deployed. This is needed
+for handling issues relating to disk failures or other permanent failures of the system where Falcon is deployed.
+The configuration store also maintains an archive location where prior versions of the configuration or deleted
+configurations are kept. They are never accessed by the Falcon system and merely serve to track
+historical changes to the entity definitions.
+
+---+++ Atomic Actions
+Oftentimes, when Falcon performs entity management actions, it may need to do several individual actions.
+If one of the actions were to fail, the system could be left in an inconsistent state. To avoid this, all
+individual operations performed are recorded into a transaction journal. This journal is then used to undo
+the overall user action. In some cases, it is not possible to undo the action. In such cases, Falcon attempts
+to keep the system in a consistent state.
+
+---++ Entity Management actions
+
+---+++ Submit
+The entity submit action allows a new cluster/feed/process to be set up within Falcon. A submitted entity is not
+scheduled, meaning it simply resides in the configuration store within Falcon. Besides validating against
+the schema for the corresponding entity being added, the Falcon system also performs inter-field
+validations within the configuration file and validations across dependent entities.
+
+---+++ List
+Lists all the entities within the Falcon config store for the entity type being requested. This will include
+both scheduled and submitted entity configurations.
+
+---+++ Dependency
+Returns the dependencies of the requested entity. The dependency list includes both forward and backward
+dependencies (depends on & is dependent on). For example, a feed would show the processes that are dependent on the
+feed and the clusters that it depends on.
+
+---+++ Schedule
+Feeds or processes that are already submitted and present in the config store can be scheduled. Upon schedule,
+the Falcon system wraps the required repeatable action as a bundle of Oozie coordinators and executes them on the
+Oozie scheduler. (It is possible to extend Falcon to use an alternate workflow engine other than Oozie.)
+Falcon overrides the workflow instance's external id in Oozie to reflect the process/feed and the nominal
+time. This external id can then be used for instance management functions.
+
+---+++ Suspend
+This action is applicable only to a scheduled entity. It triggers suspend on the Oozie bundle that was
+scheduled earlier through the schedule function. No further instances are executed on a suspended process/feed.
+
+---+++ Resume
+Puts a suspended process/feed back to active, which in turn resumes the applicable Oozie bundle.
+
+---+++ Status
+Gets the current status of the entity.
+
+---+++ Definition
+Gets the current entity definition as stored in the configuration store. Please note that user documentation
+(comments) within the entity definition will not be retained.
+
+---+++ Delete
+The delete operation on an entity removes any scheduled activity on the workflow engine, besides removing the
+entity from the Falcon configuration store. The delete operation on an entity succeeds only if there are
+no other entities dependent on the entity being deleted.
+
+---+++ Update
+The update operation allows an already submitted/scheduled entity to be updated. Cluster update is currently
+not allowed. A feed update can cause a cascading update to all the processes already scheduled. The following
+set of actions is performed in Oozie to realize an update:
+
+   * Suspend the previously scheduled Oozie coordinator. This prevents any new action from being triggered.
+   * Update the coordinator to set the end time to "now"
+   * Resume the suspended coordinators
+   * Schedule as per the new process/feed definition with the start time as "now"
+
+---++ Instance Management actions
+
+
+The Instance Manager gives the user the option to control individual instances of a process based on their instance start time (the start time of that instance). The start time needs to be given in standard TZ format. Example: 01 Jan 2012 01:00 => 2012-01-01T01:00Z
+
+All the instance management operations (except running) allow a single instance or a list of instances within a date range to be acted upon. Make sure the dates are valid, i.e. within the start and end time of the process itself.
+
+For every query in instance management, the process name is a compulsory parameter.
+
+The parameters -start and -end are used to specify the date range within which you want the instances to be operated upon.
+
+-start: using only "-start" without "-end" will perform the desired operation only on the single instance given by the date passed with -start.
+
+-end: "-end" can only be used along with "-start". It corresponds to the end date up to which instances need to be operated upon.
+
+   * 1. *status*: the -status option via CLI can be used to get the status of a single instance or multiple instances. If the instance is not yet materialized but is within the process validity range, WAITING is returned as the state. Along with the status of the instance, the log location is also returned.
+
+   * 2. *running*: -running returns all the running instances of the process. It does not take any start or end dates but simply returns all the instances that are in the RUNNING state at that given time.
+
+   * 3. *rerun*: -rerun is the option that you will use most often in instance management. As the name suggests, this option is used to rerun a particular instance or instances of the process. The rerun option reruns the parent workflow for the instance, which in turn reruns all the sub-workflows for it. This option is valid for any instance in a terminal state, i.e. KILLED, SUCCEEDED or FAILED. The user can also set properties in the request that control which types of actions should be rerun, e.g. only failed actions, all actions, etc. These properties depend on the workflow engine being used along with Falcon.
+
+   * 4. *suspend*: -suspend is used to suspend an instance or instances of the given process. This option pauses the parent workflow at the state it was in at the time of execution of this command. This command is similar in functionality to the SUSPEND process command, the only difference being that SUSPEND process suspends all instances whereas suspend instance suspends only that instance or the instances in the given range.
+
+   * 5. *resume*: the -resume option is used to resume any instance that is in the SUSPENDED state. (Note: due to a bug in Oozie, the -resume option in some cases may not actually resume the suspended instance/instances.)
+   * 6. *kill*: the -kill option can be used to kill an instance or multiple instances.
+
+
+In all cases where your request is syntactically correct but logically not, the instance / instances are returned with the same status as before. For example, trying to resume a KILLED / SUCCEEDED instance will return the instance with status KILLED / SUCCEEDED without actually performing any operation, because only an instance in the SUSPENDED state can be resumed. The same holds for rerunning a SUSPENDED or RUNNING instance, and so on.
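+
+As an illustration, instance operations might be invoked from the CLI as follows (a sketch only; the process name and dates are placeholders, and the exact option set may differ between releases):
+<verbatim>
+falcon instance -type process -name sample-process -status -start 2012-01-01T01:00Z -end 2012-01-01T05:00Z
+falcon instance -type process -name sample-process -rerun -start 2012-01-01T01:00Z
+falcon instance -type process -name sample-process -running
+</verbatim>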
+
+---++ Retention
+In coherence with its feed lifecycle management philosophy, Falcon allows the user to retain data in the system
+for a specific period of time for a scheduled feed. The user can specify the retention period in the respective
+feed/data xml in the following manner for each cluster the feed belongs to:
+<verbatim>
+<clusters>
+        <cluster name="corp" type="source">
+            <validity start="2012-01-30T00:00Z" end="2013-03-31T23:59Z"
+                      timezone="UTC" />
+            <retention limit="hours(10)" action="delete" /> 
+        </cluster>
+ </clusters> 
+</verbatim>
+
+The 'limit' attribute can be specified in units of minutes/hours/days/months, with a corresponding numeric value
+attached to it. It instructs the system to retain data spanning backwards in time from the current moment for the
+period specified in the attribute. Any data beyond the limit (past/future) is erased from the system.
+
+---+++ Example:
+If the retention period is 10 hours, and the policy kicks in at time 't', the data retained by the system is essentially
+the data falling within [t-10h, t]. Any data in the ranges [-∞, t-10h) and (t, ∞] is removed from the system.
+
+The 'action' attribute can take the values DELETE/ARCHIVE. Based on this value, the data eligible for removal is either
+deleted or archived.
+
+---+++ NOTE: Falcon 0.1/0.2 releases support Delete operation only
+
+---+++ When does retention policy come into play, aka when is retention really performed?
+
+Retention policy in Falcon kicks off on the basis of the time value specified by the user. Here are the basic rules:
+
+   * If the retention policy specified is less than 24 hours: In this event, the retention policy automatically kicks off every 6 hours.
+   * If the retention policy specified is more than 24 hours: In this event, the retention policy automatically kicks off every 24 hours.
+   * As soon as a feed is successfully scheduled: the retention policy is triggered immediately regardless of the current timestamp/state of the system.
+
+Relation between feed path and retention policy: Retention policy for a particular scheduled feed applies only to the eligible feed path
+specified in the feed xml. Any other paths that do not conform to the specified feed path are left unaffected by the retention policy.
+
+---++ Replication
+Falcon's feed lifecycle management also supports feed replication across different clusters out of the box.
+Multiple source clusters and target clusters can be defined in the feed definition. Falcon replicates the data across
+clusters using Hadoop's distcp version 2 whenever a feed is scheduled.
+
+The frequency at which the data is replicated is governed by the frequency specified in the feed definition.
+Ideally, the feed's data path should have the same granularity as the frequency of the feed, i.e. if the frequency of the feed is hours(3), then the data path should go down to the level /${YEAR}/${MONTH}/${DAY}/${HOUR}.
+<verbatim>
+    <clusters>
+        <cluster name="sourceCluster1" type="source" partition="${cluster.name}" delay="minutes(40)">
+            <validity start="2021-11-01T00:00Z" end="2021-12-31T00:00Z"/>
+        </cluster>
+        <cluster name="sourceCluster2" type="source" partition="COUNTRY/${cluster.name}">
+            <validity start="2021-11-01T00:00Z" end="2021-12-31T00:00Z"/>
+        </cluster>
+        <cluster name="backupCluster" type="target">
+            <validity start="2011-11-01T00:00Z" end="2011-12-31T00:00Z"/>
+        </cluster>
+    </clusters>
+</verbatim>
+
+If more than one source cluster is defined, then a partition expression is compulsory; a partition can also contain a constant.
+The expression is required to avoid copying data from different source locations to the same target location. Also, only the data in the partition is considered for replication, if it is present. The number of partitions defined in the cluster should be less than or equal to the number of partitions declared in the feed definition.
+
+Falcon uses a pull-based replication mechanism: in every target cluster, for a given source cluster, a coordinator is scheduled which pulls the data from the source cluster using distcp. So in the above example, 2 coordinators are scheduled in backupCluster, one which pulls the data from sourceCluster1 and another from sourceCluster2.
+Also, for every feed instance that is replicated, Falcon sends a JMS message on success or failure of the replication instance.
+
+Replication can be scheduled with a past start date; the time frame considered for replication is the overlapping window of the start and end times of the source and target clusters. For example, if s1 and e1 are the start and end times of the source cluster,
+and s2 and e2 those of the target cluster, then the coordinator is scheduled in the target cluster with start time max(s1,s2) and end time min(e1,e2).
+
+A feed can also optionally specify a delay for the replication instance in the cluster tag; the delay governs how much the replication instance is delayed. If the frequency of the feed is hours(2) and the delay is hours(1), then the replication instance will run every 2 hours and replicate data with an offset of 1 hour, i.e. at
+09:00 UTC the feed instance eligible for replication is the one of 08:00, at 11:00 UTC the feed instance of 10:00 UTC is eligible, and so on.
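+
+For reference, the delay is an attribute on the feed's cluster element, along the following lines (a sketch only, assuming a feed frequency of hours(2) and re-using the cluster name from the example above):
+<verbatim>
+        <cluster name="sourceCluster1" type="source" partition="${cluster.name}" delay="hours(1)">
+            <validity start="2021-11-01T00:00Z" end="2021-12-31T00:00Z"/>
+        </cluster>
+</verbatim>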
+
+---+++ Where is the feed path defined?
+
+It's defined in the feed xml within the location tag.
+
+*Example:*
+<verbatim>
+<locations>
+        <location type="data" path="/retention/testFolders/${YEAR}-${MONTH}-${DAY}" />
+</locations>
+</verbatim>
+
+Now, if the above path contains folders in the following fashion:
+
+<verbatim>
+/retention/testFolders/${YEAR}-${MONTH}-${DAY}
+/retention/testFolders/${YEAR}-${MONTH}/someFolder
+</verbatim>
+
+The feed retention policy would only act on the former and not the latter.
+
+Users may choose to override the feed path specific to a cluster, so every cluster
+may have a different feed path.
+*Example:*
+<verbatim>
+<clusters>
+        <cluster name="testCluster" type="source">
+            <validity start="2011-11-01T00:00Z" end="2011-12-31T00:00Z"/>
+       		<locations>
+        		<location type="data" path="/projects/falcon/clicks/${YEAR}-${MONTH}-${DAY}" />
+        		<location type="stats" path="/projects/falcon/clicksStats/${YEAR}-${MONTH}-${DAY}" />
+        		<location type="meta" path="/projects/falcon/clicksMetaData/${YEAR}-${MONTH}-${DAY}" />
+    		</locations>
+        </cluster>
+    </clusters>
+</verbatim>
+
+
+---+++ Relation between feed's retention limit and feed's late arrival cut off period:
+
+For obvious reasons, Falcon has a validation that ensures that the user
+always specifies a feed retention limit that is greater than the feed's allowed late arrival period.
+If this rule is violated by the user, the feed submission call itself throws back an error.
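+
+As a sketch, the two related settings sit together in the feed definition roughly as follows (the element and attribute names follow the feed schema as commonly documented and should be treated as an assumption; the values and cluster name are placeholders):
+<verbatim>
+    <!-- late-arrival cut-off must be smaller than the retention limit below -->
+    <late-arrival cut-off="hours(6)"/>
+    <clusters>
+        <cluster name="corp" type="source">
+            <validity start="2012-01-30T00:00Z" end="2013-03-31T23:59Z" timezone="UTC"/>
+            <retention limit="hours(10)" action="delete"/>
+        </cluster>
+    </clusters>
+</verbatim>
+Here the retention limit of hours(10) exceeds the late-arrival cut-off of hours(6), so such a feed would pass this validation.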
+
+
+---++ Cross entity validations
+
+
+---+++ Entity Dependencies in a nutshell
+<img src="../images/EntityDependency.png" height="50" width="300" />
+
+
+The above schematic shows the dependencies between entities in Falcon. The arrow in the above diagram
+points from a dependency to its dependent.
+
+
+Let's state one simple rule here, which we will keep referring to time and again while
+talking about entities: a dependency in the system cannot be removed unless all of its dependents are
+removed first. This also holds true for all transitive dependencies.
+
+Now, let's follow it up with a simple illustration of a Falcon job:
+
+Let's consider a process P that refers to feed F1 as an input feed, and generates feed F2 as an
+output feed. These feeds/processes are supposed to be associated with a cluster C1.
+
+The order of submission of this job would be in the following order:
+
+C1->F1/F2(in any order)->P
+
+The order of removal of this job from the system is in the exact opposite order, i.e.:
+
+P->F1/F2(in any order)->C1
+
+Please note that there might be multiple processes referring to a particular feed, or a single feed belonging
+to multiple clusters. In that event, none of the dependencies can be removed unless ALL of their dependents
+are removed first. Attempting to do so will result in an error message and a 400 Bad Request response.
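+
+Continuing the illustration above, the removal could be carried out from the CLI roughly in this order (a sketch only; P, F1, F2 and C1 are the placeholder entity names from the example, and option names may vary by release):
+<verbatim>
+falcon entity -type process -name P -delete
+falcon entity -type feed -name F1 -delete
+falcon entity -type feed -name F2 -delete
+falcon entity -type cluster -name C1 -delete
+</verbatim>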
+
+
+---+++ Other cross validations between entities in Falcon system
+
+*Cluster-Feed Cross validations:*
+
+   * The cluster(s) referenced by a feed (inside the <clusters> tag) should be present in the system at the time
+of submission. Any exception to this results in a feed submission failure. Note that a feed might refer
+to more than a single cluster. The identifier for the same is the 'name' attribute of the individual cluster.
+
+*Example:*
+
+*Feed XML:*
+   
+<verbatim>
+   <clusters>
+        <cluster name="corp" type="source">
+            <validity start="2009-01-01T00:00Z" end="2012-12-31T23:59Z"
+                      timezone="UTC" />
+            <retention limit="months(6)" action="delete" />
+        </cluster>
+    </clusters>
+</verbatim>
+
+*Cluster corp's XML:*
+
+<verbatim>
+<cluster colo="gs" description="" name="corp" xmlns="uri:falcon:cluster:0.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+</verbatim>
+
+*Cluster-Process Cross validations:*
+
+
+   * In a relationship similar to that of a feed and a cluster, a process also refers to the relevant cluster by its
+'name' attribute. Any exception results in a process submission failure.
+
+
+*Example:*
+
+*Process XML:*
+<verbatim>
+<process name="agregator-coord16">
+    <cluster name="corp"/>....
+</verbatim>
+*Cluster corp's XML:*
+<verbatim>
+<cluster colo="gs" description="" name="corp" xmlns="uri:falcon:cluster:0.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+</verbatim>
+
+*Feed-Process Cross Validations:*
+
+
+1. The process <input> and feeds designated as input feeds for the job:
+
+For every feed referenced in the <input> tag in a process definition, the following rules are applied
+when the process is due for submission:
+
+   * The feed whose name is given in the 'feed' attribute of the input tag should be present in
+the system. The corresponding attribute in the feed definition is the 'name' attribute in the <feed> tag.
+
+*Example:*
+
+*Process xml:*
+
+<verbatim>
+<input end-instance="now(0,20)" start-instance="now(0,-60)"
+feed="raaw-logs16" name="inputData"/>
+</verbatim>
+
+*Feed xml:*
+<verbatim>
+<feed description="clicks log" name="raaw-logs16"....
+</verbatim>
+
+   
+   * The time interpretation of the tags indicating the start and end instances for a
+particular input feed in the process xml should lie well within the timespan of the period specified in
+the <validity> tag of the particular feed.
+
+*Example:*
+
+1. In the following scenario, process submission will result in an error:
+
+*Process XML:*
+<verbatim>
+<input end-instance="now(0,20)" start-instance="now(0,-60)"
+   feed="raaw-logs16" name="inputData"/>
+</verbatim>
+*Feed XML:*
+<verbatim>
+<validity start="2009-01-01T00:00Z" end="2009-12-31T23:59Z".....
+</verbatim>
+Explanation: The process input spans an 80 minute interval, [-60m, +20m], relative to
+the current timestamp (which, given the 'now' directive, let's assume is 'today'). However, the feed validity
+is a 1 year period in 2009, which makes it anachronistic.
+
+2. The following example would work just fine:
+
+*Process XML:*
+<verbatim>
+<input end-instance="now(0,20)" start-instance="now(0,-60)"
+   feed="raaw-logs16" name="inputData"/>
+</verbatim>
+*Feed XML:*
+<verbatim>
+<validity start="2009-01-01T00:00Z" end="2012-12-31T23:59Z" .......
+</verbatim>
+since, at the time of writing this document (03/03/2012), the feed validity encapsulates the process
+input's start and end instances.
+
+
+Failure to follow any of the above rules would result in a process submission failure.
+
+*NOTE:* Even though the above check ensures that the timelines are not anachronistic, if the input data is not
+present in the system for the specified time period, the process can still be submitted and scheduled, but all instances
+created will remain in a WAITING state until data is actually available in the cluster.
+
+
+
+---++ Updating process and feed definition
+Any changes to a feed/process can be made by updating its definition. After the update, any new workflows which are to be scheduled after the update call will pick up the new changes. The feed/process name and start time cannot be updated. Updating a process triggers an update of the workflow in the workflow engine. Updating a feed updates the feed workflows, such as retention and replication, and also updates the processes that reference the feed.
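+
+An update might be issued from the CLI along these lines (a sketch only; the entity name and file path are placeholders, and option names may vary between releases):
+<verbatim>
+falcon entity -type process -name sample-process -file /path/to/updated-process.xml -update
+</verbatim>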
+
+
+---++ Handling late input data
+The Falcon system can handle late arrival of input data and appropriately re-trigger processing for the affected
+instance. From the perspective of late handling, two configuration parameters are central: the late-arrival cut-off
+in the feed entity definition and the late-inputs section in the process entity definition. These configurations govern
+how and when the late processing happens. In the current implementation (Oozie based) the late handling is
+simple and basic. The Falcon system looks at all dependent input feeds for a process and computes the maximum late
+cut-off period. It then uses a scheduled messaging framework, such as the one available in Apache ActiveMQ or Java's DelayQueue, to schedule a message with the cut-off period. After the cut-off period the message is dequeued and Falcon checks for changes in the feed data, which is recorded in HDFS in a latedata file by Falcon's "record-size" action; if any changes are detected, the workflow is rerun with the new set of feed data.
+
+*Example:*
+The late rerun policy can be configured in the process definition.
+Falcon supports 3 policies: periodic, exp-backoff and final.
+Delay specifies how often the feed data should be checked for changes; one also needs to
+explicitly list in late-input the feed names that need to be checked for late data.
+<verbatim>
+  <late-process policy="exp-backoff" delay="hours(1)">
+        <late-input input="impression" workflow-path="hdfs://impression/late/workflow" />
+        <late-input input="clicks" workflow-path="hdfs://clicks/late/workflow" />
+   </late-process>
+</verbatim>
+
+---++ Idempotency
+All the operations in Falcon are idempotent. That is, if you make the same request to the Falcon server / prism again, you will get a SUCCESSFUL response if it was SUCCESSFUL in the first attempt. For example, you submit a new process / feed and get a SUCCESSFUL message in return. Now if you run the same command / API request on the same entity you will again get a SUCCESSFUL message. The same is true for other operations like schedule, kill, suspend and resume.
+Idempotency also takes care of the case when a request is sent through the prism and fails on one or more servers. For example, say the prism is configured to send requests to 3 servers. First the user sends a request to SUBMIT a process on all 3 of them, and receives a SUCCESSFUL response from each. Then, due to some issue, one of the servers goes down, and the user sends a request to schedule the submitted process. This time the user receives a response with PARTIAL status and a FAILURE message from the server that has gone down. If the user checks, the process will have been started and be running on the 2 SUCCESSFUL servers. Once the issue with the failed server is resolved and it is brought back up, sending the SCHEDULE request again through the prism will result in a SUCCESSFUL response from the prism as well as from all three servers, but this time the process will be SCHEDULED only on the server that had failed earlier; the other two will keep running as before.
+ 
+
+---++ Alerting and Monitoring
+---+++ Alerting
+Falcon provides monitoring of various events by capturing metrics of those events.
+The metric numbers can then be used to monitor performance and health of the Falcon system and the entire processing pipelines.
+
+Users can view the logs of these events in the metric.log file; by default this file is created under the ${user.dir}/logs/ directory.
+Users may also extend the Falcon monitoring framework to send events to systems like Mondemand/lwes.
+
+The following events are captured by Falcon for logging the metrics:
+   1. New cluster definitions posted to Falcon (success & failures)
+   1. New feed definition posted to Falcon (success & failures)
+   1. New process definition posted to Falcon (success & failures)
+   1. Process update events (success & failures)
+   1. Feed update events (success & failures)
+   1. Cluster update events (success & failures)
+   1. Process suspend events (success & failures)
+   1. Feed suspend events (success & failures)
+   1. Process resume events (success & failures)
+   1. Feed resume events (success & failures)
+   1. Process remove events (success & failures)
+   1. Feed remove events (success & failures)
+   1. Cluster remove events (success & failures)
+   1. Process instance kill events (success & failures)
+   1. Process instance re-run events (success & failures)
+   1. Process instance generation events
+   1. Process instance failure events
+   1. Process instance auto-retry events
+   1. Process instance retry exhaust events
+   1. Feed instance deletion event
+   1. Feed instance deletion failure event (no retries)
+   1. Feed instance replication event
+   1. Feed instance replication failure event
+   1. Feed instance replication auto-retry event
+   1. Feed instance replication retry exhaust event
+   1. Feed instance late arrival event
+   1. Feed instance post cut-off arrival event
+   1. Process re-run due to late feed event
+   1. Transaction rollback failed event
+
+The metric logged for an event has the following properties:
+   1. Action - Name of the event.
+   2. Dimensions - A list of name/value pairs of various attributes for a given action.
+   3. Status - Status of the action: FAILED/SUCCEEDED.
+   4. Time-taken - Time taken in nanoseconds for the given action.
+
+An example of an event logged for the submission of a new process definition:
+
+   2012-05-04 12:23:34,026 {Action:submit, Dimensions:{entityType=process}, Status: SUCCEEDED, Time-taken:97087000 ns}
+
+Users may parse the metric.log or capture these events from custom monitoring frameworks and can plot various graphs 
+or send alerts according to their requirements.
+
+---+++ Notifications
+Falcon creates a JMS topic for every process/feed that is scheduled in Falcon.
+The implementation class and the broker url of the JMS engine are read from the dependent cluster's definition.
+Users may register consumers on the required topic to check the availability or status of feed instances.
+ 
+For a given process that is scheduled, the name of the topic is same as the process name.
+Falcon sends a Map message for every feed produced by the instance of a process to the JMS topic.
+The JMS MapMessage sent to a topic has the following properties:
+entityName, feedNames, feedInstancePath, workflowId, runId, nominalTime, timeStamp, brokerUrl, brokerImplClass, entityType, operation, logFile, topicName, status, brokerTTL;
+
+For a given feed that is scheduled, the name of the topic is same as the feed name.
+Falcon sends a map message for every feed instance that is deleted/archived/replicated depending upon the retention policy set in the feed definition.
+The JMS MapMessage sent to a topic has the following properties:
+entityName, feedNames, feedInstancePath, workflowId, runId, nominalTime, timeStamp, brokerUrl, brokerImplClass, entityType, operation, logFile, topicName, status, brokerTTL;
+
+The JMS messages are automatically purged after a certain period (default 3 days) by the Falcon JMS house-keeping service. The TTL (time-to-live) for JMS messages
+can be configured in Falcon's startup.properties file.
+
+---++ Falcon EL Expressions
+
+
+The Falcon expression language can be used in a process definition to specify the start and end instances for the various feeds.
+
+Before going into how to use Falcon EL expressions, it is necessary to understand what instance and instance start time refer to with respect to Falcon.
+
+Let's consider the part of a process definition below:
+
+<verbatim>
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<process name="testProcess">
+    <clusters>
+        <cluster name="corp">
+            <validity start="2010-01-02T01:00Z" end="2011-01-03T03:00Z" />
+        </cluster>
+    </clusters>
+   <parallel>2</parallel>
+   <order>LIFO</order>
+   <timeout>hours(3)</timeout>
+   <frequency>minutes(30)</frequency>
+
+  <inputs>
+ <input end-instance="now(0,20)" start-instance="now(0,-60)"
+			feed="input-log" name="inputData"/>
+ </inputs>
+<outputs>
+	<output instance="now(0,0)" feed="output-log"
+		name="outputData" />
+</outputs>
+...
+...
+...
+...
+</process>
+</verbatim>
+
+
+The above definition says that the process will start on the 2nd of Jan 2010 at 1 am and will end on the 3rd of Jan 2011 at 3 am on cluster corp. The process will also start a user-defined workflow (which we will call an instance) every 30 minutes.
+
+This means that, starting 2010-01-02T01:00Z, every 30 minutes an instance will start and run the user-defined workflow. Now if this workflow needs some input data and produces some output, the user needs to specify those in the <inputs> and <outputs> tags.
+Since the inputs that the process takes can be distributed over a wide range, we set limits by giving "start" and "end" instances for the input. The output is a single location, so only one instance is given.
+The timeout specifies how long a given instance should wait for input data before being terminated by the workflow engine.
+
+Coming back to instance start time: since an instance will start every 30 minutes starting 2010-01-02T01:00Z, the time it is scheduled to start is called its instance time. For example, the first few instance times for the above example are:
+
+
+<pre>Instance Number    Instance Start Time</pre>
+
+<pre>1                  2010-01-02T01:00Z</pre>
+<pre>2                  2010-01-02T01:30Z</pre>
+<pre>3                  2010-01-02T02:00Z</pre>
+<pre>4                  2010-01-02T02:30Z</pre>
+<pre>...                ...</pre>
+
+Now let's get to how to use the expression language. The only thing to keep in mind is that all EL evaluations are done based on the start time of that instance, and every instance will have different inputs / outputs based on the feed instances given in the process definition.
+
+All the parameters in the various ELs can be positive, zero or negative values. Positive values indicate so many units in the future, zero means the base time the EL is resolved to, and negative values indicate the corresponding units in the past.
+
+__Note: if no instance is created at the resolved time, then the instance immediately before it is considered.__
+
+Falcon currently supports the following ELs (a worked example follows this list):
+
+
+   * 1. *now(hours,minutes)*: now refers to the instance start time. The hours and minutes given are relative to the start time of the instance. For example, now(-2,40) corresponds to the feed instance at -2 hours and +40 minutes, i.e. the feed instance 80 minutes before the instance start time. If the user had given now(0,-80), it would correspond to the same instance.
+
+   * 2. *today(hours,minutes)*: the hours and minutes given in this EL are counted from the start of the day of the instance start time. I.e. if the instance start is at 2010-01-02T01:30Z, then today(-3,-20) will mean the instance created at 2010-01-01T20:40Z and today(3,20) will correspond to 2010-01-02T03:20Z.
+
+   * 3. *yesterday(hours,minutes)*: as the name suggests, the yesterday EL picks up feed instances with respect to the start of the previous day. Hours and minutes are added to 00:00 of yesterday. Example: yesterday(24,30) will actually correspond to 00:30 am of today; for 2010-01-02T01:30Z this would mean the 2010-01-02T00:30Z feed.
+
+   * 4. *currentMonth(day,hour,minute)*: currentMonth takes as its reference the start of the month of the instance start time. One thing to keep in mind is that day is added to the first day of the month, so the value of day is the number of days you want to add to the first day of the month. For example, for instance start time 2010-01-12T01:30Z, currentMonth(3,2,40) will correspond to the feed created at 2010-01-04T02:40Z, and currentMonth(0,0,0) will mean 2010-01-01T00:00Z.
+
+   * 5. *lastMonth(day,hour,minute)*: the parameters for lastMonth are the same as for currentMonth, the only difference being that the reference is shifted one month back. For instance start 2010-01-12T01:30Z, lastMonth(2,3,30) will correspond to the feed instance at 2009-12-03T03:30Z.
+
+   * 6. *currentYear(month,day,hour,minute)*: the month, day, hour and minute in the parameters are added with reference to the start of the year of the instance start time. For our example start time, the reference goes back to 2010-01-01T00:00Z. Also, similar to days, months are added to the first month, i.e. Jan. So currentYear(0,2,2,20) will mean 2010-01-03T02:20Z, while currentYear(11,2,2,20) will mean 2010-12-03T02:20Z.
+
+   * 7. *lastYear(month,day,hour,minute)*: this is exactly similar to currentYear in usage, the only difference being that the reference is taken as the start of the previous year. For example: lastYear(4,2,2,20) will correspond to the feed instance created at 2009-05-03T02:20Z and lastYear(12,2,2,20) will correspond to the feed at 2010-01-03T02:20Z.
+
+   * 8. *latest(number of latest instance)*: this simply makes your input consider the latest available instances of the feed given as the parameter. For example: latest(0) will consider the last available instance of the feed, whereas latest(-1) will consider the second last available instance and latest(-3) will consider the 4th last available instance.
+   
+
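+To tie the above together, here is a small worked illustration (a sketch only, re-using the sample process above; the feed name input-log comes from that example):
+<verbatim>
+<input start-instance="today(0,0)" end-instance="now(0,0)"
+       feed="input-log" name="inputData"/>
+</verbatim>
+For the instance starting at 2010-01-02T01:30Z, today(0,0) resolves to 2010-01-02T00:00Z and now(0,0) resolves to 2010-01-02T01:30Z, so this input covers the input-log feed instances falling in that window.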