Posted to commits@storm.apache.org by pt...@apache.org on 2016/01/15 17:22:46 UTC

[2/4] storm git commit: update documentation

http://git-wip-us.apache.org/repos/asf/storm/blob/d63146b7/documentation/Windowing.md
----------------------------------------------------------------------
diff --git a/documentation/Windowing.md b/documentation/Windowing.md
new file mode 100644
index 0000000..803e5ca
--- /dev/null
+++ b/documentation/Windowing.md
@@ -0,0 +1,235 @@
+# Windowing support in core storm
+
+Storm core has support for processing a group of tuples that falls within a window. Windows are specified with the
+following two parameters:
+
+1. Window length - the length or duration of the window
+2. Sliding interval - the interval at which the windowing slides
+
+## Sliding Window
+
+Tuples are grouped in windows and the window slides every sliding interval. A tuple can belong to more than one window.
+
+For example, a time duration based sliding window with a length of 10 seconds and a sliding interval of 5 seconds:
+
+```
+| e1 e2 | e3 e4 e5 e6 | e7 e8 e9 |...
+0       5             10         15    -> time
+
+|<------- w1 -------->|
+        |------------ w2 ------->|
+```
+
+The window is evaluated every 5 seconds and some of the tuples in the first window overlap with the second one.
+
+## Tumbling Window
+
+Tuples are grouped in a single window based on time or count. Any tuple belongs to only one of the windows.
+
+For example, a time duration based tumbling window with a length of 5 seconds:
+
+```
+| e1 e2 | e3 e4 e5 e6 | e7 e8 e9 |...
+0       5             10         15    -> time
+   w1         w2            w3
+```
+
+The window is evaluated every five seconds and none of the windows overlap.
+
+Storm supports specifying the window length and sliding intervals as a count of the number of tuples or as a time duration.
+
+The bolt interface `IWindowedBolt` is implemented by bolts that need windowing support.
+
+```java
+public interface IWindowedBolt extends IComponent {
+    /**
+     * Called when a task for this component is initialized, analogous to IBolt's prepare.
+     */
+    void prepare(Map stormConf, TopologyContext context, OutputCollector collector);
+    /**
+     * Process tuples falling within the window and optionally emit
+     * new tuples based on the tuples in the input window.
+     */
+    void execute(TupleWindow inputWindow);
+    /**
+     * Called when the bolt is shutting down.
+     */
+    void cleanup();
+}
+```
+
+Every time the window activates, the `execute` method is invoked. The `TupleWindow` parameter gives access to the current tuples
+in the window, the tuples that expired, and the new tuples that were added since the last window was computed, which is useful
+for efficient windowing computations.
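+
+For example, a running aggregate can be maintained from the new and expired tuples alone instead of scanning the entire window on each activation. A minimal sketch (the `sum` field and the "value" tuple field are illustrative, not part of the API):
+
+```java
+@Override
+public void execute(TupleWindow inputWindow) {
+    // add values of tuples that entered the window since the last activation
+    for (Tuple tuple : inputWindow.getNew()) {
+        sum += tuple.getLongByField("value");
+    }
+    // subtract values of tuples that fell out of the window
+    for (Tuple tuple : inputWindow.getExpired()) {
+        sum -= tuple.getLongByField("value");
+    }
+    collector.emit(new Values(sum));
+}
+```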
+
+Bolts that need windowing support typically extend `BaseWindowedBolt`, which has the APIs for specifying the
+window length and sliding intervals.
+
+E.g. 
+
+```java
+public class SlidingWindowBolt extends BaseWindowedBolt {
+    private OutputCollector collector;
+
+    @Override
+    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
+        this.collector = collector;
+    }
+
+    @Override
+    public void execute(TupleWindow inputWindow) {
+        for (Tuple tuple : inputWindow.get()) {
+            // do the windowing computation
+            ...
+        }
+        // emit the results
+        collector.emit(new Values(computedValue));
+    }
+}
+
+public static void main(String[] args) throws Exception {
+    TopologyBuilder builder = new TopologyBuilder();
+    builder.setSpout("spout", new RandomSentenceSpout(), 1);
+    builder.setBolt("slidingwindowbolt",
+                    new SlidingWindowBolt().withWindow(new Count(30), new Count(10)),
+                    1).shuffleGrouping("spout");
+    Config conf = new Config();
+    conf.setDebug(true);
+    conf.setNumWorkers(1);
+
+    StormSubmitter.submitTopologyWithProgressBar(args[0], conf, builder.createTopology());
+}
+```
+
+The following window configurations are supported.
+
+1. `withWindow(Count windowLength, Count slidingInterval)`
+Tuple count based sliding window that slides after `slidingInterval` number of tuples.
+
+2. `withWindow(Count windowLength)`
+Tuple count based window that slides with every incoming tuple.
+
+3. `withWindow(Count windowLength, Duration slidingInterval)`
+Tuple count based sliding window that slides after `slidingInterval` time duration.
+
+4. `withWindow(Duration windowLength, Duration slidingInterval)`
+Time duration based sliding window that slides after `slidingInterval` time duration.
+
+5. `withWindow(Duration windowLength)`
+Time duration based window that slides with every incoming tuple.
+
+6. `withWindow(Duration windowLength, Count slidingInterval)`
+Time duration based sliding window that slides after `slidingInterval` number of tuples.
+
+7. `withTumblingWindow(BaseWindowedBolt.Count count)`
+Count based tumbling window that tumbles after the specified count of tuples.
+
+8. `withTumblingWindow(BaseWindowedBolt.Duration duration)`
+Time duration based tumbling window that tumbles after the specified time duration.
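+
+For instance, a sketch of wiring a couple of these configurations onto the example bolt above (assumes `import java.util.concurrent.TimeUnit;` and the `SlidingWindowBolt` from the earlier example):
+
+```java
+// time duration based sliding window: length 10 secs, sliding every 5 secs
+builder.setBolt("slidingbolt",
+        new SlidingWindowBolt().withWindow(new Duration(10, TimeUnit.SECONDS),
+                                           new Duration(5, TimeUnit.SECONDS)),
+        1).shuffleGrouping("spout");
+
+// count based tumbling window that tumbles after every 100 tuples
+builder.setBolt("tumblingbolt",
+        new SlidingWindowBolt().withTumblingWindow(new Count(100)),
+        1).shuffleGrouping("spout");
+```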
+
+## Tuple timestamp and out of order tuples
+By default the timestamp tracked in the window is the time when the tuple is processed by the bolt, and the window calculations
+are performed based on this processing timestamp. Storm also supports tracking windows based on the source-generated timestamp.
+
+```java
+/**
+ * Specify a field in the tuple that represents the timestamp as a long value. If this
+ * field is not present in the incoming tuple, an {@link IllegalArgumentException} will be thrown.
+ *
+ * @param fieldName the name of the field that contains the timestamp
+ */
+public BaseWindowedBolt withTimestampField(String fieldName)
+```
+
+The value of the above `fieldName` will be looked up from the incoming tuple and considered for windowing calculations.
+If the field is not present in the tuple, an exception will be thrown. Along with the timestamp field name, a time lag parameter
+can also be specified, which indicates the maximum time limit for tuples with out of order timestamps.
+
+E.g. if the lag is 5 secs and a tuple `t1` arrives with timestamp `06:00:05`, no tuples may arrive with a tuple timestamp earlier than `06:00:00`. If a tuple
+arrives with timestamp `05:59:59` after `t1` and the window has moved past `t1`, it will be treated as a late tuple and not processed. Currently the late
+tuples are just logged in the worker log files at INFO level.
+
+```java
+/**
+ * Specify the maximum time lag of the tuple timestamp in milliseconds. It means that the tuple timestamps
+ * cannot be out of order by more than this amount.
+ *
+ * @param duration the max lag duration
+ */
+public BaseWindowedBolt withLag(Duration duration)
+```
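+
+A sketch of combining the timestamp field and lag on the example bolt (the "ts" field name is illustrative; assumes `import java.util.concurrent.TimeUnit;`):
+
+```java
+new SlidingWindowBolt()
+        .withWindow(new Duration(10, TimeUnit.SECONDS), new Duration(5, TimeUnit.SECONDS))
+        .withTimestampField("ts")                    // tuples carry a long event timestamp in "ts"
+        .withLag(new Duration(5, TimeUnit.SECONDS)); // tolerate up to 5 secs of out of order tuples
+```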
+
+### Watermarks
+For processing tuples with a timestamp field, storm internally computes watermarks based on the incoming tuple timestamps. A watermark is
+the minimum of the latest tuple timestamps (minus the lag) across all the input streams. At a higher level this is similar to the watermark concept
+used by Flink and Google's MillWheel for tracking event based timestamps.
+
+Periodically (default every second), the watermark timestamps are emitted and this is considered the clock tick for the window calculation if
+tuple based timestamps are in use. The interval at which watermarks are emitted can be changed with the API below.
+ 
+```java
+/**
+ * Specify the watermark event generation interval. For tuple based timestamps, watermark events
+ * are used to track the progress of time.
+ *
+ * @param interval the interval at which watermark events are generated
+ */
+public BaseWindowedBolt withWatermarkInterval(Duration interval)
+```
+
+
+When a watermark is received, all windows up to that timestamp will be evaluated.
+
+For example, consider tuple timestamp based processing with the following window parameters:
+
+`Window length = 20s, sliding interval = 10s, watermark emit frequency = 1s, max lag = 5s`
+
+```
+|-----|-----|-----|-----|-----|-----|-----|
+0     10    20    30    40    50    60    70
+```
+
+Current ts = `09:00:00`
+
+Tuples `e1(6:00:03), e2(6:00:05), e3(6:00:07), e4(6:00:18), e5(6:00:26), e6(6:00:36)` are received between `9:00:00` and `9:00:01`.
+
+At time t = `09:00:01`, watermark w1 = `6:00:31` is emitted since no tuples earlier than `6:00:31` can arrive.
+
+Three windows will be evaluated. The first window end ts (06:00:10) is computed by taking the earliest event timestamp (06:00:03) 
+and computing the ceiling based on the sliding interval (10s).
+
+1. `5:59:50 - 06:00:10` with tuples e1, e2, e3
+2. `6:00:00 - 06:00:20` with tuples e1, e2, e3, e4
+3. `6:00:10 - 06:00:30` with tuples e4, e5
+
+e6 is not evaluated since the watermark timestamp `6:00:31` is earlier than the tuple ts `6:00:36`.
+
+Tuples `e7(8:00:25), e8(8:00:26), e9(8:00:27), e10(8:00:39)` are received between `9:00:01` and `9:00:02`.
+
+At time t = `09:00:02` another watermark w2 = `08:00:34` is emitted since no tuples earlier than `8:00:34` can arrive now.
+
+Three windows will be evaluated:
+
+1. `6:00:20 - 06:00:40` with tuples e5, e6 (from earlier batch)
+2. `6:00:30 - 06:00:50` with tuple e6 (from earlier batch)
+3. `8:00:10 - 08:00:30` with tuples e7, e8, e9
+
+e10 is not evaluated since the tuple ts `8:00:39` is beyond the watermark time `8:00:34`.
+
+The window calculation considers the time gaps and computes the windows based on the tuple timestamp.
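+
+The window end computation described above can be sketched as follows (times converted to milliseconds; `toMillis` is a hypothetical helper, not a Storm API):
+
+```java
+long slideMs = 10 * 1000L;               // sliding interval (10s)
+long earliestTs = toMillis("06:00:03");  // earliest event timestamp
+// round the earliest timestamp up to the next sliding interval boundary
+long windowEndTs = (long) Math.ceil((double) earliestTs / slideMs) * slideMs; // 06:00:10
+```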
+
+## Guarantees
+The windowing functionality in storm core currently provides at-least-once guarantees. The values emitted from the bolt's
+`execute(TupleWindow inputWindow)` method are automatically anchored to all the tuples in the inputWindow. The downstream
+bolts are expected to ack the received tuple (i.e. the tuple emitted from the windowed bolt) to complete the tuple tree.
+If not, the tuples will be replayed and the windowing computation will be re-evaluated.
+
+The tuples in the window are automatically acked when they expire, i.e. when they fall out of the window after
+`windowLength + slidingInterval`. Note that the configuration `topology.message.timeout.secs` should be sufficiently larger
+than `windowLength + slidingInterval` for time based windows; otherwise the tuples will time out and be replayed, which can result
+in duplicate evaluations. For count based windows, the configuration should be adjusted such that `windowLength + slidingInterval`
+tuples can be received within the timeout period.
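+
+For example, a sketch of sizing the message timeout for a 20s window with a 10s slide, leaving headroom for processing delays (the value of 60 is illustrative):
+
+```java
+Config conf = new Config();
+// windowLength (20s) + slidingInterval (10s) plus headroom
+conf.setMessageTimeoutSecs(60);
+```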
+
+## Example topology
+An example topology `SlidingWindowTopology` shows how to use the APIs to compute a sliding window sum and a tumbling window
+average.
+

http://git-wip-us.apache.org/repos/asf/storm/blob/d63146b7/documentation/distcache-blobstore.md
----------------------------------------------------------------------
diff --git a/documentation/distcache-blobstore.md b/documentation/distcache-blobstore.md
new file mode 100644
index 0000000..2011ce3
--- /dev/null
+++ b/documentation/distcache-blobstore.md
@@ -0,0 +1,735 @@
+# Storm Distributed Cache API
+
+The distributed cache feature in storm is used to efficiently distribute files
+(or blobs, which is the equivalent terminology for a file in the distributed
+cache and is used interchangeably in this document) that are large and can
+change during the lifetime of a topology, such as geo-location data,
+dictionaries, etc. Typical use cases include phrase recognition, entity
+extraction, document classification, URL re-writing, location/address detection
+and so forth. Such files may be several KB to several GB in size. For small
+datasets that don't need dynamic updates, including them in the topology jar
+could be fine. But for large files, the startup times could become very large.
+In these cases, the distributed cache feature can provide fast topology startup,
+especially if the files were previously downloaded for the same submitter and
+are still in the cache. This is useful with frequent deployments, sometimes a few
+times a day with updated jars, because large cached blobs that do not change
+frequently remain available in the distributed cache without being re-downloaded.
+
+When a topology is submitted, the user specifies the set of files the
+topology needs. Once a topology is running, the user can at any time request that
+any file in the distributed cache be updated with a newer version. The
+updating of blobs happens in an eventual consistency model. If the topology
+needs to know what version of a file it has access to, it is the responsibility
+of the user to find this information out. The files are stored in a cache with a
+Least-Recently Used (LRU) eviction policy, where the supervisor decides which
+cached files are no longer needed and can delete them to free disk space. The
+blobs can be compressed, and the user can request that the blobs be uncompressed
+before they are accessed.
+
+## Motivation for Distributed Cache
+* Allows sharing blobs among topologies.
+* Allows updating the blobs from the command line.
+
+## Distributed Cache Implementations
+The current BlobStore interface has the following two implementations:
+* LocalFsBlobStore
+* HdfsBlobStore
+
+Appendix A contains the interface for blobstore implementation.
+
+## LocalFsBlobStore
+![LocalFsBlobStore](images/local_blobstore.png)
+
+The local file system implementation of the blobstore is depicted in the above timeline diagram.
+
+There are several stages from blob creation to blob download and the corresponding execution of a topology.
+The main stages are described below.
+
+### Blob Creation Command
+Blobs in the blobstore can be created through command line using the following command.
+
+```
+storm blobstore create --file README.txt --acl o::rwa --repl-fctr 4 key1
+```
+
+The above command creates a blob with the key “key1” corresponding to the file README.txt.
+All users are given read, write and admin access, with a replication factor of 4.
+
+### Topology Submission and Blob Mapping
+Users can submit their topology with the following command. The command includes the
+topology blobstore map configuration. The configuration holds two keys, “key1” and “key2”; the
+key “key1” has a local file name mapping named “blob_file” and is not compressed.
+
+```
+storm jar /home/y/lib/storm-starter/current/storm-starter-jar-with-dependencies.jar \
+storm.starter.clj.word_count test_topo -c topology.blobstore.map='{"key1":{"localname":"blob_file", "uncompress":"false"},"key2":{}}'
+```
+
+### Blob Creation Process
+The creation of the blob takes place through the “ClientBlobStore” interface. Appendix B contains the “ClientBlobStore” interface.
+The concrete implementation of this interface is “NimbusBlobStore”. In the case of the local file system, the client makes a
+call to the nimbus to create the blobs within the local file system. The nimbus uses the local file system implementation to create these blobs.
+When a user submits a topology, the jar, configuration and code files are uploaded as blobs with the help of the blobstore.
+Also, all the other blobs specified by the topology are mapped to it with the help of the topology.blobstore.map configuration.
+
+### Blob Download by the Supervisor
+Finally, the blobs corresponding to a topology are downloaded by the supervisor once it receives the assignments from the nimbus, through
+the same “NimbusBlobStore” thrift client that uploaded the blobs. The supervisor downloads the code, jar and conf blobs by calling the
+“NimbusBlobStore” client directly, while the blobs specified in the topology.blobstore.map are downloaded and mapped locally with the help
+of the Localizer. The Localizer talks to the “NimbusBlobStore” thrift client to download the blobs and adds the blob compression and local
+blob name mapping logic to suit the implementation of a topology. Once all the blobs have been downloaded, the workers are launched to run
+the topologies.
+
+## HdfsBlobStore
+![HdfsBlobStore](images/hdfs_blobstore.png)
+
+The HdfsBlobStore has a similar implementation and blob creation and download procedure, barring how the replication
+is handled in the two blobstore implementations. Replication in the HDFS blobstore is handled by HDFS itself,
+so no state needs to be stored inside zookeeper. The local file system blobstore, on the other hand, requires state to be
+stored in zookeeper in order for it to work with nimbus HA. Nimbus HA allows the local filesystem to implement the replication feature
+seamlessly by storing state in zookeeper about the running topologies and syncing the blobs on the various nimbuses. On the supervisor’s
+end, the supervisor and localizer talk to the HdfsBlobStore through the “HdfsClientBlobStore” implementation.
+
+## Additional Features and Documentation
+```
+storm jar /home/y/lib/storm-starter/current/storm-starter-jar-with-dependencies.jar storm.starter.clj.word_count test_topo 
+-c topology.blobstore.map='{"key1":{"localname":"blob_file", "uncompress":"false"},"key2":{}}'
+```
+ 
+### Compression
+The blobstore allows the user to set the “uncompress” configuration to true or false. This configuration can be specified
+in the topology.blobstore.map mentioned in the above command, and allows the user to upload a compressed file like a tarball or zip.
+In the local file system blobstore, the compressed blobs are stored on the nimbus node. The localizer code takes the responsibility to
+uncompress the blob and store it on the supervisor node. Symbolic links to the blobs on the supervisor node are created within the worker
+before the execution starts.
+
+### Local File Name Mapping
+Apart from compression, the blobstore helps to give the blob a name that can be used by the workers. The localizer takes
+the responsibility of mapping the blob to a local name on the supervisor node.
+
+## Additional Blobstore Implementation Details
+The blobstore uses a hashing function to create the blobs based on the key. The blobs are generally stored inside the directory specified by
+the blobstore.dir configuration. By default, it is “storm.local.dir/nimbus/blobs” for the local file system and a similar path on the
+hdfs file system.
+
+Once a file is submitted, the blobstore reads the configs and creates metadata for the blob with all the access control details. The metadata
+is generally used for authorization while accessing the blobs. The blob key and version contribute to the hash code and thereby the directory
+under “storm.local.dir/nimbus/blobs/data” where the data is placed. The blobs are generally placed in a positive number directory like 193,822 etc.
+
+Once the topology is launched and the relevant blobs have been created, the supervisor downloads the blobs related to storm.conf, storm.ser
+and storm.code first, and then all the other blobs uploaded via the command line, using the localizer to uncompress and map them to the local name
+specified in the topology.blobstore.map configuration. The supervisor periodically updates blobs by checking for a change of version,
+which allows the blobs to be updated on the fly.
+
+For the local file system, the distributed cache on the supervisor node is set to 10240 MB as a soft limit, and the cleanup code attempts
+to clean anything over the soft limit every 600 seconds based on the LRU policy.
+
+The HDFS blobstore implementation handles load better by removing the burden on the nimbus to store the blobs, which avoids it becoming a bottleneck. Moreover, it provides seamless replication of blobs. On the other hand, the local file system blobstore is not very efficient in
+replicating the blobs and is limited by the number of nimbuses. Moreover, the supervisor talks to the HDFS blobstore directly without the
+involvement of the nimbus, thereby reducing the load and dependency on the nimbus.
+
+## Highly Available Nimbus
+### Problem Statement:
+Currently the storm master, aka nimbus, is a process that runs on a single machine under supervision. In most cases, a
+nimbus failure is transient and it is restarted by the process that does the supervision. However sometimes when disks fail and network
+partitions occur, nimbus goes down. Under these circumstances, the topologies run normally but no new topologies can be
+submitted, no existing topologies can be killed/deactivated/activated, and if a supervisor node fails then the
+reassignments are not performed, resulting in performance degradation or topology failures. With this project we intend
+to resolve this problem by running nimbus in a primary backup mode to guarantee that even if a nimbus server fails, one
+of the backups will take over.
+
+### Requirements for Highly Available Nimbus:
+* Increase overall availability of nimbus.
+* Allow nimbus hosts to leave and join the cluster at any time. A newly joined host should automatically catch up and join
+the list of potential leaders.
+* No topology resubmissions required in case of nimbus fail overs.
+* No active topology should ever be lost. 
+
+#### Leader Election:
+The nimbus server will use the following interface:
+
+```java
+public interface ILeaderElector {
+    /**
+     * queue up for leadership lock. The call returns immediately and the caller                     
+     * must check isLeader() to perform any leadership action.
+     */
+    void addToLeaderLockQueue();
+
+    /**
+     * Removes the caller from the leader lock queue. If the caller is leader
+     * also releases the lock.
+     */
+    void removeFromLeaderLockQueue();
+
+    /**
+     *
+     * @return true if the caller currently has the leader lock.
+     */
+    boolean isLeader();
+
+    /**
+     *
+     * @return the current leader's address, throws an exception if no one has the lock.
+     */
+    InetSocketAddress getLeaderAddress();
+
+    /**
+     * 
+     * @return list of current nimbus addresses, includes leader.
+     */
+    List<InetSocketAddress> getAllNimbusAddresses();
+}
+```
+Once a nimbus comes up it calls the addToLeaderLockQueue() function. The leader election code selects a leader from the queue.
+If the topology code, jar or config blobs are missing, it will download the blobs from any other nimbus which is up and running.
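+
+A sketch of how a nimbus host might use this interface (how the elector instance is obtained is implementation specific):
+
+```java
+ILeaderElector elector = ...; // e.g. the zookeeper based implementation
+elector.addToLeaderLockQueue();
+if (elector.isLeader()) {
+    // perform leader-only actions, e.g. creating assignments
+} else {
+    // forward or reject leader-only requests
+    InetSocketAddress leader = elector.getLeaderAddress();
+}
+```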
+
+The first implementation will be Zookeeper based. If the zookeeper connection is lost/reset, resulting in loss of the lock
+or the spot in the queue, the implementation will take care of updating the state such that isLeader() reflects the
+current status. Leader-like actions must finish in less than minimumOf(connectionTimeout, sessionTimeout) to ensure
+the lock was held by nimbus for the entire duration of the action. (It is not yet decided whether to just state this expectation
+and ensure that zk configurations are set high enough, which results in higher failover time, or to actually
+create some sort of rollback mechanism for all actions; the second option needs a lot of code.) If a nimbus that is not the
+leader receives a request that only a leader can perform, it will throw a RuntimeException.
+
+### Nimbus state store:
+
+To achieve failover from primary to backup servers, nimbus state/data needs to be replicated across all nimbus hosts or
+needs to be stored in distributed storage. Replicating the data correctly involves state management and consistency checks,
+and it is hard to test for correctness. However many storm users do not want to take an extra dependency on another replicated
+storage system like HDFS and still need high availability. The blobstore implementation along with the state storage helps
+to overcome the failover scenarios in case a leader nimbus goes down.
+
+To support replication we will allow the user to define a code replication factor which reflects the number of nimbus
+hosts to which the code must be replicated before starting the topology. With replication comes the issue of consistency.
+The topology is launched once the code, jar and conf blob files are replicated based on the "topology.min.replication.count" config.
+Maintaining state for failover scenarios is important for the local file system. The current implementation makes sure one of the
+available nimbuses is elected as a leader in the case of a failure. If topology specific blobs are missing, the leader nimbus
+tries to download them as and when they are needed. With this architecture, we do not have to download all the blobs
+required for a topology for a nimbus to accept leadership. This helps in case the blobs are very large, and avoids
+inadvertent delays in electing a leader.
+
+The state for every blob is relevant for the local blobstore implementation. For the HDFS blobstore the replication
+is taken care of by HDFS. For handling the failover scenarios for a local blobstore we need to store the state of the leader and
+non-leader nimbuses within zookeeper.
+
+The state is stored under /storm/blobstore/key/nimbusHostPort:SequenceNumber for the blobstore to work to make nimbus highly available. 
+This state is used in the local file system blobstore to support replication. The HDFS blobstore does not have to store the state inside the 
+zookeeper.
+
+* NimbusHostPort: This piece of information contains the parsed string holding the hostname and port of the nimbus.
+  It uses the same class “NimbusHostPortInfo” used earlier by the code-distributor interface to store the state and parse the data.
+
+* SequenceNumber: This is the blob sequence number information. The SequenceNumber information is implemented by a KeySequenceNumber class.
+The sequence numbers are generated for every key. For every update, the sequence numbers are assigned based on a global sequence number
+stored under /storm/blobstoremaxsequencenumber/key. For more details about how the numbers are generated you can look at the javadocs for KeySequenceNumber.
+
+![Nimbus High Availability - BlobStore](images/nimbus_ha_blobstore.png)
+
+The sequence diagram shows how the blobstore works and how the state storage inside zookeeper makes nimbus highly available.
+Currently, the thread that syncs the blobs on a non-leader is within the nimbus. In the future, it would be nice to move this thread
+into the blobstore, making the blobstore coordinate the state change and blob download as per the sequence diagram.
+
+## Thrift and REST API
+In order to avoid workers/supervisors/ui talking to zookeeper to get the master nimbus address, we are going to modify the
+`getClusterInfo` API so it can also return nimbus information. `getClusterInfo` currently returns a `ClusterSummary` instance
+which has a list of `supervisorSummary` and a list of `topologySummary` instances. We will add a list of `NimbusSummary`
+to the `ClusterSummary`. See the structures below:
+
+```thrift
+struct ClusterSummary {
+  1: required list<SupervisorSummary> supervisors;
+  3: required list<TopologySummary> topologies;
+  4: required list<NimbusSummary> nimbuses;
+}
+
+struct NimbusSummary {
+  1: required string host;
+  2: required i32 port;
+  3: required i32 uptime_secs;
+  4: required bool isLeader;
+  5: required string version;
+}
+```
+
+This will be used by StormSubmitter, Nimbus clients, supervisors and the ui to discover the current leader and participating
+nimbus hosts. Any nimbus host will be able to respond to these requests. The nimbus hosts can read this information once
+from zookeeper and cache it, updating the cache when watchers are fired to indicate any changes, which should
+be rare in the general case.
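+
+A sketch of how a client might discover the leader from the new structures (assumes a connected Nimbus thrift `client`; the thrift-generated accessor names follow storm's underscore convention):
+
+```java
+ClusterSummary summary = client.getClusterInfo();
+for (NimbusSummary nimbus : summary.get_nimbuses()) {
+    if (nimbus.is_isLeader()) {
+        System.out.println("leader: " + nimbus.get_host() + ":" + nimbus.get_port());
+    }
+}
+```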
+
+Note: All nimbus hosts have watchers on zookeeper so they are notified immediately when a new blob is available for download; the callback may or may not download
+the code. Therefore, a background thread is triggered to download the respective blobs to run the topologies. The replication is achieved when the blobs are downloaded
+onto non-leader nimbuses. So you should expect your topology submission time to be somewhere between 0 and (2 * nimbus.code.sync.freq.secs) for any
+topology.min.replication.count > 1.
+
+## Configuration
+
+```
+blobstore.dir: The directory where all blobs are stored. For local file system it represents the directory on the nimbus
+node and for HDFS file system it represents the hdfs file system path.
+
+supervisor.blobstore.class: This configuration sets the client the supervisor uses to talk to the blobstore.
+For a local file system blobstore it is set to “backtype.storm.blobstore.NimbusBlobStore” and for the HDFS blobstore it is set
+to “backtype.storm.blobstore.HdfsClientBlobStore”.
+
+supervisor.blobstore.download.thread.count: This configuration spawns multiple threads on the supervisor to download
+blobs concurrently. The default is set to 5.
+
+supervisor.blobstore.download.max_retries: This configuration sets how many times the supervisor will retry a blob download.
+By default it is set to 3.
+
+supervisor.localizer.cache.target.size.mb: The distributed cache target size in MB. This is a soft limit to the size
+of the distributed cache contents. It is set to 10240 MB.
+
+supervisor.localizer.cleanup.interval.ms: The distributed cache cleanup interval. Controls how often it scans to attempt to
+cleanup anything over the cache target size. By default it is set to 600000 milliseconds.
+
+nimbus.blobstore.class: Sets the blobstore implementation nimbus uses. It is set to "backtype.storm.blobstore.LocalFsBlobStore".
+
+nimbus.blobstore.expiration.secs: During operations with the blobstore, via master, how long a connection is idle before nimbus
+considers it dead and drops the session and any associated connections. The default is set to 600.
+
+storm.blobstore.inputstream.buffer.size.bytes: The buffer size it uses for blobstore upload. It is set to 65536 bytes.
+
+client.blobstore.class: The blobstore implementation the storm client uses. The current implementation uses the default
+config "backtype.storm.blobstore.NimbusBlobStore".
+
+blobstore.replication.factor: It sets the replication for each blob within the blobstore. The “topology.min.replication.count”
+ensures the minimum replication of topology specific blobs before launching the topology. You might want to set
+“topology.min.replication.count <= blobstore.replication.factor”. The default is set to 3.
+
+topology.min.replication.count: Minimum number of nimbus hosts where the code must be replicated before the leader nimbus
+can mark the topology as active and create assignments. Default is 1.
+
+topology.max.replication.wait.time.sec: Maximum wait time for the nimbus host replication to achieve topology.min.replication.count.
+Once this time has elapsed, nimbus will go ahead and perform topology activation tasks even if the required topology.min.replication.count is not achieved.
+The default is 60 seconds; a value of -1 indicates to wait forever.
+
+nimbus.code.sync.freq.secs: Frequency at which the background thread on nimbus syncs code for locally missing blobs. Default is 2 minutes.
+```
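+
+For example, a sketch of setting the replication related configs when submitting a topology (the values are illustrative; the keys are the ones listed above):
+
+```java
+Config conf = new Config();
+conf.put("topology.min.replication.count", 2);           // wait until 2 nimbus hosts have the code
+conf.put("topology.max.replication.wait.time.sec", 60);  // then activate anyway after 60s
+```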
+
+## Using the Distributed Cache API, Command Line Interface (CLI)
+
+### Creating blobs 
+
+To use the distributed cache feature, the user first has to "introduce" files
+that need to be cached and bind them to key strings. To achieve this, the user
+uses the "blobstore create" command of the storm executable, as follows:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore create [-f|--file FILE] [-a|--acl ACL1,ACL2,...] [--repl-fctr NUMBER] [keyname]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The contents come from a FILE, if provided by the -f or --file option, otherwise
+from STDIN.
+The ACLs, which can also be a comma separated list of many ACLs, are of the
+following format:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+> [u|o]:[username]:[r-|w-|a-|_]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+where:  
+
+* u = user  
+* o = other  
+* username = user for this particular ACL  
+* r = read access  
+* w = write access  
+* a = admin access  
+* _ = ignored  
+
+The replication factor can be set to a value greater than 1 using --repl-fctr.
+
+Note: The replication right now is configurable for an HDFS blobstore, but for a
+local blobstore the replication always stays at 1. For an HDFS blobstore
+the default replication is set to 3.
+
+###### Example:  
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore create --file README.txt --acl o::rwa --repl-fctr 4 key1
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the above example, the *README.txt* file is added to the distributed cache.
+It can be accessed using the key string "*key1*" by any topology that needs
+it. The file is set to have read/write/admin access for others (a.k.a. the world),
+and the replication is set to 4.
+
+###### Example:  
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore create mytopo:data.tgz -f data.tgz -a u:alice:rwa,u:bob:rw,o::r  
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The above example creates a mytopo:data.tgz key using the data stored in
+data.tgz. User alice would have full access, bob would have read/write access
+and everyone else would have read access.
+
+### Making dist. cache files accessible to topologies
+
+Once a blob is created, we can use it for topologies. This is generally achieved
+by including the key string among the configurations of a topology, with the
+following format. A shortcut is to add the configuration item on the command
+line when starting a topology by using the **-c** option:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-c topology.blobstore.map='{"[KEY]":{"localname":"[VALUE]", "uncompress":"[true|false]"}}'
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Note: Please take care of the quotes.
+
+The cache file would then be accessible to the topology as a local file with the
+name [VALUE].
+The localname parameter is optional; if omitted, the local cached file will have
+the same name as [KEY].
+The uncompress parameter is optional; if omitted, the local cached file will not
+be uncompressed. Note that the key string needs to have the appropriate
+file-name-like format and extension, so it can be uncompressed correctly.
+
+###### Example:  
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm jar /home/y/lib/storm-starter/current/storm-starter-jar-with-dependencies.jar storm.starter.clj.word_count test_topo -c topology.blobstore.map='{"key1":{"localname":"blob_file", "uncompress":"false"},"key2":{}}'
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Note: Please take care of the quotes.
+
+In the above example, we start the *word_count* topology (stored in the
+*storm-starter-jar-with-dependencies.jar* file), and ask it to have access
+to the cached file stored with key string = *key1*. This file would then be
+accessible to the topology as a local file called *blob_file*, and the
+supervisor will not try to uncompress the file. Note that in our example, the
+file's content originally came from *README.txt*. We also ask for the file
+stored with the key string = *key2* to be accessible to the topology. Since
+both the optional parameters are omitted, this file will get the local name =
+*key2*, and will not be uncompressed.
+
+### Updating a cached file
+
+It is possible for the cached files to be updated while topologies are running.
+The update happens in an eventual consistency model, where the supervisors poll
+Nimbus every 30 seconds, and update their local copies. In the current version,
+it is the user's responsibility to check whether a new file is available.
+
+To update a cached file, use the following command. Contents come from a FILE or
+STDIN. Write access is required to be able to update a cached file.
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore update [-f|--file NEW_FILE] [KEYSTRING]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+###### Example:  
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore update -f updates.txt key1
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the above example, the topologies will be presented with the contents of the
+file *updates.txt* instead of *README.txt* (from the previous example), even
+though their access by the topology is still through a file called
+*blob_file*.
+
+### Removing a cached file
+
+To remove a file from the distributed cache, use the following command. Removing
+a file requires write access.
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore delete [KEYSTRING]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Listing Blobs currently in the distributed cache blobstore
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore list [KEY...]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Lists blobs currently in the blobstore.
+
+### Reading the contents of a blob
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore cat [-f|--file FILE] KEY
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Reads a blob and then writes it either to a file or to STDOUT. Reading a blob
+requires read access.
+
+### Setting the access control for a blob
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore set-acl [-s ACL] KEY
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ACL is in the form [uo]:[username]:[r-][w-][a-] and can be a comma separated list.
+Setting the ACL requires admin access.
+
+### Update the replication factor for a blob
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore replication --update --repl-fctr 5 key1
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Read the replication factor of a blob
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore replication --read key1
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Command line help
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm help blobstore
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+## Using the Distributed Cache API from Java
+
+We start by getting a ClientBlobStore object as follows:
+
+``` java
+Config theconf = new Config();
+theconf.putAll(Utils.readStormConfig());
+ClientBlobStore clientBlobStore = Utils.getClientBlobStore(theconf);
+```
+
+The required Utils class can be imported with:
+
+```java
+import backtype.storm.utils.Utils;
+```
+
+ClientBlobStore and other blob-related classes can be imported by:
+
+```java
+import backtype.storm.blobstore.ClientBlobStore;
+import backtype.storm.blobstore.AtomicOutputStream;
+import backtype.storm.blobstore.InputStreamWithMeta;
+import backtype.storm.blobstore.BlobStoreAclHandler;
+import backtype.storm.generated.*;
+```
+
+### Creating ACLs to be used for blobs
+
+```java
+String stringBlobACL = "u:username:rwa";
+AccessControl blobACL = BlobStoreAclHandler.parseAccessControl(stringBlobACL);
+List<AccessControl> acls = new LinkedList<AccessControl>();
+acls.add(blobACL); // more ACLs can be added here
+SettableBlobMeta settableBlobMeta = new SettableBlobMeta(acls);
+settableBlobMeta.set_replication_factor(4); // Here we can set the replication factor
+```
+
+The settableBlobMeta object is what we need to create a blob in the next step. 
+
+### Creating a blob
+
+```java
+AtomicOutputStream blobStream = clientBlobStore.createBlob("some_key", settableBlobMeta);
+blobStream.write("Some String or input data".getBytes());
+blobStream.close();
+```
+
+Note that the settableBlobMeta object here comes from the last step, creating ACLs.
+It is recommended that for very large files, the user writes the bytes in smaller chunks (for example 64 KB, up to 1 MB chunks).
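+
+A sketch of chunked writing from a local file (assumes `import java.io.FileInputStream;`; the key and file name are illustrative):
+
+```java
+AtomicOutputStream blobStream = clientBlobStore.createBlob("big_key", settableBlobMeta);
+byte[] buffer = new byte[64 * 1024]; // 64 KB chunks
+try (FileInputStream in = new FileInputStream("large_file.bin")) {
+    int read;
+    while ((read = in.read(buffer)) != -1) {
+        blobStream.write(buffer, 0, read);
+    }
+}
+blobStream.close();
+```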
+
+### Updating a blob
+
+Similar to creating a blob, but we get the AtomicOutputStream in a different way:
+
+```java
+String blobKey = "some_key";
+AtomicOutputStream blobStream = clientBlobStore.updateBlob(blobKey);
+```
+
+Pass a byte stream to the returned AtomicOutputStream as before. 
+
+### Updating the ACLs of a blob
+
+```java
+String blobKey = "some_key";
+AccessControl updateAcl = BlobStoreAclHandler.parseAccessControl("u:USER:--a");
+List<AccessControl> updateAcls = new LinkedList<AccessControl>();
+updateAcls.add(updateAcl);
+SettableBlobMeta modifiedSettableBlobMeta = new SettableBlobMeta(updateAcls);
+clientBlobStore.setBlobMeta(blobKey, modifiedSettableBlobMeta);
+
+//Now set write only
+updateAcl = BlobStoreAclHandler.parseAccessControl("u:USER:-w-");
+updateAcls = new LinkedList<AccessControl>();
+updateAcls.add(updateAcl);
+modifiedSettableBlobMeta = new SettableBlobMeta(updateAcls);
+clientBlobStore.setBlobMeta(blobKey, modifiedSettableBlobMeta);
+```
+
+### Updating and Reading the replication of a blob
+
+```java
+String blobKey = "some_key";
+BlobReplication replication = clientBlobStore.updateBlobReplication(blobKey, 5);
+int replication_factor = replication.get_replication();
+```
+
+Note: The replication factor gets updated and reflected only for the HDFS blobstore.
+
+### Reading a blob
+
+```java
+String blobKey = "some_key";
+InputStreamWithMeta blobInputStream = clientBlobStore.getBlob(blobKey);
+BufferedReader r = new BufferedReader(new InputStreamReader(blobInputStream));
+String blobContents = r.readLine(); // reads the first line of the blob
+```
+
+### Deleting a blob
+
+```java
+String blobKey = "some_key";
+clientBlobStore.deleteBlob(blobKey);
+```
+
+### Getting a list of blob keys already in the blobstore
+
+```java
+Iterator <String> stringIterator = clientBlobStore.listKeys();
+```
+
+## Appendix A
+
+```java
+public abstract void prepare(Map conf, String baseDir);
+
+public abstract AtomicOutputStream createBlob(String key, SettableBlobMeta meta, Subject who) throws AuthorizationException, KeyAlreadyExistsException;
+
+public abstract AtomicOutputStream updateBlob(String key, Subject who) throws AuthorizationException, KeyNotFoundException;
+
+public abstract ReadableBlobMeta getBlobMeta(String key, Subject who) throws AuthorizationException, KeyNotFoundException;
+
+public abstract void setBlobMeta(String key, SettableBlobMeta meta, Subject who) throws AuthorizationException, KeyNotFoundException;
+
+public abstract void deleteBlob(String key, Subject who) throws AuthorizationException, KeyNotFoundException;
+
+public abstract InputStreamWithMeta getBlob(String key, Subject who) throws AuthorizationException, KeyNotFoundException;
+
+public abstract Iterator<String> listKeys(Subject who);
+
+public abstract BlobReplication getBlobReplication(String key, Subject who) throws Exception;
+
+public abstract BlobReplication updateBlobReplication(String key, int replication, Subject who) throws AuthorizationException, KeyNotFoundException, IOException;
+```
+
+## Appendix B
+
+```java
+public abstract void prepare(Map conf);
+
+protected abstract AtomicOutputStream createBlobToExtend(String key, SettableBlobMeta meta) throws AuthorizationException, KeyAlreadyExistsException;
+
+public abstract AtomicOutputStream updateBlob(String key) throws AuthorizationException, KeyNotFoundException;
+
+public abstract ReadableBlobMeta getBlobMeta(String key) throws AuthorizationException, KeyNotFoundException;
+
+protected abstract void setBlobMetaToExtend(String key, SettableBlobMeta meta) throws AuthorizationException, KeyNotFoundException;
+
+public abstract void deleteBlob(String key) throws AuthorizationException, KeyNotFoundException;
+
+public abstract InputStreamWithMeta getBlob(String key) throws AuthorizationException, KeyNotFoundException;
+
+public abstract Iterator<String> listKeys();
+
+public abstract void watchBlob(String key, IBlobWatcher watcher) throws AuthorizationException;
+
+public abstract void stopWatchingBlob(String key) throws AuthorizationException;
+
+public abstract BlobReplication getBlobReplication(String key) throws AuthorizationException, KeyNotFoundException;
+
+public abstract BlobReplication updateBlobReplication(String key, int replication) throws AuthorizationException, KeyNotFoundException;
+```
+
+## Appendix C
+
+``` thrift
+service Nimbus {
+...
+string beginCreateBlob(1: string key, 2: SettableBlobMeta meta) throws (1: AuthorizationException aze, 2: KeyAlreadyExistsException kae);
+
+string beginUpdateBlob(1: string key) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf);
+
+void uploadBlobChunk(1: string session, 2: binary chunk) throws (1: AuthorizationException aze);
+
+void finishBlobUpload(1: string session) throws (1: AuthorizationException aze);
+
+void cancelBlobUpload(1: string session) throws (1: AuthorizationException aze);
+
+ReadableBlobMeta getBlobMeta(1: string key) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf);
+
+void setBlobMeta(1: string key, 2: SettableBlobMeta meta) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf);
+
+BeginDownloadResult beginBlobDownload(1: string key) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf);
+
+binary downloadBlobChunk(1: string session) throws (1: AuthorizationException aze);
+
+void deleteBlob(1: string key) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf);
+
+ListBlobsResult listBlobs(1: string session);
+
+BlobReplication getBlobReplication(1: string key) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf);
+
+BlobReplication updateBlobReplication(1: string key, 2: i32 replication) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf);
+...
+}
+
+struct BlobReplication {
+1: required i32 replication;
+}
+
+exception AuthorizationException {
+ 1: required string msg;
+}
+
+exception KeyNotFoundException {
+ 1: required string msg;
+}
+
+exception KeyAlreadyExistsException {
+ 1: required string msg;
+}
+
+enum AccessControlType {
+ OTHER = 1,
+ USER = 2
+ //eventually ,GROUP=3
+}
+
+struct AccessControl {
+ 1: required AccessControlType type;
+ 2: optional string name; //Name of user or group in ACL
+ 3: required i32 access; //bitmasks READ=0x1, WRITE=0x2, ADMIN=0x4
+}
+
+struct SettableBlobMeta {
+ 1: required list<AccessControl> acl;
+ 2: optional i32 replication_factor
+}
+
+struct ReadableBlobMeta {
+ 1: required SettableBlobMeta settable;
+ //This is some indication of a version of a BLOB.  The only guarantee is
+ // if the data changed in the blob the version will be different.
+ 2: required i64 version;
+}
+
+struct ListBlobsResult {
+ 1: required list<string> keys;
+ 2: required string session;
+}
+
+struct BeginDownloadResult {
+ //Same version as in ReadableBlobMeta
+ 1: required i64 version;
+ 2: required string session;
+ 3: optional i64 data_size;
+}
+```

http://git-wip-us.apache.org/repos/asf/storm/blob/d63146b7/documentation/dynamic-log-level-settings.md
----------------------------------------------------------------------
diff --git a/documentation/dynamic-log-level-settings.md b/documentation/dynamic-log-level-settings.md
new file mode 100644
index 0000000..f38b708
--- /dev/null
+++ b/documentation/dynamic-log-level-settings.md
@@ -0,0 +1,41 @@
+Dynamic Log Level Settings
+==========================
+
+We have added the ability to set log level settings for a running topology using the Storm UI and the Storm CLI. 
+
+The log level settings apply the same way as you'd expect from log4j, as all we are doing is telling log4j to set the level of the logger you provide. If you set the log level of a parent logger, the child loggers start using that level (unless the children have a more restrictive level already). A timeout can optionally be provided (except for DEBUG mode, where it’s required in the UI) so that workers reset the log levels automatically after the timeout expires.
+
+This revert action is triggered using a polling mechanism (every 30 seconds, but this is configurable), so you should expect the effective timeout to be the value you provided plus anywhere between 0 and the polling interval.
+
+Using the Storm UI
+-------------
+
+In order to set a level, click on a running topology, and then click on “Change Log Level” in the Topology Actions section.
+
+![Change Log Level dialog](images/dynamic_log_level_settings_1.png "Change Log Level dialog")
+
+Next, provide the logger name, select the level you expect (e.g. WARN), and a timeout in seconds (or 0 if not needed). Then click on “Add”.
+
+![After adding a log level setting](images/dynamic_log_level_settings_2.png "After adding a log level setting")
+
+To clear the log level click on the “Clear” button. This reverts the log level back to what it was before you added the setting. The log level line will disappear from the UI.
+
+While there is a delay resetting log levels back, setting the log level in the first place is immediate (or as quickly as the message can travel from the UI/CLI to the workers by way of nimbus and zookeeper).
+
+Using the CLI
+-------------
+
+Using the CLI, issue the command:
+
+`./bin/storm set_log_level [topology name] -l [logger name]=[LEVEL]:[TIMEOUT]`
+
+For example:
+
+`./bin/storm set_log_level my_topology -l ROOT=DEBUG:30`
+
+Sets the ROOT logger to DEBUG for 30 seconds.
+
+`./bin/storm set_log_level my_topology -r ROOT`
+
+Clears the ROOT logger dynamic log level, resetting it to its original value.
+

http://git-wip-us.apache.org/repos/asf/storm/blob/d63146b7/documentation/dynamic-worker-profiling.md
----------------------------------------------------------------------
diff --git a/documentation/dynamic-worker-profiling.md b/documentation/dynamic-worker-profiling.md
new file mode 100644
index 0000000..088a232
--- /dev/null
+++ b/documentation/dynamic-worker-profiling.md
@@ -0,0 +1,33 @@
+Dynamic Worker Profiling
+==========================
+
+In multi-tenant mode, storm launches long-running JVMs across the cluster without sudo access for the user. Self-service Java heap dumps, jstacks and Java profiling of these JVMs improve users' ability to analyze and debug issues while actively monitoring them.
+
+The storm dynamic profiler lets you dynamically take heap dumps, jprofile recordings or jstacks for a worker JVM running on a stock cluster. It lets users download these dumps from the browser and use their favorite tools to analyze them. The UI component page lists the workers for the component and provides action buttons. The logviewer lets you download the dumps generated by these actions. Please see the screenshots for more information.
+
+Using the Storm UI
+-------------
+
+To request a heap dump, jstack, jprofile start/stop/dump, or a worker restart, click on a running topology, then click on the specific component. You can select workers by checking the box of any of the worker's executors in the Executors table, and then click on “Start”, “Heap”, “Jstack” or “Restart Worker” in the “Profiling and Debugging” section.
+
+![Selecting Workers](images/dynamic_profiling_debugging_4.png "Selecting Workers")
+
+In the Executors table, click the checkbox in the Actions column next to any executor, and any other executors belonging to the same worker are automatically selected. When the action has completed, any output files created will be available at the link in the Actions column.
+
+![Profiling and Debugging](images/dynamic_profiling_debugging_1.png "Profiling and Debugging")
+
+To start jprofile, provide a timeout in minutes (or 10 if not needed). Then click on “Start”.
+
+![After starting jprofile for worker](images/dynamic_profiling_debugging_2.png "After jprofile for worker ")
+
+To stop the jprofile logging click on the “Stop” button. This dumps the jprofile stats and stops the profiling. Refresh the page for the line to disappear from the UI.
+
+Click on "My Dump Files" to go the logviewer UI for list of worker specific dump files.
+
+![Dump Files Links for worker](images/dynamic_profiling_debugging_3.png "Dump Files Links for worker")
+
+Configuration
+-------------
+
+The "worker.profiler.command" can be configured to point to specific pluggable profiler, heapdump commands. The "worker.profiler.enabled" can be disabled if plugin is not available or jdk does not support Jprofile flight recording so that worker JVM options will not have "worker.profiler.childopts". To use different profiler plugin, you can change these configuration.
+

http://git-wip-us.apache.org/repos/asf/storm/blob/d63146b7/documentation/images/dynamic_log_level_settings_1.png
----------------------------------------------------------------------
diff --git a/documentation/images/dynamic_log_level_settings_1.png b/documentation/images/dynamic_log_level_settings_1.png
new file mode 100644
index 0000000..71d42e7
Binary files /dev/null and b/documentation/images/dynamic_log_level_settings_1.png differ

http://git-wip-us.apache.org/repos/asf/storm/blob/d63146b7/documentation/images/dynamic_log_level_settings_2.png
----------------------------------------------------------------------
diff --git a/documentation/images/dynamic_log_level_settings_2.png b/documentation/images/dynamic_log_level_settings_2.png
new file mode 100644
index 0000000..d0e61a7
Binary files /dev/null and b/documentation/images/dynamic_log_level_settings_2.png differ

http://git-wip-us.apache.org/repos/asf/storm/blob/d63146b7/documentation/images/dynamic_profiling_debugging_1.png
----------------------------------------------------------------------
diff --git a/documentation/images/dynamic_profiling_debugging_1.png b/documentation/images/dynamic_profiling_debugging_1.png
new file mode 100644
index 0000000..6be1f86
Binary files /dev/null and b/documentation/images/dynamic_profiling_debugging_1.png differ

http://git-wip-us.apache.org/repos/asf/storm/blob/d63146b7/documentation/images/dynamic_profiling_debugging_2.png
----------------------------------------------------------------------
diff --git a/documentation/images/dynamic_profiling_debugging_2.png b/documentation/images/dynamic_profiling_debugging_2.png
new file mode 100644
index 0000000..342ad94
Binary files /dev/null and b/documentation/images/dynamic_profiling_debugging_2.png differ

http://git-wip-us.apache.org/repos/asf/storm/blob/d63146b7/documentation/images/dynamic_profiling_debugging_3.png
----------------------------------------------------------------------
diff --git a/documentation/images/dynamic_profiling_debugging_3.png b/documentation/images/dynamic_profiling_debugging_3.png
new file mode 100644
index 0000000..5706d7e
Binary files /dev/null and b/documentation/images/dynamic_profiling_debugging_3.png differ

http://git-wip-us.apache.org/repos/asf/storm/blob/d63146b7/documentation/images/dynamic_profiling_debugging_4.png
----------------------------------------------------------------------
diff --git a/documentation/images/dynamic_profiling_debugging_4.png b/documentation/images/dynamic_profiling_debugging_4.png
new file mode 100644
index 0000000..0afe9f4
Binary files /dev/null and b/documentation/images/dynamic_profiling_debugging_4.png differ

http://git-wip-us.apache.org/repos/asf/storm/blob/d63146b7/documentation/images/hdfs_blobstore.png
----------------------------------------------------------------------
diff --git a/documentation/images/hdfs_blobstore.png b/documentation/images/hdfs_blobstore.png
new file mode 100644
index 0000000..11c5c10
Binary files /dev/null and b/documentation/images/hdfs_blobstore.png differ

http://git-wip-us.apache.org/repos/asf/storm/blob/d63146b7/documentation/images/local_blobstore.png
----------------------------------------------------------------------
diff --git a/documentation/images/local_blobstore.png b/documentation/images/local_blobstore.png
new file mode 100644
index 0000000..ff8001e
Binary files /dev/null and b/documentation/images/local_blobstore.png differ

http://git-wip-us.apache.org/repos/asf/storm/blob/d63146b7/documentation/images/nimbus_ha_blobstore.png
----------------------------------------------------------------------
diff --git a/documentation/images/nimbus_ha_blobstore.png b/documentation/images/nimbus_ha_blobstore.png
new file mode 100644
index 0000000..26e8c2a
Binary files /dev/null and b/documentation/images/nimbus_ha_blobstore.png differ

http://git-wip-us.apache.org/repos/asf/storm/blob/d63146b7/documentation/images/search-a-topology.png
----------------------------------------------------------------------
diff --git a/documentation/images/search-a-topology.png b/documentation/images/search-a-topology.png
new file mode 100644
index 0000000..8d6153c
Binary files /dev/null and b/documentation/images/search-a-topology.png differ

http://git-wip-us.apache.org/repos/asf/storm/blob/d63146b7/documentation/images/search-for-a-single-worker-log.png
----------------------------------------------------------------------
diff --git a/documentation/images/search-for-a-single-worker-log.png b/documentation/images/search-for-a-single-worker-log.png
new file mode 100644
index 0000000..8c6f423
Binary files /dev/null and b/documentation/images/search-for-a-single-worker-log.png differ

http://git-wip-us.apache.org/repos/asf/storm/blob/d63146b7/documentation/storm-metrics-profiling-internal-actions.md
----------------------------------------------------------------------
diff --git a/documentation/storm-metrics-profiling-internal-actions.md b/documentation/storm-metrics-profiling-internal-actions.md
new file mode 100644
index 0000000..e549c0c
--- /dev/null
+++ b/documentation/storm-metrics-profiling-internal-actions.md
@@ -0,0 +1,70 @@
+# Storm Metrics for Profiling Various Storm Internal Actions
+
+With the addition of these metrics, Storm users can collect, view, and analyze the performance of various internal actions. The actions that are profiled include thrift RPC calls and HTTP requests within the storm daemons. For instance, in the Storm Nimbus daemon, the following thrift calls defined in the Nimbus$Iface are profiled:
+
+- submitTopology
+- submitTopologyWithOpts
+- killTopology
+- killTopologyWithOpts
+- activate
+- deactivate
+- rebalance
+- setLogConfig
+- getLogConfig
+
+Various HTTP GET and POST requests are marked for profiling as well, such as the GET and POST requests for the Storm UI daemon (ui/core.clj).
+To implement these metrics the following packages are used:
+- io.dropwizard.metrics
+- metrics-clojure
+
+## How it works
+
+By using the packages io.dropwizard.metrics and metrics-clojure (a clojure wrapper for the metrics Java API), we can mark functions to profile by declaring (defmeter num-some-func-calls) and then adding (mark! num-some-func-calls) where the function is invoked. For example:
+
+    (defmeter num-some-func-calls)
+    (defn some-func [args]
+        (mark! num-some-func-calls)
+        (body))
+        
+Essentially what the mark! API call does is increment a counter that represents how many times a certain action occurred. For instantaneous measurements, users can use gauges. For example:
+
+    (defgauge nimbus:num-supervisors
+         (fn [] (.size (.supervisors (:storm-cluster-state nimbus) nil))))
+         
+The above example will get the number of supervisors in the cluster. This metric is not accumulative like the one previously discussed.
+
+A metrics reporting server also needs to be activated to collect the metrics. You can do this by calling the following function:
+
+    (defn start-metrics-reporters []
+        (jmx/start (jmx/reporter {})))
+
+## How to collect the metrics
+
+Metrics can be reported via JMX or HTTP. A user can use JConsole or VisualVM to connect to the JVM process and view the stats.
+
+Screenshot of using VisualVM to view metrics:
+
+![Viewing metrics with VisualVM](images/viewing_metrics_with_VisualVM.png)
+
+For detailed information regarding how to collect the metrics please reference: 
+
+https://dropwizard.github.io/metrics/3.1.0/getting-started/
+
+If you want to use JMX and view metrics through JConsole or VisualVM, remember to launch the JVM processes you want to profile with the correct JMX configurations. For example, in Storm you would add the following to conf/storm.yaml:
+
+    nimbus.childopts: "-Xmx1024m -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=3333  -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
+    
+    ui.childopts: "-Xmx768m -Dcom.sun.management.jmxremote.port=3334 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
+    
+    logviewer.childopts: "-Xmx128m -Dcom.sun.management.jmxremote.port=3335 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
+    
+    drpc.childopts: "-Xmx768m -Dcom.sun.management.jmxremote.port=3336 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
+   
+    supervisor.childopts: "-Xmx256m -Dcom.sun.management.jmxremote.port=3337 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
+
+### Please Note:
+Since we shade all of the packages we use, additional plugins for collecting metrics might not work at this time.  Currently collecting the metrics via JMX is supported.
+   
+For more information about io.dropwizard.metrics and metrics-clojure packages please reference their original documentation:
+- https://dropwizard.github.io/metrics/3.1.0/
+- http://metrics-clojure.readthedocs.org/en/latest/
\ No newline at end of file