Posted to commits@storm.apache.org by bo...@apache.org on 2016/01/11 21:57:46 UTC

[50/53] [abbrv] [partial] storm git commit: STORM-1202: Migrate APIs to org.apache.storm, but try to provide some form of backwards compatability

http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/DEVELOPER.md
----------------------------------------------------------------------
diff --git a/DEVELOPER.md b/DEVELOPER.md
index 7a98ead..5fc1c36 100644
--- a/DEVELOPER.md
+++ b/DEVELOPER.md
@@ -133,8 +133,8 @@ To mark a Java test as a Java integration test, add the annotation `@Category(In
  
 To mark a Clojure test as Clojure integration test, the test source must be located in a package with name prefixed by `integration.`
 
-For example, the test `test/clj/backtype.storm.drpc_test.clj` is considered a clojure unit test, whereas
- `test/clj/integration.backtype.storm.drpc_test.clj` is considered a clojure integration test.
+For example, the test `test/clj/org.apache.storm.drpc_test.clj` is considered a clojure unit test, whereas
+ `test/clj/integration.org.apache.storm.drpc_test.clj` is considered a clojure integration test.
 
 Please refer to section <a href="#building">Build the code and run the tests</a> for how to run integration tests, and the info on the build phase each test runs. 
 
@@ -301,8 +301,8 @@ To run all unit tests and all integration tests execute one of the commands
  
  
 You can also run tests selectively via the Clojure REPL.  The following example runs the tests in
-[auth_test.clj](storm-core/test/clj/backtype/storm/security/auth/auth_test.clj), which has the namespace
-`backtype.storm.security.auth.auth-test`.
+[auth_test.clj](storm-core/test/clj/org/apache/storm/security/auth/auth_test.clj), which has the namespace
+`org.apache.storm.security.auth.auth-test`.
 
 You can also run tests selectively with `-Dtest=<test_name>`.  This works for both clojure and junit tests.
 
@@ -360,8 +360,8 @@ Tests should never rely on timing in order to pass.  Storm can properly test fun
 simulating time, which means we do not have to worry about e.g. random delays failing our tests indeterministically.
 
 If you are testing topologies that do not do full tuple acking, then you should be testing using the "tracked
-topologies" utilities in `backtype.storm.testing.clj`.  For example,
-[test-acking](storm-core/test/clj/backtype/storm/integration_test.clj) (around line 213) tests the acking system in
+topologies" utilities in `org.apache.storm.testing.clj`.  For example,
+[test-acking](storm-core/test/clj/org/apache/storm/integration_test.clj) (around line 213) tests the acking system in
 Storm using tracked topologies.  Here, the key is the `tracked-wait` function: it will only return when both that many
 tuples have been emitted by the spouts _and_ the topology is idle (i.e. no tuples have been emitted nor will be emitted
 without further input).  Note that you should not use tracked topologies for topologies that have tick tuples.

http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/SECURITY.md
----------------------------------------------------------------------
diff --git a/SECURITY.md b/SECURITY.md
index 6d6c825..e9966b6 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -172,7 +172,7 @@ Each jaas file may have multiple sections for different interfaces being used.
 
 To enable Kerberos authentication in storm you need to set the following `storm.yaml` configs
 ```yaml
-storm.thrift.transport: "backtype.storm.security.auth.kerberos.KerberosSaslTransportPlugin"
+storm.thrift.transport: "org.apache.storm.security.auth.kerberos.KerberosSaslTransportPlugin"
 java.security.auth.login.config: "/path/to/jaas.conf"
 ```
 
@@ -275,7 +275,7 @@ Server {
 Nimbus also will translate the principal into a local user name, so that other services can use this name.  To configure this for Kerberos authentication set
 
 ```
-storm.principal.tolocal: "backtype.storm.security.auth.KerberosPrincipalToLocal"
+storm.principal.tolocal: "org.apache.storm.security.auth.KerberosPrincipalToLocal"
 ```
 
 This only needs to be done on nimbus, but it will not hurt on any node.
@@ -324,7 +324,7 @@ The end user can override this if they have a headless user that has a keytab.
 The preferred authorization plug-in for nimbus is The *SimpleACLAuthorizer*.  To use the *SimpleACLAuthorizer*, set the following:
 
 ```yaml
-nimbus.authorizer: "backtype.storm.security.auth.authorizer.SimpleACLAuthorizer"
+nimbus.authorizer: "org.apache.storm.security.auth.authorizer.SimpleACLAuthorizer"
 ```
 
 DRPC has a separate authorizer configuration for it.  Do not use SimpleACLAuthorizer for DRPC.
@@ -349,7 +349,7 @@ To ensure isolation of users in multi-tenancy, the supervisors must run under a
 
 To support multi-tenancy better we have written a new scheduler.  To enable this scheduler set:
 ```yaml
-storm.scheduler: "backtype.storm.scheduler.multitenant.MultitenantScheduler"
+storm.scheduler: "org.apache.storm.scheduler.multitenant.MultitenantScheduler"
 ```
 Be aware that many of the features of this scheduler rely on storm authentication.  Without storm authentication, the scheduler will not know what the user is, and thus will not isolate topologies properly.
 
@@ -392,11 +392,11 @@ A storm client may submit requests on behalf of another user. For example, if a
 it can do so by leveraging the impersonation feature. In order to submit a topology as some other user, you can use the `StormSubmitter.submitTopologyAs` API. Alternatively you can use `NimbusClient.getConfiguredClientAs`
 to get a nimbus client as some other user and perform any nimbus action (i.e., kill/rebalance/activate/deactivate) using this client.
 
-To ensure only authorized users can perform impersonation, you should start nimbus with `nimbus.impersonation.authorizer` set to `backtype.storm.security.auth.authorizer.ImpersonationAuthorizer`.
+To ensure only authorized users can perform impersonation, you should start nimbus with `nimbus.impersonation.authorizer` set to `org.apache.storm.security.auth.authorizer.ImpersonationAuthorizer`.
 The `ImpersonationAuthorizer` uses `nimbus.impersonation.acl` as the acl to authorize users. Following is a sample nimbus config for supporting impersonation:
 
 ```yaml
-nimbus.impersonation.authorizer: backtype.storm.security.auth.authorizer.ImpersonationAuthorizer
+nimbus.impersonation.authorizer: org.apache.storm.security.auth.authorizer.ImpersonationAuthorizer
 nimbus.impersonation.acl:
     impersonating_user1:
         hosts:
@@ -425,7 +425,7 @@ Individual topologies have the ability to push credentials (tickets and tokens)
 To hide this from them, in the common case plugins can be used to populate the credentials, unpack them on the other side into a java Subject, and also allow Nimbus to renew the credentials if needed.
 These are controlled by the following configs:
 
-* `topology.auto-credentials`: a list of java plugins, all of which must implement IAutoCredentials interface, that populate the credentials on gateway and unpack them on the worker side. On a kerberos secure cluster they should be set by default to point to `backtype.storm.security.auth.kerberos.AutoTGT`.  `nimbus.credential.renewers.classes` should also be set to this value so that nimbus can periodically renew the TGT on behalf of the user.
+* `topology.auto-credentials`: a list of java plugins, all of which must implement IAutoCredentials interface, that populate the credentials on gateway and unpack them on the worker side. On a kerberos secure cluster they should be set by default to point to `org.apache.storm.security.auth.kerberos.AutoTGT`.  `nimbus.credential.renewers.classes` should also be set to this value so that nimbus can periodically renew the TGT on behalf of the user.
 * `nimbus.credential.renewers.freq.secs`: controls how often the renewer will poll to see if anything needs to be renewed, but the default should be fine.
 
 In addition Nimbus itself can be used to get credentials on behalf of the user submitting topologies. This can be configures using:

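Taken together, the SECURITY.md renames above amount to the following `storm.yaml` fragment for a Kerberos-secured cluster (a sketch assembled from the snippets in this diff, not a complete or validated configuration):

```yaml
# Security-related classes after the org.apache.storm rename (illustrative fragment).
storm.thrift.transport: "org.apache.storm.security.auth.kerberos.KerberosSaslTransportPlugin"
java.security.auth.login.config: "/path/to/jaas.conf"
storm.principal.tolocal: "org.apache.storm.security.auth.KerberosPrincipalToLocal"
nimbus.authorizer: "org.apache.storm.security.auth.authorizer.SimpleACLAuthorizer"
storm.scheduler: "org.apache.storm.scheduler.multitenant.MultitenantScheduler"
nimbus.impersonation.authorizer: "org.apache.storm.security.auth.authorizer.ImpersonationAuthorizer"
topology.auto-credentials: ["org.apache.storm.security.auth.kerberos.AutoTGT"]
nimbus.credential.renewers.classes: ["org.apache.storm.security.auth.kerberos.AutoTGT"]
```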
http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/bin/storm-config.cmd
----------------------------------------------------------------------
diff --git a/bin/storm-config.cmd b/bin/storm-config.cmd
index d259e30..cb1e203 100644
--- a/bin/storm-config.cmd
+++ b/bin/storm-config.cmd
@@ -86,7 +86,7 @@ if not defined STORM_LOG_DIR (
 @rem retrieve storm.log4j2.conf.dir from conf file
 @rem
 
-"%JAVA%" -client -Dstorm.options= -Dstorm.conf.file= -cp "%CLASSPATH%" backtype.storm.command.config_value storm.log4j2.conf.dir > %CMD_TEMP_FILE%
+"%JAVA%" -client -Dstorm.options= -Dstorm.conf.file= -cp "%CLASSPATH%" org.apache.storm.command.config_value storm.log4j2.conf.dir > %CMD_TEMP_FILE%
 
 FOR /F "delims=" %%i in (%CMD_TEMP_FILE%) do (
 	FOR /F "tokens=1,* delims= " %%a in ("%%i") do (
@@ -113,7 +113,7 @@ if not defined STORM_LOG4J2_CONFIGURATION_FILE (
   set STORM_LOG4J2_CONFIGURATION_FILE="file://%STORM_HOME%\log4j2\cluster.xml"
 )
 
-"%JAVA%" -client -Dstorm.options= -Dstorm.conf.file= -cp "%CLASSPATH%" backtype.storm.command.config_value java.library.path > %CMD_TEMP_FILE%
+"%JAVA%" -client -Dstorm.options= -Dstorm.conf.file= -cp "%CLASSPATH%" org.apache.storm.command.config_value java.library.path > %CMD_TEMP_FILE%
 
 FOR /F "delims=" %%i in (%CMD_TEMP_FILE%) do (
     FOR /F "tokens=1,* delims= " %%a in ("%%i") do (

http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/bin/storm.cmd
----------------------------------------------------------------------
diff --git a/bin/storm.cmd b/bin/storm.cmd
index ad1a81f..ee125e5 100644
--- a/bin/storm.cmd
+++ b/bin/storm.cmd
@@ -94,7 +94,7 @@
 
 
 :activate
-  set CLASS=backtype.storm.command.activate
+  set CLASS=org.apache.storm.command.activate
   set STORM_OPTS=%STORM_CLIENT_OPTS% %STORM_OPTS%
   goto :eof
 
@@ -103,18 +103,18 @@
   goto :eof
 
 :deactivate
-  set CLASS=backtype.storm.command.deactivate
+  set CLASS=org.apache.storm.command.deactivate
   set STORM_OPTS=%STORM_CLIENT_OPTS% %STORM_OPTS%
   goto :eof
 
 :dev-zookeeper
-  set CLASS=backtype.storm.command.dev_zookeeper
+  set CLASS=org.apache.storm.command.dev_zookeeper
   set STORM_OPTS=%STORM_SERVER_OPTS% %STORM_OPTS%
   goto :eof
 
 :drpc
-  set CLASS=backtype.storm.daemon.drpc
-  "%JAVA%" -client -Dstorm.options= -Dstorm.conf.file= -cp "%CLASSPATH%" backtype.storm.command.config_value drpc.childopts > %CMD_TEMP_FILE%
+  set CLASS=org.apache.storm.daemon.drpc
+  "%JAVA%" -client -Dstorm.options= -Dstorm.conf.file= -cp "%CLASSPATH%" org.apache.storm.command.config_value drpc.childopts > %CMD_TEMP_FILE%
   FOR /F "delims=" %%i in (%CMD_TEMP_FILE%) do (
      FOR /F "tokens=1,* delims= " %%a in ("%%i") do (
 	  if %%a == VALUE: (
@@ -129,18 +129,18 @@
   goto :eof
 
 :kill
-  set CLASS=backtype.storm.command.kill_topology
+  set CLASS=org.apache.storm.command.kill_topology
   set STORM_OPTS=%STORM_CLIENT_OPTS% %STORM_OPTS%
   goto :eof
 
 :list
-  set CLASS=backtype.storm.command.list
+  set CLASS=org.apache.storm.command.list
   set STORM_OPTS=%STORM_CLIENT_OPTS% %STORM_OPTS%
   goto :eof
 
 :logviewer
-  set CLASS=backtype.storm.daemon.logviewer
-   "%JAVA%" -client -Dstorm.options= -Dstorm.conf.file= -cp "%CLASSPATH%" backtype.storm.command.config_value logviewer.childopts > %CMD_TEMP_FILE%
+  set CLASS=org.apache.storm.daemon.logviewer
+   "%JAVA%" -client -Dstorm.options= -Dstorm.conf.file= -cp "%CLASSPATH%" org.apache.storm.command.config_value logviewer.childopts > %CMD_TEMP_FILE%
   FOR /F "delims=" %%i in (%CMD_TEMP_FILE%) do (
      FOR /F "tokens=1,* delims= " %%a in ("%%i") do (
 	  if %%a == VALUE: (
@@ -151,8 +151,8 @@
   goto :eof
 
 :nimbus
-  set CLASS=backtype.storm.daemon.nimbus
-  "%JAVA%" -client -Dstorm.options= -Dstorm.conf.file= -cp "%CLASSPATH%" backtype.storm.command.config_value nimbus.childopts > %CMD_TEMP_FILE%
+  set CLASS=org.apache.storm.daemon.nimbus
+  "%JAVA%" -client -Dstorm.options= -Dstorm.conf.file= -cp "%CLASSPATH%" org.apache.storm.command.config_value nimbus.childopts > %CMD_TEMP_FILE%
   FOR /F "delims=" %%i in (%CMD_TEMP_FILE%) do (
      FOR /F "tokens=1,* delims= " %%a in ("%%i") do (
 	  if %%a == VALUE: (
@@ -163,12 +163,12 @@
   goto :eof
 
 :rebalance
-  set CLASS=backtype.storm.command.rebalance
+  set CLASS=org.apache.storm.command.rebalance
   set STORM_OPTS=%STORM_CLIENT_OPTS% %STORM_OPTS%
   goto :eof
 
 :remoteconfvalue
-  set CLASS=backtype.storm.command.config_value
+  set CLASS=org.apache.storm.command.config_value
   set STORM_OPTS=%STORM_CLIENT_OPTS% %STORM_OPTS%
   goto :eof
 
@@ -178,13 +178,13 @@
   goto :eof
 
 :shell
-  set CLASS=backtype.storm.command.shell_submission
+  set CLASS=org.apache.storm.command.shell_submission
   set STORM_OPTS=%STORM_CLIENT_OPTS% %STORM_OPTS% 
   goto :eof
   
 :supervisor
-  set CLASS=backtype.storm.daemon.supervisor
-  "%JAVA%" -client -Dstorm.options= -Dstorm.conf.file= -cp "%CLASSPATH%" backtype.storm.command.config_value supervisor.childopts > %CMD_TEMP_FILE%
+  set CLASS=org.apache.storm.daemon.supervisor
+  "%JAVA%" -client -Dstorm.options= -Dstorm.conf.file= -cp "%CLASSPATH%" org.apache.storm.command.config_value supervisor.childopts > %CMD_TEMP_FILE%
   FOR /F "delims=" %%i in (%CMD_TEMP_FILE%) do (
      FOR /F "tokens=1,* delims= " %%a in ("%%i") do (
 	  if %%a == VALUE: (
@@ -195,9 +195,9 @@
   goto :eof
 
 :ui
-  set CLASS=backtype.storm.ui.core
+  set CLASS=org.apache.storm.ui.core
   set CLASSPATH=%CLASSPATH%;%STORM_HOME%
-  "%JAVA%" -client -Dstorm.options= -Dstorm.conf.file= -cp "%CLASSPATH%" backtype.storm.command.config_value ui.childopts > %CMD_TEMP_FILE%
+  "%JAVA%" -client -Dstorm.options= -Dstorm.conf.file= -cp "%CLASSPATH%" org.apache.storm.command.config_value ui.childopts > %CMD_TEMP_FILE%
   FOR /F "delims=" %%i in (%CMD_TEMP_FILE%) do (
      FOR /F "tokens=1,* delims= " %%a in ("%%i") do (
 	  if %%a == VALUE: (
@@ -208,7 +208,7 @@
   goto :eof
 
 :version
-  set CLASS=backtype.storm.utils.VersionInfo
+  set CLASS=org.apache.storm.utils.VersionInfo
   set STORM_OPTS=%STORM_CLIENT_OPTS% %STORM_OPTS%
   goto :eof
 

http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/bin/storm.py
----------------------------------------------------------------------
diff --git a/bin/storm.py b/bin/storm.py
index e59b55a..80fd958 100755
--- a/bin/storm.py
+++ b/bin/storm.py
@@ -137,7 +137,7 @@ def confvalue(name, extrapaths, daemon=True):
     global CONFFILE
     command = [
         JAVA_CMD, "-client", get_config_opts(), "-Dstorm.conf.file=" + CONFFILE,
-        "-cp", get_classpath(extrapaths, daemon), "backtype.storm.command.config_value", name
+        "-cp", get_classpath(extrapaths, daemon), "org.apache.storm.command.config_value", name
     ]
     p = sub.Popen(command, stdout=sub.PIPE)
     output, errors = p.communicate()
@@ -225,13 +225,13 @@ def jar(jarfile, klass, *args):
     Runs the main method of class with the specified arguments.
     The storm jars and configs in ~/.storm are put on the classpath.
     The process is configured so that StormSubmitter
-    (http://storm.apache.org/apidocs/backtype/storm/StormSubmitter.html)
+    (http://storm.apache.org/apidocs/org/apache/storm/StormSubmitter.html)
     will upload the jar at topology-jar-path when the topology is submitted.
     """
     transform_class = confvalue("client.jartransformer.class", [CLUSTER_CONF_DIR])
     if (transform_class != None and transform_class != "nil"):
         tmpjar = os.path.join(tempfile.gettempdir(), uuid.uuid1().hex+".jar")
-        exec_storm_class("backtype.storm.daemon.ClientJarTransformerRunner", args=[transform_class, jarfile, tmpjar], fork=True, daemon=False)
+        exec_storm_class("org.apache.storm.daemon.ClientJarTransformerRunner", args=[transform_class, jarfile, tmpjar], fork=True, daemon=False)
         exec_storm_class(
             klass,
             jvmtype="-client",
@@ -276,7 +276,7 @@ def kill(*args):
         print_usage(command="kill")
         sys.exit(2)
     exec_storm_class(
-        "backtype.storm.command.kill_topology",
+        "org.apache.storm.command.kill_topology",
         args=args,
         jvmtype="-client",
         extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
@@ -291,7 +291,7 @@ def upload_credentials(*args):
         print_usage(command="upload_credentials")
         sys.exit(2)
     exec_storm_class(
-        "backtype.storm.command.upload_credentials",
+        "org.apache.storm.command.upload_credentials",
         args=args,
         jvmtype="-client",
         extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
@@ -317,7 +317,7 @@ def blobstore(*args):
     storm blobstore create mytopo:data.tgz -f data.tgz -a u:alice:rwa,u:bob:rw,o::r
     """
     exec_storm_class(
-        "backtype.storm.command.blobstore",
+        "org.apache.storm.command.blobstore",
         args=args,
         jvmtype="-client",
         extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
@@ -329,7 +329,7 @@ def heartbeats(*args):
     get  PATH - Get the heartbeat data at PATH
     """
     exec_storm_class(
-        "backtype.storm.command.heartbeats",
+        "org.apache.storm.command.heartbeats",
         args=args,
         jvmtype="-client",
         extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
@@ -343,7 +343,7 @@ def activate(*args):
         print_usage(command="activate")
         sys.exit(2)
     exec_storm_class(
-        "backtype.storm.command.activate",
+        "org.apache.storm.command.activate",
         args=args,
         jvmtype="-client",
         extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
@@ -376,7 +376,7 @@ def set_log_level(*args):
         Clears settings, resetting back to the original level
     """
     exec_storm_class(
-        "backtype.storm.command.set_log_level",
+        "org.apache.storm.command.set_log_level",
         args=args,
         jvmtype="-client",
         extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
@@ -387,7 +387,7 @@ def listtopos(*args):
     List the running topologies and their statuses.
     """
     exec_storm_class(
-        "backtype.storm.command.list",
+        "org.apache.storm.command.list",
         args=args,
         jvmtype="-client",
         extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
@@ -401,7 +401,7 @@ def deactivate(*args):
         print_usage(command="deactivate")
         sys.exit(2)
     exec_storm_class(
-        "backtype.storm.command.deactivate",
+        "org.apache.storm.command.deactivate",
         args=args,
         jvmtype="-client",
         extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
@@ -431,7 +431,7 @@ def rebalance(*args):
         print_usage(command="rebalance")
         sys.exit(2)
     exec_storm_class(
-        "backtype.storm.command.rebalance",
+        "org.apache.storm.command.rebalance",
         args=args,
         jvmtype="-client",
         extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
@@ -447,7 +447,7 @@ def get_errors(*args):
         print_usage(command="get_errors")
         sys.exit(2)
     exec_storm_class(
-        "backtype.storm.command.get_errors",
+        "org.apache.storm.command.get_errors",
         args=args,
         jvmtype="-client",
         extrajars=[USER_CONF_DIR, os.path.join(STORM_DIR, "bin")])
@@ -458,7 +458,7 @@ def healthcheck(*args):
     Run health checks on the local supervisor.
     """
     exec_storm_class(
-        "backtype.storm.command.healthcheck",
+        "org.apache.storm.command.healthcheck",
         args=args,
         jvmtype="-client",
         extrajars=[USER_CONF_DIR, os.path.join(STORM_DIR, "bin")])
@@ -471,7 +471,7 @@ def kill_workers(*args):
     to have admin rights on the node to be able to successfully kill all workers.
     """
     exec_storm_class(
-        "backtype.storm.command.kill_workers",
+        "org.apache.storm.command.kill_workers",
         args=args,
         jvmtype="-client",
         extrajars=[USER_CONF_DIR, os.path.join(STORM_DIR, "bin")])
@@ -482,7 +482,7 @@ def shell(resourcesdir, command, *args):
     runnerargs = [tmpjarpath, command]
     runnerargs.extend(args)
     exec_storm_class(
-        "backtype.storm.command.shell_submission",
+        "org.apache.storm.command.shell_submission",
         args=runnerargs,
         jvmtype="-client",
         extrajars=[USER_CONF_DIR],
@@ -507,7 +507,7 @@ def get_log4j2_conf_dir():
         storm_log4j2_conf_dir = os.path.join(STORM_DIR, storm_log4j2_conf_dir)
     return storm_log4j2_conf_dir
 
-def nimbus(klass="backtype.storm.daemon.nimbus"):
+def nimbus(klass="org.apache.storm.daemon.nimbus"):
     """Syntax: [storm nimbus]
 
     Launches the nimbus daemon. This command should be run under
@@ -550,7 +550,7 @@ def pacemaker(klass="org.apache.storm.pacemaker.pacemaker"):
         extrajars=cppaths,
         jvmopts=jvmopts)
 
-def supervisor(klass="backtype.storm.daemon.supervisor"):
+def supervisor(klass="org.apache.storm.daemon.supervisor"):
     """Syntax: [storm supervisor]
 
     Launches the supervisor daemon. This command should be run
@@ -589,7 +589,7 @@ def ui():
         "-Dlog4j.configurationFile=" + os.path.join(get_log4j2_conf_dir(), "cluster.xml")
     ]
     exec_storm_class(
-        "backtype.storm.ui.core",
+        "org.apache.storm.ui.core",
         jvmtype="-server",
         daemonName="ui",
         jvmopts=jvmopts,
@@ -612,7 +612,7 @@ def logviewer():
         "-Dlog4j.configurationFile=" + os.path.join(get_log4j2_conf_dir(), "cluster.xml")
     ]
     exec_storm_class(
-        "backtype.storm.daemon.logviewer",
+        "org.apache.storm.daemon.logviewer",
         jvmtype="-server",
         daemonName="logviewer",
         jvmopts=jvmopts,
@@ -634,7 +634,7 @@ def drpc():
         "-Dlog4j.configurationFile=" + os.path.join(get_log4j2_conf_dir(), "cluster.xml")
     ]
     exec_storm_class(
-        "backtype.storm.daemon.drpc",
+        "org.apache.storm.daemon.drpc",
         jvmtype="-server",
         daemonName="drpc",
         jvmopts=jvmopts,
@@ -649,7 +649,7 @@ def dev_zookeeper():
     """
     cppaths = [CLUSTER_CONF_DIR]
     exec_storm_class(
-        "backtype.storm.command.dev_zookeeper",
+        "org.apache.storm.command.dev_zookeeper",
         jvmtype="-server",
         extrajars=[CLUSTER_CONF_DIR])
 
@@ -660,7 +660,7 @@ def version():
   """
   cppaths = [CLUSTER_CONF_DIR]
   exec_storm_class(
-       "backtype.storm.utils.VersionInfo",
+       "org.apache.storm.utils.VersionInfo",
        jvmtype="-client",
        extrajars=[CLUSTER_CONF_DIR])
 
@@ -683,7 +683,7 @@ def monitor(*args):
         watch-item is 'emitted';
     """
     exec_storm_class(
-        "backtype.storm.command.monitor",
+        "org.apache.storm.command.monitor",
         args=args,
         jvmtype="-client",
         extrajars=[USER_CONF_DIR, STORM_BIN_DIR])

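The `confvalue` hunk above only swaps the class name passed to the JVM; as a standalone sketch of the command-assembly pattern that `bin/storm.py` uses (the `JAVA_CMD` value and classpath below are placeholders, not Storm's real settings):

```python
# Standalone sketch of bin/storm.py's confvalue() command assembly,
# after the backtype.storm -> org.apache.storm rename.
# JAVA_CMD and the classpath are illustrative placeholders.
JAVA_CMD = "java"

def build_confvalue_command(name, classpath, conffile="storm.yaml"):
    """Return the argv list that would be handed to subprocess.Popen."""
    return [
        JAVA_CMD, "-client",
        "-Dstorm.conf.file=" + conffile,
        "-cp", classpath,
        "org.apache.storm.command.config_value",  # renamed class
        name,
    ]

cmd = build_confvalue_command("nimbus.childopts", "/opt/storm/lib/*")
print(" ".join(cmd))
```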
http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/conf/defaults.yaml
----------------------------------------------------------------------
diff --git a/conf/defaults.yaml b/conf/defaults.yaml
index b5c8b47..735a83e 100644
--- a/conf/defaults.yaml
+++ b/conf/defaults.yaml
@@ -39,11 +39,11 @@ storm.exhibitor.port: 8080
 storm.exhibitor.poll.uripath: "/exhibitor/v1/cluster/list"
 storm.cluster.mode: "distributed" # can be distributed or local
 storm.local.mode.zmq: false
-storm.thrift.transport: "backtype.storm.security.auth.SimpleTransportPlugin"
-storm.principal.tolocal: "backtype.storm.security.auth.DefaultPrincipalToLocal"
-storm.group.mapping.service: "backtype.storm.security.auth.ShellBasedGroupsMapping"
+storm.thrift.transport: "org.apache.storm.security.auth.SimpleTransportPlugin"
+storm.principal.tolocal: "org.apache.storm.security.auth.DefaultPrincipalToLocal"
+storm.group.mapping.service: "org.apache.storm.security.auth.ShellBasedGroupsMapping"
 storm.group.mapping.service.params: null
-storm.messaging.transport: "backtype.storm.messaging.netty.Context"
+storm.messaging.transport: "org.apache.storm.messaging.netty.Context"
 storm.nimbus.retry.times: 5
 storm.nimbus.retry.interval.millis: 2000
 storm.nimbus.retry.intervalceiling.millis: 60000
@@ -51,9 +51,9 @@ storm.auth.simple-white-list.users: []
 storm.auth.simple-acl.users: []
 storm.auth.simple-acl.users.commands: []
 storm.auth.simple-acl.admins: []
-storm.cluster.state.store: "backtype.storm.cluster_state.zookeeper_state_factory"
-storm.meta.serialization.delegate: "backtype.storm.serialization.GzipThriftSerializationDelegate"
-storm.codedistributor.class: "backtype.storm.codedistributor.LocalFileSystemCodeDistributor"
+storm.cluster.state.store: "org.apache.storm.cluster_state.zookeeper_state_factory"
+storm.meta.serialization.delegate: "org.apache.storm.serialization.GzipThriftSerializationDelegate"
+storm.codedistributor.class: "org.apache.storm.codedistributor.LocalFileSystemCodeDistributor"
 storm.workers.artifacts.dir: "workers-artifacts"
 storm.health.check.dir: "healthchecks"
 storm.health.check.timeout.ms: 5000
@@ -72,11 +72,11 @@ nimbus.inbox.jar.expiration.secs: 3600
 nimbus.code.sync.freq.secs: 120
 nimbus.task.launch.secs: 120
 nimbus.file.copy.expiration.secs: 600
-nimbus.topology.validator: "backtype.storm.nimbus.DefaultTopologyValidator"
+nimbus.topology.validator: "org.apache.storm.nimbus.DefaultTopologyValidator"
 topology.min.replication.count: 1
 topology.max.replication.wait.time.sec: 60
 nimbus.credential.renewers.freq.secs: 600
-nimbus.impersonation.authorizer: "backtype.storm.security.auth.authorizer.ImpersonationAuthorizer"
+nimbus.impersonation.authorizer: "org.apache.storm.security.auth.authorizer.ImpersonationAuthorizer"
 
 scheduler.display.resource: false
 
@@ -89,7 +89,7 @@ ui.filter: null
 ui.filter.params: null
 ui.users: null
 ui.header.buffer.bytes: 4096
-ui.http.creds.plugin: backtype.storm.security.auth.DefaultHttpCredentialsPlugin
+ui.http.creds.plugin: org.apache.storm.security.auth.DefaultHttpCredentialsPlugin
 
 logviewer.port: 8000
 logviewer.childopts: "-Xmx128m"
@@ -112,7 +112,7 @@ drpc.http.port: 3774
 drpc.https.port: -1
 drpc.https.keystore.password: ""
 drpc.https.keystore.type: "JKS"
-drpc.http.creds.plugin: backtype.storm.security.auth.DefaultHttpCredentialsPlugin
+drpc.http.creds.plugin: org.apache.storm.security.auth.DefaultHttpCredentialsPlugin
 drpc.authorizer.acl.filename: "drpc-auth-acl.yaml"
 drpc.authorizer.acl.strict: false
 
@@ -121,17 +121,17 @@ transactional.zookeeper.servers: null
 transactional.zookeeper.port: null
 
 ## blobstore configs
-supervisor.blobstore.class: "backtype.storm.blobstore.NimbusBlobStore"
+supervisor.blobstore.class: "org.apache.storm.blobstore.NimbusBlobStore"
 supervisor.blobstore.download.thread.count: 5
 supervisor.blobstore.download.max_retries: 3
 supervisor.localizer.cache.target.size.mb: 10240
 supervisor.localizer.cleanup.interval.ms: 600000
 
-nimbus.blobstore.class: "backtype.storm.blobstore.LocalFsBlobStore"
+nimbus.blobstore.class: "org.apache.storm.blobstore.LocalFsBlobStore"
 nimbus.blobstore.expiration.secs: 600
 
 storm.blobstore.inputstream.buffer.size.bytes: 65536
-client.blobstore.class: "backtype.storm.blobstore.NimbusBlobStore"
+client.blobstore.class: "org.apache.storm.blobstore.NimbusBlobStore"
 storm.blobstore.replication.factor: 3
 
 ### supervisor.* configs are for node supervisors
@@ -208,7 +208,7 @@ storm.messaging.netty.socket.backlog: 500
 storm.messaging.netty.authentication: false
 
 # Default plugin to use for automatic network topology discovery
-storm.network.topography.plugin: backtype.storm.networktopography.DefaultRackDNSToSwitchMapping
+storm.network.topography.plugin: org.apache.storm.networktopography.DefaultRackDNSToSwitchMapping
 
 # default number of seconds group mapping service will cache user group
 storm.group.mapping.service.cache.duration.secs: 120
@@ -222,7 +222,7 @@ topology.eventlogger.executors: null
 topology.tasks: null
 # maximum amount of time a message has to complete before it's considered failed
 topology.message.timeout.secs: 30
-topology.multilang.serializer: "backtype.storm.multilang.JsonSerializer"
+topology.multilang.serializer: "org.apache.storm.multilang.JsonSerializer"
 topology.shellbolt.max.pending: 100
 topology.skip.missing.kryo.registrations: false
 topology.max.task.parallelism: null
@@ -238,12 +238,12 @@ topology.executor.send.buffer.size: 1024 #individual messages
 topology.transfer.buffer.size: 1024 # batched
 topology.tick.tuple.freq.secs: null
 topology.worker.shared.thread.pool.size: 4
-topology.spout.wait.strategy: "backtype.storm.spout.SleepSpoutWaitStrategy"
+topology.spout.wait.strategy: "org.apache.storm.spout.SleepSpoutWaitStrategy"
 topology.sleep.spout.wait.strategy.time.ms: 1
 topology.error.throttle.interval.secs: 10
 topology.max.error.report.per.interval: 5
-topology.kryo.factory: "backtype.storm.serialization.DefaultKryoFactory"
-topology.tuple.serializer: "backtype.storm.serialization.types.ListDelegateSerializer"
+topology.kryo.factory: "org.apache.storm.serialization.DefaultKryoFactory"
+topology.tuple.serializer: "org.apache.storm.serialization.types.ListDelegateSerializer"
 topology.trident.batch.emit.interval.millis: 500
 topology.testing.always.try.serialize: false
 topology.classpath: null
@@ -262,9 +262,9 @@ topology.component.resources.onheap.memory.mb: 128.0
 topology.component.resources.offheap.memory.mb: 0.0
 topology.component.cpu.pcore.percent: 10.0
 topology.worker.max.heap.size.mb: 768.0
-topology.scheduler.strategy: "backtype.storm.scheduler.resource.strategies.scheduling.DefaultResourceAwareStrategy"
-resource.aware.scheduler.eviction.strategy: "backtype.storm.scheduler.resource.strategies.eviction.DefaultEvictionStrategy"
-resource.aware.scheduler.priority.strategy: "backtype.storm.scheduler.resource.strategies.priority.DefaultSchedulingPriorityStrategy"
+topology.scheduler.strategy: "org.apache.storm.scheduler.resource.strategies.scheduling.DefaultResourceAwareStrategy"
+resource.aware.scheduler.eviction.strategy: "org.apache.storm.scheduler.resource.strategies.eviction.DefaultEvictionStrategy"
+resource.aware.scheduler.priority.strategy: "org.apache.storm.scheduler.resource.strategies.priority.DefaultSchedulingPriorityStrategy"
 
 dev.zookeeper.path: "/tmp/dev-storm-zookeeper"
 

http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/conf/storm.yaml.example
----------------------------------------------------------------------
diff --git a/conf/storm.yaml.example b/conf/storm.yaml.example
index 13c2f8e..7df3e9d 100644
--- a/conf/storm.yaml.example
+++ b/conf/storm.yaml.example
@@ -40,7 +40,7 @@
 
 ## Metrics Consumers
 # topology.metrics.consumer.register:
-#   - class: "backtype.storm.metric.LoggingMetricsConsumer"
+#   - class: "org.apache.storm.metric.LoggingMetricsConsumer"
 #     parallelism.hint: 1
 #   - class: "org.mycompany.MyMetricsConsumer"
 #     parallelism.hint: 1

http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/examples/storm-starter/README.markdown
----------------------------------------------------------------------
diff --git a/examples/storm-starter/README.markdown b/examples/storm-starter/README.markdown
index 01ba09f..48d9d27 100644
--- a/examples/storm-starter/README.markdown
+++ b/examples/storm-starter/README.markdown
@@ -88,11 +88,11 @@ Example filename of the uberjar:
 You can submit (run) a topology contained in this uberjar to Storm via the `storm` CLI tool:
 
     # Example 1: Run the ExclamationTopology in local mode (LocalCluster)
-    $ storm jar target/storm-starter-*.jar storm.starter.ExclamationTopology
+    $ storm jar target/storm-starter-*.jar org.apache.storm.starter.ExclamationTopology
 
     # Example 2: Run the RollingTopWords in remote/cluster mode,
     #            under the name "production-topology"
-    $ storm jar storm-starter-*.jar storm.starter.RollingTopWords production-topology remote
+    $ storm jar storm-starter-*.jar org.apache.storm.starter.RollingTopWords production-topology remote
 
 With submitting you can run topologies which use multilang, for example, `WordCountTopology`.
 

http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/examples/storm-starter/multilang/resources/randomsentence.js
----------------------------------------------------------------------
diff --git a/examples/storm-starter/multilang/resources/randomsentence.js b/examples/storm-starter/multilang/resources/randomsentence.js
index 36fc5f5..b121915 100644
--- a/examples/storm-starter/multilang/resources/randomsentence.js
+++ b/examples/storm-starter/multilang/resources/randomsentence.js
@@ -18,7 +18,7 @@
 
 /**
  * Example for storm spout. Emits random sentences.
- * The original class in java - storm.starter.spout.RandomSentenceSpout.
+ * The original class in java - org.apache.storm.starter.spout.RandomSentenceSpout.
  *
  */
 

http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/examples/storm-starter/src/clj/org/apache/storm/starter/clj/word_count.clj
----------------------------------------------------------------------
diff --git a/examples/storm-starter/src/clj/org/apache/storm/starter/clj/word_count.clj b/examples/storm-starter/src/clj/org/apache/storm/starter/clj/word_count.clj
new file mode 100644
index 0000000..fb3a695
--- /dev/null
+++ b/examples/storm-starter/src/clj/org/apache/storm/starter/clj/word_count.clj
@@ -0,0 +1,95 @@
+;; Licensed to the Apache Software Foundation (ASF) under one
+;; or more contributor license agreements.  See the NOTICE file
+;; distributed with this work for additional information
+;; regarding copyright ownership.  The ASF licenses this file
+;; to you under the Apache License, Version 2.0 (the
+;; "License"); you may not use this file except in compliance
+;; with the License.  You may obtain a copy of the License at
+;;
+;; http://www.apache.org/licenses/LICENSE-2.0
+;;
+;; Unless required by applicable law or agreed to in writing, software
+;; distributed under the License is distributed on an "AS IS" BASIS,
+;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+;; See the License for the specific language governing permissions and
+;; limitations under the License.
+(ns org.apache.storm.starter.clj.word-count
+  (:import [org.apache.storm StormSubmitter LocalCluster])
+  (:use [org.apache.storm clojure config])
+  (:gen-class))
+
+(defspout sentence-spout ["sentence"]
+  [conf context collector]
+  (let [sentences ["a little brown dog"
+                   "the man petted the dog"
+                   "four score and seven years ago"
+                   "an apple a day keeps the doctor away"]]
+    (spout
+     (nextTuple []
+       (Thread/sleep 100)
+       (emit-spout! collector [(rand-nth sentences)])         
+       )
+     (ack [id]
+        ;; You only need to define this method for reliable spouts
+        ;; (such as one that reads off of a queue like Kestrel)
+        ;; This is an unreliable spout, so it does nothing here
+        ))))
+
+(defspout sentence-spout-parameterized ["word"] {:params [sentences] :prepare false}
+  [collector]
+  (Thread/sleep 500)
+  (emit-spout! collector [(rand-nth sentences)]))
+
+(defbolt split-sentence ["word"] [tuple collector]
+  (let [words (.split (.getString tuple 0) " ")]
+    (doseq [w words]
+      (emit-bolt! collector [w] :anchor tuple))
+    (ack! collector tuple)
+    ))
+
+(defbolt word-count ["word" "count"] {:prepare true}
+  [conf context collector]
+  (let [counts (atom {})]
+    (bolt
+     (execute [tuple]
+       (let [word (.getString tuple 0)]
+         (swap! counts (partial merge-with +) {word 1})
+         (emit-bolt! collector [word (@counts word)] :anchor tuple)
+         (ack! collector tuple)
+         )))))
+
+(defn mk-topology []
+
+  (topology
+   {"1" (spout-spec sentence-spout)
+    "2" (spout-spec (sentence-spout-parameterized
+                     ["the cat jumped over the door"
+                      "greetings from a faraway land"])
+                     :p 2)}
+   {"3" (bolt-spec {"1" :shuffle "2" :shuffle}
+                   split-sentence
+                   :p 5)
+    "4" (bolt-spec {"3" ["word"]}
+                   word-count
+                   :p 6)}))
+
+(defn run-local! []
+  (let [cluster (LocalCluster.)]
+    (.submitTopology cluster "word-count" {TOPOLOGY-DEBUG true} (mk-topology))
+    (Thread/sleep 10000)
+    (.shutdown cluster)
+    ))
+
+(defn submit-topology! [name]
+  (StormSubmitter/submitTopology
+   name
+   {TOPOLOGY-DEBUG true
+    TOPOLOGY-WORKERS 3}
+   (mk-topology)))
+
+(defn -main
+  ([]
+   (run-local!))
+  ([name]
+   (submit-topology! name)))
+

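The Clojure word-count bolt above accumulates its running tally with `(swap! counts (partial merge-with +) {word 1})`. For readers more at home in Java, the same update can be sketched outside Storm with `Map.merge` (the class name and sample sentence below are illustrative only):

```java
import java.util.HashMap;
import java.util.Map;

public class WordCountSketch {
    public static void main(String[] args) {
        // Equivalent of merge-with +: add 1 to the existing count, or start at 1
        Map<String, Integer> counts = new HashMap<>();
        for (String w : "the man petted the dog".split(" ")) {
            counts.merge(w, 1, Integer::sum);
        }
        System.out.println(counts.get("the")); // 2
        System.out.println(counts.get("dog")); // 1
    }
}
```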
http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/examples/storm-starter/src/clj/storm/starter/clj/word_count.clj
----------------------------------------------------------------------
diff --git a/examples/storm-starter/src/clj/storm/starter/clj/word_count.clj b/examples/storm-starter/src/clj/storm/starter/clj/word_count.clj
deleted file mode 100644
index 3b54ac8..0000000
--- a/examples/storm-starter/src/clj/storm/starter/clj/word_count.clj
+++ /dev/null
@@ -1,95 +0,0 @@
-;; Licensed to the Apache Software Foundation (ASF) under one
-;; or more contributor license agreements.  See the NOTICE file
-;; distributed with this work for additional information
-;; regarding copyright ownership.  The ASF licenses this file
-;; to you under the Apache License, Version 2.0 (the
-;; "License"); you may not use this file except in compliance
-;; with the License.  You may obtain a copy of the License at
-;;
-;; http://www.apache.org/licenses/LICENSE-2.0
-;;
-;; Unless required by applicable law or agreed to in writing, software
-;; distributed under the License is distributed on an "AS IS" BASIS,
-;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-;; See the License for the specific language governing permissions and
-;; limitations under the License.
-(ns storm.starter.clj.word-count
-  (:import [backtype.storm StormSubmitter LocalCluster])
-  (:use [backtype.storm clojure config])
-  (:gen-class))
-
-(defspout sentence-spout ["sentence"]
-  [conf context collector]
-  (let [sentences ["a little brown dog"
-                   "the man petted the dog"
-                   "four score and seven years ago"
-                   "an apple a day keeps the doctor away"]]
-    (spout
-     (nextTuple []
-       (Thread/sleep 100)
-       (emit-spout! collector [(rand-nth sentences)])         
-       )
-     (ack [id]
-        ;; You only need to define this method for reliable spouts
-        ;; (such as one that reads off of a queue like Kestrel)
-        ;; This is an unreliable spout, so it does nothing here
-        ))))
-
-(defspout sentence-spout-parameterized ["word"] {:params [sentences] :prepare false}
-  [collector]
-  (Thread/sleep 500)
-  (emit-spout! collector [(rand-nth sentences)]))
-
-(defbolt split-sentence ["word"] [tuple collector]
-  (let [words (.split (.getString tuple 0) " ")]
-    (doseq [w words]
-      (emit-bolt! collector [w] :anchor tuple))
-    (ack! collector tuple)
-    ))
-
-(defbolt word-count ["word" "count"] {:prepare true}
-  [conf context collector]
-  (let [counts (atom {})]
-    (bolt
-     (execute [tuple]
-       (let [word (.getString tuple 0)]
-         (swap! counts (partial merge-with +) {word 1})
-         (emit-bolt! collector [word (@counts word)] :anchor tuple)
-         (ack! collector tuple)
-         )))))
-
-(defn mk-topology []
-
-  (topology
-   {"1" (spout-spec sentence-spout)
-    "2" (spout-spec (sentence-spout-parameterized
-                     ["the cat jumped over the door"
-                      "greetings from a faraway land"])
-                     :p 2)}
-   {"3" (bolt-spec {"1" :shuffle "2" :shuffle}
-                   split-sentence
-                   :p 5)
-    "4" (bolt-spec {"3" ["word"]}
-                   word-count
-                   :p 6)}))
-
-(defn run-local! []
-  (let [cluster (LocalCluster.)]
-    (.submitTopology cluster "word-count" {TOPOLOGY-DEBUG true} (mk-topology))
-    (Thread/sleep 10000)
-    (.shutdown cluster)
-    ))
-
-(defn submit-topology! [name]
-  (StormSubmitter/submitTopology
-   name
-   {TOPOLOGY-DEBUG true
-    TOPOLOGY-WORKERS 3}
-   (mk-topology)))
-
-(defn -main
-  ([]
-   (run-local!))
-  ([name]
-   (submit-topology! name)))
-

http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/examples/storm-starter/src/jvm/org/apache/storm/starter/BasicDRPCTopology.java
----------------------------------------------------------------------
diff --git a/examples/storm-starter/src/jvm/org/apache/storm/starter/BasicDRPCTopology.java b/examples/storm-starter/src/jvm/org/apache/storm/starter/BasicDRPCTopology.java
new file mode 100644
index 0000000..2187da4
--- /dev/null
+++ b/examples/storm-starter/src/jvm/org/apache/storm/starter/BasicDRPCTopology.java
@@ -0,0 +1,78 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.starter;
+
+import org.apache.storm.Config;
+import org.apache.storm.LocalCluster;
+import org.apache.storm.LocalDRPC;
+import org.apache.storm.StormSubmitter;
+import org.apache.storm.drpc.LinearDRPCTopologyBuilder;
+import org.apache.storm.topology.BasicOutputCollector;
+import org.apache.storm.topology.OutputFieldsDeclarer;
+import org.apache.storm.topology.base.BaseBasicBolt;
+import org.apache.storm.tuple.Fields;
+import org.apache.storm.tuple.Tuple;
+import org.apache.storm.tuple.Values;
+
+/**
+ * This topology is a basic example of doing distributed RPC on top of Storm. It implements a function that appends a
+ * "!" to any string you send the DRPC function.
+ *
+ * @see <a href="http://storm.apache.org/documentation/Distributed-RPC.html">Distributed RPC</a>
+ */
+public class BasicDRPCTopology {
+  public static class ExclaimBolt extends BaseBasicBolt {
+    @Override
+    public void execute(Tuple tuple, BasicOutputCollector collector) {
+      String input = tuple.getString(1);
+      collector.emit(new Values(tuple.getValue(0), input + "!"));
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+      declarer.declare(new Fields("id", "result"));
+    }
+
+  }
+
+  public static void main(String[] args) throws Exception {
+    LinearDRPCTopologyBuilder builder = new LinearDRPCTopologyBuilder("exclamation");
+    builder.addBolt(new ExclaimBolt(), 3);
+
+    Config conf = new Config();
+
+    if (args == null || args.length == 0) {
+      LocalDRPC drpc = new LocalDRPC();
+      LocalCluster cluster = new LocalCluster();
+
+      cluster.submitTopology("drpc-demo", conf, builder.createLocalTopology(drpc));
+
+      for (String word : new String[]{ "hello", "goodbye" }) {
+        System.out.println("Result for \"" + word + "\": " + drpc.execute("exclamation", word));
+      }
+
+      Thread.sleep(10000);
+      drpc.shutdown();
+      cluster.shutdown();
+    }
+    else {
+      conf.setNumWorkers(3);
+      StormSubmitter.submitTopologyWithProgressBar(args[0], conf, builder.createRemoteTopology());
+    }
+  }
+}

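The `ExclaimBolt` in the diff above follows the LinearDRPC tuple contract: field 0 carries the request id and is passed through untouched, while the transformed result is emitted as field 1. A Storm-free sketch of just that transform (the class and method names here are illustrative, not part of the Storm API):

```java
import java.util.Arrays;
import java.util.List;

public class ExclaimSketch {
    // Mirrors ExclaimBolt.execute: (id, input) -> (id, input + "!")
    static List<Object> exclaim(List<Object> tuple) {
        return Arrays.asList(tuple.get(0), tuple.get(1) + "!");
    }

    public static void main(String[] args) {
        System.out.println(exclaim(Arrays.asList(1L, "hello"))); // [1, hello!]
    }
}
```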
http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/examples/storm-starter/src/jvm/org/apache/storm/starter/BlobStoreAPIWordCountTopology.java
----------------------------------------------------------------------
diff --git a/examples/storm-starter/src/jvm/org/apache/storm/starter/BlobStoreAPIWordCountTopology.java b/examples/storm-starter/src/jvm/org/apache/storm/starter/BlobStoreAPIWordCountTopology.java
new file mode 100644
index 0000000..13ccb1d
--- /dev/null
+++ b/examples/storm-starter/src/jvm/org/apache/storm/starter/BlobStoreAPIWordCountTopology.java
@@ -0,0 +1,304 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.starter;
+
+import org.apache.storm.Config;
+import org.apache.storm.StormSubmitter;
+import org.apache.storm.blobstore.AtomicOutputStream;
+import org.apache.storm.blobstore.ClientBlobStore;
+import org.apache.storm.blobstore.InputStreamWithMeta;
+import org.apache.storm.blobstore.NimbusBlobStore;
+
+import org.apache.storm.generated.AccessControl;
+import org.apache.storm.generated.AccessControlType;
+import org.apache.storm.generated.AlreadyAliveException;
+import org.apache.storm.generated.AuthorizationException;
+import org.apache.storm.generated.InvalidTopologyException;
+import org.apache.storm.generated.KeyAlreadyExistsException;
+import org.apache.storm.generated.KeyNotFoundException;
+import org.apache.storm.generated.SettableBlobMeta;
+import org.apache.storm.spout.SpoutOutputCollector;
+import org.apache.storm.task.ShellBolt;
+import org.apache.storm.task.TopologyContext;
+import org.apache.storm.topology.BasicOutputCollector;
+import org.apache.storm.topology.IRichBolt;
+import org.apache.storm.topology.OutputFieldsDeclarer;
+import org.apache.storm.topology.TopologyBuilder;
+import org.apache.storm.topology.base.BaseBasicBolt;
+import org.apache.storm.topology.base.BaseRichSpout;
+import org.apache.storm.blobstore.BlobStoreAclHandler;
+import org.apache.storm.tuple.Fields;
+import org.apache.storm.tuple.Tuple;
+import org.apache.storm.tuple.Values;
+import org.apache.storm.utils.Utils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.BufferedReader;
+import java.io.BufferedWriter;
+import java.io.File;
+import java.io.FileReader;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+import java.util.Set;
+import java.util.StringTokenizer;
+
+public class BlobStoreAPIWordCountTopology {
+    private static ClientBlobStore store; // Client API to invoke blob store API functionality
+    private static String key = "key";
+    private static String fileName = "blacklist.txt";
+    private static final Logger LOG = LoggerFactory.getLogger(BlobStoreAPIWordCountTopology.class);
+
+    public static void prepare() {
+        Config conf = new Config();
+        conf.putAll(Utils.readStormConfig());
+        store = Utils.getClientBlobStore(conf);
+    }
+
+    // Spout implementation
+    public static class RandomSentenceSpout extends BaseRichSpout {
+        SpoutOutputCollector _collector;
+
+        @Override
+        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
+            _collector = collector;
+        }
+
+        @Override
+        public void nextTuple() {
+            Utils.sleep(100);
+            _collector.emit(new Values(getRandomSentence()));
+        }
+
+        @Override
+        public void ack(Object id) {
+        }
+
+        @Override
+        public void fail(Object id) {
+        }
+
+        @Override
+        public void declareOutputFields(OutputFieldsDeclarer declarer) {
+            declarer.declare(new Fields("sentence"));
+        }
+
+    }
+
+    // Bolt implementation
+    public static class SplitSentence extends ShellBolt implements IRichBolt {
+
+        public SplitSentence() {
+            super("python", "splitsentence.py");
+        }
+
+        @Override
+        public void declareOutputFields(OutputFieldsDeclarer declarer) {
+            declarer.declare(new Fields("word"));
+        }
+
+        @Override
+        public Map<String, Object> getComponentConfiguration() {
+            return null;
+        }
+    }
+
+    public static class FilterWords extends BaseBasicBolt {
+        boolean poll = false;
+        long pollTime;
+        Set<String> wordSet;
+        @Override
+        public void execute(Tuple tuple, BasicOutputCollector collector) {
+            String word = tuple.getString(0);
+            // Poll every 5 seconds to refresh the wordSet that this
+            // FilterWords bolt uses to filter out blacklisted words
+            try {
+                if (!poll) {
+                    wordSet = parseFile(fileName);
+                    pollTime = System.currentTimeMillis();
+                    poll = true;
+                } else {
+                    if ((System.currentTimeMillis() - pollTime) > 5000) {
+                        wordSet = parseFile(fileName);
+                        pollTime = System.currentTimeMillis();
+                    }
+                }
+            } catch (IOException exp) {
+                throw new RuntimeException(exp);
+            }
+            if (wordSet != null && !wordSet.contains(word)) {
+                collector.emit(new Values(word));
+            }
+        }
+
+        @Override
+        public void declareOutputFields(OutputFieldsDeclarer declarer) {
+            declarer.declare(new Fields("word"));
+        }
+    }
+
+    public void buildAndLaunchWordCountTopology(String[] args) {
+        TopologyBuilder builder = new TopologyBuilder();
+        builder.setSpout("spout", new RandomSentenceSpout(), 5);
+        builder.setBolt("split", new SplitSentence(), 8).shuffleGrouping("spout");
+        builder.setBolt("filter", new FilterWords(), 6).shuffleGrouping("split");
+
+        Config conf = new Config();
+        conf.setDebug(true);
+        try {
+            conf.setNumWorkers(3);
+            StormSubmitter.submitTopologyWithProgressBar(args[0], conf, builder.createTopology());
+        } catch (InvalidTopologyException | AuthorizationException | AlreadyAliveException exp) {
+            throw new RuntimeException(exp);
+        }
+    }
+
+    // Equivalent create command on command line
+    // storm blobstore create --file blacklist.txt --acl o::rwa key
+    private static void createBlobWithContent(String blobKey, ClientBlobStore clientBlobStore, File file)
+            throws AuthorizationException, KeyAlreadyExistsException, IOException, KeyNotFoundException {
+        String stringBlobACL = "o::rwa";
+        AccessControl blobACL = BlobStoreAclHandler.parseAccessControl(stringBlobACL);
+        List<AccessControl> acls = new LinkedList<AccessControl>();
+        acls.add(blobACL); // more ACLs can be added here
+        SettableBlobMeta settableBlobMeta = new SettableBlobMeta(acls);
+        AtomicOutputStream blobStream = clientBlobStore.createBlob(blobKey, settableBlobMeta);
+        blobStream.write(readFile(file).toString().getBytes());
+        blobStream.close();
+    }
+
+    // Equivalent update command on command line
+    // storm blobstore update --file blacklist.txt key
+    private static void updateBlobWithContent(String blobKey, ClientBlobStore clientBlobStore, File file)
+            throws KeyNotFoundException, AuthorizationException, IOException {
+        AtomicOutputStream blobOutputStream = clientBlobStore.updateBlob(blobKey);
+        blobOutputStream.write(readFile(file).toString().getBytes());
+        blobOutputStream.close();
+    }
+
+    private static String getRandomSentence() {
+        String[] sentences = new String[]{ "the cow jumped over the moon", "an apple a day keeps the doctor away",
+                "four score and seven years ago", "snow white and the seven dwarfs", "i am at two with nature" };
+        String sentence = sentences[new Random().nextInt(sentences.length)];
+        return sentence;
+    }
+
+    private static Set<String> getRandomWordSet() {
+        Set<String> randomWordSet = new HashSet<>();
+        Random random = new Random();
+        String[] words = new String[]{ "cow", "jumped", "over", "the", "moon", "apple", "day", "doctor", "away",
+                "four", "seven", "ago", "snow", "white", "seven", "dwarfs", "nature", "two" };
+        // Choosing at most 5 words to update the blacklist file for filtering
+        for (int i=0; i<5; i++) {
+            randomWordSet.add(words[random.nextInt(words.length)]);
+        }
+        return randomWordSet;
+    }
+
+    private static Set<String> parseFile(String fileName) throws IOException {
+        File file = new File(fileName);
+        Set<String> wordSet = new HashSet<>();
+        if (!file.exists()) {
+            return wordSet;
+        }
+        StringTokenizer tokens = new StringTokenizer(readFile(file).toString(), "\r\n");
+        while (tokens.hasMoreElements()) {
+            wordSet.add(tokens.nextToken());
+        }
+        LOG.debug("parseFile {}", wordSet);
+        return wordSet;
+    }
+
+    private static StringBuilder readFile(File file) throws IOException {
+        String line;
+        StringBuilder fileContent = new StringBuilder();
+        // Do not use canonical file name here as we are using
+        // symbolic links to read file data and performing atomic move
+        // while updating files
+        BufferedReader br = new BufferedReader(new FileReader(file));
+        while ((line = br.readLine()) != null) {
+            fileContent.append(line);
+            fileContent.append(System.lineSeparator());
+        }
+        return fileContent;
+    }
+
+    // Creating a blacklist file to read from the disk
+    public static File createFile(String fileName) throws IOException {
+        File file = new File(fileName);
+        if (!file.exists()) {
+            file.createNewFile();
+        }
+        writeToFile(file, getRandomWordSet());
+        return file;
+    }
+
+    // Updating a blacklist file periodically with random words
+    public static File updateFile(File file) throws IOException {
+        writeToFile(file, getRandomWordSet());
+        return file;
+    }
+
+    // Writing random words to be blacklisted
+    public static void writeToFile(File file, Set<String> content) throws IOException {
+        FileWriter fw = new FileWriter(file, false);
+        BufferedWriter bw = new BufferedWriter(fw);
+        Iterator<String> iter = content.iterator();
+        while(iter.hasNext()) {
+            bw.write(iter.next());
+            bw.write(System.lineSeparator());
+        }
+        bw.close();
+    }
+
+    public static void main(String[] args) {
+        prepare();
+        BlobStoreAPIWordCountTopology wc = new BlobStoreAPIWordCountTopology();
+        try {
+            File file = createFile(fileName);
+            // Creating blob again before launching topology
+            createBlobWithContent(key, store, file);
+
+            // Blobstore launch command with the topology blobstore map
+            // Here we are giving it a local name so that we can read from the file
+            // bin/storm jar examples/storm-starter/storm-starter-topologies-0.11.0-SNAPSHOT.jar
+            // org.apache.storm.starter.BlobStoreAPIWordCountTopology bl -c
+            // topology.blobstore.map='{"key":{"localname":"blacklist.txt", "uncompress":"false"}}'
+            wc.buildAndLaunchWordCountTopology(args);
+
+            // Updating file few times every 5 seconds
+            for(int i=0; i<10; i++) {
+                updateBlobWithContent(key, store, updateFile(file));
+                Utils.sleep(5000);
+            }
+        } catch (KeyAlreadyExistsException kae) {
+            LOG.info("Key already exists {}", kae);
+        } catch (AuthorizationException | KeyNotFoundException | IOException exp) {
+            throw new RuntimeException(exp);
+        }
+    }
+}
+
+

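The `parseFile` helper in the diff above turns the raw blacklist file content into a word set by tokenizing on CR/LF. That step can be sketched on its own, without any Storm or file-system dependencies (the class name is illustrative):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.StringTokenizer;

public class BlacklistParseSketch {
    // Mirrors parseFile: split raw file content on CR/LF into a word set;
    // StringTokenizer skips empty tokens, so mixed line endings are handled
    static Set<String> parse(String content) {
        Set<String> words = new HashSet<>();
        StringTokenizer tokens = new StringTokenizer(content, "\r\n");
        while (tokens.hasMoreElements()) {
            words.add(tokens.nextToken());
        }
        return words;
    }

    public static void main(String[] args) {
        Set<String> s = parse("cow\nmoon\r\ncow\n");
        System.out.println(s.size());           // 2
        System.out.println(s.contains("moon")); // true
    }
}
```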
http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/examples/storm-starter/src/jvm/org/apache/storm/starter/ExclamationTopology.java
----------------------------------------------------------------------
diff --git a/examples/storm-starter/src/jvm/org/apache/storm/starter/ExclamationTopology.java b/examples/storm-starter/src/jvm/org/apache/storm/starter/ExclamationTopology.java
new file mode 100644
index 0000000..26e0430
--- /dev/null
+++ b/examples/storm-starter/src/jvm/org/apache/storm/starter/ExclamationTopology.java
@@ -0,0 +1,87 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.starter;
+
+import org.apache.storm.Config;
+import org.apache.storm.LocalCluster;
+import org.apache.storm.StormSubmitter;
+import org.apache.storm.task.OutputCollector;
+import org.apache.storm.task.TopologyContext;
+import org.apache.storm.testing.TestWordSpout;
+import org.apache.storm.topology.OutputFieldsDeclarer;
+import org.apache.storm.topology.TopologyBuilder;
+import org.apache.storm.topology.base.BaseRichBolt;
+import org.apache.storm.tuple.Fields;
+import org.apache.storm.tuple.Tuple;
+import org.apache.storm.tuple.Values;
+import org.apache.storm.utils.Utils;
+
+import java.util.Map;
+
+/**
+ * This is a basic example of a Storm topology.
+ */
+public class ExclamationTopology {
+
+  public static class ExclamationBolt extends BaseRichBolt {
+    OutputCollector _collector;
+
+    @Override
+    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
+      _collector = collector;
+    }
+
+    @Override
+    public void execute(Tuple tuple) {
+      _collector.emit(tuple, new Values(tuple.getString(0) + "!!!"));
+      _collector.ack(tuple);
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+      declarer.declare(new Fields("word"));
+    }
+
+
+  }
+
+  public static void main(String[] args) throws Exception {
+    TopologyBuilder builder = new TopologyBuilder();
+
+    builder.setSpout("word", new TestWordSpout(), 10);
+    builder.setBolt("exclaim1", new ExclamationBolt(), 3).shuffleGrouping("word");
+    builder.setBolt("exclaim2", new ExclamationBolt(), 2).shuffleGrouping("exclaim1");
+
+    Config conf = new Config();
+    conf.setDebug(true);
+
+    if (args != null && args.length > 0) {
+      conf.setNumWorkers(3);
+
+      StormSubmitter.submitTopologyWithProgressBar(args[0], conf, builder.createTopology());
+    }
+    else {
+
+      LocalCluster cluster = new LocalCluster();
+      cluster.submitTopology("test", conf, builder.createTopology());
+      Utils.sleep(10000);
+      cluster.killTopology("test");
+      cluster.shutdown();
+    }
+  }
+}

http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/examples/storm-starter/src/jvm/org/apache/storm/starter/FastWordCountTopology.java
----------------------------------------------------------------------
diff --git a/examples/storm-starter/src/jvm/org/apache/storm/starter/FastWordCountTopology.java b/examples/storm-starter/src/jvm/org/apache/storm/starter/FastWordCountTopology.java
new file mode 100644
index 0000000..51f6b11
--- /dev/null
+++ b/examples/storm-starter/src/jvm/org/apache/storm/starter/FastWordCountTopology.java
@@ -0,0 +1,198 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.starter;
+
+import org.apache.storm.Config;
+import org.apache.storm.StormSubmitter;
+import org.apache.storm.generated.*;
+import org.apache.storm.spout.SpoutOutputCollector;
+import org.apache.storm.task.TopologyContext;
+import org.apache.storm.topology.BasicOutputCollector;
+import org.apache.storm.topology.IRichBolt;
+import org.apache.storm.topology.OutputFieldsDeclarer;
+import org.apache.storm.topology.TopologyBuilder;
+import org.apache.storm.topology.base.BaseBasicBolt;
+import org.apache.storm.topology.base.BaseRichSpout;
+import org.apache.storm.tuple.Fields;
+import org.apache.storm.tuple.Tuple;
+import org.apache.storm.tuple.Values;
+import org.apache.storm.utils.NimbusClient;
+import org.apache.storm.utils.Utils;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Random;
+import java.util.concurrent.ThreadLocalRandom;
+
+/**
+ * WordCount but the spout does not stop, and the bolts are implemented in
+ * java.  This can show how fast the word count can run.
+ */
+public class FastWordCountTopology {
+  public static class FastRandomSentenceSpout extends BaseRichSpout {
+    SpoutOutputCollector _collector;
+    Random _rand;
+    private static final String[] CHOICES = {
+        "marry had a little lamb whos fleese was white as snow",
+        "and every where that marry went the lamb was sure to go",
+        "one two three four five six seven eight nine ten",
+        "this is a test of the emergency broadcast system this is only a test",
+        "peter piper picked a peck of pickeled peppers"
+    };
+
+    @Override
+    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
+      _collector = collector;
+      _rand = ThreadLocalRandom.current();
+    }
+
+    @Override
+    public void nextTuple() {
+      String sentence = CHOICES[_rand.nextInt(CHOICES.length)];
+      _collector.emit(new Values(sentence), sentence);
+    }
+
+    @Override
+    public void ack(Object id) {
+        //Ignored
+    }
+
+    @Override
+    public void fail(Object id) {
+      _collector.emit(new Values(id), id);
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+      declarer.declare(new Fields("sentence"));
+    }
+  }
+
+  public static class SplitSentence extends BaseBasicBolt {
+    @Override
+    public void execute(Tuple tuple, BasicOutputCollector collector) {
+      String sentence = tuple.getString(0);
+      for (String word: sentence.split("\\s+")) {
+          collector.emit(new Values(word, 1));
+      }
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+      declarer.declare(new Fields("word", "count"));
+    }
+  }
+
+  public static class WordCount extends BaseBasicBolt {
+    Map<String, Integer> counts = new HashMap<String, Integer>();
+
+    @Override
+    public void execute(Tuple tuple, BasicOutputCollector collector) {
+      String word = tuple.getString(0);
+      Integer count = counts.get(word);
+      if (count == null)
+        count = 0;
+      count++;
+      counts.put(word, count);
+      collector.emit(new Values(word, count));
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+      declarer.declare(new Fields("word", "count"));
+    }
+  }
+
+  public static void printMetrics(Nimbus.Client client, String name) throws Exception {
+    ClusterSummary summary = client.getClusterInfo();
+    String id = null;
+    for (TopologySummary ts: summary.get_topologies()) {
+      if (name.equals(ts.get_name())) {
+        id = ts.get_id();
+      }
+    }
+    if (id == null) {
+      throw new Exception("Could not find a topology named "+name);
+    }
+    TopologyInfo info = client.getTopologyInfo(id);
+    int uptime = info.get_uptime_secs();
+    long acked = 0;
+    long failed = 0;
+    double weightedAvgTotal = 0.0;
+    for (ExecutorSummary exec: info.get_executors()) {
+      if ("spout".equals(exec.get_component_id())) {
+        SpoutStats stats = exec.get_stats().get_specific().get_spout();
+        Map<String, Long> failedMap = stats.get_failed().get(":all-time");
+        Map<String, Long> ackedMap = stats.get_acked().get(":all-time");
+        Map<String, Double> avgLatMap = stats.get_complete_ms_avg().get(":all-time");
+        for (String key: ackedMap.keySet()) {
+          if (failedMap != null) {
+              Long tmp = failedMap.get(key);
+              if (tmp != null) {
+                  failed += tmp;
+              }
+          }
+          long ackVal = ackedMap.get(key);
+          double latVal = avgLatMap.get(key) * ackVal;
+          acked += ackVal;
+          weightedAvgTotal += latVal;
+        }
+      }
+    }
+    double avgLatency = weightedAvgTotal/acked;
+    System.out.println("uptime: "+uptime+" acked: "+acked+" avgLatency: "+avgLatency+" acked/sec: "+(((double)acked)/uptime)+" failed: "+failed);
+  } 
+
+  public static void kill(Nimbus.Client client, String name) throws Exception {
+    KillOptions opts = new KillOptions();
+    opts.set_wait_secs(0);
+    client.killTopologyWithOpts(name, opts);
+  } 
+
+  public static void main(String[] args) throws Exception {
+
+    TopologyBuilder builder = new TopologyBuilder();
+
+    builder.setSpout("spout", new FastRandomSentenceSpout(), 4);
+
+    builder.setBolt("split", new SplitSentence(), 4).shuffleGrouping("spout");
+    builder.setBolt("count", new WordCount(), 4).fieldsGrouping("split", new Fields("word"));
+
+    Config conf = new Config();
+    conf.registerMetricsConsumer(org.apache.storm.metric.LoggingMetricsConsumer.class);
+
+    String name = "wc-test";
+    if (args != null && args.length > 0) {
+        name = args[0];
+    }
+
+    conf.setNumWorkers(1);
+    StormSubmitter.submitTopologyWithProgressBar(name, conf, builder.createTopology());
+
+    Map clusterConf = Utils.readStormConfig();
+    clusterConf.putAll(Utils.readCommandLineOpts());
+    Nimbus.Client client = NimbusClient.getConfiguredClient(clusterConf).getClient();
+
+    //Sleep for 5 mins
+    for (int i = 0; i < 10; i++) {
+        Thread.sleep(30 * 1000);
+        printMetrics(client, name);
+    }
+    kill(client, name);
+  }
+}
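[Editorial note, not part of the patch: the WordCount bolt above keeps a per-word running total in an in-memory HashMap. The core increment-and-emit step can be sketched as plain Java; the class and method names below are illustrative only.]

```java
import java.util.HashMap;
import java.util.Map;

public class WordCountSketch {
    // Per-word running totals, mirroring the state held by the WordCount bolt.
    static Map<String, Integer> counts = new HashMap<>();

    // Increment the count for a word and return the new total,
    // the same logic the bolt's execute() applies per tuple.
    static int count(String word) {
        Integer c = counts.get(word);
        if (c == null) {
            c = 0;
        }
        c++;
        counts.put(word, c);
        return c;
    }

    public static void main(String[] args) {
        for (String w : "a b a a b c".split("\\s+")) {
            count(w);
        }
        System.out.println(counts.get("a")); // 3
    }
}
```

Note that, as in the bolt, the map is unbounded: a real deployment with a large vocabulary would need eviction or a windowed count.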

http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/examples/storm-starter/src/jvm/org/apache/storm/starter/InOrderDeliveryTest.java
----------------------------------------------------------------------
diff --git a/examples/storm-starter/src/jvm/org/apache/storm/starter/InOrderDeliveryTest.java b/examples/storm-starter/src/jvm/org/apache/storm/starter/InOrderDeliveryTest.java
new file mode 100644
index 0000000..1684ce5
--- /dev/null
+++ b/examples/storm-starter/src/jvm/org/apache/storm/starter/InOrderDeliveryTest.java
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.starter;
+
+import org.apache.storm.Config;
+import org.apache.storm.StormSubmitter;
+import org.apache.storm.generated.*;
+import org.apache.storm.spout.SpoutOutputCollector;
+import org.apache.storm.task.TopologyContext;
+import org.apache.storm.topology.BasicOutputCollector;
+import org.apache.storm.topology.FailedException;
+import org.apache.storm.topology.IRichBolt;
+import org.apache.storm.topology.OutputFieldsDeclarer;
+import org.apache.storm.topology.TopologyBuilder;
+import org.apache.storm.topology.base.BaseBasicBolt;
+import org.apache.storm.topology.base.BaseRichSpout;
+import org.apache.storm.tuple.Fields;
+import org.apache.storm.tuple.Tuple;
+import org.apache.storm.tuple.Values;
+import org.apache.storm.utils.NimbusClient;
+import org.apache.storm.utils.Utils;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Random;
+
+public class InOrderDeliveryTest {
+  public static class InOrderSpout extends BaseRichSpout {
+    SpoutOutputCollector _collector;
+    int _base = 0;
+    int _i = 0;
+
+    @Override
+    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
+      _collector = collector;
+      _base = context.getThisTaskIndex();
+    }
+
+    @Override
+    public void nextTuple() {
+      Values v = new Values(_base, _i);
+      _collector.emit(v, "ACK");
+      _i++;
+    }
+
+    @Override
+    public void ack(Object id) {
+      //Ignored
+    }
+
+    @Override
+    public void fail(Object id) {
+      //Ignored
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+      declarer.declare(new Fields("c1", "c2"));
+    }
+  }
+
+  public static class Check extends BaseBasicBolt {
+    Map<Integer, Integer> expected = new HashMap<Integer, Integer>();
+
+    @Override
+    public void execute(Tuple tuple, BasicOutputCollector collector) {
+      Integer c1 = tuple.getInteger(0);
+      Integer c2 = tuple.getInteger(1);
+      Integer exp = expected.get(c1);
+      if (exp == null) exp = 0;
+      if (c2.intValue() != exp.intValue()) {
+          System.out.println(c1+" "+c2+" != "+exp);
+          throw new FailedException(c1+" "+c2+" != "+exp);
+      }
+      exp = c2 + 1;
+      expected.put(c1, exp);
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+      //Empty
+    }
+  }
+
+  public static void printMetrics(Nimbus.Client client, String name) throws Exception {
+    ClusterSummary summary = client.getClusterInfo();
+    String id = null;
+    for (TopologySummary ts: summary.get_topologies()) {
+      if (name.equals(ts.get_name())) {
+        id = ts.get_id();
+      }
+    }
+    if (id == null) {
+      throw new Exception("Could not find a topology named "+name);
+    }
+    TopologyInfo info = client.getTopologyInfo(id);
+    int uptime = info.get_uptime_secs();
+    long acked = 0;
+    long failed = 0;
+    double weightedAvgTotal = 0.0;
+    for (ExecutorSummary exec: info.get_executors()) {
+      if ("spout".equals(exec.get_component_id())) {
+        SpoutStats stats = exec.get_stats().get_specific().get_spout();
+        Map<String, Long> failedMap = stats.get_failed().get(":all-time");
+        Map<String, Long> ackedMap = stats.get_acked().get(":all-time");
+        Map<String, Double> avgLatMap = stats.get_complete_ms_avg().get(":all-time");
+        for (String key: ackedMap.keySet()) {
+          if (failedMap != null) {
+              Long tmp = failedMap.get(key);
+              if (tmp != null) {
+                  failed += tmp;
+              }
+          }
+          long ackVal = ackedMap.get(key);
+          double latVal = avgLatMap.get(key) * ackVal;
+          acked += ackVal;
+          weightedAvgTotal += latVal;
+        }
+      }
+    }
+    double avgLatency = weightedAvgTotal/acked;
+    System.out.println("uptime: "+uptime+" acked: "+acked+" avgLatency: "+avgLatency+" acked/sec: "+(((double)acked)/uptime)+" failed: "+failed);
+  } 
+
+  public static void kill(Nimbus.Client client, String name) throws Exception {
+    KillOptions opts = new KillOptions();
+    opts.set_wait_secs(0);
+    client.killTopologyWithOpts(name, opts);
+  } 
+
+  public static void main(String[] args) throws Exception {
+
+    TopologyBuilder builder = new TopologyBuilder();
+
+    builder.setSpout("spout", new InOrderSpout(), 8);
+    builder.setBolt("count", new Check(), 8).fieldsGrouping("spout", new Fields("c1"));
+
+    Config conf = new Config();
+    conf.registerMetricsConsumer(org.apache.storm.metric.LoggingMetricsConsumer.class);
+
+    String name = "in-order-test";
+    if (args != null && args.length > 0) {
+        name = args[0];
+    }
+
+    conf.setNumWorkers(1);
+    StormSubmitter.submitTopologyWithProgressBar(name, conf, builder.createTopology());
+
+    Map clusterConf = Utils.readStormConfig();
+    clusterConf.putAll(Utils.readCommandLineOpts());
+    Nimbus.Client client = NimbusClient.getConfiguredClient(clusterConf).getClient();
+
+    //Sleep for 25 mins (50 iterations of 30 seconds)
+    for (int i = 0; i < 50; i++) {
+        Thread.sleep(30 * 1000);
+        printMetrics(client, name);
+    }
+    kill(client, name);
+  }
+}
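[Editorial note, not part of the patch: the Check bolt above verifies per-source ordering by tracking the next expected sequence number for each source index and failing the tuple on any gap. That check can be sketched standalone; the class and method names below are illustrative only.]

```java
import java.util.HashMap;
import java.util.Map;

public class InOrderCheckSketch {
    // Next expected sequence number per source, as in the Check bolt's "expected" map.
    static Map<Integer, Integer> expected = new HashMap<>();

    // Returns true if (source, seq) arrives in order and advances the
    // expectation; returns false where the bolt would throw FailedException.
    static boolean accept(int source, int seq) {
        Integer exp = expected.get(source);
        if (exp == null) {
            exp = 0;
        }
        if (seq != exp) {
            return false;
        }
        expected.put(source, seq + 1);
        return true;
    }

    public static void main(String[] args) {
        System.out.println(accept(0, 0)); // true
        System.out.println(accept(0, 1)); // true
        System.out.println(accept(0, 3)); // false: sequence 2 was skipped
    }
}
```

The fieldsGrouping on "c1" in the topology is what makes this check valid: it routes all tuples from one source to the same Check task, so each task sees a single monotonically increasing sequence per source.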

http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/examples/storm-starter/src/jvm/org/apache/storm/starter/ManualDRPC.java
----------------------------------------------------------------------
diff --git a/examples/storm-starter/src/jvm/org/apache/storm/starter/ManualDRPC.java b/examples/storm-starter/src/jvm/org/apache/storm/starter/ManualDRPC.java
new file mode 100644
index 0000000..4c9daec
--- /dev/null
+++ b/examples/storm-starter/src/jvm/org/apache/storm/starter/ManualDRPC.java
@@ -0,0 +1,68 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.starter;
+
+import org.apache.storm.Config;
+import org.apache.storm.LocalCluster;
+import org.apache.storm.LocalDRPC;
+import org.apache.storm.drpc.DRPCSpout;
+import org.apache.storm.drpc.ReturnResults;
+import org.apache.storm.topology.BasicOutputCollector;
+import org.apache.storm.topology.OutputFieldsDeclarer;
+import org.apache.storm.topology.TopologyBuilder;
+import org.apache.storm.topology.base.BaseBasicBolt;
+import org.apache.storm.tuple.Fields;
+import org.apache.storm.tuple.Tuple;
+import org.apache.storm.tuple.Values;
+
+
+public class ManualDRPC {
+  public static class ExclamationBolt extends BaseBasicBolt {
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+      declarer.declare(new Fields("result", "return-info"));
+    }
+
+    @Override
+    public void execute(Tuple tuple, BasicOutputCollector collector) {
+      String arg = tuple.getString(0);
+      Object retInfo = tuple.getValue(1);
+      collector.emit(new Values(arg + "!!!", retInfo));
+    }
+
+  }
+
+  public static void main(String[] args) {
+    TopologyBuilder builder = new TopologyBuilder();
+    LocalDRPC drpc = new LocalDRPC();
+
+    DRPCSpout spout = new DRPCSpout("exclamation", drpc);
+    builder.setSpout("drpc", spout);
+    builder.setBolt("exclaim", new ExclamationBolt(), 3).shuffleGrouping("drpc");
+    builder.setBolt("return", new ReturnResults(), 3).shuffleGrouping("exclaim");
+
+    LocalCluster cluster = new LocalCluster();
+    Config conf = new Config();
+    cluster.submitTopology("exclaim", conf, builder.createTopology());
+
+    System.out.println(drpc.execute("exclamation", "aaa"));
+    System.out.println(drpc.execute("exclamation", "bbb"));
+
+    cluster.shutdown();
+    drpc.shutdown();
+  }
+}

http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/examples/storm-starter/src/jvm/org/apache/storm/starter/MultipleLoggerTopology.java
----------------------------------------------------------------------
diff --git a/examples/storm-starter/src/jvm/org/apache/storm/starter/MultipleLoggerTopology.java b/examples/storm-starter/src/jvm/org/apache/storm/starter/MultipleLoggerTopology.java
new file mode 100644
index 0000000..99c3da1
--- /dev/null
+++ b/examples/storm-starter/src/jvm/org/apache/storm/starter/MultipleLoggerTopology.java
@@ -0,0 +1,105 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.starter;
+
+import org.apache.storm.Config;
+import org.apache.storm.LocalCluster;
+import org.apache.storm.StormSubmitter;
+import org.apache.storm.task.OutputCollector;
+import org.apache.storm.task.TopologyContext;
+import org.apache.storm.testing.TestWordSpout;
+import org.apache.storm.topology.OutputFieldsDeclarer;
+import org.apache.storm.topology.TopologyBuilder;
+import org.apache.storm.topology.base.BaseRichBolt;
+import org.apache.storm.tuple.Fields;
+import org.apache.storm.tuple.Tuple;
+import org.apache.storm.tuple.Values;
+import org.apache.storm.utils.Utils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.List;
+import java.util.Map;
+
+/**
+ * This is a basic example of a Storm topology.
+ */
+public class MultipleLoggerTopology {
+  public static class ExclamationLoggingBolt extends BaseRichBolt {
+    OutputCollector _collector;
+    Logger _rootLogger = LoggerFactory.getLogger (Logger.ROOT_LOGGER_NAME);
+    // ensure the loggers are configured in the worker.xml before
+    // trying to use them here
+    Logger _logger = LoggerFactory.getLogger ("com.myapp");
+    Logger _subLogger = LoggerFactory.getLogger ("com.myapp.sub");
+
+    @Override
+    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
+      _collector = collector;
+    }
+
+    @Override
+    public void execute(Tuple tuple) {
+      _rootLogger.debug ("root: This is a DEBUG message");
+      _rootLogger.info ("root: This is an INFO message");
+      _rootLogger.warn ("root: This is a WARN message");
+      _rootLogger.error ("root: This is an ERROR message");
+
+      _logger.debug ("myapp: This is a DEBUG message");
+      _logger.info ("myapp: This is an INFO message");
+      _logger.warn ("myapp: This is a WARN message");
+      _logger.error ("myapp: This is an ERROR message");
+
+      _subLogger.debug ("myapp.sub: This is a DEBUG message");
+      _subLogger.info ("myapp.sub: This is an INFO message");
+      _subLogger.warn ("myapp.sub: This is a WARN message");
+      _subLogger.error ("myapp.sub: This is an ERROR message");
+
+      _collector.emit(tuple, new Values(tuple.getString(0) + "!!!"));
+      _collector.ack(tuple);
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+      declarer.declare(new Fields("word"));
+    }
+  }
+
+  public static void main(String[] args) throws Exception {
+    TopologyBuilder builder = new TopologyBuilder();
+
+    builder.setSpout("word", new TestWordSpout(), 10);
+    builder.setBolt("exclaim1", new ExclamationLoggingBolt(), 3).shuffleGrouping("word");
+    builder.setBolt("exclaim2", new ExclamationLoggingBolt(), 2).shuffleGrouping("exclaim1");
+
+    Config conf = new Config();
+    conf.setDebug(true);
+
+    if (args != null && args.length > 0) {
+      conf.setNumWorkers(2);
+      StormSubmitter.submitTopologyWithProgressBar(args[0], conf, builder.createTopology());
+    } else {
+      LocalCluster cluster = new LocalCluster();
+      cluster.submitTopology("test", conf, builder.createTopology());
+      Utils.sleep(10000);
+      cluster.killTopology("test");
+      cluster.shutdown();
+    }
+  }
+}

http://git-wip-us.apache.org/repos/asf/storm/blob/d839d1bf/examples/storm-starter/src/jvm/org/apache/storm/starter/PrintSampleStream.java
----------------------------------------------------------------------
diff --git a/examples/storm-starter/src/jvm/org/apache/storm/starter/PrintSampleStream.java b/examples/storm-starter/src/jvm/org/apache/storm/starter/PrintSampleStream.java
new file mode 100644
index 0000000..466fca0
--- /dev/null
+++ b/examples/storm-starter/src/jvm/org/apache/storm/starter/PrintSampleStream.java
@@ -0,0 +1,58 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.storm.starter;
+
+import java.util.Arrays;
+
+import org.apache.storm.Config;
+import org.apache.storm.LocalCluster;
+import org.apache.storm.topology.TopologyBuilder;
+import org.apache.storm.utils.Utils;
+
+import org.apache.storm.starter.bolt.PrinterBolt;
+import org.apache.storm.starter.spout.TwitterSampleSpout;
+
+public class PrintSampleStream {
+    public static void main(String[] args) {
+        String consumerKey = args[0];
+        String consumerSecret = args[1];
+        String accessToken = args[2];
+        String accessTokenSecret = args[3];
+        String[] arguments = args.clone();
+        String[] keyWords = Arrays.copyOfRange(arguments, 4, arguments.length);
+
+        TopologyBuilder builder = new TopologyBuilder();
+        
+        builder.setSpout("twitter", new TwitterSampleSpout(consumerKey, consumerSecret,
+                                accessToken, accessTokenSecret, keyWords));
+        builder.setBolt("print", new PrinterBolt())
+                .shuffleGrouping("twitter");
+
+        Config conf = new Config();
+
+        LocalCluster cluster = new LocalCluster();
+        
+        cluster.submitTopology("test", conf, builder.createTopology());
+        
+        Utils.sleep(10000);
+        cluster.shutdown();
+    }
+}