Posted to commits@storm.apache.org by sr...@apache.org on 2015/12/01 19:08:45 UTC

[01/50] [abbrv] storm git commit: Added STORM-1213 to Changelog

Repository: storm
Updated Branches:
  refs/heads/STORM-1040 9f214abdb -> 0f18238f5


Added STORM-1213 to Changelog


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/ccf3fd2b
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/ccf3fd2b
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/ccf3fd2b

Branch: refs/heads/STORM-1040
Commit: ccf3fd2b120ff42e5984efd20de1d11004ac8ad1
Parents: 49353bc
Author: Robert (Bobby) Evans <ev...@yahoo-inc.com>
Authored: Mon Nov 23 15:53:54 2015 -0600
Committer: Robert (Bobby) Evans <ev...@yahoo-inc.com>
Committed: Mon Nov 23 15:53:54 2015 -0600

----------------------------------------------------------------------
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/ccf3fd2b/CHANGELOG.md
----------------------------------------------------------------------
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 8e0230e..6c1135a 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,5 @@
 ## 0.11.0
+ * STORM-1213: Remove sigar binaries from source tree
  * STORM-885:  Heartbeat Server (Pacemaker)
  * STORM-1221: Create a common interface for all Trident spout.
  * STORM-1198: Web UI to show resource usages and Total Resources on all supervisors


[44/50] [abbrv] storm git commit: Merge branch 'STORM-1341' of https://github.com/HeartSaVioR/storm into STORM-1341

Merge branch 'STORM-1341' of https://github.com/HeartSaVioR/storm into STORM-1341


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/63026eec
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/63026eec
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/63026eec

Branch: refs/heads/STORM-1040
Commit: 63026eecf46d411ef14ee37d85bfca8af4057b10
Parents: c7c367c fc8c296
Author: Jungtaek Lim <ka...@gmail.com>
Authored: Fri Nov 27 05:58:56 2015 +0900
Committer: Jungtaek Lim <ka...@gmail.com>
Committed: Fri Nov 27 05:58:56 2015 +0900

----------------------------------------------------------------------
 storm-core/src/jvm/backtype/storm/Config.java           | 8 ++++++++
 storm-core/src/jvm/backtype/storm/spout/ShellSpout.java | 6 +++++-
 storm-core/src/jvm/backtype/storm/task/ShellBolt.java   | 6 +++++-
 3 files changed, 18 insertions(+), 2 deletions(-)
----------------------------------------------------------------------
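STORM-1341 concerns the heartbeat timeout between a ShellSpout/ShellBolt and its multilang subprocess: the merged change adds a config option in Config.java and corresponding checks in ShellSpout.java and ShellBolt.java. The real implementation is Java and is not shown in this diffstat; the sketch below is only an illustration of the underlying idea (track the last heartbeat and expire after a per-topology timeout), with hypothetical names throughout.

```python
import threading
import time

class SubprocessMonitor:
    """Illustrative only: a watchdog that expires when a multilang
    subprocess stops sending heartbeats. Storm's actual check lives in
    ShellSpout/ShellBolt (Java); names here are hypothetical."""

    def __init__(self, timeout_secs):
        self.timeout_secs = timeout_secs       # per-topology override
        self.last_heartbeat = time.time()
        self._lock = threading.Lock()

    def on_heartbeat(self):
        # Called whenever the subprocess reports a heartbeat.
        with self._lock:
            self.last_heartbeat = time.time()

    def is_expired(self, now=None):
        # True once the subprocess has been silent longer than the timeout.
        now = time.time() if now is None else now
        with self._lock:
            return (now - self.last_heartbeat) > self.timeout_secs
```

The point of the per-topology setting is that a single worker-wide default forces every multilang topology to share one tolerance for slow subprocesses.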



[22/50] [abbrv] storm git commit: add STORM-1349 to changelog

add STORM-1349 to changelog


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/dc05a00e
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/dc05a00e
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/dc05a00e

Branch: refs/heads/STORM-1040
Commit: dc05a00e5c0f956f4511fec789ea7c037f811723
Parents: 609b11c
Author: P. Taylor Goetz <pt...@gmail.com>
Authored: Tue Nov 24 14:42:05 2015 -0500
Committer: P. Taylor Goetz <pt...@gmail.com>
Committed: Tue Nov 24 14:42:05 2015 -0500

----------------------------------------------------------------------
 CHANGELOG.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/dc05a00e/CHANGELOG.md
----------------------------------------------------------------------
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 9f0d482..11aa0f5 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,5 @@
-## 0.11.0
++## 0.11.0
+ * STORM-1349: [Flux] Allow constructorArgs to take Maps as arguments
  * STORM-126: Add Lifecycle support API for worker nodes
  * STORM-1213: Remove sigar binaries from source tree
  * STORM-885:  Heartbeat Server (Pacemaker)
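The entry added above, STORM-1349, lets Flux `constructorArgs` accept maps in addition to scalars and lists. Flux itself is Java and resolves these arguments from YAML topology definitions; the sketch below is only a minimal illustration of the idea, with hypothetical names.

```python
# Illustrative sketch of the idea behind STORM-1349: a Flux-style
# constructor-argument resolver that passes maps through to the
# component's constructor instead of rejecting them.
def resolve_constructor_args(raw_args):
    resolved = []
    for arg in raw_args:
        if isinstance(arg, dict):
            # Previously unsupported; now forwarded as a Map argument.
            resolved.append(dict(arg))
        else:
            resolved.append(arg)
    return resolved

class KeyedComponent:
    # Stand-in for a component whose constructor takes a map of options.
    def __init__(self, name, options):
        self.name = name
        self.options = options

args = resolve_constructor_args(["tick-spout", {"interval.secs": 5}])
component = KeyedComponent(*args)
```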


[07/50] [abbrv] storm git commit: add support for worker lifecycle hooks

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/py/storm/ttypes.py
----------------------------------------------------------------------
diff --git a/storm-core/src/py/storm/ttypes.py b/storm-core/src/py/storm/ttypes.py
index 7ba62ae..a730c13 100644
--- a/storm-core/src/py/storm/ttypes.py
+++ b/storm-core/src/py/storm/ttypes.py
@@ -1377,6 +1377,7 @@ class StormTopology:
    - spouts
    - bolts
    - state_spouts
+   - worker_hooks
   """
 
   thrift_spec = (
@@ -1384,12 +1385,14 @@ class StormTopology:
     (1, TType.MAP, 'spouts', (TType.STRING,None,TType.STRUCT,(SpoutSpec, SpoutSpec.thrift_spec)), None, ), # 1
     (2, TType.MAP, 'bolts', (TType.STRING,None,TType.STRUCT,(Bolt, Bolt.thrift_spec)), None, ), # 2
     (3, TType.MAP, 'state_spouts', (TType.STRING,None,TType.STRUCT,(StateSpoutSpec, StateSpoutSpec.thrift_spec)), None, ), # 3
+    (4, TType.LIST, 'worker_hooks', (TType.STRING,None), None, ), # 4
   )
 
-  def __init__(self, spouts=None, bolts=None, state_spouts=None,):
+  def __init__(self, spouts=None, bolts=None, state_spouts=None, worker_hooks=None,):
     self.spouts = spouts
     self.bolts = bolts
     self.state_spouts = state_spouts
+    self.worker_hooks = worker_hooks
 
   def read(self, iprot):
     if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
@@ -1436,6 +1439,16 @@ class StormTopology:
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
+      elif fid == 4:
+        if ftype == TType.LIST:
+          self.worker_hooks = []
+          (_etype63, _size60) = iprot.readListBegin()
+          for _i64 in xrange(_size60):
+            _elem65 = iprot.readString()
+            self.worker_hooks.append(_elem65)
+          iprot.readListEnd()
+        else:
+          iprot.skip(ftype)
       else:
         iprot.skip(ftype)
       iprot.readFieldEnd()
@@ -1449,27 +1462,34 @@ class StormTopology:
     if self.spouts is not None:
       oprot.writeFieldBegin('spouts', TType.MAP, 1)
       oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.spouts))
-      for kiter60,viter61 in self.spouts.items():
-        oprot.writeString(kiter60.encode('utf-8'))
-        viter61.write(oprot)
+      for kiter66,viter67 in self.spouts.items():
+        oprot.writeString(kiter66.encode('utf-8'))
+        viter67.write(oprot)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.bolts is not None:
       oprot.writeFieldBegin('bolts', TType.MAP, 2)
       oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.bolts))
-      for kiter62,viter63 in self.bolts.items():
-        oprot.writeString(kiter62.encode('utf-8'))
-        viter63.write(oprot)
+      for kiter68,viter69 in self.bolts.items():
+        oprot.writeString(kiter68.encode('utf-8'))
+        viter69.write(oprot)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.state_spouts is not None:
       oprot.writeFieldBegin('state_spouts', TType.MAP, 3)
       oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.state_spouts))
-      for kiter64,viter65 in self.state_spouts.items():
-        oprot.writeString(kiter64.encode('utf-8'))
-        viter65.write(oprot)
+      for kiter70,viter71 in self.state_spouts.items():
+        oprot.writeString(kiter70.encode('utf-8'))
+        viter71.write(oprot)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
+    if self.worker_hooks is not None:
+      oprot.writeFieldBegin('worker_hooks', TType.LIST, 4)
+      oprot.writeListBegin(TType.STRING, len(self.worker_hooks))
+      for iter72 in self.worker_hooks:
+        oprot.writeString(iter72)
+      oprot.writeListEnd()
+      oprot.writeFieldEnd()
     oprot.writeFieldStop()
     oprot.writeStructEnd()
 
@@ -1488,6 +1508,7 @@ class StormTopology:
     value = (value * 31) ^ hash(self.spouts)
     value = (value * 31) ^ hash(self.bolts)
     value = (value * 31) ^ hash(self.state_spouts)
+    value = (value * 31) ^ hash(self.worker_hooks)
     return value
 
   def __repr__(self):
@@ -2645,11 +2666,11 @@ class SupervisorSummary:
       elif fid == 7:
         if ftype == TType.MAP:
           self.total_resources = {}
-          (_ktype67, _vtype68, _size66 ) = iprot.readMapBegin()
-          for _i70 in xrange(_size66):
-            _key71 = iprot.readString().decode('utf-8')
-            _val72 = iprot.readDouble()
-            self.total_resources[_key71] = _val72
+          (_ktype74, _vtype75, _size73 ) = iprot.readMapBegin()
+          for _i77 in xrange(_size73):
+            _key78 = iprot.readString().decode('utf-8')
+            _val79 = iprot.readDouble()
+            self.total_resources[_key78] = _val79
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -2700,9 +2721,9 @@ class SupervisorSummary:
     if self.total_resources is not None:
       oprot.writeFieldBegin('total_resources', TType.MAP, 7)
       oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(self.total_resources))
-      for kiter73,viter74 in self.total_resources.items():
-        oprot.writeString(kiter73.encode('utf-8'))
-        oprot.writeDouble(viter74)
+      for kiter80,viter81 in self.total_resources.items():
+        oprot.writeString(kiter80.encode('utf-8'))
+        oprot.writeDouble(viter81)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.used_mem is not None:
@@ -2916,11 +2937,11 @@ class ClusterSummary:
       if fid == 1:
         if ftype == TType.LIST:
           self.supervisors = []
-          (_etype78, _size75) = iprot.readListBegin()
-          for _i79 in xrange(_size75):
-            _elem80 = SupervisorSummary()
-            _elem80.read(iprot)
-            self.supervisors.append(_elem80)
+          (_etype85, _size82) = iprot.readListBegin()
+          for _i86 in xrange(_size82):
+            _elem87 = SupervisorSummary()
+            _elem87.read(iprot)
+            self.supervisors.append(_elem87)
           iprot.readListEnd()
         else:
           iprot.skip(ftype)
@@ -2932,22 +2953,22 @@ class ClusterSummary:
       elif fid == 3:
         if ftype == TType.LIST:
           self.topologies = []
-          (_etype84, _size81) = iprot.readListBegin()
-          for _i85 in xrange(_size81):
-            _elem86 = TopologySummary()
-            _elem86.read(iprot)
-            self.topologies.append(_elem86)
+          (_etype91, _size88) = iprot.readListBegin()
+          for _i92 in xrange(_size88):
+            _elem93 = TopologySummary()
+            _elem93.read(iprot)
+            self.topologies.append(_elem93)
           iprot.readListEnd()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.LIST:
           self.nimbuses = []
-          (_etype90, _size87) = iprot.readListBegin()
-          for _i91 in xrange(_size87):
-            _elem92 = NimbusSummary()
-            _elem92.read(iprot)
-            self.nimbuses.append(_elem92)
+          (_etype97, _size94) = iprot.readListBegin()
+          for _i98 in xrange(_size94):
+            _elem99 = NimbusSummary()
+            _elem99.read(iprot)
+            self.nimbuses.append(_elem99)
           iprot.readListEnd()
         else:
           iprot.skip(ftype)
@@ -2964,8 +2985,8 @@ class ClusterSummary:
     if self.supervisors is not None:
       oprot.writeFieldBegin('supervisors', TType.LIST, 1)
       oprot.writeListBegin(TType.STRUCT, len(self.supervisors))
-      for iter93 in self.supervisors:
-        iter93.write(oprot)
+      for iter100 in self.supervisors:
+        iter100.write(oprot)
       oprot.writeListEnd()
       oprot.writeFieldEnd()
     if self.nimbus_uptime_secs is not None:
@@ -2975,15 +2996,15 @@ class ClusterSummary:
     if self.topologies is not None:
       oprot.writeFieldBegin('topologies', TType.LIST, 3)
       oprot.writeListBegin(TType.STRUCT, len(self.topologies))
-      for iter94 in self.topologies:
-        iter94.write(oprot)
+      for iter101 in self.topologies:
+        iter101.write(oprot)
       oprot.writeListEnd()
       oprot.writeFieldEnd()
     if self.nimbuses is not None:
       oprot.writeFieldBegin('nimbuses', TType.LIST, 4)
       oprot.writeListBegin(TType.STRUCT, len(self.nimbuses))
-      for iter95 in self.nimbuses:
-        iter95.write(oprot)
+      for iter102 in self.nimbuses:
+        iter102.write(oprot)
       oprot.writeListEnd()
       oprot.writeFieldEnd()
     oprot.writeFieldStop()
@@ -3164,90 +3185,90 @@ class BoltStats:
       if fid == 1:
         if ftype == TType.MAP:
           self.acked = {}
-          (_ktype97, _vtype98, _size96 ) = iprot.readMapBegin()
-          for _i100 in xrange(_size96):
-            _key101 = iprot.readString().decode('utf-8')
-            _val102 = {}
-            (_ktype104, _vtype105, _size103 ) = iprot.readMapBegin()
-            for _i107 in xrange(_size103):
-              _key108 = GlobalStreamId()
-              _key108.read(iprot)
-              _val109 = iprot.readI64()
-              _val102[_key108] = _val109
+          (_ktype104, _vtype105, _size103 ) = iprot.readMapBegin()
+          for _i107 in xrange(_size103):
+            _key108 = iprot.readString().decode('utf-8')
+            _val109 = {}
+            (_ktype111, _vtype112, _size110 ) = iprot.readMapBegin()
+            for _i114 in xrange(_size110):
+              _key115 = GlobalStreamId()
+              _key115.read(iprot)
+              _val116 = iprot.readI64()
+              _val109[_key115] = _val116
             iprot.readMapEnd()
-            self.acked[_key101] = _val102
+            self.acked[_key108] = _val109
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.MAP:
           self.failed = {}
-          (_ktype111, _vtype112, _size110 ) = iprot.readMapBegin()
-          for _i114 in xrange(_size110):
-            _key115 = iprot.readString().decode('utf-8')
-            _val116 = {}
-            (_ktype118, _vtype119, _size117 ) = iprot.readMapBegin()
-            for _i121 in xrange(_size117):
-              _key122 = GlobalStreamId()
-              _key122.read(iprot)
-              _val123 = iprot.readI64()
-              _val116[_key122] = _val123
+          (_ktype118, _vtype119, _size117 ) = iprot.readMapBegin()
+          for _i121 in xrange(_size117):
+            _key122 = iprot.readString().decode('utf-8')
+            _val123 = {}
+            (_ktype125, _vtype126, _size124 ) = iprot.readMapBegin()
+            for _i128 in xrange(_size124):
+              _key129 = GlobalStreamId()
+              _key129.read(iprot)
+              _val130 = iprot.readI64()
+              _val123[_key129] = _val130
             iprot.readMapEnd()
-            self.failed[_key115] = _val116
+            self.failed[_key122] = _val123
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.MAP:
           self.process_ms_avg = {}
-          (_ktype125, _vtype126, _size124 ) = iprot.readMapBegin()
-          for _i128 in xrange(_size124):
-            _key129 = iprot.readString().decode('utf-8')
-            _val130 = {}
-            (_ktype132, _vtype133, _size131 ) = iprot.readMapBegin()
-            for _i135 in xrange(_size131):
-              _key136 = GlobalStreamId()
-              _key136.read(iprot)
-              _val137 = iprot.readDouble()
-              _val130[_key136] = _val137
+          (_ktype132, _vtype133, _size131 ) = iprot.readMapBegin()
+          for _i135 in xrange(_size131):
+            _key136 = iprot.readString().decode('utf-8')
+            _val137 = {}
+            (_ktype139, _vtype140, _size138 ) = iprot.readMapBegin()
+            for _i142 in xrange(_size138):
+              _key143 = GlobalStreamId()
+              _key143.read(iprot)
+              _val144 = iprot.readDouble()
+              _val137[_key143] = _val144
             iprot.readMapEnd()
-            self.process_ms_avg[_key129] = _val130
+            self.process_ms_avg[_key136] = _val137
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.MAP:
           self.executed = {}
-          (_ktype139, _vtype140, _size138 ) = iprot.readMapBegin()
-          for _i142 in xrange(_size138):
-            _key143 = iprot.readString().decode('utf-8')
-            _val144 = {}
-            (_ktype146, _vtype147, _size145 ) = iprot.readMapBegin()
-            for _i149 in xrange(_size145):
-              _key150 = GlobalStreamId()
-              _key150.read(iprot)
-              _val151 = iprot.readI64()
-              _val144[_key150] = _val151
+          (_ktype146, _vtype147, _size145 ) = iprot.readMapBegin()
+          for _i149 in xrange(_size145):
+            _key150 = iprot.readString().decode('utf-8')
+            _val151 = {}
+            (_ktype153, _vtype154, _size152 ) = iprot.readMapBegin()
+            for _i156 in xrange(_size152):
+              _key157 = GlobalStreamId()
+              _key157.read(iprot)
+              _val158 = iprot.readI64()
+              _val151[_key157] = _val158
             iprot.readMapEnd()
-            self.executed[_key143] = _val144
+            self.executed[_key150] = _val151
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
       elif fid == 5:
         if ftype == TType.MAP:
           self.execute_ms_avg = {}
-          (_ktype153, _vtype154, _size152 ) = iprot.readMapBegin()
-          for _i156 in xrange(_size152):
-            _key157 = iprot.readString().decode('utf-8')
-            _val158 = {}
-            (_ktype160, _vtype161, _size159 ) = iprot.readMapBegin()
-            for _i163 in xrange(_size159):
-              _key164 = GlobalStreamId()
-              _key164.read(iprot)
-              _val165 = iprot.readDouble()
-              _val158[_key164] = _val165
+          (_ktype160, _vtype161, _size159 ) = iprot.readMapBegin()
+          for _i163 in xrange(_size159):
+            _key164 = iprot.readString().decode('utf-8')
+            _val165 = {}
+            (_ktype167, _vtype168, _size166 ) = iprot.readMapBegin()
+            for _i170 in xrange(_size166):
+              _key171 = GlobalStreamId()
+              _key171.read(iprot)
+              _val172 = iprot.readDouble()
+              _val165[_key171] = _val172
             iprot.readMapEnd()
-            self.execute_ms_avg[_key157] = _val158
+            self.execute_ms_avg[_key164] = _val165
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -3264,60 +3285,60 @@ class BoltStats:
     if self.acked is not None:
       oprot.writeFieldBegin('acked', TType.MAP, 1)
       oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.acked))
-      for kiter166,viter167 in self.acked.items():
-        oprot.writeString(kiter166.encode('utf-8'))
-        oprot.writeMapBegin(TType.STRUCT, TType.I64, len(viter167))
-        for kiter168,viter169 in viter167.items():
-          kiter168.write(oprot)
-          oprot.writeI64(viter169)
+      for kiter173,viter174 in self.acked.items():
+        oprot.writeString(kiter173.encode('utf-8'))
+        oprot.writeMapBegin(TType.STRUCT, TType.I64, len(viter174))
+        for kiter175,viter176 in viter174.items():
+          kiter175.write(oprot)
+          oprot.writeI64(viter176)
         oprot.writeMapEnd()
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.failed is not None:
       oprot.writeFieldBegin('failed', TType.MAP, 2)
       oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.failed))
-      for kiter170,viter171 in self.failed.items():
-        oprot.writeString(kiter170.encode('utf-8'))
-        oprot.writeMapBegin(TType.STRUCT, TType.I64, len(viter171))
-        for kiter172,viter173 in viter171.items():
-          kiter172.write(oprot)
-          oprot.writeI64(viter173)
+      for kiter177,viter178 in self.failed.items():
+        oprot.writeString(kiter177.encode('utf-8'))
+        oprot.writeMapBegin(TType.STRUCT, TType.I64, len(viter178))
+        for kiter179,viter180 in viter178.items():
+          kiter179.write(oprot)
+          oprot.writeI64(viter180)
         oprot.writeMapEnd()
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.process_ms_avg is not None:
       oprot.writeFieldBegin('process_ms_avg', TType.MAP, 3)
       oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.process_ms_avg))
-      for kiter174,viter175 in self.process_ms_avg.items():
-        oprot.writeString(kiter174.encode('utf-8'))
-        oprot.writeMapBegin(TType.STRUCT, TType.DOUBLE, len(viter175))
-        for kiter176,viter177 in viter175.items():
-          kiter176.write(oprot)
-          oprot.writeDouble(viter177)
+      for kiter181,viter182 in self.process_ms_avg.items():
+        oprot.writeString(kiter181.encode('utf-8'))
+        oprot.writeMapBegin(TType.STRUCT, TType.DOUBLE, len(viter182))
+        for kiter183,viter184 in viter182.items():
+          kiter183.write(oprot)
+          oprot.writeDouble(viter184)
         oprot.writeMapEnd()
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.executed is not None:
       oprot.writeFieldBegin('executed', TType.MAP, 4)
       oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.executed))
-      for kiter178,viter179 in self.executed.items():
-        oprot.writeString(kiter178.encode('utf-8'))
-        oprot.writeMapBegin(TType.STRUCT, TType.I64, len(viter179))
-        for kiter180,viter181 in viter179.items():
-          kiter180.write(oprot)
-          oprot.writeI64(viter181)
+      for kiter185,viter186 in self.executed.items():
+        oprot.writeString(kiter185.encode('utf-8'))
+        oprot.writeMapBegin(TType.STRUCT, TType.I64, len(viter186))
+        for kiter187,viter188 in viter186.items():
+          kiter187.write(oprot)
+          oprot.writeI64(viter188)
         oprot.writeMapEnd()
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.execute_ms_avg is not None:
       oprot.writeFieldBegin('execute_ms_avg', TType.MAP, 5)
       oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.execute_ms_avg))
-      for kiter182,viter183 in self.execute_ms_avg.items():
-        oprot.writeString(kiter182.encode('utf-8'))
-        oprot.writeMapBegin(TType.STRUCT, TType.DOUBLE, len(viter183))
-        for kiter184,viter185 in viter183.items():
-          kiter184.write(oprot)
-          oprot.writeDouble(viter185)
+      for kiter189,viter190 in self.execute_ms_avg.items():
+        oprot.writeString(kiter189.encode('utf-8'))
+        oprot.writeMapBegin(TType.STRUCT, TType.DOUBLE, len(viter190))
+        for kiter191,viter192 in viter190.items():
+          kiter191.write(oprot)
+          oprot.writeDouble(viter192)
         oprot.writeMapEnd()
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
@@ -3390,51 +3411,51 @@ class SpoutStats:
       if fid == 1:
         if ftype == TType.MAP:
           self.acked = {}
-          (_ktype187, _vtype188, _size186 ) = iprot.readMapBegin()
-          for _i190 in xrange(_size186):
-            _key191 = iprot.readString().decode('utf-8')
-            _val192 = {}
-            (_ktype194, _vtype195, _size193 ) = iprot.readMapBegin()
-            for _i197 in xrange(_size193):
-              _key198 = iprot.readString().decode('utf-8')
-              _val199 = iprot.readI64()
-              _val192[_key198] = _val199
+          (_ktype194, _vtype195, _size193 ) = iprot.readMapBegin()
+          for _i197 in xrange(_size193):
+            _key198 = iprot.readString().decode('utf-8')
+            _val199 = {}
+            (_ktype201, _vtype202, _size200 ) = iprot.readMapBegin()
+            for _i204 in xrange(_size200):
+              _key205 = iprot.readString().decode('utf-8')
+              _val206 = iprot.readI64()
+              _val199[_key205] = _val206
             iprot.readMapEnd()
-            self.acked[_key191] = _val192
+            self.acked[_key198] = _val199
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.MAP:
           self.failed = {}
-          (_ktype201, _vtype202, _size200 ) = iprot.readMapBegin()
-          for _i204 in xrange(_size200):
-            _key205 = iprot.readString().decode('utf-8')
-            _val206 = {}
-            (_ktype208, _vtype209, _size207 ) = iprot.readMapBegin()
-            for _i211 in xrange(_size207):
-              _key212 = iprot.readString().decode('utf-8')
-              _val213 = iprot.readI64()
-              _val206[_key212] = _val213
+          (_ktype208, _vtype209, _size207 ) = iprot.readMapBegin()
+          for _i211 in xrange(_size207):
+            _key212 = iprot.readString().decode('utf-8')
+            _val213 = {}
+            (_ktype215, _vtype216, _size214 ) = iprot.readMapBegin()
+            for _i218 in xrange(_size214):
+              _key219 = iprot.readString().decode('utf-8')
+              _val220 = iprot.readI64()
+              _val213[_key219] = _val220
             iprot.readMapEnd()
-            self.failed[_key205] = _val206
+            self.failed[_key212] = _val213
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.MAP:
           self.complete_ms_avg = {}
-          (_ktype215, _vtype216, _size214 ) = iprot.readMapBegin()
-          for _i218 in xrange(_size214):
-            _key219 = iprot.readString().decode('utf-8')
-            _val220 = {}
-            (_ktype222, _vtype223, _size221 ) = iprot.readMapBegin()
-            for _i225 in xrange(_size221):
-              _key226 = iprot.readString().decode('utf-8')
-              _val227 = iprot.readDouble()
-              _val220[_key226] = _val227
+          (_ktype222, _vtype223, _size221 ) = iprot.readMapBegin()
+          for _i225 in xrange(_size221):
+            _key226 = iprot.readString().decode('utf-8')
+            _val227 = {}
+            (_ktype229, _vtype230, _size228 ) = iprot.readMapBegin()
+            for _i232 in xrange(_size228):
+              _key233 = iprot.readString().decode('utf-8')
+              _val234 = iprot.readDouble()
+              _val227[_key233] = _val234
             iprot.readMapEnd()
-            self.complete_ms_avg[_key219] = _val220
+            self.complete_ms_avg[_key226] = _val227
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -3451,36 +3472,36 @@ class SpoutStats:
     if self.acked is not None:
       oprot.writeFieldBegin('acked', TType.MAP, 1)
       oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.acked))
-      for kiter228,viter229 in self.acked.items():
-        oprot.writeString(kiter228.encode('utf-8'))
-        oprot.writeMapBegin(TType.STRING, TType.I64, len(viter229))
-        for kiter230,viter231 in viter229.items():
-          oprot.writeString(kiter230.encode('utf-8'))
-          oprot.writeI64(viter231)
+      for kiter235,viter236 in self.acked.items():
+        oprot.writeString(kiter235.encode('utf-8'))
+        oprot.writeMapBegin(TType.STRING, TType.I64, len(viter236))
+        for kiter237,viter238 in viter236.items():
+          oprot.writeString(kiter237.encode('utf-8'))
+          oprot.writeI64(viter238)
         oprot.writeMapEnd()
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.failed is not None:
       oprot.writeFieldBegin('failed', TType.MAP, 2)
       oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.failed))
-      for kiter232,viter233 in self.failed.items():
-        oprot.writeString(kiter232.encode('utf-8'))
-        oprot.writeMapBegin(TType.STRING, TType.I64, len(viter233))
-        for kiter234,viter235 in viter233.items():
-          oprot.writeString(kiter234.encode('utf-8'))
-          oprot.writeI64(viter235)
+      for kiter239,viter240 in self.failed.items():
+        oprot.writeString(kiter239.encode('utf-8'))
+        oprot.writeMapBegin(TType.STRING, TType.I64, len(viter240))
+        for kiter241,viter242 in viter240.items():
+          oprot.writeString(kiter241.encode('utf-8'))
+          oprot.writeI64(viter242)
         oprot.writeMapEnd()
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.complete_ms_avg is not None:
       oprot.writeFieldBegin('complete_ms_avg', TType.MAP, 3)
       oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.complete_ms_avg))
-      for kiter236,viter237 in self.complete_ms_avg.items():
-        oprot.writeString(kiter236.encode('utf-8'))
-        oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(viter237))
-        for kiter238,viter239 in viter237.items():
-          oprot.writeString(kiter238.encode('utf-8'))
-          oprot.writeDouble(viter239)
+      for kiter243,viter244 in self.complete_ms_avg.items():
+        oprot.writeString(kiter243.encode('utf-8'))
+        oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(viter244))
+        for kiter245,viter246 in viter244.items():
+          oprot.writeString(kiter245.encode('utf-8'))
+          oprot.writeDouble(viter246)
         oprot.writeMapEnd()
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
@@ -3630,34 +3651,34 @@ class ExecutorStats:
       if fid == 1:
         if ftype == TType.MAP:
           self.emitted = {}
-          (_ktype241, _vtype242, _size240 ) = iprot.readMapBegin()
-          for _i244 in xrange(_size240):
-            _key245 = iprot.readString().decode('utf-8')
-            _val246 = {}
-            (_ktype248, _vtype249, _size247 ) = iprot.readMapBegin()
-            for _i251 in xrange(_size247):
-              _key252 = iprot.readString().decode('utf-8')
-              _val253 = iprot.readI64()
-              _val246[_key252] = _val253
+          (_ktype248, _vtype249, _size247 ) = iprot.readMapBegin()
+          for _i251 in xrange(_size247):
+            _key252 = iprot.readString().decode('utf-8')
+            _val253 = {}
+            (_ktype255, _vtype256, _size254 ) = iprot.readMapBegin()
+            for _i258 in xrange(_size254):
+              _key259 = iprot.readString().decode('utf-8')
+              _val260 = iprot.readI64()
+              _val253[_key259] = _val260
             iprot.readMapEnd()
-            self.emitted[_key245] = _val246
+            self.emitted[_key252] = _val253
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.MAP:
           self.transferred = {}
-          (_ktype255, _vtype256, _size254 ) = iprot.readMapBegin()
-          for _i258 in xrange(_size254):
-            _key259 = iprot.readString().decode('utf-8')
-            _val260 = {}
-            (_ktype262, _vtype263, _size261 ) = iprot.readMapBegin()
-            for _i265 in xrange(_size261):
-              _key266 = iprot.readString().decode('utf-8')
-              _val267 = iprot.readI64()
-              _val260[_key266] = _val267
+          (_ktype262, _vtype263, _size261 ) = iprot.readMapBegin()
+          for _i265 in xrange(_size261):
+            _key266 = iprot.readString().decode('utf-8')
+            _val267 = {}
+            (_ktype269, _vtype270, _size268 ) = iprot.readMapBegin()
+            for _i272 in xrange(_size268):
+              _key273 = iprot.readString().decode('utf-8')
+              _val274 = iprot.readI64()
+              _val267[_key273] = _val274
             iprot.readMapEnd()
-            self.transferred[_key259] = _val260
+            self.transferred[_key266] = _val267
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -3685,24 +3706,24 @@ class ExecutorStats:
     if self.emitted is not None:
       oprot.writeFieldBegin('emitted', TType.MAP, 1)
       oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.emitted))
-      for kiter268,viter269 in self.emitted.items():
-        oprot.writeString(kiter268.encode('utf-8'))
-        oprot.writeMapBegin(TType.STRING, TType.I64, len(viter269))
-        for kiter270,viter271 in viter269.items():
-          oprot.writeString(kiter270.encode('utf-8'))
-          oprot.writeI64(viter271)
+      for kiter275,viter276 in self.emitted.items():
+        oprot.writeString(kiter275.encode('utf-8'))
+        oprot.writeMapBegin(TType.STRING, TType.I64, len(viter276))
+        for kiter277,viter278 in viter276.items():
+          oprot.writeString(kiter277.encode('utf-8'))
+          oprot.writeI64(viter278)
         oprot.writeMapEnd()
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.transferred is not None:
       oprot.writeFieldBegin('transferred', TType.MAP, 2)
       oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.transferred))
-      for kiter272,viter273 in self.transferred.items():
-        oprot.writeString(kiter272.encode('utf-8'))
-        oprot.writeMapBegin(TType.STRING, TType.I64, len(viter273))
-        for kiter274,viter275 in viter273.items():
-          oprot.writeString(kiter274.encode('utf-8'))
-          oprot.writeI64(viter275)
+      for kiter279,viter280 in self.transferred.items():
+        oprot.writeString(kiter279.encode('utf-8'))
+        oprot.writeMapBegin(TType.STRING, TType.I64, len(viter280))
+        for kiter281,viter282 in viter280.items():
+          oprot.writeString(kiter281.encode('utf-8'))
+          oprot.writeI64(viter282)
         oprot.writeMapEnd()
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
@@ -3973,6 +3994,84 @@ class ExecutorSummary:
   def __ne__(self, other):
     return not (self == other)
 
+class DebugOptions:
+  """
+  Attributes:
+   - enable
+   - samplingpct
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.BOOL, 'enable', None, None, ), # 1
+    (2, TType.DOUBLE, 'samplingpct', None, None, ), # 2
+  )
+
+  def __init__(self, enable=None, samplingpct=None,):
+    self.enable = enable
+    self.samplingpct = samplingpct
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.BOOL:
+          self.enable = iprot.readBool()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.DOUBLE:
+          self.samplingpct = iprot.readDouble()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('DebugOptions')
+    if self.enable is not None:
+      oprot.writeFieldBegin('enable', TType.BOOL, 1)
+      oprot.writeBool(self.enable)
+      oprot.writeFieldEnd()
+    if self.samplingpct is not None:
+      oprot.writeFieldBegin('samplingpct', TType.DOUBLE, 2)
+      oprot.writeDouble(self.samplingpct)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.enable)
+    value = (value * 31) ^ hash(self.samplingpct)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
 class TopologyInfo:
   """
   Attributes:
@@ -4569,11 +4668,11 @@ class TopologyInfo:
       elif fid == 4:
         if ftype == TType.LIST:
           self.executors = []
-          (_etype279, _size276) = iprot.readListBegin()
-          for _i280 in xrange(_size276):
-            _elem281 = ExecutorSummary()
-            _elem281.read(iprot)
-            self.executors.append(_elem281)
+          (_etype286, _size283) = iprot.readListBegin()
+          for _i287 in xrange(_size283):
+            _elem288 = ExecutorSummary()
+            _elem288.read(iprot)
+            self.executors.append(_elem288)
           iprot.readListEnd()
         else:
           iprot.skip(ftype)
@@ -4585,29 +4684,29 @@ class TopologyInfo:
       elif fid == 6:
         if ftype == TType.MAP:
           self.errors = {}
-          (_ktype283, _vtype284, _size282 ) = iprot.readMapBegin()
-          for _i286 in xrange(_size282):
-            _key287 = iprot.readString().decode('utf-8')
-            _val288 = []
-            (_etype292, _size289) = iprot.readListBegin()
-            for _i293 in xrange(_size289):
-              _elem294 = ErrorInfo()
-              _elem294.read(iprot)
-              _val288.append(_elem294)
+          (_ktype290, _vtype291, _size289 ) = iprot.readMapBegin()
+          for _i293 in xrange(_size289):
+            _key294 = iprot.readString().decode('utf-8')
+            _val295 = []
+            (_etype299, _size296) = iprot.readListBegin()
+            for _i300 in xrange(_size296):
+              _elem301 = ErrorInfo()
+              _elem301.read(iprot)
+              _val295.append(_elem301)
             iprot.readListEnd()
-            self.errors[_key287] = _val288
+            self.errors[_key294] = _val295
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
       elif fid == 7:
         if ftype == TType.MAP:
           self.component_debug = {}
-          (_ktype296, _vtype297, _size295 ) = iprot.readMapBegin()
-          for _i299 in xrange(_size295):
-            _key300 = iprot.readString().decode('utf-8')
-            _val301 = DebugOptions()
-            _val301.read(iprot)
-            self.component_debug[_key300] = _val301
+          (_ktype303, _vtype304, _size302 ) = iprot.readMapBegin()
+          for _i306 in xrange(_size302):
+            _key307 = iprot.readString().decode('utf-8')
+            _val308 = DebugOptions()
+            _val308.read(iprot)
+            self.component_debug[_key307] = _val308
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -4681,8 +4780,8 @@ class TopologyInfo:
     if self.executors is not None:
       oprot.writeFieldBegin('executors', TType.LIST, 4)
       oprot.writeListBegin(TType.STRUCT, len(self.executors))
-      for iter302 in self.executors:
-        iter302.write(oprot)
+      for iter309 in self.executors:
+        iter309.write(oprot)
       oprot.writeListEnd()
       oprot.writeFieldEnd()
     if self.status is not None:
@@ -4692,20 +4791,20 @@ class TopologyInfo:
     if self.errors is not None:
       oprot.writeFieldBegin('errors', TType.MAP, 6)
       oprot.writeMapBegin(TType.STRING, TType.LIST, len(self.errors))
-      for kiter303,viter304 in self.errors.items():
-        oprot.writeString(kiter303.encode('utf-8'))
-        oprot.writeListBegin(TType.STRUCT, len(viter304))
-        for iter305 in viter304:
-          iter305.write(oprot)
+      for kiter310,viter311 in self.errors.items():
+        oprot.writeString(kiter310.encode('utf-8'))
+        oprot.writeListBegin(TType.STRUCT, len(viter311))
+        for iter312 in viter311:
+          iter312.write(oprot)
         oprot.writeListEnd()
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.component_debug is not None:
       oprot.writeFieldBegin('component_debug', TType.MAP, 7)
       oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.component_debug))
-      for kiter306,viter307 in self.component_debug.items():
-        oprot.writeString(kiter306.encode('utf-8'))
-        viter307.write(oprot)
+      for kiter313,viter314 in self.component_debug.items():
+        oprot.writeString(kiter313.encode('utf-8'))
+        viter314.write(oprot)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.sched_status is not None:
@@ -4794,84 +4893,6 @@ class TopologyInfo:
   def __ne__(self, other):
     return not (self == other)
 
-class DebugOptions:
-  """
-  Attributes:
-   - enable
-   - samplingpct
-  """
-
-  thrift_spec = (
-    None, # 0
-    (1, TType.BOOL, 'enable', None, None, ), # 1
-    (2, TType.DOUBLE, 'samplingpct', None, None, ), # 2
-  )
-
-  def __init__(self, enable=None, samplingpct=None,):
-    self.enable = enable
-    self.samplingpct = samplingpct
-
-  def read(self, iprot):
-    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
-      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
-      return
-    iprot.readStructBegin()
-    while True:
-      (fname, ftype, fid) = iprot.readFieldBegin()
-      if ftype == TType.STOP:
-        break
-      if fid == 1:
-        if ftype == TType.BOOL:
-          self.enable = iprot.readBool()
-        else:
-          iprot.skip(ftype)
-      elif fid == 2:
-        if ftype == TType.DOUBLE:
-          self.samplingpct = iprot.readDouble()
-        else:
-          iprot.skip(ftype)
-      else:
-        iprot.skip(ftype)
-      iprot.readFieldEnd()
-    iprot.readStructEnd()
-
-  def write(self, oprot):
-    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
-      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
-      return
-    oprot.writeStructBegin('DebugOptions')
-    if self.enable is not None:
-      oprot.writeFieldBegin('enable', TType.BOOL, 1)
-      oprot.writeBool(self.enable)
-      oprot.writeFieldEnd()
-    if self.samplingpct is not None:
-      oprot.writeFieldBegin('samplingpct', TType.DOUBLE, 2)
-      oprot.writeDouble(self.samplingpct)
-      oprot.writeFieldEnd()
-    oprot.writeFieldStop()
-    oprot.writeStructEnd()
-
-  def validate(self):
-    return
-
-
-  def __hash__(self):
-    value = 17
-    value = (value * 31) ^ hash(self.enable)
-    value = (value * 31) ^ hash(self.samplingpct)
-    return value
-
-  def __repr__(self):
-    L = ['%s=%r' % (key, value)
-      for key, value in self.__dict__.iteritems()]
-    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
-
-  def __eq__(self, other):
-    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
-
-  def __ne__(self, other):
-    return not (self == other)
-
 class CommonAggregateStats:
   """
   Attributes:
@@ -5396,55 +5417,55 @@ class TopologyStats:
       if fid == 1:
         if ftype == TType.MAP:
           self.window_to_emitted = {}
-          (_ktype309, _vtype310, _size308 ) = iprot.readMapBegin()
-          for _i312 in xrange(_size308):
-            _key313 = iprot.readString().decode('utf-8')
-            _val314 = iprot.readI64()
-            self.window_to_emitted[_key313] = _val314
-          iprot.readMapEnd()
-        else:
-          iprot.skip(ftype)
-      elif fid == 2:
-        if ftype == TType.MAP:
-          self.window_to_transferred = {}
           (_ktype316, _vtype317, _size315 ) = iprot.readMapBegin()
           for _i319 in xrange(_size315):
             _key320 = iprot.readString().decode('utf-8')
             _val321 = iprot.readI64()
-            self.window_to_transferred[_key320] = _val321
+            self.window_to_emitted[_key320] = _val321
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
-      elif fid == 3:
+      elif fid == 2:
         if ftype == TType.MAP:
-          self.window_to_complete_latencies_ms = {}
+          self.window_to_transferred = {}
           (_ktype323, _vtype324, _size322 ) = iprot.readMapBegin()
           for _i326 in xrange(_size322):
             _key327 = iprot.readString().decode('utf-8')
-            _val328 = iprot.readDouble()
-            self.window_to_complete_latencies_ms[_key327] = _val328
+            _val328 = iprot.readI64()
+            self.window_to_transferred[_key327] = _val328
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
-      elif fid == 4:
+      elif fid == 3:
         if ftype == TType.MAP:
-          self.window_to_acked = {}
+          self.window_to_complete_latencies_ms = {}
           (_ktype330, _vtype331, _size329 ) = iprot.readMapBegin()
           for _i333 in xrange(_size329):
             _key334 = iprot.readString().decode('utf-8')
-            _val335 = iprot.readI64()
-            self.window_to_acked[_key334] = _val335
+            _val335 = iprot.readDouble()
+            self.window_to_complete_latencies_ms[_key334] = _val335
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
-      elif fid == 5:
+      elif fid == 4:
         if ftype == TType.MAP:
-          self.window_to_failed = {}
+          self.window_to_acked = {}
           (_ktype337, _vtype338, _size336 ) = iprot.readMapBegin()
           for _i340 in xrange(_size336):
             _key341 = iprot.readString().decode('utf-8')
             _val342 = iprot.readI64()
-            self.window_to_failed[_key341] = _val342
+            self.window_to_acked[_key341] = _val342
+          iprot.readMapEnd()
+        else:
+          iprot.skip(ftype)
+      elif fid == 5:
+        if ftype == TType.MAP:
+          self.window_to_failed = {}
+          (_ktype344, _vtype345, _size343 ) = iprot.readMapBegin()
+          for _i347 in xrange(_size343):
+            _key348 = iprot.readString().decode('utf-8')
+            _val349 = iprot.readI64()
+            self.window_to_failed[_key348] = _val349
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -5461,41 +5482,41 @@ class TopologyStats:
     if self.window_to_emitted is not None:
       oprot.writeFieldBegin('window_to_emitted', TType.MAP, 1)
       oprot.writeMapBegin(TType.STRING, TType.I64, len(self.window_to_emitted))
-      for kiter343,viter344 in self.window_to_emitted.items():
-        oprot.writeString(kiter343.encode('utf-8'))
-        oprot.writeI64(viter344)
+      for kiter350,viter351 in self.window_to_emitted.items():
+        oprot.writeString(kiter350.encode('utf-8'))
+        oprot.writeI64(viter351)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.window_to_transferred is not None:
       oprot.writeFieldBegin('window_to_transferred', TType.MAP, 2)
       oprot.writeMapBegin(TType.STRING, TType.I64, len(self.window_to_transferred))
-      for kiter345,viter346 in self.window_to_transferred.items():
-        oprot.writeString(kiter345.encode('utf-8'))
-        oprot.writeI64(viter346)
+      for kiter352,viter353 in self.window_to_transferred.items():
+        oprot.writeString(kiter352.encode('utf-8'))
+        oprot.writeI64(viter353)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.window_to_complete_latencies_ms is not None:
       oprot.writeFieldBegin('window_to_complete_latencies_ms', TType.MAP, 3)
       oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(self.window_to_complete_latencies_ms))
-      for kiter347,viter348 in self.window_to_complete_latencies_ms.items():
-        oprot.writeString(kiter347.encode('utf-8'))
-        oprot.writeDouble(viter348)
+      for kiter354,viter355 in self.window_to_complete_latencies_ms.items():
+        oprot.writeString(kiter354.encode('utf-8'))
+        oprot.writeDouble(viter355)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.window_to_acked is not None:
       oprot.writeFieldBegin('window_to_acked', TType.MAP, 4)
       oprot.writeMapBegin(TType.STRING, TType.I64, len(self.window_to_acked))
-      for kiter349,viter350 in self.window_to_acked.items():
-        oprot.writeString(kiter349.encode('utf-8'))
-        oprot.writeI64(viter350)
+      for kiter356,viter357 in self.window_to_acked.items():
+        oprot.writeString(kiter356.encode('utf-8'))
+        oprot.writeI64(viter357)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.window_to_failed is not None:
       oprot.writeFieldBegin('window_to_failed', TType.MAP, 5)
       oprot.writeMapBegin(TType.STRING, TType.I64, len(self.window_to_failed))
-      for kiter351,viter352 in self.window_to_failed.items():
-        oprot.writeString(kiter351.encode('utf-8'))
-        oprot.writeI64(viter352)
+      for kiter358,viter359 in self.window_to_failed.items():
+        oprot.writeString(kiter358.encode('utf-8'))
+        oprot.writeI64(viter359)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     oprot.writeFieldStop()
@@ -6156,24 +6177,24 @@ class TopologyPageInfo:
       elif fid == 9:
         if ftype == TType.MAP:
           self.id_to_spout_agg_stats = {}
-          (_ktype354, _vtype355, _size353 ) = iprot.readMapBegin()
-          for _i357 in xrange(_size353):
-            _key358 = iprot.readString().decode('utf-8')
-            _val359 = ComponentAggregateStats()
-            _val359.read(iprot)
-            self.id_to_spout_agg_stats[_key358] = _val359
+          (_ktype361, _vtype362, _size360 ) = iprot.readMapBegin()
+          for _i364 in xrange(_size360):
+            _key365 = iprot.readString().decode('utf-8')
+            _val366 = ComponentAggregateStats()
+            _val366.read(iprot)
+            self.id_to_spout_agg_stats[_key365] = _val366
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
       elif fid == 10:
         if ftype == TType.MAP:
           self.id_to_bolt_agg_stats = {}
-          (_ktype361, _vtype362, _size360 ) = iprot.readMapBegin()
-          for _i364 in xrange(_size360):
-            _key365 = iprot.readString().decode('utf-8')
-            _val366 = ComponentAggregateStats()
-            _val366.read(iprot)
-            self.id_to_bolt_agg_stats[_key365] = _val366
+          (_ktype368, _vtype369, _size367 ) = iprot.readMapBegin()
+          for _i371 in xrange(_size367):
+            _key372 = iprot.readString().decode('utf-8')
+            _val373 = ComponentAggregateStats()
+            _val373.read(iprot)
+            self.id_to_bolt_agg_stats[_key372] = _val373
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -6279,17 +6300,17 @@ class TopologyPageInfo:
     if self.id_to_spout_agg_stats is not None:
       oprot.writeFieldBegin('id_to_spout_agg_stats', TType.MAP, 9)
       oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.id_to_spout_agg_stats))
-      for kiter367,viter368 in self.id_to_spout_agg_stats.items():
-        oprot.writeString(kiter367.encode('utf-8'))
-        viter368.write(oprot)
+      for kiter374,viter375 in self.id_to_spout_agg_stats.items():
+        oprot.writeString(kiter374.encode('utf-8'))
+        viter375.write(oprot)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.id_to_bolt_agg_stats is not None:
       oprot.writeFieldBegin('id_to_bolt_agg_stats', TType.MAP, 10)
       oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.id_to_bolt_agg_stats))
-      for kiter369,viter370 in self.id_to_bolt_agg_stats.items():
-        oprot.writeString(kiter369.encode('utf-8'))
-        viter370.write(oprot)
+      for kiter376,viter377 in self.id_to_bolt_agg_stats.items():
+        oprot.writeString(kiter376.encode('utf-8'))
+        viter377.write(oprot)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.sched_status is not None:
@@ -6559,59 +6580,59 @@ class ComponentPageInfo:
       elif fid == 7:
         if ftype == TType.MAP:
           self.window_to_stats = {}
-          (_ktype372, _vtype373, _size371 ) = iprot.readMapBegin()
-          for _i375 in xrange(_size371):
-            _key376 = iprot.readString().decode('utf-8')
-            _val377 = ComponentAggregateStats()
-            _val377.read(iprot)
-            self.window_to_stats[_key376] = _val377
-          iprot.readMapEnd()
-        else:
-          iprot.skip(ftype)
-      elif fid == 8:
-        if ftype == TType.MAP:
-          self.gsid_to_input_stats = {}
           (_ktype379, _vtype380, _size378 ) = iprot.readMapBegin()
           for _i382 in xrange(_size378):
-            _key383 = GlobalStreamId()
-            _key383.read(iprot)
+            _key383 = iprot.readString().decode('utf-8')
             _val384 = ComponentAggregateStats()
             _val384.read(iprot)
-            self.gsid_to_input_stats[_key383] = _val384
+            self.window_to_stats[_key383] = _val384
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
-      elif fid == 9:
+      elif fid == 8:
         if ftype == TType.MAP:
-          self.sid_to_output_stats = {}
+          self.gsid_to_input_stats = {}
           (_ktype386, _vtype387, _size385 ) = iprot.readMapBegin()
           for _i389 in xrange(_size385):
-            _key390 = iprot.readString().decode('utf-8')
+            _key390 = GlobalStreamId()
+            _key390.read(iprot)
             _val391 = ComponentAggregateStats()
             _val391.read(iprot)
-            self.sid_to_output_stats[_key390] = _val391
+            self.gsid_to_input_stats[_key390] = _val391
+          iprot.readMapEnd()
+        else:
+          iprot.skip(ftype)
+      elif fid == 9:
+        if ftype == TType.MAP:
+          self.sid_to_output_stats = {}
+          (_ktype393, _vtype394, _size392 ) = iprot.readMapBegin()
+          for _i396 in xrange(_size392):
+            _key397 = iprot.readString().decode('utf-8')
+            _val398 = ComponentAggregateStats()
+            _val398.read(iprot)
+            self.sid_to_output_stats[_key397] = _val398
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
       elif fid == 10:
         if ftype == TType.LIST:
           self.exec_stats = []
-          (_etype395, _size392) = iprot.readListBegin()
-          for _i396 in xrange(_size392):
-            _elem397 = ExecutorAggregateStats()
-            _elem397.read(iprot)
-            self.exec_stats.append(_elem397)
+          (_etype402, _size399) = iprot.readListBegin()
+          for _i403 in xrange(_size399):
+            _elem404 = ExecutorAggregateStats()
+            _elem404.read(iprot)
+            self.exec_stats.append(_elem404)
           iprot.readListEnd()
         else:
           iprot.skip(ftype)
       elif fid == 11:
         if ftype == TType.LIST:
           self.errors = []
-          (_etype401, _size398) = iprot.readListBegin()
-          for _i402 in xrange(_size398):
-            _elem403 = ErrorInfo()
-            _elem403.read(iprot)
-            self.errors.append(_elem403)
+          (_etype408, _size405) = iprot.readListBegin()
+          for _i409 in xrange(_size405):
+            _elem410 = ErrorInfo()
+            _elem410.read(iprot)
+            self.errors.append(_elem410)
           iprot.readListEnd()
         else:
           iprot.skip(ftype)
@@ -6673,39 +6694,39 @@ class ComponentPageInfo:
     if self.window_to_stats is not None:
       oprot.writeFieldBegin('window_to_stats', TType.MAP, 7)
       oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.window_to_stats))
-      for kiter404,viter405 in self.window_to_stats.items():
-        oprot.writeString(kiter404.encode('utf-8'))
-        viter405.write(oprot)
+      for kiter411,viter412 in self.window_to_stats.items():
+        oprot.writeString(kiter411.encode('utf-8'))
+        viter412.write(oprot)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.gsid_to_input_stats is not None:
       oprot.writeFieldBegin('gsid_to_input_stats', TType.MAP, 8)
       oprot.writeMapBegin(TType.STRUCT, TType.STRUCT, len(self.gsid_to_input_stats))
-      for kiter406,viter407 in self.gsid_to_input_stats.items():
-        kiter406.write(oprot)
-        viter407.write(oprot)
+      for kiter413,viter414 in self.gsid_to_input_stats.items():
+        kiter413.write(oprot)
+        viter414.write(oprot)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.sid_to_output_stats is not None:
       oprot.writeFieldBegin('sid_to_output_stats', TType.MAP, 9)
       oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.sid_to_output_stats))
-      for kiter408,viter409 in self.sid_to_output_stats.items():
-        oprot.writeString(kiter408.encode('utf-8'))
-        viter409.write(oprot)
+      for kiter415,viter416 in self.sid_to_output_stats.items():
+        oprot.writeString(kiter415.encode('utf-8'))
+        viter416.write(oprot)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.exec_stats is not None:
       oprot.writeFieldBegin('exec_stats', TType.LIST, 10)
       oprot.writeListBegin(TType.STRUCT, len(self.exec_stats))
-      for iter410 in self.exec_stats:
-        iter410.write(oprot)
+      for iter417 in self.exec_stats:
+        iter417.write(oprot)
       oprot.writeListEnd()
       oprot.writeFieldEnd()
     if self.errors is not None:
       oprot.writeFieldBegin('errors', TType.LIST, 11)
       oprot.writeListBegin(TType.STRUCT, len(self.errors))
-      for iter411 in self.errors:
-        iter411.write(oprot)
+      for iter418 in self.errors:
+        iter418.write(oprot)
       oprot.writeListEnd()
       oprot.writeFieldEnd()
     if self.eventlog_host is not None:
@@ -6872,11 +6893,11 @@ class RebalanceOptions:
       elif fid == 3:
         if ftype == TType.MAP:
           self.num_executors = {}
-          (_ktype413, _vtype414, _size412 ) = iprot.readMapBegin()
-          for _i416 in xrange(_size412):
-            _key417 = iprot.readString().decode('utf-8')
-            _val418 = iprot.readI32()
-            self.num_executors[_key417] = _val418
+          (_ktype420, _vtype421, _size419 ) = iprot.readMapBegin()
+          for _i423 in xrange(_size419):
+            _key424 = iprot.readString().decode('utf-8')
+            _val425 = iprot.readI32()
+            self.num_executors[_key424] = _val425
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -6901,9 +6922,9 @@ class RebalanceOptions:
     if self.num_executors is not None:
       oprot.writeFieldBegin('num_executors', TType.MAP, 3)
       oprot.writeMapBegin(TType.STRING, TType.I32, len(self.num_executors))
-      for kiter419,viter420 in self.num_executors.items():
-        oprot.writeString(kiter419.encode('utf-8'))
-        oprot.writeI32(viter420)
+      for kiter426,viter427 in self.num_executors.items():
+        oprot.writeString(kiter426.encode('utf-8'))
+        oprot.writeI32(viter427)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     oprot.writeFieldStop()
@@ -6957,11 +6978,11 @@ class Credentials:
       if fid == 1:
         if ftype == TType.MAP:
           self.creds = {}
-          (_ktype422, _vtype423, _size421 ) = iprot.readMapBegin()
-          for _i425 in xrange(_size421):
-            _key426 = iprot.readString().decode('utf-8')
-            _val427 = iprot.readString().decode('utf-8')
-            self.creds[_key426] = _val427
+          (_ktype429, _vtype430, _size428 ) = iprot.readMapBegin()
+          for _i432 in xrange(_size428):
+            _key433 = iprot.readString().decode('utf-8')
+            _val434 = iprot.readString().decode('utf-8')
+            self.creds[_key433] = _val434
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -6978,9 +6999,9 @@ class Credentials:
     if self.creds is not None:
       oprot.writeFieldBegin('creds', TType.MAP, 1)
       oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.creds))
-      for kiter428,viter429 in self.creds.items():
-        oprot.writeString(kiter428.encode('utf-8'))
-        oprot.writeString(viter429.encode('utf-8'))
+      for kiter435,viter436 in self.creds.items():
+        oprot.writeString(kiter435.encode('utf-8'))
+        oprot.writeString(viter436.encode('utf-8'))
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     oprot.writeFieldStop()
@@ -7154,31 +7175,31 @@ class SupervisorInfo:
       elif fid == 4:
         if ftype == TType.LIST:
           self.used_ports = []
-          (_etype433, _size430) = iprot.readListBegin()
-          for _i434 in xrange(_size430):
-            _elem435 = iprot.readI64()
-            self.used_ports.append(_elem435)
+          (_etype440, _size437) = iprot.readListBegin()
+          for _i441 in xrange(_size437):
+            _elem442 = iprot.readI64()
+            self.used_ports.append(_elem442)
           iprot.readListEnd()
         else:
           iprot.skip(ftype)
       elif fid == 5:
         if ftype == TType.LIST:
           self.meta = []
-          (_etype439, _size436) = iprot.readListBegin()
-          for _i440 in xrange(_size436):
-            _elem441 = iprot.readI64()
-            self.meta.append(_elem441)
+          (_etype446, _size443) = iprot.readListBegin()
+          for _i447 in xrange(_size443):
+            _elem448 = iprot.readI64()
+            self.meta.append(_elem448)
           iprot.readListEnd()
         else:
           iprot.skip(ftype)
       elif fid == 6:
         if ftype == TType.MAP:
           self.scheduler_meta = {}
-          (_ktype443, _vtype444, _size442 ) = iprot.readMapBegin()
-          for _i446 in xrange(_size442):
-            _key447 = iprot.readString().decode('utf-8')
-            _val448 = iprot.readString().decode('utf-8')
-            self.scheduler_meta[_key447] = _val448
+          (_ktype450, _vtype451, _size449 ) = iprot.readMapBegin()
+          for _i453 in xrange(_size449):
+            _key454 = iprot.readString().decode('utf-8')
+            _val455 = iprot.readString().decode('utf-8')
+            self.scheduler_meta[_key454] = _val455
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -7195,11 +7216,11 @@ class SupervisorInfo:
       elif fid == 9:
         if ftype == TType.MAP:
           self.resources_map = {}
-          (_ktype450, _vtype451, _size449 ) = iprot.readMapBegin()
-          for _i453 in xrange(_size449):
-            _key454 = iprot.readString().decode('utf-8')
-            _val455 = iprot.readDouble()
-            self.resources_map[_key454] = _val455
+          (_ktype457, _vtype458, _size456 ) = iprot.readMapBegin()
+          for _i460 in xrange(_size456):
+            _key461 = iprot.readString().decode('utf-8')
+            _val462 = iprot.readDouble()
+            self.resources_map[_key461] = _val462
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -7228,23 +7249,23 @@ class SupervisorInfo:
     if self.used_ports is not None:
       oprot.writeFieldBegin('used_ports', TType.LIST, 4)
       oprot.writeListBegin(TType.I64, len(self.used_ports))
-      for iter456 in self.used_ports:
-        oprot.writeI64(iter456)
+      for iter463 in self.used_ports:
+        oprot.writeI64(iter463)
       oprot.writeListEnd()
       oprot.writeFieldEnd()
     if self.meta is not None:
       oprot.writeFieldBegin('meta', TType.LIST, 5)
       oprot.writeListBegin(TType.I64, len(self.meta))
-      for iter457 in self.meta:
-        oprot.writeI64(iter457)
+      for iter464 in self.meta:
+        oprot.writeI64(iter464)
       oprot.writeListEnd()
       oprot.writeFieldEnd()
     if self.scheduler_meta is not None:
       oprot.writeFieldBegin('scheduler_meta', TType.MAP, 6)
       oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.scheduler_meta))
-      for kiter458,viter459 in self.scheduler_meta.items():
-        oprot.writeString(kiter458.encode('utf-8'))
-        oprot.writeString(viter459.encode('utf-8'))
+      for kiter465,viter466 in self.scheduler_meta.items():
+        oprot.writeString(kiter465.encode('utf-8'))
+        oprot.writeString(viter466.encode('utf-8'))
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.uptime_secs is not None:
@@ -7258,9 +7279,9 @@ class SupervisorInfo:
     if self.resources_map is not None:
       oprot.writeFieldBegin('resources_map', TType.MAP, 9)
       oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(self.resources_map))
-      for kiter460,viter461 in self.resources_map.items():
-        oprot.writeString(kiter460.encode('utf-8'))
-        oprot.writeDouble(viter461)
+      for kiter467,viter468 in self.resources_map.items():
+        oprot.writeString(kiter467.encode('utf-8'))
+        oprot.writeDouble(viter468)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     oprot.writeFieldStop()
@@ -7332,10 +7353,10 @@ class NodeInfo:
       elif fid == 2:
         if ftype == TType.SET:
           self.port = set()
-          (_etype465, _size462) = iprot.readSetBegin()
-          for _i466 in xrange(_size462):
-            _elem467 = iprot.readI64()
-            self.port.add(_elem467)
+          (_etype472, _size469) = iprot.readSetBegin()
+          for _i473 in xrange(_size469):
+            _elem474 = iprot.readI64()
+            self.port.add(_elem474)
           iprot.readSetEnd()
         else:
           iprot.skip(ftype)
@@ -7356,8 +7377,8 @@ class NodeInfo:
     if self.port is not None:
       oprot.writeFieldBegin('port', TType.SET, 2)
       oprot.writeSetBegin(TType.I64, len(self.port))
-      for iter468 in self.port:
-        oprot.writeI64(iter468)
+      for iter475 in self.port:
+        oprot.writeI64(iter475)
       oprot.writeSetEnd()
       oprot.writeFieldEnd()
     oprot.writeFieldStop()
@@ -7538,57 +7559,57 @@ class Assignment:
       elif fid == 2:
         if ftype == TType.MAP:
           self.node_host = {}
-          (_ktype470, _vtype471, _size469 ) = iprot.readMapBegin()
-          for _i473 in xrange(_size469):
-            _key474 = iprot.readString().decode('utf-8')
-            _val475 = iprot.readString().decode('utf-8')
-            self.node_host[_key474] = _val475
+          (_ktype477, _vtype478, _size476 ) = iprot.readMapBegin()
+          for _i480 in xrange(_size476):
+            _key481 = iprot.readString().decode('utf-8')
+            _val482 = iprot.readString().decode('utf-8')
+            self.node_host[_key481] = _val482
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.MAP:
           self.executor_node_port = {}
-          (_ktype477, _vtype478, _size476 ) = iprot.readMapBegin()
-          for _i480 in xrange(_size476):
-            _key481 = []
-            (_etype486, _size483) = iprot.readListBegin()
-            for _i487 in xrange(_size483):
-              _elem488 = iprot.readI64()
-              _key481.append(_elem488)
+          (_ktype484, _vtype485, _size483 ) = iprot.readMapBegin()
+          for _i487 in xrange(_size483):
+            _key488 = []
+            (_etype493, _size490) = iprot.readListBegin()
+            for _i494 in xrange(_size490):
+              _elem495 = iprot.readI64()
+              _key488.append(_elem495)
             iprot.readListEnd()
-            _val482 = NodeInfo()
-            _val482.read(iprot)
-            self.executor_node_port[_key481] = _val482
+            _val489 = NodeInfo()
+            _val489.read(iprot)
+            self.executor_node_port[_key488] = _val489
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.MAP:
           self.executor_start_time_secs = {}
-          (_ktype490, _vtype491, _size489 ) = iprot.readMapBegin()
-          for _i493 in xrange(_size489):
-            _key494 = []
-            (_etype499, _size496) = iprot.readListBegin()
-            for _i500 in xrange(_size496):
-              _elem501 = iprot.readI64()
-              _key494.append(_elem501)
+          (_ktype497, _vtype498, _size496 ) = iprot.readMapBegin()
+          for _i500 in xrange(_size496):
+            _key501 = []
+            (_etype506, _size503) = iprot.readListBegin()
+            for _i507 in xrange(_size503):
+              _elem508 = iprot.readI64()
+              _key501.append(_elem508)
             iprot.readListEnd()
-            _val495 = iprot.readI64()
-            self.executor_start_time_secs[_key494] = _val495
+            _val502 = iprot.readI64()
+            self.executor_start_time_secs[_key501] = _val502
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
       elif fid == 5:
         if ftype == TType.MAP:
           self.worker_resources = {}
-          (_ktype503, _vtype504, _size502 ) = iprot.readMapBegin()
-          for _i506 in xrange(_size502):
-            _key507 = NodeInfo()
-            _key507.read(iprot)
-            _val508 = WorkerResources()
-            _val508.read(iprot)
-            self.worker_resources[_key507] = _val508
+          (_ktype510, _vtype511, _size509 ) = iprot.readMapBegin()
+          for _i513 in xrange(_size509):
+            _key514 = NodeInfo()
+            _key514.read(iprot)
+            _val515 = WorkerResources()
+            _val515.read(iprot)
+            self.worker_resources[_key514] = _val515
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -7609,39 +7630,39 @@ class Assignment:
     if self.node_host is not None:
       oprot.writeFieldBegin('node_host', TType.MAP, 2)
       oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.node_host))
-      for kiter509,viter510 in self.node_host.items():
-        oprot.writeString(kiter509.encode('utf-8'))
-        oprot.writeString(viter510.encode('utf-8'))
+      for kiter516,viter517 in self.node_host.items():
+        oprot.writeString(kiter516.encode('utf-8'))
+        oprot.writeString(viter517.encode('utf-8'))
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.executor_node_port is not None:
       oprot.writeFieldBegin('executor_node_port', TType.MAP, 3)
       oprot.writeMapBegin(TType.LIST, TType.STRUCT, len(self.executor_node_port))
-      for kiter511,viter512 in self.executor_node_port.items():
-        oprot.writeListBegin(TType.I64, len(kiter511))
-        for iter513 in kiter511:
-          oprot.writeI64(iter513)
+      for kiter518,viter519 in self.executor_node_port.items():
+        oprot.writeListBegin(TType.I64, len(kiter518))
+        for iter520 in kiter518:
+          oprot.writeI64(iter520)
         oprot.writeListEnd()
-        viter512.write(oprot)
+        viter519.write(oprot)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.executor_start_time_secs is not None:
       oprot.writeFieldBegin('executor_start_time_secs', TType.MAP, 4)
       oprot.writeMapBegin(TType.LIST, TType.I64, len(self.executor_start_time_secs))
-      for kiter514,viter515 in self.executor_start_time_secs.items():
-        oprot.writeListBegin(TType.I64, len(kiter514))
-        for iter516 in kiter514:
-          oprot.writeI64(iter516)
+      for kiter521,viter522 in self.executor_start_time_secs.items():
+        oprot.writeListBegin(TType.I64, len(kiter521))
+        for iter523 in kiter521:
+          oprot.writeI64(iter523)
         oprot.writeListEnd()
-        oprot.writeI64(viter515)
+        oprot.writeI64(viter522)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.worker_resources is not None:
       oprot.writeFieldBegin('worker_resources', TType.MAP, 5)
       oprot.writeMapBegin(TType.STRUCT, TType.STRUCT, len(self.worker_resources))
-      for kiter517,viter518 in self.worker_resources.items():
-        kiter517.write(oprot)
-        viter518.write(oprot)
+      for kiter524,viter525 in self.worker_resources.items():
+        kiter524.write(oprot)
+        viter525.write(oprot)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     oprot.writeFieldStop()
@@ -7818,11 +7839,11 @@ class StormBase:
       elif fid == 4:
         if ftype == TType.MAP:
           self.component_executors = {}
-          (_ktype520, _vtype521, _size519 ) = iprot.readMapBegin()
-          for _i523 in xrange(_size519):
-            _key524 = iprot.readString().decode('utf-8')
-            _val525 = iprot.readI32()
-            self.component_executors[_key524] = _val525
+          (_ktype527, _vtype528, _size526 ) = iprot.readMapBegin()
+          for _i530 in xrange(_size526):
+            _key531 = iprot.readString().decode('utf-8')
+            _val532 = iprot.readI32()
+            self.component_executors[_key531] = _val532
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -7850,12 +7871,12 @@ class StormBase:
       elif fid == 9:
         if ftype == TType.MAP:
           self.component_debug = {}
-          (_ktype527, _vtype528, _size526 ) = iprot.readMapBegin()
-          for _i530 in xrange(_size526):
-            _key531 = iprot.readString().decode('utf-8')
-            _val532 = DebugOptions()
-            _val532.read(iprot)
-            self.component_debug[_key531] = _val532
+          (_ktype534, _vtype535, _size533 ) = iprot.readMapBegin()
+          for _i537 in xrange(_size533):
+            _key538 = iprot.readString().decode('utf-8')
+            _val539 = DebugOptions()
+            _val539.read(iprot)
+            self.component_debug[_key538] = _val539
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -7884,9 +7905,9 @@ class StormBase:
     if self.component_executors is not None:
       oprot.writeFieldBegin('component_executors', TType.MAP, 4)
       oprot.writeMapBegin(TType.STRING, TType.I32, len(self.component_executors))
-      for kiter533,viter534 in self.component_executors.items():
-        oprot.writeString(kiter533.encode('utf-8'))
-        oprot.writeI32(viter534)
+      for kiter540,viter541 in self.component_executors.items():
+        oprot.writeString(kiter540.encode('utf-8'))
+        oprot.writeI32(viter541)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.launch_time_secs is not None:
@@ -7908,9 +7929,9 @@ class StormBase:
     if self.component_debug is not None:
       oprot.writeFieldBegin('component_debug', TType.MAP, 9)
       oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.component_debug))
-      for kiter535,viter536 in self.component_debug.items():
-        oprot.writeString(kiter535.encode('utf-8'))
-        viter536.write(oprot)
+      for kiter542,viter543 in self.component_debug.items():
+        oprot.writeString(kiter542.encode('utf-8'))
+        viter543.write(oprot)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     oprot.writeFieldStop()
@@ -7990,13 +8011,13 @@ class ClusterWorkerHeartbeat:
       elif fid == 2:
         if ftype == TType.MAP:
           self.executor_stats = {}
-          (_ktype538, _vtype539, _size537 ) = iprot.readMapBegin()
-          for _i541 in xrange(_size537):
-            _key542 = ExecutorInfo()
-            _key542.read(iprot)
-            _val543 = ExecutorStats()
-            _val543.read(iprot)
-            self.executor_stats[_key542] = _val543
+          (_ktype545, _vtype546, _size544 ) = iprot.readMapBegin()
+          for _i548 in xrange(_size544):
+            _key549 = ExecutorInfo()
+            _key549.read(iprot)
+            _val550 = ExecutorStats()
+            _val550.read(iprot)
+            self.executor_stats[_key549] = _val550
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -8027,9 +8048,9 @@ class ClusterWorkerHeartbeat:
     if self.executor_stats is not None:
       oprot.writeFieldBegin('executor_stats', TType.MAP, 2)
       oprot.writeMapBegin(TType.STRUCT, TType.STRUCT, len(self.executor_stats))
-      for kiter544,viter545 in self.executor_stats.items():
-        kiter544.write(oprot)
-        viter545.write(oprot)
+      for kiter551,viter552 in self.executor_stats.items():
+        kiter551.write(oprot)
+        viter552.write(oprot)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     if self.time_secs is not None:
@@ -8182,12 +8203,12 @@ class LocalStateData:
       if fid == 1:
         if ftype == TType.MAP:
           self.serialized_parts = {}
-          (_ktype547, _vtype548, _size546 ) = iprot.readMapBegin()
-          for _i550 in xrange(_size546):
-            _key551 = iprot.readString().decode('utf-8')
-            _val552 = ThriftSerializedObject()
-            _val552.read(iprot)
-            self.serialized_parts[_key551] = _val552
+          (_ktype554, _vtype555, _size553 ) = iprot.readMapBegin()
+          for _i557 in xrange(_size553):
+            _key558 = iprot.readString().decode('utf-8')
+            _val559 = ThriftSerializedObject()
+            _val559.read(iprot)
+            self.serialized_parts[_key558] = _val559
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -8204,9 +8225,9 @@ class LocalStateData:
     if self.serialized_parts is not None:
       oprot.writeFieldBegin('serialized_parts', TType.MAP, 1)
       oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.serialized_parts))
-      for kiter553,viter554 in self.serialized_parts.items():
-        oprot.writeString(kiter553.encode('utf-8'))
-        viter554.write(oprot)
+      for kiter560,viter561 in self.serialized_parts.items():
+        oprot.writeString(kiter560.encode('utf-8'))
+        viter561.write(oprot)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     oprot.writeFieldStop()
@@ -8271,11 +8292,11 @@ class LocalAssignment:
       elif fid == 2:
         if ftype == TType.LIST:
           self.executors = []
-          (_etype558, _size555) = iprot.readListBegin()
-          for _i559 in xrange(_size555):
-            _elem560 = ExecutorInfo()
-            _elem560.read(iprot)
-            self.executors.append(_elem560)
+          (_etype565, _size562) = iprot.readListBegin()
+          for _i566 in xrange(_size562):
+            _elem567 = ExecutorInfo()
+            _elem567.read(iprot)
+            self.executors.append(_elem567)
           iprot.readListEnd()
         else:
           iprot.skip(ftype)
@@ -8302,8 +8323,8 @@ class LocalAssignment:
     if self.executors is not None:
       oprot.writeFieldBegin('executors', TType.LIST, 2)
       oprot.writeListBegin(TType.STRUCT, len(self.executors))
-      for iter561 in self.executors:
-        iter561.write(oprot)
+      for iter568 in self.executors:
+        iter568.write(oprot)
       oprot.writeListEnd()
       oprot.writeFieldEnd()
     if self.resources is not None:
@@ -8432,11 +8453,11 @@ class LSApprovedWorkers:
       if fid == 1:
         if ftype == TType.MAP:
           self.approved_workers = {}
-          (_ktype563, _vtype564, _size562 ) = iprot.readMapBegin()
-          for _i566 in xrange(_size562):
-            _key567 = iprot.readString().decode('utf-8')
-            _val568 = iprot.readI32()
-            self.approved_workers[_key567] = _val568
+          (_ktype570, _vtype571, _size569 ) = iprot.readMapBegin()
+          for _i573 in xrange(_size569):
+            _key574 = iprot.readString().decode('utf-8')
+            _val575 = iprot.readI32()
+            self.approved_workers[_key574] = _val575
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -8453,9 +8474,9 @@ class LSApprovedWorkers:
     if self.approved_workers is not None:
       oprot.writeFieldBegin('approved_workers', TType.MAP, 1)
       oprot.writeMapBegin(TType.STRING, TType.I32, len(self.approved_workers))
-      for kiter569,viter570 in self.approved_workers.items():
-        oprot.writeString(kiter569.encode('utf-8'))
-        oprot.writeI32(viter570)
+      for kiter576,viter577 in self.approved_workers.items():
+        oprot.writeString(kiter576.encode('utf-8'))
+        oprot.writeI32(viter577)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     oprot.writeFieldStop()
@@ -8509,12 +8530,12 @@ class LSSupervisorAssignments:
       if fid == 1:
         if ftype == TType.MAP:
           self.assignments = {}
-          (_ktype572, _vtype573, _size571 ) = iprot.readMapBegin()
-          for _i575 in xrange(_size571):
-            _key576 = iprot.readI32()
-            _val577 = LocalAssignment()
-            _val577.read(iprot)
-            self.assignments[_key576] = _val577
+          (_ktype579, _vtype580, _size578 ) = iprot.readMapBegin()
+          for _i582 in xrange(_size578):
+            _key583 = iprot.readI32()
+            _val584 = LocalAssignment()
+            _val584.read(iprot)
+            self.assignments[_key583] = _val584
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -8531,9 +8552,9 @@ class LSSupervisorAssignments:
     if self.assignments is not None:
       oprot.writeFieldBegin('assignments', TType.MAP, 1)
       oprot.writeMapBegin(TType.I32, TType.STRUCT, len(self.assignments))
-      for kiter578,viter579 in self.assignments.items():
-        oprot.writeI32(kiter578)
-        viter579.write(oprot)
+      for kiter585,viter586 in self.assignments.items():
+        oprot.writeI32(kiter585)
+        viter586.write(oprot)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     oprot.writeFieldStop()
@@ -8606,11 +8627,11 @@ class LSWorkerHeartbeat:
       elif fid == 3:
         if ftype == TType.LIST:
           self.executors = []
-          (_etype583, _size580) = iprot.readListBegin()
-          for _i584 in xrange(_size580):
-            _elem585 = ExecutorInfo()
-            _elem585.read(iprot)
-            self.executors.append(_elem585)
+          (_etype590, _size587) = iprot.readListBegin()
+          for _i591 in xrange(_size587):
+            _elem592 = ExecutorInfo()
+            _elem592.read(iprot)
+            self.executors.append(_elem592)
           iprot.readListEnd()
         else:
           iprot.skip(ftype)
@@ -8640,8 +8661,8 @@ class LSWorkerHeartbeat:
     if self.executors is not None:
       oprot.writeFieldBegin('executors', TType.LIST, 3)
       oprot.writeListBegin(TType.STRUCT, len(self.executors))
-      for iter586 in self.executors:
-        iter586.write(oprot)
+      for iter593 in self.executors:
+        iter593.write(oprot)
       oprot.writeListEnd()
       oprot.writeFieldEnd()
     if self.port is not None:
@@ -8727,20 +8748,20 @@ class LSTopoHistory:
       elif fid == 3:
         if ftype == TType.LIST:
           self.users = []
-          (_etype590, _size587) = iprot.readListBegin()
-          for _i591 in xrange(_size587):
-            _elem592 = iprot.readString().decode('utf-8')
-            self.users.append(_elem592)
+          (_etype597, _size594) = iprot.readListBegin()
+          for _i598 in xrange(_size594):
+            _elem599 = iprot.readString().decode('utf-8')
+            self.users.append(_elem599)
           iprot.readListEnd()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.LIST:
           self.groups = []
-          (_etype596, _size593) = iprot.readListBegin()
-          for _i597 in xrange(_size593):
-            _elem598 = iprot.readString().decode('utf-8')
-            self.groups.append(_elem598)
+          (_etype603, _size600) = iprot.readListBegin()
+          for _i604 in xrange(_size600):
+            _elem605 = iprot.readString().decode('utf-8')
+            self.groups.append(_elem605)
           iprot.readListEnd()
         else:
           iprot.skip(ftype)
@@ -8765,15 +8786,15 @@ class LSTopoHistory:
     if self.users is not None:
       oprot.writeFieldBegin('users', TType.LIST, 3)
       oprot.writeListBegin(TType.STRING, len(self.users))
-      for iter599 in self.users:
-        oprot.writeString(iter599.encode('utf-8'))
+      for iter606 in self.users:
+        oprot.writeString(iter606.encode('utf-8'))
       oprot.writeListEnd()
       oprot.writeFieldEnd()
     if self.groups is not None:
       oprot.writeFieldBegin('groups', TType.LIST, 4)
       oprot.writeListBegin(TType.STRING, len(self.groups))
-      for iter600 in self.groups:
-        oprot.writeString(iter600.encode('utf-8'))
+      for iter607 in self.groups:
+        oprot.writeString(iter607.encode('utf-8'))
       oprot.writeListEnd()
       oprot.writeFieldEnd()
     oprot.writeFieldStop()
@@ -8836,11 +8857,11 @@ class LSTopoHistoryList:
       if fid == 1:
         if ftype == TType.LIST:
           self.topo_history = []
-          (_etype604, _size601) = iprot.readListBegin()
-          for _i605 in xrange(_size601):
-            _elem606 = LSTopoHistory()
-            _elem606.read(iprot)
-            self.topo_history.append(_elem606)
+          (_etype611, _size608) = iprot.readListBegin()
+          for _i612 in xrange(_size608):
+            _elem613 = LSTopoHistory()
+            _elem613.read(iprot)
+            self.topo_history.append(_elem613)
           iprot.readListEnd()
         else:
           iprot.skip(ftype)
@@ -8857,8 +8878,8 @@ class LSTopoHistoryList:
     if self.topo_history is not None:
       oprot.writeFieldBegin('topo_history', TType.LIST, 1)
       oprot.writeListBegin(TType.STRUCT, len(self.topo_history))
-      for iter607 in self.topo_history:
-        iter607.write(oprot)
+      for iter614 in self.topo_history:
+        iter614.write(oprot)
       oprot.writeListEnd()
       oprot.writeFieldEnd()
     oprot.writeFieldStop()
@@ -9193,12 +9214,12 @@ class LogConfig:
       if fid == 2:
         if ftype == TType.MAP:
           self.named_logger_level = {}
-          (_ktype609, _vtype610, _size608 ) = iprot.readMapBegin()
-          for _i612 in xrange(_size608):
-            _key613 = iprot.readString().decode('utf-8')
-            _val614 = LogLevel()
-            _val614.read(iprot)
-            self.named_logger_level[_key613] = _val614
+          (_ktype616, _vtype617, _size615 ) = iprot.readMapBegin()
+          for _i619 in xrange(_size615):
+            _key620 = iprot.readString().decode('utf-8')
+            _val621 = LogLevel()
+            _val621.read(iprot)
+            self.named_logger_level[_key620] = _val621
           iprot.readMapEnd()
         else:
           iprot.skip(ftype)
@@ -9215,9 +9236,9 @@ class LogConfig:
     if self.named_logger_level is not None:
       oprot.writeFieldBegin('named_logger_level', TType.MAP, 2)
       oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.named_logger_level))
-      for kiter615,viter616 in self.named_logger_level.items():
-        oprot.writeString(kiter615.encode('utf-8'))
-        viter616.write(oprot)
+      for kiter622,viter623 in self.named_logger_level.items():
+        oprot.writeString(kiter622.encode('utf-8'))
+        viter623.write(oprot)
       oprot.writeMapEnd()
       oprot.writeFieldEnd()
     oprot.writeFieldStop()
@@ -9269,10 +9290,10 @@ class TopologyHistoryInfo:
       if fid == 1:
         if ftype == TType.LIST:
           self.topo_ids = []
-          (_etype620, _size617) = iprot.readListBegin()
-          for _i621 in xrange(_size617):
-            _elem622 = iprot.readString().decode('utf-8')
-            self.topo_ids.append(_elem622)
+          (_etype627, _size624) = iprot.readListBegin()
+          for _i628 in xrange(_size624):
+            _elem629 = iprot.readString().decode('utf-8')
+            self.topo_ids.append(_elem629)
           iprot.readListEnd()
         else:
           iprot.skip(ftype)
@@ -9289,8 +9310,8 @@ class TopologyHistoryInfo:
     if self.topo_ids is not None:
       oprot.writeFieldBegin('topo_ids', TType.LIST, 1)
       oprot.writeListBegin(TType.STRING, len(self.topo_ids))
-      for iter623 in self.topo_ids:
-        oprot.writeString(iter623.encode('utf-8'))
+      for iter630 in self.topo_ids:
+        oprot.writeString(iter630.encode('utf-8'))
       oprot.writeListEnd()
       oprot.writeFieldEnd()
     oprot.writeFieldStop()

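The generated readers above all share one mechanical shape: readMapBegin/readListBegin returns the element types and size, a counting loop deserializes each entry, and the numeric suffixes on the temporaries (_key447 becoming _key454, and so on) are just sequential ids the Thrift compiler reassigns across the whole module whenever a field is added upstream — which is all this regeneration changes. A minimal runnable sketch of that loop shape follows; FakeProtocol is a stand-in for illustration only, not part of the real Thrift runtime:

```python
# Sketch of the Thrift-generated map-read loop. FakeProtocol is a
# stand-in that replays pre-decoded data; it is NOT the real TProtocol.

class FakeProtocol:
    """Replays a pre-decoded string map: readMapBegin yields
    (ktype, vtype, size), then readString is called alternately
    for each key and value."""
    def __init__(self, pairs):
        self._strings = [s for kv in pairs for s in kv]  # k1, v1, k2, v2, ...
        self._size = len(pairs)

    def readMapBegin(self):
        # TType.STRING == 11 in the real library; hard-coded here.
        return (11, 11, self._size)

    def readString(self):
        return self._strings.pop(0).encode('utf-8')

    def readMapEnd(self):
        pass

def read_string_map(iprot):
    """Same shape as the generated SupervisorInfo.scheduler_meta reader."""
    result = {}
    (_ktype, _vtype, _size) = iprot.readMapBegin()
    for _i in range(_size):  # xrange in the Python 2 generated output
        _key = iprot.readString().decode('utf-8')
        _val = iprot.readString().decode('utf-8')
        result[_key] = _val
    iprot.readMapEnd()
    return result

print(read_string_map(FakeProtocol([("zone", "us-east"), ("rack", "r1")])))
# → {'zone': 'us-east', 'rack': 'r1'}
```

The write side in the diff is the mirror image: writeMapBegin with the two TTypes and the length, one encode-and-write call per entry, then writeMapEnd.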
http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/storm.thrift
----------------------------------------------------------------------
diff --git a/storm-core/src/storm.thrift b/storm-core/src/storm.thrift
index d5952d7..677de2b 100644
--- a/storm-core/src/storm.thrift
+++ b/storm-core/src/storm.thrift
@@ -117,6 +117,7 @@ struct StormTopology {
   1: required map<string, SpoutSpec> spouts;
   2: required map<string, Bolt> bolts;
   3: required map<string, StateSpoutSpec> state_spouts;
+  4: optional list<binary> worker_hooks;
 }
 
 exception AlreadyAliveException {


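The only hand-written change driving all of the regenerated code above is the new field 4 on StormTopology, `optional list<binary> worker_hooks`. In Thrift's generated Python, an optional field defaults to None and is skipped on the wire entirely when unset, which is what keeps old serialized topologies readable. A hedged sketch of that behavior; StormTopologySketch is illustrative, not the real generated class:

```python
# Illustrative sketch of how an `optional list<binary>` field behaves in
# Thrift-generated Python. StormTopologySketch is a stand-in, NOT the
# real generated StormTopology class.

class StormTopologySketch:
    def __init__(self, spouts=None, bolts=None, state_spouts=None,
                 worker_hooks=None):
        self.spouts = spouts or {}
        self.bolts = bolts or {}
        self.state_spouts = state_spouts or {}
        self.worker_hooks = worker_hooks  # field 4: optional list<binary>

    def fields_to_write(self):
        """Mirrors the generated write(): required fields always go out;
        an optional field is skipped when None, matching the generated
        `if self.worker_hooks is not None:` guard."""
        fields = [("spouts", 1), ("bolts", 2), ("state_spouts", 3)]
        if self.worker_hooks is not None:
            fields.append(("worker_hooks", 4))
        return fields

old = StormTopologySketch()                        # pre-upgrade topology
new = StormTopologySketch(worker_hooks=[b"\x00"])  # serialized hook blobs
print([name for name, _ in old.fields_to_write()])
# → ['spouts', 'bolts', 'state_spouts']
print([name for name, _ in new.fields_to_write()])
# → ['spouts', 'bolts', 'state_spouts', 'worker_hooks']
```

On the read side, an old reader that does not know field id 4 simply hits the `else: iprot.skip(ftype)` branch seen throughout the generated code above, so the addition is wire-compatible in both directions.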
[03/50] [abbrv] storm git commit: Merge branch 'master' into STORM-1340

Posted by sr...@apache.org.
Merge branch 'master' into STORM-1340


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/ba6ace80
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/ba6ace80
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/ba6ace80

Branch: refs/heads/STORM-1040
Commit: ba6ace80f4a88a141239e7028509b16f8595fad1
Parents: bb266fb 45792dd
Author: Kyle Nusbaum <Ky...@gmail.com>
Authored: Mon Nov 23 16:34:59 2015 -0600
Committer: Kyle Nusbaum <Ky...@gmail.com>
Committed: Mon Nov 23 16:34:59 2015 -0600

----------------------------------------------------------------------
 CHANGELOG.md                                    |  16 +
 DEVELOPER.md                                    |   2 +
 LICENSE                                         |  29 +
 README.markdown                                 |   3 +
 bin/storm.py                                    |  54 +-
 conf/defaults.yaml                              |  14 +
 dev-tools/travis/travis-script.sh               |   4 +-
 docs/documentation/Pacemaker.md                 | 108 +++
 .../documentation/Setting-up-a-Storm-cluster.md |  19 +
 docs/documentation/Windowing.md                 | 144 +++
 docs/documentation/ui-rest-api.md               |  16 +-
 .../storm/starter/SlidingWindowTopology.java    | 185 ++++
 log4j2/cluster.xml                              |  12 +-
 log4j2/worker.xml                               |  10 +-
 pom.xml                                         |   3 +-
 storm-core/pom.xml                              |   7 +-
 storm-core/src/clj/backtype/storm/cluster.clj   | 279 ++----
 .../cluster_state/zookeeper_state_factory.clj   | 157 ++++
 .../clj/backtype/storm/command/healthcheck.clj  |  88 ++
 .../clj/backtype/storm/command/heartbeats.clj   |  52 ++
 storm-core/src/clj/backtype/storm/config.clj    |  13 +
 .../src/clj/backtype/storm/daemon/executor.clj  | 107 ++-
 .../src/clj/backtype/storm/daemon/logviewer.clj |  60 +-
 .../src/clj/backtype/storm/daemon/nimbus.clj    |  97 +-
 .../clj/backtype/storm/daemon/supervisor.clj    |  11 +
 .../src/clj/backtype/storm/daemon/worker.clj    |  64 +-
 .../src/clj/backtype/storm/messaging/loader.clj |  76 +-
 .../src/clj/backtype/storm/messaging/local.clj  |  72 +-
 storm-core/src/clj/backtype/storm/stats.clj     |  88 +-
 storm-core/src/clj/backtype/storm/ui/core.clj   |  21 +-
 storm-core/src/clj/backtype/storm/util.clj      |  16 +
 .../org/apache/storm/pacemaker/pacemaker.clj    | 237 +++++
 .../storm/pacemaker/pacemaker_state_factory.clj | 124 +++
 storm-core/src/genthrift.sh                     |   2 +-
 storm-core/src/jvm/backtype/storm/Config.java   | 131 ++-
 .../backtype/storm/cluster/ClusterState.java    | 208 +++++
 .../storm/cluster/ClusterStateContext.java      |  41 +
 .../storm/cluster/ClusterStateFactory.java      |  28 +
 .../storm/cluster/ClusterStateListener.java     |  22 +
 .../backtype/storm/cluster/ConnectionState.java |  24 +
 .../jvm/backtype/storm/cluster/DaemonType.java  |  27 +
 .../storm/generated/AlreadyAliveException.java  |  11 +-
 .../backtype/storm/generated/Assignment.java    |   4 +-
 .../storm/generated/AuthorizationException.java |   4 +-
 .../src/jvm/backtype/storm/generated/Bolt.java  |   4 +-
 .../storm/generated/BoltAggregateStats.java     |  12 +-
 .../jvm/backtype/storm/generated/BoltStats.java |   4 +-
 .../storm/generated/ClusterSummary.java         |   6 +-
 .../storm/generated/ClusterWorkerHeartbeat.java |   8 +-
 .../storm/generated/CommonAggregateStats.java   |  16 +-
 .../generated/ComponentAggregateStats.java      |   4 +-
 .../storm/generated/ComponentCommon.java        |   6 +-
 .../storm/generated/ComponentObject.java        |   2 +-
 .../storm/generated/ComponentPageInfo.java      |  10 +-
 .../backtype/storm/generated/ComponentType.java |   2 +-
 .../backtype/storm/generated/Credentials.java   |   4 +-
 .../storm/generated/DRPCExecutionException.java |   4 +-
 .../backtype/storm/generated/DRPCRequest.java   |   4 +-
 .../backtype/storm/generated/DebugOptions.java  |   8 +-
 .../storm/generated/DistributedRPC.java         |   4 +-
 .../generated/DistributedRPCInvocations.java    |   4 +-
 .../jvm/backtype/storm/generated/ErrorInfo.java |   8 +-
 .../storm/generated/ExecutorAggregateStats.java |   4 +-
 .../backtype/storm/generated/ExecutorInfo.java  |   8 +-
 .../storm/generated/ExecutorSpecificStats.java  |   2 +-
 .../backtype/storm/generated/ExecutorStats.java |   6 +-
 .../storm/generated/ExecutorSummary.java        |   8 +-
 .../storm/generated/GetInfoOptions.java         |   4 +-
 .../storm/generated/GlobalStreamId.java         |   4 +-
 .../jvm/backtype/storm/generated/Grouping.java  |   2 +-
 .../generated/HBAuthorizationException.java     | 406 ++++++++
 .../storm/generated/HBExecutionException.java   | 406 ++++++++
 .../jvm/backtype/storm/generated/HBMessage.java | 636 +++++++++++++
 .../backtype/storm/generated/HBMessageData.java | 640 +++++++++++++
 .../jvm/backtype/storm/generated/HBNodes.java   | 461 +++++++++
 .../jvm/backtype/storm/generated/HBPulse.java   | 522 +++++++++++
 .../jvm/backtype/storm/generated/HBRecords.java | 466 +++++++++
 .../storm/generated/HBServerMessageType.java    | 113 +++
 .../generated/InvalidTopologyException.java     |   4 +-
 .../backtype/storm/generated/JavaObject.java    |   4 +-
 .../backtype/storm/generated/JavaObjectArg.java |   2 +-
 .../backtype/storm/generated/KillOptions.java   |   6 +-
 .../storm/generated/LSApprovedWorkers.java      |   4 +-
 .../generated/LSSupervisorAssignments.java      |   4 +-
 .../storm/generated/LSSupervisorId.java         |   4 +-
 .../backtype/storm/generated/LSTopoHistory.java |   6 +-
 .../storm/generated/LSTopoHistoryList.java      |   4 +-
 .../storm/generated/LSWorkerHeartbeat.java      |   8 +-
 .../storm/generated/LocalAssignment.java        |   4 +-
 .../storm/generated/LocalStateData.java         |   4 +-
 .../jvm/backtype/storm/generated/LogConfig.java |  52 +-
 .../jvm/backtype/storm/generated/LogLevel.java  |   8 +-
 .../storm/generated/LogLevelAction.java         |   2 +-
 .../jvm/backtype/storm/generated/Nimbus.java    |  48 +-
 .../backtype/storm/generated/NimbusSummary.java |  10 +-
 .../jvm/backtype/storm/generated/NodeInfo.java  |   4 +-
 .../storm/generated/NotAliveException.java      |   4 +-
 .../backtype/storm/generated/NullStruct.java    |   4 +-
 .../storm/generated/NumErrorsChoice.java        |   2 +-
 .../backtype/storm/generated/ProfileAction.java |   2 +-
 .../storm/generated/ProfileRequest.java         |   6 +-
 .../storm/generated/RebalanceOptions.java       |   8 +-
 .../storm/generated/ShellComponent.java         |   4 +-
 .../storm/generated/SpecificAggregateStats.java |   2 +-
 .../storm/generated/SpoutAggregateStats.java    |   6 +-
 .../jvm/backtype/storm/generated/SpoutSpec.java |   4 +-
 .../backtype/storm/generated/SpoutStats.java    |   4 +-
 .../storm/generated/StateSpoutSpec.java         |   4 +-
 .../jvm/backtype/storm/generated/StormBase.java |   8 +-
 .../backtype/storm/generated/StormTopology.java |   4 +-
 .../backtype/storm/generated/StreamInfo.java    |   6 +-
 .../backtype/storm/generated/SubmitOptions.java |   4 +-
 .../storm/generated/SupervisorInfo.java         |   8 +-
 .../storm/generated/SupervisorSummary.java      | 216 ++++-
 .../storm/generated/ThriftSerializedObject.java |   4 +-
 .../storm/generated/TopologyActionOptions.java  |   2 +-
 .../storm/generated/TopologyHistoryInfo.java    |   4 +-
 .../backtype/storm/generated/TopologyInfo.java  |  20 +-
 .../storm/generated/TopologyInitialStatus.java  |   2 +-
 .../storm/generated/TopologyPageInfo.java       |  26 +-
 .../backtype/storm/generated/TopologyStats.java |   4 +-
 .../storm/generated/TopologyStatus.java         |   2 +-
 .../storm/generated/TopologySummary.java        |  26 +-
 .../storm/generated/WorkerResources.java        |  10 +-
 .../storm/messaging/AddressedTuple.java         |  46 +
 .../DeserializingConnectionCallback.java        |  60 ++
 .../backtype/storm/messaging/IConnection.java   |  10 +-
 .../storm/messaging/IConnectionCallback.java    |  31 +
 .../backtype/storm/messaging/local/Context.java | 164 ++++
 .../backtype/storm/messaging/netty/Client.java  |   3 +-
 .../storm/messaging/netty/ControlMessage.java   |  17 +-
 .../messaging/netty/INettySerializable.java     |  26 +
 .../netty/KerberosSaslClientHandler.java        | 152 +++
 .../netty/KerberosSaslNettyClient.java          | 203 ++++
 .../netty/KerberosSaslNettyClientState.java     |  31 +
 .../netty/KerberosSaslNettyServer.java          | 210 +++++
 .../netty/KerberosSaslNettyServerState.java     |  30 +
 .../netty/KerberosSaslServerHandler.java        | 133 +++
 .../storm/messaging/netty/MessageDecoder.java   |   4 +-
 .../netty/NettyRenameThreadFactory.java         |  10 +-
 .../netty/NettyUncaughtExceptionHandler.java    |  35 +
 .../storm/messaging/netty/SaslMessageToken.java |  37 +-
 .../storm/messaging/netty/SaslNettyClient.java  |  22 +-
 .../storm/messaging/netty/SaslNettyServer.java  | 244 +++--
 .../messaging/netty/SaslNettyServerState.java   |  13 +-
 .../messaging/netty/SaslStormServerHandler.java |  21 +-
 .../storm/messaging/netty/SaslUtils.java        |   1 +
 .../backtype/storm/messaging/netty/Server.java  | 163 +---
 .../messaging/netty/StormServerHandler.java     |  24 +-
 .../metric/internal/LatencyStatAndMetric.java   |  13 +-
 .../nimbus/ITopologyActionNotifierPlugin.java   |  43 +
 .../jvm/backtype/storm/scheduler/Cluster.java   | 115 ++-
 .../resource/ResourceAwareScheduler.java        |  18 +-
 .../backtype/storm/security/auth/AuthUtils.java |  69 ++
 .../security/auth/SimpleTransportPlugin.java    |   2 +-
 .../backtype/storm/task/OutputCollector.java    |   2 +-
 .../backtype/storm/topology/IWindowedBolt.java  |  40 +
 .../storm/topology/TopologyBuilder.java         |  17 +
 .../storm/topology/WindowedBoltExecutor.java    | 224 +++++
 .../storm/topology/base/BaseWindowedBolt.java   | 179 ++++
 .../backtype/storm/tuple/AddressedTuple.java    |  48 +
 .../backtype/storm/utils/DisruptorQueue.java    | 187 +++-
 .../src/jvm/backtype/storm/utils/Utils.java     | 102 ++
 .../storm/validation/ConfigValidation.java      |  20 +-
 .../storm/windowing/CountEvictionPolicy.java    |  68 ++
 .../storm/windowing/CountTriggerPolicy.java     |  63 ++
 .../src/jvm/backtype/storm/windowing/Event.java |  41 +
 .../jvm/backtype/storm/windowing/EventImpl.java |  38 +
 .../storm/windowing/EvictionPolicy.java         |  42 +
 .../storm/windowing/TimeEvictionPolicy.java     |  52 ++
 .../storm/windowing/TimeTriggerPolicy.java      | 115 +++
 .../storm/windowing/TriggerHandler.java         |  29 +
 .../backtype/storm/windowing/TriggerPolicy.java |  42 +
 .../backtype/storm/windowing/TupleWindow.java   |  26 +
 .../storm/windowing/TupleWindowImpl.java        |  61 ++
 .../jvm/backtype/storm/windowing/Window.java    |  48 +
 .../windowing/WindowLifecycleListener.java      |  42 +
 .../backtype/storm/windowing/WindowManager.java | 212 +++++
 .../storm/pacemaker/IServerMessageHandler.java  |  25 +
 .../apache/storm/pacemaker/PacemakerClient.java | 255 +++++
 .../storm/pacemaker/PacemakerClientHandler.java |  75 ++
 .../apache/storm/pacemaker/PacemakerServer.java | 163 ++++
 .../storm/pacemaker/codec/ThriftDecoder.java    |  76 ++
 .../storm/pacemaker/codec/ThriftEncoder.java    | 110 +++
 .../pacemaker/codec/ThriftNettyClientCodec.java |  94 ++
 .../pacemaker/codec/ThriftNettyServerCodec.java |  99 ++
 .../src/jvm/storm/trident/TridentTopology.java  |  36 +-
 .../jvm/storm/trident/graph/GraphGrouper.java   |  11 +-
 .../jvm/storm/trident/spout/IBatchSpout.java    |   2 +-
 .../spout/IOpaquePartitionedTridentSpout.java   |   3 +-
 .../trident/spout/IPartitionedTridentSpout.java |   2 +-
 .../storm/trident/spout/ITridentDataSource.java |  26 +
 .../jvm/storm/trident/spout/ITridentSpout.java  |   2 +-
 .../jvm/storm/trident/util/TridentUtils.java    |  33 +-
 storm-core/src/py/storm/DistributedRPC-remote   |   2 +-
 storm-core/src/py/storm/DistributedRPC.py       |  20 +-
 .../py/storm/DistributedRPCInvocations-remote   |   2 +-
 .../src/py/storm/DistributedRPCInvocations.py   |  41 +-
 storm-core/src/py/storm/Nimbus-remote           |   2 +-
 storm-core/src/py/storm/Nimbus.py               | 457 ++++++---
 storm-core/src/py/storm/constants.py            |   2 +-
 storm-core/src/py/storm/ttypes.py               | 934 ++++++++++++++++---
 storm-core/src/storm.thrift                     |  61 ++
 storm-core/src/ui/public/css/style.css          |   8 +
 storm-core/src/ui/public/images/bug.png         | Bin 0 -> 4045 bytes
 storm-core/src/ui/public/images/statistic.png   | Bin 0 -> 488 bytes
 storm-core/src/ui/public/index.html             |   4 +-
 .../public/templates/index-page-template.html   |  36 +
 .../templates/topology-page-template.html       |   6 +
 .../src/ui/public/templates/user-template.html  |  22 +-
 storm-core/src/ui/public/topology.html          |   7 +-
 .../test/clj/backtype/storm/cluster_test.clj    |   7 +-
 .../storm/messaging/netty_unit_test.clj         | 122 +--
 .../test/clj/backtype/storm/messaging_test.clj  |  25 -
 .../test/clj/backtype/storm/metrics_test.clj    |   2 +-
 .../test/clj/backtype/storm/nimbus_test.clj     |  97 +-
 .../storm/pacemaker_state_factory_test.clj      | 150 +++
 .../clj/org/apache/storm/pacemaker_test.clj     | 242 +++++
 .../jvm/backtype/storm/TestConfigValidate.java  |  18 +
 .../nimbus/InMemoryTopologyActionNotifier.java  |  53 ++
 .../storm/utils/DisruptorQueueTest.java         |  41 +-
 .../storm/windowing/WindowManagerTest.java      | 250 +++++
 .../jvm/storm/trident/TestTridentTopology.java  |  76 ++
 storm-dist/binary/LICENSE                       |  29 +
 224 files changed, 12822 insertions(+), 1616 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/ba6ace80/dev-tools/travis/travis-script.sh
----------------------------------------------------------------------
diff --cc dev-tools/travis/travis-script.sh
index 325915e,67fe954..a302e23
--- a/dev-tools/travis/travis-script.sh
+++ b/dev-tools/travis/travis-script.sh
@@@ -28,11 -28,11 +28,11 @@@ TRAVIS_SCRIPT_DIR=$( cd "$( dirname "${
  
  cd ${STORM_SRC_ROOT_DIR}
  
- # We should concern that Travis CI could be very slow cause it uses VM
- export STORM_TEST_TIMEOUT_MS=100000
+ # We should be concerned that Travis CI could be very slow because it uses VM
+ export STORM_TEST_TIMEOUT_MS=150000
  
  # We now lean on Travis CI's implicit behavior, ```mvn clean install -DskipTests``` before running script
 -mvn --batch-mode install -fae -Pnative
 +mvn --batch-mode test -fae -Pnative -pl $2
  BUILD_RET_VAL=$?
  
  for dir in `find . -type d -and -wholename \*/target/\*-reports`;


[16/50] [abbrv] storm git commit: addWorkerHook shouldn't accept null; and add test

Posted by sr...@apache.org.
addWorkerHook shouldn't accept null; and add test


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/b0c37045
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/b0c37045
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/b0c37045

Branch: refs/heads/STORM-1040
Commit: b0c37045085d37d9a9309a81ef7f3f7797d454a4
Parents: 4078d95
Author: Michael Schonfeld <mi...@schonfeld.org>
Authored: Tue Nov 17 12:28:20 2015 -0500
Committer: Michael Schonfeld <mi...@schonfeld.org>
Committed: Mon Nov 23 18:50:55 2015 -0500

----------------------------------------------------------------------
 storm-core/src/jvm/backtype/storm/topology/TopologyBuilder.java | 4 ++++
 .../test/jvm/backtype/storm/topology/TopologyBuilderTest.java   | 5 +++++
 2 files changed, 9 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/b0c37045/storm-core/src/jvm/backtype/storm/topology/TopologyBuilder.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/topology/TopologyBuilder.java b/storm-core/src/jvm/backtype/storm/topology/TopologyBuilder.java
index 965540e..9d2ef61 100644
--- a/storm-core/src/jvm/backtype/storm/topology/TopologyBuilder.java
+++ b/storm-core/src/jvm/backtype/storm/topology/TopologyBuilder.java
@@ -237,6 +237,10 @@ public class TopologyBuilder {
      * @param workerHook the lifecycle hook to add
      */
     public void addWorkerHook(IWorkerHook workerHook) {
+        if(null == workerHook) {
+            throw new IllegalArgumentException("WorkerHook must not be null.");
+        }
+
         _workerHooks.add(ByteBuffer.wrap(Utils.javaSerialize(workerHook)));
     }
 

http://git-wip-us.apache.org/repos/asf/storm/blob/b0c37045/storm-core/test/jvm/backtype/storm/topology/TopologyBuilderTest.java
----------------------------------------------------------------------
diff --git a/storm-core/test/jvm/backtype/storm/topology/TopologyBuilderTest.java b/storm-core/test/jvm/backtype/storm/topology/TopologyBuilderTest.java
index 934bd69..c0891a6 100644
--- a/storm-core/test/jvm/backtype/storm/topology/TopologyBuilderTest.java
+++ b/storm-core/test/jvm/backtype/storm/topology/TopologyBuilderTest.java
@@ -39,6 +39,11 @@ public class TopologyBuilderTest {
         builder.setSpout("spout", mock(IRichSpout.class), 0);
     }
 
+    @Test(expected = IllegalArgumentException.class)
+    public void testAddWorkerHook() {
+        builder.addWorkerHook(null);
+    }
+
     // TODO enable if setStateSpout gets implemented
 //    @Test(expected = IllegalArgumentException.class)
 //    public void testSetStateSpout() {

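For readers following along, the guard this commit adds can be sketched in isolation. This is a minimal stand-in, not Storm's actual TopologyBuilder: `MiniBuilder` and the empty `IWorkerHook` interface below are simplified placeholders that reproduce only the null check shown in the diff above.

```java
import java.util.ArrayList;
import java.util.List;

public class WorkerHookNullCheck {
    // Stand-in for backtype.storm.hooks.IWorkerHook (methods omitted)
    interface IWorkerHook {}

    // Hypothetical miniature of TopologyBuilder, reproducing the new guard
    static class MiniBuilder {
        private final List<IWorkerHook> workerHooks = new ArrayList<>();

        public void addWorkerHook(IWorkerHook workerHook) {
            // Mirrors the check added in the diff: reject null before serializing
            if (null == workerHook) {
                throw new IllegalArgumentException("WorkerHook must not be null.");
            }
            workerHooks.add(workerHook);
        }
    }

    public static void main(String[] args) {
        MiniBuilder builder = new MiniBuilder();
        builder.addWorkerHook(new IWorkerHook() {}); // non-null hook is accepted
        try {
            builder.addWorkerHook(null);             // null is now rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Failing fast here means a misconfigured topology is caught at build time rather than surfacing later as a serialization error inside `Utils.javaSerialize`, which is also what the accompanying `TopologyBuilderTest` change asserts.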

[32/50] [abbrv] storm git commit: STORM-1217

Posted by sr...@apache.org.
STORM-1217


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/e182624e
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/e182624e
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/e182624e

Branch: refs/heads/STORM-1040
Commit: e182624eb5ba5d7e7561dc8bb2ae23e1217bc5be
Parents: 05c7004
Author: Derek Dagit <de...@yahoo-inc.com>
Authored: Tue Nov 24 16:48:38 2015 -0600
Committer: Derek Dagit <de...@yahoo-inc.com>
Committed: Tue Nov 24 16:48:38 2015 -0600

----------------------------------------------------------------------
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/e182624e/CHANGELOG.md
----------------------------------------------------------------------
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 2661422..b116116 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -151,6 +151,7 @@
  * STORM-1142: Some config validators for positive ints need to allow 0
  * STORM-901: Worker Artifacts Directory
  * STORM-1144: Display requested and assigned cpu/mem resources for schedulers in UI
+ * STORM-1217: making small fixes in RAS
 
 ## 0.10.0-beta2
  * STORM-1108: Fix NPE in simulated time


[10/50] [abbrv] storm git commit: add support for worker lifecycle hooks

Posted by sr...@apache.org.
http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/ComponentPageInfo.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/ComponentPageInfo.java b/storm-core/src/jvm/backtype/storm/generated/ComponentPageInfo.java
index e2a538b..6152d02 100644
--- a/storm-core/src/jvm/backtype/storm/generated/ComponentPageInfo.java
+++ b/storm-core/src/jvm/backtype/storm/generated/ComponentPageInfo.java
@@ -1657,16 +1657,16 @@ public class ComponentPageInfo implements org.apache.thrift.TBase<ComponentPageI
           case 7: // WINDOW_TO_STATS
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map414 = iprot.readMapBegin();
-                struct.window_to_stats = new HashMap<String,ComponentAggregateStats>(2*_map414.size);
-                String _key415;
-                ComponentAggregateStats _val416;
-                for (int _i417 = 0; _i417 < _map414.size; ++_i417)
+                org.apache.thrift.protocol.TMap _map422 = iprot.readMapBegin();
+                struct.window_to_stats = new HashMap<String,ComponentAggregateStats>(2*_map422.size);
+                String _key423;
+                ComponentAggregateStats _val424;
+                for (int _i425 = 0; _i425 < _map422.size; ++_i425)
                 {
-                  _key415 = iprot.readString();
-                  _val416 = new ComponentAggregateStats();
-                  _val416.read(iprot);
-                  struct.window_to_stats.put(_key415, _val416);
+                  _key423 = iprot.readString();
+                  _val424 = new ComponentAggregateStats();
+                  _val424.read(iprot);
+                  struct.window_to_stats.put(_key423, _val424);
                 }
                 iprot.readMapEnd();
               }
@@ -1678,17 +1678,17 @@ public class ComponentPageInfo implements org.apache.thrift.TBase<ComponentPageI
           case 8: // GSID_TO_INPUT_STATS
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map418 = iprot.readMapBegin();
-                struct.gsid_to_input_stats = new HashMap<GlobalStreamId,ComponentAggregateStats>(2*_map418.size);
-                GlobalStreamId _key419;
-                ComponentAggregateStats _val420;
-                for (int _i421 = 0; _i421 < _map418.size; ++_i421)
+                org.apache.thrift.protocol.TMap _map426 = iprot.readMapBegin();
+                struct.gsid_to_input_stats = new HashMap<GlobalStreamId,ComponentAggregateStats>(2*_map426.size);
+                GlobalStreamId _key427;
+                ComponentAggregateStats _val428;
+                for (int _i429 = 0; _i429 < _map426.size; ++_i429)
                 {
-                  _key419 = new GlobalStreamId();
-                  _key419.read(iprot);
-                  _val420 = new ComponentAggregateStats();
-                  _val420.read(iprot);
-                  struct.gsid_to_input_stats.put(_key419, _val420);
+                  _key427 = new GlobalStreamId();
+                  _key427.read(iprot);
+                  _val428 = new ComponentAggregateStats();
+                  _val428.read(iprot);
+                  struct.gsid_to_input_stats.put(_key427, _val428);
                 }
                 iprot.readMapEnd();
               }
@@ -1700,16 +1700,16 @@ public class ComponentPageInfo implements org.apache.thrift.TBase<ComponentPageI
           case 9: // SID_TO_OUTPUT_STATS
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map422 = iprot.readMapBegin();
-                struct.sid_to_output_stats = new HashMap<String,ComponentAggregateStats>(2*_map422.size);
-                String _key423;
-                ComponentAggregateStats _val424;
-                for (int _i425 = 0; _i425 < _map422.size; ++_i425)
+                org.apache.thrift.protocol.TMap _map430 = iprot.readMapBegin();
+                struct.sid_to_output_stats = new HashMap<String,ComponentAggregateStats>(2*_map430.size);
+                String _key431;
+                ComponentAggregateStats _val432;
+                for (int _i433 = 0; _i433 < _map430.size; ++_i433)
                 {
-                  _key423 = iprot.readString();
-                  _val424 = new ComponentAggregateStats();
-                  _val424.read(iprot);
-                  struct.sid_to_output_stats.put(_key423, _val424);
+                  _key431 = iprot.readString();
+                  _val432 = new ComponentAggregateStats();
+                  _val432.read(iprot);
+                  struct.sid_to_output_stats.put(_key431, _val432);
                 }
                 iprot.readMapEnd();
               }
@@ -1721,14 +1721,14 @@ public class ComponentPageInfo implements org.apache.thrift.TBase<ComponentPageI
           case 10: // EXEC_STATS
             if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
               {
-                org.apache.thrift.protocol.TList _list426 = iprot.readListBegin();
-                struct.exec_stats = new ArrayList<ExecutorAggregateStats>(_list426.size);
-                ExecutorAggregateStats _elem427;
-                for (int _i428 = 0; _i428 < _list426.size; ++_i428)
+                org.apache.thrift.protocol.TList _list434 = iprot.readListBegin();
+                struct.exec_stats = new ArrayList<ExecutorAggregateStats>(_list434.size);
+                ExecutorAggregateStats _elem435;
+                for (int _i436 = 0; _i436 < _list434.size; ++_i436)
                 {
-                  _elem427 = new ExecutorAggregateStats();
-                  _elem427.read(iprot);
-                  struct.exec_stats.add(_elem427);
+                  _elem435 = new ExecutorAggregateStats();
+                  _elem435.read(iprot);
+                  struct.exec_stats.add(_elem435);
                 }
                 iprot.readListEnd();
               }
@@ -1740,14 +1740,14 @@ public class ComponentPageInfo implements org.apache.thrift.TBase<ComponentPageI
           case 11: // ERRORS
             if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
               {
-                org.apache.thrift.protocol.TList _list429 = iprot.readListBegin();
-                struct.errors = new ArrayList<ErrorInfo>(_list429.size);
-                ErrorInfo _elem430;
-                for (int _i431 = 0; _i431 < _list429.size; ++_i431)
+                org.apache.thrift.protocol.TList _list437 = iprot.readListBegin();
+                struct.errors = new ArrayList<ErrorInfo>(_list437.size);
+                ErrorInfo _elem438;
+                for (int _i439 = 0; _i439 < _list437.size; ++_i439)
                 {
-                  _elem430 = new ErrorInfo();
-                  _elem430.read(iprot);
-                  struct.errors.add(_elem430);
+                  _elem438 = new ErrorInfo();
+                  _elem438.read(iprot);
+                  struct.errors.add(_elem438);
                 }
                 iprot.readListEnd();
               }
@@ -1841,10 +1841,10 @@ public class ComponentPageInfo implements org.apache.thrift.TBase<ComponentPageI
           oprot.writeFieldBegin(WINDOW_TO_STATS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, struct.window_to_stats.size()));
-            for (Map.Entry<String, ComponentAggregateStats> _iter432 : struct.window_to_stats.entrySet())
+            for (Map.Entry<String, ComponentAggregateStats> _iter440 : struct.window_to_stats.entrySet())
             {
-              oprot.writeString(_iter432.getKey());
-              _iter432.getValue().write(oprot);
+              oprot.writeString(_iter440.getKey());
+              _iter440.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -1856,10 +1856,10 @@ public class ComponentPageInfo implements org.apache.thrift.TBase<ComponentPageI
           oprot.writeFieldBegin(GSID_TO_INPUT_STATS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.STRUCT, struct.gsid_to_input_stats.size()));
-            for (Map.Entry<GlobalStreamId, ComponentAggregateStats> _iter433 : struct.gsid_to_input_stats.entrySet())
+            for (Map.Entry<GlobalStreamId, ComponentAggregateStats> _iter441 : struct.gsid_to_input_stats.entrySet())
             {
-              _iter433.getKey().write(oprot);
-              _iter433.getValue().write(oprot);
+              _iter441.getKey().write(oprot);
+              _iter441.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -1871,10 +1871,10 @@ public class ComponentPageInfo implements org.apache.thrift.TBase<ComponentPageI
           oprot.writeFieldBegin(SID_TO_OUTPUT_STATS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, struct.sid_to_output_stats.size()));
-            for (Map.Entry<String, ComponentAggregateStats> _iter434 : struct.sid_to_output_stats.entrySet())
+            for (Map.Entry<String, ComponentAggregateStats> _iter442 : struct.sid_to_output_stats.entrySet())
             {
-              oprot.writeString(_iter434.getKey());
-              _iter434.getValue().write(oprot);
+              oprot.writeString(_iter442.getKey());
+              _iter442.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -1886,9 +1886,9 @@ public class ComponentPageInfo implements org.apache.thrift.TBase<ComponentPageI
           oprot.writeFieldBegin(EXEC_STATS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.exec_stats.size()));
-            for (ExecutorAggregateStats _iter435 : struct.exec_stats)
+            for (ExecutorAggregateStats _iter443 : struct.exec_stats)
             {
-              _iter435.write(oprot);
+              _iter443.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -1900,9 +1900,9 @@ public class ComponentPageInfo implements org.apache.thrift.TBase<ComponentPageI
           oprot.writeFieldBegin(ERRORS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.errors.size()));
-            for (ErrorInfo _iter436 : struct.errors)
+            for (ErrorInfo _iter444 : struct.errors)
             {
-              _iter436.write(oprot);
+              _iter444.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -2010,48 +2010,48 @@ public class ComponentPageInfo implements org.apache.thrift.TBase<ComponentPageI
       if (struct.is_set_window_to_stats()) {
         {
           oprot.writeI32(struct.window_to_stats.size());
-          for (Map.Entry<String, ComponentAggregateStats> _iter437 : struct.window_to_stats.entrySet())
+          for (Map.Entry<String, ComponentAggregateStats> _iter445 : struct.window_to_stats.entrySet())
           {
-            oprot.writeString(_iter437.getKey());
-            _iter437.getValue().write(oprot);
+            oprot.writeString(_iter445.getKey());
+            _iter445.getValue().write(oprot);
           }
         }
       }
       if (struct.is_set_gsid_to_input_stats()) {
         {
           oprot.writeI32(struct.gsid_to_input_stats.size());
-          for (Map.Entry<GlobalStreamId, ComponentAggregateStats> _iter438 : struct.gsid_to_input_stats.entrySet())
+          for (Map.Entry<GlobalStreamId, ComponentAggregateStats> _iter446 : struct.gsid_to_input_stats.entrySet())
           {
-            _iter438.getKey().write(oprot);
-            _iter438.getValue().write(oprot);
+            _iter446.getKey().write(oprot);
+            _iter446.getValue().write(oprot);
           }
         }
       }
       if (struct.is_set_sid_to_output_stats()) {
         {
           oprot.writeI32(struct.sid_to_output_stats.size());
-          for (Map.Entry<String, ComponentAggregateStats> _iter439 : struct.sid_to_output_stats.entrySet())
+          for (Map.Entry<String, ComponentAggregateStats> _iter447 : struct.sid_to_output_stats.entrySet())
           {
-            oprot.writeString(_iter439.getKey());
-            _iter439.getValue().write(oprot);
+            oprot.writeString(_iter447.getKey());
+            _iter447.getValue().write(oprot);
           }
         }
       }
       if (struct.is_set_exec_stats()) {
         {
           oprot.writeI32(struct.exec_stats.size());
-          for (ExecutorAggregateStats _iter440 : struct.exec_stats)
+          for (ExecutorAggregateStats _iter448 : struct.exec_stats)
           {
-            _iter440.write(oprot);
+            _iter448.write(oprot);
           }
         }
       }
       if (struct.is_set_errors()) {
         {
           oprot.writeI32(struct.errors.size());
-          for (ErrorInfo _iter441 : struct.errors)
+          for (ErrorInfo _iter449 : struct.errors)
           {
-            _iter441.write(oprot);
+            _iter449.write(oprot);
           }
         }
       }
@@ -2095,77 +2095,77 @@ public class ComponentPageInfo implements org.apache.thrift.TBase<ComponentPageI
       }
       if (incoming.get(4)) {
         {
-          org.apache.thrift.protocol.TMap _map442 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.window_to_stats = new HashMap<String,ComponentAggregateStats>(2*_map442.size);
-          String _key443;
-          ComponentAggregateStats _val444;
-          for (int _i445 = 0; _i445 < _map442.size; ++_i445)
+          org.apache.thrift.protocol.TMap _map450 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.window_to_stats = new HashMap<String,ComponentAggregateStats>(2*_map450.size);
+          String _key451;
+          ComponentAggregateStats _val452;
+          for (int _i453 = 0; _i453 < _map450.size; ++_i453)
           {
-            _key443 = iprot.readString();
-            _val444 = new ComponentAggregateStats();
-            _val444.read(iprot);
-            struct.window_to_stats.put(_key443, _val444);
+            _key451 = iprot.readString();
+            _val452 = new ComponentAggregateStats();
+            _val452.read(iprot);
+            struct.window_to_stats.put(_key451, _val452);
           }
         }
         struct.set_window_to_stats_isSet(true);
       }
       if (incoming.get(5)) {
         {
-          org.apache.thrift.protocol.TMap _map446 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.gsid_to_input_stats = new HashMap<GlobalStreamId,ComponentAggregateStats>(2*_map446.size);
-          GlobalStreamId _key447;
-          ComponentAggregateStats _val448;
-          for (int _i449 = 0; _i449 < _map446.size; ++_i449)
+          org.apache.thrift.protocol.TMap _map454 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.gsid_to_input_stats = new HashMap<GlobalStreamId,ComponentAggregateStats>(2*_map454.size);
+          GlobalStreamId _key455;
+          ComponentAggregateStats _val456;
+          for (int _i457 = 0; _i457 < _map454.size; ++_i457)
           {
-            _key447 = new GlobalStreamId();
-            _key447.read(iprot);
-            _val448 = new ComponentAggregateStats();
-            _val448.read(iprot);
-            struct.gsid_to_input_stats.put(_key447, _val448);
+            _key455 = new GlobalStreamId();
+            _key455.read(iprot);
+            _val456 = new ComponentAggregateStats();
+            _val456.read(iprot);
+            struct.gsid_to_input_stats.put(_key455, _val456);
           }
         }
         struct.set_gsid_to_input_stats_isSet(true);
       }
       if (incoming.get(6)) {
         {
-          org.apache.thrift.protocol.TMap _map450 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.sid_to_output_stats = new HashMap<String,ComponentAggregateStats>(2*_map450.size);
-          String _key451;
-          ComponentAggregateStats _val452;
-          for (int _i453 = 0; _i453 < _map450.size; ++_i453)
+          org.apache.thrift.protocol.TMap _map458 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.sid_to_output_stats = new HashMap<String,ComponentAggregateStats>(2*_map458.size);
+          String _key459;
+          ComponentAggregateStats _val460;
+          for (int _i461 = 0; _i461 < _map458.size; ++_i461)
           {
-            _key451 = iprot.readString();
-            _val452 = new ComponentAggregateStats();
-            _val452.read(iprot);
-            struct.sid_to_output_stats.put(_key451, _val452);
+            _key459 = iprot.readString();
+            _val460 = new ComponentAggregateStats();
+            _val460.read(iprot);
+            struct.sid_to_output_stats.put(_key459, _val460);
           }
         }
         struct.set_sid_to_output_stats_isSet(true);
       }
       if (incoming.get(7)) {
         {
-          org.apache.thrift.protocol.TList _list454 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.exec_stats = new ArrayList<ExecutorAggregateStats>(_list454.size);
-          ExecutorAggregateStats _elem455;
-          for (int _i456 = 0; _i456 < _list454.size; ++_i456)
+          org.apache.thrift.protocol.TList _list462 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.exec_stats = new ArrayList<ExecutorAggregateStats>(_list462.size);
+          ExecutorAggregateStats _elem463;
+          for (int _i464 = 0; _i464 < _list462.size; ++_i464)
           {
-            _elem455 = new ExecutorAggregateStats();
-            _elem455.read(iprot);
-            struct.exec_stats.add(_elem455);
+            _elem463 = new ExecutorAggregateStats();
+            _elem463.read(iprot);
+            struct.exec_stats.add(_elem463);
           }
         }
         struct.set_exec_stats_isSet(true);
       }
       if (incoming.get(8)) {
         {
-          org.apache.thrift.protocol.TList _list457 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.errors = new ArrayList<ErrorInfo>(_list457.size);
-          ErrorInfo _elem458;
-          for (int _i459 = 0; _i459 < _list457.size; ++_i459)
+          org.apache.thrift.protocol.TList _list465 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.errors = new ArrayList<ErrorInfo>(_list465.size);
+          ErrorInfo _elem466;
+          for (int _i467 = 0; _i467 < _list465.size; ++_i467)
           {
-            _elem458 = new ErrorInfo();
-            _elem458.read(iprot);
-            struct.errors.add(_elem458);
+            _elem466 = new ErrorInfo();
+            _elem466.read(iprot);
+            struct.errors.add(_elem466);
           }
         }
         struct.set_errors_isSet(true);

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/Credentials.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/Credentials.java b/storm-core/src/jvm/backtype/storm/generated/Credentials.java
index 1f8f00c..75cc5b6 100644
--- a/storm-core/src/jvm/backtype/storm/generated/Credentials.java
+++ b/storm-core/src/jvm/backtype/storm/generated/Credentials.java
@@ -365,15 +365,15 @@ public class Credentials implements org.apache.thrift.TBase<Credentials, Credent
           case 1: // CREDS
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map470 = iprot.readMapBegin();
-                struct.creds = new HashMap<String,String>(2*_map470.size);
-                String _key471;
-                String _val472;
-                for (int _i473 = 0; _i473 < _map470.size; ++_i473)
+                org.apache.thrift.protocol.TMap _map478 = iprot.readMapBegin();
+                struct.creds = new HashMap<String,String>(2*_map478.size);
+                String _key479;
+                String _val480;
+                for (int _i481 = 0; _i481 < _map478.size; ++_i481)
                 {
-                  _key471 = iprot.readString();
-                  _val472 = iprot.readString();
-                  struct.creds.put(_key471, _val472);
+                  _key479 = iprot.readString();
+                  _val480 = iprot.readString();
+                  struct.creds.put(_key479, _val480);
                 }
                 iprot.readMapEnd();
               }
@@ -399,10 +399,10 @@ public class Credentials implements org.apache.thrift.TBase<Credentials, Credent
         oprot.writeFieldBegin(CREDS_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, struct.creds.size()));
-          for (Map.Entry<String, String> _iter474 : struct.creds.entrySet())
+          for (Map.Entry<String, String> _iter482 : struct.creds.entrySet())
           {
-            oprot.writeString(_iter474.getKey());
-            oprot.writeString(_iter474.getValue());
+            oprot.writeString(_iter482.getKey());
+            oprot.writeString(_iter482.getValue());
           }
           oprot.writeMapEnd();
         }
@@ -427,10 +427,10 @@ public class Credentials implements org.apache.thrift.TBase<Credentials, Credent
       TTupleProtocol oprot = (TTupleProtocol) prot;
       {
         oprot.writeI32(struct.creds.size());
-        for (Map.Entry<String, String> _iter475 : struct.creds.entrySet())
+        for (Map.Entry<String, String> _iter483 : struct.creds.entrySet())
         {
-          oprot.writeString(_iter475.getKey());
-          oprot.writeString(_iter475.getValue());
+          oprot.writeString(_iter483.getKey());
+          oprot.writeString(_iter483.getValue());
         }
       }
     }
@@ -439,15 +439,15 @@ public class Credentials implements org.apache.thrift.TBase<Credentials, Credent
     public void read(org.apache.thrift.protocol.TProtocol prot, Credentials struct) throws org.apache.thrift.TException {
       TTupleProtocol iprot = (TTupleProtocol) prot;
       {
-        org.apache.thrift.protocol.TMap _map476 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-        struct.creds = new HashMap<String,String>(2*_map476.size);
-        String _key477;
-        String _val478;
-        for (int _i479 = 0; _i479 < _map476.size; ++_i479)
+        org.apache.thrift.protocol.TMap _map484 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+        struct.creds = new HashMap<String,String>(2*_map484.size);
+        String _key485;
+        String _val486;
+        for (int _i487 = 0; _i487 < _map484.size; ++_i487)
         {
-          _key477 = iprot.readString();
-          _val478 = iprot.readString();
-          struct.creds.put(_key477, _val478);
+          _key485 = iprot.readString();
+          _val486 = iprot.readString();
+          struct.creds.put(_key485, _val486);
         }
       }
       struct.set_creds_isSet(true);
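
The renumbered `_mapNNN`/`_keyNNN`/`_valNNN` temporaries in the Credentials hunks above are an artifact of regenerating the Thrift bindings; the decode pattern each hunk touches is unchanged: a 32-bit size followed by alternating key/value strings, read into a map presized to `2*size`. A minimal sketch of that shape in plain Java — using `java.io` streams rather than Thrift's `TProtocol`, with a hypothetical `CredsCodec` class, purely for illustration:

```java
import java.io.*;
import java.util.*;

public class CredsCodec {
    // Write a String->String map the way the generated TupleScheme does:
    // a 32-bit size, then alternating key/value strings.
    static void write(DataOutputStream out, Map<String, String> creds) throws IOException {
        out.writeInt(creds.size());
        for (Map.Entry<String, String> e : creds.entrySet()) {
            out.writeUTF(e.getKey());
            out.writeUTF(e.getValue());
        }
    }

    // Read it back, mirroring the generated _mapNNN/_keyNNN/_valNNN loop shape.
    static Map<String, String> read(DataInputStream in) throws IOException {
        int size = in.readInt();
        // Same 2*size presizing the generated code uses to avoid rehashing.
        Map<String, String> creds = new HashMap<>(2 * size);
        for (int i = 0; i < size; ++i) {
            // Arguments evaluate left to right, so the key is read before the value.
            creds.put(in.readUTF(), in.readUTF());
        }
        return creds;
    }
}
```

Because the wire format carries only sizes and payloads, not variable names, the `_map470` → `_map478` renumbering above is wire-compatible.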

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/ExecutorStats.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/ExecutorStats.java b/storm-core/src/jvm/backtype/storm/generated/ExecutorStats.java
index ec6cad4..8a2a796 100644
--- a/storm-core/src/jvm/backtype/storm/generated/ExecutorStats.java
+++ b/storm-core/src/jvm/backtype/storm/generated/ExecutorStats.java
@@ -660,27 +660,27 @@ public class ExecutorStats implements org.apache.thrift.TBase<ExecutorStats, Exe
           case 1: // EMITTED
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map268 = iprot.readMapBegin();
-                struct.emitted = new HashMap<String,Map<String,Long>>(2*_map268.size);
-                String _key269;
-                Map<String,Long> _val270;
-                for (int _i271 = 0; _i271 < _map268.size; ++_i271)
+                org.apache.thrift.protocol.TMap _map276 = iprot.readMapBegin();
+                struct.emitted = new HashMap<String,Map<String,Long>>(2*_map276.size);
+                String _key277;
+                Map<String,Long> _val278;
+                for (int _i279 = 0; _i279 < _map276.size; ++_i279)
                 {
-                  _key269 = iprot.readString();
+                  _key277 = iprot.readString();
                   {
-                    org.apache.thrift.protocol.TMap _map272 = iprot.readMapBegin();
-                    _val270 = new HashMap<String,Long>(2*_map272.size);
-                    String _key273;
-                    long _val274;
-                    for (int _i275 = 0; _i275 < _map272.size; ++_i275)
+                    org.apache.thrift.protocol.TMap _map280 = iprot.readMapBegin();
+                    _val278 = new HashMap<String,Long>(2*_map280.size);
+                    String _key281;
+                    long _val282;
+                    for (int _i283 = 0; _i283 < _map280.size; ++_i283)
                     {
-                      _key273 = iprot.readString();
-                      _val274 = iprot.readI64();
-                      _val270.put(_key273, _val274);
+                      _key281 = iprot.readString();
+                      _val282 = iprot.readI64();
+                      _val278.put(_key281, _val282);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.emitted.put(_key269, _val270);
+                  struct.emitted.put(_key277, _val278);
                 }
                 iprot.readMapEnd();
               }
@@ -692,27 +692,27 @@ public class ExecutorStats implements org.apache.thrift.TBase<ExecutorStats, Exe
           case 2: // TRANSFERRED
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map276 = iprot.readMapBegin();
-                struct.transferred = new HashMap<String,Map<String,Long>>(2*_map276.size);
-                String _key277;
-                Map<String,Long> _val278;
-                for (int _i279 = 0; _i279 < _map276.size; ++_i279)
+                org.apache.thrift.protocol.TMap _map284 = iprot.readMapBegin();
+                struct.transferred = new HashMap<String,Map<String,Long>>(2*_map284.size);
+                String _key285;
+                Map<String,Long> _val286;
+                for (int _i287 = 0; _i287 < _map284.size; ++_i287)
                 {
-                  _key277 = iprot.readString();
+                  _key285 = iprot.readString();
                   {
-                    org.apache.thrift.protocol.TMap _map280 = iprot.readMapBegin();
-                    _val278 = new HashMap<String,Long>(2*_map280.size);
-                    String _key281;
-                    long _val282;
-                    for (int _i283 = 0; _i283 < _map280.size; ++_i283)
+                    org.apache.thrift.protocol.TMap _map288 = iprot.readMapBegin();
+                    _val286 = new HashMap<String,Long>(2*_map288.size);
+                    String _key289;
+                    long _val290;
+                    for (int _i291 = 0; _i291 < _map288.size; ++_i291)
                     {
-                      _key281 = iprot.readString();
-                      _val282 = iprot.readI64();
-                      _val278.put(_key281, _val282);
+                      _key289 = iprot.readString();
+                      _val290 = iprot.readI64();
+                      _val286.put(_key289, _val290);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.transferred.put(_key277, _val278);
+                  struct.transferred.put(_key285, _val286);
                 }
                 iprot.readMapEnd();
               }
@@ -755,15 +755,15 @@ public class ExecutorStats implements org.apache.thrift.TBase<ExecutorStats, Exe
         oprot.writeFieldBegin(EMITTED_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, struct.emitted.size()));
-          for (Map.Entry<String, Map<String,Long>> _iter284 : struct.emitted.entrySet())
+          for (Map.Entry<String, Map<String,Long>> _iter292 : struct.emitted.entrySet())
           {
-            oprot.writeString(_iter284.getKey());
+            oprot.writeString(_iter292.getKey());
             {
-              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, _iter284.getValue().size()));
-              for (Map.Entry<String, Long> _iter285 : _iter284.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, _iter292.getValue().size()));
+              for (Map.Entry<String, Long> _iter293 : _iter292.getValue().entrySet())
               {
-                oprot.writeString(_iter285.getKey());
-                oprot.writeI64(_iter285.getValue());
+                oprot.writeString(_iter293.getKey());
+                oprot.writeI64(_iter293.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -776,15 +776,15 @@ public class ExecutorStats implements org.apache.thrift.TBase<ExecutorStats, Exe
         oprot.writeFieldBegin(TRANSFERRED_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, struct.transferred.size()));
-          for (Map.Entry<String, Map<String,Long>> _iter286 : struct.transferred.entrySet())
+          for (Map.Entry<String, Map<String,Long>> _iter294 : struct.transferred.entrySet())
           {
-            oprot.writeString(_iter286.getKey());
+            oprot.writeString(_iter294.getKey());
             {
-              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, _iter286.getValue().size()));
-              for (Map.Entry<String, Long> _iter287 : _iter286.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, _iter294.getValue().size()));
+              for (Map.Entry<String, Long> _iter295 : _iter294.getValue().entrySet())
               {
-                oprot.writeString(_iter287.getKey());
-                oprot.writeI64(_iter287.getValue());
+                oprot.writeString(_iter295.getKey());
+                oprot.writeI64(_iter295.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -820,30 +820,30 @@ public class ExecutorStats implements org.apache.thrift.TBase<ExecutorStats, Exe
       TTupleProtocol oprot = (TTupleProtocol) prot;
       {
         oprot.writeI32(struct.emitted.size());
-        for (Map.Entry<String, Map<String,Long>> _iter288 : struct.emitted.entrySet())
+        for (Map.Entry<String, Map<String,Long>> _iter296 : struct.emitted.entrySet())
         {
-          oprot.writeString(_iter288.getKey());
+          oprot.writeString(_iter296.getKey());
           {
-            oprot.writeI32(_iter288.getValue().size());
-            for (Map.Entry<String, Long> _iter289 : _iter288.getValue().entrySet())
+            oprot.writeI32(_iter296.getValue().size());
+            for (Map.Entry<String, Long> _iter297 : _iter296.getValue().entrySet())
             {
-              oprot.writeString(_iter289.getKey());
-              oprot.writeI64(_iter289.getValue());
+              oprot.writeString(_iter297.getKey());
+              oprot.writeI64(_iter297.getValue());
             }
           }
         }
       }
       {
         oprot.writeI32(struct.transferred.size());
-        for (Map.Entry<String, Map<String,Long>> _iter290 : struct.transferred.entrySet())
+        for (Map.Entry<String, Map<String,Long>> _iter298 : struct.transferred.entrySet())
         {
-          oprot.writeString(_iter290.getKey());
+          oprot.writeString(_iter298.getKey());
           {
-            oprot.writeI32(_iter290.getValue().size());
-            for (Map.Entry<String, Long> _iter291 : _iter290.getValue().entrySet())
+            oprot.writeI32(_iter298.getValue().size());
+            for (Map.Entry<String, Long> _iter299 : _iter298.getValue().entrySet())
             {
-              oprot.writeString(_iter291.getKey());
-              oprot.writeI64(_iter291.getValue());
+              oprot.writeString(_iter299.getKey());
+              oprot.writeI64(_iter299.getValue());
             }
           }
         }
@@ -856,32 +856,8 @@ public class ExecutorStats implements org.apache.thrift.TBase<ExecutorStats, Exe
     public void read(org.apache.thrift.protocol.TProtocol prot, ExecutorStats struct) throws org.apache.thrift.TException {
       TTupleProtocol iprot = (TTupleProtocol) prot;
       {
-        org.apache.thrift.protocol.TMap _map292 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.emitted = new HashMap<String,Map<String,Long>>(2*_map292.size);
-        String _key293;
-        Map<String,Long> _val294;
-        for (int _i295 = 0; _i295 < _map292.size; ++_i295)
-        {
-          _key293 = iprot.readString();
-          {
-            org.apache.thrift.protocol.TMap _map296 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, iprot.readI32());
-            _val294 = new HashMap<String,Long>(2*_map296.size);
-            String _key297;
-            long _val298;
-            for (int _i299 = 0; _i299 < _map296.size; ++_i299)
-            {
-              _key297 = iprot.readString();
-              _val298 = iprot.readI64();
-              _val294.put(_key297, _val298);
-            }
-          }
-          struct.emitted.put(_key293, _val294);
-        }
-      }
-      struct.set_emitted_isSet(true);
-      {
         org.apache.thrift.protocol.TMap _map300 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.transferred = new HashMap<String,Map<String,Long>>(2*_map300.size);
+        struct.emitted = new HashMap<String,Map<String,Long>>(2*_map300.size);
         String _key301;
         Map<String,Long> _val302;
         for (int _i303 = 0; _i303 < _map300.size; ++_i303)
@@ -899,7 +875,31 @@ public class ExecutorStats implements org.apache.thrift.TBase<ExecutorStats, Exe
               _val302.put(_key305, _val306);
             }
           }
-          struct.transferred.put(_key301, _val302);
+          struct.emitted.put(_key301, _val302);
+        }
+      }
+      struct.set_emitted_isSet(true);
+      {
+        org.apache.thrift.protocol.TMap _map308 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
+        struct.transferred = new HashMap<String,Map<String,Long>>(2*_map308.size);
+        String _key309;
+        Map<String,Long> _val310;
+        for (int _i311 = 0; _i311 < _map308.size; ++_i311)
+        {
+          _key309 = iprot.readString();
+          {
+            org.apache.thrift.protocol.TMap _map312 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, iprot.readI32());
+            _val310 = new HashMap<String,Long>(2*_map312.size);
+            String _key313;
+            long _val314;
+            for (int _i315 = 0; _i315 < _map312.size; ++_i315)
+            {
+              _key313 = iprot.readString();
+              _val314 = iprot.readI64();
+              _val310.put(_key313, _val314);
+            }
+          }
+          struct.transferred.put(_key309, _val310);
         }
       }
       struct.set_transferred_isSet(true);
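
The ExecutorStats hunks above decode a nested `Map<String, Map<String, Long>>` (the `emitted` and `transferred` fields): an outer size, then per entry a key string, an inner size, and the inner key/value pairs. A sketch of that nesting in plain Java — again with `java.io` streams instead of Thrift's `TProtocol`, and a hypothetical `StatsCodec` class name:

```java
import java.io.*;
import java.util.*;

public class StatsCodec {
    // Nested map, the shape of ExecutorStats.emitted / .transferred:
    // outer size, then per entry: key string, inner size, inner key/long pairs.
    static void write(DataOutputStream out, Map<String, Map<String, Long>> stats) throws IOException {
        out.writeInt(stats.size());
        for (Map.Entry<String, Map<String, Long>> outer : stats.entrySet()) {
            out.writeUTF(outer.getKey());
            out.writeInt(outer.getValue().size());
            for (Map.Entry<String, Long> inner : outer.getValue().entrySet()) {
                out.writeUTF(inner.getKey());
                out.writeLong(inner.getValue());
            }
        }
    }

    static Map<String, Map<String, Long>> read(DataInputStream in) throws IOException {
        int outerSize = in.readInt();
        Map<String, Map<String, Long>> stats = new HashMap<>(2 * outerSize);
        for (int i = 0; i < outerSize; ++i) {
            String stream = in.readUTF();          // corresponds to _keyNNN above
            int innerSize = in.readInt();
            Map<String, Long> counts = new HashMap<>(2 * innerSize);
            for (int j = 0; j < innerSize; ++j) {
                counts.put(in.readUTF(), in.readLong());
            }
            stats.put(stream, counts);
        }
        return stats;
    }
}
```

Note the generated TupleScheme `read` for ExecutorStats decodes `emitted` then `transferred` back to back, which is why the diff above is mostly the two halves of that method swapping their renumbered temporaries.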

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/LSApprovedWorkers.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/LSApprovedWorkers.java b/storm-core/src/jvm/backtype/storm/generated/LSApprovedWorkers.java
index bf801bc..20f0d10 100644
--- a/storm-core/src/jvm/backtype/storm/generated/LSApprovedWorkers.java
+++ b/storm-core/src/jvm/backtype/storm/generated/LSApprovedWorkers.java
@@ -365,15 +365,15 @@ public class LSApprovedWorkers implements org.apache.thrift.TBase<LSApprovedWork
           case 1: // APPROVED_WORKERS
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map628 = iprot.readMapBegin();
-                struct.approved_workers = new HashMap<String,Integer>(2*_map628.size);
-                String _key629;
-                int _val630;
-                for (int _i631 = 0; _i631 < _map628.size; ++_i631)
+                org.apache.thrift.protocol.TMap _map636 = iprot.readMapBegin();
+                struct.approved_workers = new HashMap<String,Integer>(2*_map636.size);
+                String _key637;
+                int _val638;
+                for (int _i639 = 0; _i639 < _map636.size; ++_i639)
                 {
-                  _key629 = iprot.readString();
-                  _val630 = iprot.readI32();
-                  struct.approved_workers.put(_key629, _val630);
+                  _key637 = iprot.readString();
+                  _val638 = iprot.readI32();
+                  struct.approved_workers.put(_key637, _val638);
                 }
                 iprot.readMapEnd();
               }
@@ -399,10 +399,10 @@ public class LSApprovedWorkers implements org.apache.thrift.TBase<LSApprovedWork
         oprot.writeFieldBegin(APPROVED_WORKERS_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I32, struct.approved_workers.size()));
-          for (Map.Entry<String, Integer> _iter632 : struct.approved_workers.entrySet())
+          for (Map.Entry<String, Integer> _iter640 : struct.approved_workers.entrySet())
           {
-            oprot.writeString(_iter632.getKey());
-            oprot.writeI32(_iter632.getValue());
+            oprot.writeString(_iter640.getKey());
+            oprot.writeI32(_iter640.getValue());
           }
           oprot.writeMapEnd();
         }
@@ -427,10 +427,10 @@ public class LSApprovedWorkers implements org.apache.thrift.TBase<LSApprovedWork
       TTupleProtocol oprot = (TTupleProtocol) prot;
       {
         oprot.writeI32(struct.approved_workers.size());
-        for (Map.Entry<String, Integer> _iter633 : struct.approved_workers.entrySet())
+        for (Map.Entry<String, Integer> _iter641 : struct.approved_workers.entrySet())
         {
-          oprot.writeString(_iter633.getKey());
-          oprot.writeI32(_iter633.getValue());
+          oprot.writeString(_iter641.getKey());
+          oprot.writeI32(_iter641.getValue());
         }
       }
     }
@@ -439,15 +439,15 @@ public class LSApprovedWorkers implements org.apache.thrift.TBase<LSApprovedWork
     public void read(org.apache.thrift.protocol.TProtocol prot, LSApprovedWorkers struct) throws org.apache.thrift.TException {
       TTupleProtocol iprot = (TTupleProtocol) prot;
       {
-        org.apache.thrift.protocol.TMap _map634 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I32, iprot.readI32());
-        struct.approved_workers = new HashMap<String,Integer>(2*_map634.size);
-        String _key635;
-        int _val636;
-        for (int _i637 = 0; _i637 < _map634.size; ++_i637)
+        org.apache.thrift.protocol.TMap _map642 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I32, iprot.readI32());
+        struct.approved_workers = new HashMap<String,Integer>(2*_map642.size);
+        String _key643;
+        int _val644;
+        for (int _i645 = 0; _i645 < _map642.size; ++_i645)
         {
-          _key635 = iprot.readString();
-          _val636 = iprot.readI32();
-          struct.approved_workers.put(_key635, _val636);
+          _key643 = iprot.readString();
+          _val644 = iprot.readI32();
+          struct.approved_workers.put(_key643, _val644);
         }
       }
       struct.set_approved_workers_isSet(true);

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/LSSupervisorAssignments.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/LSSupervisorAssignments.java b/storm-core/src/jvm/backtype/storm/generated/LSSupervisorAssignments.java
index 1ee0e6c..de4c803 100644
--- a/storm-core/src/jvm/backtype/storm/generated/LSSupervisorAssignments.java
+++ b/storm-core/src/jvm/backtype/storm/generated/LSSupervisorAssignments.java
@@ -376,16 +376,16 @@ public class LSSupervisorAssignments implements org.apache.thrift.TBase<LSSuperv
           case 1: // ASSIGNMENTS
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map638 = iprot.readMapBegin();
-                struct.assignments = new HashMap<Integer,LocalAssignment>(2*_map638.size);
-                int _key639;
-                LocalAssignment _val640;
-                for (int _i641 = 0; _i641 < _map638.size; ++_i641)
+                org.apache.thrift.protocol.TMap _map646 = iprot.readMapBegin();
+                struct.assignments = new HashMap<Integer,LocalAssignment>(2*_map646.size);
+                int _key647;
+                LocalAssignment _val648;
+                for (int _i649 = 0; _i649 < _map646.size; ++_i649)
                 {
-                  _key639 = iprot.readI32();
-                  _val640 = new LocalAssignment();
-                  _val640.read(iprot);
-                  struct.assignments.put(_key639, _val640);
+                  _key647 = iprot.readI32();
+                  _val648 = new LocalAssignment();
+                  _val648.read(iprot);
+                  struct.assignments.put(_key647, _val648);
                 }
                 iprot.readMapEnd();
               }
@@ -411,10 +411,10 @@ public class LSSupervisorAssignments implements org.apache.thrift.TBase<LSSuperv
         oprot.writeFieldBegin(ASSIGNMENTS_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.I32, org.apache.thrift.protocol.TType.STRUCT, struct.assignments.size()));
-          for (Map.Entry<Integer, LocalAssignment> _iter642 : struct.assignments.entrySet())
+          for (Map.Entry<Integer, LocalAssignment> _iter650 : struct.assignments.entrySet())
           {
-            oprot.writeI32(_iter642.getKey());
-            _iter642.getValue().write(oprot);
+            oprot.writeI32(_iter650.getKey());
+            _iter650.getValue().write(oprot);
           }
           oprot.writeMapEnd();
         }
@@ -439,10 +439,10 @@ public class LSSupervisorAssignments implements org.apache.thrift.TBase<LSSuperv
       TTupleProtocol oprot = (TTupleProtocol) prot;
       {
         oprot.writeI32(struct.assignments.size());
-        for (Map.Entry<Integer, LocalAssignment> _iter643 : struct.assignments.entrySet())
+        for (Map.Entry<Integer, LocalAssignment> _iter651 : struct.assignments.entrySet())
         {
-          oprot.writeI32(_iter643.getKey());
-          _iter643.getValue().write(oprot);
+          oprot.writeI32(_iter651.getKey());
+          _iter651.getValue().write(oprot);
         }
       }
     }
@@ -451,16 +451,16 @@ public class LSSupervisorAssignments implements org.apache.thrift.TBase<LSSuperv
     public void read(org.apache.thrift.protocol.TProtocol prot, LSSupervisorAssignments struct) throws org.apache.thrift.TException {
       TTupleProtocol iprot = (TTupleProtocol) prot;
       {
-        org.apache.thrift.protocol.TMap _map644 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.I32, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.assignments = new HashMap<Integer,LocalAssignment>(2*_map644.size);
-        int _key645;
-        LocalAssignment _val646;
-        for (int _i647 = 0; _i647 < _map644.size; ++_i647)
+        org.apache.thrift.protocol.TMap _map652 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.I32, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.assignments = new HashMap<Integer,LocalAssignment>(2*_map652.size);
+        int _key653;
+        LocalAssignment _val654;
+        for (int _i655 = 0; _i655 < _map652.size; ++_i655)
         {
-          _key645 = iprot.readI32();
-          _val646 = new LocalAssignment();
-          _val646.read(iprot);
-          struct.assignments.put(_key645, _val646);
+          _key653 = iprot.readI32();
+          _val654 = new LocalAssignment();
+          _val654.read(iprot);
+          struct.assignments.put(_key653, _val654);
         }
       }
       struct.set_assignments_isSet(true);

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/LSTopoHistory.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/LSTopoHistory.java b/storm-core/src/jvm/backtype/storm/generated/LSTopoHistory.java
index cb890b4..79fea1e 100644
--- a/storm-core/src/jvm/backtype/storm/generated/LSTopoHistory.java
+++ b/storm-core/src/jvm/backtype/storm/generated/LSTopoHistory.java
@@ -656,13 +656,13 @@ public class LSTopoHistory implements org.apache.thrift.TBase<LSTopoHistory, LST
           case 3: // USERS
             if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
               {
-                org.apache.thrift.protocol.TList _list656 = iprot.readListBegin();
-                struct.users = new ArrayList<String>(_list656.size);
-                String _elem657;
-                for (int _i658 = 0; _i658 < _list656.size; ++_i658)
+                org.apache.thrift.protocol.TList _list664 = iprot.readListBegin();
+                struct.users = new ArrayList<String>(_list664.size);
+                String _elem665;
+                for (int _i666 = 0; _i666 < _list664.size; ++_i666)
                 {
-                  _elem657 = iprot.readString();
-                  struct.users.add(_elem657);
+                  _elem665 = iprot.readString();
+                  struct.users.add(_elem665);
                 }
                 iprot.readListEnd();
               }
@@ -674,13 +674,13 @@ public class LSTopoHistory implements org.apache.thrift.TBase<LSTopoHistory, LST
           case 4: // GROUPS
             if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
               {
-                org.apache.thrift.protocol.TList _list659 = iprot.readListBegin();
-                struct.groups = new ArrayList<String>(_list659.size);
-                String _elem660;
-                for (int _i661 = 0; _i661 < _list659.size; ++_i661)
+                org.apache.thrift.protocol.TList _list667 = iprot.readListBegin();
+                struct.groups = new ArrayList<String>(_list667.size);
+                String _elem668;
+                for (int _i669 = 0; _i669 < _list667.size; ++_i669)
                 {
-                  _elem660 = iprot.readString();
-                  struct.groups.add(_elem660);
+                  _elem668 = iprot.readString();
+                  struct.groups.add(_elem668);
                 }
                 iprot.readListEnd();
               }
@@ -714,9 +714,9 @@ public class LSTopoHistory implements org.apache.thrift.TBase<LSTopoHistory, LST
         oprot.writeFieldBegin(USERS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, struct.users.size()));
-          for (String _iter662 : struct.users)
+          for (String _iter670 : struct.users)
           {
-            oprot.writeString(_iter662);
+            oprot.writeString(_iter670);
           }
           oprot.writeListEnd();
         }
@@ -726,9 +726,9 @@ public class LSTopoHistory implements org.apache.thrift.TBase<LSTopoHistory, LST
         oprot.writeFieldBegin(GROUPS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, struct.groups.size()));
-          for (String _iter663 : struct.groups)
+          for (String _iter671 : struct.groups)
           {
-            oprot.writeString(_iter663);
+            oprot.writeString(_iter671);
           }
           oprot.writeListEnd();
         }
@@ -755,16 +755,16 @@ public class LSTopoHistory implements org.apache.thrift.TBase<LSTopoHistory, LST
       oprot.writeI64(struct.time_stamp);
       {
         oprot.writeI32(struct.users.size());
-        for (String _iter664 : struct.users)
+        for (String _iter672 : struct.users)
         {
-          oprot.writeString(_iter664);
+          oprot.writeString(_iter672);
         }
       }
       {
         oprot.writeI32(struct.groups.size());
-        for (String _iter665 : struct.groups)
+        for (String _iter673 : struct.groups)
         {
-          oprot.writeString(_iter665);
+          oprot.writeString(_iter673);
         }
       }
     }
@@ -777,24 +777,24 @@ public class LSTopoHistory implements org.apache.thrift.TBase<LSTopoHistory, LST
       struct.time_stamp = iprot.readI64();
       struct.set_time_stamp_isSet(true);
       {
-        org.apache.thrift.protocol.TList _list666 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-        struct.users = new ArrayList<String>(_list666.size);
-        String _elem667;
-        for (int _i668 = 0; _i668 < _list666.size; ++_i668)
+        org.apache.thrift.protocol.TList _list674 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+        struct.users = new ArrayList<String>(_list674.size);
+        String _elem675;
+        for (int _i676 = 0; _i676 < _list674.size; ++_i676)
         {
-          _elem667 = iprot.readString();
-          struct.users.add(_elem667);
+          _elem675 = iprot.readString();
+          struct.users.add(_elem675);
         }
       }
       struct.set_users_isSet(true);
       {
-        org.apache.thrift.protocol.TList _list669 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-        struct.groups = new ArrayList<String>(_list669.size);
-        String _elem670;
-        for (int _i671 = 0; _i671 < _list669.size; ++_i671)
+        org.apache.thrift.protocol.TList _list677 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+        struct.groups = new ArrayList<String>(_list677.size);
+        String _elem678;
+        for (int _i679 = 0; _i679 < _list677.size; ++_i679)
         {
-          _elem670 = iprot.readString();
-          struct.groups.add(_elem670);
+          _elem678 = iprot.readString();
+          struct.groups.add(_elem678);
         }
       }
       struct.set_groups_isSet(true);

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/LSTopoHistoryList.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/LSTopoHistoryList.java b/storm-core/src/jvm/backtype/storm/generated/LSTopoHistoryList.java
index ffb1d9e..962ece6 100644
--- a/storm-core/src/jvm/backtype/storm/generated/LSTopoHistoryList.java
+++ b/storm-core/src/jvm/backtype/storm/generated/LSTopoHistoryList.java
@@ -371,14 +371,14 @@ public class LSTopoHistoryList implements org.apache.thrift.TBase<LSTopoHistoryL
           case 1: // TOPO_HISTORY
             if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
               {
-                org.apache.thrift.protocol.TList _list672 = iprot.readListBegin();
-                struct.topo_history = new ArrayList<LSTopoHistory>(_list672.size);
-                LSTopoHistory _elem673;
-                for (int _i674 = 0; _i674 < _list672.size; ++_i674)
+                org.apache.thrift.protocol.TList _list680 = iprot.readListBegin();
+                struct.topo_history = new ArrayList<LSTopoHistory>(_list680.size);
+                LSTopoHistory _elem681;
+                for (int _i682 = 0; _i682 < _list680.size; ++_i682)
                 {
-                  _elem673 = new LSTopoHistory();
-                  _elem673.read(iprot);
-                  struct.topo_history.add(_elem673);
+                  _elem681 = new LSTopoHistory();
+                  _elem681.read(iprot);
+                  struct.topo_history.add(_elem681);
                 }
                 iprot.readListEnd();
               }
@@ -404,9 +404,9 @@ public class LSTopoHistoryList implements org.apache.thrift.TBase<LSTopoHistoryL
         oprot.writeFieldBegin(TOPO_HISTORY_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.topo_history.size()));
-          for (LSTopoHistory _iter675 : struct.topo_history)
+          for (LSTopoHistory _iter683 : struct.topo_history)
           {
-            _iter675.write(oprot);
+            _iter683.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -431,9 +431,9 @@ public class LSTopoHistoryList implements org.apache.thrift.TBase<LSTopoHistoryL
       TTupleProtocol oprot = (TTupleProtocol) prot;
       {
         oprot.writeI32(struct.topo_history.size());
-        for (LSTopoHistory _iter676 : struct.topo_history)
+        for (LSTopoHistory _iter684 : struct.topo_history)
         {
-          _iter676.write(oprot);
+          _iter684.write(oprot);
         }
       }
     }
@@ -442,14 +442,14 @@ public class LSTopoHistoryList implements org.apache.thrift.TBase<LSTopoHistoryL
     public void read(org.apache.thrift.protocol.TProtocol prot, LSTopoHistoryList struct) throws org.apache.thrift.TException {
       TTupleProtocol iprot = (TTupleProtocol) prot;
       {
-        org.apache.thrift.protocol.TList _list677 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.topo_history = new ArrayList<LSTopoHistory>(_list677.size);
-        LSTopoHistory _elem678;
-        for (int _i679 = 0; _i679 < _list677.size; ++_i679)
+        org.apache.thrift.protocol.TList _list685 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.topo_history = new ArrayList<LSTopoHistory>(_list685.size);
+        LSTopoHistory _elem686;
+        for (int _i687 = 0; _i687 < _list685.size; ++_i687)
         {
-          _elem678 = new LSTopoHistory();
-          _elem678.read(iprot);
-          struct.topo_history.add(_elem678);
+          _elem686 = new LSTopoHistory();
+          _elem686.read(iprot);
+          struct.topo_history.add(_elem686);
         }
       }
       struct.set_topo_history_isSet(true);

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/LSWorkerHeartbeat.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/LSWorkerHeartbeat.java b/storm-core/src/jvm/backtype/storm/generated/LSWorkerHeartbeat.java
index 0f123f1..d6e7c36 100644
--- a/storm-core/src/jvm/backtype/storm/generated/LSWorkerHeartbeat.java
+++ b/storm-core/src/jvm/backtype/storm/generated/LSWorkerHeartbeat.java
@@ -638,14 +638,14 @@ public class LSWorkerHeartbeat implements org.apache.thrift.TBase<LSWorkerHeartb
           case 3: // EXECUTORS
             if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
               {
-                org.apache.thrift.protocol.TList _list648 = iprot.readListBegin();
-                struct.executors = new ArrayList<ExecutorInfo>(_list648.size);
-                ExecutorInfo _elem649;
-                for (int _i650 = 0; _i650 < _list648.size; ++_i650)
+                org.apache.thrift.protocol.TList _list656 = iprot.readListBegin();
+                struct.executors = new ArrayList<ExecutorInfo>(_list656.size);
+                ExecutorInfo _elem657;
+                for (int _i658 = 0; _i658 < _list656.size; ++_i658)
                 {
-                  _elem649 = new ExecutorInfo();
-                  _elem649.read(iprot);
-                  struct.executors.add(_elem649);
+                  _elem657 = new ExecutorInfo();
+                  _elem657.read(iprot);
+                  struct.executors.add(_elem657);
                 }
                 iprot.readListEnd();
               }
@@ -687,9 +687,9 @@ public class LSWorkerHeartbeat implements org.apache.thrift.TBase<LSWorkerHeartb
         oprot.writeFieldBegin(EXECUTORS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.executors.size()));
-          for (ExecutorInfo _iter651 : struct.executors)
+          for (ExecutorInfo _iter659 : struct.executors)
           {
-            _iter651.write(oprot);
+            _iter659.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -719,9 +719,9 @@ public class LSWorkerHeartbeat implements org.apache.thrift.TBase<LSWorkerHeartb
       oprot.writeString(struct.topology_id);
       {
         oprot.writeI32(struct.executors.size());
-        for (ExecutorInfo _iter652 : struct.executors)
+        for (ExecutorInfo _iter660 : struct.executors)
         {
-          _iter652.write(oprot);
+          _iter660.write(oprot);
         }
       }
       oprot.writeI32(struct.port);
@@ -735,14 +735,14 @@ public class LSWorkerHeartbeat implements org.apache.thrift.TBase<LSWorkerHeartb
       struct.topology_id = iprot.readString();
       struct.set_topology_id_isSet(true);
       {
-        org.apache.thrift.protocol.TList _list653 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.executors = new ArrayList<ExecutorInfo>(_list653.size);
-        ExecutorInfo _elem654;
-        for (int _i655 = 0; _i655 < _list653.size; ++_i655)
+        org.apache.thrift.protocol.TList _list661 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.executors = new ArrayList<ExecutorInfo>(_list661.size);
+        ExecutorInfo _elem662;
+        for (int _i663 = 0; _i663 < _list661.size; ++_i663)
         {
-          _elem654 = new ExecutorInfo();
-          _elem654.read(iprot);
-          struct.executors.add(_elem654);
+          _elem662 = new ExecutorInfo();
+          _elem662.read(iprot);
+          struct.executors.add(_elem662);
         }
       }
       struct.set_executors_isSet(true);

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/LocalAssignment.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/LocalAssignment.java b/storm-core/src/jvm/backtype/storm/generated/LocalAssignment.java
index abfb4a4..a36e654 100644
--- a/storm-core/src/jvm/backtype/storm/generated/LocalAssignment.java
+++ b/storm-core/src/jvm/backtype/storm/generated/LocalAssignment.java
@@ -549,14 +549,14 @@ public class LocalAssignment implements org.apache.thrift.TBase<LocalAssignment,
           case 2: // EXECUTORS
             if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
               {
-                org.apache.thrift.protocol.TList _list620 = iprot.readListBegin();
-                struct.executors = new ArrayList<ExecutorInfo>(_list620.size);
-                ExecutorInfo _elem621;
-                for (int _i622 = 0; _i622 < _list620.size; ++_i622)
+                org.apache.thrift.protocol.TList _list628 = iprot.readListBegin();
+                struct.executors = new ArrayList<ExecutorInfo>(_list628.size);
+                ExecutorInfo _elem629;
+                for (int _i630 = 0; _i630 < _list628.size; ++_i630)
                 {
-                  _elem621 = new ExecutorInfo();
-                  _elem621.read(iprot);
-                  struct.executors.add(_elem621);
+                  _elem629 = new ExecutorInfo();
+                  _elem629.read(iprot);
+                  struct.executors.add(_elem629);
                 }
                 iprot.readListEnd();
               }
@@ -596,9 +596,9 @@ public class LocalAssignment implements org.apache.thrift.TBase<LocalAssignment,
         oprot.writeFieldBegin(EXECUTORS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.executors.size()));
-          for (ExecutorInfo _iter623 : struct.executors)
+          for (ExecutorInfo _iter631 : struct.executors)
           {
-            _iter623.write(oprot);
+            _iter631.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -631,9 +631,9 @@ public class LocalAssignment implements org.apache.thrift.TBase<LocalAssignment,
       oprot.writeString(struct.topology_id);
       {
         oprot.writeI32(struct.executors.size());
-        for (ExecutorInfo _iter624 : struct.executors)
+        for (ExecutorInfo _iter632 : struct.executors)
         {
-          _iter624.write(oprot);
+          _iter632.write(oprot);
         }
       }
       BitSet optionals = new BitSet();
@@ -652,14 +652,14 @@ public class LocalAssignment implements org.apache.thrift.TBase<LocalAssignment,
       struct.topology_id = iprot.readString();
       struct.set_topology_id_isSet(true);
       {
-        org.apache.thrift.protocol.TList _list625 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.executors = new ArrayList<ExecutorInfo>(_list625.size);
-        ExecutorInfo _elem626;
-        for (int _i627 = 0; _i627 < _list625.size; ++_i627)
+        org.apache.thrift.protocol.TList _list633 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.executors = new ArrayList<ExecutorInfo>(_list633.size);
+        ExecutorInfo _elem634;
+        for (int _i635 = 0; _i635 < _list633.size; ++_i635)
         {
-          _elem626 = new ExecutorInfo();
-          _elem626.read(iprot);
-          struct.executors.add(_elem626);
+          _elem634 = new ExecutorInfo();
+          _elem634.read(iprot);
+          struct.executors.add(_elem634);
         }
       }
       struct.set_executors_isSet(true);

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/LocalStateData.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/LocalStateData.java b/storm-core/src/jvm/backtype/storm/generated/LocalStateData.java
index 384cab5..7340926 100644
--- a/storm-core/src/jvm/backtype/storm/generated/LocalStateData.java
+++ b/storm-core/src/jvm/backtype/storm/generated/LocalStateData.java
@@ -376,16 +376,16 @@ public class LocalStateData implements org.apache.thrift.TBase<LocalStateData, L
           case 1: // SERIALIZED_PARTS
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map610 = iprot.readMapBegin();
-                struct.serialized_parts = new HashMap<String,ThriftSerializedObject>(2*_map610.size);
-                String _key611;
-                ThriftSerializedObject _val612;
-                for (int _i613 = 0; _i613 < _map610.size; ++_i613)
+                org.apache.thrift.protocol.TMap _map618 = iprot.readMapBegin();
+                struct.serialized_parts = new HashMap<String,ThriftSerializedObject>(2*_map618.size);
+                String _key619;
+                ThriftSerializedObject _val620;
+                for (int _i621 = 0; _i621 < _map618.size; ++_i621)
                 {
-                  _key611 = iprot.readString();
-                  _val612 = new ThriftSerializedObject();
-                  _val612.read(iprot);
-                  struct.serialized_parts.put(_key611, _val612);
+                  _key619 = iprot.readString();
+                  _val620 = new ThriftSerializedObject();
+                  _val620.read(iprot);
+                  struct.serialized_parts.put(_key619, _val620);
                 }
                 iprot.readMapEnd();
               }
@@ -411,10 +411,10 @@ public class LocalStateData implements org.apache.thrift.TBase<LocalStateData, L
         oprot.writeFieldBegin(SERIALIZED_PARTS_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, struct.serialized_parts.size()));
-          for (Map.Entry<String, ThriftSerializedObject> _iter614 : struct.serialized_parts.entrySet())
+          for (Map.Entry<String, ThriftSerializedObject> _iter622 : struct.serialized_parts.entrySet())
           {
-            oprot.writeString(_iter614.getKey());
-            _iter614.getValue().write(oprot);
+            oprot.writeString(_iter622.getKey());
+            _iter622.getValue().write(oprot);
           }
           oprot.writeMapEnd();
         }
@@ -439,10 +439,10 @@ public class LocalStateData implements org.apache.thrift.TBase<LocalStateData, L
       TTupleProtocol oprot = (TTupleProtocol) prot;
       {
         oprot.writeI32(struct.serialized_parts.size());
-        for (Map.Entry<String, ThriftSerializedObject> _iter615 : struct.serialized_parts.entrySet())
+        for (Map.Entry<String, ThriftSerializedObject> _iter623 : struct.serialized_parts.entrySet())
         {
-          oprot.writeString(_iter615.getKey());
-          _iter615.getValue().write(oprot);
+          oprot.writeString(_iter623.getKey());
+          _iter623.getValue().write(oprot);
         }
       }
     }
@@ -451,16 +451,16 @@ public class LocalStateData implements org.apache.thrift.TBase<LocalStateData, L
     public void read(org.apache.thrift.protocol.TProtocol prot, LocalStateData struct) throws org.apache.thrift.TException {
       TTupleProtocol iprot = (TTupleProtocol) prot;
       {
-        org.apache.thrift.protocol.TMap _map616 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.serialized_parts = new HashMap<String,ThriftSerializedObject>(2*_map616.size);
-        String _key617;
-        ThriftSerializedObject _val618;
-        for (int _i619 = 0; _i619 < _map616.size; ++_i619)
+        org.apache.thrift.protocol.TMap _map624 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.serialized_parts = new HashMap<String,ThriftSerializedObject>(2*_map624.size);
+        String _key625;
+        ThriftSerializedObject _val626;
+        for (int _i627 = 0; _i627 < _map624.size; ++_i627)
         {
-          _key617 = iprot.readString();
-          _val618 = new ThriftSerializedObject();
-          _val618.read(iprot);
-          struct.serialized_parts.put(_key617, _val618);
+          _key625 = iprot.readString();
+          _val626 = new ThriftSerializedObject();
+          _val626.read(iprot);
+          struct.serialized_parts.put(_key625, _val626);
         }
       }
       struct.set_serialized_parts_isSet(true);

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/LogConfig.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/LogConfig.java b/storm-core/src/jvm/backtype/storm/generated/LogConfig.java
index b74b17f..53bc326 100644
--- a/storm-core/src/jvm/backtype/storm/generated/LogConfig.java
+++ b/storm-core/src/jvm/backtype/storm/generated/LogConfig.java
@@ -368,16 +368,16 @@ public class LogConfig implements org.apache.thrift.TBase<LogConfig, LogConfig._
           case 2: // NAMED_LOGGER_LEVEL
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map680 = iprot.readMapBegin();
-                struct.named_logger_level = new HashMap<String,LogLevel>(2*_map680.size);
-                String _key681;
-                LogLevel _val682;
-                for (int _i683 = 0; _i683 < _map680.size; ++_i683)
+                org.apache.thrift.protocol.TMap _map688 = iprot.readMapBegin();
+                struct.named_logger_level = new HashMap<String,LogLevel>(2*_map688.size);
+                String _key689;
+                LogLevel _val690;
+                for (int _i691 = 0; _i691 < _map688.size; ++_i691)
                 {
-                  _key681 = iprot.readString();
-                  _val682 = new LogLevel();
-                  _val682.read(iprot);
-                  struct.named_logger_level.put(_key681, _val682);
+                  _key689 = iprot.readString();
+                  _val690 = new LogLevel();
+                  _val690.read(iprot);
+                  struct.named_logger_level.put(_key689, _val690);
                 }
                 iprot.readMapEnd();
               }
@@ -404,10 +404,10 @@ public class LogConfig implements org.apache.thrift.TBase<LogConfig, LogConfig._
           oprot.writeFieldBegin(NAMED_LOGGER_LEVEL_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, struct.named_logger_level.size()));
-            for (Map.Entry<String, LogLevel> _iter684 : struct.named_logger_level.entrySet())
+            for (Map.Entry<String, LogLevel> _iter692 : struct.named_logger_level.entrySet())
             {
-              oprot.writeString(_iter684.getKey());
-              _iter684.getValue().write(oprot);
+              oprot.writeString(_iter692.getKey());
+              _iter692.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -439,10 +439,10 @@ public class LogConfig implements org.apache.thrift.TBase<LogConfig, LogConfig._
       if (struct.is_set_named_logger_level()) {
         {
           oprot.writeI32(struct.named_logger_level.size());
-          for (Map.Entry<String, LogLevel> _iter685 : struct.named_logger_level.entrySet())
+          for (Map.Entry<String, LogLevel> _iter693 : struct.named_logger_level.entrySet())
           {
-            oprot.writeString(_iter685.getKey());
-            _iter685.getValue().write(oprot);
+            oprot.writeString(_iter693.getKey());
+            _iter693.getValue().write(oprot);
           }
         }
       }
@@ -454,16 +454,16 @@ public class LogConfig implements org.apache.thrift.TBase<LogConfig, LogConfig._
       BitSet incoming = iprot.readBitSet(1);
       if (incoming.get(0)) {
         {
-          org.apache.thrift.protocol.TMap _map686 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.named_logger_level = new HashMap<String,LogLevel>(2*_map686.size);
-          String _key687;
-          LogLevel _val688;
-          for (int _i689 = 0; _i689 < _map686.size; ++_i689)
+          org.apache.thrift.protocol.TMap _map694 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.named_logger_level = new HashMap<String,LogLevel>(2*_map694.size);
+          String _key695;
+          LogLevel _val696;
+          for (int _i697 = 0; _i697 < _map694.size; ++_i697)
           {
-            _key687 = iprot.readString();
-            _val688 = new LogLevel();
-            _val688.read(iprot);
-            struct.named_logger_level.put(_key687, _val688);
+            _key695 = iprot.readString();
+            _val696 = new LogLevel();
+            _val696.read(iprot);
+            struct.named_logger_level.put(_key695, _val696);
           }
         }
         struct.set_named_logger_level_isSet(true);

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/Nimbus.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/Nimbus.java b/storm-core/src/jvm/backtype/storm/generated/Nimbus.java
index 360dba5..98d2d1c 100644
--- a/storm-core/src/jvm/backtype/storm/generated/Nimbus.java
+++ b/storm-core/src/jvm/backtype/storm/generated/Nimbus.java
@@ -15611,14 +15611,14 @@ public class Nimbus {
             case 0: // SUCCESS
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list714 = iprot.readListBegin();
-                  struct.success = new ArrayList<ProfileRequest>(_list714.size);
-                  ProfileRequest _elem715;
-                  for (int _i716 = 0; _i716 < _list714.size; ++_i716)
+                  org.apache.thrift.protocol.TList _list722 = iprot.readListBegin();
+                  struct.success = new ArrayList<ProfileRequest>(_list722.size);
+                  ProfileRequest _elem723;
+                  for (int _i724 = 0; _i724 < _list722.size; ++_i724)
                   {
-                    _elem715 = new ProfileRequest();
-                    _elem715.read(iprot);
-                    struct.success.add(_elem715);
+                    _elem723 = new ProfileRequest();
+                    _elem723.read(iprot);
+                    struct.success.add(_elem723);
                   }
                   iprot.readListEnd();
                 }
@@ -15644,9 +15644,9 @@ public class Nimbus {
           oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.success.size()));
-            for (ProfileRequest _iter717 : struct.success)
+            for (ProfileRequest _iter725 : struct.success)
             {
-              _iter717.write(oprot);
+              _iter725.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -15677,9 +15677,9 @@ public class Nimbus {
         if (struct.is_set_success()) {
           {
             oprot.writeI32(struct.success.size());
-            for (ProfileRequest _iter718 : struct.success)
+            for (ProfileRequest _iter726 : struct.success)
             {
-              _iter718.write(oprot);
+              _iter726.write(oprot);
             }
           }
         }
@@ -15691,14 +15691,14 @@ public class Nimbus {
         BitSet incoming = iprot.readBitSet(1);
         if (incoming.get(0)) {
           {
-            org.apache.thrift.protocol.TList _list719 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-            struct.success = new ArrayList<ProfileRequest>(_list719.size);
-            ProfileRequest _elem720;
-            for (int _i721 = 0; _i721 < _list719.size; ++_i721)
+            org.apache.thrift.protocol.TList _list727 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+            struct.success = new ArrayList<ProfileRequest>(_list727.size);
+            ProfileRequest _elem728;
+            for (int _i729 = 0; _i729 < _list727.size; ++_i729)
             {
-              _elem720 = new ProfileRequest();
-              _elem720.read(iprot);
-              struct.success.add(_elem720);
+              _elem728 = new ProfileRequest();
+              _elem728.read(iprot);
+              struct.success.add(_elem728);
             }
           }
           struct.set_success_isSet(true);

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/NodeInfo.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/NodeInfo.java b/storm-core/src/jvm/backtype/storm/generated/NodeInfo.java
index a99ecb3..5ef4b5b 100644
--- a/storm-core/src/jvm/backtype/storm/generated/NodeInfo.java
+++ b/storm-core/src/jvm/backtype/storm/generated/NodeInfo.java
@@ -461,13 +461,13 @@ public class NodeInfo implements org.apache.thrift.TBase<NodeInfo, NodeInfo._Fie
           case 2: // PORT
             if (schemeField.type == org.apache.thrift.protocol.TType.SET) {
               {
-                org.apache.thrift.protocol.TSet _set516 = iprot.readSetBegin();
-                struct.port = new HashSet<Long>(2*_set516.size);
-                long _elem517;
-                for (int _i518 = 0; _i518 < _set516.size; ++_i518)
+                org.apache.thrift.protocol.TSet _set524 = iprot.readSetBegin();
+                struct.port = new HashSet<Long>(2*_set524.size);
+                long _elem525;
+                for (int _i526 = 0; _i526 < _set524.size; ++_i526)
                 {
-                  _elem517 = iprot.readI64();
-                  struct.port.add(_elem517);
+                  _elem525 = iprot.readI64();
+                  struct.port.add(_elem525);
                 }
                 iprot.readSetEnd();
               }
@@ -498,9 +498,9 @@ public class NodeInfo implements org.apache.thrift.TBase<NodeInfo, NodeInfo._Fie
         oprot.writeFieldBegin(PORT_FIELD_DESC);
         {
           oprot.writeSetBegin(new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.I64, struct.port.size()));
-          for (long _iter519 : struct.port)
+          for (long _iter527 : struct.port)
           {
-            oprot.writeI64(_iter519);
+            oprot.writeI64(_iter527);
           }
           oprot.writeSetEnd();
         }
@@ -526,9 +526,9 @@ public class NodeInfo implements org.apache.thrift.TBase<NodeInfo, NodeInfo._Fie
       oprot.writeString(struct.node);
       {
         oprot.writeI32(struct.port.size());
-        for (long _iter520 : struct.port)
+        for (long _iter528 : struct.port)
         {
-          oprot.writeI64(_iter520);
+          oprot.writeI64(_iter528);
         }
       }
     }
@@ -539,13 +539,13 @@ public class NodeInfo implements org.apache.thrift.TBase<NodeInfo, NodeInfo._Fie
       struct.node = iprot.readString();
       struct.set_node_isSet(true);
       {
-        org.apache.thrift.protocol.TSet _set521 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.I64, iprot.readI32());
-        struct.port = new HashSet<Long>(2*_set521.size);
-        long _elem522;
-        for (int _i523 = 0; _i523 < _set521.size; ++_i523)
+        org.apache.thrift.protocol.TSet _set529 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.I64, iprot.readI32());
+        struct.port = new HashSet<Long>(2*_set529.size);
+        long _elem530;
+        for (int _i531 = 0; _i531 < _set529.size; ++_i531)
         {
-          _elem522 = iprot.readI64();
-          struct.port.add(_elem522);
+          _elem530 = iprot.readI64();
+          struct.port.add(_elem530);
         }
       }
       struct.set_port_isSet(true);

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/RebalanceOptions.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/RebalanceOptions.java b/storm-core/src/jvm/backtype/storm/generated/RebalanceOptions.java
index 03c36da..49a5631 100644
--- a/storm-core/src/jvm/backtype/storm/generated/RebalanceOptions.java
+++ b/storm-core/src/jvm/backtype/storm/generated/RebalanceOptions.java
@@ -529,15 +529,15 @@ public class RebalanceOptions implements org.apache.thrift.TBase<RebalanceOption
           case 3: // NUM_EXECUTORS
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map460 = iprot.readMapBegin();
-                struct.num_executors = new HashMap<String,Integer>(2*_map460.size);
-                String _key461;
-                int _val462;
-                for (int _i463 = 0; _i463 < _map460.size; ++_i463)
+                org.apache.thrift.protocol.TMap _map468 = iprot.readMapBegin();
+                struct.num_executors = new HashMap<String,Integer>(2*_map468.size);
+                String _key469;
+                int _val470;
+                for (int _i471 = 0; _i471 < _map468.size; ++_i471)
                 {
-                  _key461 = iprot.readString();
-                  _val462 = iprot.readI32();
-                  struct.num_executors.put(_key461, _val462);
+                  _key469 = iprot.readString();
+                  _val470 = iprot.readI32();
+                  struct.num_executors.put(_key469, _val470);
                 }
                 iprot.readMapEnd();
               }
@@ -574,10 +574,10 @@ public class RebalanceOptions implements org.apache.thrift.TBase<RebalanceOption
           oprot.writeFieldBegin(NUM_EXECUTORS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I32, struct.num_executors.size()));
-            for (Map.Entry<String, Integer> _iter464 : struct.num_executors.entrySet())
+            for (Map.Entry<String, Integer> _iter472 : struct.num_executors.entrySet())
             {
-              oprot.writeString(_iter464.getKey());
-              oprot.writeI32(_iter464.getValue());
+              oprot.writeString(_iter472.getKey());
+              oprot.writeI32(_iter472.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -621,10 +621,10 @@ public class RebalanceOptions implements org.apache.thrift.TBase<RebalanceOption
       if (struct.is_set_num_executors()) {
         {
           oprot.writeI32(struct.num_executors.size());
-          for (Map.Entry<String, Integer> _iter465 : struct.num_executors.entrySet())
+          for (Map.Entry<String, Integer> _iter473 : struct.num_executors.entrySet())
           {
-            oprot.writeString(_iter465.getKey());
-            oprot.writeI32(_iter465.getValue());
+            oprot.writeString(_iter473.getKey());
+            oprot.writeI32(_iter473.getValue());
           }
         }
       }
@@ -644,15 +644,15 @@ public class RebalanceOptions implements org.apache.thrift.TBase<RebalanceOption
       }
       if (incoming.get(2)) {
         {
-          org.apache.thrift.protocol.TMap _map466 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I32, iprot.readI32());
-          struct.num_executors = new HashMap<String,Integer>(2*_map466.size);
-          String _key467;
-          int _val468;
-          for (int _i469 = 0; _i469 < _map466.size; ++_i469)
+          org.apache.thrift.protocol.TMap _map474 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I32, iprot.readI32());
+          struct.num_executors = new HashMap<String,Integer>(2*_map474.size);
+          String _key475;
+          int _val476;
+          for (int _i477 = 0; _i477 < _map474.size; ++_i477)
           {
-            _key467 = iprot.readString();
-            _val468 = iprot.readI32();
-            struct.num_executors.put(_key467, _val468);
+            _key475 = iprot.readString();
+            _val476 = iprot.readI32();
+            struct.num_executors.put(_key475, _val476);
           }
         }
         struct.set_num_executors_isSet(true);


[49/50] [abbrv] storm git commit: Merge branch 'master' into STORM-1040

Posted by sr...@apache.org.
Merge branch 'master' into STORM-1040


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/31b49594
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/31b49594
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/31b49594

Branch: refs/heads/STORM-1040
Commit: 31b4959475f8d09d2e9dd90856a1941adcc94e8c
Parents: 4b1062e 0acc1ce
Author: Haohui Mai <wh...@apache.org>
Authored: Mon Nov 30 23:42:56 2015 -0800
Committer: Haohui Mai <wh...@apache.org>
Committed: Mon Nov 30 23:52:40 2015 -0800

----------------------------------------------------------------------
 .gitignore                                      |     3 +
 .travis.yml                                     |    17 +-
 CHANGELOG.md                                    |   115 +-
 DEVELOPER.md                                    |    35 +-
 DISCLAIMER                                      |    10 -
 LICENSE                                         |    41 +
 README.markdown                                 |    11 +
 STORM-UI-REST-API.md                            |   707 -
 bin/flight.bash                                 |   154 +
 bin/storm                                       |    22 +-
 bin/storm-config.cmd                            |    10 +-
 bin/storm.py                                    |   115 +-
 conf/defaults.yaml                              |    45 +-
 conf/storm.yaml.example                         |     2 +-
 dev-tools/storm-merge.py                        |     2 +-
 dev-tools/travis/ratprint.py                    |    26 +
 dev-tools/travis/travis-install.sh              |     9 +-
 dev-tools/travis/travis-script.sh               |    15 +-
 docs/documentation/Documentation.md             |     4 +
 docs/documentation/FAQ.md                       |     2 +-
 docs/documentation/Log-Search.md                |    14 +
 .../Message-passing-implementation.md           |    34 +-
 docs/documentation/Pacemaker.md                 |   108 +
 .../documentation/Setting-up-a-Storm-cluster.md |    19 +
 docs/documentation/Windowing.md                 |   144 +
 .../documentation/dynamic-log-level-settings.md |    41 +
 docs/documentation/dynamic-worker-profiling.md  |    29 +
 .../images/dynamic_log_level_settings_1.png     |   Bin 0 -> 93689 bytes
 .../images/dynamic_log_level_settings_2.png     |   Bin 0 -> 78785 bytes
 .../images/dynamic_profiling_debugging_1.png    |   Bin 0 -> 93635 bytes
 .../images/dynamic_profiling_debugging_2.png    |   Bin 0 -> 138120 bytes
 .../images/dynamic_profiling_debugging_3.png    |   Bin 0 -> 96974 bytes
 docs/documentation/images/search-a-topology.png |   Bin 0 -> 671031 bytes
 .../images/search-for-a-single-worker-log.png   |   Bin 0 -> 736579 bytes
 .../storm-metrics-profiling-internal-actions.md |    70 +
 docs/documentation/ui-rest-api.md               |   996 +
 docs/images/viewing_metrics_with_VisualVM.png   |   Bin 0 -> 225100 bytes
 examples/storm-starter/pom.xml                  |    17 +-
 .../storm/starter/FastWordCountTopology.java    |   198 +
 .../jvm/storm/starter/InOrderDeliveryTest.java  |   175 +
 .../storm/starter/MultipleLoggerTopology.java   |   105 +
 .../starter/ResourceAwareExampleTopology.java   |   101 +
 .../storm/starter/SlidingWindowTopology.java    |   185 +
 .../jvm/storm/starter/ThroughputVsLatency.java  |   432 +
 .../starter/trident/TridentKafkaWordCount.java  |    15 +-
 .../bolt/IntermediateRankingsBoltTest.java      |     2 +-
 .../starter/bolt/RollingCountBoltTest.java      |     2 +-
 .../starter/bolt/TotalRankingsBoltTest.java     |     2 +-
 .../storm/starter/tools/MockTupleHelpers.java   |    40 -
 external/flux/README.md                         |     4 +
 external/flux/flux-core/pom.xml                 |     1 -
 .../main/java/org/apache/storm/flux/Flux.java   |     3 +-
 .../java/org/apache/storm/flux/FluxBuilder.java |    55 +-
 .../org/apache/storm/flux/model/ObjectDef.java  |     2 +
 .../org/apache/storm/flux/test/TestBolt.java    |     8 +
 .../resources/configs/config-methods-test.yaml  |     2 +
 external/flux/flux-examples/README.md           |     9 +
 external/flux/flux-examples/pom.xml             |     1 -
 .../storm/flux/examples/TestPrintBolt.java      |    39 +
 .../storm/flux/examples/TestWindowBolt.java     |    47 +
 .../src/main/resources/simple_windowing.yaml    |    69 +
 .../storm/sql/compiler/TestCompilerUtils.java   |    17 +
 .../org/apache/storm/sql/kafka/JsonScheme.java  |     6 +-
 .../storm/sql/kafka/TestJsonRepresentation.java |     2 +-
 .../sql/kafka/TestKafkaDataSourcesProvider.java |    12 +-
 external/storm-elasticsearch/pom.xml            |     5 +
 external/storm-eventhubs/pom.xml                |     1 -
 external/storm-hbase/README.md                  |    10 +
 .../storm/hbase/bolt/AbstractHBaseBolt.java     |     2 +
 .../org/apache/storm/hbase/bolt/HBaseBolt.java  |    75 +-
 external/storm-hdfs/README.md                   |    33 +
 external/storm-hdfs/pom.xml                     |    71 +-
 .../storm/hdfs/bolt/AbstractHdfsBolt.java       |   124 +
 .../storm/hdfs/bolt/AvroGenericRecordBolt.java  |   145 +
 .../org/apache/storm/hdfs/bolt/HdfsBolt.java    |    51 +-
 .../storm/hdfs/bolt/SequenceFileBolt.java       |    42 +-
 .../ha/codedistributor/HDFSCodeDistributor.java |    17 +
 .../hdfs/bolt/AvroGenericRecordBoltTest.java    |   220 +
 .../apache/storm/hdfs/bolt/TestHdfsBolt.java    |   258 +
 .../storm/hdfs/bolt/TestSequenceFileBolt.java   |   186 +
 .../storm/hdfs/trident/HdfsStateTest.java       |    17 +
 external/storm-hive/pom.xml                     |     7 +
 .../org/apache/storm/hive/bolt/HiveBolt.java    |     9 +
 .../apache/storm/hive/bolt/TestHiveBolt.java    |    56 +-
 .../storm/jdbc/bolt/AbstractJdbcBolt.java       |     2 +
 .../apache/storm/jdbc/bolt/JdbcInsertBolt.java  |     9 +
 .../apache/storm/jdbc/bolt/JdbcLookupBolt.java  |     5 +
 .../jdbc/mapper/SimpleJdbcLookupMapper.java     |     3 +
 .../storm/jdbc/mapper/SimpleJdbcMapper.java     |     5 +
 .../storm/jdbc/bolt/JdbcInsertBoltTest.java     |    71 +
 .../storm/jdbc/bolt/JdbcLookupBoltTest.java     |    59 +
 external/storm-kafka/README.md                  |    86 +-
 external/storm-kafka/pom.xml                    |     5 +
 .../jvm/storm/kafka/DynamicBrokersReader.java   |    97 +-
 .../kafka/DynamicPartitionConnections.java      |    20 +-
 .../src/jvm/storm/kafka/KafkaConfig.java        |     3 +-
 .../src/jvm/storm/kafka/KafkaSpout.java         |    34 +-
 .../src/jvm/storm/kafka/KafkaUtils.java         |    95 +-
 .../src/jvm/storm/kafka/KeyValueScheme.java     |     5 +-
 .../kafka/KeyValueSchemeAsMultiScheme.java      |     5 +-
 .../jvm/storm/kafka/MessageMetadataScheme.java  |    27 +
 .../MessageMetadataSchemeAsMultiScheme.java     |    41 +
 .../src/jvm/storm/kafka/Partition.java          |    26 +-
 .../src/jvm/storm/kafka/PartitionManager.java   |    47 +-
 .../src/jvm/storm/kafka/StaticCoordinator.java  |    11 +-
 .../jvm/storm/kafka/StringKeyValueScheme.java   |     3 +-
 .../kafka/StringMessageAndMetadataScheme.java   |    43 +
 .../storm/kafka/StringMultiSchemeWithTopic.java |    48 +
 .../src/jvm/storm/kafka/StringScheme.java       |    20 +-
 .../src/jvm/storm/kafka/ZkCoordinator.java      |     2 +-
 .../src/jvm/storm/kafka/bolt/KafkaBolt.java     |    13 +-
 .../jvm/storm/kafka/trident/Coordinator.java    |     7 +-
 .../trident/GlobalPartitionInformation.java     |    26 +-
 .../jvm/storm/kafka/trident/IBrokerReader.java  |     7 +-
 .../kafka/trident/OpaqueTridentKafkaSpout.java  |     9 +-
 .../storm/kafka/trident/StaticBrokerReader.java |    23 +-
 .../trident/TransactionalTridentKafkaSpout.java |     4 +-
 .../kafka/trident/TridentKafkaEmitter.java      |    48 +-
 .../storm/kafka/trident/TridentKafkaState.java  |    33 +-
 .../kafka/trident/TridentKafkaStateFactory.java |    10 +-
 .../jvm/storm/kafka/trident/ZkBrokerReader.java |    20 +-
 .../storm/kafka/DynamicBrokersReaderTest.java   |   114 +-
 .../src/test/storm/kafka/KafkaUtilsTest.java    |   112 +-
 .../storm/kafka/StringKeyValueSchemeTest.java   |    17 +-
 .../src/test/storm/kafka/TestStringScheme.java  |    40 +
 .../src/test/storm/kafka/TestUtils.java         |    28 +-
 .../src/test/storm/kafka/TridentKafkaTest.java  |    13 +-
 .../test/storm/kafka/TridentKafkaTopology.java  |    33 +-
 .../src/test/storm/kafka/ZkCoordinatorTest.java |     8 +-
 .../test/storm/kafka/bolt/KafkaBoltTest.java    |    19 +-
 external/storm-metrics/pom.xml                  |   107 +
 .../metrics/hdrhistogram/HistogramMetric.java   |    79 +
 .../apache/storm/metrics/sigar/CPUMetric.java   |    60 +
 external/storm-solr/pom.xml                     |    21 +-
 log4j2/cluster.xml                              |    42 +-
 log4j2/worker.xml                               |    22 +-
 pom.xml                                         |   290 +-
 storm-core/pom.xml                              |   327 +-
 storm-core/src/clj/backtype/storm/cluster.clj   |   341 +-
 .../cluster_state/zookeeper_state_factory.clj   |   157 +
 .../clj/backtype/storm/command/healthcheck.clj  |    88 +
 .../clj/backtype/storm/command/heartbeats.clj   |    52 +
 .../clj/backtype/storm/command/kill_workers.clj |    33 +
 .../backtype/storm/command/set_log_level.clj    |    75 +
 storm-core/src/clj/backtype/storm/config.clj    |   112 +-
 storm-core/src/clj/backtype/storm/converter.clj |    73 +-
 .../backtype/storm/daemon/builtin_metrics.clj   |    84 +-
 .../src/clj/backtype/storm/daemon/common.clj    |    42 +-
 .../src/clj/backtype/storm/daemon/drpc.clj      |    46 +-
 .../src/clj/backtype/storm/daemon/executor.clj  |   273 +-
 .../src/clj/backtype/storm/daemon/logviewer.clj |  1060 +-
 .../src/clj/backtype/storm/daemon/nimbus.clj    |   652 +-
 .../clj/backtype/storm/daemon/supervisor.clj    |   283 +-
 .../src/clj/backtype/storm/daemon/task.clj      |    24 +-
 .../src/clj/backtype/storm/daemon/worker.clj    |   270 +-
 storm-core/src/clj/backtype/storm/disruptor.clj |    53 +-
 .../src/clj/backtype/storm/local_state.clj      |    44 +-
 storm-core/src/clj/backtype/storm/log.clj       |    12 +-
 .../src/clj/backtype/storm/messaging/loader.clj |    76 +-
 .../src/clj/backtype/storm/messaging/local.clj  |    56 +-
 storm-core/src/clj/backtype/storm/stats.clj     |  1519 +-
 storm-core/src/clj/backtype/storm/testing.clj   |    34 +-
 storm-core/src/clj/backtype/storm/timer.clj     |    20 +-
 storm-core/src/clj/backtype/storm/ui/core.clj   |  1356 +-
 .../src/clj/backtype/storm/ui/helpers.clj       |    77 +-
 storm-core/src/clj/backtype/storm/util.clj      |    82 +-
 .../org/apache/storm/pacemaker/pacemaker.clj    |   237 +
 .../storm/pacemaker/pacemaker_state_factory.clj |   124 +
 .../src/dev/logviewer-search-context-tests.log  |     1 +
 .../dev/logviewer-search-context-tests.log.gz   |   Bin 0 -> 72 bytes
 storm-core/src/dev/small-worker.log             |     1 +
 storm-core/src/dev/test-3072.log                |     3 +
 storm-core/src/dev/test-worker.log              |   380 +
 storm-core/src/genthrift.sh                     |     2 +-
 storm-core/src/jvm/backtype/storm/Config.java   |   868 +-
 .../jvm/backtype/storm/ConfigValidation.java    |   375 -
 .../src/jvm/backtype/storm/LogWriter.java       |     2 +-
 .../src/jvm/backtype/storm/StormSubmitter.java  |    55 +-
 .../backtype/storm/cluster/ClusterState.java    |   208 +
 .../storm/cluster/ClusterStateContext.java      |    41 +
 .../storm/cluster/ClusterStateFactory.java      |    28 +
 .../storm/cluster/ClusterStateListener.java     |    22 +
 .../backtype/storm/cluster/ConnectionState.java |    24 +
 .../jvm/backtype/storm/cluster/DaemonType.java  |    27 +
 .../storm/codedistributor/ICodeDistributor.java |    17 +
 .../LocalFileSystemCodeDistributor.java         |    17 +
 .../storm/coordination/BatchBoltExecutor.java   |     4 +-
 .../storm/coordination/CoordinatedBolt.java     |    16 +-
 .../storm/drpc/DRPCInvocationsClient.java       |     5 +-
 .../src/jvm/backtype/storm/drpc/DRPCSpout.java  |    10 +-
 .../src/jvm/backtype/storm/drpc/JoinResult.java |     8 +-
 .../storm/generated/AlreadyAliveException.java  |     4 +-
 .../backtype/storm/generated/Assignment.java    |   380 +-
 .../storm/generated/AuthorizationException.java |     4 +-
 .../src/jvm/backtype/storm/generated/Bolt.java  |     4 +-
 .../storm/generated/BoltAggregateStats.java     |   704 +
 .../jvm/backtype/storm/generated/BoltStats.java |   444 +-
 .../storm/generated/ClusterSummary.java         |   221 +-
 .../storm/generated/ClusterWorkerHeartbeat.java |    60 +-
 .../storm/generated/CommonAggregateStats.java   |   902 +
 .../generated/ComponentAggregateStats.java      |   752 +
 .../storm/generated/ComponentCommon.java        |     6 +-
 .../storm/generated/ComponentObject.java        |     2 +-
 .../storm/generated/ComponentPageInfo.java      |  2194 ++
 .../backtype/storm/generated/ComponentType.java |    62 +
 .../backtype/storm/generated/Credentials.java   |    48 +-
 .../storm/generated/DRPCExecutionException.java |     4 +-
 .../backtype/storm/generated/DRPCRequest.java   |     4 +-
 .../backtype/storm/generated/DebugOptions.java  |     8 +-
 .../storm/generated/DistributedRPC.java         |     4 +-
 .../generated/DistributedRPCInvocations.java    |     4 +-
 .../jvm/backtype/storm/generated/ErrorInfo.java |     8 +-
 .../storm/generated/ExecutorAggregateStats.java |   526 +
 .../backtype/storm/generated/ExecutorInfo.java  |     8 +-
 .../storm/generated/ExecutorSpecificStats.java  |     2 +-
 .../backtype/storm/generated/ExecutorStats.java |   174 +-
 .../storm/generated/ExecutorSummary.java        |     8 +-
 .../storm/generated/GetInfoOptions.java         |     4 +-
 .../storm/generated/GlobalStreamId.java         |     4 +-
 .../jvm/backtype/storm/generated/Grouping.java  |     2 +-
 .../generated/HBAuthorizationException.java     |   406 +
 .../storm/generated/HBExecutionException.java   |   406 +
 .../jvm/backtype/storm/generated/HBMessage.java |   636 +
 .../backtype/storm/generated/HBMessageData.java |   640 +
 .../jvm/backtype/storm/generated/HBNodes.java   |   461 +
 .../jvm/backtype/storm/generated/HBPulse.java   |   522 +
 .../jvm/backtype/storm/generated/HBRecords.java |   466 +
 .../storm/generated/HBServerMessageType.java    |   113 +
 .../generated/InvalidTopologyException.java     |     4 +-
 .../backtype/storm/generated/JavaObject.java    |     4 +-
 .../backtype/storm/generated/JavaObjectArg.java |     2 +-
 .../backtype/storm/generated/KillOptions.java   |     6 +-
 .../storm/generated/LSApprovedWorkers.java      |    48 +-
 .../generated/LSSupervisorAssignments.java      |    52 +-
 .../storm/generated/LSSupervisorId.java         |     4 +-
 .../backtype/storm/generated/LSTopoHistory.java |   805 +
 .../storm/generated/LSTopoHistoryList.java      |   460 +
 .../storm/generated/LSWorkerHeartbeat.java      |    44 +-
 .../storm/generated/LocalAssignment.java        |   157 +-
 .../storm/generated/LocalStateData.java         |    52 +-
 .../jvm/backtype/storm/generated/LogConfig.java |   475 +
 .../jvm/backtype/storm/generated/LogLevel.java  |   836 +
 .../storm/generated/LogLevelAction.java         |    65 +
 .../jvm/backtype/storm/generated/Nimbus.java    | 18163 ++++++++++++-----
 .../backtype/storm/generated/NimbusSummary.java |    10 +-
 .../jvm/backtype/storm/generated/NodeInfo.java  |    36 +-
 .../storm/generated/NotAliveException.java      |     4 +-
 .../backtype/storm/generated/NullStruct.java    |     4 +-
 .../storm/generated/NumErrorsChoice.java        |     2 +-
 .../backtype/storm/generated/ProfileAction.java |    74 +
 .../storm/generated/ProfileRequest.java         |   631 +
 .../storm/generated/RebalanceOptions.java       |    52 +-
 .../storm/generated/ShellComponent.java         |     4 +-
 .../storm/generated/SpecificAggregateStats.java |   387 +
 .../storm/generated/SpoutAggregateStats.java    |   407 +
 .../jvm/backtype/storm/generated/SpoutSpec.java |     4 +-
 .../backtype/storm/generated/SpoutStats.java    |   256 +-
 .../storm/generated/StateSpoutSpec.java         |     4 +-
 .../jvm/backtype/storm/generated/StormBase.java |   100 +-
 .../backtype/storm/generated/StormTopology.java |   255 +-
 .../backtype/storm/generated/StreamInfo.java    |     6 +-
 .../backtype/storm/generated/SubmitOptions.java |     4 +-
 .../storm/generated/SupervisorInfo.java         |   282 +-
 .../storm/generated/SupervisorSummary.java      |   374 +-
 .../storm/generated/ThriftSerializedObject.java |     4 +-
 .../storm/generated/TopologyActionOptions.java  |     2 +-
 .../storm/generated/TopologyHistoryInfo.java    |   461 +
 .../backtype/storm/generated/TopologyInfo.java  |   774 +-
 .../storm/generated/TopologyInitialStatus.java  |     2 +-
 .../storm/generated/TopologyPageInfo.java       |  2597 +++
 .../backtype/storm/generated/TopologyStats.java |  1094 +
 .../storm/generated/TopologyStatus.java         |     2 +-
 .../storm/generated/TopologySummary.java        |   618 +-
 .../storm/generated/WorkerResources.java        |   605 +
 .../src/jvm/backtype/storm/grouping/Load.java   |    77 +
 .../grouping/LoadAwareCustomStreamGrouping.java |    24 +
 .../grouping/LoadAwareShuffleGrouping.java      |    76 +
 .../backtype/storm/grouping/LoadMapping.java    |    64 +
 .../storm/grouping/PartialKeyGrouping.java      |     5 +-
 .../storm/grouping/ShuffleGrouping.java         |    65 +
 .../backtype/storm/hooks/BaseWorkerHook.java    |    51 +
 .../jvm/backtype/storm/hooks/IWorkerHook.java   |    44 +
 .../storm/logging/ThriftAccessLogger.java       |    27 +
 .../logging/filters/AccessLoggingFilter.java    |    52 +
 .../storm/messaging/AddressedTuple.java         |    46 +
 .../storm/messaging/ConnectionWithStatus.java   |     4 +-
 .../DeserializingConnectionCallback.java        |    60 +
 .../backtype/storm/messaging/IConnection.java   |    26 +-
 .../storm/messaging/IConnectionCallback.java    |    31 +
 .../jvm/backtype/storm/messaging/IContext.java  |     2 +-
 .../storm/messaging/TransportFactory.java       |     2 +-
 .../backtype/storm/messaging/local/Context.java |   164 +
 .../backtype/storm/messaging/netty/Client.java  |   115 +-
 .../backtype/storm/messaging/netty/Context.java |     8 +-
 .../storm/messaging/netty/ControlMessage.java   |    22 +-
 .../messaging/netty/INettySerializable.java     |    26 +
 .../storm/messaging/netty/ISaslClient.java      |    28 +
 .../storm/messaging/netty/ISaslServer.java      |    26 +
 .../backtype/storm/messaging/netty/IServer.java |    26 +
 .../netty/KerberosSaslClientHandler.java        |   152 +
 .../netty/KerberosSaslNettyClient.java          |   203 +
 .../netty/KerberosSaslNettyClientState.java     |    31 +
 .../netty/KerberosSaslNettyServer.java          |   210 +
 .../netty/KerberosSaslNettyServerState.java     |    30 +
 .../netty/KerberosSaslServerHandler.java        |   133 +
 .../storm/messaging/netty/MessageBatch.java     |    14 +-
 .../storm/messaging/netty/MessageDecoder.java   |    11 +-
 .../netty/NettyRenameThreadFactory.java         |    10 +-
 .../netty/NettyUncaughtExceptionHandler.java    |    35 +
 .../storm/messaging/netty/SaslMessageToken.java |    33 +-
 .../storm/messaging/netty/SaslNettyClient.java  |    28 +-
 .../storm/messaging/netty/SaslNettyServer.java  |   248 +-
 .../messaging/netty/SaslNettyServerState.java   |    13 +-
 .../messaging/netty/SaslStormClientHandler.java |    41 +-
 .../messaging/netty/SaslStormServerHandler.java |    32 +-
 .../storm/messaging/netty/SaslUtils.java        |    12 +-
 .../backtype/storm/messaging/netty/Server.java  |   232 +-
 .../messaging/netty/StormClientHandler.java     |    51 +-
 .../netty/StormClientPipelineFactory.java       |    11 +-
 .../messaging/netty/StormServerHandler.java     |    24 +-
 .../backtype/storm/metric/EventLoggerBolt.java  |    25 +-
 .../storm/metric/FileBasedEventLogger.java      |    37 +-
 .../metric/HttpForwardingMetricsConsumer.java   |    80 +
 .../metric/HttpForwardingMetricsServer.java     |   118 +
 .../jvm/backtype/storm/metric/IEventLogger.java |    25 +-
 .../storm/metric/LoggingMetricsConsumer.java    |     1 -
 .../storm/metric/MetricsConsumerBolt.java       |     1 -
 .../jvm/backtype/storm/metric/SystemBolt.java   |     5 -
 .../backtype/storm/metric/api/CountMetric.java  |     2 -
 .../backtype/storm/metric/api/MeanReducer.java  |     4 +-
 .../storm/metric/api/MultiCountMetric.java      |     2 +-
 .../storm/metric/api/MultiReducedMetric.java    |     2 +-
 .../storm/metric/api/rpc/CountShellMetric.java  |     3 +-
 .../metric/internal/CountStatAndMetric.java     |   211 +
 .../metric/internal/LatencyStatAndMetric.java   |   262 +
 .../storm/metric/internal/MetricStatTimer.java  |    27 +
 .../internal/MultiCountStatAndMetric.java       |   112 +
 .../internal/MultiLatencyStatAndMetric.java     |   109 +
 .../storm/metric/internal/RateTracker.java      |   165 +
 .../AbstractDNSToSwitchMapping.java             |    95 +
 .../networktopography/DNSToSwitchMapping.java   |    50 +
 .../DefaultRackDNSToSwitchMapping.java          |    52 +
 .../backtype/storm/nimbus/ILeaderElector.java   |    23 +-
 .../nimbus/ITopologyActionNotifierPlugin.java   |    43 +
 .../jvm/backtype/storm/nimbus/NimbusInfo.java   |    29 +-
 .../jvm/backtype/storm/scheduler/Cluster.java   |   234 +-
 .../scheduler/SchedulerAssignmentImpl.java      |    15 +-
 .../storm/scheduler/SupervisorDetails.java      |    63 +-
 .../backtype/storm/scheduler/Topologies.java    |    27 +-
 .../storm/scheduler/TopologyDetails.java        |   377 +-
 .../backtype/storm/scheduler/WorkerSlot.java    |    25 +
 .../scheduler/multitenant/DefaultPool.java      |    22 +-
 .../storm/scheduler/multitenant/FreePool.java   |     6 +-
 .../scheduler/multitenant/IsolatedPool.java     |    32 +-
 .../multitenant/MultitenantScheduler.java       |     6 +-
 .../storm/scheduler/multitenant/Node.java       |    17 +-
 .../storm/scheduler/multitenant/NodePool.java   |    16 +-
 .../storm/scheduler/resource/Component.java     |    54 +
 .../storm/scheduler/resource/RAS_Node.java      |   575 +
 .../resource/ResourceAwareScheduler.java        |   183 +
 .../storm/scheduler/resource/ResourceUtils.java |   133 +
 .../resource/strategies/IStrategy.java          |    37 +
 .../strategies/ResourceAwareStrategy.java       |   479 +
 .../backtype/storm/security/auth/AuthUtils.java |    96 +-
 .../auth/DefaultHttpCredentialsPlugin.java      |     6 +-
 .../security/auth/DefaultPrincipalToLocal.java  |     1 -
 .../storm/security/auth/IAuthorizer.java        |     4 +-
 .../security/auth/ICredentialsRenewer.java      |     3 +-
 .../security/auth/IHttpCredentialsPlugin.java   |     2 -
 .../storm/security/auth/IPrincipalToLocal.java  |     2 +-
 .../storm/security/auth/ITransportPlugin.java   |     4 -
 .../security/auth/KerberosPrincipalToLocal.java |     2 +-
 .../storm/security/auth/ReqContext.java         |    18 +-
 .../security/auth/SaslTransportPlugin.java      |    12 +-
 .../security/auth/ShellBasedGroupsMapping.java  |    10 +-
 .../security/auth/SimpleTransportPlugin.java    |     8 +-
 .../security/auth/SingleUserPrincipal.java      |     5 +-
 .../storm/security/auth/TBackoffConnect.java    |     1 -
 .../storm/security/auth/ThriftClient.java       |    10 +-
 .../storm/security/auth/ThriftServer.java       |     6 +-
 .../auth/authorizer/DRPCAuthorizerBase.java     |     2 +-
 .../authorizer/DRPCSimpleACLAuthorizer.java     |    19 +-
 .../auth/authorizer/DenyAuthorizer.java         |    16 +-
 .../authorizer/ImpersonationAuthorizer.java     |    17 +-
 .../auth/authorizer/NoopAuthorizer.java         |    12 +-
 .../auth/authorizer/SimpleACLAuthorizer.java    |    45 +-
 .../authorizer/SimpleWhitelistAuthorizer.java   |    16 +-
 .../auth/digest/ClientCallbackHandler.java      |     2 -
 .../auth/digest/DigestSaslTransportPlugin.java  |     2 -
 .../auth/digest/ServerCallbackHandler.java      |     5 +-
 .../storm/security/auth/kerberos/AutoTGT.java   |    10 +-
 .../security/auth/kerberos/NoOpTTrasport.java   |    20 +-
 .../auth/kerberos/ServerCallbackHandler.java    |     2 +
 .../serialization/BlowfishTupleSerializer.java  |     6 +-
 .../GzipThriftSerializationDelegate.java        |     1 -
 .../storm/serialization/ITupleDeserializer.java |     1 -
 .../serialization/KryoTupleDeserializer.java    |     3 -
 .../serialization/KryoValuesDeserializer.java   |     3 +-
 .../serialization/SerializationFactory.java     |    23 +-
 .../jvm/backtype/storm/spout/MultiScheme.java   |     3 +-
 .../backtype/storm/spout/RawMultiScheme.java    |     3 +-
 .../src/jvm/backtype/storm/spout/RawScheme.java |     9 +-
 .../src/jvm/backtype/storm/spout/Scheme.java    |     3 +-
 .../storm/spout/SchemeAsMultiScheme.java        |     3 +-
 .../jvm/backtype/storm/spout/ShellSpout.java    |    10 +-
 .../storm/task/GeneralTopologyContext.java      |    15 +-
 .../backtype/storm/task/OutputCollector.java    |     2 +-
 .../src/jvm/backtype/storm/task/ShellBolt.java  |    48 +-
 .../backtype/storm/task/TopologyContext.java    |     9 +-
 .../AlternateRackDNSToSwitchMapping.java        |    65 +
 .../storm/testing/MemoryTransactionalSpout.java |     9 +-
 .../testing/OpaqueMemoryTransactionalSpout.java |     8 +-
 .../storm/testing/TupleCaptureBolt.java         |     4 +-
 .../topology/BaseConfigurationDeclarer.java     |    31 +-
 .../storm/topology/BasicBoltExecutor.java       |     2 +-
 .../ComponentConfigurationDeclarer.java         |     3 +
 .../backtype/storm/topology/IWindowedBolt.java  |    40 +
 .../storm/topology/OutputFieldsGetter.java      |     2 +-
 .../storm/topology/TopologyBuilder.java         |    78 +-
 .../storm/topology/WindowedBoltExecutor.java    |   224 +
 .../storm/topology/base/BaseBatchBolt.java      |     1 -
 .../topology/base/BaseTransactionalSpout.java   |     1 -
 .../storm/topology/base/BaseWindowedBolt.java   |   179 +
 .../TransactionalSpoutBatchExecutor.java        |     4 +-
 .../TransactionalSpoutCoordinator.java          |     2 +-
 ...uePartitionedTransactionalSpoutExecutor.java |    13 +-
 .../PartitionedTransactionalSpoutExecutor.java  |     2 +-
 .../backtype/storm/tuple/AddressedTuple.java    |    48 +
 .../src/jvm/backtype/storm/tuple/Fields.java    |    10 +-
 .../src/jvm/backtype/storm/tuple/MessageId.java |    10 +-
 .../src/jvm/backtype/storm/tuple/Tuple.java     |     9 +-
 .../src/jvm/backtype/storm/tuple/TupleImpl.java |    17 +-
 .../jvm/backtype/storm/utils/DRPCClient.java    |     1 -
 .../backtype/storm/utils/DisruptorQueue.java    |   610 +-
 .../backtype/storm/utils/InprocMessaging.java   |     4 +-
 .../storm/utils/KeyedRoundRobinQueue.java       |     6 +-
 .../jvm/backtype/storm/utils/ListDelegate.java  |     6 +-
 .../jvm/backtype/storm/utils/LocalState.java    |    22 +-
 .../src/jvm/backtype/storm/utils/Monitor.java   |     3 +-
 .../jvm/backtype/storm/utils/MutableObject.java |     6 +-
 .../jvm/backtype/storm/utils/NimbusClient.java  |    10 +-
 .../jvm/backtype/storm/utils/RateTracker.java   |   119 -
 .../storm/utils/RegisteredGlobalState.java      |     6 +-
 .../jvm/backtype/storm/utils/RotatingMap.java   |     2 +-
 .../backtype/storm/utils/ServiceRegistry.java   |     2 +-
 .../jvm/backtype/storm/utils/ShellProcess.java  |     6 +-
 .../jvm/backtype/storm/utils/ShellUtils.java    |     2 +-
 .../StormBoundedExponentialBackoffRetry.java    |     3 +-
 .../storm/utils/ThriftTopologyUtils.java        |    36 +-
 .../src/jvm/backtype/storm/utils/Time.java      |    16 +-
 .../backtype/storm/utils/TransferDrainer.java   |    17 +-
 .../src/jvm/backtype/storm/utils/Utils.java     |   489 +-
 .../jvm/backtype/storm/utils/VersionInfo.java   |     2 +-
 .../storm/validation/ConfigValidation.java      |   646 +
 .../validation/ConfigValidationAnnotations.java |   214 +
 .../storm/validation/ConfigValidationUtils.java |   175 +
 .../storm/windowing/CountEvictionPolicy.java    |    68 +
 .../storm/windowing/CountTriggerPolicy.java     |    63 +
 .../src/jvm/backtype/storm/windowing/Event.java |    41 +
 .../jvm/backtype/storm/windowing/EventImpl.java |    38 +
 .../storm/windowing/EvictionPolicy.java         |    42 +
 .../storm/windowing/TimeEvictionPolicy.java     |    52 +
 .../storm/windowing/TimeTriggerPolicy.java      |   115 +
 .../storm/windowing/TriggerHandler.java         |    29 +
 .../backtype/storm/windowing/TriggerPolicy.java |    42 +
 .../backtype/storm/windowing/TupleWindow.java   |    26 +
 .../storm/windowing/TupleWindowImpl.java        |    61 +
 .../jvm/backtype/storm/windowing/Window.java    |    48 +
 .../windowing/WindowLifecycleListener.java      |    42 +
 .../backtype/storm/windowing/WindowManager.java |   212 +
 .../storm/pacemaker/IServerMessageHandler.java  |    25 +
 .../apache/storm/pacemaker/PacemakerClient.java |   255 +
 .../storm/pacemaker/PacemakerClientHandler.java |    75 +
 .../apache/storm/pacemaker/PacemakerServer.java |   163 +
 .../storm/pacemaker/codec/ThriftDecoder.java    |    76 +
 .../storm/pacemaker/codec/ThriftEncoder.java    |   110 +
 .../pacemaker/codec/ThriftNettyClientCodec.java |    94 +
 .../pacemaker/codec/ThriftNettyServerCodec.java |    99 +
 .../src/jvm/storm/trident/TridentTopology.java  |   100 +-
 .../trident/drpc/ReturnResultsReducer.java      |     4 +-
 .../fluent/ChainedAggregatorDeclarer.java       |     8 +-
 .../jvm/storm/trident/graph/GraphGrouper.java   |    22 +-
 .../src/jvm/storm/trident/graph/Group.java      |    23 +-
 .../trident/operation/builtin/SnapshotGet.java  |     4 +-
 .../operation/builtin/TupleCollectionGet.java   |     6 +-
 .../storm/trident/partition/GlobalGrouping.java |     5 +-
 .../trident/partition/IdentityGrouping.java     |     8 +-
 .../src/jvm/storm/trident/planner/Node.java     |     5 +-
 .../storm/trident/planner/PartitionNode.java    |     2 -
 .../storm/trident/planner/SubtopologyBolt.java  |    19 +-
 .../processor/MultiReducerProcessor.java        |     2 +-
 .../jvm/storm/trident/spout/ITridentSpout.java  |    51 +-
 .../OpaquePartitionedTridentSpoutExecutor.java  |    10 +-
 .../trident/spout/TridentSpoutExecutor.java     |     4 +-
 .../trident/topology/TridentBoltExecutor.java   |    10 +-
 .../topology/TridentTopologyBuilder.java        |    23 +-
 .../storm/trident/tuple/TridentTupleView.java   |    18 +-
 .../jvm/storm/trident/util/TridentUtils.java    |    33 +-
 .../src/native/worker-launcher/impl/main.c      |    10 +
 .../worker-launcher/impl/worker-launcher.c      |    49 +-
 .../worker-launcher/impl/worker-launcher.h      |     2 +
 storm-core/src/py/storm/DistributedRPC-remote   |     2 +-
 storm-core/src/py/storm/DistributedRPC.py       |    20 +-
 .../py/storm/DistributedRPCInvocations-remote   |     2 +-
 .../src/py/storm/DistributedRPCInvocations.py   |    41 +-
 storm-core/src/py/storm/Nimbus-remote           |    51 +-
 storm-core/src/py/storm/Nimbus.py               |  2383 ++-
 storm-core/src/py/storm/constants.py            |     2 +-
 storm-core/src/py/storm/ttypes.py               |  5870 ++++--
 storm-core/src/storm.thrift                     |   262 +-
 storm-core/src/ui/public/component.html         |   167 +-
 storm-core/src/ui/public/css/style.css          |    16 +
 .../src/ui/public/deep_search_result.html       |   155 +
 storm-core/src/ui/public/images/bug.png         |   Bin 0 -> 4045 bytes
 storm-core/src/ui/public/images/search.png      |   Bin 0 -> 2354 bytes
 storm-core/src/ui/public/images/statistic.png   |   Bin 0 -> 488 bytes
 storm-core/src/ui/public/index.html             |    10 +-
 storm-core/src/ui/public/js/script.js           |    20 +
 .../src/ui/public/js/typeahead.jquery.min.js    |     7 +
 storm-core/src/ui/public/js/visualization.js    |    92 +-
 storm-core/src/ui/public/logviewer_search.html  |    65 +
 storm-core/src/ui/public/search_result.html     |   100 +
 .../templates/component-page-template.html      |    55 +-
 .../deep-search-result-page-template.html       |    66 +
 .../public/templates/index-page-template.html   |    56 +-
 .../logviewer-search-page-template.html         |    44 +
 .../templates/search-result-page-template.html  |    60 +
 .../templates/topology-page-template.html       |   197 +-
 .../src/ui/public/templates/user-template.html  |    27 +-
 storm-core/src/ui/public/topology.html          |   168 +-
 .../test/clj/backtype/storm/cluster_test.clj    |    15 +-
 .../test/clj/backtype/storm/config_test.clj     |   186 -
 .../test/clj/backtype/storm/grouping_test.clj   |    90 +-
 .../clj/backtype/storm/integration_test.clj     |    12 +-
 .../test/clj/backtype/storm/logviewer_test.clj  |   730 +-
 .../storm/messaging/netty_integration_test.clj  |     3 +-
 .../storm/messaging/netty_unit_test.clj         |   288 +-
 .../test/clj/backtype/storm/messaging_test.clj  |    28 +-
 .../test/clj/backtype/storm/metrics_test.clj    |     2 +-
 .../test/clj/backtype/storm/multilang_test.clj  |     4 +-
 .../test/clj/backtype/storm/nimbus_test.clj     |   199 +-
 .../scheduler/multitenant_scheduler_test.clj    |    34 +-
 .../scheduler/resource_aware_scheduler_test.clj |   669 +
 .../test/clj/backtype/storm/scheduler_test.clj  |     3 +-
 .../auth/DefaultHttpCredentialsPlugin_test.clj  |    40 +-
 .../clj/backtype/storm/serialization_test.clj   |    14 +-
 .../test/clj/backtype/storm/supervisor_test.clj |   397 +-
 .../test/clj/backtype/storm/testing4j_test.clj  |     1 +
 .../clj/backtype/storm/transactional_test.clj   |     5 +-
 .../test/clj/backtype/storm/worker_test.clj     |   179 +-
 .../storm/pacemaker_state_factory_test.clj      |   150 +
 .../clj/org/apache/storm/pacemaker_test.clj     |   242 +
 .../jvm/backtype/storm/TestConfigValidate.java  |   660 +
 .../metric/internal/CountStatAndMetricTest.java |    86 +
 .../internal/LatencyStatAndMetricTest.java      |    83 +
 .../storm/metric/internal/RateTrackerTest.java  |    94 +
 .../nimbus/InMemoryTopologyActionNotifier.java  |    53 +
 .../storm/topology/TopologyBuilderTest.java     |     5 +
 .../utils/DisruptorQueueBackpressureTest.java   |    17 +-
 .../storm/utils/DisruptorQueueTest.java         |   106 +-
 .../backtype/storm/utils/MockTupleHelpers.java  |    40 +
 .../backtype/storm/utils/RateTrackerTest.java   |    62 -
 .../storm/utils/ThriftTopologyUtilsTest.java    |    94 +
 .../storm/windowing/WindowManagerTest.java      |   250 +
 .../jvm/storm/trident/TestTridentTopology.java  |    76 +
 storm-dist/binary/LICENSE                       |    29 +
 storm-dist/binary/src/main/assembly/binary.xml  |    37 -
 567 files changed, 70259 insertions(+), 14812 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/31b49594/CHANGELOG.md
----------------------------------------------------------------------
diff --cc CHANGELOG.md
index c5d052b,4eb137b..f513a7f
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@@ -1,6 -1,99 +1,101 @@@
  ## 0.11.0
 + * STORM-1060: Serialize Calcite plans into JSON format.
 + * STORM-1062: Establish the basic structure of the code generator.
+  * STORM-1341: Let topology have own heartbeat timeout for multilang subprocess
+  * STORM-1207: Added flux support for IWindowedBolt
+  * STORM-1352: Trident should support writing to multiple Kafka clusters.
+  * STORM-1220: Avoid double copying in the Kafka spout.
+  * STORM-1340: Use Travis-CI build matrix to improve test execution times
+  * STORM-1126: Allow a configMethod that takes no arguments (Flux)
+  * STORM-1203: worker metadata file creation doesn't use storm.log.dir config
+  * STORM-1349: [Flux] Allow constructorArgs to take Maps as arguments
+  * STORM-126: Add Lifecycle support API for worker nodes
+  * STORM-1213: Remove sigar binaries from source tree
+  * STORM-885:  Heartbeat Server (Pacemaker)
+  * STORM-1221: Create a common interface for all Trident spouts.
+  * STORM-1198: Web UI to show resource usages and Total Resources on all supervisors
+  * STORM-1167: Add windowing support for storm core.
+  * STORM-1215: Use Async Loggers to avoid locking and logging overhead
+  * STORM-1204: Logviewer should gracefully report page-not-found instead of 500 for bad topo-id etc
+  * STORM-831: Add BugTracker and Central Logging URL to UI
+  * STORM-1208: UI: NPE seen when aggregating bolt streams stats
+  * STORM-1016: Generate trident bolt ids with sorted group names
+  * STORM-1190: System Load too high after recent changes
+  * STORM-1098: Nimbus hook for topology actions.
+  * STORM-1145: Have IConnection push tuples instead of pull them
+  * STORM-1191: bump timeout by 50% due to intermittent travis build failures
+  * STORM-794: Modify Spout async loop to treat activate/deactivate ASAP
+  * STORM-1196: Upgrade to thrift 0.9.3
+  * STORM-1155: Supervisor recurring health checks
+  * STORM-1189: Maintain wire compatibility with 0.10.x versions of storm.
+  * STORM-1185: replace nimbus.host with nimbus.seeds
+  * STORM-1164: Code cleanup for typos, warnings and conciseness.
+  * STORM-902: Simple Log Search.
+  * STORM-1052: TridentKafkaState uses new Kafka Producer API.
+  * STORM-1182: Removing and wrapping some exceptions in ConfigValidation to make code cleaner
+  * STORM-1134: Windows: Fix log4j config.
+  * STORM-1127: allow for boolean arguments (Flux)
+  * STORM-1180: FLUX logo wasn't appearing quite right
+  * STORM-1138: Storm-hdfs README should be updated with Avro Bolt information
+  * STORM-1154: SequenceFileBolt needs unit tests
+  * STORM-162: Load Aware Shuffle Grouping
+  * STORM-1158: Storm metrics to profile various storm functions
+  * STORM-1161: Add License headers and add rat checks to builds
+  * STORM-1165: normalize the scales of CPU/Mem/Net when choosing the best node for Resource Aware Scheduler
+  * STORM-1163: use rmr rather than rmpath for remove worker-root
+  * STORM-1170: Fix the producer alive issue in DisruptorQueueTest
+  * STORM-1168: removes noisy log message & a TODO
+  * STORM-1143: Validate topology Configs during topology submission
+  * STORM-1157: Adding dynamic profiling for worker, restarting worker, jstack, heap dump, and profiling
+  * STORM-1123: TupleImpl - Unnecessary variable initialization.
+  * STORM-1153: Use static final instead of just static for class members.
+  * STORM-817: Kafka Wildcard Topic Support.
+  * STORM-40: Turn worker garbage collection and heapdump on by default.
+  * STORM-1152: Change map keySet iteration to entrySet iteration for efficiency.
+  * STORM-1147: Storm JDBCBolt should add validation to ensure either insertQuery or table name is specified and not both.
+  * STORM-1151: Batching in DisruptorQueue
+  * STORM-350: Update disruptor to latest version (3.3.2)
+  * STORM-697: Support for Emitting Kafka Message Offset and Partition
+  * STORM-1074: Add Avro HDFS bolt
+  * STORM-566: Improve documentation including incorrect Kryo ser. framework docs
+  * STORM-1073: Refactor AbstractHdfsBolt
+  * STORM-1128: Make metrics fast
+  * STORM-1122: Fix the format issue in Utils.java
+  * STORM-1111: Fix Validation for lots of different configs
+  * STORM-1125: Adding separate ZK client for read in Nimbus ZK State
+  * STORM-1121: Remove method call to avoid overhead during topology submission time
+  * STORM-1120: Fix keyword (schema -> scheme) from main-routes
+  * STORM-1115: Stale leader-lock key effectively bans all nodes from becoming leaders
+  * STORM-1119: Create access logging for all daemons
+  * STORM-1117: Adds visualization-init route previously missing
+  * STORM-1118: Added test to compare latency vs. throughput in storm.
+  * STORM-1110: Fix Component Page for system components
+  * STORM-1093: Launching Workers with resources specified in resource-aware schedulers
+  * STORM-1102: Add a default flush interval for HiveBolt
+  * STORM-1112: Add executor id to the thread name of the executor thread for debug
+  * STORM-1079: Batch Puts to HBase
+  * STORM-1084: Improve Storm config validation process to use java annotations instead of *_SCHEMA format
+  * STORM-1106: Netty should not limit attempts to reconnect
+  * STORM-1103: Changes log message to DEBUG from INFO
+  * STORM-1104: Nimbus HA fails to find newly downloaded code files
+  * STORM-1087: Avoid issues with transfer-queue backpressure.
+  * STORM-893: Resource Aware Scheduling (Experimental)
+  * STORM-1095: Tuple.getSourceGlobalStreamid() has wrong camel-case naming
+  * STORM-1091: Add unit test for tick tuples to HiveBolt and HdfsBolt
+  * STORM-1090: Nimbus HA should support `storm.local.hostname`
+  * STORM-820: Aggregate topo stats on nimbus, not ui
+  * STORM-412: Allow users to modify logging levels of running topologies
+  * STORM-1078: Updated RateTracker to be thread safe
+  * STORM-1082: fix nits for properties in kafka tests
+  * STORM-993: include uptimeSeconds as JSON integer field
+  * STORM-1053: Update storm-kafka README for new producer API confs.
+  * STORM-1058: create CLI kill_workers to kill workers on a supervisor node
+  * STORM-1063: support relative log4j conf dir for both daemons and workers
+  * STORM-1059: Upgrade Storm to use Clojure 1.7.0
+  * STORM-1069: add check case for external change of 'now' value.
+  * STORM-969: HDFS Bolt can end up in an unrecoverable state.
+  * STORM-1068: Configure request.required.acks to be 1 in KafkaUtilsTest for sync
+  * STORM-1017: If ignoreZkOffsets set true, KafkaSpout will reset zk offset when recovering from failure.
   * STORM-1054: Excessive logging ShellBasedGroupsMapping if the user doesn't have any groups.
   * STORM-954: Topology Event Inspector
   * STORM-862: Pluggable System Metrics

http://git-wip-us.apache.org/repos/asf/storm/blob/31b49594/external/sql/storm-sql-core/src/test/org/apache/storm/sql/compiler/TestCompilerUtils.java
----------------------------------------------------------------------
diff --cc external/sql/storm-sql-core/src/test/org/apache/storm/sql/compiler/TestCompilerUtils.java
index 0e5fa0b,0000000..092230f
mode 100644,000000..100644
--- a/external/sql/storm-sql-core/src/test/org/apache/storm/sql/compiler/TestCompilerUtils.java
+++ b/external/sql/storm-sql-core/src/test/org/apache/storm/sql/compiler/TestCompilerUtils.java
@@@ -1,48 -1,0 +1,65 @@@
++/**
++ * Licensed to the Apache Software Foundation (ASF) under one
++ * or more contributor license agreements.  See the NOTICE file
++ * distributed with this work for additional information
++ * regarding copyright ownership.  The ASF licenses this file
++ * to you under the Apache License, Version 2.0 (the
++ * "License"); you may not use this file except in compliance
++ * with the License.  You may obtain a copy of the License at
++ * <p>
++ * http://www.apache.org/licenses/LICENSE-2.0
++ * <p>
++ * Unless required by applicable law or agreed to in writing, software
++ * distributed under the License is distributed on an "AS IS" BASIS,
++ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
++ * See the License for the specific language governing permissions and
++ * limitations under the License.
++ */
 +package org.apache.storm.sql.compiler;
 +
 +import org.apache.calcite.adapter.java.JavaTypeFactory;
 +import org.apache.calcite.jdbc.JavaTypeFactoryImpl;
 +import org.apache.calcite.rel.RelNode;
 +import org.apache.calcite.rel.type.RelDataTypeSystem;
 +import org.apache.calcite.schema.SchemaPlus;
 +import org.apache.calcite.schema.StreamableTable;
 +import org.apache.calcite.schema.Table;
 +import org.apache.calcite.sql.SqlNode;
 +import org.apache.calcite.sql.parser.SqlParseException;
 +import org.apache.calcite.sql.type.SqlTypeName;
 +import org.apache.calcite.tools.*;
 +
 +public class TestCompilerUtils {
 +  public static CalciteState sqlOverDummyTable(String sql)
 +      throws RelConversionException, ValidationException, SqlParseException {
 +    SchemaPlus schema = Frameworks.createRootSchema(true);
 +    JavaTypeFactory typeFactory = new JavaTypeFactoryImpl
 +        (RelDataTypeSystem.DEFAULT);
 +    StreamableTable streamableTable = new CompilerUtil.TableBuilderInfo(typeFactory)
 +        .field("ID", SqlTypeName.INTEGER).build();
 +    Table table = streamableTable.stream();
 +    schema.add("FOO", table);
 +    schema.add("BAR", table);
 +    FrameworkConfig config = Frameworks.newConfigBuilder().defaultSchema(
 +        schema).build();
 +    Planner planner = Frameworks.getPlanner(config);
 +    SqlNode parse = planner.parse(sql);
 +    SqlNode validate = planner.validate(parse);
 +    RelNode tree = planner.convert(validate);
 +    return new CalciteState(schema, tree);
 +  }
 +
 +  public static class CalciteState {
 +    final SchemaPlus schema;
 +    final RelNode tree;
 +
 +    private CalciteState(SchemaPlus schema, RelNode tree) {
 +      this.schema = schema;
 +      this.tree = tree;
 +    }
 +
 +    public SchemaPlus schema() { return schema; }
 +    public RelNode tree() { return tree; }
 +  }
 +
 +}

http://git-wip-us.apache.org/repos/asf/storm/blob/31b49594/external/sql/storm-sql-kafka/src/jvm/org/apache/storm/sql/kafka/JsonScheme.java
----------------------------------------------------------------------
diff --cc external/sql/storm-sql-kafka/src/jvm/org/apache/storm/sql/kafka/JsonScheme.java
index 80037c6,0000000..1b45b30
mode 100644,000000..100644
--- a/external/sql/storm-sql-kafka/src/jvm/org/apache/storm/sql/kafka/JsonScheme.java
+++ b/external/sql/storm-sql-kafka/src/jvm/org/apache/storm/sql/kafka/JsonScheme.java
@@@ -1,56 -1,0 +1,58 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + * <p>
 + * http://www.apache.org/licenses/LICENSE-2.0
 + * <p>
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.storm.sql.kafka;
 +
 +import backtype.storm.spout.Scheme;
 +import backtype.storm.tuple.Fields;
++import backtype.storm.utils.Utils;
 +import com.fasterxml.jackson.databind.ObjectMapper;
 +
 +import java.io.IOException;
++import java.nio.ByteBuffer;
 +import java.util.ArrayList;
 +import java.util.HashMap;
 +import java.util.List;
 +
 +public class JsonScheme implements Scheme {
 +  private final List<String> fields;
 +
 +  JsonScheme(List<String> fields) {
 +    this.fields = fields;
 +  }
 +
 +  @Override
-   public List<Object> deserialize(byte[] ser) {
++  public List<Object> deserialize(ByteBuffer ser) {
 +    ObjectMapper mapper = new ObjectMapper();
 +    try {
 +      @SuppressWarnings("unchecked")
-       HashMap<String, Object> map = mapper.readValue(ser, HashMap.class);
++      HashMap<String, Object> map = mapper.readValue(Utils.toByteArray(ser), HashMap.class);
 +      ArrayList<Object> list = new ArrayList<>();
 +      for (String f : fields) {
 +        list.add(map.get(f));
 +      }
 +      return list;
 +    } catch (IOException e) {
 +      throw new RuntimeException(e);
 +    }
 +  }
 +
 +  @Override
 +  public Fields getOutputFields() {
 +    return new Fields(fields);
 +  }
 +}
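The JsonScheme hunk above migrates `Scheme.deserialize` from `byte[]` to `ByteBuffer` and funnels the buffer through `Utils.toByteArray` before handing it to Jackson. As a rough sketch of what such a helper has to do — copy the buffer's remaining bytes into a fresh array — here is a minimal, self-contained version. This is not Storm's actual implementation (the real `Utils.toByteArray` may simply consume the buffer's position); the class name and the non-destructive `duplicate()` choice are illustrative assumptions:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ByteBufferDemo {
    // Copy the remaining bytes of a ByteBuffer into a fresh byte[].
    // duplicate() gives an independent position/limit, so the caller's
    // buffer is left untouched (a defensive choice; a consuming variant
    // would read from the buffer directly).
    static byte[] toByteArray(ByteBuffer buf) {
        ByteBuffer dup = buf.duplicate();
        byte[] out = new byte[dup.remaining()];
        dup.get(out);
        return out;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap(
                "{\"ID\": 1}".getBytes(StandardCharsets.UTF_8));
        byte[] bytes = toByteArray(buf);
        System.out.println(new String(bytes, StandardCharsets.UTF_8));
    }
}
```

With a conversion like this in front of it, Jackson's existing `readValue(byte[], Class)` overload keeps working unchanged, which is exactly the shape of the `JsonScheme.deserialize` rewrite in the hunk.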

http://git-wip-us.apache.org/repos/asf/storm/blob/31b49594/external/sql/storm-sql-kafka/src/test/org/apache/storm/sql/kafka/TestJsonRepresentation.java
----------------------------------------------------------------------
diff --cc external/sql/storm-sql-kafka/src/test/org/apache/storm/sql/kafka/TestJsonRepresentation.java
index d2898e8,0000000..5973672
mode 100644,000000..100644
--- a/external/sql/storm-sql-kafka/src/test/org/apache/storm/sql/kafka/TestJsonRepresentation.java
+++ b/external/sql/storm-sql-kafka/src/test/org/apache/storm/sql/kafka/TestJsonRepresentation.java
@@@ -1,50 -1,0 +1,50 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + * <p>
 + * http://www.apache.org/licenses/LICENSE-2.0
 + * <p>
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.storm.sql.kafka;
 +
 +import backtype.storm.utils.Utils;
 +import com.google.common.collect.Lists;
 +import org.junit.Test;
 +
 +import java.nio.ByteBuffer;
 +import java.nio.charset.Charset;
 +import java.util.List;
 +
 +import static org.junit.Assert.assertArrayEquals;
 +import static org.junit.Assert.assertEquals;
 +
 +public class TestJsonRepresentation {
 +  @Test
 +  public void testJsonScheme() {
 +    final List<String> fields = Lists.newArrayList("ID", "val");
 +    final String s = "{\"ID\": 1, \"val\": \"2\"}";
 +    JsonScheme scheme = new JsonScheme(fields);
-     List<Object> o = scheme.deserialize(s.getBytes(Charset.defaultCharset()));
++    List<Object> o = scheme.deserialize(ByteBuffer.wrap(s.getBytes(Charset.defaultCharset())));
 +    assertArrayEquals(new Object[] {1, "2"}, o.toArray());
 +  }
 +
 +  @Test
 +  public void testJsonSerializer() {
 +    final List<String> fields = Lists.newArrayList("ID", "val");
 +    List<Object> o = Lists.<Object> newArrayList(1, "2");
 +    JsonSerializer s = new JsonSerializer(fields);
 +    ByteBuffer buf = s.write(o, null);
 +    byte[] b = Utils.toByteArray(buf);
 +    assertEquals("{\"ID\":1,\"val\":\"2\"}", new String(b));
 +  }
 +}

http://git-wip-us.apache.org/repos/asf/storm/blob/31b49594/external/sql/storm-sql-kafka/src/test/org/apache/storm/sql/kafka/TestKafkaDataSourcesProvider.java
----------------------------------------------------------------------
diff --cc external/sql/storm-sql-kafka/src/test/org/apache/storm/sql/kafka/TestKafkaDataSourcesProvider.java
index 531f764,0000000..418bc68
mode 100644,000000..100644
--- a/external/sql/storm-sql-kafka/src/test/org/apache/storm/sql/kafka/TestKafkaDataSourcesProvider.java
+++ b/external/sql/storm-sql-kafka/src/test/org/apache/storm/sql/kafka/TestKafkaDataSourcesProvider.java
@@@ -1,103 -1,0 +1,107 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + * <p>
 + * http://www.apache.org/licenses/LICENSE-2.0
 + * <p>
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.storm.sql.kafka;
 +
 +import com.google.common.collect.ImmutableList;
 +import com.google.common.collect.Lists;
 +import kafka.javaapi.producer.Producer;
 +import kafka.producer.KeyedMessage;
++import org.apache.kafka.clients.producer.KafkaProducer;
++import org.apache.kafka.clients.producer.ProducerRecord;
 +import org.apache.storm.sql.kafka.KafkaDataSourcesProvider.KafkaTridentSink;
 +import org.apache.storm.sql.runtime.DataSourcesRegistry;
 +import org.apache.storm.sql.runtime.FieldInfo;
 +import org.apache.storm.sql.runtime.ISqlTridentDataSource;
 +import org.junit.Assert;
 +import org.junit.Test;
 +import org.mockito.ArgumentMatcher;
 +import org.mockito.internal.util.reflection.Whitebox;
 +import storm.kafka.trident.TridentKafkaState;
 +import storm.trident.tuple.TridentTuple;
 +
 +import java.net.URI;
 +import java.nio.ByteBuffer;
 +import java.util.ArrayList;
 +import java.util.Collections;
 +import java.util.List;
++import java.util.concurrent.Future;
 +
 +import static org.mockito.Mockito.*;
 +
 +public class TestKafkaDataSourcesProvider {
 +  private static final List<FieldInfo> FIELDS = ImmutableList.of(
 +      new FieldInfo("ID", int.class, true),
 +      new FieldInfo("val", String.class, false));
 +  private static final List<String> FIELD_NAMES = ImmutableList.of("ID", "val");
 +  private static final JsonSerializer SERIALIZER = new JsonSerializer(FIELD_NAMES);
 +
 +  @SuppressWarnings("unchecked")
 +  @Test
 +  public void testKafkaSink() {
 +    ISqlTridentDataSource ds = DataSourcesRegistry.constructTridentDataSource(
 +        URI.create("kafka://mock?topic=foo"), null, null, FIELDS);
 +    Assert.assertNotNull(ds);
 +    KafkaTridentSink sink = (KafkaTridentSink) ds.getConsumer();
 +    sink.prepare(null, null);
 +    TridentKafkaState state = (TridentKafkaState) Whitebox.getInternalState(sink, "state");
-     Producer producer = mock(Producer.class);
++    KafkaProducer producer = mock(KafkaProducer.class);
++    doReturn(mock(Future.class)).when(producer).send(any(ProducerRecord.class));
 +    Whitebox.setInternalState(state, "producer", producer);
 +    List<TridentTuple> tupleList = mockTupleList();
 +    for (TridentTuple t : tupleList) {
 +      state.updateState(Collections.singletonList(t), null);
 +      verify(producer).send(argThat(new KafkaMessageMatcher(t)));
 +    }
 +    verifyNoMoreInteractions(producer);
 +  }
 +
 +  private static List<TridentTuple> mockTupleList() {
 +    List<TridentTuple> tupleList = new ArrayList<>();
 +    TridentTuple t0 = mock(TridentTuple.class);
 +    TridentTuple t1 = mock(TridentTuple.class);
 +    doReturn(1).when(t0).get(0);
 +    doReturn(2).when(t1).get(0);
 +    doReturn(Lists.<Object>newArrayList(1, "2")).when(t0).getValues();
 +    doReturn(Lists.<Object>newArrayList(2, "3")).when(t1).getValues();
 +    tupleList.add(t0);
 +    tupleList.add(t1);
 +    return tupleList;
 +  }
 +
-   private static class KafkaMessageMatcher extends ArgumentMatcher<KeyedMessage> {
++  private static class KafkaMessageMatcher extends ArgumentMatcher<ProducerRecord> {
 +    private static final int PRIMARY_INDEX = 0;
 +    private final TridentTuple tuple;
 +
 +    private KafkaMessageMatcher(TridentTuple tuple) {
 +      this.tuple = tuple;
 +    }
 +
 +    @SuppressWarnings("unchecked")
 +    @Override
 +    public boolean matches(Object o) {
-       KeyedMessage<Object, ByteBuffer> m = (KeyedMessage<Object,ByteBuffer>)o;
++      ProducerRecord<Object, ByteBuffer> m = (ProducerRecord<Object,ByteBuffer>)o;
 +      if (m.key() != tuple.get(PRIMARY_INDEX)) {
 +        return false;
 +      }
-       ByteBuffer buf = m.message();
++      ByteBuffer buf = m.value();
 +      ByteBuffer b = SERIALIZER.write(tuple.getValues(), null);
 +      return b.equals(buf);
 +    }
 +  }
 +
 +}
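The TestKafkaDataSourcesProvider hunk swaps the mocked `KeyedMessage` matcher for a `ProducerRecord` one: `key()` stays the same, but `message()` becomes `value()`. Stripped of the Mockito machinery, the predicate the matcher implements is just key equality plus a byte-identical payload check. A minimal plain-Java sketch of that predicate follows — the `Record` class below is a hypothetical stand-in for the Kafka API, not the real `ProducerRecord`:

```java
import java.nio.ByteBuffer;
import java.util.Objects;

public class RecordMatcherSketch {
    // Hypothetical stand-in for a produced record: a key plus a
    // serialized value, mirroring ProducerRecord's key()/value() shape.
    static final class Record {
        final Object key;
        final ByteBuffer value;
        Record(Object key, ByteBuffer value) {
            this.key = key;
            this.value = value;
        }
    }

    // The matcher's core check: same key, byte-identical value.
    // ByteBuffer.equals compares the remaining bytes of both buffers.
    static boolean matches(Record r, Object expectedKey, ByteBuffer expectedValue) {
        return Objects.equals(r.key, expectedKey) && expectedValue.equals(r.value);
    }

    public static void main(String[] args) {
        ByteBuffer payload = ByteBuffer.wrap("{\"ID\":1}".getBytes());
        Record r = new Record(1, payload.duplicate());
        System.out.println(matches(r, 1, payload));   // same key, same bytes
        System.out.println(matches(r, 2, payload));   // wrong key
    }
}
```

The same two checks are what `KafkaMessageMatcher.matches` performs in the diff, only wrapped in Mockito's `ArgumentMatcher` so `verify(producer).send(argThat(...))` can assert on the record the state handed to the producer.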

http://git-wip-us.apache.org/repos/asf/storm/blob/31b49594/pom.xml
----------------------------------------------------------------------
diff --cc pom.xml
index faf94ab,0da88b0..3ed8e3f
--- a/pom.xml
+++ b/pom.xml
@@@ -225,10 -210,37 +210,39 @@@
          <hive.version>0.14.0</hive.version>
          <hadoop.version>2.6.0</hadoop.version>
          <kryo.version>2.21</kryo.version>
+         <servlet.version>2.5</servlet.version>
+         <joda-time.version>2.3</joda-time.version>
+         <jackson.version>2.3.1</jackson.version>
+         <thrift.version>0.9.3</thrift.version>
+         <junit.version>4.11</junit.version>
+         <metrics-clojure.version>2.5.1</metrics-clojure.version>
+         <hdrhistogram.version>2.1.7</hdrhistogram.version>
 +        <calcite.version>1.4.0-incubating</calcite.version>
-         <jackson.version>2.6.3</jackson.version>
      </properties>
  
+     <modules>
+         <module>storm-multilang/javascript</module>
+         <module>storm-multilang/python</module>
+         <module>storm-multilang/ruby</module>
+         <module>storm-buildtools/maven-shade-clojure-transformer</module>
+         <module>storm-buildtools/storm-maven-plugins</module>
+         <module>storm-core</module>
+         <module>external/storm-kafka</module>
+         <module>external/storm-hdfs</module>
+         <module>external/storm-hbase</module>
+         <module>external/storm-hive</module>
+         <module>external/storm-jdbc</module>
+         <module>external/storm-redis</module>
+         <module>external/storm-eventhubs</module>
+         <module>external/flux</module>
+         <module>external/storm-elasticsearch</module>
+         <module>external/storm-solr</module>
+         <module>external/storm-metrics</module>
++        <module>external/sql</module>
+         <module>examples/storm-starter</module>
+     </modules>
+ 
+ 
      <profiles>
          <profile>
              <id>sign</id>
@@@ -577,22 -660,11 +662,21 @@@
                  <scope>compile</scope>
              </dependency>
              <dependency>
-               <groupId>org.apache.calcite</groupId>
-               <artifactId>calcite-core</artifactId>
-               <version>${calcite.version}</version>
++                <groupId>org.apache.calcite</groupId>
++                <artifactId>calcite-core</artifactId>
++                <version>${calcite.version}</version>
++            </dependency>
++            <dependency>
++                <groupId>com.fasterxml.jackson.core</groupId>
++                <artifactId>jackson-databind</artifactId>
++                <version>${jackson.version}</version>
 +            </dependency>
 +            <dependency>
-               <groupId>com.fasterxml.jackson.core</groupId>
-               <artifactId>jackson-databind</artifactId>
-               <version>${jackson.version}</version>
+                 <groupId>junit</groupId>
+                 <artifactId>junit</artifactId>
+                 <version>${junit.version}</version>
+                 <scope>test</scope>
              </dependency>
- 			<!-- used by examples/storm-starter -->
- 		    <dependency>
- 		      <groupId>junit</groupId>
- 		      <artifactId>junit</artifactId>
- 		      <version>4.11</version>
- 		      <scope>test</scope>
- 		    </dependency>
          </dependencies>
      </dependencyManagement>
  
@@@ -724,6 -796,73 +808,76 @@@
                  <artifactId>clojure-maven-plugin</artifactId>
                  <extensions>true</extensions>
              </plugin>
+             <plugin>
+                 <groupId>org.apache.rat</groupId>
+                 <artifactId>apache-rat-plugin</artifactId>
+                 <version>0.11</version>
+                 <executions>
+                     <execution>
+                         <phase>verify</phase>
+                         <goals>
+                             <goal>check</goal>
+                         </goals>
+                     </execution>
+                 </executions>
+                 <configuration>
+                     <excludeSubProjects>false</excludeSubProjects>
+                     <excludes>
+                         <!-- exclude maven artifacts -->
+                         <exclude>**/target/**</exclude>
+                         <!-- exclude intellij projects -->
+                         <exclude>**/*.iml</exclude>
+                         <exclude>**/.idea/**</exclude>
+                         <!-- module specific testing artifacts -->
+                         <exclude>**/metastore_db/**</exclude>
+ 
+                         <!-- exclude CHANGELOG, VERSION, AND TODO files -->
+                         <exclude>**/CHANGELOG.md</exclude>
+                         <exclude>**/README.md</exclude>
+                         <exclude>**/README.markdown</exclude>
+                         <exclude>**/DEVELOPER.md</exclude>
+                         <exclude>**/BYLAWS.md</exclude>
+                         <exclude>**/STORM-UI-REST-API.md</exclude>
+                         <exclude>SECURITY.md</exclude>
+                         <exclude>VERSION</exclude>
+                         <exclude>TODO</exclude>
+                         <!-- thrift-generated code -->
+                         <exclude>**/src/py/**</exclude>
+ 
+                         <!-- the following are in the LICENSE file -->
+                         <exclude>**/src/ui/public/js/jquery.dataTables.1.10.4.min.js</exclude>
+                         <exclude>**/src/ui/public/css/jquery.dataTables.1.10.4.min.css</exclude>
+                         <exclude>**/src/ui/public/images/*</exclude>
+                         <exclude>**/src/ui/public/js/bootstrap-3.3.1.min.js</exclude>
+                         <exclude>**/src/ui/public/css/bootstrap-3.3.1.min.css</exclude>
+                         <exclude>**/src/ui/public/js/dataTables.bootstrap.min.js</exclude>
+                         <exclude>**/src/ui/public/css/dataTables.bootstrap.css</exclude>
+                         <exclude>**/src/ui/public/js/jsonFormatter.min.js</exclude>
+                         <exclude>**/src/ui/public/css/jsonFormatter.min.css</exclude>
+                         <exclude>**/src/ui/public/js/jquery-1.11.1.min.js</exclude>
+                         <exclude>**/src/ui/public/js/jquery.cookies.2.2.0.min.js</exclude>
+                         <exclude>**/src/ui/public/js/moment.min.js</exclude>
+                         <exclude>**/src/ui/public/js/jquery.blockUI.min.js</exclude>
+                         <exclude>**/src/ui/public/js/url.min.js</exclude>
+                         <exclude>**/src/ui/public/js/arbor.js</exclude>
+                         <exclude>**/src/ui/public/js/arbor-graphics.js</exclude>
+                         <exclude>**/src/ui/public/js/arbor-tween.js</exclude>
+                         <exclude>**/src/ui/public/js/jquery.mustache.js</exclude>
+                         <exclude>**/src/ui/public/js/typeahead.jquery.min.js</exclude>
+ 
+                         <!-- generated by shade plugin -->
+                         <exclude>**/dependency-reduced-pom.xml</exclude>
+ 
+                         <exclude>**/docs/**</exclude>
+                         <exclude>**/.git/**</exclude>
+                         <exclude>**/derby.log</exclude>
+                         <exclude>**/src/dev/**</exclude>
++                        <!-- Storm SQL -->
++                        <exclude>**/src/codegen/config.fmpp</exclude>
++                        <exclude>**/src/codegen/data/Parser.tdd</exclude>
+                     </excludes>
+                 </configuration>
+             </plugin>
          </plugins>
      </build>
  

http://git-wip-us.apache.org/repos/asf/storm/blob/31b49594/storm-core/src/jvm/backtype/storm/utils/Utils.java
----------------------------------------------------------------------
diff --cc storm-core/src/jvm/backtype/storm/utils/Utils.java
index b80aa11,c086be2..47d2332
--- a/storm-core/src/jvm/backtype/storm/utils/Utils.java
+++ b/storm-core/src/jvm/backtype/storm/utils/Utils.java
@@@ -57,9 -83,10 +85,11 @@@ import org.apache.thrift.TSerializer
  public class Utils {
      private static final Logger LOG = LoggerFactory.getLogger(Utils.class);
      public static final String DEFAULT_STREAM_ID = "default";
+     private static ThreadLocal<TSerializer> threadSer = new ThreadLocal<TSerializer>();
+     private static ThreadLocal<TDeserializer> threadDes = new ThreadLocal<TDeserializer>();
  
      private static SerializationDelegate serializationDelegate;
 +    private static ClassLoader cl = ClassLoader.getSystemClassLoader();
  
      static {
          Map conf = readStormConfig();
@@@ -659,30 -744,123 +747,133 @@@
          return delegate;
      }
  
-   public static void handleUncaughtException(Throwable t) {
-     if (t != null && t instanceof Error) {
-       if (t instanceof OutOfMemoryError) {
+     public static void handleUncaughtException(Throwable t) {
+         if (t != null && t instanceof Error) {
+             if (t instanceof OutOfMemoryError) {
+                 try {
+                     System.err.println("Halting due to Out Of Memory Error..." + Thread.currentThread().getName());
+                 } catch (Throwable err) {
+                     //Again we don't want to exit because of logging issues.
+                 }
+                 Runtime.getRuntime().halt(-1);
+             } else {
+                 //Running in daemon mode, we would pass Error to calling thread.
+                 throw (Error) t;
+             }
+         }
+     }
+ 
+     /**
+      * Given a File input, unzips the file into the unzip directory
+      * passed as the second parameter.
+      * @param inFile The zip file as input
+      * @param unzipDir The unzip directory where to unzip the zip file.
+      * @throws IOException
+      */
+     public static void unZip(File inFile, File unzipDir) throws IOException {
+         Enumeration<? extends ZipEntry> entries;
+         ZipFile zipFile = new ZipFile(inFile);
+ 
          try {
-           System.err.println("Halting due to Out Of Memory Error..." + Thread.currentThread().getName());
-         } catch (Throwable err) {
-           //Again we don't want to exit because of logging issues.
-         }
-         Runtime.getRuntime().halt(-1);
-       } else {
-         //Running in daemon mode, we would pass Error to calling thread.
-         throw (Error) t;
-       }
-     }
-   }
- 
-   @VisibleForTesting
-   public static void setClassLoaderForJavaDeSerialize(ClassLoader cl) {
-     Utils.cl = cl;
-   }
- 
-   @VisibleForTesting
-   public static void resetClassLoaderForJavaDeSerialize() {
-     Utils.cl = ClassLoader.getSystemClassLoader();
-   }
+             entries = zipFile.entries();
+             while (entries.hasMoreElements()) {
+                 ZipEntry entry = entries.nextElement();
+                 if (!entry.isDirectory()) {
+                     InputStream in = zipFile.getInputStream(entry);
+                     try {
+                         File file = new File(unzipDir, entry.getName());
+                         if (!file.getParentFile().mkdirs()) {
+                             if (!file.getParentFile().isDirectory()) {
+                                 throw new IOException("Mkdirs failed to create " +
+                                         file.getParentFile().toString());
+                             }
+                         }
+                         OutputStream out = new FileOutputStream(file);
+                         try {
+                             byte[] buffer = new byte[8192];
+                             int i;
+                             while ((i = in.read(buffer)) != -1) {
+                                 out.write(buffer, 0, i);
+                             }
+                         } finally {
+                             out.close();
+                         }
+                     } finally {
+                         in.close();
+                     }
+                 }
+             }
+         } finally {
+             zipFile.close();
+         }
+     }
+ 
+     /**
+      * Given a zip File input, returns its uncompressed size.
+      * Only works for zip files whose uncompressed size is less than 4 GB;
+      * otherwise returns the size modulo 2^32, per the gzip specification.
+      * @param myFile The zip file as input
+      * @throws IOException
+      * @return zip file size as a long
+      */
+     public static long zipFileSize(File myFile) throws IOException{
+         RandomAccessFile raf = new RandomAccessFile(myFile, "r");
+         raf.seek(raf.length() - 4);
+         long b4 = raf.read();
+         long b3 = raf.read();
+         long b2 = raf.read();
+         long b1 = raf.read();
+         long val = (b1 << 24) | (b2 << 16) + (b3 << 8) + b4;
+         raf.close();
+         return val;
+     }
+ 
+     public static double zeroIfNaNOrInf(double x) {
+         return (Double.isNaN(x) || Double.isInfinite(x)) ? 0.0 : x;
+     }
+ 
+     /**
+      * Parses the arguments to extract the JVM heap memory size in MB.
+      * @param input
+      * @param defaultValue
+      * @return the value of the JVM heap memory setting (in MB) in a java command.
+      */
+     public static Double parseJvmHeapMemByChildOpts(String input, Double defaultValue) {
+         if (input != null) {
+             Pattern optsPattern = Pattern.compile("Xmx[0-9]+[mkgMKG]");
+             Matcher m = optsPattern.matcher(input);
+             String memoryOpts = null;
+             while (m.find()) {
+                 memoryOpts = m.group();
+             }
+             if (memoryOpts != null) {
+                 int unit = 1;
+                 if (memoryOpts.toLowerCase().endsWith("k")) {
+                     unit = 1024;
+                 } else if (memoryOpts.toLowerCase().endsWith("m")) {
+                     unit = 1024 * 1024;
+                 } else if (memoryOpts.toLowerCase().endsWith("g")) {
+                     unit = 1024 * 1024 * 1024;
+                 }
+                 memoryOpts = memoryOpts.replaceAll("[a-zA-Z]", "");
+                 Double result =  Double.parseDouble(memoryOpts) * unit / 1024.0 / 1024.0;
+                 return (result < 1.0) ? 1.0 : result;
+             } else {
+                 return defaultValue;
+             }
+         } else {
+             return defaultValue;
+         }
+     }
++
++    @VisibleForTesting
++    public static void setClassLoaderForJavaDeSerialize(ClassLoader cl) {
++        Utils.cl = cl;
++    }
++
++    @VisibleForTesting
++    public static void resetClassLoaderForJavaDeSerialize() {
++        Utils.cl = ClassLoader.getSystemClassLoader();
++    }
  }
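
The -Xmx parsing added to Utils.java above can be sketched as a standalone class. This is a minimal sketch of the same idea, not Storm code: the names HeapOptsSketch and parseHeapMb are ours, and like the Storm method it keeps the last -Xmx match, converts to MB, and clamps tiny values to 1 MB.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Standalone sketch of the -Xmx parsing idea in parseJvmHeapMemByChildOpts.
public class HeapOptsSketch {
    static final Pattern XMX = Pattern.compile("Xmx([0-9]+)([mkgMKG])");

    // Returns the last -Xmx value found in opts, converted to MB,
    // or defaultMb when no -Xmx setting is present.
    static double parseHeapMb(String opts, double defaultMb) {
        if (opts == null) {
            return defaultMb;
        }
        Matcher m = XMX.matcher(opts);
        long bytes = -1;
        while (m.find()) {              // keep the last match, like the Storm code
            long n = Long.parseLong(m.group(1));
            switch (Character.toLowerCase(m.group(2).charAt(0))) {
                case 'k': bytes = n * 1024L; break;
                case 'm': bytes = n * 1024L * 1024L; break;
                case 'g': bytes = n * 1024L * 1024L * 1024L; break;
            }
        }
        if (bytes < 0) {
            return defaultMb;
        }
        double mb = bytes / 1024.0 / 1024.0;
        return Math.max(mb, 1.0);       // clamp tiny settings to 1 MB, as Storm does
    }

    public static void main(String[] args) {
        System.out.println(parseHeapMb("-Xms512m -Xmx2g", 768.0)); // 2048.0
        System.out.println(parseHeapMb("-verbose:gc", 768.0));     // 768.0
    }
}
```

Keeping only the last match matters because worker child opts are often concatenated from several config layers, so the final -Xmx is the one the JVM honors.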
  


[29/50] [abbrv] storm git commit: STORM-1220. Avoid double copying in the Kafka spout.

Posted by sr...@apache.org.
STORM-1220. Avoid double copying in the Kafka spout.


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/35f1da78
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/35f1da78
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/35f1da78

Branch: refs/heads/STORM-1040
Commit: 35f1da7890b597c9634904c0589f6ac64584e539
Parents: a8d253a
Author: Haohui Mai <wh...@apache.org>
Authored: Thu Nov 19 13:32:20 2015 -0800
Committer: Haohui Mai <wh...@apache.org>
Committed: Tue Nov 24 13:23:16 2015 -0800

----------------------------------------------------------------------
 .../src/jvm/storm/kafka/KafkaUtils.java         |  8 ++--
 .../src/jvm/storm/kafka/KeyValueScheme.java     |  5 +--
 .../kafka/KeyValueSchemeAsMultiScheme.java      |  5 ++-
 .../jvm/storm/kafka/MessageMetadataScheme.java  |  6 ++-
 .../MessageMetadataSchemeAsMultiScheme.java     |  3 +-
 .../jvm/storm/kafka/StringKeyValueScheme.java   |  3 +-
 .../kafka/StringMessageAndMetadataScheme.java   |  7 ++--
 .../storm/kafka/StringMultiSchemeWithTopic.java | 21 +++-------
 .../src/jvm/storm/kafka/StringScheme.java       | 20 ++++++----
 .../storm/kafka/StringKeyValueSchemeTest.java   | 17 ++++++---
 .../src/test/storm/kafka/TestStringScheme.java  | 40 ++++++++++++++++++++
 .../jvm/backtype/storm/spout/MultiScheme.java   |  3 +-
 .../backtype/storm/spout/RawMultiScheme.java    |  3 +-
 .../src/jvm/backtype/storm/spout/RawScheme.java |  9 ++++-
 .../src/jvm/backtype/storm/spout/Scheme.java    |  3 +-
 .../storm/spout/SchemeAsMultiScheme.java        |  3 +-
 16 files changed, 106 insertions(+), 50 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/35f1da78/external/storm-kafka/src/jvm/storm/kafka/KafkaUtils.java
----------------------------------------------------------------------
diff --git a/external/storm-kafka/src/jvm/storm/kafka/KafkaUtils.java b/external/storm-kafka/src/jvm/storm/kafka/KafkaUtils.java
index cd684df..52cdde8 100644
--- a/external/storm-kafka/src/jvm/storm/kafka/KafkaUtils.java
+++ b/external/storm-kafka/src/jvm/storm/kafka/KafkaUtils.java
@@ -221,12 +221,12 @@ public class KafkaUtils {
         }
         ByteBuffer key = msg.key();
         if (key != null && kafkaConfig.scheme instanceof KeyValueSchemeAsMultiScheme) {
-            tups = ((KeyValueSchemeAsMultiScheme) kafkaConfig.scheme).deserializeKeyAndValue(Utils.toByteArray(key), Utils.toByteArray(payload));
+            tups = ((KeyValueSchemeAsMultiScheme) kafkaConfig.scheme).deserializeKeyAndValue(key, payload);
         } else {
             if (kafkaConfig.scheme instanceof StringMultiSchemeWithTopic) {
-                tups = ((StringMultiSchemeWithTopic)kafkaConfig.scheme).deserializeWithTopic(topic, Utils.toByteArray(payload));
+                tups = ((StringMultiSchemeWithTopic)kafkaConfig.scheme).deserializeWithTopic(topic, payload);
             } else {
-                tups = kafkaConfig.scheme.deserialize(Utils.toByteArray(payload));
+                tups = kafkaConfig.scheme.deserialize(payload);
             }
         }
         return tups;
@@ -237,7 +237,7 @@ public class KafkaUtils {
         if (payload == null) {
             return null;
         }
-        return scheme.deserializeMessageWithMetadata(Utils.toByteArray(payload), partition, offset);
+        return scheme.deserializeMessageWithMetadata(payload, partition, offset);
     }
 
 

http://git-wip-us.apache.org/repos/asf/storm/blob/35f1da78/external/storm-kafka/src/jvm/storm/kafka/KeyValueScheme.java
----------------------------------------------------------------------
diff --git a/external/storm-kafka/src/jvm/storm/kafka/KeyValueScheme.java b/external/storm-kafka/src/jvm/storm/kafka/KeyValueScheme.java
index f42f7c8..7c0dc6c 100644
--- a/external/storm-kafka/src/jvm/storm/kafka/KeyValueScheme.java
+++ b/external/storm-kafka/src/jvm/storm/kafka/KeyValueScheme.java
@@ -19,10 +19,9 @@ package storm.kafka;
 
 import backtype.storm.spout.Scheme;
 
+import java.nio.ByteBuffer;
 import java.util.List;
 
 public interface KeyValueScheme extends Scheme {
-
-    public List<Object> deserializeKeyAndValue(byte[] key, byte[] value);
-
+    List<Object> deserializeKeyAndValue(ByteBuffer key, ByteBuffer value);
 }

http://git-wip-us.apache.org/repos/asf/storm/blob/35f1da78/external/storm-kafka/src/jvm/storm/kafka/KeyValueSchemeAsMultiScheme.java
----------------------------------------------------------------------
diff --git a/external/storm-kafka/src/jvm/storm/kafka/KeyValueSchemeAsMultiScheme.java b/external/storm-kafka/src/jvm/storm/kafka/KeyValueSchemeAsMultiScheme.java
index 7def6ac..d27ae7e 100644
--- a/external/storm-kafka/src/jvm/storm/kafka/KeyValueSchemeAsMultiScheme.java
+++ b/external/storm-kafka/src/jvm/storm/kafka/KeyValueSchemeAsMultiScheme.java
@@ -19,16 +19,17 @@ package storm.kafka;
 
 import backtype.storm.spout.SchemeAsMultiScheme;
 
+import java.nio.ByteBuffer;
 import java.util.Arrays;
 import java.util.List;
 
-public class KeyValueSchemeAsMultiScheme extends SchemeAsMultiScheme{
+public class KeyValueSchemeAsMultiScheme extends SchemeAsMultiScheme {
 
     public KeyValueSchemeAsMultiScheme(KeyValueScheme scheme) {
         super(scheme);
     }
 
-    public Iterable<List<Object>> deserializeKeyAndValue(final byte[] key, final byte[] value) {
+    public Iterable<List<Object>> deserializeKeyAndValue(final ByteBuffer key, final ByteBuffer value) {
         List<Object> o = ((KeyValueScheme)scheme).deserializeKeyAndValue(key, value);
         if(o == null) return null;
         else return Arrays.asList(o);

http://git-wip-us.apache.org/repos/asf/storm/blob/35f1da78/external/storm-kafka/src/jvm/storm/kafka/MessageMetadataScheme.java
----------------------------------------------------------------------
diff --git a/external/storm-kafka/src/jvm/storm/kafka/MessageMetadataScheme.java b/external/storm-kafka/src/jvm/storm/kafka/MessageMetadataScheme.java
index 92a5598..62f652f 100644
--- a/external/storm-kafka/src/jvm/storm/kafka/MessageMetadataScheme.java
+++ b/external/storm-kafka/src/jvm/storm/kafka/MessageMetadataScheme.java
@@ -17,9 +17,11 @@
  */
 package storm.kafka;
 
-import java.util.List;
 import backtype.storm.spout.Scheme;
 
+import java.nio.ByteBuffer;
+import java.util.List;
+
 public interface MessageMetadataScheme extends Scheme {
-    public List<Object> deserializeMessageWithMetadata(byte[] message, Partition partition, long offset);
+    List<Object> deserializeMessageWithMetadata(ByteBuffer message, Partition partition, long offset);
 }

http://git-wip-us.apache.org/repos/asf/storm/blob/35f1da78/external/storm-kafka/src/jvm/storm/kafka/MessageMetadataSchemeAsMultiScheme.java
----------------------------------------------------------------------
diff --git a/external/storm-kafka/src/jvm/storm/kafka/MessageMetadataSchemeAsMultiScheme.java b/external/storm-kafka/src/jvm/storm/kafka/MessageMetadataSchemeAsMultiScheme.java
index 0567809..f23a101 100644
--- a/external/storm-kafka/src/jvm/storm/kafka/MessageMetadataSchemeAsMultiScheme.java
+++ b/external/storm-kafka/src/jvm/storm/kafka/MessageMetadataSchemeAsMultiScheme.java
@@ -17,6 +17,7 @@
  */
 package storm.kafka;
 
+import java.nio.ByteBuffer;
 import java.util.Arrays;
 import java.util.List;
 
@@ -29,7 +30,7 @@ public class MessageMetadataSchemeAsMultiScheme extends SchemeAsMultiScheme {
         super(scheme);
     }
 
-    public Iterable<List<Object>> deserializeMessageWithMetadata(byte[] message, Partition partition, long offset) {
+    public Iterable<List<Object>> deserializeMessageWithMetadata(ByteBuffer message, Partition partition, long offset) {
         List<Object> o = ((MessageMetadataScheme) scheme).deserializeMessageWithMetadata(message, partition, offset);
         if (o == null) {
             return null;

http://git-wip-us.apache.org/repos/asf/storm/blob/35f1da78/external/storm-kafka/src/jvm/storm/kafka/StringKeyValueScheme.java
----------------------------------------------------------------------
diff --git a/external/storm-kafka/src/jvm/storm/kafka/StringKeyValueScheme.java b/external/storm-kafka/src/jvm/storm/kafka/StringKeyValueScheme.java
index 41cacb6..6f6d339 100644
--- a/external/storm-kafka/src/jvm/storm/kafka/StringKeyValueScheme.java
+++ b/external/storm-kafka/src/jvm/storm/kafka/StringKeyValueScheme.java
@@ -20,12 +20,13 @@ package storm.kafka;
 import backtype.storm.tuple.Values;
 import com.google.common.collect.ImmutableMap;
 
+import java.nio.ByteBuffer;
 import java.util.List;
 
 public class StringKeyValueScheme extends StringScheme implements KeyValueScheme {
 
     @Override
-    public List<Object> deserializeKeyAndValue(byte[] key, byte[] value) {
+    public List<Object> deserializeKeyAndValue(ByteBuffer key, ByteBuffer value) {
         if ( key == null ) {
             return deserialize(value);
         }

http://git-wip-us.apache.org/repos/asf/storm/blob/35f1da78/external/storm-kafka/src/jvm/storm/kafka/StringMessageAndMetadataScheme.java
----------------------------------------------------------------------
diff --git a/external/storm-kafka/src/jvm/storm/kafka/StringMessageAndMetadataScheme.java b/external/storm-kafka/src/jvm/storm/kafka/StringMessageAndMetadataScheme.java
index 031d497..1708b97 100644
--- a/external/storm-kafka/src/jvm/storm/kafka/StringMessageAndMetadataScheme.java
+++ b/external/storm-kafka/src/jvm/storm/kafka/StringMessageAndMetadataScheme.java
@@ -17,11 +17,12 @@
  */
 package storm.kafka;
 
-import java.util.List;
-
 import backtype.storm.tuple.Fields;
 import backtype.storm.tuple.Values;
 
+import java.nio.ByteBuffer;
+import java.util.List;
+
 public class StringMessageAndMetadataScheme extends StringScheme implements MessageMetadataScheme {
     private static final long serialVersionUID = -5441841920447947374L;
 
@@ -29,7 +30,7 @@ public class StringMessageAndMetadataScheme extends StringScheme implements Mess
     public static final String STRING_SCHEME_OFFSET = "offset";
 
     @Override
-    public List<Object> deserializeMessageWithMetadata(byte[] message, Partition partition, long offset) {
+    public List<Object> deserializeMessageWithMetadata(ByteBuffer message, Partition partition, long offset) {
         String stringMessage = StringScheme.deserializeString(message);
         return new Values(stringMessage, partition.partition, offset);
     }

http://git-wip-us.apache.org/repos/asf/storm/blob/35f1da78/external/storm-kafka/src/jvm/storm/kafka/StringMultiSchemeWithTopic.java
----------------------------------------------------------------------
diff --git a/external/storm-kafka/src/jvm/storm/kafka/StringMultiSchemeWithTopic.java b/external/storm-kafka/src/jvm/storm/kafka/StringMultiSchemeWithTopic.java
index e0da2ce..1e7f216 100644
--- a/external/storm-kafka/src/jvm/storm/kafka/StringMultiSchemeWithTopic.java
+++ b/external/storm-kafka/src/jvm/storm/kafka/StringMultiSchemeWithTopic.java
@@ -18,13 +18,12 @@
 package storm.kafka;
 
 import backtype.storm.spout.MultiScheme;
-import backtype.storm.spout.Scheme;
 import backtype.storm.tuple.Fields;
 import backtype.storm.tuple.Values;
 import sun.reflect.generics.reflectiveObjects.NotImplementedException;
 
-import java.io.UnsupportedEncodingException;
-import java.util.Arrays;
+import java.nio.ByteBuffer;
+import java.util.Collections;
 import java.util.List;
 
 public class StringMultiSchemeWithTopic
@@ -34,24 +33,16 @@ public class StringMultiSchemeWithTopic
     public static final String TOPIC_KEY = "topic";
 
     @Override
-    public Iterable<List<Object>> deserialize(byte[] bytes) {
+    public Iterable<List<Object>> deserialize(ByteBuffer bytes) {
         throw new NotImplementedException();
     }
 
-    public Iterable<List<Object>> deserializeWithTopic(String topic, byte[] bytes) {
-        List<Object> items = new Values(deserializeString(bytes), topic);
-        return Arrays.asList(items);
+    public Iterable<List<Object>> deserializeWithTopic(String topic, ByteBuffer bytes) {
+        List<Object> items = new Values(StringScheme.deserializeString(bytes), topic);
+        return Collections.singletonList(items);
     }
 
     public Fields getOutputFields() {
         return new Fields(STRING_SCHEME_KEY, TOPIC_KEY);
     }
-
-    public static String deserializeString(byte[] string) {
-        try {
-            return new String(string, "UTF-8");
-        } catch (UnsupportedEncodingException e) {
-            throw new RuntimeException(e);
-        }
-    }
 }

http://git-wip-us.apache.org/repos/asf/storm/blob/35f1da78/external/storm-kafka/src/jvm/storm/kafka/StringScheme.java
----------------------------------------------------------------------
diff --git a/external/storm-kafka/src/jvm/storm/kafka/StringScheme.java b/external/storm-kafka/src/jvm/storm/kafka/StringScheme.java
index 286dc9b..1071e60 100644
--- a/external/storm-kafka/src/jvm/storm/kafka/StringScheme.java
+++ b/external/storm-kafka/src/jvm/storm/kafka/StringScheme.java
@@ -20,23 +20,27 @@ package storm.kafka;
 import backtype.storm.spout.Scheme;
 import backtype.storm.tuple.Fields;
 import backtype.storm.tuple.Values;
+import backtype.storm.utils.Utils;
 
-import java.io.UnsupportedEncodingException;
+import java.nio.ByteBuffer;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
 import java.util.List;
 
 public class StringScheme implements Scheme {
-
+    private static final Charset UTF8_CHARSET = StandardCharsets.UTF_8;
     public static final String STRING_SCHEME_KEY = "str";
 
-    public List<Object> deserialize(byte[] bytes) {
+    public List<Object> deserialize(ByteBuffer bytes) {
         return new Values(deserializeString(bytes));
     }
 
-    public static String deserializeString(byte[] string) {
-        try {
-            return new String(string, "UTF-8");
-        } catch (UnsupportedEncodingException e) {
-            throw new RuntimeException(e);
+    public static String deserializeString(ByteBuffer string) {
+        if (string.hasArray()) {
+            int base = string.arrayOffset();
+            return new String(string.array(), base + string.position(), string.remaining());
+        } else {
+            return new String(Utils.toByteArray(string), UTF8_CHARSET);
         }
     }
 

http://git-wip-us.apache.org/repos/asf/storm/blob/35f1da78/external/storm-kafka/src/test/storm/kafka/StringKeyValueSchemeTest.java
----------------------------------------------------------------------
diff --git a/external/storm-kafka/src/test/storm/kafka/StringKeyValueSchemeTest.java b/external/storm-kafka/src/test/storm/kafka/StringKeyValueSchemeTest.java
index 0b786ba..eddb900 100644
--- a/external/storm-kafka/src/test/storm/kafka/StringKeyValueSchemeTest.java
+++ b/external/storm-kafka/src/test/storm/kafka/StringKeyValueSchemeTest.java
@@ -21,7 +21,9 @@ import backtype.storm.tuple.Fields;
 import com.google.common.collect.ImmutableMap;
 import org.junit.Test;
 
-import java.util.Arrays;
+import java.nio.ByteBuffer;
+import java.nio.charset.Charset;
+import java.util.Collections;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
@@ -32,7 +34,7 @@ public class StringKeyValueSchemeTest {
 
     @Test
     public void testDeserialize() throws Exception {
-        assertEquals(Arrays.asList("test"), scheme.deserialize("test".getBytes()));
+        assertEquals(Collections.singletonList("test"), scheme.deserialize(wrapString("test")));
     }
 
     @Test
@@ -44,12 +46,17 @@ public class StringKeyValueSchemeTest {
 
     @Test
     public void testDeserializeWithNullKeyAndValue() throws Exception {
-        assertEquals(Arrays.asList("test"), scheme.deserializeKeyAndValue(null, "test".getBytes()));
+        assertEquals(Collections.singletonList("test"),
+            scheme.deserializeKeyAndValue(null, wrapString("test")));
     }
 
     @Test
     public void testDeserializeWithKeyAndValue() throws Exception {
-        assertEquals(Arrays.asList(ImmutableMap.of("key", "test")),
-                scheme.deserializeKeyAndValue("key".getBytes(), "test".getBytes()));
+        assertEquals(Collections.singletonList(ImmutableMap.of("key", "test")),
+                scheme.deserializeKeyAndValue(wrapString("key"), wrapString("test")));
+    }
+
+    private static ByteBuffer wrapString(String s) {
+        return ByteBuffer.wrap(s.getBytes(Charset.defaultCharset()));
     }
 }

http://git-wip-us.apache.org/repos/asf/storm/blob/35f1da78/external/storm-kafka/src/test/storm/kafka/TestStringScheme.java
----------------------------------------------------------------------
diff --git a/external/storm-kafka/src/test/storm/kafka/TestStringScheme.java b/external/storm-kafka/src/test/storm/kafka/TestStringScheme.java
new file mode 100644
index 0000000..ae36409
--- /dev/null
+++ b/external/storm-kafka/src/test/storm/kafka/TestStringScheme.java
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package storm.kafka;
+
+import org.junit.Test;
+
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
+
+import static org.junit.Assert.assertEquals;
+
+public class TestStringScheme {
+  @Test
+  public void testDeserializeString() {
+    String s = "foo";
+    byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
+    ByteBuffer direct = ByteBuffer.allocateDirect(bytes.length);
+    direct.put(bytes);
+    direct.flip();
+    String s1 = StringScheme.deserializeString(ByteBuffer.wrap(bytes));
+    String s2 = StringScheme.deserializeString(direct);
+    assertEquals(s, s1);
+    assertEquals(s, s2);
+  }
+}

http://git-wip-us.apache.org/repos/asf/storm/blob/35f1da78/storm-core/src/jvm/backtype/storm/spout/MultiScheme.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/spout/MultiScheme.java b/storm-core/src/jvm/backtype/storm/spout/MultiScheme.java
index ca2ce91..57bf4ce 100644
--- a/storm-core/src/jvm/backtype/storm/spout/MultiScheme.java
+++ b/storm-core/src/jvm/backtype/storm/spout/MultiScheme.java
@@ -17,12 +17,13 @@
  */
 package backtype.storm.spout;
 
+import java.nio.ByteBuffer;
 import java.util.List;
 import java.io.Serializable;
 
 import backtype.storm.tuple.Fields;
 
 public interface MultiScheme extends Serializable {
-  public Iterable<List<Object>> deserialize(byte[] ser);
+  public Iterable<List<Object>> deserialize(ByteBuffer ser);
   public Fields getOutputFields();
 }

http://git-wip-us.apache.org/repos/asf/storm/blob/35f1da78/storm-core/src/jvm/backtype/storm/spout/RawMultiScheme.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/spout/RawMultiScheme.java b/storm-core/src/jvm/backtype/storm/spout/RawMultiScheme.java
index 7f73975..824d16c 100644
--- a/storm-core/src/jvm/backtype/storm/spout/RawMultiScheme.java
+++ b/storm-core/src/jvm/backtype/storm/spout/RawMultiScheme.java
@@ -17,6 +17,7 @@
  */
 package backtype.storm.spout;
 
+import java.nio.ByteBuffer;
 import java.util.List;
 
 import backtype.storm.tuple.Fields;
@@ -27,7 +28,7 @@ import static java.util.Arrays.asList;
 
 public class RawMultiScheme implements MultiScheme {
   @Override
-  public Iterable<List<Object>> deserialize(byte[] ser) {
+  public Iterable<List<Object>> deserialize(ByteBuffer ser) {
     return asList(tuple(ser));
   }
 

http://git-wip-us.apache.org/repos/asf/storm/blob/35f1da78/storm-core/src/jvm/backtype/storm/spout/RawScheme.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/spout/RawScheme.java b/storm-core/src/jvm/backtype/storm/spout/RawScheme.java
index 7e26770..937acb7 100644
--- a/storm-core/src/jvm/backtype/storm/spout/RawScheme.java
+++ b/storm-core/src/jvm/backtype/storm/spout/RawScheme.java
@@ -18,12 +18,17 @@
 package backtype.storm.spout;
 
 import backtype.storm.tuple.Fields;
+import backtype.storm.utils.Utils;
+
+import java.nio.ByteBuffer;
 import java.util.List;
 import static backtype.storm.utils.Utils.tuple;
 
 public class RawScheme implements Scheme {
-    public List<Object> deserialize(byte[] ser) {
-        return tuple(ser);
+    public List<Object> deserialize(ByteBuffer ser) {
+        // Maintain backward compatibility for 0.10
+        byte[] b = Utils.toByteArray(ser);
+        return tuple(new Object[]{b});
     }
 
     public Fields getOutputFields() {

http://git-wip-us.apache.org/repos/asf/storm/blob/35f1da78/storm-core/src/jvm/backtype/storm/spout/Scheme.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/spout/Scheme.java b/storm-core/src/jvm/backtype/storm/spout/Scheme.java
index ca68954..d696a9c 100644
--- a/storm-core/src/jvm/backtype/storm/spout/Scheme.java
+++ b/storm-core/src/jvm/backtype/storm/spout/Scheme.java
@@ -19,10 +19,11 @@ package backtype.storm.spout;
 
 import backtype.storm.tuple.Fields;
 import java.io.Serializable;
+import java.nio.ByteBuffer;
 import java.util.List;
 
 
 public interface Scheme extends Serializable {
-    public List<Object> deserialize(byte[] ser);
+    List<Object> deserialize(ByteBuffer ser);
     public Fields getOutputFields();
 }

http://git-wip-us.apache.org/repos/asf/storm/blob/35f1da78/storm-core/src/jvm/backtype/storm/spout/SchemeAsMultiScheme.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/spout/SchemeAsMultiScheme.java b/storm-core/src/jvm/backtype/storm/spout/SchemeAsMultiScheme.java
index 29f7fce..a49d55f 100644
--- a/storm-core/src/jvm/backtype/storm/spout/SchemeAsMultiScheme.java
+++ b/storm-core/src/jvm/backtype/storm/spout/SchemeAsMultiScheme.java
@@ -17,6 +17,7 @@
  */
 package backtype.storm.spout;
 
+import java.nio.ByteBuffer;
 import java.util.Arrays;
 import java.util.List;
 
@@ -29,7 +30,7 @@ public class SchemeAsMultiScheme implements MultiScheme {
     this.scheme = scheme;
   }
 
-  @Override public Iterable<List<Object>> deserialize(final byte[] ser) {
+  @Override public Iterable<List<Object>> deserialize(final ByteBuffer ser) {
     List<Object> o = scheme.deserialize(ser);
     if(o == null) return null;
     else return Arrays.asList(o);
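
The core of STORM-1220 is the zero-copy decode in StringScheme: heap-backed ByteBuffers decode straight from their backing array, while direct buffers need one copy into a byte[]. The sketch below is ours, not Storm's (the names ByteBufferDecodeSketch and decodeUtf8 are illustrative); note that it passes UTF-8 explicitly on the heap path, whereas the committed diff's heap path omits the charset and thus uses the platform default.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch of the zero-copy ByteBuffer-to-String pattern from StringScheme.
public class ByteBufferDecodeSketch {
    static String decodeUtf8(ByteBuffer buf) {
        if (buf.hasArray()) {
            // Heap buffer: decode in place, honoring arrayOffset() and position().
            int base = buf.arrayOffset() + buf.position();
            return new String(buf.array(), base, buf.remaining(), StandardCharsets.UTF_8);
        }
        // Direct buffer: no backing array, so copy once; reading through a
        // duplicate leaves the caller's position untouched.
        byte[] tmp = new byte[buf.remaining()];
        buf.duplicate().get(tmp);
        return new String(tmp, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] bytes = "foo".getBytes(StandardCharsets.UTF_8);
        ByteBuffer direct = ByteBuffer.allocateDirect(bytes.length);
        direct.put(bytes);
        direct.flip();
        System.out.println(decodeUtf8(ByteBuffer.wrap(bytes))); // foo
        System.out.println(decodeUtf8(direct));                 // foo
    }
}
```

This is why the Scheme interfaces above switched from byte[] to ByteBuffer: Kafka hands the spout a ByteBuffer, and accepting it directly lets most schemes skip the Utils.toByteArray copy entirely.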


[19/50] [abbrv] storm git commit: Added STORM-126 to Changelog

Posted by sr...@apache.org.
Added STORM-126 to Changelog


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/74cd0421
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/74cd0421
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/74cd0421

Branch: refs/heads/STORM-1040
Commit: 74cd042152df6ff0ea982340ac14e16c1d940161
Parents: 8bcb2f2
Author: Robert (Bobby) Evans <ev...@yahoo-inc.com>
Authored: Tue Nov 24 09:15:37 2015 -0600
Committer: Robert (Bobby) Evans <ev...@yahoo-inc.com>
Committed: Tue Nov 24 09:15:37 2015 -0600

----------------------------------------------------------------------
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/74cd0421/CHANGELOG.md
----------------------------------------------------------------------
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 6c1135a..9f0d482 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,5 @@
 ## 0.11.0
+ * STORM-126: Add Lifecycle support API for worker nodes
  * STORM-1213: Remove sigar binaries from source tree
  * STORM-885:  Heartbeat Server (Pacemaker)
  * STORM-1221: Create a common interface for all Trident spout.


[33/50] [abbrv] storm git commit: Merge branch 'STORM-1217-merge'

Posted by sr...@apache.org.
Merge branch 'STORM-1217-merge'


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/5a71ea07
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/5a71ea07
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/5a71ea07

Branch: refs/heads/STORM-1040
Commit: 5a71ea07c71dd15854a30f5ecb555b19f0a4e59b
Parents: fc3b877 e182624
Author: Derek Dagit <de...@yahoo-inc.com>
Authored: Tue Nov 24 16:50:15 2015 -0600
Committer: Derek Dagit <de...@yahoo-inc.com>
Committed: Tue Nov 24 16:50:15 2015 -0600

----------------------------------------------------------------------
 CHANGELOG.md                                                       | 1 +
 .../src/jvm/storm/starter/ResourceAwareExampleTopology.java        | 2 +-
 .../backtype/storm/scheduler/resource/ResourceAwareScheduler.java  | 2 +-
 .../storm/scheduler/resource/strategies/ResourceAwareStrategy.java | 2 +-
 4 files changed, 4 insertions(+), 3 deletions(-)
----------------------------------------------------------------------



[18/50] [abbrv] storm git commit: Merge branch 'worker-hooks' of https://github.com/socialrank/storm into STORM-126

Posted by sr...@apache.org.
Merge branch 'worker-hooks' of https://github.com/socialrank/storm into STORM-126

STORM-126: Add Lifecycle support API for worker nodes


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/8bcb2f29
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/8bcb2f29
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/8bcb2f29

Branch: refs/heads/STORM-1040
Commit: 8bcb2f29d0fa043631d75f8e7a1a13adf8f5e2bd
Parents: ccf3fd2 9bc9350
Author: Robert (Bobby) Evans <ev...@yahoo-inc.com>
Authored: Tue Nov 24 09:15:01 2015 -0600
Committer: Robert (Bobby) Evans <ev...@yahoo-inc.com>
Committed: Tue Nov 24 09:15:01 2015 -0600

----------------------------------------------------------------------
 .../src/clj/backtype/storm/daemon/common.clj    |   30 +-
 .../src/clj/backtype/storm/daemon/worker.clj    |   27 +-
 .../backtype/storm/generated/Assignment.java    |  244 ++--
 .../jvm/backtype/storm/generated/BoltStats.java |  340 ++---
 .../storm/generated/ClusterSummary.java         |  108 +-
 .../storm/generated/ClusterWorkerHeartbeat.java |   52 +-
 .../storm/generated/ComponentPageInfo.java      |  220 ++--
 .../backtype/storm/generated/Credentials.java   |   44 +-
 .../backtype/storm/generated/ExecutorStats.java |  160 +--
 .../storm/generated/LSApprovedWorkers.java      |   44 +-
 .../generated/LSSupervisorAssignments.java      |   48 +-
 .../backtype/storm/generated/LSTopoHistory.java |   64 +-
 .../storm/generated/LSTopoHistoryList.java      |   36 +-
 .../storm/generated/LSWorkerHeartbeat.java      |   36 +-
 .../storm/generated/LocalAssignment.java        |   36 +-
 .../storm/generated/LocalStateData.java         |   48 +-
 .../jvm/backtype/storm/generated/LogConfig.java |   48 +-
 .../jvm/backtype/storm/generated/Nimbus.java    |   36 +-
 .../jvm/backtype/storm/generated/NodeInfo.java  |   32 +-
 .../storm/generated/RebalanceOptions.java       |   44 +-
 .../backtype/storm/generated/SpoutStats.java    |  224 ++--
 .../jvm/backtype/storm/generated/StormBase.java |   92 +-
 .../backtype/storm/generated/StormTopology.java |  251 +++-
 .../storm/generated/SupervisorInfo.java         |  152 +--
 .../storm/generated/SupervisorSummary.java      |   44 +-
 .../storm/generated/TopologyHistoryInfo.java    |   32 +-
 .../backtype/storm/generated/TopologyInfo.java  |  164 +--
 .../storm/generated/TopologyPageInfo.java       |   96 +-
 .../backtype/storm/generated/TopologyStats.java |  220 ++--
 .../backtype/storm/hooks/BaseWorkerHook.java    |   51 +
 .../jvm/backtype/storm/hooks/IWorkerHook.java   |   44 +
 .../storm/topology/TopologyBuilder.java         |   47 +-
 .../storm/utils/ThriftTopologyUtils.java        |   36 +-
 storm-core/src/py/storm/Nimbus.py               |   14 +-
 storm-core/src/py/storm/ttypes.py               | 1239 +++++++++---------
 storm-core/src/storm.thrift                     |   11 +-
 .../storm/topology/TopologyBuilderTest.java     |    5 +
 .../storm/utils/ThriftTopologyUtilsTest.java    |   94 ++
 38 files changed, 2466 insertions(+), 2047 deletions(-)
----------------------------------------------------------------------



[15/50] [abbrv] storm git commit: remove unused WORKER-HOOK-FIELD def

Posted by sr...@apache.org.
remove unused WORKER-HOOK-FIELD def


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/07d9733b
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/07d9733b
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/07d9733b

Branch: refs/heads/STORM-1040
Commit: 07d9733ba0e38aae02d75a47f8813ae378248e42
Parents: b0c3704
Author: Michael Schonfeld <mi...@schonfeld.org>
Authored: Tue Nov 17 12:28:54 2015 -0500
Committer: Michael Schonfeld <mi...@schonfeld.org>
Committed: Mon Nov 23 18:50:55 2015 -0500

----------------------------------------------------------------------
 storm-core/src/clj/backtype/storm/thrift.clj | 3 ---
 1 file changed, 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/07d9733b/storm-core/src/clj/backtype/storm/thrift.clj
----------------------------------------------------------------------
diff --git a/storm-core/src/clj/backtype/storm/thrift.clj b/storm-core/src/clj/backtype/storm/thrift.clj
index 545ce49..8f4c659 100644
--- a/storm-core/src/clj/backtype/storm/thrift.clj
+++ b/storm-core/src/clj/backtype/storm/thrift.clj
@@ -282,6 +282,3 @@
   [StormTopology$_Fields/SPOUTS
    StormTopology$_Fields/STATE_SPOUTS])
 
-(def WORKER-HOOK-FIELD
-  [StormTopology$_Fields/WORKER_HOOKS])
-


[23/50] [abbrv] storm git commit: Merge branch 'STORM-1203' of github.com:harshach/incubator-storm

Posted by sr...@apache.org.
Merge branch 'STORM-1203' of github.com:harshach/incubator-storm


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/c0c14628
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/c0c14628
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/c0c14628

Branch: refs/heads/STORM-1040
Commit: c0c146281b51a78d4d543049929ae0af08d43507
Parents: dc05a00 5cb4bf5
Author: P. Taylor Goetz <pt...@gmail.com>
Authored: Tue Nov 24 14:46:11 2015 -0500
Committer: P. Taylor Goetz <pt...@gmail.com>
Committed: Tue Nov 24 14:46:11 2015 -0500

----------------------------------------------------------------------
 storm-core/src/clj/backtype/storm/util.clj | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/c0c14628/storm-core/src/clj/backtype/storm/util.clj
----------------------------------------------------------------------


[06/50] [abbrv] storm git commit: add tests for ThriftTopologyUtils

Posted by sr...@apache.org.
add tests for ThriftTopologyUtils


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/9cb86669
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/9cb86669
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/9cb86669

Branch: refs/heads/STORM-1040
Commit: 9cb8666963bf3d00e21e3dfaf406650d5b780721
Parents: b03ce6b
Author: Michael Schonfeld <mi...@schonfeld.org>
Authored: Tue Nov 17 09:09:47 2015 -0500
Committer: Michael Schonfeld <mi...@schonfeld.org>
Committed: Mon Nov 23 18:50:54 2015 -0500

----------------------------------------------------------------------
 .../storm/utils/ThriftTopologyUtilsTest.java    | 77 ++++++++++++++++++++
 1 file changed, 77 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/9cb86669/storm-core/test/jvm/backtype/storm/utils/ThriftTopologyUtilsTest.java
----------------------------------------------------------------------
diff --git a/storm-core/test/jvm/backtype/storm/utils/ThriftTopologyUtilsTest.java b/storm-core/test/jvm/backtype/storm/utils/ThriftTopologyUtilsTest.java
new file mode 100644
index 0000000..0056538
--- /dev/null
+++ b/storm-core/test/jvm/backtype/storm/utils/ThriftTopologyUtilsTest.java
@@ -0,0 +1,77 @@
+package backtype.storm.utils;
+
+import backtype.storm.generated.*;
+import backtype.storm.hooks.BaseWorkerHook;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.ImmutableSet;
+import junit.framework.TestCase;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.nio.ByteBuffer;
+import java.util.Set;
+
+public class ThriftTopologyUtilsTest extends TestCase {
+    @Test
+    public void testIsWorkerHook() {
+        Assert.assertEquals(false, ThriftTopologyUtils.isWorkerHook(StormTopology._Fields.BOLTS));
+        Assert.assertEquals(false, ThriftTopologyUtils.isWorkerHook(StormTopology._Fields.SPOUTS));
+        Assert.assertEquals(false, ThriftTopologyUtils.isWorkerHook(StormTopology._Fields.STATE_SPOUTS));
+        Assert.assertEquals(true, ThriftTopologyUtils.isWorkerHook(StormTopology._Fields.WORKER_HOOKS));
+    }
+
+    @Test
+    public void testGetComponentIdsWithWorkerHook() {
+        StormTopology stormTopology = genereateStormTopology(true);
+        Set<String> componentIds = ThriftTopologyUtils.getComponentIds(stormTopology);
+        Assert.assertEquals(
+                "We expect to get the IDs of the components sans the Worker Hook",
+                ImmutableSet.of("bolt-1", "spout-1"),
+                componentIds);
+    }
+
+    @Test
+    public void testGetComponentIdsWithoutWorkerHook() {
+        StormTopology stormTopology = genereateStormTopology(false);
+        Set<String> componentIds = ThriftTopologyUtils.getComponentIds(stormTopology);
+        Assert.assertEquals(
+                "We expect to get the IDs of the components sans the Worker Hook",
+                ImmutableSet.of("bolt-1", "spout-1"),
+                componentIds);
+    }
+
+    @Test
+    public void testGetComponentCommonWithWorkerHook() {
+        StormTopology stormTopology = genereateStormTopology(true);
+        ComponentCommon componentCommon = ThriftTopologyUtils.getComponentCommon(stormTopology, "bolt-1");
+        Assert.assertEquals(
+                "We expect to get bolt-1's common",
+                new Bolt().get_common(),
+                componentCommon);
+    }
+
+    @Test
+    public void testGetComponentCommonWithoutWorkerHook() {
+        StormTopology stormTopology = genereateStormTopology(false);
+        ComponentCommon componentCommon = ThriftTopologyUtils.getComponentCommon(stormTopology, "bolt-1");
+        Assert.assertEquals(
+                "We expect to get bolt-1's common",
+                new Bolt().get_common(),
+                componentCommon);
+    }
+
+    private StormTopology genereateStormTopology(boolean withWorkerHook) {
+        ImmutableMap<String,SpoutSpec> spouts = ImmutableMap.of("spout-1", new SpoutSpec());
+        ImmutableMap<String,Bolt> bolts = ImmutableMap.of("bolt-1", new Bolt());
+        ImmutableMap<String,StateSpoutSpec> state_spouts = ImmutableMap.of();
+
+        StormTopology stormTopology = new StormTopology(spouts, bolts, state_spouts);
+
+        if(withWorkerHook) {
+            BaseWorkerHook workerHook = new BaseWorkerHook();
+            stormTopology.add_to_worker_hooks(ByteBuffer.wrap(Utils.javaSerialize(workerHook)));
+        }
+
+        return stormTopology;
+    }
+}
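The tests above pin down the contract: worker hooks are a field of StormTopology but not a component, so getComponentIds must skip the WORKER_HOOKS field. The filtering idea can be sketched without the Thrift types (the local Field enum is a stand-in for StormTopology._Fields):

```java
import java.util.EnumSet;
import java.util.Set;
import java.util.TreeSet;

public class ComponentFieldSketch {

    // Stand-in for StormTopology._Fields; only these four matter here.
    public enum Field { BOLTS, SPOUTS, STATE_SPOUTS, WORKER_HOOKS }

    public static boolean isWorkerHook(Field f) {
        return f == Field.WORKER_HOOKS;
    }

    // Collect only the fields that actually hold components.
    public static Set<Field> componentFields() {
        Set<Field> out = new TreeSet<>();
        for (Field f : EnumSet.allOf(Field.class)) {
            if (!isWorkerHook(f)) {
                out.add(f);
            }
        }
        return out;
    }
}
```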


[46/50] [abbrv] storm git commit: Merge branch 'STORM-1341'

Posted by sr...@apache.org.
Merge branch 'STORM-1341'


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/cf4407fd
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/cf4407fd
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/cf4407fd

Branch: refs/heads/STORM-1040
Commit: cf4407fd31e88112f92c0e85188859e0a396bd98
Parents: 2181433 63026ee
Author: Jungtaek Lim <ka...@gmail.com>
Authored: Fri Nov 27 05:59:49 2015 +0900
Committer: Jungtaek Lim <ka...@gmail.com>
Committed: Fri Nov 27 05:59:49 2015 +0900

----------------------------------------------------------------------
 storm-core/src/jvm/backtype/storm/Config.java           | 8 ++++++++
 storm-core/src/jvm/backtype/storm/spout/ShellSpout.java | 6 +++++-
 storm-core/src/jvm/backtype/storm/task/ShellBolt.java   | 6 +++++-
 3 files changed, 18 insertions(+), 2 deletions(-)
----------------------------------------------------------------------



[47/50] [abbrv] storm git commit: Merge branch 'feature/bash_env' of https://github.com/dwimsey/storm into bash_env

Posted by sr...@apache.org.
Merge branch 'feature/bash_env' of https://github.com/dwimsey/storm into bash_env


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/b082e85f
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/b082e85f
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/b082e85f

Branch: refs/heads/STORM-1040
Commit: b082e85f467e89dbc4b8b7fcb5d2dd7b849420ce
Parents: cf4407f a5ca650
Author: Jungtaek Lim <ka...@gmail.com>
Authored: Fri Nov 27 06:04:50 2015 +0900
Committer: Jungtaek Lim <ka...@gmail.com>
Committed: Fri Nov 27 06:04:50 2015 +0900

----------------------------------------------------------------------
 bin/storm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------



[21/50] [abbrv] storm git commit: Merge branch 'master' of github.com:lispking/storm

Posted by sr...@apache.org.
Merge branch 'master' of github.com:lispking/storm


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/609b11c5
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/609b11c5
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/609b11c5

Branch: refs/heads/STORM-1040
Commit: 609b11c54d41795c423d9a33bfa552b4fd89bc05
Parents: fd75ca7 a019d50
Author: P. Taylor Goetz <pt...@gmail.com>
Authored: Tue Nov 24 14:24:56 2015 -0500
Committer: P. Taylor Goetz <pt...@gmail.com>
Committed: Tue Nov 24 14:24:56 2015 -0500

----------------------------------------------------------------------
 .../src/main/java/org/apache/storm/flux/model/ObjectDef.java       | 2 ++
 1 file changed, 2 insertions(+)
----------------------------------------------------------------------



[45/50] [abbrv] storm git commit: add STORM-1341 to CHANGELOG.md

Posted by sr...@apache.org.
add STORM-1341 to CHANGELOG.md


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/21814334
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/21814334
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/21814334

Branch: refs/heads/STORM-1040
Commit: 21814334a857be21440edbdbf415eb42d6180d02
Parents: c7c367c
Author: Jungtaek Lim <ka...@gmail.com>
Authored: Fri Nov 27 05:59:26 2015 +0900
Committer: Jungtaek Lim <ka...@gmail.com>
Committed: Fri Nov 27 05:59:26 2015 +0900

----------------------------------------------------------------------
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/21814334/CHANGELOG.md
----------------------------------------------------------------------
diff --git a/CHANGELOG.md b/CHANGELOG.md
index cb5a4a9..4eb137b 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,5 @@
 ## 0.11.0
+ * STORM-1341: Let topology have own heartbeat timeout for multilang subprocess
  * STORM-1207: Added flux support for IWindowedBolt
  * STORM-1352: Trident should support writing to multiple Kafka clusters.
  * STORM-1220: Avoid double copying in the Kafka spout.


[40/50] [abbrv] storm git commit: Added STORM-1352 to CHANGELOG.

Posted by sr...@apache.org.
Added STORM-1352 to CHANGELOG.


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/4c59de6f
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/4c59de6f
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/4c59de6f

Branch: refs/heads/STORM-1040
Commit: 4c59de6f324760cc33c9bbcb5f8d3644d2db4afc
Parents: 6d3bee9
Author: Sriharsha Chintalapani <ha...@hortonworks.com>
Authored: Wed Nov 25 15:30:22 2015 -0800
Committer: Sriharsha Chintalapani <ha...@hortonworks.com>
Committed: Wed Nov 25 15:30:22 2015 -0800

----------------------------------------------------------------------
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/4c59de6f/CHANGELOG.md
----------------------------------------------------------------------
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 8106078..0f7919a 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,5 @@
 +## 0.11.0
+ * STORM-1352: Trident should support writing to multiple Kafka clusters.
  * STORM-1220: Avoid double copying in the Kafka spout.
  * STORM-1340: Use Travis-CI build matrix to improve test execution times
  * STORM-1126: Allow a configMethod that takes no arguments (Flux)


[17/50] [abbrv] storm git commit: make BaseWorkerHook serializable by default

Posted by sr...@apache.org.
make BaseWorkerHook serializable by default


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/4078d95e
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/4078d95e
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/4078d95e

Branch: refs/heads/STORM-1040
Commit: 4078d95e6f15fec386ecdfa4625cb3bd89906a61
Parents: fe64642
Author: Michael Schonfeld <mi...@schonfeld.org>
Authored: Tue Nov 17 12:26:18 2015 -0500
Committer: Michael Schonfeld <mi...@schonfeld.org>
Committed: Mon Nov 23 18:50:55 2015 -0500

----------------------------------------------------------------------
 storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/4078d95e/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java b/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java
index 6fe9f19..e04f19b 100644
--- a/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java
+++ b/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java
@@ -19,10 +19,13 @@ package backtype.storm.hooks;
 
 import backtype.storm.task.WorkerTopologyContext;
 
+import java.io.Serializable;
 import java.util.List;
 import java.util.Map;
 
-public class BaseWorkerHook implements IWorkerHook {
+public class BaseWorkerHook implements IWorkerHook, Serializable {
+    private static final long serialVersionUID = 2589466485198339529L;
+
     @Override
     public void start(Map stormConf, WorkerTopologyContext context, List taskIds) {
 


[13/50] [abbrv] storm git commit: remove task-ids from iworkerhook's start method

Posted by sr...@apache.org.
remove task-ids from iworkerhook's start method


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/ccb8031b
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/ccb8031b
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/ccb8031b

Branch: refs/heads/STORM-1040
Commit: ccb8031b3cebb06d266a8621b91255945221efc6
Parents: dfc33ec
Author: Michael Schonfeld <mi...@schonfeld.org>
Authored: Wed Nov 18 11:55:54 2015 -0500
Committer: Michael Schonfeld <mi...@schonfeld.org>
Committed: Mon Nov 23 18:50:55 2015 -0500

----------------------------------------------------------------------
 storm-core/src/clj/backtype/storm/daemon/worker.clj         | 3 +--
 storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java | 4 +---
 storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java    | 4 +---
 3 files changed, 3 insertions(+), 8 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/ccb8031b/storm-core/src/clj/backtype/storm/daemon/worker.clj
----------------------------------------------------------------------
diff --git a/storm-core/src/clj/backtype/storm/daemon/worker.clj b/storm-core/src/clj/backtype/storm/daemon/worker.clj
index c0a99de..64670c2 100644
--- a/storm-core/src/clj/backtype/storm/daemon/worker.clj
+++ b/storm-core/src/clj/backtype/storm/daemon/worker.clj
@@ -553,12 +553,11 @@
   (let [topology (:topology worker)
         topo-conf (:conf worker)
         worker-topology-context (worker-context worker)
-        task-ids (:task_ids worker)
         hooks (.get_worker_hooks topology)]
     (dofor [hook hooks]
       (let [hook-bytes (Utils/toByteArray hook)
             deser-hook (Utils/javaDeserialize hook-bytes BaseWorkerHook)]
-        (.start deser-hook topo-conf worker-topology-context task-ids)))))
+        (.start deser-hook topo-conf worker-topology-context)))))
 
 (defn run-worker-shutdown-hooks [worker]
   (let [topology (:topology worker)

http://git-wip-us.apache.org/repos/asf/storm/blob/ccb8031b/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java b/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java
index 029f671..c146ac2 100644
--- a/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java
+++ b/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java
@@ -20,7 +20,6 @@ package backtype.storm.hooks;
 import backtype.storm.task.WorkerTopologyContext;
 
 import java.io.Serializable;
-import java.util.List;
 import java.util.Map;
 
 /**
@@ -36,10 +35,9 @@ public class BaseWorkerHook implements IWorkerHook, Serializable {
      *
      * @param stormConf The Storm configuration for this worker
      * @param context This object can be used to get information about this worker's place within the topology
-     * @param taskIds A list of Integers denoting the task IDs assigned to this worker
      */
     @Override
-    public void start(Map stormConf, WorkerTopologyContext context, List<Integer> taskIds) {
+    public void start(Map stormConf, WorkerTopologyContext context) {
         // NOOP
     }
 

http://git-wip-us.apache.org/repos/asf/storm/blob/ccb8031b/storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java b/storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java
index 6584883..6fb3946 100644
--- a/storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java
+++ b/storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java
@@ -20,7 +20,6 @@ package backtype.storm.hooks;
 import backtype.storm.task.WorkerTopologyContext;
 
 import java.io.Serializable;
-import java.util.List;
 import java.util.Map;
 
 /**
@@ -35,9 +34,8 @@ public interface IWorkerHook extends Serializable {
      *
      * @param stormConf The Storm configuration for this worker
      * @param context This object can be used to get information about this worker's place within the topology
-     * @param taskIds A list of Integers denoting the task IDs assigned to this worker
      */
-    void start(Map stormConf, WorkerTopologyContext context, List<Integer> taskIds);
+    void start(Map stormConf, WorkerTopologyContext context);
 
     /**
      * This method is called right before a worker shuts down
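Worker hooks declared on a topology travel to each worker as Java-serialized bytes (which is why BaseWorkerHook implements Serializable) and are deserialized before the two-argument start(stormConf, context) shown above is invoked. A self-contained sketch of that round trip, with Storm's Utils helpers replaced by plain java.io and a hypothetical MyHook payload:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;

// Sketch of the hook round trip: serialized into the topology on submit,
// deserialized in the worker before start(conf, context) runs.
public class WorkerHookRoundTrip {

    // Hypothetical hook payload; stands in for a user subclass of BaseWorkerHook.
    public static class MyHook implements Serializable {
        private static final long serialVersionUID = 1L;
        public final String name = "my-hook";
    }

    public static ByteBuffer serialize(Serializable obj) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return ByteBuffer.wrap(bos.toByteArray());
    }

    public static Object deserialize(ByteBuffer buf) {
        byte[] bytes = new byte[buf.remaining()];
        buf.duplicate().get(bytes);
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        MyHook back = (MyHook) deserialize(serialize(new MyHook()));
        System.out.println(back.name); // prints "my-hook"
    }
}
```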


[35/50] [abbrv] storm git commit: STORM-1341 Let topology have own heartbeat timeout for multilang subprocess

Posted by sr...@apache.org.
STORM-1341 Let topology have own heartbeat timeout for multilang subprocess

* config name: topology.subprocess.timeout.secs
* if it's not specified, supervisor.worker.timeout.secs will be used


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/fc8c296e
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/fc8c296e
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/fc8c296e

Branch: refs/heads/STORM-1040
Commit: fc8c296efc41c1efc6060ba09c07d406ceacc844
Parents: 20a864d
Author: Jungtaek Lim <ka...@gmail.com>
Authored: Wed Nov 25 13:46:59 2015 +0900
Committer: Jungtaek Lim <ka...@gmail.com>
Committed: Wed Nov 25 13:46:59 2015 +0900

----------------------------------------------------------------------
 storm-core/src/jvm/backtype/storm/Config.java           | 8 ++++++++
 storm-core/src/jvm/backtype/storm/spout/ShellSpout.java | 6 +++++-
 storm-core/src/jvm/backtype/storm/task/ShellBolt.java   | 6 +++++-
 3 files changed, 18 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/fc8c296e/storm-core/src/jvm/backtype/storm/Config.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/Config.java b/storm-core/src/jvm/backtype/storm/Config.java
index ab17263..89422f6 100644
--- a/storm-core/src/jvm/backtype/storm/Config.java
+++ b/storm-core/src/jvm/backtype/storm/Config.java
@@ -1712,6 +1712,14 @@ public class Config extends HashMap<String, Object> {
     public static final String TOPOLOGY_SHELLBOLT_MAX_PENDING="topology.shellbolt.max.pending";
 
     /**
+     * How long a subprocess can go without heartbeating before the ShellSpout/ShellBolt tries to
+     * suicide itself.
+     */
+    @isInteger
+    @isPositiveNumber
+    public static final String TOPOLOGY_SUBPROCESS_TIMEOUT_SECS = "topology.subprocess.timeout.secs";
+
+    /**
      * Topology central logging sensitivity to determine who has access to logs in central logging system.
      * The possible values are:
      *   S0 - Public (open to all users on grid)

http://git-wip-us.apache.org/repos/asf/storm/blob/fc8c296e/storm-core/src/jvm/backtype/storm/spout/ShellSpout.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/spout/ShellSpout.java b/storm-core/src/jvm/backtype/storm/spout/ShellSpout.java
index 4976903..bfdfe67 100644
--- a/storm-core/src/jvm/backtype/storm/spout/ShellSpout.java
+++ b/storm-core/src/jvm/backtype/storm/spout/ShellSpout.java
@@ -74,7 +74,11 @@ public class ShellSpout implements ISpout {
         _collector = collector;
         _context = context;
 
-        workerTimeoutMills = 1000 * RT.intCast(stormConf.get(Config.SUPERVISOR_WORKER_TIMEOUT_SECS));
+        if (stormConf.containsKey(Config.TOPOLOGY_SUBPROCESS_TIMEOUT_SECS)) {
+            workerTimeoutMills = 1000 * RT.intCast(stormConf.get(Config.TOPOLOGY_SUBPROCESS_TIMEOUT_SECS));
+        } else {
+            workerTimeoutMills = 1000 * RT.intCast(stormConf.get(Config.SUPERVISOR_WORKER_TIMEOUT_SECS));
+        }
 
         _process = new ShellProcess(_command);
         if (!env.isEmpty()) {

http://git-wip-us.apache.org/repos/asf/storm/blob/fc8c296e/storm-core/src/jvm/backtype/storm/task/ShellBolt.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/task/ShellBolt.java b/storm-core/src/jvm/backtype/storm/task/ShellBolt.java
index dda99ca..0103715 100644
--- a/storm-core/src/jvm/backtype/storm/task/ShellBolt.java
+++ b/storm-core/src/jvm/backtype/storm/task/ShellBolt.java
@@ -114,7 +114,11 @@ public class ShellBolt implements IBolt {
 
         _context = context;
 
-        workerTimeoutMills = 1000 * RT.intCast(stormConf.get(Config.SUPERVISOR_WORKER_TIMEOUT_SECS));
+        if (stormConf.containsKey(Config.TOPOLOGY_SUBPROCESS_TIMEOUT_SECS)) {
+            workerTimeoutMills = 1000 * RT.intCast(stormConf.get(Config.TOPOLOGY_SUBPROCESS_TIMEOUT_SECS));
+        } else {
+            workerTimeoutMills = 1000 * RT.intCast(stormConf.get(Config.SUPERVISOR_WORKER_TIMEOUT_SECS));
+        }
 
         _process = new ShellProcess(_command);
         if (!env.isEmpty()) {
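Both ShellSpout and ShellBolt now resolve the subprocess heartbeat timeout the same way: topology.subprocess.timeout.secs wins when set, otherwise supervisor.worker.timeout.secs applies. The fallback can be sketched in isolation (config key names come from the diff; the helper itself is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the timeout resolution added to ShellSpout and ShellBolt:
// the topology-level key takes precedence, the supervisor-level key is
// the fallback, and the seconds value is converted to milliseconds.
public class SubprocessTimeoutSketch {

    public static final String TOPOLOGY_SUBPROCESS_TIMEOUT_SECS = "topology.subprocess.timeout.secs";
    public static final String SUPERVISOR_WORKER_TIMEOUT_SECS = "supervisor.worker.timeout.secs";

    public static long workerTimeoutMillis(Map<String, Object> stormConf) {
        Object secs = stormConf.containsKey(TOPOLOGY_SUBPROCESS_TIMEOUT_SECS)
                ? stormConf.get(TOPOLOGY_SUBPROCESS_TIMEOUT_SECS)
                : stormConf.get(SUPERVISOR_WORKER_TIMEOUT_SECS);
        return 1000L * ((Number) secs).intValue();
    }

    public static void main(String[] args) {
        Map<String, Object> conf = new HashMap<>();
        conf.put(SUPERVISOR_WORKER_TIMEOUT_SECS, 30);
        System.out.println(workerTimeoutMillis(conf)); // prints 30000
        conf.put(TOPOLOGY_SUBPROCESS_TIMEOUT_SECS, 120);
        System.out.println(workerTimeoutMillis(conf)); // prints 120000
    }
}
```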


[26/50] [abbrv] storm git commit: add STORM-1126 to changelog

Posted by sr...@apache.org.
add STORM-1126 to changelog


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/f8a2d65a
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/f8a2d65a
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/f8a2d65a

Branch: refs/heads/STORM-1040
Commit: f8a2d65a64f5648abed48d1ba94ea2c572ea8826
Parents: 6ebf247
Author: P. Taylor Goetz <pt...@gmail.com>
Authored: Tue Nov 24 14:57:34 2015 -0500
Committer: P. Taylor Goetz <pt...@gmail.com>
Committed: Tue Nov 24 14:57:34 2015 -0500

----------------------------------------------------------------------
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/f8a2d65a/CHANGELOG.md
----------------------------------------------------------------------
diff --git a/CHANGELOG.md b/CHANGELOG.md
index b45afb1..ccae7d0 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,5 @@
 +## 0.11.0
+ * STORM-1126: Allow a configMethod that takes no arguments (Flux)
  * STORM-1203: worker metadata file creation doesn't use storm.log.dir config
  * STORM-1349: [Flux] Allow constructorArgs to take Maps as arguments
  * STORM-126: Add Lifecycle support API for worker nodes


[09/50] [abbrv] storm git commit: add support for worker lifecycle hooks

Posted by sr...@apache.org.
http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/SpoutStats.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/SpoutStats.java b/storm-core/src/jvm/backtype/storm/generated/SpoutStats.java
index 3f17136..03fb7fe 100644
--- a/storm-core/src/jvm/backtype/storm/generated/SpoutStats.java
+++ b/storm-core/src/jvm/backtype/storm/generated/SpoutStats.java
@@ -602,40 +602,8 @@ public class SpoutStats implements org.apache.thrift.TBase<SpoutStats, SpoutStat
           case 1: // ACKED
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map208 = iprot.readMapBegin();
-                struct.acked = new HashMap<String,Map<String,Long>>(2*_map208.size);
-                String _key209;
-                Map<String,Long> _val210;
-                for (int _i211 = 0; _i211 < _map208.size; ++_i211)
-                {
-                  _key209 = iprot.readString();
-                  {
-                    org.apache.thrift.protocol.TMap _map212 = iprot.readMapBegin();
-                    _val210 = new HashMap<String,Long>(2*_map212.size);
-                    String _key213;
-                    long _val214;
-                    for (int _i215 = 0; _i215 < _map212.size; ++_i215)
-                    {
-                      _key213 = iprot.readString();
-                      _val214 = iprot.readI64();
-                      _val210.put(_key213, _val214);
-                    }
-                    iprot.readMapEnd();
-                  }
-                  struct.acked.put(_key209, _val210);
-                }
-                iprot.readMapEnd();
-              }
-              struct.set_acked_isSet(true);
-            } else { 
-              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
-            }
-            break;
-          case 2: // FAILED
-            if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
-              {
                 org.apache.thrift.protocol.TMap _map216 = iprot.readMapBegin();
-                struct.failed = new HashMap<String,Map<String,Long>>(2*_map216.size);
+                struct.acked = new HashMap<String,Map<String,Long>>(2*_map216.size);
                 String _key217;
                 Map<String,Long> _val218;
                 for (int _i219 = 0; _i219 < _map216.size; ++_i219)
@@ -654,39 +622,71 @@ public class SpoutStats implements org.apache.thrift.TBase<SpoutStats, SpoutStat
                     }
                     iprot.readMapEnd();
                   }
-                  struct.failed.put(_key217, _val218);
+                  struct.acked.put(_key217, _val218);
                 }
                 iprot.readMapEnd();
               }
-              struct.set_failed_isSet(true);
+              struct.set_acked_isSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
             }
             break;
-          case 3: // COMPLETE_MS_AVG
+          case 2: // FAILED
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
                 org.apache.thrift.protocol.TMap _map224 = iprot.readMapBegin();
-                struct.complete_ms_avg = new HashMap<String,Map<String,Double>>(2*_map224.size);
+                struct.failed = new HashMap<String,Map<String,Long>>(2*_map224.size);
                 String _key225;
-                Map<String,Double> _val226;
+                Map<String,Long> _val226;
                 for (int _i227 = 0; _i227 < _map224.size; ++_i227)
                 {
                   _key225 = iprot.readString();
                   {
                     org.apache.thrift.protocol.TMap _map228 = iprot.readMapBegin();
-                    _val226 = new HashMap<String,Double>(2*_map228.size);
+                    _val226 = new HashMap<String,Long>(2*_map228.size);
                     String _key229;
-                    double _val230;
+                    long _val230;
                     for (int _i231 = 0; _i231 < _map228.size; ++_i231)
                     {
                       _key229 = iprot.readString();
-                      _val230 = iprot.readDouble();
+                      _val230 = iprot.readI64();
                       _val226.put(_key229, _val230);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.complete_ms_avg.put(_key225, _val226);
+                  struct.failed.put(_key225, _val226);
+                }
+                iprot.readMapEnd();
+              }
+              struct.set_failed_isSet(true);
+            } else { 
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
+          case 3: // COMPLETE_MS_AVG
+            if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
+              {
+                org.apache.thrift.protocol.TMap _map232 = iprot.readMapBegin();
+                struct.complete_ms_avg = new HashMap<String,Map<String,Double>>(2*_map232.size);
+                String _key233;
+                Map<String,Double> _val234;
+                for (int _i235 = 0; _i235 < _map232.size; ++_i235)
+                {
+                  _key233 = iprot.readString();
+                  {
+                    org.apache.thrift.protocol.TMap _map236 = iprot.readMapBegin();
+                    _val234 = new HashMap<String,Double>(2*_map236.size);
+                    String _key237;
+                    double _val238;
+                    for (int _i239 = 0; _i239 < _map236.size; ++_i239)
+                    {
+                      _key237 = iprot.readString();
+                      _val238 = iprot.readDouble();
+                      _val234.put(_key237, _val238);
+                    }
+                    iprot.readMapEnd();
+                  }
+                  struct.complete_ms_avg.put(_key233, _val234);
                 }
                 iprot.readMapEnd();
               }
@@ -712,15 +712,15 @@ public class SpoutStats implements org.apache.thrift.TBase<SpoutStats, SpoutStat
         oprot.writeFieldBegin(ACKED_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, struct.acked.size()));
-          for (Map.Entry<String, Map<String,Long>> _iter232 : struct.acked.entrySet())
+          for (Map.Entry<String, Map<String,Long>> _iter240 : struct.acked.entrySet())
           {
-            oprot.writeString(_iter232.getKey());
+            oprot.writeString(_iter240.getKey());
             {
-              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, _iter232.getValue().size()));
-              for (Map.Entry<String, Long> _iter233 : _iter232.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, _iter240.getValue().size()));
+              for (Map.Entry<String, Long> _iter241 : _iter240.getValue().entrySet())
               {
-                oprot.writeString(_iter233.getKey());
-                oprot.writeI64(_iter233.getValue());
+                oprot.writeString(_iter241.getKey());
+                oprot.writeI64(_iter241.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -733,15 +733,15 @@ public class SpoutStats implements org.apache.thrift.TBase<SpoutStats, SpoutStat
         oprot.writeFieldBegin(FAILED_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, struct.failed.size()));
-          for (Map.Entry<String, Map<String,Long>> _iter234 : struct.failed.entrySet())
+          for (Map.Entry<String, Map<String,Long>> _iter242 : struct.failed.entrySet())
           {
-            oprot.writeString(_iter234.getKey());
+            oprot.writeString(_iter242.getKey());
             {
-              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, _iter234.getValue().size()));
-              for (Map.Entry<String, Long> _iter235 : _iter234.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, _iter242.getValue().size()));
+              for (Map.Entry<String, Long> _iter243 : _iter242.getValue().entrySet())
               {
-                oprot.writeString(_iter235.getKey());
-                oprot.writeI64(_iter235.getValue());
+                oprot.writeString(_iter243.getKey());
+                oprot.writeI64(_iter243.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -754,15 +754,15 @@ public class SpoutStats implements org.apache.thrift.TBase<SpoutStats, SpoutStat
         oprot.writeFieldBegin(COMPLETE_MS_AVG_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, struct.complete_ms_avg.size()));
-          for (Map.Entry<String, Map<String,Double>> _iter236 : struct.complete_ms_avg.entrySet())
+          for (Map.Entry<String, Map<String,Double>> _iter244 : struct.complete_ms_avg.entrySet())
           {
-            oprot.writeString(_iter236.getKey());
+            oprot.writeString(_iter244.getKey());
             {
-              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.DOUBLE, _iter236.getValue().size()));
-              for (Map.Entry<String, Double> _iter237 : _iter236.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.DOUBLE, _iter244.getValue().size()));
+              for (Map.Entry<String, Double> _iter245 : _iter244.getValue().entrySet())
               {
-                oprot.writeString(_iter237.getKey());
-                oprot.writeDouble(_iter237.getValue());
+                oprot.writeString(_iter245.getKey());
+                oprot.writeDouble(_iter245.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -790,45 +790,45 @@ public class SpoutStats implements org.apache.thrift.TBase<SpoutStats, SpoutStat
       TTupleProtocol oprot = (TTupleProtocol) prot;
       {
         oprot.writeI32(struct.acked.size());
-        for (Map.Entry<String, Map<String,Long>> _iter238 : struct.acked.entrySet())
+        for (Map.Entry<String, Map<String,Long>> _iter246 : struct.acked.entrySet())
         {
-          oprot.writeString(_iter238.getKey());
+          oprot.writeString(_iter246.getKey());
           {
-            oprot.writeI32(_iter238.getValue().size());
-            for (Map.Entry<String, Long> _iter239 : _iter238.getValue().entrySet())
+            oprot.writeI32(_iter246.getValue().size());
+            for (Map.Entry<String, Long> _iter247 : _iter246.getValue().entrySet())
             {
-              oprot.writeString(_iter239.getKey());
-              oprot.writeI64(_iter239.getValue());
+              oprot.writeString(_iter247.getKey());
+              oprot.writeI64(_iter247.getValue());
             }
           }
         }
       }
       {
         oprot.writeI32(struct.failed.size());
-        for (Map.Entry<String, Map<String,Long>> _iter240 : struct.failed.entrySet())
+        for (Map.Entry<String, Map<String,Long>> _iter248 : struct.failed.entrySet())
         {
-          oprot.writeString(_iter240.getKey());
+          oprot.writeString(_iter248.getKey());
           {
-            oprot.writeI32(_iter240.getValue().size());
-            for (Map.Entry<String, Long> _iter241 : _iter240.getValue().entrySet())
+            oprot.writeI32(_iter248.getValue().size());
+            for (Map.Entry<String, Long> _iter249 : _iter248.getValue().entrySet())
             {
-              oprot.writeString(_iter241.getKey());
-              oprot.writeI64(_iter241.getValue());
+              oprot.writeString(_iter249.getKey());
+              oprot.writeI64(_iter249.getValue());
             }
           }
         }
       }
       {
         oprot.writeI32(struct.complete_ms_avg.size());
-        for (Map.Entry<String, Map<String,Double>> _iter242 : struct.complete_ms_avg.entrySet())
+        for (Map.Entry<String, Map<String,Double>> _iter250 : struct.complete_ms_avg.entrySet())
         {
-          oprot.writeString(_iter242.getKey());
+          oprot.writeString(_iter250.getKey());
           {
-            oprot.writeI32(_iter242.getValue().size());
-            for (Map.Entry<String, Double> _iter243 : _iter242.getValue().entrySet())
+            oprot.writeI32(_iter250.getValue().size());
+            for (Map.Entry<String, Double> _iter251 : _iter250.getValue().entrySet())
             {
-              oprot.writeString(_iter243.getKey());
-              oprot.writeDouble(_iter243.getValue());
+              oprot.writeString(_iter251.getKey());
+              oprot.writeDouble(_iter251.getValue());
             }
           }
         }
@@ -839,32 +839,8 @@ public class SpoutStats implements org.apache.thrift.TBase<SpoutStats, SpoutStat
     public void read(org.apache.thrift.protocol.TProtocol prot, SpoutStats struct) throws org.apache.thrift.TException {
       TTupleProtocol iprot = (TTupleProtocol) prot;
       {
-        org.apache.thrift.protocol.TMap _map244 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.acked = new HashMap<String,Map<String,Long>>(2*_map244.size);
-        String _key245;
-        Map<String,Long> _val246;
-        for (int _i247 = 0; _i247 < _map244.size; ++_i247)
-        {
-          _key245 = iprot.readString();
-          {
-            org.apache.thrift.protocol.TMap _map248 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, iprot.readI32());
-            _val246 = new HashMap<String,Long>(2*_map248.size);
-            String _key249;
-            long _val250;
-            for (int _i251 = 0; _i251 < _map248.size; ++_i251)
-            {
-              _key249 = iprot.readString();
-              _val250 = iprot.readI64();
-              _val246.put(_key249, _val250);
-            }
-          }
-          struct.acked.put(_key245, _val246);
-        }
-      }
-      struct.set_acked_isSet(true);
-      {
         org.apache.thrift.protocol.TMap _map252 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.failed = new HashMap<String,Map<String,Long>>(2*_map252.size);
+        struct.acked = new HashMap<String,Map<String,Long>>(2*_map252.size);
         String _key253;
         Map<String,Long> _val254;
         for (int _i255 = 0; _i255 < _map252.size; ++_i255)
@@ -882,31 +858,55 @@ public class SpoutStats implements org.apache.thrift.TBase<SpoutStats, SpoutStat
               _val254.put(_key257, _val258);
             }
           }
-          struct.failed.put(_key253, _val254);
+          struct.acked.put(_key253, _val254);
         }
       }
-      struct.set_failed_isSet(true);
+      struct.set_acked_isSet(true);
       {
         org.apache.thrift.protocol.TMap _map260 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.complete_ms_avg = new HashMap<String,Map<String,Double>>(2*_map260.size);
+        struct.failed = new HashMap<String,Map<String,Long>>(2*_map260.size);
         String _key261;
-        Map<String,Double> _val262;
+        Map<String,Long> _val262;
         for (int _i263 = 0; _i263 < _map260.size; ++_i263)
         {
           _key261 = iprot.readString();
           {
-            org.apache.thrift.protocol.TMap _map264 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.DOUBLE, iprot.readI32());
-            _val262 = new HashMap<String,Double>(2*_map264.size);
+            org.apache.thrift.protocol.TMap _map264 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, iprot.readI32());
+            _val262 = new HashMap<String,Long>(2*_map264.size);
             String _key265;
-            double _val266;
+            long _val266;
             for (int _i267 = 0; _i267 < _map264.size; ++_i267)
             {
               _key265 = iprot.readString();
-              _val266 = iprot.readDouble();
+              _val266 = iprot.readI64();
               _val262.put(_key265, _val266);
             }
           }
-          struct.complete_ms_avg.put(_key261, _val262);
+          struct.failed.put(_key261, _val262);
+        }
+      }
+      struct.set_failed_isSet(true);
+      {
+        org.apache.thrift.protocol.TMap _map268 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
+        struct.complete_ms_avg = new HashMap<String,Map<String,Double>>(2*_map268.size);
+        String _key269;
+        Map<String,Double> _val270;
+        for (int _i271 = 0; _i271 < _map268.size; ++_i271)
+        {
+          _key269 = iprot.readString();
+          {
+            org.apache.thrift.protocol.TMap _map272 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.DOUBLE, iprot.readI32());
+            _val270 = new HashMap<String,Double>(2*_map272.size);
+            String _key273;
+            double _val274;
+            for (int _i275 = 0; _i275 < _map272.size; ++_i275)
+            {
+              _key273 = iprot.readString();
+              _val274 = iprot.readDouble();
+              _val270.put(_key273, _val274);
+            }
+          }
+          struct.complete_ms_avg.put(_key269, _val270);
         }
       }
       struct.set_complete_ms_avg_isSet(true);

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/StormBase.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/StormBase.java b/storm-core/src/jvm/backtype/storm/generated/StormBase.java
index 2d8bf15..5f80c59 100644
--- a/storm-core/src/jvm/backtype/storm/generated/StormBase.java
+++ b/storm-core/src/jvm/backtype/storm/generated/StormBase.java
@@ -1090,15 +1090,15 @@ public class StormBase implements org.apache.thrift.TBase<StormBase, StormBase._
           case 4: // COMPONENT_EXECUTORS
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map580 = iprot.readMapBegin();
-                struct.component_executors = new HashMap<String,Integer>(2*_map580.size);
-                String _key581;
-                int _val582;
-                for (int _i583 = 0; _i583 < _map580.size; ++_i583)
+                org.apache.thrift.protocol.TMap _map588 = iprot.readMapBegin();
+                struct.component_executors = new HashMap<String,Integer>(2*_map588.size);
+                String _key589;
+                int _val590;
+                for (int _i591 = 0; _i591 < _map588.size; ++_i591)
                 {
-                  _key581 = iprot.readString();
-                  _val582 = iprot.readI32();
-                  struct.component_executors.put(_key581, _val582);
+                  _key589 = iprot.readString();
+                  _val590 = iprot.readI32();
+                  struct.component_executors.put(_key589, _val590);
                 }
                 iprot.readMapEnd();
               }
@@ -1143,16 +1143,16 @@ public class StormBase implements org.apache.thrift.TBase<StormBase, StormBase._
           case 9: // COMPONENT_DEBUG
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map584 = iprot.readMapBegin();
-                struct.component_debug = new HashMap<String,DebugOptions>(2*_map584.size);
-                String _key585;
-                DebugOptions _val586;
-                for (int _i587 = 0; _i587 < _map584.size; ++_i587)
+                org.apache.thrift.protocol.TMap _map592 = iprot.readMapBegin();
+                struct.component_debug = new HashMap<String,DebugOptions>(2*_map592.size);
+                String _key593;
+                DebugOptions _val594;
+                for (int _i595 = 0; _i595 < _map592.size; ++_i595)
                 {
-                  _key585 = iprot.readString();
-                  _val586 = new DebugOptions();
-                  _val586.read(iprot);
-                  struct.component_debug.put(_key585, _val586);
+                  _key593 = iprot.readString();
+                  _val594 = new DebugOptions();
+                  _val594.read(iprot);
+                  struct.component_debug.put(_key593, _val594);
                 }
                 iprot.readMapEnd();
               }
@@ -1192,10 +1192,10 @@ public class StormBase implements org.apache.thrift.TBase<StormBase, StormBase._
           oprot.writeFieldBegin(COMPONENT_EXECUTORS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I32, struct.component_executors.size()));
-            for (Map.Entry<String, Integer> _iter588 : struct.component_executors.entrySet())
+            for (Map.Entry<String, Integer> _iter596 : struct.component_executors.entrySet())
             {
-              oprot.writeString(_iter588.getKey());
-              oprot.writeI32(_iter588.getValue());
+              oprot.writeString(_iter596.getKey());
+              oprot.writeI32(_iter596.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -1233,10 +1233,10 @@ public class StormBase implements org.apache.thrift.TBase<StormBase, StormBase._
           oprot.writeFieldBegin(COMPONENT_DEBUG_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, struct.component_debug.size()));
-            for (Map.Entry<String, DebugOptions> _iter589 : struct.component_debug.entrySet())
+            for (Map.Entry<String, DebugOptions> _iter597 : struct.component_debug.entrySet())
             {
-              oprot.writeString(_iter589.getKey());
-              _iter589.getValue().write(oprot);
+              oprot.writeString(_iter597.getKey());
+              _iter597.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -1286,10 +1286,10 @@ public class StormBase implements org.apache.thrift.TBase<StormBase, StormBase._
       if (struct.is_set_component_executors()) {
         {
           oprot.writeI32(struct.component_executors.size());
-          for (Map.Entry<String, Integer> _iter590 : struct.component_executors.entrySet())
+          for (Map.Entry<String, Integer> _iter598 : struct.component_executors.entrySet())
           {
-            oprot.writeString(_iter590.getKey());
-            oprot.writeI32(_iter590.getValue());
+            oprot.writeString(_iter598.getKey());
+            oprot.writeI32(_iter598.getValue());
           }
         }
       }
@@ -1308,10 +1308,10 @@ public class StormBase implements org.apache.thrift.TBase<StormBase, StormBase._
       if (struct.is_set_component_debug()) {
         {
           oprot.writeI32(struct.component_debug.size());
-          for (Map.Entry<String, DebugOptions> _iter591 : struct.component_debug.entrySet())
+          for (Map.Entry<String, DebugOptions> _iter599 : struct.component_debug.entrySet())
           {
-            oprot.writeString(_iter591.getKey());
-            _iter591.getValue().write(oprot);
+            oprot.writeString(_iter599.getKey());
+            _iter599.getValue().write(oprot);
           }
         }
       }
@@ -1329,15 +1329,15 @@ public class StormBase implements org.apache.thrift.TBase<StormBase, StormBase._
       BitSet incoming = iprot.readBitSet(6);
       if (incoming.get(0)) {
         {
-          org.apache.thrift.protocol.TMap _map592 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I32, iprot.readI32());
-          struct.component_executors = new HashMap<String,Integer>(2*_map592.size);
-          String _key593;
-          int _val594;
-          for (int _i595 = 0; _i595 < _map592.size; ++_i595)
+          org.apache.thrift.protocol.TMap _map600 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I32, iprot.readI32());
+          struct.component_executors = new HashMap<String,Integer>(2*_map600.size);
+          String _key601;
+          int _val602;
+          for (int _i603 = 0; _i603 < _map600.size; ++_i603)
           {
-            _key593 = iprot.readString();
-            _val594 = iprot.readI32();
-            struct.component_executors.put(_key593, _val594);
+            _key601 = iprot.readString();
+            _val602 = iprot.readI32();
+            struct.component_executors.put(_key601, _val602);
           }
         }
         struct.set_component_executors_isSet(true);
@@ -1361,16 +1361,16 @@ public class StormBase implements org.apache.thrift.TBase<StormBase, StormBase._
       }
       if (incoming.get(5)) {
         {
-          org.apache.thrift.protocol.TMap _map596 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.component_debug = new HashMap<String,DebugOptions>(2*_map596.size);
-          String _key597;
-          DebugOptions _val598;
-          for (int _i599 = 0; _i599 < _map596.size; ++_i599)
+          org.apache.thrift.protocol.TMap _map604 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.component_debug = new HashMap<String,DebugOptions>(2*_map604.size);
+          String _key605;
+          DebugOptions _val606;
+          for (int _i607 = 0; _i607 < _map604.size; ++_i607)
           {
-            _key597 = iprot.readString();
-            _val598 = new DebugOptions();
-            _val598.read(iprot);
-            struct.component_debug.put(_key597, _val598);
+            _key605 = iprot.readString();
+            _val606 = new DebugOptions();
+            _val606.read(iprot);
+            struct.component_debug.put(_key605, _val606);
           }
         }
         struct.set_component_debug_isSet(true);

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/StormTopology.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/StormTopology.java b/storm-core/src/jvm/backtype/storm/generated/StormTopology.java
index 81fe93e..352d12d 100644
--- a/storm-core/src/jvm/backtype/storm/generated/StormTopology.java
+++ b/storm-core/src/jvm/backtype/storm/generated/StormTopology.java
@@ -58,6 +58,7 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
   private static final org.apache.thrift.protocol.TField SPOUTS_FIELD_DESC = new org.apache.thrift.protocol.TField("spouts", org.apache.thrift.protocol.TType.MAP, (short)1);
   private static final org.apache.thrift.protocol.TField BOLTS_FIELD_DESC = new org.apache.thrift.protocol.TField("bolts", org.apache.thrift.protocol.TType.MAP, (short)2);
   private static final org.apache.thrift.protocol.TField STATE_SPOUTS_FIELD_DESC = new org.apache.thrift.protocol.TField("state_spouts", org.apache.thrift.protocol.TType.MAP, (short)3);
+  private static final org.apache.thrift.protocol.TField WORKER_HOOKS_FIELD_DESC = new org.apache.thrift.protocol.TField("worker_hooks", org.apache.thrift.protocol.TType.LIST, (short)4);
 
   private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
   static {
@@ -68,12 +69,14 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
   private Map<String,SpoutSpec> spouts; // required
   private Map<String,Bolt> bolts; // required
   private Map<String,StateSpoutSpec> state_spouts; // required
+  private List<ByteBuffer> worker_hooks; // optional
 
   /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
   public enum _Fields implements org.apache.thrift.TFieldIdEnum {
     SPOUTS((short)1, "spouts"),
     BOLTS((short)2, "bolts"),
-    STATE_SPOUTS((short)3, "state_spouts");
+    STATE_SPOUTS((short)3, "state_spouts"),
+    WORKER_HOOKS((short)4, "worker_hooks");
 
     private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
 
@@ -94,6 +97,8 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
           return BOLTS;
         case 3: // STATE_SPOUTS
           return STATE_SPOUTS;
+        case 4: // WORKER_HOOKS
+          return WORKER_HOOKS;
         default:
           return null;
       }
@@ -134,6 +139,7 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
   }
 
   // isset id assignments
+  private static final _Fields optionals[] = {_Fields.WORKER_HOOKS};
   public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
   static {
     Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
@@ -149,6 +155,9 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
         new org.apache.thrift.meta_data.MapMetaData(org.apache.thrift.protocol.TType.MAP, 
             new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING), 
             new org.apache.thrift.meta_data.StructMetaData(org.apache.thrift.protocol.TType.STRUCT, StateSpoutSpec.class))));
+    tmpMap.put(_Fields.WORKER_HOOKS, new org.apache.thrift.meta_data.FieldMetaData("worker_hooks", org.apache.thrift.TFieldRequirementType.OPTIONAL, 
+        new org.apache.thrift.meta_data.ListMetaData(org.apache.thrift.protocol.TType.LIST, 
+            new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING            , true))));
     metaDataMap = Collections.unmodifiableMap(tmpMap);
     org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(StormTopology.class, metaDataMap);
   }
@@ -216,6 +225,10 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
       }
       this.state_spouts = __this__state_spouts;
     }
+    if (other.is_set_worker_hooks()) {
+      List<ByteBuffer> __this__worker_hooks = new ArrayList<ByteBuffer>(other.worker_hooks);
+      this.worker_hooks = __this__worker_hooks;
+    }
   }
 
   public StormTopology deepCopy() {
@@ -227,6 +240,7 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
     this.spouts = null;
     this.bolts = null;
     this.state_spouts = null;
+    this.worker_hooks = null;
   }
 
   public int get_spouts_size() {
@@ -331,6 +345,44 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
     }
   }
 
+  public int get_worker_hooks_size() {
+    return (this.worker_hooks == null) ? 0 : this.worker_hooks.size();
+  }
+
+  public java.util.Iterator<ByteBuffer> get_worker_hooks_iterator() {
+    return (this.worker_hooks == null) ? null : this.worker_hooks.iterator();
+  }
+
+  public void add_to_worker_hooks(ByteBuffer elem) {
+    if (this.worker_hooks == null) {
+      this.worker_hooks = new ArrayList<ByteBuffer>();
+    }
+    this.worker_hooks.add(elem);
+  }
+
+  public List<ByteBuffer> get_worker_hooks() {
+    return this.worker_hooks;
+  }
+
+  public void set_worker_hooks(List<ByteBuffer> worker_hooks) {
+    this.worker_hooks = worker_hooks;
+  }
+
+  public void unset_worker_hooks() {
+    this.worker_hooks = null;
+  }
+
+  /** Returns true if field worker_hooks is set (has been assigned a value) and false otherwise */
+  public boolean is_set_worker_hooks() {
+    return this.worker_hooks != null;
+  }
+
+  public void set_worker_hooks_isSet(boolean value) {
+    if (!value) {
+      this.worker_hooks = null;
+    }
+  }
+
   public void setFieldValue(_Fields field, Object value) {
     switch (field) {
     case SPOUTS:
@@ -357,6 +409,14 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
       }
       break;
 
+    case WORKER_HOOKS:
+      if (value == null) {
+        unset_worker_hooks();
+      } else {
+        set_worker_hooks((List<ByteBuffer>)value);
+      }
+      break;
+
     }
   }
 
@@ -371,6 +431,9 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
     case STATE_SPOUTS:
       return get_state_spouts();
 
+    case WORKER_HOOKS:
+      return get_worker_hooks();
+
     }
     throw new IllegalStateException();
   }
@@ -388,6 +451,8 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
       return is_set_bolts();
     case STATE_SPOUTS:
       return is_set_state_spouts();
+    case WORKER_HOOKS:
+      return is_set_worker_hooks();
     }
     throw new IllegalStateException();
   }
@@ -432,6 +497,15 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
         return false;
     }
 
+    boolean this_present_worker_hooks = true && this.is_set_worker_hooks();
+    boolean that_present_worker_hooks = true && that.is_set_worker_hooks();
+    if (this_present_worker_hooks || that_present_worker_hooks) {
+      if (!(this_present_worker_hooks && that_present_worker_hooks))
+        return false;
+      if (!this.worker_hooks.equals(that.worker_hooks))
+        return false;
+    }
+
     return true;
   }
 
@@ -454,6 +528,11 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
     if (present_state_spouts)
       list.add(state_spouts);
 
+    boolean present_worker_hooks = true && (is_set_worker_hooks());
+    list.add(present_worker_hooks);
+    if (present_worker_hooks)
+      list.add(worker_hooks);
+
     return list.hashCode();
   }
 
@@ -495,6 +574,16 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
         return lastComparison;
       }
     }
+    lastComparison = Boolean.valueOf(is_set_worker_hooks()).compareTo(other.is_set_worker_hooks());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    if (is_set_worker_hooks()) {
+      lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.worker_hooks, other.worker_hooks);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+    }
     return 0;
   }
 
@@ -538,6 +627,16 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
       sb.append(this.state_spouts);
     }
     first = false;
+    if (is_set_worker_hooks()) {
+      if (!first) sb.append(", ");
+      sb.append("worker_hooks:");
+      if (this.worker_hooks == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.worker_hooks, sb);
+      }
+      first = false;
+    }
     sb.append(")");
     return sb.toString();
   }
@@ -656,6 +755,24 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
             }
             break;
+          case 4: // WORKER_HOOKS
+            if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
+              {
+                org.apache.thrift.protocol.TList _list56 = iprot.readListBegin();
+                struct.worker_hooks = new ArrayList<ByteBuffer>(_list56.size);
+                ByteBuffer _elem57;
+                for (int _i58 = 0; _i58 < _list56.size; ++_i58)
+                {
+                  _elem57 = iprot.readBinary();
+                  struct.worker_hooks.add(_elem57);
+                }
+                iprot.readListEnd();
+              }
+              struct.set_worker_hooks_isSet(true);
+            } else { 
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
           default:
             org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
         }
@@ -673,10 +790,10 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
         oprot.writeFieldBegin(SPOUTS_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, struct.spouts.size()));
-          for (Map.Entry<String, SpoutSpec> _iter56 : struct.spouts.entrySet())
+          for (Map.Entry<String, SpoutSpec> _iter59 : struct.spouts.entrySet())
           {
-            oprot.writeString(_iter56.getKey());
-            _iter56.getValue().write(oprot);
+            oprot.writeString(_iter59.getKey());
+            _iter59.getValue().write(oprot);
           }
           oprot.writeMapEnd();
         }
@@ -686,10 +803,10 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
         oprot.writeFieldBegin(BOLTS_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, struct.bolts.size()));
-          for (Map.Entry<String, Bolt> _iter57 : struct.bolts.entrySet())
+          for (Map.Entry<String, Bolt> _iter60 : struct.bolts.entrySet())
           {
-            oprot.writeString(_iter57.getKey());
-            _iter57.getValue().write(oprot);
+            oprot.writeString(_iter60.getKey());
+            _iter60.getValue().write(oprot);
           }
           oprot.writeMapEnd();
         }
@@ -699,15 +816,29 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
         oprot.writeFieldBegin(STATE_SPOUTS_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, struct.state_spouts.size()));
-          for (Map.Entry<String, StateSpoutSpec> _iter58 : struct.state_spouts.entrySet())
+          for (Map.Entry<String, StateSpoutSpec> _iter61 : struct.state_spouts.entrySet())
           {
-            oprot.writeString(_iter58.getKey());
-            _iter58.getValue().write(oprot);
+            oprot.writeString(_iter61.getKey());
+            _iter61.getValue().write(oprot);
           }
           oprot.writeMapEnd();
         }
         oprot.writeFieldEnd();
       }
+      if (struct.worker_hooks != null) {
+        if (struct.is_set_worker_hooks()) {
+          oprot.writeFieldBegin(WORKER_HOOKS_FIELD_DESC);
+          {
+            oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, struct.worker_hooks.size()));
+            for (ByteBuffer _iter62 : struct.worker_hooks)
+            {
+              oprot.writeBinary(_iter62);
+            }
+            oprot.writeListEnd();
+          }
+          oprot.writeFieldEnd();
+        }
+      }
       oprot.writeFieldStop();
       oprot.writeStructEnd();
     }
@@ -727,26 +858,40 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
       TTupleProtocol oprot = (TTupleProtocol) prot;
       {
         oprot.writeI32(struct.spouts.size());
-        for (Map.Entry<String, SpoutSpec> _iter59 : struct.spouts.entrySet())
+        for (Map.Entry<String, SpoutSpec> _iter63 : struct.spouts.entrySet())
         {
-          oprot.writeString(_iter59.getKey());
-          _iter59.getValue().write(oprot);
+          oprot.writeString(_iter63.getKey());
+          _iter63.getValue().write(oprot);
         }
       }
       {
         oprot.writeI32(struct.bolts.size());
-        for (Map.Entry<String, Bolt> _iter60 : struct.bolts.entrySet())
+        for (Map.Entry<String, Bolt> _iter64 : struct.bolts.entrySet())
         {
-          oprot.writeString(_iter60.getKey());
-          _iter60.getValue().write(oprot);
+          oprot.writeString(_iter64.getKey());
+          _iter64.getValue().write(oprot);
         }
       }
       {
         oprot.writeI32(struct.state_spouts.size());
-        for (Map.Entry<String, StateSpoutSpec> _iter61 : struct.state_spouts.entrySet())
+        for (Map.Entry<String, StateSpoutSpec> _iter65 : struct.state_spouts.entrySet())
+        {
+          oprot.writeString(_iter65.getKey());
+          _iter65.getValue().write(oprot);
+        }
+      }
+      BitSet optionals = new BitSet();
+      if (struct.is_set_worker_hooks()) {
+        optionals.set(0);
+      }
+      oprot.writeBitSet(optionals, 1);
+      if (struct.is_set_worker_hooks()) {
         {
-          oprot.writeString(_iter61.getKey());
-          _iter61.getValue().write(oprot);
+          oprot.writeI32(struct.worker_hooks.size());
+          for (ByteBuffer _iter66 : struct.worker_hooks)
+          {
+            oprot.writeBinary(_iter66);
+          }
         }
       }
     }
@@ -755,47 +900,61 @@ public class StormTopology implements org.apache.thrift.TBase<StormTopology, Sto
     public void read(org.apache.thrift.protocol.TProtocol prot, StormTopology struct) throws org.apache.thrift.TException {
       TTupleProtocol iprot = (TTupleProtocol) prot;
       {
-        org.apache.thrift.protocol.TMap _map62 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.spouts = new HashMap<String,SpoutSpec>(2*_map62.size);
-        String _key63;
-        SpoutSpec _val64;
-        for (int _i65 = 0; _i65 < _map62.size; ++_i65)
+        org.apache.thrift.protocol.TMap _map67 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.spouts = new HashMap<String,SpoutSpec>(2*_map67.size);
+        String _key68;
+        SpoutSpec _val69;
+        for (int _i70 = 0; _i70 < _map67.size; ++_i70)
         {
-          _key63 = iprot.readString();
-          _val64 = new SpoutSpec();
-          _val64.read(iprot);
-          struct.spouts.put(_key63, _val64);
+          _key68 = iprot.readString();
+          _val69 = new SpoutSpec();
+          _val69.read(iprot);
+          struct.spouts.put(_key68, _val69);
         }
       }
       struct.set_spouts_isSet(true);
       {
-        org.apache.thrift.protocol.TMap _map66 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.bolts = new HashMap<String,Bolt>(2*_map66.size);
-        String _key67;
-        Bolt _val68;
-        for (int _i69 = 0; _i69 < _map66.size; ++_i69)
+        org.apache.thrift.protocol.TMap _map71 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.bolts = new HashMap<String,Bolt>(2*_map71.size);
+        String _key72;
+        Bolt _val73;
+        for (int _i74 = 0; _i74 < _map71.size; ++_i74)
         {
-          _key67 = iprot.readString();
-          _val68 = new Bolt();
-          _val68.read(iprot);
-          struct.bolts.put(_key67, _val68);
+          _key72 = iprot.readString();
+          _val73 = new Bolt();
+          _val73.read(iprot);
+          struct.bolts.put(_key72, _val73);
         }
       }
       struct.set_bolts_isSet(true);
       {
-        org.apache.thrift.protocol.TMap _map70 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.state_spouts = new HashMap<String,StateSpoutSpec>(2*_map70.size);
-        String _key71;
-        StateSpoutSpec _val72;
-        for (int _i73 = 0; _i73 < _map70.size; ++_i73)
+        org.apache.thrift.protocol.TMap _map75 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.state_spouts = new HashMap<String,StateSpoutSpec>(2*_map75.size);
+        String _key76;
+        StateSpoutSpec _val77;
+        for (int _i78 = 0; _i78 < _map75.size; ++_i78)
         {
-          _key71 = iprot.readString();
-          _val72 = new StateSpoutSpec();
-          _val72.read(iprot);
-          struct.state_spouts.put(_key71, _val72);
+          _key76 = iprot.readString();
+          _val77 = new StateSpoutSpec();
+          _val77.read(iprot);
+          struct.state_spouts.put(_key76, _val77);
         }
       }
       struct.set_state_spouts_isSet(true);
+      BitSet incoming = iprot.readBitSet(1);
+      if (incoming.get(0)) {
+        {
+          org.apache.thrift.protocol.TList _list79 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+          struct.worker_hooks = new ArrayList<ByteBuffer>(_list79.size);
+          ByteBuffer _elem80;
+          for (int _i81 = 0; _i81 < _list79.size; ++_i81)
+          {
+            _elem80 = iprot.readBinary();
+            struct.worker_hooks.add(_elem80);
+          }
+        }
+        struct.set_worker_hooks_isSet(true);
+      }
     }
   }
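
The tuple-scheme read/write methods added above follow Thrift's standard pattern for optional fields: the writer first emits a BitSet flagging which optional fields are set (here, bit 0 for `worker_hooks`), then writes only the flagged fields; the reader consults the same BitSet before attempting to read. The sketch below illustrates that pattern with plain Java collections, assuming no Thrift dependency — the names (`encode`, `decode`, the hooks list) are illustrative, not Storm's or Thrift's actual API.

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

// Minimal sketch of the optional-field pattern used by the generated
// TTupleProtocol code: a BitSet of "which optional fields are present"
// precedes the fields themselves, so an unset field costs only a bit.
public class TupleOptionalSketch {

    // Writer side: [optionals bitset][size][elements...] only when set.
    static List<Object> encode(List<String> workerHooks) {
        List<Object> out = new ArrayList<>();
        BitSet optionals = new BitSet();
        if (workerHooks != null) {
            optionals.set(0);          // flag optional field 0 as present
        }
        out.add(optionals);
        if (workerHooks != null) {
            out.add(workerHooks.size());
            out.addAll(workerHooks);   // write each element in declared order
        }
        return out;
    }

    // Reader side: consult the bitset before reading the optional field.
    static List<String> decode(List<Object> in) {
        int pos = 0;
        BitSet incoming = (BitSet) in.get(pos++);
        if (!incoming.get(0)) {
            return null;               // field was never written; skip it
        }
        int size = (Integer) in.get(pos++);
        List<String> hooks = new ArrayList<>(size);
        for (int i = 0; i < size; i++) {
            hooks.add((String) in.get(pos++));
        }
        return hooks;
    }
}
```

Because old readers that predate the field simply see an unset bit (or, in the binary scheme, skip the unknown field id), adding an `optional` field this way keeps the wire format backward compatible — which is why the surrounding diff can renumber local `_iter`/`_list` temporaries freely without affecting serialization.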
 

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/SupervisorInfo.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/SupervisorInfo.java b/storm-core/src/jvm/backtype/storm/generated/SupervisorInfo.java
index dcfb353..2ce5eb9 100644
--- a/storm-core/src/jvm/backtype/storm/generated/SupervisorInfo.java
+++ b/storm-core/src/jvm/backtype/storm/generated/SupervisorInfo.java
@@ -1085,13 +1085,13 @@ public class SupervisorInfo implements org.apache.thrift.TBase<SupervisorInfo, S
           case 4: // USED_PORTS
             if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
               {
-                org.apache.thrift.protocol.TList _list480 = iprot.readListBegin();
-                struct.used_ports = new ArrayList<Long>(_list480.size);
-                long _elem481;
-                for (int _i482 = 0; _i482 < _list480.size; ++_i482)
+                org.apache.thrift.protocol.TList _list488 = iprot.readListBegin();
+                struct.used_ports = new ArrayList<Long>(_list488.size);
+                long _elem489;
+                for (int _i490 = 0; _i490 < _list488.size; ++_i490)
                 {
-                  _elem481 = iprot.readI64();
-                  struct.used_ports.add(_elem481);
+                  _elem489 = iprot.readI64();
+                  struct.used_ports.add(_elem489);
                 }
                 iprot.readListEnd();
               }
@@ -1103,13 +1103,13 @@ public class SupervisorInfo implements org.apache.thrift.TBase<SupervisorInfo, S
           case 5: // META
             if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
               {
-                org.apache.thrift.protocol.TList _list483 = iprot.readListBegin();
-                struct.meta = new ArrayList<Long>(_list483.size);
-                long _elem484;
-                for (int _i485 = 0; _i485 < _list483.size; ++_i485)
+                org.apache.thrift.protocol.TList _list491 = iprot.readListBegin();
+                struct.meta = new ArrayList<Long>(_list491.size);
+                long _elem492;
+                for (int _i493 = 0; _i493 < _list491.size; ++_i493)
                 {
-                  _elem484 = iprot.readI64();
-                  struct.meta.add(_elem484);
+                  _elem492 = iprot.readI64();
+                  struct.meta.add(_elem492);
                 }
                 iprot.readListEnd();
               }
@@ -1121,15 +1121,15 @@ public class SupervisorInfo implements org.apache.thrift.TBase<SupervisorInfo, S
           case 6: // SCHEDULER_META
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map486 = iprot.readMapBegin();
-                struct.scheduler_meta = new HashMap<String,String>(2*_map486.size);
-                String _key487;
-                String _val488;
-                for (int _i489 = 0; _i489 < _map486.size; ++_i489)
+                org.apache.thrift.protocol.TMap _map494 = iprot.readMapBegin();
+                struct.scheduler_meta = new HashMap<String,String>(2*_map494.size);
+                String _key495;
+                String _val496;
+                for (int _i497 = 0; _i497 < _map494.size; ++_i497)
                 {
-                  _key487 = iprot.readString();
-                  _val488 = iprot.readString();
-                  struct.scheduler_meta.put(_key487, _val488);
+                  _key495 = iprot.readString();
+                  _val496 = iprot.readString();
+                  struct.scheduler_meta.put(_key495, _val496);
                 }
                 iprot.readMapEnd();
               }
@@ -1157,15 +1157,15 @@ public class SupervisorInfo implements org.apache.thrift.TBase<SupervisorInfo, S
           case 9: // RESOURCES_MAP
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map490 = iprot.readMapBegin();
-                struct.resources_map = new HashMap<String,Double>(2*_map490.size);
-                String _key491;
-                double _val492;
-                for (int _i493 = 0; _i493 < _map490.size; ++_i493)
+                org.apache.thrift.protocol.TMap _map498 = iprot.readMapBegin();
+                struct.resources_map = new HashMap<String,Double>(2*_map498.size);
+                String _key499;
+                double _val500;
+                for (int _i501 = 0; _i501 < _map498.size; ++_i501)
                 {
-                  _key491 = iprot.readString();
-                  _val492 = iprot.readDouble();
-                  struct.resources_map.put(_key491, _val492);
+                  _key499 = iprot.readString();
+                  _val500 = iprot.readDouble();
+                  struct.resources_map.put(_key499, _val500);
                 }
                 iprot.readMapEnd();
               }
@@ -1207,9 +1207,9 @@ public class SupervisorInfo implements org.apache.thrift.TBase<SupervisorInfo, S
           oprot.writeFieldBegin(USED_PORTS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.I64, struct.used_ports.size()));
-            for (long _iter494 : struct.used_ports)
+            for (long _iter502 : struct.used_ports)
             {
-              oprot.writeI64(_iter494);
+              oprot.writeI64(_iter502);
             }
             oprot.writeListEnd();
           }
@@ -1221,9 +1221,9 @@ public class SupervisorInfo implements org.apache.thrift.TBase<SupervisorInfo, S
           oprot.writeFieldBegin(META_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.I64, struct.meta.size()));
-            for (long _iter495 : struct.meta)
+            for (long _iter503 : struct.meta)
             {
-              oprot.writeI64(_iter495);
+              oprot.writeI64(_iter503);
             }
             oprot.writeListEnd();
           }
@@ -1235,10 +1235,10 @@ public class SupervisorInfo implements org.apache.thrift.TBase<SupervisorInfo, S
           oprot.writeFieldBegin(SCHEDULER_META_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, struct.scheduler_meta.size()));
-            for (Map.Entry<String, String> _iter496 : struct.scheduler_meta.entrySet())
+            for (Map.Entry<String, String> _iter504 : struct.scheduler_meta.entrySet())
             {
-              oprot.writeString(_iter496.getKey());
-              oprot.writeString(_iter496.getValue());
+              oprot.writeString(_iter504.getKey());
+              oprot.writeString(_iter504.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -1262,10 +1262,10 @@ public class SupervisorInfo implements org.apache.thrift.TBase<SupervisorInfo, S
           oprot.writeFieldBegin(RESOURCES_MAP_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.DOUBLE, struct.resources_map.size()));
-            for (Map.Entry<String, Double> _iter497 : struct.resources_map.entrySet())
+            for (Map.Entry<String, Double> _iter505 : struct.resources_map.entrySet())
             {
-              oprot.writeString(_iter497.getKey());
-              oprot.writeDouble(_iter497.getValue());
+              oprot.writeString(_iter505.getKey());
+              oprot.writeDouble(_iter505.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -1320,28 +1320,28 @@ public class SupervisorInfo implements org.apache.thrift.TBase<SupervisorInfo, S
       if (struct.is_set_used_ports()) {
         {
           oprot.writeI32(struct.used_ports.size());
-          for (long _iter498 : struct.used_ports)
+          for (long _iter506 : struct.used_ports)
           {
-            oprot.writeI64(_iter498);
+            oprot.writeI64(_iter506);
           }
         }
       }
       if (struct.is_set_meta()) {
         {
           oprot.writeI32(struct.meta.size());
-          for (long _iter499 : struct.meta)
+          for (long _iter507 : struct.meta)
           {
-            oprot.writeI64(_iter499);
+            oprot.writeI64(_iter507);
           }
         }
       }
       if (struct.is_set_scheduler_meta()) {
         {
           oprot.writeI32(struct.scheduler_meta.size());
-          for (Map.Entry<String, String> _iter500 : struct.scheduler_meta.entrySet())
+          for (Map.Entry<String, String> _iter508 : struct.scheduler_meta.entrySet())
           {
-            oprot.writeString(_iter500.getKey());
-            oprot.writeString(_iter500.getValue());
+            oprot.writeString(_iter508.getKey());
+            oprot.writeString(_iter508.getValue());
           }
         }
       }
@@ -1354,10 +1354,10 @@ public class SupervisorInfo implements org.apache.thrift.TBase<SupervisorInfo, S
       if (struct.is_set_resources_map()) {
         {
           oprot.writeI32(struct.resources_map.size());
-          for (Map.Entry<String, Double> _iter501 : struct.resources_map.entrySet())
+          for (Map.Entry<String, Double> _iter509 : struct.resources_map.entrySet())
           {
-            oprot.writeString(_iter501.getKey());
-            oprot.writeDouble(_iter501.getValue());
+            oprot.writeString(_iter509.getKey());
+            oprot.writeDouble(_iter509.getValue());
           }
         }
       }
@@ -1377,41 +1377,41 @@ public class SupervisorInfo implements org.apache.thrift.TBase<SupervisorInfo, S
       }
       if (incoming.get(1)) {
         {
-          org.apache.thrift.protocol.TList _list502 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.I64, iprot.readI32());
-          struct.used_ports = new ArrayList<Long>(_list502.size);
-          long _elem503;
-          for (int _i504 = 0; _i504 < _list502.size; ++_i504)
+          org.apache.thrift.protocol.TList _list510 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.I64, iprot.readI32());
+          struct.used_ports = new ArrayList<Long>(_list510.size);
+          long _elem511;
+          for (int _i512 = 0; _i512 < _list510.size; ++_i512)
           {
-            _elem503 = iprot.readI64();
-            struct.used_ports.add(_elem503);
+            _elem511 = iprot.readI64();
+            struct.used_ports.add(_elem511);
           }
         }
         struct.set_used_ports_isSet(true);
       }
       if (incoming.get(2)) {
         {
-          org.apache.thrift.protocol.TList _list505 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.I64, iprot.readI32());
-          struct.meta = new ArrayList<Long>(_list505.size);
-          long _elem506;
-          for (int _i507 = 0; _i507 < _list505.size; ++_i507)
+          org.apache.thrift.protocol.TList _list513 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.I64, iprot.readI32());
+          struct.meta = new ArrayList<Long>(_list513.size);
+          long _elem514;
+          for (int _i515 = 0; _i515 < _list513.size; ++_i515)
           {
-            _elem506 = iprot.readI64();
-            struct.meta.add(_elem506);
+            _elem514 = iprot.readI64();
+            struct.meta.add(_elem514);
           }
         }
         struct.set_meta_isSet(true);
       }
       if (incoming.get(3)) {
         {
-          org.apache.thrift.protocol.TMap _map508 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-          struct.scheduler_meta = new HashMap<String,String>(2*_map508.size);
-          String _key509;
-          String _val510;
-          for (int _i511 = 0; _i511 < _map508.size; ++_i511)
+          org.apache.thrift.protocol.TMap _map516 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+          struct.scheduler_meta = new HashMap<String,String>(2*_map516.size);
+          String _key517;
+          String _val518;
+          for (int _i519 = 0; _i519 < _map516.size; ++_i519)
           {
-            _key509 = iprot.readString();
-            _val510 = iprot.readString();
-            struct.scheduler_meta.put(_key509, _val510);
+            _key517 = iprot.readString();
+            _val518 = iprot.readString();
+            struct.scheduler_meta.put(_key517, _val518);
           }
         }
         struct.set_scheduler_meta_isSet(true);
@@ -1426,15 +1426,15 @@ public class SupervisorInfo implements org.apache.thrift.TBase<SupervisorInfo, S
       }
       if (incoming.get(6)) {
         {
-          org.apache.thrift.protocol.TMap _map512 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.DOUBLE, iprot.readI32());
-          struct.resources_map = new HashMap<String,Double>(2*_map512.size);
-          String _key513;
-          double _val514;
-          for (int _i515 = 0; _i515 < _map512.size; ++_i515)
+          org.apache.thrift.protocol.TMap _map520 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.DOUBLE, iprot.readI32());
+          struct.resources_map = new HashMap<String,Double>(2*_map520.size);
+          String _key521;
+          double _val522;
+          for (int _i523 = 0; _i523 < _map520.size; ++_i523)
           {
-            _key513 = iprot.readString();
-            _val514 = iprot.readDouble();
-            struct.resources_map.put(_key513, _val514);
+            _key521 = iprot.readString();
+            _val522 = iprot.readDouble();
+            struct.resources_map.put(_key521, _val522);
           }
         }
         struct.set_resources_map_isSet(true);

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/SupervisorSummary.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/SupervisorSummary.java b/storm-core/src/jvm/backtype/storm/generated/SupervisorSummary.java
index edb2016..8bdf289 100644
--- a/storm-core/src/jvm/backtype/storm/generated/SupervisorSummary.java
+++ b/storm-core/src/jvm/backtype/storm/generated/SupervisorSummary.java
@@ -1063,15 +1063,15 @@ public class SupervisorSummary implements org.apache.thrift.TBase<SupervisorSumm
           case 7: // TOTAL_RESOURCES
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map74 = iprot.readMapBegin();
-                struct.total_resources = new HashMap<String,Double>(2*_map74.size);
-                String _key75;
-                double _val76;
-                for (int _i77 = 0; _i77 < _map74.size; ++_i77)
+                org.apache.thrift.protocol.TMap _map82 = iprot.readMapBegin();
+                struct.total_resources = new HashMap<String,Double>(2*_map82.size);
+                String _key83;
+                double _val84;
+                for (int _i85 = 0; _i85 < _map82.size; ++_i85)
                 {
-                  _key75 = iprot.readString();
-                  _val76 = iprot.readDouble();
-                  struct.total_resources.put(_key75, _val76);
+                  _key83 = iprot.readString();
+                  _val84 = iprot.readDouble();
+                  struct.total_resources.put(_key83, _val84);
                 }
                 iprot.readMapEnd();
               }
@@ -1140,10 +1140,10 @@ public class SupervisorSummary implements org.apache.thrift.TBase<SupervisorSumm
           oprot.writeFieldBegin(TOTAL_RESOURCES_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.DOUBLE, struct.total_resources.size()));
-            for (Map.Entry<String, Double> _iter78 : struct.total_resources.entrySet())
+            for (Map.Entry<String, Double> _iter86 : struct.total_resources.entrySet())
             {
-              oprot.writeString(_iter78.getKey());
-              oprot.writeDouble(_iter78.getValue());
+              oprot.writeString(_iter86.getKey());
+              oprot.writeDouble(_iter86.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -1202,10 +1202,10 @@ public class SupervisorSummary implements org.apache.thrift.TBase<SupervisorSumm
       if (struct.is_set_total_resources()) {
         {
           oprot.writeI32(struct.total_resources.size());
-          for (Map.Entry<String, Double> _iter79 : struct.total_resources.entrySet())
+          for (Map.Entry<String, Double> _iter87 : struct.total_resources.entrySet())
           {
-            oprot.writeString(_iter79.getKey());
-            oprot.writeDouble(_iter79.getValue());
+            oprot.writeString(_iter87.getKey());
+            oprot.writeDouble(_iter87.getValue());
           }
         }
       }
@@ -1237,15 +1237,15 @@ public class SupervisorSummary implements org.apache.thrift.TBase<SupervisorSumm
       }
       if (incoming.get(1)) {
         {
-          org.apache.thrift.protocol.TMap _map80 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.DOUBLE, iprot.readI32());
-          struct.total_resources = new HashMap<String,Double>(2*_map80.size);
-          String _key81;
-          double _val82;
-          for (int _i83 = 0; _i83 < _map80.size; ++_i83)
+          org.apache.thrift.protocol.TMap _map88 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.DOUBLE, iprot.readI32());
+          struct.total_resources = new HashMap<String,Double>(2*_map88.size);
+          String _key89;
+          double _val90;
+          for (int _i91 = 0; _i91 < _map88.size; ++_i91)
           {
-            _key81 = iprot.readString();
-            _val82 = iprot.readDouble();
-            struct.total_resources.put(_key81, _val82);
+            _key89 = iprot.readString();
+            _val90 = iprot.readDouble();
+            struct.total_resources.put(_key89, _val90);
           }
         }
         struct.set_total_resources_isSet(true);

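[Editor's note] The TupleScheme read path above only decodes `total_resources` when bit 1 of the incoming BitSet is set. The following minimal, self-contained sketch shows that guard pattern in isolation; `BitSetDemo` and the field positions are illustrative stand-ins, not part of the generated `SupervisorSummary` class.

```java
import java.util.BitSet;

// Sketch of the TupleScheme optional-field pattern in the generated code:
// the writer emits a BitSet recording which optional fields are present,
// and the reader only decodes the fields whose bits are set.
public class BitSetDemo {
    public static void main(String[] args) {
        BitSet outgoing = new BitSet(2);
        outgoing.set(1);               // pretend only field 1 (total_resources) is set
        // ...writer would emit the bitset, then field 1's payload...
        BitSet incoming = (BitSet) outgoing.clone();
        System.out.println(incoming.get(0)); // false: field 0 absent, reader skips it
        System.out.println(incoming.get(1)); // true: field 1 present, reader decodes it
    }
}
```

In the generated `read` method this is exactly the `if (incoming.get(1)) { ... }` guard wrapped around the `total_resources` map decoding.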
http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/TopologyHistoryInfo.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/TopologyHistoryInfo.java b/storm-core/src/jvm/backtype/storm/generated/TopologyHistoryInfo.java
index cf9bff1..cced456 100644
--- a/storm-core/src/jvm/backtype/storm/generated/TopologyHistoryInfo.java
+++ b/storm-core/src/jvm/backtype/storm/generated/TopologyHistoryInfo.java
@@ -364,13 +364,13 @@ public class TopologyHistoryInfo implements org.apache.thrift.TBase<TopologyHist
           case 1: // TOPO_IDS
             if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
               {
-                org.apache.thrift.protocol.TList _list690 = iprot.readListBegin();
-                struct.topo_ids = new ArrayList<String>(_list690.size);
-                String _elem691;
-                for (int _i692 = 0; _i692 < _list690.size; ++_i692)
+                org.apache.thrift.protocol.TList _list698 = iprot.readListBegin();
+                struct.topo_ids = new ArrayList<String>(_list698.size);
+                String _elem699;
+                for (int _i700 = 0; _i700 < _list698.size; ++_i700)
                 {
-                  _elem691 = iprot.readString();
-                  struct.topo_ids.add(_elem691);
+                  _elem699 = iprot.readString();
+                  struct.topo_ids.add(_elem699);
                 }
                 iprot.readListEnd();
               }
@@ -396,9 +396,9 @@ public class TopologyHistoryInfo implements org.apache.thrift.TBase<TopologyHist
         oprot.writeFieldBegin(TOPO_IDS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, struct.topo_ids.size()));
-          for (String _iter693 : struct.topo_ids)
+          for (String _iter701 : struct.topo_ids)
           {
-            oprot.writeString(_iter693);
+            oprot.writeString(_iter701);
           }
           oprot.writeListEnd();
         }
@@ -429,9 +429,9 @@ public class TopologyHistoryInfo implements org.apache.thrift.TBase<TopologyHist
       if (struct.is_set_topo_ids()) {
         {
           oprot.writeI32(struct.topo_ids.size());
-          for (String _iter694 : struct.topo_ids)
+          for (String _iter702 : struct.topo_ids)
           {
-            oprot.writeString(_iter694);
+            oprot.writeString(_iter702);
           }
         }
       }
@@ -443,13 +443,13 @@ public class TopologyHistoryInfo implements org.apache.thrift.TBase<TopologyHist
       BitSet incoming = iprot.readBitSet(1);
       if (incoming.get(0)) {
         {
-          org.apache.thrift.protocol.TList _list695 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-          struct.topo_ids = new ArrayList<String>(_list695.size);
-          String _elem696;
-          for (int _i697 = 0; _i697 < _list695.size; ++_i697)
+          org.apache.thrift.protocol.TList _list703 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+          struct.topo_ids = new ArrayList<String>(_list703.size);
+          String _elem704;
+          for (int _i705 = 0; _i705 < _list703.size; ++_i705)
           {
-            _elem696 = iprot.readString();
-            struct.topo_ids.add(_elem696);
+            _elem704 = iprot.readString();
+            struct.topo_ids.add(_elem704);
           }
         }
         struct.set_topo_ids_isSet(true);

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/TopologyInfo.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/TopologyInfo.java b/storm-core/src/jvm/backtype/storm/generated/TopologyInfo.java
index f7c44b4..be1d706 100644
--- a/storm-core/src/jvm/backtype/storm/generated/TopologyInfo.java
+++ b/storm-core/src/jvm/backtype/storm/generated/TopologyInfo.java
@@ -231,7 +231,7 @@ public class TopologyInfo implements org.apache.thrift.TBase<TopologyInfo, Topol
     tmpMap.put(_Fields.COMPONENT_DEBUG, new org.apache.thrift.meta_data.FieldMetaData("component_debug", org.apache.thrift.TFieldRequirementType.OPTIONAL, 
         new org.apache.thrift.meta_data.MapMetaData(org.apache.thrift.protocol.TType.MAP, 
             new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING), 
-            new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT            , "DebugOptions"))));
+            new org.apache.thrift.meta_data.StructMetaData(org.apache.thrift.protocol.TType.STRUCT, DebugOptions.class))));
     tmpMap.put(_Fields.SCHED_STATUS, new org.apache.thrift.meta_data.FieldMetaData("sched_status", org.apache.thrift.TFieldRequirementType.OPTIONAL, 
         new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
     tmpMap.put(_Fields.OWNER, new org.apache.thrift.meta_data.FieldMetaData("owner", org.apache.thrift.TFieldRequirementType.OPTIONAL, 
@@ -324,7 +324,7 @@ public class TopologyInfo implements org.apache.thrift.TBase<TopologyInfo, Topol
 
         String __this__component_debug_copy_key = other_element_key;
 
-        DebugOptions __this__component_debug_copy_value = other_element_value;
+        DebugOptions __this__component_debug_copy_value = new DebugOptions(other_element_value);
 
         __this__component_debug.put(__this__component_debug_copy_key, __this__component_debug_copy_value);
       }
@@ -1650,14 +1650,14 @@ public class TopologyInfo implements org.apache.thrift.TBase<TopologyInfo, Topol
           case 4: // EXECUTORS
             if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
               {
-                org.apache.thrift.protocol.TList _list308 = iprot.readListBegin();
-                struct.executors = new ArrayList<ExecutorSummary>(_list308.size);
-                ExecutorSummary _elem309;
-                for (int _i310 = 0; _i310 < _list308.size; ++_i310)
+                org.apache.thrift.protocol.TList _list316 = iprot.readListBegin();
+                struct.executors = new ArrayList<ExecutorSummary>(_list316.size);
+                ExecutorSummary _elem317;
+                for (int _i318 = 0; _i318 < _list316.size; ++_i318)
                 {
-                  _elem309 = new ExecutorSummary();
-                  _elem309.read(iprot);
-                  struct.executors.add(_elem309);
+                  _elem317 = new ExecutorSummary();
+                  _elem317.read(iprot);
+                  struct.executors.add(_elem317);
                 }
                 iprot.readListEnd();
               }
@@ -1677,26 +1677,26 @@ public class TopologyInfo implements org.apache.thrift.TBase<TopologyInfo, Topol
           case 6: // ERRORS
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map311 = iprot.readMapBegin();
-                struct.errors = new HashMap<String,List<ErrorInfo>>(2*_map311.size);
-                String _key312;
-                List<ErrorInfo> _val313;
-                for (int _i314 = 0; _i314 < _map311.size; ++_i314)
+                org.apache.thrift.protocol.TMap _map319 = iprot.readMapBegin();
+                struct.errors = new HashMap<String,List<ErrorInfo>>(2*_map319.size);
+                String _key320;
+                List<ErrorInfo> _val321;
+                for (int _i322 = 0; _i322 < _map319.size; ++_i322)
                 {
-                  _key312 = iprot.readString();
+                  _key320 = iprot.readString();
                   {
-                    org.apache.thrift.protocol.TList _list315 = iprot.readListBegin();
-                    _val313 = new ArrayList<ErrorInfo>(_list315.size);
-                    ErrorInfo _elem316;
-                    for (int _i317 = 0; _i317 < _list315.size; ++_i317)
+                    org.apache.thrift.protocol.TList _list323 = iprot.readListBegin();
+                    _val321 = new ArrayList<ErrorInfo>(_list323.size);
+                    ErrorInfo _elem324;
+                    for (int _i325 = 0; _i325 < _list323.size; ++_i325)
                     {
-                      _elem316 = new ErrorInfo();
-                      _elem316.read(iprot);
-                      _val313.add(_elem316);
+                      _elem324 = new ErrorInfo();
+                      _elem324.read(iprot);
+                      _val321.add(_elem324);
                     }
                     iprot.readListEnd();
                   }
-                  struct.errors.put(_key312, _val313);
+                  struct.errors.put(_key320, _val321);
                 }
                 iprot.readMapEnd();
               }
@@ -1708,16 +1708,16 @@ public class TopologyInfo implements org.apache.thrift.TBase<TopologyInfo, Topol
           case 7: // COMPONENT_DEBUG
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map318 = iprot.readMapBegin();
-                struct.component_debug = new HashMap<String,DebugOptions>(2*_map318.size);
-                String _key319;
-                DebugOptions _val320;
-                for (int _i321 = 0; _i321 < _map318.size; ++_i321)
+                org.apache.thrift.protocol.TMap _map326 = iprot.readMapBegin();
+                struct.component_debug = new HashMap<String,DebugOptions>(2*_map326.size);
+                String _key327;
+                DebugOptions _val328;
+                for (int _i329 = 0; _i329 < _map326.size; ++_i329)
                 {
-                  _key319 = iprot.readString();
-                  _val320 = new DebugOptions();
-                  _val320.read(iprot);
-                  struct.component_debug.put(_key319, _val320);
+                  _key327 = iprot.readString();
+                  _val328 = new DebugOptions();
+                  _val328.read(iprot);
+                  struct.component_debug.put(_key327, _val328);
                 }
                 iprot.readMapEnd();
               }
@@ -1828,9 +1828,9 @@ public class TopologyInfo implements org.apache.thrift.TBase<TopologyInfo, Topol
         oprot.writeFieldBegin(EXECUTORS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.executors.size()));
-          for (ExecutorSummary _iter322 : struct.executors)
+          for (ExecutorSummary _iter330 : struct.executors)
           {
-            _iter322.write(oprot);
+            _iter330.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -1845,14 +1845,14 @@ public class TopologyInfo implements org.apache.thrift.TBase<TopologyInfo, Topol
         oprot.writeFieldBegin(ERRORS_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.LIST, struct.errors.size()));
-          for (Map.Entry<String, List<ErrorInfo>> _iter323 : struct.errors.entrySet())
+          for (Map.Entry<String, List<ErrorInfo>> _iter331 : struct.errors.entrySet())
           {
-            oprot.writeString(_iter323.getKey());
+            oprot.writeString(_iter331.getKey());
             {
-              oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, _iter323.getValue().size()));
-              for (ErrorInfo _iter324 : _iter323.getValue())
+              oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, _iter331.getValue().size()));
+              for (ErrorInfo _iter332 : _iter331.getValue())
               {
-                _iter324.write(oprot);
+                _iter332.write(oprot);
               }
               oprot.writeListEnd();
             }
@@ -1866,10 +1866,10 @@ public class TopologyInfo implements org.apache.thrift.TBase<TopologyInfo, Topol
           oprot.writeFieldBegin(COMPONENT_DEBUG_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, struct.component_debug.size()));
-            for (Map.Entry<String, DebugOptions> _iter325 : struct.component_debug.entrySet())
+            for (Map.Entry<String, DebugOptions> _iter333 : struct.component_debug.entrySet())
             {
-              oprot.writeString(_iter325.getKey());
-              _iter325.getValue().write(oprot);
+              oprot.writeString(_iter333.getKey());
+              _iter333.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -1947,22 +1947,22 @@ public class TopologyInfo implements org.apache.thrift.TBase<TopologyInfo, Topol
       oprot.writeI32(struct.uptime_secs);
       {
         oprot.writeI32(struct.executors.size());
-        for (ExecutorSummary _iter326 : struct.executors)
+        for (ExecutorSummary _iter334 : struct.executors)
         {
-          _iter326.write(oprot);
+          _iter334.write(oprot);
         }
       }
       oprot.writeString(struct.status);
       {
         oprot.writeI32(struct.errors.size());
-        for (Map.Entry<String, List<ErrorInfo>> _iter327 : struct.errors.entrySet())
+        for (Map.Entry<String, List<ErrorInfo>> _iter335 : struct.errors.entrySet())
         {
-          oprot.writeString(_iter327.getKey());
+          oprot.writeString(_iter335.getKey());
           {
-            oprot.writeI32(_iter327.getValue().size());
-            for (ErrorInfo _iter328 : _iter327.getValue())
+            oprot.writeI32(_iter335.getValue().size());
+            for (ErrorInfo _iter336 : _iter335.getValue())
             {
-              _iter328.write(oprot);
+              _iter336.write(oprot);
             }
           }
         }
@@ -2002,10 +2002,10 @@ public class TopologyInfo implements org.apache.thrift.TBase<TopologyInfo, Topol
       if (struct.is_set_component_debug()) {
         {
           oprot.writeI32(struct.component_debug.size());
-          for (Map.Entry<String, DebugOptions> _iter329 : struct.component_debug.entrySet())
+          for (Map.Entry<String, DebugOptions> _iter337 : struct.component_debug.entrySet())
           {
-            oprot.writeString(_iter329.getKey());
-            _iter329.getValue().write(oprot);
+            oprot.writeString(_iter337.getKey());
+            _iter337.getValue().write(oprot);
           }
         }
       }
@@ -2048,55 +2048,55 @@ public class TopologyInfo implements org.apache.thrift.TBase<TopologyInfo, Topol
       struct.uptime_secs = iprot.readI32();
       struct.set_uptime_secs_isSet(true);
       {
-        org.apache.thrift.protocol.TList _list330 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.executors = new ArrayList<ExecutorSummary>(_list330.size);
-        ExecutorSummary _elem331;
-        for (int _i332 = 0; _i332 < _list330.size; ++_i332)
+        org.apache.thrift.protocol.TList _list338 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.executors = new ArrayList<ExecutorSummary>(_list338.size);
+        ExecutorSummary _elem339;
+        for (int _i340 = 0; _i340 < _list338.size; ++_i340)
         {
-          _elem331 = new ExecutorSummary();
-          _elem331.read(iprot);
-          struct.executors.add(_elem331);
+          _elem339 = new ExecutorSummary();
+          _elem339.read(iprot);
+          struct.executors.add(_elem339);
         }
       }
       struct.set_executors_isSet(true);
       struct.status = iprot.readString();
       struct.set_status_isSet(true);
       {
-        org.apache.thrift.protocol.TMap _map333 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.LIST, iprot.readI32());
-        struct.errors = new HashMap<String,List<ErrorInfo>>(2*_map333.size);
-        String _key334;
-        List<ErrorInfo> _val335;
-        for (int _i336 = 0; _i336 < _map333.size; ++_i336)
+        org.apache.thrift.protocol.TMap _map341 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.LIST, iprot.readI32());
+        struct.errors = new HashMap<String,List<ErrorInfo>>(2*_map341.size);
+        String _key342;
+        List<ErrorInfo> _val343;
+        for (int _i344 = 0; _i344 < _map341.size; ++_i344)
         {
-          _key334 = iprot.readString();
+          _key342 = iprot.readString();
           {
-            org.apache.thrift.protocol.TList _list337 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-            _val335 = new ArrayList<ErrorInfo>(_list337.size);
-            ErrorInfo _elem338;
-            for (int _i339 = 0; _i339 < _list337.size; ++_i339)
+            org.apache.thrift.protocol.TList _list345 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+            _val343 = new ArrayList<ErrorInfo>(_list345.size);
+            ErrorInfo _elem346;
+            for (int _i347 = 0; _i347 < _list345.size; ++_i347)
             {
-              _elem338 = new ErrorInfo();
-              _elem338.read(iprot);
-              _val335.add(_elem338);
+              _elem346 = new ErrorInfo();
+              _elem346.read(iprot);
+              _val343.add(_elem346);
             }
           }
-          struct.errors.put(_key334, _val335);
+          struct.errors.put(_key342, _val343);
         }
       }
       struct.set_errors_isSet(true);
       BitSet incoming = iprot.readBitSet(10);
       if (incoming.get(0)) {
         {
-          org.apache.thrift.protocol.TMap _map340 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.component_debug = new HashMap<String,DebugOptions>(2*_map340.size);
-          String _key341;
-          DebugOptions _val342;
-          for (int _i343 = 0; _i343 < _map340.size; ++_i343)
+          org.apache.thrift.protocol.TMap _map348 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.component_debug = new HashMap<String,DebugOptions>(2*_map348.size);
+          String _key349;
+          DebugOptions _val350;
+          for (int _i351 = 0; _i351 < _map348.size; ++_i351)
           {
-            _key341 = iprot.readString();
-            _val342 = new DebugOptions();
-            _val342.read(iprot);
-            struct.component_debug.put(_key341, _val342);
+            _key349 = iprot.readString();
+            _val350 = new DebugOptions();
+            _val350.read(iprot);
+            struct.component_debug.put(_key349, _val350);
           }
         }
         struct.set_component_debug_isSet(true);

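[Editor's note] One substantive change in the TopologyInfo diff above replaces `DebugOptions __this__component_debug_copy_value = other_element_value;` with `new DebugOptions(other_element_value)`, making the copy constructor deep-copy map values instead of sharing them. The sketch below shows why that matters; `Options` and `deepCopy` are illustrative stand-ins for the generated `DebugOptions` struct and copy constructor, not Storm APIs.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the copy-constructor fix: copying mutable map values by
// reference shares state between original and copy; routing each value
// through a copy constructor makes the copy independent.
class Options {
    boolean enable;
    Options() {}
    Options(Options other) { this.enable = other.enable; } // deep copy
}

public class DeepCopyDemo {
    static Map<String, Options> deepCopy(Map<String, Options> src) {
        Map<String, Options> dst = new HashMap<>();
        for (Map.Entry<String, Options> e : src.entrySet()) {
            // the fixed generated code does the equivalent of this:
            dst.put(e.getKey(), new Options(e.getValue()));
        }
        return dst;
    }

    public static void main(String[] args) {
        Map<String, Options> orig = new HashMap<>();
        orig.put("spout", new Options());
        Map<String, Options> copy = deepCopy(orig);
        orig.get("spout").enable = true;      // mutate the original after copying
        System.out.println(copy.get("spout").enable); // false: copy unaffected
    }
}
```

With the pre-fix shallow assignment, mutating the original's `DebugOptions` would have been visible through the "copy" as well.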

[48/50] [abbrv] storm git commit: add David Wimsey to the contributor list

Posted by sr...@apache.org.
add David Wimsey to the contributor list


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/0acc1cee
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/0acc1cee
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/0acc1cee

Branch: refs/heads/STORM-1040
Commit: 0acc1cee6928985357c869a62497a5ebc56d8bf9
Parents: b082e85
Author: Jungtaek Lim <ka...@gmail.com>
Authored: Fri Nov 27 06:05:49 2015 +0900
Committer: Jungtaek Lim <ka...@gmail.com>
Committed: Fri Nov 27 06:05:49 2015 +0900

----------------------------------------------------------------------
 README.markdown | 1 +
 1 file changed, 1 insertion(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/0acc1cee/README.markdown
----------------------------------------------------------------------
diff --git a/README.markdown b/README.markdown
index d0963a8..19ad520 100644
--- a/README.markdown
+++ b/README.markdown
@@ -242,6 +242,7 @@ under the License.
 * Tom Graves ([@tgravescs](https://github.com/tgravescs))
 * Dror Weiss ([@drorweiss](https://github.com/drorweiss))
 * Victor Wong ([@victor-wong](https://github.com/victor-wong))
+* David Wimsey ([@dwimsey](https://github.com/dwimsey))
 
 ## Acknowledgements
 


[02/50] [abbrv] storm git commit: Added STORM-1189 to Changelog

Posted by sr...@apache.org.
Added STORM-1189 to Changelog

Adding setting to travis.

Add caching to speed up build.

Kicking Travis

Testing travis matrix.

Testing

Testing

Testing

Testing

Testing

Testing

Testing

Testing

Fixing .travis.yml

Cleanup

Testing split-up travis.

Fixing newline in travis.yml

Switching up the modules.

Switching up the modules.

Switching up the modules.

Revert whitespace.


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/bb266fb5
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/bb266fb5
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/bb266fb5

Branch: refs/heads/STORM-1040
Commit: bb266fb5f843a2791d9227df71563a958eafc256
Parents: 57a3b89
Author: Robert (Bobby) Evans <ev...@yahoo-inc.com>
Authored: Tue Nov 10 08:34:56 2015 -0600
Committer: Kyle Nusbaum <Ky...@gmail.com>
Committed: Mon Nov 23 16:33:14 2015 -0600

----------------------------------------------------------------------
 .travis.yml                       | 13 ++++++++++++-
 CHANGELOG.md                      |  1 +
 dev-tools/travis/travis-script.sh |  2 +-
 3 files changed, 14 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/bb266fb5/.travis.yml
----------------------------------------------------------------------
diff --git a/.travis.yml b/.travis.yml
index a7e2df4..05e24fe 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -9,6 +9,11 @@
 #  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 #  See the License for the specific language governing permissions and
 #  limitations under the License.
+
+env:
+  - MODULES=storm-core
+  - MODULES='!storm-core'
+
 language: java
 jdk:
   - oraclejdk7
@@ -18,4 +23,10 @@ before_install:
   - nvm install 0.12.2
   - nvm use 0.12.2
 install: /bin/bash ./dev-tools/travis/travis-install.sh `pwd`
-script: /bin/bash ./dev-tools/travis/travis-script.sh `pwd`
+script: /bin/bash ./dev-tools/travis/travis-script.sh `pwd` $MODULES
+sudo: false
+cache:
+  directories:
+    - "$HOME/.m2/repository"
+    - "$HOME/.rvm"
+    - "$NVM_DIR"

http://git-wip-us.apache.org/repos/asf/storm/blob/bb266fb5/CHANGELOG.md
----------------------------------------------------------------------
diff --git a/CHANGELOG.md b/CHANGELOG.md
index ae5d531..5ea5415 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,5 @@
 ## 0.11.0
+ * STORM-1189: Maintain wire compatability with 0.10.x versions of storm.
  * STORM-1185: replace nimbus.host with nimbus.seeds
  * STORM-1164: Code cleanup for typos, warnings and conciseness.
  * STORM-902: Simple Log Search.

http://git-wip-us.apache.org/repos/asf/storm/blob/bb266fb5/dev-tools/travis/travis-script.sh
----------------------------------------------------------------------
diff --git a/dev-tools/travis/travis-script.sh b/dev-tools/travis/travis-script.sh
index f388d4d..325915e 100755
--- a/dev-tools/travis/travis-script.sh
+++ b/dev-tools/travis/travis-script.sh
@@ -32,7 +32,7 @@ cd ${STORM_SRC_ROOT_DIR}
 export STORM_TEST_TIMEOUT_MS=100000
 
 # We now lean on Travis CI's implicit behavior, ```mvn clean install -DskipTests``` before running script
-mvn --batch-mode install -fae -Pnative
+mvn --batch-mode test -fae -Pnative -pl $2
 BUILD_RET_VAL=$?
 
 for dir in `find . -type d -and -wholename \*/target/\*-reports`;


[25/50] [abbrv] storm git commit: Merge branch 'noArgs' of https://github.com/ashnazg/storm

Posted by sr...@apache.org.
Merge branch 'noArgs' of https://github.com/ashnazg/storm


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/6ebf2477
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/6ebf2477
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/6ebf2477

Branch: refs/heads/STORM-1040
Commit: 6ebf24779b963bb52a7653d8d9d8ec04ed3e557b
Parents: 8eac4aa 8cb3dc3
Author: P. Taylor Goetz <pt...@gmail.com>
Authored: Tue Nov 24 14:56:01 2015 -0500
Committer: P. Taylor Goetz <pt...@gmail.com>
Committed: Tue Nov 24 14:56:01 2015 -0500

----------------------------------------------------------------------
 external/flux/README.md                                  |  3 +++
 .../src/main/java/org/apache/storm/flux/FluxBuilder.java | 11 ++++++++++-
 .../test/java/org/apache/storm/flux/test/TestBolt.java   |  4 ++++
 .../src/test/resources/configs/config-methods-test.yaml  |  1 +
 4 files changed, 18 insertions(+), 1 deletion(-)
----------------------------------------------------------------------



[43/50] [abbrv] storm git commit: add STORM-1207 to CHANGELOG.md

Posted by sr...@apache.org.
add STORM-1207 to CHANGELOG.md


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/c7c367ce
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/c7c367ce
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/c7c367ce

Branch: refs/heads/STORM-1040
Commit: c7c367ce846cc3d3738168ee497d6de4096ff6f3
Parents: 0d8a99d
Author: Jungtaek Lim <ka...@gmail.com>
Authored: Fri Nov 27 05:51:31 2015 +0900
Committer: Jungtaek Lim <ka...@gmail.com>
Committed: Fri Nov 27 05:51:31 2015 +0900

----------------------------------------------------------------------
 CHANGELOG.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/c7c367ce/CHANGELOG.md
----------------------------------------------------------------------
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 0f7919a..cb5a4a9 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,5 @@
-+## 0.11.0
+## 0.11.0
+ * STORM-1207: Added flux support for IWindowedBolt
  * STORM-1352: Trident should support writing to multiple Kafka clusters.
  * STORM-1220: Avoid double copying in the Kafka spout.
  * STORM-1340: Use Travis-CI build matrix to improve test execution times


[34/50] [abbrv] storm git commit: Merge branch 'exclude-intellij-output-file-from-rat-check'

Posted by sr...@apache.org.
Merge branch 'exclude-intellij-output-file-from-rat-check'


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/20a864d0
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/20a864d0
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/20a864d0

Branch: refs/heads/STORM-1040
Commit: 20a864d082dc0b302b8f66a654d159ab0f036cc3
Parents: 5a71ea0 7a02703
Author: Jungtaek Lim <ka...@gmail.com>
Authored: Wed Nov 25 10:24:08 2015 +0900
Committer: Jungtaek Lim <ka...@gmail.com>
Committed: Wed Nov 25 10:24:08 2015 +0900

----------------------------------------------------------------------
 pom.xml | 1 +
 1 file changed, 1 insertion(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/20a864d0/pom.xml
----------------------------------------------------------------------


[08/50] [abbrv] storm git commit: add support for worker lifecycle hooks

Posted by sr...@apache.org.
http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/TopologyPageInfo.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/TopologyPageInfo.java b/storm-core/src/jvm/backtype/storm/generated/TopologyPageInfo.java
index 4813fde..c943cac 100644
--- a/storm-core/src/jvm/backtype/storm/generated/TopologyPageInfo.java
+++ b/storm-core/src/jvm/backtype/storm/generated/TopologyPageInfo.java
@@ -2041,16 +2041,16 @@ public class TopologyPageInfo implements org.apache.thrift.TBase<TopologyPageInf
           case 9: // ID_TO_SPOUT_AGG_STATS
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map394 = iprot.readMapBegin();
-                struct.id_to_spout_agg_stats = new HashMap<String,ComponentAggregateStats>(2*_map394.size);
-                String _key395;
-                ComponentAggregateStats _val396;
-                for (int _i397 = 0; _i397 < _map394.size; ++_i397)
+                org.apache.thrift.protocol.TMap _map402 = iprot.readMapBegin();
+                struct.id_to_spout_agg_stats = new HashMap<String,ComponentAggregateStats>(2*_map402.size);
+                String _key403;
+                ComponentAggregateStats _val404;
+                for (int _i405 = 0; _i405 < _map402.size; ++_i405)
                 {
-                  _key395 = iprot.readString();
-                  _val396 = new ComponentAggregateStats();
-                  _val396.read(iprot);
-                  struct.id_to_spout_agg_stats.put(_key395, _val396);
+                  _key403 = iprot.readString();
+                  _val404 = new ComponentAggregateStats();
+                  _val404.read(iprot);
+                  struct.id_to_spout_agg_stats.put(_key403, _val404);
                 }
                 iprot.readMapEnd();
               }
@@ -2062,16 +2062,16 @@ public class TopologyPageInfo implements org.apache.thrift.TBase<TopologyPageInf
           case 10: // ID_TO_BOLT_AGG_STATS
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map398 = iprot.readMapBegin();
-                struct.id_to_bolt_agg_stats = new HashMap<String,ComponentAggregateStats>(2*_map398.size);
-                String _key399;
-                ComponentAggregateStats _val400;
-                for (int _i401 = 0; _i401 < _map398.size; ++_i401)
+                org.apache.thrift.protocol.TMap _map406 = iprot.readMapBegin();
+                struct.id_to_bolt_agg_stats = new HashMap<String,ComponentAggregateStats>(2*_map406.size);
+                String _key407;
+                ComponentAggregateStats _val408;
+                for (int _i409 = 0; _i409 < _map406.size; ++_i409)
                 {
-                  _key399 = iprot.readString();
-                  _val400 = new ComponentAggregateStats();
-                  _val400.read(iprot);
-                  struct.id_to_bolt_agg_stats.put(_key399, _val400);
+                  _key407 = iprot.readString();
+                  _val408 = new ComponentAggregateStats();
+                  _val408.read(iprot);
+                  struct.id_to_bolt_agg_stats.put(_key407, _val408);
                 }
                 iprot.readMapEnd();
               }
@@ -2234,10 +2234,10 @@ public class TopologyPageInfo implements org.apache.thrift.TBase<TopologyPageInf
           oprot.writeFieldBegin(ID_TO_SPOUT_AGG_STATS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, struct.id_to_spout_agg_stats.size()));
-            for (Map.Entry<String, ComponentAggregateStats> _iter402 : struct.id_to_spout_agg_stats.entrySet())
+            for (Map.Entry<String, ComponentAggregateStats> _iter410 : struct.id_to_spout_agg_stats.entrySet())
             {
-              oprot.writeString(_iter402.getKey());
-              _iter402.getValue().write(oprot);
+              oprot.writeString(_iter410.getKey());
+              _iter410.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -2249,10 +2249,10 @@ public class TopologyPageInfo implements org.apache.thrift.TBase<TopologyPageInf
           oprot.writeFieldBegin(ID_TO_BOLT_AGG_STATS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, struct.id_to_bolt_agg_stats.size()));
-            for (Map.Entry<String, ComponentAggregateStats> _iter403 : struct.id_to_bolt_agg_stats.entrySet())
+            for (Map.Entry<String, ComponentAggregateStats> _iter411 : struct.id_to_bolt_agg_stats.entrySet())
             {
-              oprot.writeString(_iter403.getKey());
-              _iter403.getValue().write(oprot);
+              oprot.writeString(_iter411.getKey());
+              _iter411.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -2426,20 +2426,20 @@ public class TopologyPageInfo implements org.apache.thrift.TBase<TopologyPageInf
       if (struct.is_set_id_to_spout_agg_stats()) {
         {
           oprot.writeI32(struct.id_to_spout_agg_stats.size());
-          for (Map.Entry<String, ComponentAggregateStats> _iter404 : struct.id_to_spout_agg_stats.entrySet())
+          for (Map.Entry<String, ComponentAggregateStats> _iter412 : struct.id_to_spout_agg_stats.entrySet())
           {
-            oprot.writeString(_iter404.getKey());
-            _iter404.getValue().write(oprot);
+            oprot.writeString(_iter412.getKey());
+            _iter412.getValue().write(oprot);
           }
         }
       }
       if (struct.is_set_id_to_bolt_agg_stats()) {
         {
           oprot.writeI32(struct.id_to_bolt_agg_stats.size());
-          for (Map.Entry<String, ComponentAggregateStats> _iter405 : struct.id_to_bolt_agg_stats.entrySet())
+          for (Map.Entry<String, ComponentAggregateStats> _iter413 : struct.id_to_bolt_agg_stats.entrySet())
           {
-            oprot.writeString(_iter405.getKey());
-            _iter405.getValue().write(oprot);
+            oprot.writeString(_iter413.getKey());
+            _iter413.getValue().write(oprot);
           }
         }
       }
@@ -2514,32 +2514,32 @@ public class TopologyPageInfo implements org.apache.thrift.TBase<TopologyPageInf
       }
       if (incoming.get(7)) {
         {
-          org.apache.thrift.protocol.TMap _map406 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.id_to_spout_agg_stats = new HashMap<String,ComponentAggregateStats>(2*_map406.size);
-          String _key407;
-          ComponentAggregateStats _val408;
-          for (int _i409 = 0; _i409 < _map406.size; ++_i409)
+          org.apache.thrift.protocol.TMap _map414 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.id_to_spout_agg_stats = new HashMap<String,ComponentAggregateStats>(2*_map414.size);
+          String _key415;
+          ComponentAggregateStats _val416;
+          for (int _i417 = 0; _i417 < _map414.size; ++_i417)
           {
-            _key407 = iprot.readString();
-            _val408 = new ComponentAggregateStats();
-            _val408.read(iprot);
-            struct.id_to_spout_agg_stats.put(_key407, _val408);
+            _key415 = iprot.readString();
+            _val416 = new ComponentAggregateStats();
+            _val416.read(iprot);
+            struct.id_to_spout_agg_stats.put(_key415, _val416);
           }
         }
         struct.set_id_to_spout_agg_stats_isSet(true);
       }
       if (incoming.get(8)) {
         {
-          org.apache.thrift.protocol.TMap _map410 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.id_to_bolt_agg_stats = new HashMap<String,ComponentAggregateStats>(2*_map410.size);
-          String _key411;
-          ComponentAggregateStats _val412;
-          for (int _i413 = 0; _i413 < _map410.size; ++_i413)
+          org.apache.thrift.protocol.TMap _map418 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.id_to_bolt_agg_stats = new HashMap<String,ComponentAggregateStats>(2*_map418.size);
+          String _key419;
+          ComponentAggregateStats _val420;
+          for (int _i421 = 0; _i421 < _map418.size; ++_i421)
           {
-            _key411 = iprot.readString();
-            _val412 = new ComponentAggregateStats();
-            _val412.read(iprot);
-            struct.id_to_bolt_agg_stats.put(_key411, _val412);
+            _key419 = iprot.readString();
+            _val420 = new ComponentAggregateStats();
+            _val420.read(iprot);
+            struct.id_to_bolt_agg_stats.put(_key419, _val420);
           }
         }
         struct.set_id_to_bolt_agg_stats_isSet(true);

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/TopologyStats.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/TopologyStats.java b/storm-core/src/jvm/backtype/storm/generated/TopologyStats.java
index 99f3922..aa598e4 100644
--- a/storm-core/src/jvm/backtype/storm/generated/TopologyStats.java
+++ b/storm-core/src/jvm/backtype/storm/generated/TopologyStats.java
@@ -737,15 +737,15 @@ public class TopologyStats implements org.apache.thrift.TBase<TopologyStats, Top
           case 1: // WINDOW_TO_EMITTED
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map344 = iprot.readMapBegin();
-                struct.window_to_emitted = new HashMap<String,Long>(2*_map344.size);
-                String _key345;
-                long _val346;
-                for (int _i347 = 0; _i347 < _map344.size; ++_i347)
+                org.apache.thrift.protocol.TMap _map352 = iprot.readMapBegin();
+                struct.window_to_emitted = new HashMap<String,Long>(2*_map352.size);
+                String _key353;
+                long _val354;
+                for (int _i355 = 0; _i355 < _map352.size; ++_i355)
                 {
-                  _key345 = iprot.readString();
-                  _val346 = iprot.readI64();
-                  struct.window_to_emitted.put(_key345, _val346);
+                  _key353 = iprot.readString();
+                  _val354 = iprot.readI64();
+                  struct.window_to_emitted.put(_key353, _val354);
                 }
                 iprot.readMapEnd();
               }
@@ -757,15 +757,15 @@ public class TopologyStats implements org.apache.thrift.TBase<TopologyStats, Top
           case 2: // WINDOW_TO_TRANSFERRED
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map348 = iprot.readMapBegin();
-                struct.window_to_transferred = new HashMap<String,Long>(2*_map348.size);
-                String _key349;
-                long _val350;
-                for (int _i351 = 0; _i351 < _map348.size; ++_i351)
+                org.apache.thrift.protocol.TMap _map356 = iprot.readMapBegin();
+                struct.window_to_transferred = new HashMap<String,Long>(2*_map356.size);
+                String _key357;
+                long _val358;
+                for (int _i359 = 0; _i359 < _map356.size; ++_i359)
                 {
-                  _key349 = iprot.readString();
-                  _val350 = iprot.readI64();
-                  struct.window_to_transferred.put(_key349, _val350);
+                  _key357 = iprot.readString();
+                  _val358 = iprot.readI64();
+                  struct.window_to_transferred.put(_key357, _val358);
                 }
                 iprot.readMapEnd();
               }
@@ -777,15 +777,15 @@ public class TopologyStats implements org.apache.thrift.TBase<TopologyStats, Top
           case 3: // WINDOW_TO_COMPLETE_LATENCIES_MS
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map352 = iprot.readMapBegin();
-                struct.window_to_complete_latencies_ms = new HashMap<String,Double>(2*_map352.size);
-                String _key353;
-                double _val354;
-                for (int _i355 = 0; _i355 < _map352.size; ++_i355)
+                org.apache.thrift.protocol.TMap _map360 = iprot.readMapBegin();
+                struct.window_to_complete_latencies_ms = new HashMap<String,Double>(2*_map360.size);
+                String _key361;
+                double _val362;
+                for (int _i363 = 0; _i363 < _map360.size; ++_i363)
                 {
-                  _key353 = iprot.readString();
-                  _val354 = iprot.readDouble();
-                  struct.window_to_complete_latencies_ms.put(_key353, _val354);
+                  _key361 = iprot.readString();
+                  _val362 = iprot.readDouble();
+                  struct.window_to_complete_latencies_ms.put(_key361, _val362);
                 }
                 iprot.readMapEnd();
               }
@@ -797,15 +797,15 @@ public class TopologyStats implements org.apache.thrift.TBase<TopologyStats, Top
           case 4: // WINDOW_TO_ACKED
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map356 = iprot.readMapBegin();
-                struct.window_to_acked = new HashMap<String,Long>(2*_map356.size);
-                String _key357;
-                long _val358;
-                for (int _i359 = 0; _i359 < _map356.size; ++_i359)
+                org.apache.thrift.protocol.TMap _map364 = iprot.readMapBegin();
+                struct.window_to_acked = new HashMap<String,Long>(2*_map364.size);
+                String _key365;
+                long _val366;
+                for (int _i367 = 0; _i367 < _map364.size; ++_i367)
                 {
-                  _key357 = iprot.readString();
-                  _val358 = iprot.readI64();
-                  struct.window_to_acked.put(_key357, _val358);
+                  _key365 = iprot.readString();
+                  _val366 = iprot.readI64();
+                  struct.window_to_acked.put(_key365, _val366);
                 }
                 iprot.readMapEnd();
               }
@@ -817,15 +817,15 @@ public class TopologyStats implements org.apache.thrift.TBase<TopologyStats, Top
           case 5: // WINDOW_TO_FAILED
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map360 = iprot.readMapBegin();
-                struct.window_to_failed = new HashMap<String,Long>(2*_map360.size);
-                String _key361;
-                long _val362;
-                for (int _i363 = 0; _i363 < _map360.size; ++_i363)
+                org.apache.thrift.protocol.TMap _map368 = iprot.readMapBegin();
+                struct.window_to_failed = new HashMap<String,Long>(2*_map368.size);
+                String _key369;
+                long _val370;
+                for (int _i371 = 0; _i371 < _map368.size; ++_i371)
                 {
-                  _key361 = iprot.readString();
-                  _val362 = iprot.readI64();
-                  struct.window_to_failed.put(_key361, _val362);
+                  _key369 = iprot.readString();
+                  _val370 = iprot.readI64();
+                  struct.window_to_failed.put(_key369, _val370);
                 }
                 iprot.readMapEnd();
               }
@@ -852,10 +852,10 @@ public class TopologyStats implements org.apache.thrift.TBase<TopologyStats, Top
           oprot.writeFieldBegin(WINDOW_TO_EMITTED_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, struct.window_to_emitted.size()));
-            for (Map.Entry<String, Long> _iter364 : struct.window_to_emitted.entrySet())
+            for (Map.Entry<String, Long> _iter372 : struct.window_to_emitted.entrySet())
             {
-              oprot.writeString(_iter364.getKey());
-              oprot.writeI64(_iter364.getValue());
+              oprot.writeString(_iter372.getKey());
+              oprot.writeI64(_iter372.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -867,10 +867,10 @@ public class TopologyStats implements org.apache.thrift.TBase<TopologyStats, Top
           oprot.writeFieldBegin(WINDOW_TO_TRANSFERRED_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, struct.window_to_transferred.size()));
-            for (Map.Entry<String, Long> _iter365 : struct.window_to_transferred.entrySet())
+            for (Map.Entry<String, Long> _iter373 : struct.window_to_transferred.entrySet())
             {
-              oprot.writeString(_iter365.getKey());
-              oprot.writeI64(_iter365.getValue());
+              oprot.writeString(_iter373.getKey());
+              oprot.writeI64(_iter373.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -882,10 +882,10 @@ public class TopologyStats implements org.apache.thrift.TBase<TopologyStats, Top
           oprot.writeFieldBegin(WINDOW_TO_COMPLETE_LATENCIES_MS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.DOUBLE, struct.window_to_complete_latencies_ms.size()));
-            for (Map.Entry<String, Double> _iter366 : struct.window_to_complete_latencies_ms.entrySet())
+            for (Map.Entry<String, Double> _iter374 : struct.window_to_complete_latencies_ms.entrySet())
             {
-              oprot.writeString(_iter366.getKey());
-              oprot.writeDouble(_iter366.getValue());
+              oprot.writeString(_iter374.getKey());
+              oprot.writeDouble(_iter374.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -897,10 +897,10 @@ public class TopologyStats implements org.apache.thrift.TBase<TopologyStats, Top
           oprot.writeFieldBegin(WINDOW_TO_ACKED_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, struct.window_to_acked.size()));
-            for (Map.Entry<String, Long> _iter367 : struct.window_to_acked.entrySet())
+            for (Map.Entry<String, Long> _iter375 : struct.window_to_acked.entrySet())
             {
-              oprot.writeString(_iter367.getKey());
-              oprot.writeI64(_iter367.getValue());
+              oprot.writeString(_iter375.getKey());
+              oprot.writeI64(_iter375.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -912,10 +912,10 @@ public class TopologyStats implements org.apache.thrift.TBase<TopologyStats, Top
           oprot.writeFieldBegin(WINDOW_TO_FAILED_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, struct.window_to_failed.size()));
-            for (Map.Entry<String, Long> _iter368 : struct.window_to_failed.entrySet())
+            for (Map.Entry<String, Long> _iter376 : struct.window_to_failed.entrySet())
             {
-              oprot.writeString(_iter368.getKey());
-              oprot.writeI64(_iter368.getValue());
+              oprot.writeString(_iter376.getKey());
+              oprot.writeI64(_iter376.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -959,50 +959,50 @@ public class TopologyStats implements org.apache.thrift.TBase<TopologyStats, Top
       if (struct.is_set_window_to_emitted()) {
         {
           oprot.writeI32(struct.window_to_emitted.size());
-          for (Map.Entry<String, Long> _iter369 : struct.window_to_emitted.entrySet())
+          for (Map.Entry<String, Long> _iter377 : struct.window_to_emitted.entrySet())
           {
-            oprot.writeString(_iter369.getKey());
-            oprot.writeI64(_iter369.getValue());
+            oprot.writeString(_iter377.getKey());
+            oprot.writeI64(_iter377.getValue());
           }
         }
       }
       if (struct.is_set_window_to_transferred()) {
         {
           oprot.writeI32(struct.window_to_transferred.size());
-          for (Map.Entry<String, Long> _iter370 : struct.window_to_transferred.entrySet())
+          for (Map.Entry<String, Long> _iter378 : struct.window_to_transferred.entrySet())
           {
-            oprot.writeString(_iter370.getKey());
-            oprot.writeI64(_iter370.getValue());
+            oprot.writeString(_iter378.getKey());
+            oprot.writeI64(_iter378.getValue());
           }
         }
       }
       if (struct.is_set_window_to_complete_latencies_ms()) {
         {
           oprot.writeI32(struct.window_to_complete_latencies_ms.size());
-          for (Map.Entry<String, Double> _iter371 : struct.window_to_complete_latencies_ms.entrySet())
+          for (Map.Entry<String, Double> _iter379 : struct.window_to_complete_latencies_ms.entrySet())
           {
-            oprot.writeString(_iter371.getKey());
-            oprot.writeDouble(_iter371.getValue());
+            oprot.writeString(_iter379.getKey());
+            oprot.writeDouble(_iter379.getValue());
           }
         }
       }
       if (struct.is_set_window_to_acked()) {
         {
           oprot.writeI32(struct.window_to_acked.size());
-          for (Map.Entry<String, Long> _iter372 : struct.window_to_acked.entrySet())
+          for (Map.Entry<String, Long> _iter380 : struct.window_to_acked.entrySet())
           {
-            oprot.writeString(_iter372.getKey());
-            oprot.writeI64(_iter372.getValue());
+            oprot.writeString(_iter380.getKey());
+            oprot.writeI64(_iter380.getValue());
           }
         }
       }
       if (struct.is_set_window_to_failed()) {
         {
           oprot.writeI32(struct.window_to_failed.size());
-          for (Map.Entry<String, Long> _iter373 : struct.window_to_failed.entrySet())
+          for (Map.Entry<String, Long> _iter381 : struct.window_to_failed.entrySet())
           {
-            oprot.writeString(_iter373.getKey());
-            oprot.writeI64(_iter373.getValue());
+            oprot.writeString(_iter381.getKey());
+            oprot.writeI64(_iter381.getValue());
           }
         }
       }
@@ -1014,75 +1014,75 @@ public class TopologyStats implements org.apache.thrift.TBase<TopologyStats, Top
       BitSet incoming = iprot.readBitSet(5);
       if (incoming.get(0)) {
         {
-          org.apache.thrift.protocol.TMap _map374 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, iprot.readI32());
-          struct.window_to_emitted = new HashMap<String,Long>(2*_map374.size);
-          String _key375;
-          long _val376;
-          for (int _i377 = 0; _i377 < _map374.size; ++_i377)
+          org.apache.thrift.protocol.TMap _map382 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, iprot.readI32());
+          struct.window_to_emitted = new HashMap<String,Long>(2*_map382.size);
+          String _key383;
+          long _val384;
+          for (int _i385 = 0; _i385 < _map382.size; ++_i385)
           {
-            _key375 = iprot.readString();
-            _val376 = iprot.readI64();
-            struct.window_to_emitted.put(_key375, _val376);
+            _key383 = iprot.readString();
+            _val384 = iprot.readI64();
+            struct.window_to_emitted.put(_key383, _val384);
           }
         }
         struct.set_window_to_emitted_isSet(true);
       }
       if (incoming.get(1)) {
         {
-          org.apache.thrift.protocol.TMap _map378 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, iprot.readI32());
-          struct.window_to_transferred = new HashMap<String,Long>(2*_map378.size);
-          String _key379;
-          long _val380;
-          for (int _i381 = 0; _i381 < _map378.size; ++_i381)
+          org.apache.thrift.protocol.TMap _map386 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, iprot.readI32());
+          struct.window_to_transferred = new HashMap<String,Long>(2*_map386.size);
+          String _key387;
+          long _val388;
+          for (int _i389 = 0; _i389 < _map386.size; ++_i389)
           {
-            _key379 = iprot.readString();
-            _val380 = iprot.readI64();
-            struct.window_to_transferred.put(_key379, _val380);
+            _key387 = iprot.readString();
+            _val388 = iprot.readI64();
+            struct.window_to_transferred.put(_key387, _val388);
           }
         }
         struct.set_window_to_transferred_isSet(true);
       }
       if (incoming.get(2)) {
         {
-          org.apache.thrift.protocol.TMap _map382 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.DOUBLE, iprot.readI32());
-          struct.window_to_complete_latencies_ms = new HashMap<String,Double>(2*_map382.size);
-          String _key383;
-          double _val384;
-          for (int _i385 = 0; _i385 < _map382.size; ++_i385)
+          org.apache.thrift.protocol.TMap _map390 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.DOUBLE, iprot.readI32());
+          struct.window_to_complete_latencies_ms = new HashMap<String,Double>(2*_map390.size);
+          String _key391;
+          double _val392;
+          for (int _i393 = 0; _i393 < _map390.size; ++_i393)
           {
-            _key383 = iprot.readString();
-            _val384 = iprot.readDouble();
-            struct.window_to_complete_latencies_ms.put(_key383, _val384);
+            _key391 = iprot.readString();
+            _val392 = iprot.readDouble();
+            struct.window_to_complete_latencies_ms.put(_key391, _val392);
           }
         }
         struct.set_window_to_complete_latencies_ms_isSet(true);
       }
       if (incoming.get(3)) {
         {
-          org.apache.thrift.protocol.TMap _map386 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, iprot.readI32());
-          struct.window_to_acked = new HashMap<String,Long>(2*_map386.size);
-          String _key387;
-          long _val388;
-          for (int _i389 = 0; _i389 < _map386.size; ++_i389)
+          org.apache.thrift.protocol.TMap _map394 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, iprot.readI32());
+          struct.window_to_acked = new HashMap<String,Long>(2*_map394.size);
+          String _key395;
+          long _val396;
+          for (int _i397 = 0; _i397 < _map394.size; ++_i397)
           {
-            _key387 = iprot.readString();
-            _val388 = iprot.readI64();
-            struct.window_to_acked.put(_key387, _val388);
+            _key395 = iprot.readString();
+            _val396 = iprot.readI64();
+            struct.window_to_acked.put(_key395, _val396);
           }
         }
         struct.set_window_to_acked_isSet(true);
       }
       if (incoming.get(4)) {
         {
-          org.apache.thrift.protocol.TMap _map390 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, iprot.readI32());
-          struct.window_to_failed = new HashMap<String,Long>(2*_map390.size);
-          String _key391;
-          long _val392;
-          for (int _i393 = 0; _i393 < _map390.size; ++_i393)
+          org.apache.thrift.protocol.TMap _map398 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I64, iprot.readI32());
+          struct.window_to_failed = new HashMap<String,Long>(2*_map398.size);
+          String _key399;
+          long _val400;
+          for (int _i401 = 0; _i401 < _map398.size; ++_i401)
           {
-            _key391 = iprot.readString();
-            _val392 = iprot.readI64();
-            struct.window_to_failed.put(_key391, _val392);
+            _key399 = iprot.readString();
+            _val400 = iprot.readI64();
+            struct.window_to_failed.put(_key399, _val400);
           }
         }
         struct.set_window_to_failed_isSet(true);

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java b/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java
new file mode 100644
index 0000000..6fe9f19
--- /dev/null
+++ b/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java
@@ -0,0 +1,34 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.hooks;
+
+import backtype.storm.task.WorkerTopologyContext;
+
+import java.util.List;
+import java.util.Map;
+
+public class BaseWorkerHook implements IWorkerHook {
+    @Override
+    public void start(Map stormConf, WorkerTopologyContext context, List taskIds) {
+
+    }
+
+    @Override
+    public void shutdown() {
+    }
+}

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java b/storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java
new file mode 100644
index 0000000..6c2bab2
--- /dev/null
+++ b/storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java
@@ -0,0 +1,29 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package backtype.storm.hooks;
+
+import backtype.storm.task.WorkerTopologyContext;
+
+import java.io.Serializable;
+import java.util.List;
+import java.util.Map;
+
+public interface IWorkerHook extends Serializable {
+    void start(Map stormConf, WorkerTopologyContext context, List taskIds);
+    void shutdown();
+}
\ No newline at end of file
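The new `IWorkerHook` interface above gives a per-worker lifecycle callback: `start()` runs once when the worker process launches (with the topology config and the task ids assigned to that worker), and `shutdown()` runs when the worker stops. The sketch below illustrates the shape of a hook implementation. Note that `WorkerHook`, `LifecycleTrackingHook`, and `WorkerHookSketch` are local stand-in names declared here purely for illustration; in a real topology you would implement `backtype.storm.hooks.IWorkerHook` (or extend `BaseWorkerHook`) from storm-core directly.

```java
import java.io.Serializable;
import java.util.List;
import java.util.Map;

// Stand-in mirror of backtype.storm.hooks.IWorkerHook, declared locally so this
// sketch compiles without storm-core on the classpath (illustration only).
interface WorkerHook extends Serializable {
    void start(Map stormConf, Object context, List taskIds);
    void shutdown();
}

// A hook that records lifecycle transitions, analogous to overriding the
// start()/shutdown() callbacks of BaseWorkerHook.
class LifecycleTrackingHook implements WorkerHook {
    boolean started = false;
    boolean stopped = false;
    int taskCount = -1;

    @Override
    public void start(Map stormConf, Object context, List taskIds) {
        // Runs once per worker process, before any spout/bolt starts executing.
        started = true;
        taskCount = taskIds.size();
    }

    @Override
    public void shutdown() {
        // Runs when the worker is torn down; release per-worker resources here.
        stopped = true;
    }
}

public class WorkerHookSketch {
    public static void main(String[] args) {
        LifecycleTrackingHook hook = new LifecycleTrackingHook();
        hook.start(new java.util.HashMap(), null, java.util.Arrays.asList(1, 2, 3));
        hook.shutdown();
        System.out.println(hook.started + " " + hook.taskCount + " " + hook.stopped);
    }
}
```

Because `IWorkerHook` extends `Serializable`, any state a hook carries must itself be serializable; the hook instance is serialized into the topology at submit time rather than constructed on the worker.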

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/topology/TopologyBuilder.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/topology/TopologyBuilder.java b/storm-core/src/jvm/backtype/storm/topology/TopologyBuilder.java
index 9536faa..965540e 100644
--- a/storm-core/src/jvm/backtype/storm/topology/TopologyBuilder.java
+++ b/storm-core/src/jvm/backtype/storm/topology/TopologyBuilder.java
@@ -18,25 +18,20 @@
 package backtype.storm.topology;
 
 import backtype.storm.Config;
-import backtype.storm.generated.Bolt;
-import backtype.storm.generated.ComponentCommon;
-import backtype.storm.generated.ComponentObject;
-import backtype.storm.generated.GlobalStreamId;
-import backtype.storm.generated.Grouping;
-import backtype.storm.generated.NullStruct;
-import backtype.storm.generated.SpoutSpec;
-import backtype.storm.generated.StateSpoutSpec;
-import backtype.storm.generated.StormTopology;
+import backtype.storm.generated.*;
 import backtype.storm.grouping.CustomStreamGrouping;
 import backtype.storm.grouping.PartialKeyGrouping;
+import backtype.storm.hooks.IWorkerHook;
 import backtype.storm.tuple.Fields;
 import backtype.storm.utils.Utils;
+import org.json.simple.JSONValue;
+
+import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.HashMap;
+import java.util.List;
 import java.util.Map;
-
 import backtype.storm.windowing.TupleWindow;
-import org.json.simple.JSONValue;
 
 /**
  * TopologyBuilder exposes the Java API for specifying a topology for Storm
@@ -98,11 +93,13 @@ public class TopologyBuilder {
 //    private Map<String, Map<GlobalStreamId, Grouping>> _inputs = new HashMap<String, Map<GlobalStreamId, Grouping>>();
 
     private Map<String, StateSpoutSpec> _stateSpouts = new HashMap<>();
-    
-    
+    private List<ByteBuffer> _workerHooks = new ArrayList<>();
+
+
     public StormTopology createTopology() {
         Map<String, Bolt> boltSpecs = new HashMap<>();
         Map<String, SpoutSpec> spoutSpecs = new HashMap<>();
+
         for(String boltId: _bolts.keySet()) {
             IRichBolt bolt = _bolts.get(boltId);
             ComponentCommon common = getComponentCommon(boltId, bolt);
@@ -112,11 +109,15 @@ public class TopologyBuilder {
             IRichSpout spout = _spouts.get(spoutId);
             ComponentCommon common = getComponentCommon(spoutId, spout);
             spoutSpecs.put(spoutId, new SpoutSpec(ComponentObject.serialized_java(Utils.javaSerialize(spout)), common));
-            
         }
-        return new StormTopology(spoutSpecs,
-                                 boltSpecs,
-                                 new HashMap<String, StateSpoutSpec>());
+
+        StormTopology stormTopology = new StormTopology(spoutSpecs,
+                boltSpecs,
+                new HashMap<String, StateSpoutSpec>());
+
+        stormTopology.set_worker_hooks(_workerHooks);
+
+        return stormTopology;
     }
 
     /**
@@ -230,6 +231,14 @@ public class TopologyBuilder {
         // TODO: finish
     }
 
+    /**
+     * Add a new worker lifecycle hook
+     *
+     * @param workerHook the lifecycle hook to add
+     */
+    public void addWorkerHook(IWorkerHook workerHook) {
+        _workerHooks.add(ByteBuffer.wrap(Utils.javaSerialize(workerHook)));
+    }
 
     private void validateUnusedId(String id) {
         if(_bolts.containsKey(id)) {

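The `addWorkerHook` method above wraps a Java-serialized hook in a `ByteBuffer` so it can travel inside the Thrift `StormTopology`. Below is a standalone sketch of that round trip, using plain `java.io` serialization as a stand-in for Storm's `Utils.javaSerialize`/`Utils.javaDeserialize` (an assumption about their behavior, not the actual implementation), with a hypothetical `LoggingHook` class in place of a real worker hook:

```java
import java.io.*;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class WorkerHookSerialization {
    // Stand-in for a user-defined worker hook; Storm requires hooks to be Serializable.
    static class LoggingHook implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        LoggingHook(String name) { this.name = name; }
    }

    // Mirrors TopologyBuilder.addWorkerHook: serialize the hook and keep it as a ByteBuffer.
    static ByteBuffer serializeHook(Serializable hook) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(hook);
        }
        return ByteBuffer.wrap(bos.toByteArray());
    }

    // Mirrors the worker side: recover the bytes and deserialize the hook back.
    static Object deserializeHook(ByteBuffer buf) throws IOException, ClassNotFoundException {
        byte[] bytes = new byte[buf.remaining()];
        buf.duplicate().get(bytes); // duplicate so the stored buffer's position is untouched
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        List<ByteBuffer> workerHooks = new ArrayList<>();
        workerHooks.add(serializeHook(new LoggingHook("metrics")));
        LoggingHook restored = (LoggingHook) deserializeHook(workerHooks.get(0));
        System.out.println(restored.name); // the hook survives the topology round trip
    }
}
```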
http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/utils/ThriftTopologyUtils.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/utils/ThriftTopologyUtils.java b/storm-core/src/jvm/backtype/storm/utils/ThriftTopologyUtils.java
index 8306d9b..d5c460f 100644
--- a/storm-core/src/jvm/backtype/storm/utils/ThriftTopologyUtils.java
+++ b/storm-core/src/jvm/backtype/storm/utils/ThriftTopologyUtils.java
@@ -27,30 +27,38 @@ import java.util.Map;
 import java.util.Set;
 
 public class ThriftTopologyUtils {
+    public static boolean isWorkerHook(StormTopology._Fields f) {
+        return f.equals(StormTopology._Fields.WORKER_HOOKS);
+    }
+
     public static Set<String> getComponentIds(StormTopology topology) {
         Set<String> ret = new HashSet<String>();
         for(StormTopology._Fields f: StormTopology.metaDataMap.keySet()) {
-            Map<String, Object> componentMap = (Map<String, Object>) topology.getFieldValue(f);
-            ret.addAll(componentMap.keySet());
+            if(StormTopology.metaDataMap.get(f).valueMetaData.type == org.apache.thrift.protocol.TType.MAP) {
+                Map<String, Object> componentMap = (Map<String, Object>) topology.getFieldValue(f);
+                ret.addAll(componentMap.keySet());
+            }
         }
         return ret;
     }
 
     public static ComponentCommon getComponentCommon(StormTopology topology, String componentId) {
         for(StormTopology._Fields f: StormTopology.metaDataMap.keySet()) {
-            Map<String, Object> componentMap = (Map<String, Object>) topology.getFieldValue(f);
-            if(componentMap.containsKey(componentId)) {
-                Object component = componentMap.get(componentId);
-                if(component instanceof Bolt) {
-                    return ((Bolt) component).get_common();
-                }
-                if(component instanceof SpoutSpec) {
-                    return ((SpoutSpec) component).get_common();
-                }
-                if(component instanceof StateSpoutSpec) {
-                    return ((StateSpoutSpec) component).get_common();
+            if(StormTopology.metaDataMap.get(f).valueMetaData.type == org.apache.thrift.protocol.TType.MAP) {
+                Map<String, Object> componentMap = (Map<String, Object>) topology.getFieldValue(f);
+                if(componentMap.containsKey(componentId)) {
+                    Object component = componentMap.get(componentId);
+                    if(component instanceof Bolt) {
+                        return ((Bolt) component).get_common();
+                    }
+                    if(component instanceof SpoutSpec) {
+                        return ((SpoutSpec) component).get_common();
+                    }
+                    if(component instanceof StateSpoutSpec) {
+                        return ((StateSpoutSpec) component).get_common();
+                    }
+                    throw new RuntimeException("Unreachable code! No get_common conversion for component " + component);
                 }
-                throw new RuntimeException("Unreachable code! No get_common conversion for component " + component);
             }
         }
         throw new IllegalArgumentException("Could not find component common for " + componentId);

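The guard added above skips non-map topology fields (such as the new `worker_hooks` list) when collecting component ids. The same idea in a self-contained sketch, with a plain map of field name to value standing in for Thrift's `metaDataMap`-driven field iteration:

```java
import java.util.*;

public class ComponentIdScan {
    // Collect component ids only from fields whose value is a Map
    // (spouts, bolts, state spouts), skipping list-valued fields
    // such as worker_hooks.
    static Set<String> componentIds(Map<String, Object> topologyFields) {
        Set<String> ids = new HashSet<>();
        for (Object value : topologyFields.values()) {
            if (value instanceof Map) {
                ids.addAll(((Map<String, ?>) value).keySet());
            }
        }
        return ids;
    }

    public static void main(String[] args) {
        Map<String, Object> fields = new LinkedHashMap<>();
        fields.put("spouts", Map.of("word-spout", new Object()));
        fields.put("bolts", Map.of("count-bolt", new Object()));
        fields.put("worker_hooks", new ArrayList<byte[]>()); // list-valued: must be skipped
        System.out.println(new TreeSet<>(componentIds(fields)));
    }
}
```

Without the map-type check, the old code would cast `worker_hooks` to a `Map` and fail at runtime, which is exactly what the Thrift-type guard in `ThriftTopologyUtils` prevents.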
http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/py/storm/Nimbus.py
----------------------------------------------------------------------
diff --git a/storm-core/src/py/storm/Nimbus.py b/storm-core/src/py/storm/Nimbus.py
index c1e1b02..c2bb9ac 100644
--- a/storm-core/src/py/storm/Nimbus.py
+++ b/storm-core/src/py/storm/Nimbus.py
@@ -3811,11 +3811,11 @@ class getComponentPendingProfileActions_result:
       if fid == 0:
         if ftype == TType.LIST:
           self.success = []
-          (_etype641, _size638) = iprot.readListBegin()
-          for _i642 in xrange(_size638):
-            _elem643 = ProfileRequest()
-            _elem643.read(iprot)
-            self.success.append(_elem643)
+          (_etype648, _size645) = iprot.readListBegin()
+          for _i649 in xrange(_size645):
+            _elem650 = ProfileRequest()
+            _elem650.read(iprot)
+            self.success.append(_elem650)
           iprot.readListEnd()
         else:
           iprot.skip(ftype)
@@ -3832,8 +3832,8 @@ class getComponentPendingProfileActions_result:
     if self.success is not None:
       oprot.writeFieldBegin('success', TType.LIST, 0)
       oprot.writeListBegin(TType.STRUCT, len(self.success))
-      for iter644 in self.success:
-        iter644.write(oprot)
+      for iter651 in self.success:
+        iter651.write(oprot)
       oprot.writeListEnd()
       oprot.writeFieldEnd()
     oprot.writeFieldStop()


[24/50] [abbrv] storm git commit: add STORM-1203 to changelog

Posted by sr...@apache.org.
add STORM-1203 to changelog


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/8eac4aad
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/8eac4aad
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/8eac4aad

Branch: refs/heads/STORM-1040
Commit: 8eac4aadc4e5de0658649d23d724b04c8e8c2548
Parents: c0c1462
Author: P. Taylor Goetz <pt...@gmail.com>
Authored: Tue Nov 24 14:54:10 2015 -0500
Committer: P. Taylor Goetz <pt...@gmail.com>
Committed: Tue Nov 24 14:54:10 2015 -0500

----------------------------------------------------------------------
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/8eac4aad/CHANGELOG.md
----------------------------------------------------------------------
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 11aa0f5..b45afb1 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,5 @@
 ## 0.11.0
+ * STORM-1203: worker metadata file creation doesn't use storm.log.dir config
  * STORM-1349: [Flux] Allow constructorArgs to take Maps as arguments
  * STORM-126: Add Lifecycle support API for worker nodes
  * STORM-1213: Remove sigar binaries from source tree


[39/50] [abbrv] storm git commit: Merge branch 'STORM-1352' of https://github.com/haohui/storm into STORM-1352

Posted by sr...@apache.org.
Merge branch 'STORM-1352' of https://github.com/haohui/storm into STORM-1352


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/6d3bee9a
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/6d3bee9a
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/6d3bee9a

Branch: refs/heads/STORM-1040
Commit: 6d3bee9a69db23ea68ca59259f7139d165dee391
Parents: 01bab86 c1c5273
Author: Sriharsha Chintalapani <ha...@hortonworks.com>
Authored: Wed Nov 25 14:35:16 2015 -0800
Committer: Sriharsha Chintalapani <ha...@hortonworks.com>
Committed: Wed Nov 25 14:35:16 2015 -0800

----------------------------------------------------------------------
 .../starter/trident/TridentKafkaWordCount.java  | 15 ++++----
 external/storm-kafka/README.md                  | 37 ++++++++++----------
 .../src/jvm/storm/kafka/bolt/KafkaBolt.java     | 13 ++-----
 .../storm/kafka/trident/TridentKafkaState.java  | 10 ++----
 .../kafka/trident/TridentKafkaStateFactory.java | 10 ++++--
 .../src/test/storm/kafka/TestUtils.java         |  8 ++---
 .../src/test/storm/kafka/TridentKafkaTest.java  | 13 +++----
 .../test/storm/kafka/TridentKafkaTopology.java  | 33 +++++++----------
 .../test/storm/kafka/bolt/KafkaBoltTest.java    |  6 ++--
 9 files changed, 58 insertions(+), 87 deletions(-)
----------------------------------------------------------------------



[14/50] [abbrv] storm git commit: add javadocs

Posted by sr...@apache.org.
add javadocs


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/dfc33ec3
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/dfc33ec3
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/dfc33ec3

Branch: refs/heads/STORM-1040
Commit: dfc33ec3dee8968b9d337323ca42e178de8dbd53
Parents: 07d9733
Author: Michael Schonfeld <mi...@schonfeld.org>
Authored: Wed Nov 18 10:44:17 2015 -0500
Committer: Michael Schonfeld <mi...@schonfeld.org>
Committed: Mon Nov 23 18:50:55 2015 -0500

----------------------------------------------------------------------
 .../src/clj/backtype/storm/daemon/worker.clj    |  1 -
 .../backtype/storm/hooks/BaseWorkerHook.java    | 20 ++++++++++++++++++--
 .../jvm/backtype/storm/hooks/IWorkerHook.java   | 19 ++++++++++++++++++-
 3 files changed, 36 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/dfc33ec3/storm-core/src/clj/backtype/storm/daemon/worker.clj
----------------------------------------------------------------------
diff --git a/storm-core/src/clj/backtype/storm/daemon/worker.clj b/storm-core/src/clj/backtype/storm/daemon/worker.clj
index f522b02..c0a99de 100644
--- a/storm-core/src/clj/backtype/storm/daemon/worker.clj
+++ b/storm-core/src/clj/backtype/storm/daemon/worker.clj
@@ -680,7 +680,6 @@
 
                     (close-resources worker)
 
-                    ;; TODO: here need to invoke the "shutdown" method of WorkerHook
                     (log-message "Trigger any worker shutdown hooks")
                     (run-worker-shutdown-hooks worker)
 

http://git-wip-us.apache.org/repos/asf/storm/blob/dfc33ec3/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java b/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java
index e04f19b..029f671 100644
--- a/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java
+++ b/storm-core/src/jvm/backtype/storm/hooks/BaseWorkerHook.java
@@ -23,15 +23,31 @@ import java.io.Serializable;
 import java.util.List;
 import java.util.Map;
 
+/**
+ * A BaseWorkerHook is a noop implementation of IWorkerHook. You
+ * may extend this class and override any or all of the methods you
+ * need for your workers.
+ */
 public class BaseWorkerHook implements IWorkerHook, Serializable {
     private static final long serialVersionUID = 2589466485198339529L;
 
+    /**
+     * This method is called when a worker is started
+     *
+     * @param stormConf The Storm configuration for this worker
+     * @param context This object can be used to get information about this worker's place within the topology
+     * @param taskIds A list of Integers denoting the task IDs assigned to this worker
+     */
     @Override
-    public void start(Map stormConf, WorkerTopologyContext context, List taskIds) {
-
+    public void start(Map stormConf, WorkerTopologyContext context, List<Integer> taskIds) {
+        // NOOP
     }
 
+    /**
+     * This method is called right before a worker shuts down
+     */
     @Override
     public void shutdown() {
+        // NOOP
     }
 }

http://git-wip-us.apache.org/repos/asf/storm/blob/dfc33ec3/storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java b/storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java
index 6c2bab2..6584883 100644
--- a/storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java
+++ b/storm-core/src/jvm/backtype/storm/hooks/IWorkerHook.java
@@ -23,7 +23,24 @@ import java.io.Serializable;
 import java.util.List;
 import java.util.Map;
 
+/**
+ * An IWorkerHook represents a topology component that can be executed
+ * when a worker starts, and when a worker shuts down. It can be useful
+ * when you want to execute operations before topology processing starts,
+ * or cleanup operations before your workers shut down.
+ */
 public interface IWorkerHook extends Serializable {
-    void start(Map stormConf, WorkerTopologyContext context, List taskIds);
+    /**
+     * This method is called when a worker is started
+     *
+     * @param stormConf The Storm configuration for this worker
+     * @param context This object can be used to get information about this worker's place within the topology
+     * @param taskIds A list of Integers denoting the task IDs assigned to this worker
+     */
+    void start(Map stormConf, WorkerTopologyContext context, List<Integer> taskIds);
+
+    /**
+     * This method is called right before a worker shuts down
+     */
     void shutdown();
 }
\ No newline at end of file

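A minimal sketch of the lifecycle contract documented above, with local stand-ins for `IWorkerHook` and `BaseWorkerHook` (the real classes live in `backtype.storm.hooks` and also take a `WorkerTopologyContext` in `start`, omitted here to keep the sketch self-contained and runnable):

```java
import java.io.Serializable;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class WorkerHookSketch {
    // Local stand-in for backtype.storm.hooks.IWorkerHook (context parameter omitted).
    interface WorkerHook extends Serializable {
        void start(Map<String, Object> stormConf, List<Integer> taskIds);
        void shutdown();
    }

    // Local stand-in for BaseWorkerHook: a noop base so subclasses
    // override only the lifecycle methods they care about.
    static class NoopWorkerHook implements WorkerHook {
        private static final long serialVersionUID = 1L;
        @Override public void start(Map<String, Object> stormConf, List<Integer> taskIds) { /* NOOP */ }
        @Override public void shutdown() { /* NOOP */ }
    }

    // A concrete hook that only cares about worker startup.
    static class AnnouncingHook extends NoopWorkerHook {
        private static final long serialVersionUID = 1L;
        static String lastMessage;
        @Override public void start(Map<String, Object> stormConf, List<Integer> taskIds) {
            lastMessage = "worker started with tasks " + taskIds;
        }
    }

    public static void main(String[] args) {
        WorkerHook hook = new AnnouncingHook();
        hook.start(Map.of(), Arrays.asList(1, 2, 3));
        hook.shutdown(); // inherited noop
        System.out.println(AnnouncingHook.lastMessage);
    }
}
```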

[36/50] [abbrv] storm git commit: Merge branch 'STORM-1220' of https://github.com/haohui/storm into STORM-1220

Posted by sr...@apache.org.
Merge branch 'STORM-1220' of https://github.com/haohui/storm into STORM-1220


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/352a2849
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/352a2849
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/352a2849

Branch: refs/heads/STORM-1040
Commit: 352a28498648747b25ea07fad313d63e627c3f44
Parents: 20a864d 35f1da7
Author: Sriharsha Chintalapani <ha...@hortonworks.com>
Authored: Wed Nov 25 11:22:34 2015 -0800
Committer: Sriharsha Chintalapani <ha...@hortonworks.com>
Committed: Wed Nov 25 11:22:34 2015 -0800

----------------------------------------------------------------------
 .../src/jvm/storm/kafka/KafkaUtils.java         |  8 ++--
 .../src/jvm/storm/kafka/KeyValueScheme.java     |  5 +--
 .../kafka/KeyValueSchemeAsMultiScheme.java      |  5 ++-
 .../jvm/storm/kafka/MessageMetadataScheme.java  |  6 ++-
 .../MessageMetadataSchemeAsMultiScheme.java     |  3 +-
 .../jvm/storm/kafka/StringKeyValueScheme.java   |  3 +-
 .../kafka/StringMessageAndMetadataScheme.java   |  7 ++--
 .../storm/kafka/StringMultiSchemeWithTopic.java | 21 +++-------
 .../src/jvm/storm/kafka/StringScheme.java       | 20 ++++++----
 .../storm/kafka/StringKeyValueSchemeTest.java   | 17 ++++++---
 .../src/test/storm/kafka/TestStringScheme.java  | 40 ++++++++++++++++++++
 .../jvm/backtype/storm/spout/MultiScheme.java   |  3 +-
 .../backtype/storm/spout/RawMultiScheme.java    |  3 +-
 .../src/jvm/backtype/storm/spout/RawScheme.java |  9 ++++-
 .../src/jvm/backtype/storm/spout/Scheme.java    |  3 +-
 .../storm/spout/SchemeAsMultiScheme.java        |  3 +-
 16 files changed, 106 insertions(+), 50 deletions(-)
----------------------------------------------------------------------



[30/50] [abbrv] storm git commit: Revert "enable markdown in javadoc" committed to master by accident.

Posted by sr...@apache.org.
Revert "enable markdown in javadoc" committed to master by accident.

This reverts commit e03b28ca6222f61471b4acb1a6086aab9b597b8a.


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/fc3b8773
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/fc3b8773
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/fc3b8773

Branch: refs/heads/STORM-1040
Commit: fc3b8773e62138abcc41f789072fc5902ab587e3
Parents: e2d267f
Author: P. Taylor Goetz <pt...@gmail.com>
Authored: Tue Nov 24 16:55:10 2015 -0500
Committer: P. Taylor Goetz <pt...@gmail.com>
Committed: Tue Nov 24 16:55:10 2015 -0500

----------------------------------------------------------------------
 pom.xml | 17 ++++-------------
 1 file changed, 4 insertions(+), 13 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/fc3b8773/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index eb8c5b7..626bebc 100644
--- a/pom.xml
+++ b/pom.xml
@@ -862,19 +862,6 @@
                     </excludes>
                 </configuration>
             </plugin>
-            <plugin>
-                <groupId>org.apache.maven.plugins</groupId>
-                <artifactId>maven-javadoc-plugin</artifactId>
-                <configuration>
-                    <doclet>ch.raffael.doclets.pegdown.PegdownDoclet</doclet>
-                    <docletArtifact>
-                        <groupId>ch.raffael.pegdown-doclet</groupId>
-                        <artifactId>pegdown-doclet</artifactId>
-                        <version>1.1</version>
-                    </docletArtifact>
-                    <useStandardDocletOptions>true</useStandardDocletOptions>
-                </configuration>
-            </plugin>
         </plugins>
     </build>
 
@@ -899,6 +886,10 @@
             </plugin>
             <plugin>
                 <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-javadoc-plugin</artifactId>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
                 <artifactId>maven-surefire-report-plugin</artifactId>
                 <configuration>
                     <reportsDirectories>


[11/50] [abbrv] storm git commit: add support for worker lifecycle hooks

Posted by sr...@apache.org.
add support for worker lifecycle hooks


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/b03ce6b2
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/b03ce6b2
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/b03ce6b2

Branch: refs/heads/STORM-1040
Commit: b03ce6b28e0f16d11b769f75a069de0328637794
Parents: 037cd00
Author: Michael Schonfeld <mi...@schonfeld.org>
Authored: Mon Nov 16 14:49:06 2015 -0500
Committer: Michael Schonfeld <mi...@schonfeld.org>
Committed: Mon Nov 23 18:50:54 2015 -0500

----------------------------------------------------------------------
 .../src/clj/backtype/storm/daemon/common.clj    |   30 +-
 .../src/clj/backtype/storm/daemon/worker.clj    |   27 +-
 storm-core/src/clj/backtype/storm/thrift.clj    |    3 +
 .../backtype/storm/generated/Assignment.java    |  244 ++--
 .../jvm/backtype/storm/generated/BoltStats.java |  340 ++---
 .../storm/generated/ClusterSummary.java         |  108 +-
 .../storm/generated/ClusterWorkerHeartbeat.java |   52 +-
 .../storm/generated/ComponentPageInfo.java      |  220 ++--
 .../backtype/storm/generated/Credentials.java   |   44 +-
 .../backtype/storm/generated/ExecutorStats.java |  160 +--
 .../storm/generated/LSApprovedWorkers.java      |   44 +-
 .../generated/LSSupervisorAssignments.java      |   48 +-
 .../backtype/storm/generated/LSTopoHistory.java |   64 +-
 .../storm/generated/LSTopoHistoryList.java      |   36 +-
 .../storm/generated/LSWorkerHeartbeat.java      |   36 +-
 .../storm/generated/LocalAssignment.java        |   36 +-
 .../storm/generated/LocalStateData.java         |   48 +-
 .../jvm/backtype/storm/generated/LogConfig.java |   48 +-
 .../jvm/backtype/storm/generated/Nimbus.java    |   36 +-
 .../jvm/backtype/storm/generated/NodeInfo.java  |   32 +-
 .../storm/generated/RebalanceOptions.java       |   44 +-
 .../backtype/storm/generated/SpoutStats.java    |  224 ++--
 .../jvm/backtype/storm/generated/StormBase.java |   92 +-
 .../backtype/storm/generated/StormTopology.java |  251 +++-
 .../storm/generated/SupervisorInfo.java         |  152 +--
 .../storm/generated/SupervisorSummary.java      |   44 +-
 .../storm/generated/TopologyHistoryInfo.java    |   32 +-
 .../backtype/storm/generated/TopologyInfo.java  |  164 +--
 .../storm/generated/TopologyPageInfo.java       |   96 +-
 .../backtype/storm/generated/TopologyStats.java |  220 ++--
 .../backtype/storm/hooks/BaseWorkerHook.java    |   34 +
 .../jvm/backtype/storm/hooks/IWorkerHook.java   |   29 +
 .../storm/topology/TopologyBuilder.java         |   43 +-
 .../storm/utils/ThriftTopologyUtils.java        |   36 +-
 storm-core/src/py/storm/Nimbus.py               |   14 +-
 storm-core/src/py/storm/ttypes.py               | 1239 +++++++++---------
 storm-core/src/storm.thrift                     |    1 +
 37 files changed, 2330 insertions(+), 2041 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/clj/backtype/storm/daemon/common.clj
----------------------------------------------------------------------
diff --git a/storm-core/src/clj/backtype/storm/daemon/common.clj b/storm-core/src/clj/backtype/storm/daemon/common.clj
index 35ae139..9b3aab3 100644
--- a/storm-core/src/clj/backtype/storm/daemon/common.clj
+++ b/storm-core/src/clj/backtype/storm/daemon/common.clj
@@ -16,7 +16,8 @@
 (ns backtype.storm.daemon.common
   (:use [backtype.storm log config util])
   (:import [backtype.storm.generated StormTopology
-            InvalidTopologyException GlobalStreamId])
+            InvalidTopologyException GlobalStreamId]
+           [backtype.storm.utils ThriftTopologyUtils])
   (:import [backtype.storm.utils Utils])
   (:import [backtype.storm.task WorkerTopologyContext])
   (:import [backtype.storm Constants])
@@ -113,22 +114,23 @@
               (str "Duplicate component ids: " offending))))
     (doseq [f thrift/STORM-TOPOLOGY-FIELDS
             :let [obj-map (.getFieldValue topology f)]]
-      (doseq [id (keys obj-map)]
-        (if (Utils/isSystemId id)
-          (throw (InvalidTopologyException.
-                  (str id " is not a valid component id")))))
-      (doseq [obj (vals obj-map)
-              id (-> obj .get_common .get_streams keys)]
-        (if (Utils/isSystemId id)
-          (throw (InvalidTopologyException.
-                  (str id " is not a valid stream id"))))))
-    ))
+      (if-not (ThriftTopologyUtils/isWorkerHook f)
+        (do
+          (doseq [id (keys obj-map)]
+            (if (Utils/isSystemId id)
+              (throw (InvalidTopologyException.
+                       (str id " is not a valid component id")))))
+          (doseq [obj (vals obj-map)
+                  id (-> obj .get_common .get_streams keys)]
+            (if (Utils/isSystemId id)
+              (throw (InvalidTopologyException.
+                       (str id " is not a valid stream id"))))))))))
 
 (defn all-components [^StormTopology topology]
   (apply merge {}
-         (for [f thrift/STORM-TOPOLOGY-FIELDS]
-           (.getFieldValue topology f)
-           )))
+    (for [f thrift/STORM-TOPOLOGY-FIELDS]
+      (if-not (ThriftTopologyUtils/isWorkerHook f)
+        (.getFieldValue topology f)))))
 
 (defn component-conf [component]
   (->> component

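The `validate-structure!` change above now skips the worker-hook field while still rejecting reserved system ids in the component maps. A standalone Java sketch of that check, assuming (as an illustration only) that `Utils/isSystemId` treats ids prefixed with `__` as reserved:

```java
import java.util.*;

public class TopologyValidation {
    // Assumption for this sketch: system ids are the "__"-prefixed ones.
    static boolean isSystemId(String id) {
        return id.startsWith("__");
    }

    // Mirrors validate-structure!: collect system ids wrongly used as
    // component ids, skipping non-component fields such as worker_hooks.
    static List<String> invalidComponentIds(Map<String, Object> topologyFields) {
        List<String> bad = new ArrayList<>();
        for (Map.Entry<String, Object> field : topologyFields.entrySet()) {
            if (!(field.getValue() instanceof Map)) {
                continue; // e.g. worker_hooks is a list, not a component map
            }
            for (String id : ((Map<String, ?>) field.getValue()).keySet()) {
                if (isSystemId(id)) {
                    bad.add(id);
                }
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        Map<String, Object> fields = new LinkedHashMap<>();
        fields.put("bolts", Map.of("__acker", new Object()));
        fields.put("worker_hooks", Collections.emptyList());
        System.out.println(invalidComponentIds(fields));
    }
}
```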
http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/clj/backtype/storm/daemon/worker.clj
----------------------------------------------------------------------
diff --git a/storm-core/src/clj/backtype/storm/daemon/worker.clj b/storm-core/src/clj/backtype/storm/daemon/worker.clj
index 579d76a..f522b02 100644
--- a/storm-core/src/clj/backtype/storm/daemon/worker.clj
+++ b/storm-core/src/clj/backtype/storm/daemon/worker.clj
@@ -22,7 +22,8 @@
   (:require [backtype.storm [disruptor :as disruptor] [cluster :as cluster]])
   (:require [clojure.set :as set])
   (:require [backtype.storm.messaging.loader :as msg-loader])
-  (:import [java.util.concurrent Executors])
+  (:import [java.util.concurrent Executors]
+           [backtype.storm.hooks IWorkerHook BaseWorkerHook])
   (:import [java.util ArrayList HashMap])
   (:import [backtype.storm.utils Utils TransferDrainer ThriftTopologyUtils WorkerBackpressureThread DisruptorQueue])
   (:import [backtype.storm.grouping LoadMapping])
@@ -548,6 +549,25 @@
       (reset! latest-log-config new-log-configs)
       (log-debug "New merged log config is " @latest-log-config))))
 
+(defn run-worker-start-hooks [worker]
+  (let [topology (:topology worker)
+        topo-conf (:conf worker)
+        worker-topology-context (worker-context worker)
+        task-ids (:task_ids worker)
+        hooks (.get_worker_hooks topology)]
+    (dofor [hook hooks]
+      (let [hook-bytes (Utils/toByteArray hook)
+            deser-hook (Utils/javaDeserialize hook-bytes BaseWorkerHook)]
+        (.start deser-hook topo-conf worker-topology-context task-ids)))))
+
+(defn run-worker-shutdown-hooks [worker]
+  (let [topology (:topology worker)
+        hooks (.get_worker_hooks topology)]
+    (dofor [hook hooks]
+      (let [hook-bytes (Utils/toByteArray hook)
+            deser-hook (Utils/javaDeserialize hook-bytes BaseWorkerHook)]
+        (.shutdown deser-hook)))))
+
 ;; TODO: should worker even take the storm-id as input? this should be
 ;; deducable from cluster state (by searching through assignments)
 ;; what about if there's inconsistency in assignments? -> but nimbus
@@ -604,6 +624,7 @@
 
         _ (refresh-storm-active worker nil)
 
+        _ (run-worker-start-hooks worker)
 
         _ (reset! executors (dofor [e (:executors worker)] (executor/mk-executor worker e initial-credentials)))
 
@@ -660,6 +681,8 @@
                     (close-resources worker)
 
                     ;; TODO: here need to invoke the "shutdown" method of WorkerHook
+                    (log-message "Trigger any worker shutdown hooks")
+                    (run-worker-shutdown-hooks worker)
 
                     (.remove-worker-heartbeat! (:storm-cluster-state worker) storm-id assignment-id port)
                     (log-message "Disconnecting from storm cluster state context")
@@ -738,4 +761,4 @@
     (setup-default-uncaught-exception-handler)
     (validate-distributed-mode! conf)
     (let [worker (mk-worker conf nil storm-id assignment-id (Integer/parseInt port-str) worker-id)]
-      (add-shutdown-hook-with-force-kill-in-1-sec #(.shutdown worker)))))
+      (add-shutdown-hook-with-force-kill-in-1-sec #(.shutdown worker)))))
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/clj/backtype/storm/thrift.clj
----------------------------------------------------------------------
diff --git a/storm-core/src/clj/backtype/storm/thrift.clj b/storm-core/src/clj/backtype/storm/thrift.clj
index 8f4c659..545ce49 100644
--- a/storm-core/src/clj/backtype/storm/thrift.clj
+++ b/storm-core/src/clj/backtype/storm/thrift.clj
@@ -282,3 +282,6 @@
   [StormTopology$_Fields/SPOUTS
    StormTopology$_Fields/STATE_SPOUTS])
 
+(def WORKER-HOOK-FIELD
+  [StormTopology$_Fields/WORKER_HOOKS])
+

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/Assignment.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/Assignment.java b/storm-core/src/jvm/backtype/storm/generated/Assignment.java
index 25874a4..cc9bb19 100644
--- a/storm-core/src/jvm/backtype/storm/generated/Assignment.java
+++ b/storm-core/src/jvm/backtype/storm/generated/Assignment.java
@@ -787,15 +787,15 @@ public class Assignment implements org.apache.thrift.TBase<Assignment, Assignmen
           case 2: // NODE_HOST
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map524 = iprot.readMapBegin();
-                struct.node_host = new HashMap<String,String>(2*_map524.size);
-                String _key525;
-                String _val526;
-                for (int _i527 = 0; _i527 < _map524.size; ++_i527)
+                org.apache.thrift.protocol.TMap _map532 = iprot.readMapBegin();
+                struct.node_host = new HashMap<String,String>(2*_map532.size);
+                String _key533;
+                String _val534;
+                for (int _i535 = 0; _i535 < _map532.size; ++_i535)
                 {
-                  _key525 = iprot.readString();
-                  _val526 = iprot.readString();
-                  struct.node_host.put(_key525, _val526);
+                  _key533 = iprot.readString();
+                  _val534 = iprot.readString();
+                  struct.node_host.put(_key533, _val534);
                 }
                 iprot.readMapEnd();
               }
@@ -807,26 +807,26 @@ public class Assignment implements org.apache.thrift.TBase<Assignment, Assignmen
           case 3: // EXECUTOR_NODE_PORT
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map528 = iprot.readMapBegin();
-                struct.executor_node_port = new HashMap<List<Long>,NodeInfo>(2*_map528.size);
-                List<Long> _key529;
-                NodeInfo _val530;
-                for (int _i531 = 0; _i531 < _map528.size; ++_i531)
+                org.apache.thrift.protocol.TMap _map536 = iprot.readMapBegin();
+                struct.executor_node_port = new HashMap<List<Long>,NodeInfo>(2*_map536.size);
+                List<Long> _key537;
+                NodeInfo _val538;
+                for (int _i539 = 0; _i539 < _map536.size; ++_i539)
                 {
                   {
-                    org.apache.thrift.protocol.TList _list532 = iprot.readListBegin();
-                    _key529 = new ArrayList<Long>(_list532.size);
-                    long _elem533;
-                    for (int _i534 = 0; _i534 < _list532.size; ++_i534)
+                    org.apache.thrift.protocol.TList _list540 = iprot.readListBegin();
+                    _key537 = new ArrayList<Long>(_list540.size);
+                    long _elem541;
+                    for (int _i542 = 0; _i542 < _list540.size; ++_i542)
                     {
-                      _elem533 = iprot.readI64();
-                      _key529.add(_elem533);
+                      _elem541 = iprot.readI64();
+                      _key537.add(_elem541);
                     }
                     iprot.readListEnd();
                   }
-                  _val530 = new NodeInfo();
-                  _val530.read(iprot);
-                  struct.executor_node_port.put(_key529, _val530);
+                  _val538 = new NodeInfo();
+                  _val538.read(iprot);
+                  struct.executor_node_port.put(_key537, _val538);
                 }
                 iprot.readMapEnd();
               }
@@ -838,25 +838,25 @@ public class Assignment implements org.apache.thrift.TBase<Assignment, Assignmen
           case 4: // EXECUTOR_START_TIME_SECS
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map535 = iprot.readMapBegin();
-                struct.executor_start_time_secs = new HashMap<List<Long>,Long>(2*_map535.size);
-                List<Long> _key536;
-                long _val537;
-                for (int _i538 = 0; _i538 < _map535.size; ++_i538)
+                org.apache.thrift.protocol.TMap _map543 = iprot.readMapBegin();
+                struct.executor_start_time_secs = new HashMap<List<Long>,Long>(2*_map543.size);
+                List<Long> _key544;
+                long _val545;
+                for (int _i546 = 0; _i546 < _map543.size; ++_i546)
                 {
                   {
-                    org.apache.thrift.protocol.TList _list539 = iprot.readListBegin();
-                    _key536 = new ArrayList<Long>(_list539.size);
-                    long _elem540;
-                    for (int _i541 = 0; _i541 < _list539.size; ++_i541)
+                    org.apache.thrift.protocol.TList _list547 = iprot.readListBegin();
+                    _key544 = new ArrayList<Long>(_list547.size);
+                    long _elem548;
+                    for (int _i549 = 0; _i549 < _list547.size; ++_i549)
                     {
-                      _elem540 = iprot.readI64();
-                      _key536.add(_elem540);
+                      _elem548 = iprot.readI64();
+                      _key544.add(_elem548);
                     }
                     iprot.readListEnd();
                   }
-                  _val537 = iprot.readI64();
-                  struct.executor_start_time_secs.put(_key536, _val537);
+                  _val545 = iprot.readI64();
+                  struct.executor_start_time_secs.put(_key544, _val545);
                 }
                 iprot.readMapEnd();
               }
@@ -868,17 +868,17 @@ public class Assignment implements org.apache.thrift.TBase<Assignment, Assignmen
           case 5: // WORKER_RESOURCES
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map542 = iprot.readMapBegin();
-                struct.worker_resources = new HashMap<NodeInfo,WorkerResources>(2*_map542.size);
-                NodeInfo _key543;
-                WorkerResources _val544;
-                for (int _i545 = 0; _i545 < _map542.size; ++_i545)
+                org.apache.thrift.protocol.TMap _map550 = iprot.readMapBegin();
+                struct.worker_resources = new HashMap<NodeInfo,WorkerResources>(2*_map550.size);
+                NodeInfo _key551;
+                WorkerResources _val552;
+                for (int _i553 = 0; _i553 < _map550.size; ++_i553)
                 {
-                  _key543 = new NodeInfo();
-                  _key543.read(iprot);
-                  _val544 = new WorkerResources();
-                  _val544.read(iprot);
-                  struct.worker_resources.put(_key543, _val544);
+                  _key551 = new NodeInfo();
+                  _key551.read(iprot);
+                  _val552 = new WorkerResources();
+                  _val552.read(iprot);
+                  struct.worker_resources.put(_key551, _val552);
                 }
                 iprot.readMapEnd();
               }
@@ -910,10 +910,10 @@ public class Assignment implements org.apache.thrift.TBase<Assignment, Assignmen
           oprot.writeFieldBegin(NODE_HOST_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, struct.node_host.size()));
-            for (Map.Entry<String, String> _iter546 : struct.node_host.entrySet())
+            for (Map.Entry<String, String> _iter554 : struct.node_host.entrySet())
             {
-              oprot.writeString(_iter546.getKey());
-              oprot.writeString(_iter546.getValue());
+              oprot.writeString(_iter554.getKey());
+              oprot.writeString(_iter554.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -925,17 +925,17 @@ public class Assignment implements org.apache.thrift.TBase<Assignment, Assignmen
           oprot.writeFieldBegin(EXECUTOR_NODE_PORT_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.LIST, org.apache.thrift.protocol.TType.STRUCT, struct.executor_node_port.size()));
-            for (Map.Entry<List<Long>, NodeInfo> _iter547 : struct.executor_node_port.entrySet())
+            for (Map.Entry<List<Long>, NodeInfo> _iter555 : struct.executor_node_port.entrySet())
             {
               {
-                oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.I64, _iter547.getKey().size()));
-                for (long _iter548 : _iter547.getKey())
+                oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.I64, _iter555.getKey().size()));
+                for (long _iter556 : _iter555.getKey())
                 {
-                  oprot.writeI64(_iter548);
+                  oprot.writeI64(_iter556);
                 }
                 oprot.writeListEnd();
               }
-              _iter547.getValue().write(oprot);
+              _iter555.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -947,17 +947,17 @@ public class Assignment implements org.apache.thrift.TBase<Assignment, Assignmen
           oprot.writeFieldBegin(EXECUTOR_START_TIME_SECS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.LIST, org.apache.thrift.protocol.TType.I64, struct.executor_start_time_secs.size()));
-            for (Map.Entry<List<Long>, Long> _iter549 : struct.executor_start_time_secs.entrySet())
+            for (Map.Entry<List<Long>, Long> _iter557 : struct.executor_start_time_secs.entrySet())
             {
               {
-                oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.I64, _iter549.getKey().size()));
-                for (long _iter550 : _iter549.getKey())
+                oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.I64, _iter557.getKey().size()));
+                for (long _iter558 : _iter557.getKey())
                 {
-                  oprot.writeI64(_iter550);
+                  oprot.writeI64(_iter558);
                 }
                 oprot.writeListEnd();
               }
-              oprot.writeI64(_iter549.getValue());
+              oprot.writeI64(_iter557.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -969,10 +969,10 @@ public class Assignment implements org.apache.thrift.TBase<Assignment, Assignmen
           oprot.writeFieldBegin(WORKER_RESOURCES_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.STRUCT, struct.worker_resources.size()));
-            for (Map.Entry<NodeInfo, WorkerResources> _iter551 : struct.worker_resources.entrySet())
+            for (Map.Entry<NodeInfo, WorkerResources> _iter559 : struct.worker_resources.entrySet())
             {
-              _iter551.getKey().write(oprot);
-              _iter551.getValue().write(oprot);
+              _iter559.getKey().write(oprot);
+              _iter559.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -1014,52 +1014,52 @@ public class Assignment implements org.apache.thrift.TBase<Assignment, Assignmen
       if (struct.is_set_node_host()) {
         {
           oprot.writeI32(struct.node_host.size());
-          for (Map.Entry<String, String> _iter552 : struct.node_host.entrySet())
+          for (Map.Entry<String, String> _iter560 : struct.node_host.entrySet())
           {
-            oprot.writeString(_iter552.getKey());
-            oprot.writeString(_iter552.getValue());
+            oprot.writeString(_iter560.getKey());
+            oprot.writeString(_iter560.getValue());
           }
         }
       }
       if (struct.is_set_executor_node_port()) {
         {
           oprot.writeI32(struct.executor_node_port.size());
-          for (Map.Entry<List<Long>, NodeInfo> _iter553 : struct.executor_node_port.entrySet())
+          for (Map.Entry<List<Long>, NodeInfo> _iter561 : struct.executor_node_port.entrySet())
           {
             {
-              oprot.writeI32(_iter553.getKey().size());
-              for (long _iter554 : _iter553.getKey())
+              oprot.writeI32(_iter561.getKey().size());
+              for (long _iter562 : _iter561.getKey())
               {
-                oprot.writeI64(_iter554);
+                oprot.writeI64(_iter562);
               }
             }
-            _iter553.getValue().write(oprot);
+            _iter561.getValue().write(oprot);
           }
         }
       }
       if (struct.is_set_executor_start_time_secs()) {
         {
           oprot.writeI32(struct.executor_start_time_secs.size());
-          for (Map.Entry<List<Long>, Long> _iter555 : struct.executor_start_time_secs.entrySet())
+          for (Map.Entry<List<Long>, Long> _iter563 : struct.executor_start_time_secs.entrySet())
           {
             {
-              oprot.writeI32(_iter555.getKey().size());
-              for (long _iter556 : _iter555.getKey())
+              oprot.writeI32(_iter563.getKey().size());
+              for (long _iter564 : _iter563.getKey())
               {
-                oprot.writeI64(_iter556);
+                oprot.writeI64(_iter564);
               }
             }
-            oprot.writeI64(_iter555.getValue());
+            oprot.writeI64(_iter563.getValue());
           }
         }
       }
       if (struct.is_set_worker_resources()) {
         {
           oprot.writeI32(struct.worker_resources.size());
-          for (Map.Entry<NodeInfo, WorkerResources> _iter557 : struct.worker_resources.entrySet())
+          for (Map.Entry<NodeInfo, WorkerResources> _iter565 : struct.worker_resources.entrySet())
           {
-            _iter557.getKey().write(oprot);
-            _iter557.getValue().write(oprot);
+            _iter565.getKey().write(oprot);
+            _iter565.getValue().write(oprot);
           }
         }
       }
@@ -1073,81 +1073,81 @@ public class Assignment implements org.apache.thrift.TBase<Assignment, Assignmen
       BitSet incoming = iprot.readBitSet(4);
       if (incoming.get(0)) {
         {
-          org.apache.thrift.protocol.TMap _map558 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-          struct.node_host = new HashMap<String,String>(2*_map558.size);
-          String _key559;
-          String _val560;
-          for (int _i561 = 0; _i561 < _map558.size; ++_i561)
+          org.apache.thrift.protocol.TMap _map566 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+          struct.node_host = new HashMap<String,String>(2*_map566.size);
+          String _key567;
+          String _val568;
+          for (int _i569 = 0; _i569 < _map566.size; ++_i569)
           {
-            _key559 = iprot.readString();
-            _val560 = iprot.readString();
-            struct.node_host.put(_key559, _val560);
+            _key567 = iprot.readString();
+            _val568 = iprot.readString();
+            struct.node_host.put(_key567, _val568);
           }
         }
         struct.set_node_host_isSet(true);
       }
       if (incoming.get(1)) {
         {
-          org.apache.thrift.protocol.TMap _map562 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.LIST, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.executor_node_port = new HashMap<List<Long>,NodeInfo>(2*_map562.size);
-          List<Long> _key563;
-          NodeInfo _val564;
-          for (int _i565 = 0; _i565 < _map562.size; ++_i565)
+          org.apache.thrift.protocol.TMap _map570 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.LIST, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.executor_node_port = new HashMap<List<Long>,NodeInfo>(2*_map570.size);
+          List<Long> _key571;
+          NodeInfo _val572;
+          for (int _i573 = 0; _i573 < _map570.size; ++_i573)
           {
             {
-              org.apache.thrift.protocol.TList _list566 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.I64, iprot.readI32());
-              _key563 = new ArrayList<Long>(_list566.size);
-              long _elem567;
-              for (int _i568 = 0; _i568 < _list566.size; ++_i568)
+              org.apache.thrift.protocol.TList _list574 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.I64, iprot.readI32());
+              _key571 = new ArrayList<Long>(_list574.size);
+              long _elem575;
+              for (int _i576 = 0; _i576 < _list574.size; ++_i576)
               {
-                _elem567 = iprot.readI64();
-                _key563.add(_elem567);
+                _elem575 = iprot.readI64();
+                _key571.add(_elem575);
               }
             }
-            _val564 = new NodeInfo();
-            _val564.read(iprot);
-            struct.executor_node_port.put(_key563, _val564);
+            _val572 = new NodeInfo();
+            _val572.read(iprot);
+            struct.executor_node_port.put(_key571, _val572);
           }
         }
         struct.set_executor_node_port_isSet(true);
       }
       if (incoming.get(2)) {
         {
-          org.apache.thrift.protocol.TMap _map569 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.LIST, org.apache.thrift.protocol.TType.I64, iprot.readI32());
-          struct.executor_start_time_secs = new HashMap<List<Long>,Long>(2*_map569.size);
-          List<Long> _key570;
-          long _val571;
-          for (int _i572 = 0; _i572 < _map569.size; ++_i572)
+          org.apache.thrift.protocol.TMap _map577 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.LIST, org.apache.thrift.protocol.TType.I64, iprot.readI32());
+          struct.executor_start_time_secs = new HashMap<List<Long>,Long>(2*_map577.size);
+          List<Long> _key578;
+          long _val579;
+          for (int _i580 = 0; _i580 < _map577.size; ++_i580)
           {
             {
-              org.apache.thrift.protocol.TList _list573 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.I64, iprot.readI32());
-              _key570 = new ArrayList<Long>(_list573.size);
-              long _elem574;
-              for (int _i575 = 0; _i575 < _list573.size; ++_i575)
+              org.apache.thrift.protocol.TList _list581 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.I64, iprot.readI32());
+              _key578 = new ArrayList<Long>(_list581.size);
+              long _elem582;
+              for (int _i583 = 0; _i583 < _list581.size; ++_i583)
               {
-                _elem574 = iprot.readI64();
-                _key570.add(_elem574);
+                _elem582 = iprot.readI64();
+                _key578.add(_elem582);
               }
             }
-            _val571 = iprot.readI64();
-            struct.executor_start_time_secs.put(_key570, _val571);
+            _val579 = iprot.readI64();
+            struct.executor_start_time_secs.put(_key578, _val579);
           }
         }
         struct.set_executor_start_time_secs_isSet(true);
       }
       if (incoming.get(3)) {
         {
-          org.apache.thrift.protocol.TMap _map576 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.worker_resources = new HashMap<NodeInfo,WorkerResources>(2*_map576.size);
-          NodeInfo _key577;
-          WorkerResources _val578;
-          for (int _i579 = 0; _i579 < _map576.size; ++_i579)
+          org.apache.thrift.protocol.TMap _map584 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.worker_resources = new HashMap<NodeInfo,WorkerResources>(2*_map584.size);
+          NodeInfo _key585;
+          WorkerResources _val586;
+          for (int _i587 = 0; _i587 < _map584.size; ++_i587)
           {
-            _key577 = new NodeInfo();
-            _key577.read(iprot);
-            _val578 = new WorkerResources();
-            _val578.read(iprot);
-            struct.worker_resources.put(_key577, _val578);
+            _key585 = new NodeInfo();
+            _key585.read(iprot);
+            _val586 = new WorkerResources();
+            _val586.read(iprot);
+            struct.worker_resources.put(_key585, _val586);
           }
         }
         struct.set_worker_resources_isSet(true);

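(Editorial note on the hunks above and below: these changes are purely mechanical. Regenerating the Thrift bindings renumbers the compiler's temporaries — `_mapNNN`, `_keyNNN`, `_valNNN`, `_iNNN`, `_iterNNN` — by a fixed offset without altering the wire format or logic. Every generated map reader follows the same shape: read an entry count, then loop reading one key and one value per entry into a `HashMap` pre-sized to `2 * size`. The sketch below shows that pattern in a self-contained form; it uses plain `java.io` streams as a stand-in for the real `TProtocol` calls (`readMapBegin`, `readString`, `readI64`), so the method names and framing here are illustrative assumptions, not the actual Thrift protocol encoding.)

```java
import java.io.*;
import java.util.*;

public class MapReadSketch {
    // Write a <String,Long> map in the same shape the generated readers expect:
    // a 32-bit entry count, then (key, value) pairs.
    static byte[] write(Map<String, Long> m) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(m.size());                    // stand-in for oprot.writeMapBegin(...)
        for (Map.Entry<String, Long> e : m.entrySet()) {
            out.writeUTF(e.getKey());              // stand-in for oprot.writeString(...)
            out.writeLong(e.getValue());           // stand-in for oprot.writeI64(...)
        }
        return bos.toByteArray();
    }

    // Read it back with the same loop shape as the generated code:
    // size first, pre-sized HashMap, then one (key, value) pair per iteration.
    static Map<String, Long> read(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        int size = in.readInt();                   // stand-in for iprot.readMapBegin().size
        Map<String, Long> m = new HashMap<>(2 * size);
        for (int i = 0; i < size; ++i) {           // the generated _iNNN loop counter
            String key = in.readUTF();             // the generated _keyNNN temporary
            long val = in.readLong();              // the generated _valNNN temporary
            m.put(key, val);
        }
        return m;
    }

    public static void main(String[] args) throws IOException {
        Map<String, Long> orig = new HashMap<>();
        orig.put("node1", 6700L);
        orig.put("node2", 6701L);
        Map<String, Long> round = read(write(orig));
        if (!round.equals(orig)) throw new AssertionError("round-trip failed");
        System.out.println("ok");
    }
}
```

Because the temporaries are method-local and freshly numbered on each `thrift` code-generation run, a renumbering-only diff like this one carries no behavioral change; reviewing it reduces to checking that only the `NNN` suffixes differ between the `-` and `+` lines.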
http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/BoltStats.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/BoltStats.java b/storm-core/src/jvm/backtype/storm/generated/BoltStats.java
index 6cef48a..cbadd32 100644
--- a/storm-core/src/jvm/backtype/storm/generated/BoltStats.java
+++ b/storm-core/src/jvm/backtype/storm/generated/BoltStats.java
@@ -881,41 +881,8 @@ public class BoltStats implements org.apache.thrift.TBase<BoltStats, BoltStats._
           case 1: // ACKED
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map108 = iprot.readMapBegin();
-                struct.acked = new HashMap<String,Map<GlobalStreamId,Long>>(2*_map108.size);
-                String _key109;
-                Map<GlobalStreamId,Long> _val110;
-                for (int _i111 = 0; _i111 < _map108.size; ++_i111)
-                {
-                  _key109 = iprot.readString();
-                  {
-                    org.apache.thrift.protocol.TMap _map112 = iprot.readMapBegin();
-                    _val110 = new HashMap<GlobalStreamId,Long>(2*_map112.size);
-                    GlobalStreamId _key113;
-                    long _val114;
-                    for (int _i115 = 0; _i115 < _map112.size; ++_i115)
-                    {
-                      _key113 = new GlobalStreamId();
-                      _key113.read(iprot);
-                      _val114 = iprot.readI64();
-                      _val110.put(_key113, _val114);
-                    }
-                    iprot.readMapEnd();
-                  }
-                  struct.acked.put(_key109, _val110);
-                }
-                iprot.readMapEnd();
-              }
-              struct.set_acked_isSet(true);
-            } else { 
-              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
-            }
-            break;
-          case 2: // FAILED
-            if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
-              {
                 org.apache.thrift.protocol.TMap _map116 = iprot.readMapBegin();
-                struct.failed = new HashMap<String,Map<GlobalStreamId,Long>>(2*_map116.size);
+                struct.acked = new HashMap<String,Map<GlobalStreamId,Long>>(2*_map116.size);
                 String _key117;
                 Map<GlobalStreamId,Long> _val118;
                 for (int _i119 = 0; _i119 < _map116.size; ++_i119)
@@ -935,106 +902,139 @@ public class BoltStats implements org.apache.thrift.TBase<BoltStats, BoltStats._
                     }
                     iprot.readMapEnd();
                   }
-                  struct.failed.put(_key117, _val118);
+                  struct.acked.put(_key117, _val118);
                 }
                 iprot.readMapEnd();
               }
-              struct.set_failed_isSet(true);
+              struct.set_acked_isSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
             }
             break;
-          case 3: // PROCESS_MS_AVG
+          case 2: // FAILED
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
                 org.apache.thrift.protocol.TMap _map124 = iprot.readMapBegin();
-                struct.process_ms_avg = new HashMap<String,Map<GlobalStreamId,Double>>(2*_map124.size);
+                struct.failed = new HashMap<String,Map<GlobalStreamId,Long>>(2*_map124.size);
                 String _key125;
-                Map<GlobalStreamId,Double> _val126;
+                Map<GlobalStreamId,Long> _val126;
                 for (int _i127 = 0; _i127 < _map124.size; ++_i127)
                 {
                   _key125 = iprot.readString();
                   {
                     org.apache.thrift.protocol.TMap _map128 = iprot.readMapBegin();
-                    _val126 = new HashMap<GlobalStreamId,Double>(2*_map128.size);
+                    _val126 = new HashMap<GlobalStreamId,Long>(2*_map128.size);
                     GlobalStreamId _key129;
-                    double _val130;
+                    long _val130;
                     for (int _i131 = 0; _i131 < _map128.size; ++_i131)
                     {
                       _key129 = new GlobalStreamId();
                       _key129.read(iprot);
-                      _val130 = iprot.readDouble();
+                      _val130 = iprot.readI64();
                       _val126.put(_key129, _val130);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.process_ms_avg.put(_key125, _val126);
+                  struct.failed.put(_key125, _val126);
                 }
                 iprot.readMapEnd();
               }
-              struct.set_process_ms_avg_isSet(true);
+              struct.set_failed_isSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
             }
             break;
-          case 4: // EXECUTED
+          case 3: // PROCESS_MS_AVG
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
                 org.apache.thrift.protocol.TMap _map132 = iprot.readMapBegin();
-                struct.executed = new HashMap<String,Map<GlobalStreamId,Long>>(2*_map132.size);
+                struct.process_ms_avg = new HashMap<String,Map<GlobalStreamId,Double>>(2*_map132.size);
                 String _key133;
-                Map<GlobalStreamId,Long> _val134;
+                Map<GlobalStreamId,Double> _val134;
                 for (int _i135 = 0; _i135 < _map132.size; ++_i135)
                 {
                   _key133 = iprot.readString();
                   {
                     org.apache.thrift.protocol.TMap _map136 = iprot.readMapBegin();
-                    _val134 = new HashMap<GlobalStreamId,Long>(2*_map136.size);
+                    _val134 = new HashMap<GlobalStreamId,Double>(2*_map136.size);
                     GlobalStreamId _key137;
-                    long _val138;
+                    double _val138;
                     for (int _i139 = 0; _i139 < _map136.size; ++_i139)
                     {
                       _key137 = new GlobalStreamId();
                       _key137.read(iprot);
-                      _val138 = iprot.readI64();
+                      _val138 = iprot.readDouble();
                       _val134.put(_key137, _val138);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.executed.put(_key133, _val134);
+                  struct.process_ms_avg.put(_key133, _val134);
                 }
                 iprot.readMapEnd();
               }
-              struct.set_executed_isSet(true);
+              struct.set_process_ms_avg_isSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
             }
             break;
-          case 5: // EXECUTE_MS_AVG
+          case 4: // EXECUTED
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
                 org.apache.thrift.protocol.TMap _map140 = iprot.readMapBegin();
-                struct.execute_ms_avg = new HashMap<String,Map<GlobalStreamId,Double>>(2*_map140.size);
+                struct.executed = new HashMap<String,Map<GlobalStreamId,Long>>(2*_map140.size);
                 String _key141;
-                Map<GlobalStreamId,Double> _val142;
+                Map<GlobalStreamId,Long> _val142;
                 for (int _i143 = 0; _i143 < _map140.size; ++_i143)
                 {
                   _key141 = iprot.readString();
                   {
                     org.apache.thrift.protocol.TMap _map144 = iprot.readMapBegin();
-                    _val142 = new HashMap<GlobalStreamId,Double>(2*_map144.size);
+                    _val142 = new HashMap<GlobalStreamId,Long>(2*_map144.size);
                     GlobalStreamId _key145;
-                    double _val146;
+                    long _val146;
                     for (int _i147 = 0; _i147 < _map144.size; ++_i147)
                     {
                       _key145 = new GlobalStreamId();
                       _key145.read(iprot);
-                      _val146 = iprot.readDouble();
+                      _val146 = iprot.readI64();
                       _val142.put(_key145, _val146);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.execute_ms_avg.put(_key141, _val142);
+                  struct.executed.put(_key141, _val142);
+                }
+                iprot.readMapEnd();
+              }
+              struct.set_executed_isSet(true);
+            } else { 
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
+          case 5: // EXECUTE_MS_AVG
+            if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
+              {
+                org.apache.thrift.protocol.TMap _map148 = iprot.readMapBegin();
+                struct.execute_ms_avg = new HashMap<String,Map<GlobalStreamId,Double>>(2*_map148.size);
+                String _key149;
+                Map<GlobalStreamId,Double> _val150;
+                for (int _i151 = 0; _i151 < _map148.size; ++_i151)
+                {
+                  _key149 = iprot.readString();
+                  {
+                    org.apache.thrift.protocol.TMap _map152 = iprot.readMapBegin();
+                    _val150 = new HashMap<GlobalStreamId,Double>(2*_map152.size);
+                    GlobalStreamId _key153;
+                    double _val154;
+                    for (int _i155 = 0; _i155 < _map152.size; ++_i155)
+                    {
+                      _key153 = new GlobalStreamId();
+                      _key153.read(iprot);
+                      _val154 = iprot.readDouble();
+                      _val150.put(_key153, _val154);
+                    }
+                    iprot.readMapEnd();
+                  }
+                  struct.execute_ms_avg.put(_key149, _val150);
                 }
                 iprot.readMapEnd();
               }
@@ -1060,15 +1060,15 @@ public class BoltStats implements org.apache.thrift.TBase<BoltStats, BoltStats._
         oprot.writeFieldBegin(ACKED_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, struct.acked.size()));
-          for (Map.Entry<String, Map<GlobalStreamId,Long>> _iter148 : struct.acked.entrySet())
+          for (Map.Entry<String, Map<GlobalStreamId,Long>> _iter156 : struct.acked.entrySet())
           {
-            oprot.writeString(_iter148.getKey());
+            oprot.writeString(_iter156.getKey());
             {
-              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.I64, _iter148.getValue().size()));
-              for (Map.Entry<GlobalStreamId, Long> _iter149 : _iter148.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.I64, _iter156.getValue().size()));
+              for (Map.Entry<GlobalStreamId, Long> _iter157 : _iter156.getValue().entrySet())
               {
-                _iter149.getKey().write(oprot);
-                oprot.writeI64(_iter149.getValue());
+                _iter157.getKey().write(oprot);
+                oprot.writeI64(_iter157.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -1081,15 +1081,15 @@ public class BoltStats implements org.apache.thrift.TBase<BoltStats, BoltStats._
         oprot.writeFieldBegin(FAILED_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, struct.failed.size()));
-          for (Map.Entry<String, Map<GlobalStreamId,Long>> _iter150 : struct.failed.entrySet())
+          for (Map.Entry<String, Map<GlobalStreamId,Long>> _iter158 : struct.failed.entrySet())
           {
-            oprot.writeString(_iter150.getKey());
+            oprot.writeString(_iter158.getKey());
             {
-              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.I64, _iter150.getValue().size()));
-              for (Map.Entry<GlobalStreamId, Long> _iter151 : _iter150.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.I64, _iter158.getValue().size()));
+              for (Map.Entry<GlobalStreamId, Long> _iter159 : _iter158.getValue().entrySet())
               {
-                _iter151.getKey().write(oprot);
-                oprot.writeI64(_iter151.getValue());
+                _iter159.getKey().write(oprot);
+                oprot.writeI64(_iter159.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -1102,15 +1102,15 @@ public class BoltStats implements org.apache.thrift.TBase<BoltStats, BoltStats._
         oprot.writeFieldBegin(PROCESS_MS_AVG_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, struct.process_ms_avg.size()));
-          for (Map.Entry<String, Map<GlobalStreamId,Double>> _iter152 : struct.process_ms_avg.entrySet())
+          for (Map.Entry<String, Map<GlobalStreamId,Double>> _iter160 : struct.process_ms_avg.entrySet())
           {
-            oprot.writeString(_iter152.getKey());
+            oprot.writeString(_iter160.getKey());
             {
-              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.DOUBLE, _iter152.getValue().size()));
-              for (Map.Entry<GlobalStreamId, Double> _iter153 : _iter152.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.DOUBLE, _iter160.getValue().size()));
+              for (Map.Entry<GlobalStreamId, Double> _iter161 : _iter160.getValue().entrySet())
               {
-                _iter153.getKey().write(oprot);
-                oprot.writeDouble(_iter153.getValue());
+                _iter161.getKey().write(oprot);
+                oprot.writeDouble(_iter161.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -1123,15 +1123,15 @@ public class BoltStats implements org.apache.thrift.TBase<BoltStats, BoltStats._
         oprot.writeFieldBegin(EXECUTED_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, struct.executed.size()));
-          for (Map.Entry<String, Map<GlobalStreamId,Long>> _iter154 : struct.executed.entrySet())
+          for (Map.Entry<String, Map<GlobalStreamId,Long>> _iter162 : struct.executed.entrySet())
           {
-            oprot.writeString(_iter154.getKey());
+            oprot.writeString(_iter162.getKey());
             {
-              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.I64, _iter154.getValue().size()));
-              for (Map.Entry<GlobalStreamId, Long> _iter155 : _iter154.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.I64, _iter162.getValue().size()));
+              for (Map.Entry<GlobalStreamId, Long> _iter163 : _iter162.getValue().entrySet())
               {
-                _iter155.getKey().write(oprot);
-                oprot.writeI64(_iter155.getValue());
+                _iter163.getKey().write(oprot);
+                oprot.writeI64(_iter163.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -1144,15 +1144,15 @@ public class BoltStats implements org.apache.thrift.TBase<BoltStats, BoltStats._
         oprot.writeFieldBegin(EXECUTE_MS_AVG_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, struct.execute_ms_avg.size()));
-          for (Map.Entry<String, Map<GlobalStreamId,Double>> _iter156 : struct.execute_ms_avg.entrySet())
+          for (Map.Entry<String, Map<GlobalStreamId,Double>> _iter164 : struct.execute_ms_avg.entrySet())
           {
-            oprot.writeString(_iter156.getKey());
+            oprot.writeString(_iter164.getKey());
             {
-              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.DOUBLE, _iter156.getValue().size()));
-              for (Map.Entry<GlobalStreamId, Double> _iter157 : _iter156.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.DOUBLE, _iter164.getValue().size()));
+              for (Map.Entry<GlobalStreamId, Double> _iter165 : _iter164.getValue().entrySet())
               {
-                _iter157.getKey().write(oprot);
-                oprot.writeDouble(_iter157.getValue());
+                _iter165.getKey().write(oprot);
+                oprot.writeDouble(_iter165.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -1180,75 +1180,75 @@ public class BoltStats implements org.apache.thrift.TBase<BoltStats, BoltStats._
       TTupleProtocol oprot = (TTupleProtocol) prot;
       {
         oprot.writeI32(struct.acked.size());
-        for (Map.Entry<String, Map<GlobalStreamId,Long>> _iter158 : struct.acked.entrySet())
+        for (Map.Entry<String, Map<GlobalStreamId,Long>> _iter166 : struct.acked.entrySet())
         {
-          oprot.writeString(_iter158.getKey());
+          oprot.writeString(_iter166.getKey());
           {
-            oprot.writeI32(_iter158.getValue().size());
-            for (Map.Entry<GlobalStreamId, Long> _iter159 : _iter158.getValue().entrySet())
+            oprot.writeI32(_iter166.getValue().size());
+            for (Map.Entry<GlobalStreamId, Long> _iter167 : _iter166.getValue().entrySet())
             {
-              _iter159.getKey().write(oprot);
-              oprot.writeI64(_iter159.getValue());
+              _iter167.getKey().write(oprot);
+              oprot.writeI64(_iter167.getValue());
             }
           }
         }
       }
       {
         oprot.writeI32(struct.failed.size());
-        for (Map.Entry<String, Map<GlobalStreamId,Long>> _iter160 : struct.failed.entrySet())
+        for (Map.Entry<String, Map<GlobalStreamId,Long>> _iter168 : struct.failed.entrySet())
         {
-          oprot.writeString(_iter160.getKey());
+          oprot.writeString(_iter168.getKey());
           {
-            oprot.writeI32(_iter160.getValue().size());
-            for (Map.Entry<GlobalStreamId, Long> _iter161 : _iter160.getValue().entrySet())
+            oprot.writeI32(_iter168.getValue().size());
+            for (Map.Entry<GlobalStreamId, Long> _iter169 : _iter168.getValue().entrySet())
             {
-              _iter161.getKey().write(oprot);
-              oprot.writeI64(_iter161.getValue());
+              _iter169.getKey().write(oprot);
+              oprot.writeI64(_iter169.getValue());
             }
           }
         }
       }
       {
         oprot.writeI32(struct.process_ms_avg.size());
-        for (Map.Entry<String, Map<GlobalStreamId,Double>> _iter162 : struct.process_ms_avg.entrySet())
+        for (Map.Entry<String, Map<GlobalStreamId,Double>> _iter170 : struct.process_ms_avg.entrySet())
         {
-          oprot.writeString(_iter162.getKey());
+          oprot.writeString(_iter170.getKey());
           {
-            oprot.writeI32(_iter162.getValue().size());
-            for (Map.Entry<GlobalStreamId, Double> _iter163 : _iter162.getValue().entrySet())
+            oprot.writeI32(_iter170.getValue().size());
+            for (Map.Entry<GlobalStreamId, Double> _iter171 : _iter170.getValue().entrySet())
             {
-              _iter163.getKey().write(oprot);
-              oprot.writeDouble(_iter163.getValue());
+              _iter171.getKey().write(oprot);
+              oprot.writeDouble(_iter171.getValue());
             }
           }
         }
       }
       {
         oprot.writeI32(struct.executed.size());
-        for (Map.Entry<String, Map<GlobalStreamId,Long>> _iter164 : struct.executed.entrySet())
+        for (Map.Entry<String, Map<GlobalStreamId,Long>> _iter172 : struct.executed.entrySet())
         {
-          oprot.writeString(_iter164.getKey());
+          oprot.writeString(_iter172.getKey());
           {
-            oprot.writeI32(_iter164.getValue().size());
-            for (Map.Entry<GlobalStreamId, Long> _iter165 : _iter164.getValue().entrySet())
+            oprot.writeI32(_iter172.getValue().size());
+            for (Map.Entry<GlobalStreamId, Long> _iter173 : _iter172.getValue().entrySet())
             {
-              _iter165.getKey().write(oprot);
-              oprot.writeI64(_iter165.getValue());
+              _iter173.getKey().write(oprot);
+              oprot.writeI64(_iter173.getValue());
             }
           }
         }
       }
       {
         oprot.writeI32(struct.execute_ms_avg.size());
-        for (Map.Entry<String, Map<GlobalStreamId,Double>> _iter166 : struct.execute_ms_avg.entrySet())
+        for (Map.Entry<String, Map<GlobalStreamId,Double>> _iter174 : struct.execute_ms_avg.entrySet())
         {
-          oprot.writeString(_iter166.getKey());
+          oprot.writeString(_iter174.getKey());
           {
-            oprot.writeI32(_iter166.getValue().size());
-            for (Map.Entry<GlobalStreamId, Double> _iter167 : _iter166.getValue().entrySet())
+            oprot.writeI32(_iter174.getValue().size());
+            for (Map.Entry<GlobalStreamId, Double> _iter175 : _iter174.getValue().entrySet())
             {
-              _iter167.getKey().write(oprot);
-              oprot.writeDouble(_iter167.getValue());
+              _iter175.getKey().write(oprot);
+              oprot.writeDouble(_iter175.getValue());
             }
           }
         }
@@ -1259,33 +1259,8 @@ public class BoltStats implements org.apache.thrift.TBase<BoltStats, BoltStats._
     public void read(org.apache.thrift.protocol.TProtocol prot, BoltStats struct) throws org.apache.thrift.TException {
       TTupleProtocol iprot = (TTupleProtocol) prot;
       {
-        org.apache.thrift.protocol.TMap _map168 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.acked = new HashMap<String,Map<GlobalStreamId,Long>>(2*_map168.size);
-        String _key169;
-        Map<GlobalStreamId,Long> _val170;
-        for (int _i171 = 0; _i171 < _map168.size; ++_i171)
-        {
-          _key169 = iprot.readString();
-          {
-            org.apache.thrift.protocol.TMap _map172 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.I64, iprot.readI32());
-            _val170 = new HashMap<GlobalStreamId,Long>(2*_map172.size);
-            GlobalStreamId _key173;
-            long _val174;
-            for (int _i175 = 0; _i175 < _map172.size; ++_i175)
-            {
-              _key173 = new GlobalStreamId();
-              _key173.read(iprot);
-              _val174 = iprot.readI64();
-              _val170.put(_key173, _val174);
-            }
-          }
-          struct.acked.put(_key169, _val170);
-        }
-      }
-      struct.set_acked_isSet(true);
-      {
         org.apache.thrift.protocol.TMap _map176 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.failed = new HashMap<String,Map<GlobalStreamId,Long>>(2*_map176.size);
+        struct.acked = new HashMap<String,Map<GlobalStreamId,Long>>(2*_map176.size);
         String _key177;
         Map<GlobalStreamId,Long> _val178;
         for (int _i179 = 0; _i179 < _map176.size; ++_i179)
@@ -1304,82 +1279,107 @@ public class BoltStats implements org.apache.thrift.TBase<BoltStats, BoltStats._
               _val178.put(_key181, _val182);
             }
           }
-          struct.failed.put(_key177, _val178);
+          struct.acked.put(_key177, _val178);
         }
       }
-      struct.set_failed_isSet(true);
+      struct.set_acked_isSet(true);
       {
         org.apache.thrift.protocol.TMap _map184 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.process_ms_avg = new HashMap<String,Map<GlobalStreamId,Double>>(2*_map184.size);
+        struct.failed = new HashMap<String,Map<GlobalStreamId,Long>>(2*_map184.size);
         String _key185;
-        Map<GlobalStreamId,Double> _val186;
+        Map<GlobalStreamId,Long> _val186;
         for (int _i187 = 0; _i187 < _map184.size; ++_i187)
         {
           _key185 = iprot.readString();
           {
-            org.apache.thrift.protocol.TMap _map188 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.DOUBLE, iprot.readI32());
-            _val186 = new HashMap<GlobalStreamId,Double>(2*_map188.size);
+            org.apache.thrift.protocol.TMap _map188 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.I64, iprot.readI32());
+            _val186 = new HashMap<GlobalStreamId,Long>(2*_map188.size);
             GlobalStreamId _key189;
-            double _val190;
+            long _val190;
             for (int _i191 = 0; _i191 < _map188.size; ++_i191)
             {
               _key189 = new GlobalStreamId();
               _key189.read(iprot);
-              _val190 = iprot.readDouble();
+              _val190 = iprot.readI64();
               _val186.put(_key189, _val190);
             }
           }
-          struct.process_ms_avg.put(_key185, _val186);
+          struct.failed.put(_key185, _val186);
         }
       }
-      struct.set_process_ms_avg_isSet(true);
+      struct.set_failed_isSet(true);
       {
         org.apache.thrift.protocol.TMap _map192 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.executed = new HashMap<String,Map<GlobalStreamId,Long>>(2*_map192.size);
+        struct.process_ms_avg = new HashMap<String,Map<GlobalStreamId,Double>>(2*_map192.size);
         String _key193;
-        Map<GlobalStreamId,Long> _val194;
+        Map<GlobalStreamId,Double> _val194;
         for (int _i195 = 0; _i195 < _map192.size; ++_i195)
         {
           _key193 = iprot.readString();
           {
-            org.apache.thrift.protocol.TMap _map196 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.I64, iprot.readI32());
-            _val194 = new HashMap<GlobalStreamId,Long>(2*_map196.size);
+            org.apache.thrift.protocol.TMap _map196 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.DOUBLE, iprot.readI32());
+            _val194 = new HashMap<GlobalStreamId,Double>(2*_map196.size);
             GlobalStreamId _key197;
-            long _val198;
+            double _val198;
             for (int _i199 = 0; _i199 < _map196.size; ++_i199)
             {
               _key197 = new GlobalStreamId();
               _key197.read(iprot);
-              _val198 = iprot.readI64();
+              _val198 = iprot.readDouble();
               _val194.put(_key197, _val198);
             }
           }
-          struct.executed.put(_key193, _val194);
+          struct.process_ms_avg.put(_key193, _val194);
         }
       }
-      struct.set_executed_isSet(true);
+      struct.set_process_ms_avg_isSet(true);
       {
         org.apache.thrift.protocol.TMap _map200 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.execute_ms_avg = new HashMap<String,Map<GlobalStreamId,Double>>(2*_map200.size);
+        struct.executed = new HashMap<String,Map<GlobalStreamId,Long>>(2*_map200.size);
         String _key201;
-        Map<GlobalStreamId,Double> _val202;
+        Map<GlobalStreamId,Long> _val202;
         for (int _i203 = 0; _i203 < _map200.size; ++_i203)
         {
           _key201 = iprot.readString();
           {
-            org.apache.thrift.protocol.TMap _map204 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.DOUBLE, iprot.readI32());
-            _val202 = new HashMap<GlobalStreamId,Double>(2*_map204.size);
+            org.apache.thrift.protocol.TMap _map204 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.I64, iprot.readI32());
+            _val202 = new HashMap<GlobalStreamId,Long>(2*_map204.size);
             GlobalStreamId _key205;
-            double _val206;
+            long _val206;
             for (int _i207 = 0; _i207 < _map204.size; ++_i207)
             {
               _key205 = new GlobalStreamId();
               _key205.read(iprot);
-              _val206 = iprot.readDouble();
+              _val206 = iprot.readI64();
               _val202.put(_key205, _val206);
             }
           }
-          struct.execute_ms_avg.put(_key201, _val202);
+          struct.executed.put(_key201, _val202);
+        }
+      }
+      struct.set_executed_isSet(true);
+      {
+        org.apache.thrift.protocol.TMap _map208 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
+        struct.execute_ms_avg = new HashMap<String,Map<GlobalStreamId,Double>>(2*_map208.size);
+        String _key209;
+        Map<GlobalStreamId,Double> _val210;
+        for (int _i211 = 0; _i211 < _map208.size; ++_i211)
+        {
+          _key209 = iprot.readString();
+          {
+            org.apache.thrift.protocol.TMap _map212 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.DOUBLE, iprot.readI32());
+            _val210 = new HashMap<GlobalStreamId,Double>(2*_map212.size);
+            GlobalStreamId _key213;
+            double _val214;
+            for (int _i215 = 0; _i215 < _map212.size; ++_i215)
+            {
+              _key213 = new GlobalStreamId();
+              _key213.read(iprot);
+              _val214 = iprot.readDouble();
+              _val210.put(_key213, _val214);
+            }
+          }
+          struct.execute_ms_avg.put(_key209, _val210);
         }
       }
       struct.set_execute_ms_avg_isSet(true);
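
The regenerated TupleScheme code above follows a simple length-prefixed layout: the writer emits the map size as an i32 and then each key/value pair in turn, and the reader mirrors the writer exactly (including presizing the HashMap to 2*size). As a rough, self-contained sketch of that pattern — using plain java.io streams instead of Thrift's TTupleProtocol, so all names below are illustrative only, not Storm or Thrift API:

```java
import java.io.*;
import java.util.*;

public class TupleMapSketch {
    // Write a Map<String, Long> the way the generated TupleScheme does:
    // an i32 count first, then each key/value pair in iteration order.
    static void writeMap(DataOutputStream out, Map<String, Long> m) throws IOException {
        out.writeInt(m.size());
        for (Map.Entry<String, Long> e : m.entrySet()) {
            out.writeUTF(e.getKey());
            out.writeLong(e.getValue());
        }
    }

    // The reader mirrors the writer: read the count, then that many pairs.
    static Map<String, Long> readMap(DataInputStream in) throws IOException {
        int size = in.readInt();
        // The generated code also presizes the map to 2*size to limit rehashing.
        Map<String, Long> m = new HashMap<>(2 * size);
        for (int i = 0; i < size; i++) {
            m.put(in.readUTF(), in.readLong());
        }
        return m;
    }

    public static void main(String[] args) throws IOException {
        Map<String, Long> acked = new HashMap<>();
        acked.put("default", 42L);
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeMap(new DataOutputStream(buf), acked);
        Map<String, Long> roundTrip =
            readMap(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(roundTrip.equals(acked));
    }
}
```

Because the reader consumes exactly what the writer produced, the `_iterN`/`_mapN` counter renames in the diff are purely cosmetic: the Thrift compiler numbers these temporaries globally per file, so adding fields elsewhere shifts every subsequent index without changing the wire format.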

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/ClusterSummary.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/ClusterSummary.java b/storm-core/src/jvm/backtype/storm/generated/ClusterSummary.java
index 5292b78..9c42427 100644
--- a/storm-core/src/jvm/backtype/storm/generated/ClusterSummary.java
+++ b/storm-core/src/jvm/backtype/storm/generated/ClusterSummary.java
@@ -664,14 +664,14 @@ public class ClusterSummary implements org.apache.thrift.TBase<ClusterSummary, C
           case 1: // SUPERVISORS
             if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
               {
-                org.apache.thrift.protocol.TList _list84 = iprot.readListBegin();
-                struct.supervisors = new ArrayList<SupervisorSummary>(_list84.size);
-                SupervisorSummary _elem85;
-                for (int _i86 = 0; _i86 < _list84.size; ++_i86)
+                org.apache.thrift.protocol.TList _list92 = iprot.readListBegin();
+                struct.supervisors = new ArrayList<SupervisorSummary>(_list92.size);
+                SupervisorSummary _elem93;
+                for (int _i94 = 0; _i94 < _list92.size; ++_i94)
                 {
-                  _elem85 = new SupervisorSummary();
-                  _elem85.read(iprot);
-                  struct.supervisors.add(_elem85);
+                  _elem93 = new SupervisorSummary();
+                  _elem93.read(iprot);
+                  struct.supervisors.add(_elem93);
                 }
                 iprot.readListEnd();
               }
@@ -691,14 +691,14 @@ public class ClusterSummary implements org.apache.thrift.TBase<ClusterSummary, C
           case 3: // TOPOLOGIES
             if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
               {
-                org.apache.thrift.protocol.TList _list87 = iprot.readListBegin();
-                struct.topologies = new ArrayList<TopologySummary>(_list87.size);
-                TopologySummary _elem88;
-                for (int _i89 = 0; _i89 < _list87.size; ++_i89)
+                org.apache.thrift.protocol.TList _list95 = iprot.readListBegin();
+                struct.topologies = new ArrayList<TopologySummary>(_list95.size);
+                TopologySummary _elem96;
+                for (int _i97 = 0; _i97 < _list95.size; ++_i97)
                 {
-                  _elem88 = new TopologySummary();
-                  _elem88.read(iprot);
-                  struct.topologies.add(_elem88);
+                  _elem96 = new TopologySummary();
+                  _elem96.read(iprot);
+                  struct.topologies.add(_elem96);
                 }
                 iprot.readListEnd();
               }
@@ -710,14 +710,14 @@ public class ClusterSummary implements org.apache.thrift.TBase<ClusterSummary, C
           case 4: // NIMBUSES
             if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
               {
-                org.apache.thrift.protocol.TList _list90 = iprot.readListBegin();
-                struct.nimbuses = new ArrayList<NimbusSummary>(_list90.size);
-                NimbusSummary _elem91;
-                for (int _i92 = 0; _i92 < _list90.size; ++_i92)
+                org.apache.thrift.protocol.TList _list98 = iprot.readListBegin();
+                struct.nimbuses = new ArrayList<NimbusSummary>(_list98.size);
+                NimbusSummary _elem99;
+                for (int _i100 = 0; _i100 < _list98.size; ++_i100)
                 {
-                  _elem91 = new NimbusSummary();
-                  _elem91.read(iprot);
-                  struct.nimbuses.add(_elem91);
+                  _elem99 = new NimbusSummary();
+                  _elem99.read(iprot);
+                  struct.nimbuses.add(_elem99);
                 }
                 iprot.readListEnd();
               }
@@ -743,9 +743,9 @@ public class ClusterSummary implements org.apache.thrift.TBase<ClusterSummary, C
         oprot.writeFieldBegin(SUPERVISORS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.supervisors.size()));
-          for (SupervisorSummary _iter93 : struct.supervisors)
+          for (SupervisorSummary _iter101 : struct.supervisors)
           {
-            _iter93.write(oprot);
+            _iter101.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -760,9 +760,9 @@ public class ClusterSummary implements org.apache.thrift.TBase<ClusterSummary, C
         oprot.writeFieldBegin(TOPOLOGIES_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.topologies.size()));
-          for (TopologySummary _iter94 : struct.topologies)
+          for (TopologySummary _iter102 : struct.topologies)
           {
-            _iter94.write(oprot);
+            _iter102.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -772,9 +772,9 @@ public class ClusterSummary implements org.apache.thrift.TBase<ClusterSummary, C
         oprot.writeFieldBegin(NIMBUSES_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.nimbuses.size()));
-          for (NimbusSummary _iter95 : struct.nimbuses)
+          for (NimbusSummary _iter103 : struct.nimbuses)
           {
-            _iter95.write(oprot);
+            _iter103.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -799,23 +799,23 @@ public class ClusterSummary implements org.apache.thrift.TBase<ClusterSummary, C
       TTupleProtocol oprot = (TTupleProtocol) prot;
       {
         oprot.writeI32(struct.supervisors.size());
-        for (SupervisorSummary _iter96 : struct.supervisors)
+        for (SupervisorSummary _iter104 : struct.supervisors)
         {
-          _iter96.write(oprot);
+          _iter104.write(oprot);
         }
       }
       {
         oprot.writeI32(struct.topologies.size());
-        for (TopologySummary _iter97 : struct.topologies)
+        for (TopologySummary _iter105 : struct.topologies)
         {
-          _iter97.write(oprot);
+          _iter105.write(oprot);
         }
       }
       {
         oprot.writeI32(struct.nimbuses.size());
-        for (NimbusSummary _iter98 : struct.nimbuses)
+        for (NimbusSummary _iter106 : struct.nimbuses)
         {
-          _iter98.write(oprot);
+          _iter106.write(oprot);
         }
       }
       BitSet optionals = new BitSet();
@@ -832,38 +832,38 @@ public class ClusterSummary implements org.apache.thrift.TBase<ClusterSummary, C
     public void read(org.apache.thrift.protocol.TProtocol prot, ClusterSummary struct) throws org.apache.thrift.TException {
       TTupleProtocol iprot = (TTupleProtocol) prot;
       {
-        org.apache.thrift.protocol.TList _list99 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.supervisors = new ArrayList<SupervisorSummary>(_list99.size);
-        SupervisorSummary _elem100;
-        for (int _i101 = 0; _i101 < _list99.size; ++_i101)
+        org.apache.thrift.protocol.TList _list107 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.supervisors = new ArrayList<SupervisorSummary>(_list107.size);
+        SupervisorSummary _elem108;
+        for (int _i109 = 0; _i109 < _list107.size; ++_i109)
         {
-          _elem100 = new SupervisorSummary();
-          _elem100.read(iprot);
-          struct.supervisors.add(_elem100);
+          _elem108 = new SupervisorSummary();
+          _elem108.read(iprot);
+          struct.supervisors.add(_elem108);
         }
       }
       struct.set_supervisors_isSet(true);
       {
-        org.apache.thrift.protocol.TList _list102 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.topologies = new ArrayList<TopologySummary>(_list102.size);
-        TopologySummary _elem103;
-        for (int _i104 = 0; _i104 < _list102.size; ++_i104)
+        org.apache.thrift.protocol.TList _list110 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.topologies = new ArrayList<TopologySummary>(_list110.size);
+        TopologySummary _elem111;
+        for (int _i112 = 0; _i112 < _list110.size; ++_i112)
         {
-          _elem103 = new TopologySummary();
-          _elem103.read(iprot);
-          struct.topologies.add(_elem103);
+          _elem111 = new TopologySummary();
+          _elem111.read(iprot);
+          struct.topologies.add(_elem111);
         }
       }
       struct.set_topologies_isSet(true);
       {
-        org.apache.thrift.protocol.TList _list105 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.nimbuses = new ArrayList<NimbusSummary>(_list105.size);
-        NimbusSummary _elem106;
-        for (int _i107 = 0; _i107 < _list105.size; ++_i107)
+        org.apache.thrift.protocol.TList _list113 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.nimbuses = new ArrayList<NimbusSummary>(_list113.size);
+        NimbusSummary _elem114;
+        for (int _i115 = 0; _i115 < _list113.size; ++_i115)
         {
-          _elem106 = new NimbusSummary();
-          _elem106.read(iprot);
-          struct.nimbuses.add(_elem106);
+          _elem114 = new NimbusSummary();
+          _elem114.read(iprot);
+          struct.nimbuses.add(_elem114);
         }
       }
       struct.set_nimbuses_isSet(true);

http://git-wip-us.apache.org/repos/asf/storm/blob/b03ce6b2/storm-core/src/jvm/backtype/storm/generated/ClusterWorkerHeartbeat.java
----------------------------------------------------------------------
diff --git a/storm-core/src/jvm/backtype/storm/generated/ClusterWorkerHeartbeat.java b/storm-core/src/jvm/backtype/storm/generated/ClusterWorkerHeartbeat.java
index 0ac0352..a1b7e2e 100644
--- a/storm-core/src/jvm/backtype/storm/generated/ClusterWorkerHeartbeat.java
+++ b/storm-core/src/jvm/backtype/storm/generated/ClusterWorkerHeartbeat.java
@@ -635,17 +635,17 @@ public class ClusterWorkerHeartbeat implements org.apache.thrift.TBase<ClusterWo
           case 2: // EXECUTOR_STATS
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map600 = iprot.readMapBegin();
-                struct.executor_stats = new HashMap<ExecutorInfo,ExecutorStats>(2*_map600.size);
-                ExecutorInfo _key601;
-                ExecutorStats _val602;
-                for (int _i603 = 0; _i603 < _map600.size; ++_i603)
+                org.apache.thrift.protocol.TMap _map608 = iprot.readMapBegin();
+                struct.executor_stats = new HashMap<ExecutorInfo,ExecutorStats>(2*_map608.size);
+                ExecutorInfo _key609;
+                ExecutorStats _val610;
+                for (int _i611 = 0; _i611 < _map608.size; ++_i611)
                 {
-                  _key601 = new ExecutorInfo();
-                  _key601.read(iprot);
-                  _val602 = new ExecutorStats();
-                  _val602.read(iprot);
-                  struct.executor_stats.put(_key601, _val602);
+                  _key609 = new ExecutorInfo();
+                  _key609.read(iprot);
+                  _val610 = new ExecutorStats();
+                  _val610.read(iprot);
+                  struct.executor_stats.put(_key609, _val610);
                 }
                 iprot.readMapEnd();
               }
@@ -692,10 +692,10 @@ public class ClusterWorkerHeartbeat implements org.apache.thrift.TBase<ClusterWo
         oprot.writeFieldBegin(EXECUTOR_STATS_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.STRUCT, struct.executor_stats.size()));
-          for (Map.Entry<ExecutorInfo, ExecutorStats> _iter604 : struct.executor_stats.entrySet())
+          for (Map.Entry<ExecutorInfo, ExecutorStats> _iter612 : struct.executor_stats.entrySet())
           {
-            _iter604.getKey().write(oprot);
-            _iter604.getValue().write(oprot);
+            _iter612.getKey().write(oprot);
+            _iter612.getValue().write(oprot);
           }
           oprot.writeMapEnd();
         }
@@ -727,10 +727,10 @@ public class ClusterWorkerHeartbeat implements org.apache.thrift.TBase<ClusterWo
       oprot.writeString(struct.storm_id);
       {
         oprot.writeI32(struct.executor_stats.size());
-        for (Map.Entry<ExecutorInfo, ExecutorStats> _iter605 : struct.executor_stats.entrySet())
+        for (Map.Entry<ExecutorInfo, ExecutorStats> _iter613 : struct.executor_stats.entrySet())
         {
-          _iter605.getKey().write(oprot);
-          _iter605.getValue().write(oprot);
+          _iter613.getKey().write(oprot);
+          _iter613.getValue().write(oprot);
         }
       }
       oprot.writeI32(struct.time_secs);
@@ -743,17 +743,17 @@ public class ClusterWorkerHeartbeat implements org.apache.thrift.TBase<ClusterWo
       struct.storm_id = iprot.readString();
       struct.set_storm_id_isSet(true);
       {
-        org.apache.thrift.protocol.TMap _map606 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.executor_stats = new HashMap<ExecutorInfo,ExecutorStats>(2*_map606.size);
-        ExecutorInfo _key607;
-        ExecutorStats _val608;
-        for (int _i609 = 0; _i609 < _map606.size; ++_i609)
+        org.apache.thrift.protocol.TMap _map614 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.executor_stats = new HashMap<ExecutorInfo,ExecutorStats>(2*_map614.size);
+        ExecutorInfo _key615;
+        ExecutorStats _val616;
+        for (int _i617 = 0; _i617 < _map614.size; ++_i617)
         {
-          _key607 = new ExecutorInfo();
-          _key607.read(iprot);
-          _val608 = new ExecutorStats();
-          _val608.read(iprot);
-          struct.executor_stats.put(_key607, _val608);
+          _key615 = new ExecutorInfo();
+          _key615.read(iprot);
+          _val616 = new ExecutorStats();
+          _val616.read(iprot);
+          struct.executor_stats.put(_key615, _val616);
         }
       }
       struct.set_executor_stats_isSet(true);


[37/50] [abbrv] storm git commit: Added STORM-1220 to CHANGELOG.

Posted by sr...@apache.org.
Added STORM-1220 to CHANGELOG.


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/01bab865
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/01bab865
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/01bab865

Branch: refs/heads/STORM-1040
Commit: 01bab865400df5dfce3909f8b9f1e6199792d5a3
Parents: 352a284
Author: Sriharsha Chintalapani <ha...@hortonworks.com>
Authored: Wed Nov 25 11:33:49 2015 -0800
Committer: Sriharsha Chintalapani <ha...@hortonworks.com>
Committed: Wed Nov 25 11:33:49 2015 -0800

----------------------------------------------------------------------
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/01bab865/CHANGELOG.md
----------------------------------------------------------------------
diff --git a/CHANGELOG.md b/CHANGELOG.md
index b116116..8106078 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,5 @@
 ## 0.11.0
+ * STORM-1220: Avoid double copying in the Kafka spout.
  * STORM-1340: Use Travis-CI build matrix to improve test execution times
  * STORM-1126: Allow a configMethod that takes no arguments (Flux)
  * STORM-1203: worker metadata file creation doesn't use storm.log.dir config


[42/50] [abbrv] storm git commit: Merge branch 'windowing-flux' of https://github.com/arunmahadevan/storm into STORM-1207

Posted by sr...@apache.org.
Merge branch 'windowing-flux' of https://github.com/arunmahadevan/storm into STORM-1207


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/0d8a99d4
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/0d8a99d4
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/0d8a99d4

Branch: refs/heads/STORM-1040
Commit: 0d8a99d477078a9c9a88cd79c016a387c89f526a
Parents: 4c59de6 aa1e1ed
Author: Jungtaek Lim <ka...@gmail.com>
Authored: Fri Nov 27 05:50:37 2015 +0900
Committer: Jungtaek Lim <ka...@gmail.com>
Committed: Fri Nov 27 05:50:37 2015 +0900

----------------------------------------------------------------------
 .../java/org/apache/storm/flux/FluxBuilder.java | 37 +++++------
 external/flux/flux-examples/README.md           |  9 +++
 .../storm/flux/examples/TestPrintBolt.java      | 39 +++++++++++
 .../storm/flux/examples/TestWindowBolt.java     | 47 +++++++++++++
 .../src/main/resources/simple_windowing.yaml    | 69 ++++++++++++++++++++
 5 files changed, 182 insertions(+), 19 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/0d8a99d4/external/flux/flux-core/src/main/java/org/apache/storm/flux/FluxBuilder.java
----------------------------------------------------------------------


[27/50] [abbrv] storm git commit: Merge branch 'STORM-1340' of https://github.com/knusbaum/incubator-storm

Posted by sr...@apache.org.
Merge branch 'STORM-1340' of https://github.com/knusbaum/incubator-storm


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/53a46047
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/53a46047
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/53a46047

Branch: refs/heads/STORM-1040
Commit: 53a46047d18cb79a2360869ef864bb68a77447bb
Parents: f8a2d65 ba6ace8
Author: Kyle Nusbaum <Ky...@gmail.com>
Authored: Tue Nov 24 14:04:39 2015 -0600
Committer: Kyle Nusbaum <Ky...@gmail.com>
Committed: Tue Nov 24 14:04:39 2015 -0600

----------------------------------------------------------------------
 .travis.yml                       | 13 ++++++++++++-
 dev-tools/travis/travis-script.sh |  2 +-
 2 files changed, 13 insertions(+), 2 deletions(-)
----------------------------------------------------------------------



[04/50] [abbrv] storm git commit: DebugOptions needs to be above TopologyInfo

Posted by sr...@apache.org.
DebugOptions needs to be above TopologyInfo


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/037cd00d
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/037cd00d
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/037cd00d

Branch: refs/heads/STORM-1040
Commit: 037cd00d58418d588bffebf02c7897e29e193a9b
Parents: ccf3fd2
Author: Michael Schonfeld <mi...@schonfeld.org>
Authored: Mon Nov 16 14:48:45 2015 -0500
Committer: Michael Schonfeld <mi...@schonfeld.org>
Committed: Mon Nov 23 18:49:11 2015 -0500

----------------------------------------------------------------------
 storm-core/src/storm.thrift | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/037cd00d/storm-core/src/storm.thrift
----------------------------------------------------------------------
diff --git a/storm-core/src/storm.thrift b/storm-core/src/storm.thrift
index 6d967b0..d5952d7 100644
--- a/storm-core/src/storm.thrift
+++ b/storm-core/src/storm.thrift
@@ -231,6 +231,11 @@ struct ExecutorSummary {
   7: optional ExecutorStats stats;
 }
 
+struct DebugOptions {
+  1: optional bool enable
+  2: optional double samplingpct
+}
+
 struct TopologyInfo {
   1: required string id;
   2: required string name;
@@ -250,11 +255,6 @@ struct TopologyInfo {
 526: optional double assigned_cpu;
 }
 
-struct DebugOptions {
-  1: optional bool enable
-  2: optional double samplingpct
-}
-
 struct CommonAggregateStats {
 1: optional i32 num_executors;
 2: optional i32 num_tasks;
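
[Editorial note on the reordering above: the Thrift IDL compiler resolves type references in declaration order and has no forward declarations, so a struct must be defined earlier in the file than any struct that uses it as a field type. Moving DebugOptions above TopologyInfo satisfies that constraint once TopologyInfo gains a DebugOptions-typed field. A minimal sketch of the rule, using hypothetical structs that are not part of storm.thrift:

```thrift
// Defined first, so it can be referenced by later structs.
struct Inner {
  1: optional bool flag
}

struct Outer {
  // OK: Inner is already defined above. Declaring Outer before
  // Inner would fail, since Thrift IDL has no forward declarations.
  1: optional Inner nested;
}
```
]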


[20/50] [abbrv] storm git commit: Merge branch 'master' of https://git-wip-us.apache.org/repos/asf/storm

Posted by sr...@apache.org.
Merge branch 'master' of https://git-wip-us.apache.org/repos/asf/storm


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/fd75ca72
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/fd75ca72
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/fd75ca72

Branch: refs/heads/STORM-1040
Commit: fd75ca72016bfb8d20c996bb3d7514f8c2a20740
Parents: e03b28c 74cd042
Author: P. Taylor Goetz <pt...@gmail.com>
Authored: Tue Nov 24 13:56:43 2015 -0500
Committer: P. Taylor Goetz <pt...@gmail.com>
Committed: Tue Nov 24 13:56:43 2015 -0500

----------------------------------------------------------------------
 CHANGELOG.md                                    |   10 +
 LICENSE                                         |   38 +-
 bin/storm.py                                    |   43 +-
 conf/defaults.yaml                              |   12 +
 docs/documentation/Pacemaker.md                 |  108 +
 docs/documentation/Windowing.md                 |  144 ++
 docs/documentation/ui-rest-api.md               |   16 +-
 .../storm/starter/SlidingWindowTopology.java    |  185 ++
 external/storm-metrics/pom.xml                  |   53 +-
 .../resources/libsigar-amd64-freebsd-6.so       |  Bin 210641 -> 0 bytes
 .../resources/resources/libsigar-amd64-linux.so |  Bin 246605 -> 0 bytes
 .../resources/libsigar-amd64-solaris.so         |  Bin 251360 -> 0 bytes
 .../resources/libsigar-ia64-hpux-11.sl          |  Bin 577452 -> 0 bytes
 .../resources/resources/libsigar-ia64-linux.so  |  Bin 494929 -> 0 bytes
 .../resources/resources/libsigar-pa-hpux-11.sl  |  Bin 516096 -> 0 bytes
 .../resources/resources/libsigar-ppc-aix-5.so   |  Bin 400925 -> 0 bytes
 .../resources/resources/libsigar-ppc-linux.so   |  Bin 258547 -> 0 bytes
 .../resources/resources/libsigar-ppc64-aix-5.so |  Bin 425077 -> 0 bytes
 .../resources/resources/libsigar-ppc64-linux.so |  Bin 330767 -> 0 bytes
 .../resources/resources/libsigar-s390x-linux.so |  Bin 269932 -> 0 bytes
 .../resources/libsigar-sparc-solaris.so         |  Bin 285004 -> 0 bytes
 .../resources/libsigar-sparc64-solaris.so       |  Bin 261896 -> 0 bytes
 .../resources/libsigar-universal-macosx.dylib   |  Bin 377668 -> 0 bytes
 .../resources/libsigar-universal64-macosx.dylib |  Bin 397440 -> 0 bytes
 .../resources/libsigar-x86-freebsd-5.so         |  Bin 179751 -> 0 bytes
 .../resources/libsigar-x86-freebsd-6.so         |  Bin 179379 -> 0 bytes
 .../resources/resources/libsigar-x86-linux.so   |  Bin 233385 -> 0 bytes
 .../resources/resources/libsigar-x86-solaris.so |  Bin 242880 -> 0 bytes
 .../resources/resources/sigar-amd64-winnt.dll   |  Bin 402432 -> 0 bytes
 .../resources/resources/sigar-x86-winnt.dll     |  Bin 266240 -> 0 bytes
 .../resources/resources/sigar-x86-winnt.lib     |  Bin 99584 -> 0 bytes
 log4j2/cluster.xml                              |   12 +-
 log4j2/worker.xml                               |   10 +-
 pom.xml                                         |    1 +
 storm-core/pom.xml                              |    7 +-
 storm-core/src/clj/backtype/storm/cluster.clj   |  279 +--
 .../cluster_state/zookeeper_state_factory.clj   |  157 ++
 .../clj/backtype/storm/command/heartbeats.clj   |   52 +
 storm-core/src/clj/backtype/storm/config.clj    |    6 +
 .../src/clj/backtype/storm/daemon/common.clj    |   30 +-
 .../src/clj/backtype/storm/daemon/executor.clj  |    4 +
 .../src/clj/backtype/storm/daemon/logviewer.clj |   60 +-
 .../src/clj/backtype/storm/daemon/nimbus.clj    |   11 +-
 .../src/clj/backtype/storm/daemon/worker.clj    |   32 +-
 storm-core/src/clj/backtype/storm/stats.clj     |   88 +-
 storm-core/src/clj/backtype/storm/ui/core.clj   |   21 +-
 storm-core/src/clj/backtype/storm/util.clj      |   16 +
 .../org/apache/storm/pacemaker/pacemaker.clj    |  237 +++
 .../storm/pacemaker/pacemaker_state_factory.clj |  124 ++
 storm-core/src/jvm/backtype/storm/Config.java   |  105 +
 .../backtype/storm/cluster/ClusterState.java    |  208 ++
 .../storm/cluster/ClusterStateContext.java      |   41 +
 .../storm/cluster/ClusterStateFactory.java      |   28 +
 .../storm/cluster/ClusterStateListener.java     |   22 +
 .../backtype/storm/cluster/ConnectionState.java |   24 +
 .../jvm/backtype/storm/cluster/DaemonType.java  |   27 +
 .../backtype/storm/generated/Assignment.java    |  244 +--
 .../jvm/backtype/storm/generated/BoltStats.java |  340 ++--
 .../storm/generated/ClusterSummary.java         |  108 +-
 .../storm/generated/ClusterWorkerHeartbeat.java |   52 +-
 .../storm/generated/ComponentPageInfo.java      |  220 +-
 .../backtype/storm/generated/Credentials.java   |   44 +-
 .../backtype/storm/generated/ExecutorStats.java |  160 +-
 .../generated/HBAuthorizationException.java     |  406 ++++
 .../storm/generated/HBExecutionException.java   |  406 ++++
 .../jvm/backtype/storm/generated/HBMessage.java |  636 ++++++
 .../backtype/storm/generated/HBMessageData.java |  640 ++++++
 .../jvm/backtype/storm/generated/HBNodes.java   |  461 +++++
 .../jvm/backtype/storm/generated/HBPulse.java   |  522 +++++
 .../jvm/backtype/storm/generated/HBRecords.java |  466 +++++
 .../storm/generated/HBServerMessageType.java    |  113 +
 .../storm/generated/LSApprovedWorkers.java      |   44 +-
 .../generated/LSSupervisorAssignments.java      |   48 +-
 .../backtype/storm/generated/LSTopoHistory.java |   64 +-
 .../storm/generated/LSTopoHistoryList.java      |   36 +-
 .../storm/generated/LSWorkerHeartbeat.java      |   36 +-
 .../storm/generated/LocalAssignment.java        |   36 +-
 .../storm/generated/LocalStateData.java         |   48 +-
 .../jvm/backtype/storm/generated/LogConfig.java |   48 +-
 .../jvm/backtype/storm/generated/Nimbus.java    |   36 +-
 .../jvm/backtype/storm/generated/NodeInfo.java  |   32 +-
 .../storm/generated/RebalanceOptions.java       |   44 +-
 .../backtype/storm/generated/SpoutStats.java    |  224 +-
 .../jvm/backtype/storm/generated/StormBase.java |   92 +-
 .../backtype/storm/generated/StormTopology.java |  251 ++-
 .../storm/generated/SupervisorInfo.java         |  152 +-
 .../storm/generated/SupervisorSummary.java      |  250 ++-
 .../storm/generated/TopologyHistoryInfo.java    |   32 +-
 .../backtype/storm/generated/TopologyInfo.java  |  164 +-
 .../storm/generated/TopologyPageInfo.java       |   96 +-
 .../backtype/storm/generated/TopologyStats.java |  220 +-
 .../backtype/storm/hooks/BaseWorkerHook.java    |   51 +
 .../jvm/backtype/storm/hooks/IWorkerHook.java   |   44 +
 .../storm/messaging/netty/ControlMessage.java   |   17 +-
 .../messaging/netty/INettySerializable.java     |   26 +
 .../netty/KerberosSaslClientHandler.java        |  152 ++
 .../netty/KerberosSaslNettyClient.java          |  203 ++
 .../netty/KerberosSaslNettyClientState.java     |   31 +
 .../netty/KerberosSaslNettyServer.java          |  210 ++
 .../netty/KerberosSaslNettyServerState.java     |   30 +
 .../netty/KerberosSaslServerHandler.java        |  133 ++
 .../storm/messaging/netty/MessageDecoder.java   |    4 +-
 .../netty/NettyRenameThreadFactory.java         |   10 +-
 .../netty/NettyUncaughtExceptionHandler.java    |   35 +
 .../storm/messaging/netty/SaslMessageToken.java |   37 +-
 .../storm/messaging/netty/SaslNettyClient.java  |   22 +-
 .../storm/messaging/netty/SaslNettyServer.java  |  244 ++-
 .../messaging/netty/SaslNettyServerState.java   |   13 +-
 .../messaging/netty/SaslStormServerHandler.java |   21 +-
 .../storm/messaging/netty/SaslUtils.java        |    1 +
 .../backtype/storm/messaging/netty/Server.java  |   50 +-
 .../messaging/netty/StormServerHandler.java     |   24 +-
 .../metric/internal/LatencyStatAndMetric.java   |   13 +-
 .../jvm/backtype/storm/scheduler/Cluster.java   |  115 +-
 .../resource/ResourceAwareScheduler.java        |   18 +-
 .../backtype/storm/security/auth/AuthUtils.java |   69 +
 .../backtype/storm/task/OutputCollector.java    |    2 +-
 .../backtype/storm/topology/IWindowedBolt.java  |   40 +
 .../storm/topology/TopologyBuilder.java         |   62 +-
 .../storm/topology/WindowedBoltExecutor.java    |  224 ++
 .../storm/topology/base/BaseWindowedBolt.java   |  179 ++
 .../storm/utils/ThriftTopologyUtils.java        |   36 +-
 .../src/jvm/backtype/storm/utils/Utils.java     |  102 +
 .../storm/validation/ConfigValidation.java      |   20 +-
 .../storm/windowing/CountEvictionPolicy.java    |   68 +
 .../storm/windowing/CountTriggerPolicy.java     |   63 +
 .../src/jvm/backtype/storm/windowing/Event.java |   41 +
 .../jvm/backtype/storm/windowing/EventImpl.java |   38 +
 .../storm/windowing/EvictionPolicy.java         |   42 +
 .../storm/windowing/TimeEvictionPolicy.java     |   52 +
 .../storm/windowing/TimeTriggerPolicy.java      |  115 ++
 .../storm/windowing/TriggerHandler.java         |   29 +
 .../backtype/storm/windowing/TriggerPolicy.java |   42 +
 .../backtype/storm/windowing/TupleWindow.java   |   26 +
 .../storm/windowing/TupleWindowImpl.java        |   61 +
 .../jvm/backtype/storm/windowing/Window.java    |   48 +
 .../windowing/WindowLifecycleListener.java      |   42 +
 .../backtype/storm/windowing/WindowManager.java |  212 ++
 .../storm/pacemaker/IServerMessageHandler.java  |   25 +
 .../apache/storm/pacemaker/PacemakerClient.java |  255 +++
 .../storm/pacemaker/PacemakerClientHandler.java |   75 +
 .../apache/storm/pacemaker/PacemakerServer.java |  163 ++
 .../storm/pacemaker/codec/ThriftDecoder.java    |   76 +
 .../storm/pacemaker/codec/ThriftEncoder.java    |  110 +
 .../pacemaker/codec/ThriftNettyClientCodec.java |   94 +
 .../pacemaker/codec/ThriftNettyServerCodec.java |   99 +
 .../src/jvm/storm/trident/TridentTopology.java  |   17 +-
 .../jvm/storm/trident/spout/IBatchSpout.java    |    2 +-
 .../spout/IOpaquePartitionedTridentSpout.java   |    3 +-
 .../trident/spout/IPartitionedTridentSpout.java |    2 +-
 .../storm/trident/spout/ITridentDataSource.java |   26 +
 .../jvm/storm/trident/spout/ITridentSpout.java  |    2 +-
 .../jvm/storm/trident/util/TridentUtils.java    |   33 +-
 storm-core/src/py/storm/Nimbus.py               |   14 +-
 storm-core/src/py/storm/ttypes.py               | 1925 ++++++++++++------
 storm-core/src/storm.thrift                     |   72 +-
 storm-core/src/ui/public/css/style.css          |    8 +
 storm-core/src/ui/public/images/bug.png         |  Bin 0 -> 4045 bytes
 storm-core/src/ui/public/images/statistic.png   |  Bin 0 -> 488 bytes
 storm-core/src/ui/public/index.html             |    4 +-
 .../public/templates/index-page-template.html   |   36 +
 .../templates/topology-page-template.html       |    6 +
 .../src/ui/public/templates/user-template.html  |   22 +-
 storm-core/src/ui/public/topology.html          |    7 +-
 .../test/clj/backtype/storm/cluster_test.clj    |    7 +-
 .../storm/pacemaker_state_factory_test.clj      |  150 ++
 .../clj/org/apache/storm/pacemaker_test.clj     |  242 +++
 .../jvm/backtype/storm/TestConfigValidate.java  |   18 +
 .../storm/topology/TopologyBuilderTest.java     |    5 +
 .../storm/utils/ThriftTopologyUtilsTest.java    |   94 +
 .../storm/windowing/WindowManagerTest.java      |  250 +++
 storm-dist/binary/LICENSE                       |   29 +
 172 files changed, 13348 insertions(+), 2622 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/fd75ca72/pom.xml
----------------------------------------------------------------------


[28/50] [abbrv] storm git commit: Adding STORM-1340 to CHANGELOG.md

Posted by sr...@apache.org.
Adding STORM-1340 to CHANGELOG.md


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/e2d267ff
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/e2d267ff
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/e2d267ff

Branch: refs/heads/STORM-1040
Commit: e2d267ff9aab1b0a53cdf94e8880d57e64816179
Parents: 53a4604
Author: Kyle Nusbaum <Ky...@gmail.com>
Authored: Tue Nov 24 14:55:19 2015 -0600
Committer: Kyle Nusbaum <Ky...@gmail.com>
Committed: Tue Nov 24 14:55:19 2015 -0600

----------------------------------------------------------------------
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/e2d267ff/CHANGELOG.md
----------------------------------------------------------------------
diff --git a/CHANGELOG.md b/CHANGELOG.md
index ccae7d0..2661422 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,5 @@
 ## 0.11.0
+ * STORM-1340: Use Travis-CI build matrix to improve test execution times
  * STORM-1126: Allow a configMethod that takes no arguments (Flux)
  * STORM-1203: worker metadata file creation doesn't use storm.log.dir config
  * STORM-1349: [Flux] Allow constructorArgs to take Maps as arguments


[05/50] [abbrv] storm git commit: add license header to ThriftTopologyUtilsTest

Posted by sr...@apache.org.
add license header to ThriftTopologyUtilsTest


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/fe646428
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/fe646428
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/fe646428

Branch: refs/heads/STORM-1040
Commit: fe646428aa044fde60638432b09da007a2acb7b7
Parents: 9cb8666
Author: Michael Schonfeld <mi...@schonfeld.org>
Authored: Tue Nov 17 09:14:15 2015 -0500
Committer: Michael Schonfeld <mi...@schonfeld.org>
Committed: Mon Nov 23 18:50:54 2015 -0500

----------------------------------------------------------------------
 .../storm/utils/ThriftTopologyUtilsTest.java       | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/fe646428/storm-core/test/jvm/backtype/storm/utils/ThriftTopologyUtilsTest.java
----------------------------------------------------------------------
diff --git a/storm-core/test/jvm/backtype/storm/utils/ThriftTopologyUtilsTest.java b/storm-core/test/jvm/backtype/storm/utils/ThriftTopologyUtilsTest.java
index 0056538..6793665 100644
--- a/storm-core/test/jvm/backtype/storm/utils/ThriftTopologyUtilsTest.java
+++ b/storm-core/test/jvm/backtype/storm/utils/ThriftTopologyUtilsTest.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package backtype.storm.utils;
 
 import backtype.storm.generated.*;


[31/50] [abbrv] storm git commit: Merge branch 'RAS_small_fixes' of https://github.com/jerrypeng/storm into STORM-1217-merge

Posted by sr...@apache.org.
Merge branch 'RAS_small_fixes' of https://github.com/jerrypeng/storm into STORM-1217-merge


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/05c70044
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/05c70044
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/05c70044

Branch: refs/heads/STORM-1040
Commit: 05c70044f8f98fdf50e8753781525449f77fe6b6
Parents: fc3b877 3c89f92
Author: Derek Dagit <de...@yahoo-inc.com>
Authored: Tue Nov 24 16:47:13 2015 -0600
Committer: Derek Dagit <de...@yahoo-inc.com>
Committed: Tue Nov 24 16:47:13 2015 -0600

----------------------------------------------------------------------
 .../src/jvm/storm/starter/ResourceAwareExampleTopology.java        | 2 +-
 .../backtype/storm/scheduler/resource/ResourceAwareScheduler.java  | 2 +-
 .../storm/scheduler/resource/strategies/ResourceAwareStrategy.java | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/05c70044/storm-core/src/jvm/backtype/storm/scheduler/resource/ResourceAwareScheduler.java
----------------------------------------------------------------------


[41/50] [abbrv] storm git commit: Address review comments

Posted by sr...@apache.org.
Address review comments

1. Removed reference to storm starter
2. Updated README


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/aa1e1ed8
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/aa1e1ed8
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/aa1e1ed8

Branch: refs/heads/STORM-1040
Commit: aa1e1ed8df065dea361d86995b31be3674f8eebe
Parents: 69b9cf5
Author: Arun Mahadevan <ai...@hortonworks.com>
Authored: Thu Nov 26 10:55:18 2015 +0530
Committer: Arun Mahadevan <ai...@hortonworks.com>
Committed: Thu Nov 26 11:08:43 2015 +0530

----------------------------------------------------------------------
 external/flux/flux-examples/README.md           |  9 +++++
 .../storm/flux/examples/TestPrintBolt.java      | 39 ++++++++++++++++++++
 .../src/main/resources/simple_windowing.yaml    |  2 +-
 3 files changed, 49 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/aa1e1ed8/external/flux/flux-examples/README.md
----------------------------------------------------------------------
diff --git a/external/flux/flux-examples/README.md b/external/flux/flux-examples/README.md
index 0a7085e..a6afec2 100644
--- a/external/flux/flux-examples/README.md
+++ b/external/flux/flux-examples/README.md
@@ -64,3 +64,12 @@ To run the `simple_hbase.yaml` example, copy the `hbase_bolt.properties` file to
 ```bash
 storm jar ./target/flux-examples-*.jar org.apache.storm.flux.Flux --local ./src/main/resources/simple_hbase.yaml --filter my_hbase_bolt.properties
 ```
+### [simple_windowing.yaml](src/main/resources/simple_windowing.yaml)
+
+This example illustrates how to use Flux to set up a storm topology that contains windowing operations.
+
+To run,
+
+```bash
+storm jar ./target/flux-examples-*.jar org.apache.storm.flux.Flux --local ./src/main/resources/simple_windowing.yaml
+```

http://git-wip-us.apache.org/repos/asf/storm/blob/aa1e1ed8/external/flux/flux-examples/src/main/java/org/apache/storm/flux/examples/TestPrintBolt.java
----------------------------------------------------------------------
diff --git a/external/flux/flux-examples/src/main/java/org/apache/storm/flux/examples/TestPrintBolt.java b/external/flux/flux-examples/src/main/java/org/apache/storm/flux/examples/TestPrintBolt.java
new file mode 100644
index 0000000..7e84441
--- /dev/null
+++ b/external/flux/flux-examples/src/main/java/org/apache/storm/flux/examples/TestPrintBolt.java
@@ -0,0 +1,39 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.flux.examples;
+
+import backtype.storm.topology.BasicOutputCollector;
+import backtype.storm.topology.OutputFieldsDeclarer;
+import backtype.storm.topology.base.BaseBasicBolt;
+import backtype.storm.tuple.Tuple;
+
+/**
+ * Prints the tuples to stdout
+ */
+public class TestPrintBolt extends BaseBasicBolt {
+
+    @Override
+    public void execute(Tuple tuple, BasicOutputCollector collector) {
+        System.out.println(tuple);
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer ofd) {
+    }
+
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/storm/blob/aa1e1ed8/external/flux/flux-examples/src/main/resources/simple_windowing.yaml
----------------------------------------------------------------------
diff --git a/external/flux/flux-examples/src/main/resources/simple_windowing.yaml b/external/flux/flux-examples/src/main/resources/simple_windowing.yaml
index a005a4a..31be109 100755
--- a/external/flux/flux-examples/src/main/resources/simple_windowing.yaml
+++ b/external/flux/flux-examples/src/main/resources/simple_windowing.yaml
@@ -46,7 +46,7 @@ bolts:
         args: [ref: "windowLength", ref: "slidingInterval"]
     parallelism: 1
   - id: "bolt-2"
-    className: "storm.starter.bolt.PrinterBolt"
+    className: "org.apache.storm.flux.examples.TestPrintBolt"
     parallelism: 1
 
 


[50/50] [abbrv] storm git commit: Merge branch 'STORM-1040' of https://github.com/haohui/storm into STORM-1040

Posted by sr...@apache.org.
Merge branch 'STORM-1040' of https://github.com/haohui/storm into STORM-1040


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/0f18238f
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/0f18238f
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/0f18238f

Branch: refs/heads/STORM-1040
Commit: 0f18238f55925343a52e9d772639471f1369775e
Parents: 9f214ab 31b4959
Author: Sriharsha Chintalapani <ha...@hortonworks.com>
Authored: Tue Dec 1 10:07:48 2015 -0800
Committer: Sriharsha Chintalapani <ha...@hortonworks.com>
Committed: Tue Dec 1 10:07:48 2015 -0800

----------------------------------------------------------------------
 .gitignore                                      |     3 +
 .travis.yml                                     |    17 +-
 CHANGELOG.md                                    |   115 +-
 DEVELOPER.md                                    |    35 +-
 DISCLAIMER                                      |    10 -
 LICENSE                                         |    41 +
 README.markdown                                 |    11 +
 STORM-UI-REST-API.md                            |   707 -
 bin/flight.bash                                 |   154 +
 bin/storm                                       |    22 +-
 bin/storm-config.cmd                            |    10 +-
 bin/storm.py                                    |   115 +-
 conf/defaults.yaml                              |    45 +-
 conf/storm.yaml.example                         |     2 +-
 dev-tools/storm-merge.py                        |     2 +-
 dev-tools/travis/ratprint.py                    |    26 +
 dev-tools/travis/travis-install.sh              |     9 +-
 dev-tools/travis/travis-script.sh               |    15 +-
 docs/documentation/Documentation.md             |     4 +
 docs/documentation/FAQ.md                       |     2 +-
 docs/documentation/Log-Search.md                |    14 +
 .../Message-passing-implementation.md           |    34 +-
 docs/documentation/Pacemaker.md                 |   108 +
 .../documentation/Setting-up-a-Storm-cluster.md |    19 +
 docs/documentation/Windowing.md                 |   144 +
 .../documentation/dynamic-log-level-settings.md |    41 +
 docs/documentation/dynamic-worker-profiling.md  |    29 +
 .../images/dynamic_log_level_settings_1.png     |   Bin 0 -> 93689 bytes
 .../images/dynamic_log_level_settings_2.png     |   Bin 0 -> 78785 bytes
 .../images/dynamic_profiling_debugging_1.png    |   Bin 0 -> 93635 bytes
 .../images/dynamic_profiling_debugging_2.png    |   Bin 0 -> 138120 bytes
 .../images/dynamic_profiling_debugging_3.png    |   Bin 0 -> 96974 bytes
 docs/documentation/images/search-a-topology.png |   Bin 0 -> 671031 bytes
 .../images/search-for-a-single-worker-log.png   |   Bin 0 -> 736579 bytes
 .../storm-metrics-profiling-internal-actions.md |    70 +
 docs/documentation/ui-rest-api.md               |   996 +
 docs/images/viewing_metrics_with_VisualVM.png   |   Bin 0 -> 225100 bytes
 examples/storm-starter/pom.xml                  |    17 +-
 .../storm/starter/FastWordCountTopology.java    |   198 +
 .../jvm/storm/starter/InOrderDeliveryTest.java  |   175 +
 .../storm/starter/MultipleLoggerTopology.java   |   105 +
 .../starter/ResourceAwareExampleTopology.java   |   101 +
 .../storm/starter/SlidingWindowTopology.java    |   185 +
 .../jvm/storm/starter/ThroughputVsLatency.java  |   432 +
 .../bolt/IntermediateRankingsBoltTest.java      |     2 +-
 .../starter/bolt/RollingCountBoltTest.java      |     2 +-
 .../starter/bolt/TotalRankingsBoltTest.java     |     2 +-
 .../storm/starter/tools/MockTupleHelpers.java   |    40 -
 external/flux/README.md                         |     4 +
 external/flux/flux-core/pom.xml                 |     1 -
 .../main/java/org/apache/storm/flux/Flux.java   |     3 +-
 .../java/org/apache/storm/flux/FluxBuilder.java |    55 +-
 .../org/apache/storm/flux/model/ObjectDef.java  |     2 +
 .../org/apache/storm/flux/test/TestBolt.java    |     8 +
 .../resources/configs/config-methods-test.yaml  |     2 +
 external/flux/flux-examples/README.md           |     9 +
 external/flux/flux-examples/pom.xml             |     1 -
 .../storm/flux/examples/TestPrintBolt.java      |    39 +
 .../storm/flux/examples/TestWindowBolt.java     |    47 +
 .../src/main/resources/simple_windowing.yaml    |    69 +
 .../storm/sql/compiler/TestCompilerUtils.java   |    17 +
 .../org/apache/storm/sql/kafka/JsonScheme.java  |     6 +-
 .../storm/sql/kafka/TestJsonRepresentation.java |     2 +-
 .../sql/kafka/TestKafkaDataSourcesProvider.java |    12 +-
 external/storm-elasticsearch/pom.xml            |     5 +
 external/storm-eventhubs/pom.xml                |     1 -
 external/storm-hbase/README.md                  |    10 +
 .../storm/hbase/bolt/AbstractHBaseBolt.java     |     2 +
 .../org/apache/storm/hbase/bolt/HBaseBolt.java  |    75 +-
 external/storm-hdfs/README.md                   |    33 +
 external/storm-hdfs/pom.xml                     |    71 +-
 .../storm/hdfs/bolt/AbstractHdfsBolt.java       |   124 +
 .../storm/hdfs/bolt/AvroGenericRecordBolt.java  |   145 +
 .../org/apache/storm/hdfs/bolt/HdfsBolt.java    |    51 +-
 .../storm/hdfs/bolt/SequenceFileBolt.java       |    42 +-
 .../ha/codedistributor/HDFSCodeDistributor.java |    17 +
 .../hdfs/bolt/AvroGenericRecordBoltTest.java    |   220 +
 .../apache/storm/hdfs/bolt/TestHdfsBolt.java    |   258 +
 .../storm/hdfs/bolt/TestSequenceFileBolt.java   |   186 +
 .../storm/hdfs/trident/HdfsStateTest.java       |    17 +
 external/storm-hive/pom.xml                     |     7 +
 .../org/apache/storm/hive/bolt/HiveBolt.java    |     9 +
 .../apache/storm/hive/bolt/TestHiveBolt.java    |    56 +-
 .../storm/jdbc/bolt/AbstractJdbcBolt.java       |     2 +
 .../apache/storm/jdbc/bolt/JdbcInsertBolt.java  |     9 +
 .../apache/storm/jdbc/bolt/JdbcLookupBolt.java  |     5 +
 .../jdbc/mapper/SimpleJdbcLookupMapper.java     |     3 +
 .../storm/jdbc/mapper/SimpleJdbcMapper.java     |     5 +
 .../storm/jdbc/bolt/JdbcInsertBoltTest.java     |    71 +
 .../storm/jdbc/bolt/JdbcLookupBoltTest.java     |    59 +
 external/storm-kafka/README.md                  |    64 +-
 external/storm-kafka/pom.xml                    |     5 +
 .../jvm/storm/kafka/DynamicBrokersReader.java   |    97 +-
 .../kafka/DynamicPartitionConnections.java      |    20 +-
 .../src/jvm/storm/kafka/KafkaConfig.java        |     3 +-
 .../src/jvm/storm/kafka/KafkaSpout.java         |    34 +-
 .../src/jvm/storm/kafka/KafkaUtils.java         |    95 +-
 .../src/jvm/storm/kafka/KeyValueScheme.java     |     5 +-
 .../kafka/KeyValueSchemeAsMultiScheme.java      |     5 +-
 .../jvm/storm/kafka/MessageMetadataScheme.java  |    27 +
 .../MessageMetadataSchemeAsMultiScheme.java     |    41 +
 .../src/jvm/storm/kafka/Partition.java          |    26 +-
 .../src/jvm/storm/kafka/PartitionManager.java   |    47 +-
 .../src/jvm/storm/kafka/StaticCoordinator.java  |    11 +-
 .../jvm/storm/kafka/StringKeyValueScheme.java   |     3 +-
 .../kafka/StringMessageAndMetadataScheme.java   |    43 +
 .../storm/kafka/StringMultiSchemeWithTopic.java |    48 +
 .../src/jvm/storm/kafka/StringScheme.java       |    20 +-
 .../src/jvm/storm/kafka/ZkCoordinator.java      |     2 +-
 .../jvm/storm/kafka/trident/Coordinator.java    |     7 +-
 .../trident/GlobalPartitionInformation.java     |    26 +-
 .../jvm/storm/kafka/trident/IBrokerReader.java  |     7 +-
 .../kafka/trident/OpaqueTridentKafkaSpout.java  |     9 +-
 .../storm/kafka/trident/StaticBrokerReader.java |    23 +-
 .../trident/TransactionalTridentKafkaSpout.java |     4 +-
 .../kafka/trident/TridentKafkaEmitter.java      |    48 +-
 .../jvm/storm/kafka/trident/ZkBrokerReader.java |    20 +-
 .../storm/kafka/DynamicBrokersReaderTest.java   |   114 +-
 .../src/test/storm/kafka/KafkaUtilsTest.java    |   112 +-
 .../storm/kafka/StringKeyValueSchemeTest.java   |    17 +-
 .../src/test/storm/kafka/TestStringScheme.java  |    40 +
 .../src/test/storm/kafka/TestUtils.java         |     4 +-
 .../src/test/storm/kafka/ZkCoordinatorTest.java |     8 +-
 .../test/storm/kafka/bolt/KafkaBoltTest.java    |    13 +-
 external/storm-metrics/pom.xml                  |   107 +
 .../metrics/hdrhistogram/HistogramMetric.java   |    79 +
 .../apache/storm/metrics/sigar/CPUMetric.java   |    60 +
 external/storm-solr/pom.xml                     |    21 +-
 log4j2/cluster.xml                              |    42 +-
 log4j2/worker.xml                               |    22 +-
 pom.xml                                         |   290 +-
 storm-core/pom.xml                              |   327 +-
 storm-core/src/clj/backtype/storm/cluster.clj   |   341 +-
 .../cluster_state/zookeeper_state_factory.clj   |   157 +
 .../clj/backtype/storm/command/healthcheck.clj  |    88 +
 .../clj/backtype/storm/command/heartbeats.clj   |    52 +
 .../clj/backtype/storm/command/kill_workers.clj |    33 +
 .../backtype/storm/command/set_log_level.clj    |    75 +
 storm-core/src/clj/backtype/storm/config.clj    |   112 +-
 storm-core/src/clj/backtype/storm/converter.clj |    73 +-
 .../backtype/storm/daemon/builtin_metrics.clj   |    84 +-
 .../src/clj/backtype/storm/daemon/common.clj    |    42 +-
 .../src/clj/backtype/storm/daemon/drpc.clj      |    46 +-
 .../src/clj/backtype/storm/daemon/executor.clj  |   273 +-
 .../src/clj/backtype/storm/daemon/logviewer.clj |  1060 +-
 .../src/clj/backtype/storm/daemon/nimbus.clj    |   652 +-
 .../clj/backtype/storm/daemon/supervisor.clj    |   283 +-
 .../src/clj/backtype/storm/daemon/task.clj      |    24 +-
 .../src/clj/backtype/storm/daemon/worker.clj    |   270 +-
 storm-core/src/clj/backtype/storm/disruptor.clj |    53 +-
 .../src/clj/backtype/storm/local_state.clj      |    44 +-
 storm-core/src/clj/backtype/storm/log.clj       |    12 +-
 .../src/clj/backtype/storm/messaging/loader.clj |    76 +-
 .../src/clj/backtype/storm/messaging/local.clj  |    56 +-
 storm-core/src/clj/backtype/storm/stats.clj     |  1519 +-
 storm-core/src/clj/backtype/storm/testing.clj   |    34 +-
 storm-core/src/clj/backtype/storm/timer.clj     |    20 +-
 storm-core/src/clj/backtype/storm/ui/core.clj   |  1356 +-
 .../src/clj/backtype/storm/ui/helpers.clj       |    77 +-
 storm-core/src/clj/backtype/storm/util.clj      |    82 +-
 .../org/apache/storm/pacemaker/pacemaker.clj    |   237 +
 .../storm/pacemaker/pacemaker_state_factory.clj |   124 +
 .../src/dev/logviewer-search-context-tests.log  |     1 +
 .../dev/logviewer-search-context-tests.log.gz   |   Bin 0 -> 72 bytes
 storm-core/src/dev/small-worker.log             |     1 +
 storm-core/src/dev/test-3072.log                |     3 +
 storm-core/src/dev/test-worker.log              |   380 +
 storm-core/src/genthrift.sh                     |     2 +-
 storm-core/src/jvm/backtype/storm/Config.java   |   868 +-
 .../jvm/backtype/storm/ConfigValidation.java    |   375 -
 .../src/jvm/backtype/storm/LogWriter.java       |     2 +-
 .../src/jvm/backtype/storm/StormSubmitter.java  |    55 +-
 .../backtype/storm/cluster/ClusterState.java    |   208 +
 .../storm/cluster/ClusterStateContext.java      |    41 +
 .../storm/cluster/ClusterStateFactory.java      |    28 +
 .../storm/cluster/ClusterStateListener.java     |    22 +
 .../backtype/storm/cluster/ConnectionState.java |    24 +
 .../jvm/backtype/storm/cluster/DaemonType.java  |    27 +
 .../storm/codedistributor/ICodeDistributor.java |    17 +
 .../LocalFileSystemCodeDistributor.java         |    17 +
 .../storm/coordination/BatchBoltExecutor.java   |     4 +-
 .../storm/coordination/CoordinatedBolt.java     |    16 +-
 .../storm/drpc/DRPCInvocationsClient.java       |     5 +-
 .../src/jvm/backtype/storm/drpc/DRPCSpout.java  |    10 +-
 .../src/jvm/backtype/storm/drpc/JoinResult.java |     8 +-
 .../storm/generated/AlreadyAliveException.java  |     4 +-
 .../backtype/storm/generated/Assignment.java    |   380 +-
 .../storm/generated/AuthorizationException.java |     4 +-
 .../src/jvm/backtype/storm/generated/Bolt.java  |     4 +-
 .../storm/generated/BoltAggregateStats.java     |   704 +
 .../jvm/backtype/storm/generated/BoltStats.java |   444 +-
 .../storm/generated/ClusterSummary.java         |   221 +-
 .../storm/generated/ClusterWorkerHeartbeat.java |    60 +-
 .../storm/generated/CommonAggregateStats.java   |   902 +
 .../generated/ComponentAggregateStats.java      |   752 +
 .../storm/generated/ComponentCommon.java        |     6 +-
 .../storm/generated/ComponentObject.java        |     2 +-
 .../storm/generated/ComponentPageInfo.java      |  2194 ++
 .../backtype/storm/generated/ComponentType.java |    62 +
 .../backtype/storm/generated/Credentials.java   |    48 +-
 .../storm/generated/DRPCExecutionException.java |     4 +-
 .../backtype/storm/generated/DRPCRequest.java   |     4 +-
 .../backtype/storm/generated/DebugOptions.java  |     8 +-
 .../storm/generated/DistributedRPC.java         |     4 +-
 .../generated/DistributedRPCInvocations.java    |     4 +-
 .../jvm/backtype/storm/generated/ErrorInfo.java |     8 +-
 .../storm/generated/ExecutorAggregateStats.java |   526 +
 .../backtype/storm/generated/ExecutorInfo.java  |     8 +-
 .../storm/generated/ExecutorSpecificStats.java  |     2 +-
 .../backtype/storm/generated/ExecutorStats.java |   174 +-
 .../storm/generated/ExecutorSummary.java        |     8 +-
 .../storm/generated/GetInfoOptions.java         |     4 +-
 .../storm/generated/GlobalStreamId.java         |     4 +-
 .../jvm/backtype/storm/generated/Grouping.java  |     2 +-
 .../generated/HBAuthorizationException.java     |   406 +
 .../storm/generated/HBExecutionException.java   |   406 +
 .../jvm/backtype/storm/generated/HBMessage.java |   636 +
 .../backtype/storm/generated/HBMessageData.java |   640 +
 .../jvm/backtype/storm/generated/HBNodes.java   |   461 +
 .../jvm/backtype/storm/generated/HBPulse.java   |   522 +
 .../jvm/backtype/storm/generated/HBRecords.java |   466 +
 .../storm/generated/HBServerMessageType.java    |   113 +
 .../generated/InvalidTopologyException.java     |     4 +-
 .../backtype/storm/generated/JavaObject.java    |     4 +-
 .../backtype/storm/generated/JavaObjectArg.java |     2 +-
 .../backtype/storm/generated/KillOptions.java   |     6 +-
 .../storm/generated/LSApprovedWorkers.java      |    48 +-
 .../generated/LSSupervisorAssignments.java      |    52 +-
 .../storm/generated/LSSupervisorId.java         |     4 +-
 .../backtype/storm/generated/LSTopoHistory.java |   805 +
 .../storm/generated/LSTopoHistoryList.java      |   460 +
 .../storm/generated/LSWorkerHeartbeat.java      |    44 +-
 .../storm/generated/LocalAssignment.java        |   157 +-
 .../storm/generated/LocalStateData.java         |    52 +-
 .../jvm/backtype/storm/generated/LogConfig.java |   475 +
 .../jvm/backtype/storm/generated/LogLevel.java  |   836 +
 .../storm/generated/LogLevelAction.java         |    65 +
 .../jvm/backtype/storm/generated/Nimbus.java    | 18163 ++++++++++++-----
 .../backtype/storm/generated/NimbusSummary.java |    10 +-
 .../jvm/backtype/storm/generated/NodeInfo.java  |    36 +-
 .../storm/generated/NotAliveException.java      |     4 +-
 .../backtype/storm/generated/NullStruct.java    |     4 +-
 .../storm/generated/NumErrorsChoice.java        |     2 +-
 .../backtype/storm/generated/ProfileAction.java |    74 +
 .../storm/generated/ProfileRequest.java         |   631 +
 .../storm/generated/RebalanceOptions.java       |    52 +-
 .../storm/generated/ShellComponent.java         |     4 +-
 .../storm/generated/SpecificAggregateStats.java |   387 +
 .../storm/generated/SpoutAggregateStats.java    |   407 +
 .../jvm/backtype/storm/generated/SpoutSpec.java |     4 +-
 .../backtype/storm/generated/SpoutStats.java    |   256 +-
 .../storm/generated/StateSpoutSpec.java         |     4 +-
 .../jvm/backtype/storm/generated/StormBase.java |   100 +-
 .../backtype/storm/generated/StormTopology.java |   255 +-
 .../backtype/storm/generated/StreamInfo.java    |     6 +-
 .../backtype/storm/generated/SubmitOptions.java |     4 +-
 .../storm/generated/SupervisorInfo.java         |   282 +-
 .../storm/generated/SupervisorSummary.java      |   374 +-
 .../storm/generated/ThriftSerializedObject.java |     4 +-
 .../storm/generated/TopologyActionOptions.java  |     2 +-
 .../storm/generated/TopologyHistoryInfo.java    |   461 +
 .../backtype/storm/generated/TopologyInfo.java  |   774 +-
 .../storm/generated/TopologyInitialStatus.java  |     2 +-
 .../storm/generated/TopologyPageInfo.java       |  2597 +++
 .../backtype/storm/generated/TopologyStats.java |  1094 +
 .../storm/generated/TopologyStatus.java         |     2 +-
 .../storm/generated/TopologySummary.java        |   618 +-
 .../storm/generated/WorkerResources.java        |   605 +
 .../src/jvm/backtype/storm/grouping/Load.java   |    77 +
 .../grouping/LoadAwareCustomStreamGrouping.java |    24 +
 .../grouping/LoadAwareShuffleGrouping.java      |    76 +
 .../backtype/storm/grouping/LoadMapping.java    |    64 +
 .../storm/grouping/PartialKeyGrouping.java      |     5 +-
 .../storm/grouping/ShuffleGrouping.java         |    65 +
 .../backtype/storm/hooks/BaseWorkerHook.java    |    51 +
 .../jvm/backtype/storm/hooks/IWorkerHook.java   |    44 +
 .../storm/logging/ThriftAccessLogger.java       |    27 +
 .../logging/filters/AccessLoggingFilter.java    |    52 +
 .../storm/messaging/AddressedTuple.java         |    46 +
 .../storm/messaging/ConnectionWithStatus.java   |     4 +-
 .../DeserializingConnectionCallback.java        |    60 +
 .../backtype/storm/messaging/IConnection.java   |    26 +-
 .../storm/messaging/IConnectionCallback.java    |    31 +
 .../jvm/backtype/storm/messaging/IContext.java  |     2 +-
 .../storm/messaging/TransportFactory.java       |     2 +-
 .../backtype/storm/messaging/local/Context.java |   164 +
 .../backtype/storm/messaging/netty/Client.java  |   115 +-
 .../backtype/storm/messaging/netty/Context.java |     8 +-
 .../storm/messaging/netty/ControlMessage.java   |    22 +-
 .../messaging/netty/INettySerializable.java     |    26 +
 .../storm/messaging/netty/ISaslClient.java      |    28 +
 .../storm/messaging/netty/ISaslServer.java      |    26 +
 .../backtype/storm/messaging/netty/IServer.java |    26 +
 .../netty/KerberosSaslClientHandler.java        |   152 +
 .../netty/KerberosSaslNettyClient.java          |   203 +
 .../netty/KerberosSaslNettyClientState.java     |    31 +
 .../netty/KerberosSaslNettyServer.java          |   210 +
 .../netty/KerberosSaslNettyServerState.java     |    30 +
 .../netty/KerberosSaslServerHandler.java        |   133 +
 .../storm/messaging/netty/MessageBatch.java     |    14 +-
 .../storm/messaging/netty/MessageDecoder.java   |    11 +-
 .../netty/NettyRenameThreadFactory.java         |    10 +-
 .../netty/NettyUncaughtExceptionHandler.java    |    35 +
 .../storm/messaging/netty/SaslMessageToken.java |    33 +-
 .../storm/messaging/netty/SaslNettyClient.java  |    28 +-
 .../storm/messaging/netty/SaslNettyServer.java  |   248 +-
 .../messaging/netty/SaslNettyServerState.java   |    13 +-
 .../messaging/netty/SaslStormClientHandler.java |    41 +-
 .../messaging/netty/SaslStormServerHandler.java |    32 +-
 .../storm/messaging/netty/SaslUtils.java        |    12 +-
 .../backtype/storm/messaging/netty/Server.java  |   232 +-
 .../messaging/netty/StormClientHandler.java     |    51 +-
 .../netty/StormClientPipelineFactory.java       |    11 +-
 .../messaging/netty/StormServerHandler.java     |    24 +-
 .../backtype/storm/metric/EventLoggerBolt.java  |    25 +-
 .../storm/metric/FileBasedEventLogger.java      |    37 +-
 .../metric/HttpForwardingMetricsConsumer.java   |    80 +
 .../metric/HttpForwardingMetricsServer.java     |   118 +
 .../jvm/backtype/storm/metric/IEventLogger.java |    25 +-
 .../storm/metric/LoggingMetricsConsumer.java    |     1 -
 .../storm/metric/MetricsConsumerBolt.java       |     1 -
 .../jvm/backtype/storm/metric/SystemBolt.java   |     5 -
 .../backtype/storm/metric/api/CountMetric.java  |     2 -
 .../backtype/storm/metric/api/MeanReducer.java  |     4 +-
 .../storm/metric/api/MultiCountMetric.java      |     2 +-
 .../storm/metric/api/MultiReducedMetric.java    |     2 +-
 .../storm/metric/api/rpc/CountShellMetric.java  |     3 +-
 .../metric/internal/CountStatAndMetric.java     |   211 +
 .../metric/internal/LatencyStatAndMetric.java   |   262 +
 .../storm/metric/internal/MetricStatTimer.java  |    27 +
 .../internal/MultiCountStatAndMetric.java       |   112 +
 .../internal/MultiLatencyStatAndMetric.java     |   109 +
 .../storm/metric/internal/RateTracker.java      |   165 +
 .../AbstractDNSToSwitchMapping.java             |    95 +
 .../networktopography/DNSToSwitchMapping.java   |    50 +
 .../DefaultRackDNSToSwitchMapping.java          |    52 +
 .../backtype/storm/nimbus/ILeaderElector.java   |    23 +-
 .../nimbus/ITopologyActionNotifierPlugin.java   |    43 +
 .../jvm/backtype/storm/nimbus/NimbusInfo.java   |    29 +-
 .../jvm/backtype/storm/scheduler/Cluster.java   |   234 +-
 .../scheduler/SchedulerAssignmentImpl.java      |    15 +-
 .../storm/scheduler/SupervisorDetails.java      |    63 +-
 .../backtype/storm/scheduler/Topologies.java    |    27 +-
 .../storm/scheduler/TopologyDetails.java        |   377 +-
 .../backtype/storm/scheduler/WorkerSlot.java    |    25 +
 .../scheduler/multitenant/DefaultPool.java      |    22 +-
 .../storm/scheduler/multitenant/FreePool.java   |     6 +-
 .../scheduler/multitenant/IsolatedPool.java     |    32 +-
 .../multitenant/MultitenantScheduler.java       |     6 +-
 .../storm/scheduler/multitenant/Node.java       |    17 +-
 .../storm/scheduler/multitenant/NodePool.java   |    16 +-
 .../storm/scheduler/resource/Component.java     |    54 +
 .../storm/scheduler/resource/RAS_Node.java      |   575 +
 .../resource/ResourceAwareScheduler.java        |   183 +
 .../storm/scheduler/resource/ResourceUtils.java |   133 +
 .../resource/strategies/IStrategy.java          |    37 +
 .../strategies/ResourceAwareStrategy.java       |   479 +
 .../backtype/storm/security/auth/AuthUtils.java |    96 +-
 .../auth/DefaultHttpCredentialsPlugin.java      |     6 +-
 .../security/auth/DefaultPrincipalToLocal.java  |     1 -
 .../storm/security/auth/IAuthorizer.java        |     4 +-
 .../security/auth/ICredentialsRenewer.java      |     3 +-
 .../security/auth/IHttpCredentialsPlugin.java   |     2 -
 .../storm/security/auth/IPrincipalToLocal.java  |     2 +-
 .../storm/security/auth/ITransportPlugin.java   |     4 -
 .../security/auth/KerberosPrincipalToLocal.java |     2 +-
 .../storm/security/auth/ReqContext.java         |    18 +-
 .../security/auth/SaslTransportPlugin.java      |    12 +-
 .../security/auth/ShellBasedGroupsMapping.java  |    10 +-
 .../security/auth/SimpleTransportPlugin.java    |     8 +-
 .../security/auth/SingleUserPrincipal.java      |     5 +-
 .../storm/security/auth/TBackoffConnect.java    |     1 -
 .../storm/security/auth/ThriftClient.java       |    10 +-
 .../storm/security/auth/ThriftServer.java       |     6 +-
 .../auth/authorizer/DRPCAuthorizerBase.java     |     2 +-
 .../authorizer/DRPCSimpleACLAuthorizer.java     |    19 +-
 .../auth/authorizer/DenyAuthorizer.java         |    16 +-
 .../authorizer/ImpersonationAuthorizer.java     |    17 +-
 .../auth/authorizer/NoopAuthorizer.java         |    12 +-
 .../auth/authorizer/SimpleACLAuthorizer.java    |    45 +-
 .../authorizer/SimpleWhitelistAuthorizer.java   |    16 +-
 .../auth/digest/ClientCallbackHandler.java      |     2 -
 .../auth/digest/DigestSaslTransportPlugin.java  |     2 -
 .../auth/digest/ServerCallbackHandler.java      |     5 +-
 .../storm/security/auth/kerberos/AutoTGT.java   |    10 +-
 .../security/auth/kerberos/NoOpTTrasport.java   |    20 +-
 .../auth/kerberos/ServerCallbackHandler.java    |     2 +
 .../serialization/BlowfishTupleSerializer.java  |     6 +-
 .../GzipThriftSerializationDelegate.java        |     1 -
 .../storm/serialization/ITupleDeserializer.java |     1 -
 .../serialization/KryoTupleDeserializer.java    |     3 -
 .../serialization/KryoValuesDeserializer.java   |     3 +-
 .../serialization/SerializationFactory.java     |    23 +-
 .../jvm/backtype/storm/spout/MultiScheme.java   |     3 +-
 .../backtype/storm/spout/RawMultiScheme.java    |     3 +-
 .../src/jvm/backtype/storm/spout/RawScheme.java |     9 +-
 .../src/jvm/backtype/storm/spout/Scheme.java    |     3 +-
 .../storm/spout/SchemeAsMultiScheme.java        |     3 +-
 .../jvm/backtype/storm/spout/ShellSpout.java    |    10 +-
 .../storm/task/GeneralTopologyContext.java      |    15 +-
 .../backtype/storm/task/OutputCollector.java    |     2 +-
 .../src/jvm/backtype/storm/task/ShellBolt.java  |    48 +-
 .../backtype/storm/task/TopologyContext.java    |     9 +-
 .../AlternateRackDNSToSwitchMapping.java        |    65 +
 .../storm/testing/MemoryTransactionalSpout.java |     9 +-
 .../testing/OpaqueMemoryTransactionalSpout.java |     8 +-
 .../storm/testing/TupleCaptureBolt.java         |     4 +-
 .../topology/BaseConfigurationDeclarer.java     |    31 +-
 .../storm/topology/BasicBoltExecutor.java       |     2 +-
 .../ComponentConfigurationDeclarer.java         |     3 +
 .../backtype/storm/topology/IWindowedBolt.java  |    40 +
 .../storm/topology/OutputFieldsGetter.java      |     2 +-
 .../storm/topology/TopologyBuilder.java         |    78 +-
 .../storm/topology/WindowedBoltExecutor.java    |   224 +
 .../storm/topology/base/BaseBatchBolt.java      |     1 -
 .../topology/base/BaseTransactionalSpout.java   |     1 -
 .../storm/topology/base/BaseWindowedBolt.java   |   179 +
 .../TransactionalSpoutBatchExecutor.java        |     4 +-
 .../TransactionalSpoutCoordinator.java          |     2 +-
 ...uePartitionedTransactionalSpoutExecutor.java |    13 +-
 .../PartitionedTransactionalSpoutExecutor.java  |     2 +-
 .../backtype/storm/tuple/AddressedTuple.java    |    48 +
 .../src/jvm/backtype/storm/tuple/Fields.java    |    10 +-
 .../src/jvm/backtype/storm/tuple/MessageId.java |    10 +-
 .../src/jvm/backtype/storm/tuple/Tuple.java     |     9 +-
 .../src/jvm/backtype/storm/tuple/TupleImpl.java |    17 +-
 .../jvm/backtype/storm/utils/DRPCClient.java    |     1 -
 .../backtype/storm/utils/DisruptorQueue.java    |   610 +-
 .../backtype/storm/utils/InprocMessaging.java   |     4 +-
 .../storm/utils/KeyedRoundRobinQueue.java       |     6 +-
 .../jvm/backtype/storm/utils/ListDelegate.java  |     6 +-
 .../jvm/backtype/storm/utils/LocalState.java    |    22 +-
 .../src/jvm/backtype/storm/utils/Monitor.java   |     3 +-
 .../jvm/backtype/storm/utils/MutableObject.java |     6 +-
 .../jvm/backtype/storm/utils/NimbusClient.java  |    10 +-
 .../jvm/backtype/storm/utils/RateTracker.java   |   119 -
 .../storm/utils/RegisteredGlobalState.java      |     6 +-
 .../jvm/backtype/storm/utils/RotatingMap.java   |     2 +-
 .../backtype/storm/utils/ServiceRegistry.java   |     2 +-
 .../jvm/backtype/storm/utils/ShellProcess.java  |     6 +-
 .../jvm/backtype/storm/utils/ShellUtils.java    |     2 +-
 .../StormBoundedExponentialBackoffRetry.java    |     3 +-
 .../storm/utils/ThriftTopologyUtils.java        |    36 +-
 .../src/jvm/backtype/storm/utils/Time.java      |    16 +-
 .../backtype/storm/utils/TransferDrainer.java   |    17 +-
 .../src/jvm/backtype/storm/utils/Utils.java     |   489 +-
 .../jvm/backtype/storm/utils/VersionInfo.java   |     2 +-
 .../storm/validation/ConfigValidation.java      |   646 +
 .../validation/ConfigValidationAnnotations.java |   214 +
 .../storm/validation/ConfigValidationUtils.java |   175 +
 .../storm/windowing/CountEvictionPolicy.java    |    68 +
 .../storm/windowing/CountTriggerPolicy.java     |    63 +
 .../src/jvm/backtype/storm/windowing/Event.java |    41 +
 .../jvm/backtype/storm/windowing/EventImpl.java |    38 +
 .../storm/windowing/EvictionPolicy.java         |    42 +
 .../storm/windowing/TimeEvictionPolicy.java     |    52 +
 .../storm/windowing/TimeTriggerPolicy.java      |   115 +
 .../storm/windowing/TriggerHandler.java         |    29 +
 .../backtype/storm/windowing/TriggerPolicy.java |    42 +
 .../backtype/storm/windowing/TupleWindow.java   |    26 +
 .../storm/windowing/TupleWindowImpl.java        |    61 +
 .../jvm/backtype/storm/windowing/Window.java    |    48 +
 .../windowing/WindowLifecycleListener.java      |    42 +
 .../backtype/storm/windowing/WindowManager.java |   212 +
 .../storm/pacemaker/IServerMessageHandler.java  |    25 +
 .../apache/storm/pacemaker/PacemakerClient.java |   255 +
 .../storm/pacemaker/PacemakerClientHandler.java |    75 +
 .../apache/storm/pacemaker/PacemakerServer.java |   163 +
 .../storm/pacemaker/codec/ThriftDecoder.java    |    76 +
 .../storm/pacemaker/codec/ThriftEncoder.java    |   110 +
 .../pacemaker/codec/ThriftNettyClientCodec.java |    94 +
 .../pacemaker/codec/ThriftNettyServerCodec.java |    99 +
 .../src/jvm/storm/trident/TridentTopology.java  |   100 +-
 .../trident/drpc/ReturnResultsReducer.java      |     4 +-
 .../fluent/ChainedAggregatorDeclarer.java       |     8 +-
 .../jvm/storm/trident/graph/GraphGrouper.java   |    22 +-
 .../src/jvm/storm/trident/graph/Group.java      |    23 +-
 .../trident/operation/builtin/SnapshotGet.java  |     4 +-
 .../operation/builtin/TupleCollectionGet.java   |     6 +-
 .../storm/trident/partition/GlobalGrouping.java |     5 +-
 .../trident/partition/IdentityGrouping.java     |     8 +-
 .../src/jvm/storm/trident/planner/Node.java     |     5 +-
 .../storm/trident/planner/PartitionNode.java    |     2 -
 .../storm/trident/planner/SubtopologyBolt.java  |    19 +-
 .../processor/MultiReducerProcessor.java        |     2 +-
 .../jvm/storm/trident/spout/ITridentSpout.java  |    51 +-
 .../OpaquePartitionedTridentSpoutExecutor.java  |    10 +-
 .../trident/spout/TridentSpoutExecutor.java     |     4 +-
 .../trident/topology/TridentBoltExecutor.java   |    10 +-
 .../topology/TridentTopologyBuilder.java        |    23 +-
 .../storm/trident/tuple/TridentTupleView.java   |    18 +-
 .../jvm/storm/trident/util/TridentUtils.java    |    33 +-
 .../src/native/worker-launcher/impl/main.c      |    10 +
 .../worker-launcher/impl/worker-launcher.c      |    49 +-
 .../worker-launcher/impl/worker-launcher.h      |     2 +
 storm-core/src/py/storm/DistributedRPC-remote   |     2 +-
 storm-core/src/py/storm/DistributedRPC.py       |    20 +-
 .../py/storm/DistributedRPCInvocations-remote   |     2 +-
 .../src/py/storm/DistributedRPCInvocations.py   |    41 +-
 storm-core/src/py/storm/Nimbus-remote           |    51 +-
 storm-core/src/py/storm/Nimbus.py               |  2383 ++-
 storm-core/src/py/storm/constants.py            |     2 +-
 storm-core/src/py/storm/ttypes.py               |  5870 ++++--
 storm-core/src/storm.thrift                     |   262 +-
 storm-core/src/ui/public/component.html         |   167 +-
 storm-core/src/ui/public/css/style.css          |    16 +
 .../src/ui/public/deep_search_result.html       |   155 +
 storm-core/src/ui/public/images/bug.png         |   Bin 0 -> 4045 bytes
 storm-core/src/ui/public/images/search.png      |   Bin 0 -> 2354 bytes
 storm-core/src/ui/public/images/statistic.png   |   Bin 0 -> 488 bytes
 storm-core/src/ui/public/index.html             |    10 +-
 storm-core/src/ui/public/js/script.js           |    20 +
 .../src/ui/public/js/typeahead.jquery.min.js    |     7 +
 storm-core/src/ui/public/js/visualization.js    |    92 +-
 storm-core/src/ui/public/logviewer_search.html  |    65 +
 storm-core/src/ui/public/search_result.html     |   100 +
 .../templates/component-page-template.html      |    55 +-
 .../deep-search-result-page-template.html       |    66 +
 .../public/templates/index-page-template.html   |    56 +-
 .../logviewer-search-page-template.html         |    44 +
 .../templates/search-result-page-template.html  |    60 +
 .../templates/topology-page-template.html       |   197 +-
 .../src/ui/public/templates/user-template.html  |    27 +-
 storm-core/src/ui/public/topology.html          |   168 +-
 .../test/clj/backtype/storm/cluster_test.clj    |    15 +-
 .../test/clj/backtype/storm/config_test.clj     |   186 -
 .../test/clj/backtype/storm/grouping_test.clj   |    90 +-
 .../clj/backtype/storm/integration_test.clj     |    12 +-
 .../test/clj/backtype/storm/logviewer_test.clj  |   730 +-
 .../storm/messaging/netty_integration_test.clj  |     3 +-
 .../storm/messaging/netty_unit_test.clj         |   288 +-
 .../test/clj/backtype/storm/messaging_test.clj  |    28 +-
 .../test/clj/backtype/storm/metrics_test.clj    |     2 +-
 .../test/clj/backtype/storm/multilang_test.clj  |     4 +-
 .../test/clj/backtype/storm/nimbus_test.clj     |   199 +-
 .../scheduler/multitenant_scheduler_test.clj    |    34 +-
 .../scheduler/resource_aware_scheduler_test.clj |   669 +
 .../test/clj/backtype/storm/scheduler_test.clj  |     3 +-
 .../auth/DefaultHttpCredentialsPlugin_test.clj  |    40 +-
 .../clj/backtype/storm/serialization_test.clj   |    14 +-
 .../test/clj/backtype/storm/supervisor_test.clj |   397 +-
 .../test/clj/backtype/storm/testing4j_test.clj  |     1 +
 .../clj/backtype/storm/transactional_test.clj   |     5 +-
 .../test/clj/backtype/storm/worker_test.clj     |   179 +-
 .../storm/pacemaker_state_factory_test.clj      |   150 +
 .../clj/org/apache/storm/pacemaker_test.clj     |   242 +
 .../jvm/backtype/storm/TestConfigValidate.java  |   660 +
 .../metric/internal/CountStatAndMetricTest.java |    86 +
 .../internal/LatencyStatAndMetricTest.java      |    83 +
 .../storm/metric/internal/RateTrackerTest.java  |    94 +
 .../nimbus/InMemoryTopologyActionNotifier.java  |    53 +
 .../storm/topology/TopologyBuilderTest.java     |     5 +
 .../utils/DisruptorQueueBackpressureTest.java   |    17 +-
 .../storm/utils/DisruptorQueueTest.java         |   106 +-
 .../backtype/storm/utils/MockTupleHelpers.java  |    40 +
 .../backtype/storm/utils/RateTrackerTest.java   |    62 -
 .../storm/utils/ThriftTopologyUtilsTest.java    |    94 +
 .../storm/windowing/WindowManagerTest.java      |   250 +
 .../jvm/storm/trident/TestTridentTopology.java  |    76 +
 storm-dist/binary/LICENSE                       |    29 +
 storm-dist/binary/src/main/assembly/binary.xml  |    37 -
 561 files changed, 70178 insertions(+), 14724 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/0f18238f/external/storm-kafka/README.md
----------------------------------------------------------------------
diff --cc external/storm-kafka/README.md
index a63284a,2fe930e..f6f14ac
--- a/external/storm-kafka/README.md
+++ b/external/storm-kafka/README.md
@@@ -218,9 -226,8 +226,9 @@@ You can return a null and the message w
  DefaultTopicSelector.java and set the name of the topic in the constructor.
  
  ### Specifying Kafka producer properties
- You can provide all the produce properties , see http://kafka.apache.org/documentation.html#newproducerconfigs
 -You can provide all the produce properties in your Storm topology by calling `KafkaBolt.withProducerProperties()` and `TridentKafkaStateFactory.withProducerProperties()`. Please see  http://kafka.apache.org/documentation.html#newproducerconfigs
 -Section "Important configuration properties for the producer" for more details.
++You can provide all the producer properties (see http://kafka.apache.org/documentation.html#producerconfigs,
 +section "Important configuration properties for the producer") in your Storm topology config by setting the
 +properties map with key kafka.broker.properties.
  
  ###Using wildcard kafka topic match
  You can do a wildcard topic match by adding the following config
@@@ -254,14 -269,7 +270,13 @@@ For the bolt 
          builder.setBolt("forwardToKafka", bolt, 8).shuffleGrouping("spout");
  
          Config conf = new Config();
 -
 +        //set producer properties.
 +        Properties props = new Properties();
-         props.put("bootstrap.servers", "localhost:9092");
-         props.put("acks", "1");
-         props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
-         props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
++        props.put("metadata.broker.list", "localhost:9092");
++        props.put("request.required.acks", "1");
++        props.put("serializer.class", "kafka.serializer.StringEncoder");
 +        conf.put(KafkaBolt.KAFKA_BROKER_PROPERTIES, props);
- 
++        
          StormSubmitter.submitTopology("kafkaboltTest", conf, builder.createTopology());
  ```
  
@@@ -294,13 -302,6 +309,12 @@@ For Trident
          stream.partitionPersist(stateFactory, fields, new TridentKafkaUpdater(), new Fields());
  
          Config conf = new Config();
 +        //set producer properties.
 +        Properties props = new Properties();
-         props.put("bootstrap.servers", "localhost:9092");
-         props.put("acks", "1");
-         props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
-         props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
++        props.put("metadata.broker.list", "localhost:9092");
++        props.put("request.required.acks", "1");
++        props.put("serializer.class", "kafka.serializer.StringEncoder");
 +        conf.put(TridentKafkaState.KAFKA_BROKER_PROPERTIES, props);
          StormSubmitter.submitTopology("kafkaTridentTest", conf, topology.build());
  ```
  

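The README hunks above set the producer properties under the `kafka.broker.properties` config key. As a quick illustration, the sketch below builds that same configuration with plain JDK classes only; the class name is hypothetical, and it assumes `KafkaBolt.KAFKA_BROKER_PROPERTIES` resolves to the string `"kafka.broker.properties"` as the README text states.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class KafkaBrokerPropsExample {
    // Key named in the README diff above; KafkaBolt.KAFKA_BROKER_PROPERTIES
    // is assumed to resolve to this string.
    static final String KAFKA_BROKER_PROPERTIES = "kafka.broker.properties";

    // Build the old-style (Kafka 0.8 producer) properties shown in the diff
    // and place them in a topology config map under the expected key.
    static Map<String, Object> buildTopologyConf() {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092");
        props.put("request.required.acks", "1");
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        Map<String, Object> conf = new HashMap<>();
        conf.put(KAFKA_BROKER_PROPERTIES, props);
        return conf;
    }

    public static void main(String[] args) {
        Properties props =
            (Properties) buildTopologyConf().get(KAFKA_BROKER_PROPERTIES);
        System.out.println(props.getProperty("metadata.broker.list"));
    }
}
```

In a real topology the returned map would be merged into the `Config` object passed to `StormSubmitter.submitTopology(...)`, as in the README example above.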

[38/50] [abbrv] storm git commit: STORM-1352. Trident should support writing to multiple Kafka clusters.

Posted by sr...@apache.org.
STORM-1352. Trident should support writing to multiple Kafka clusters.


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/c1c52735
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/c1c52735
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/c1c52735

Branch: refs/heads/STORM-1040
Commit: c1c52735b9bace12a64e9a8c3190b292604b1120
Parents: 20a864d
Author: Haohui Mai <wh...@apache.org>
Authored: Wed Nov 25 11:30:22 2015 -0800
Committer: Haohui Mai <wh...@apache.org>
Committed: Wed Nov 25 13:56:35 2015 -0800

----------------------------------------------------------------------
 .../starter/trident/TridentKafkaWordCount.java  | 15 ++++----
 external/storm-kafka/README.md                  | 37 ++++++++++----------
 .../src/jvm/storm/kafka/bolt/KafkaBolt.java     | 13 ++-----
 .../storm/kafka/trident/TridentKafkaState.java  | 10 ++----
 .../kafka/trident/TridentKafkaStateFactory.java | 10 ++++--
 .../src/test/storm/kafka/TestUtils.java         |  8 ++---
 .../src/test/storm/kafka/TridentKafkaTest.java  | 13 +++----
 .../test/storm/kafka/TridentKafkaTopology.java  | 33 +++++++----------
 .../test/storm/kafka/bolt/KafkaBoltTest.java    |  6 ++--
 9 files changed, 58 insertions(+), 87 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/c1c52735/examples/storm-starter/src/jvm/storm/starter/trident/TridentKafkaWordCount.java
----------------------------------------------------------------------
diff --git a/examples/storm-starter/src/jvm/storm/starter/trident/TridentKafkaWordCount.java b/examples/storm-starter/src/jvm/storm/starter/trident/TridentKafkaWordCount.java
index 813841a..bd8ecba 100644
--- a/examples/storm-starter/src/jvm/storm/starter/trident/TridentKafkaWordCount.java
+++ b/examples/storm-starter/src/jvm/storm/starter/trident/TridentKafkaWordCount.java
@@ -149,14 +149,14 @@ public class TridentKafkaWordCount {
      *
      * @return the storm topology
      */
-    public StormTopology buildProducerTopology() {
+    public StormTopology buildProducerTopology(Properties prop) {
         TopologyBuilder builder = new TopologyBuilder();
         builder.setSpout("spout", new RandomSentenceSpout(), 2);
         /**
          * The output field of the RandomSentenceSpout ("word") is provided as the boltMessageField
          * so that this gets written out as the message in the kafka topic.
          */
-        KafkaBolt bolt = new KafkaBolt()
+        KafkaBolt bolt = new KafkaBolt().withProducerProperties(prop)
                 .withTopicSelector(new DefaultTopicSelector("test"))
                 .withTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper("key", "word"));
         builder.setBolt("forwardToKafka", bolt, 1).shuffleGrouping("spout");
@@ -169,16 +169,13 @@ public class TridentKafkaWordCount {
      *
      * @return the topology config
      */
-    public Config getProducerConfig() {
-        Config conf = new Config();
-        conf.setMaxSpoutPending(20);
+    public Properties getProducerConfig() {
         Properties props = new Properties();
         props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerUrl);
         props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
         props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
         props.put(ProducerConfig.CLIENT_ID_CONFIG, "storm-kafka-producer");
-        conf.put(KafkaBolt.KAFKA_BROKER_PROPERTIES, props);
-        return conf;
+        return props;
     }
 
     /**
@@ -214,8 +211,10 @@ public class TridentKafkaWordCount {
         // submit the consumer topology.
         cluster.submitTopology("wordCounter", wordCount.getConsumerConfig(), wordCount.buildConsumerTopology(drpc));
 
+        Config conf = new Config();
+        conf.setMaxSpoutPending(20);
         // submit the producer topology.
-        cluster.submitTopology("kafkaBolt", wordCount.getProducerConfig(), wordCount.buildProducerTopology());
+        cluster.submitTopology("kafkaBolt", conf, wordCount.buildProducerTopology(wordCount.getProducerConfig()));
 
         // keep querying the word counts for a minute.
         for (int i = 0; i < 60; i++) {

http://git-wip-us.apache.org/repos/asf/storm/blob/c1c52735/external/storm-kafka/README.md
----------------------------------------------------------------------
diff --git a/external/storm-kafka/README.md b/external/storm-kafka/README.md
index 1d3678b..2fe930e 100644
--- a/external/storm-kafka/README.md
+++ b/external/storm-kafka/README.md
@@ -226,9 +226,8 @@ You can return a null and the message will be ignored. If you have one static to
 DefaultTopicSelector.java and set the name of the topic in the constructor.
 
 ### Specifying Kafka producer properties
-You can provide all the produce properties , see http://kafka.apache.org/documentation.html#newproducerconfigs
-section "Important configuration properties for the producer", in your Storm topology config by setting the properties
-map with key kafka.broker.properties.
+You can provide all the produce properties in your Storm topology by calling `KafkaBolt.withProducerProperties()` and `TridentKafkaStateFactory.withProducerProperties()`. Please see  http://kafka.apache.org/documentation.html#newproducerconfigs
+Section "Important configuration properties for the producer" for more details.
 
 ###Using wildcard kafka topic match
 You can do a wildcard topic match by adding the following config
@@ -238,7 +237,7 @@ You can do a wildcard topic match by adding the following config
 
 ```
 
-After this you can specifiy a wildcard topic for matching e.g. clickstream.*.log.  This will match all streams matching clickstream.my.log, clickstream.cart.log etc
+After this you can specify a wildcard topic for matching e.g. clickstream.*.log.  This will match all streams matching clickstream.my.log, clickstream.cart.log etc
 
 
 ###Putting it all together
@@ -256,19 +255,20 @@ For the bolt :
         );
         spout.setCycle(true);
         builder.setSpout("spout", spout, 5);
-        KafkaBolt bolt = new KafkaBolt()
-                .withTopicSelector(new DefaultTopicSelector("test"))
-                .withTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper());
-        builder.setBolt("forwardToKafka", bolt, 8).shuffleGrouping("spout");
-
-        Config conf = new Config();
         //set producer properties.
         Properties props = new Properties();
         props.put("bootstrap.servers", "localhost:9092");
         props.put("acks", "1");
         props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
         props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
-        conf.put(KafkaBolt.KAFKA_BROKER_PROPERTIES, props);
+
+        KafkaBolt bolt = new KafkaBolt()
+                .withProducerProperties(props)
+                .withTopicSelector(new DefaultTopicSelector("test"))
+                .withTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper());
+        builder.setBolt("forwardToKafka", bolt, 8).shuffleGrouping("spout");
+
+        Config conf = new Config();
 
         StormSubmitter.submitTopology("kafkaboltTest", conf, builder.createTopology());
 ```
@@ -288,19 +288,20 @@ For Trident:
         TridentTopology topology = new TridentTopology();
         Stream stream = topology.newStream("spout1", spout);
 
-        TridentKafkaStateFactory stateFactory = new TridentKafkaStateFactory()
-                .withKafkaTopicSelector(new DefaultTopicSelector("test"))
-                .withTridentTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper("word", "count"));
-        stream.partitionPersist(stateFactory, fields, new TridentKafkaUpdater(), new Fields());
-
-        Config conf = new Config();
         //set producer properties.
         Properties props = new Properties();
         props.put("bootstrap.servers", "localhost:9092");
         props.put("acks", "1");
         props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
         props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
-        conf.put(TridentKafkaState.KAFKA_BROKER_PROPERTIES, props);
+
+        TridentKafkaStateFactory stateFactory = new TridentKafkaStateFactory()
+                .withProducerProperties(props)
+                .withKafkaTopicSelector(new DefaultTopicSelector("test"))
+                .withTridentTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper("word", "count"));
+        stream.partitionPersist(stateFactory, fields, new TridentKafkaUpdater(), new Fields());
+
+        Config conf = new Config();
         StormSubmitter.submitTopology("kafkaTridentTest", conf, topology.build());
 ```
 

http://git-wip-us.apache.org/repos/asf/storm/blob/c1c52735/external/storm-kafka/src/jvm/storm/kafka/bolt/KafkaBolt.java
----------------------------------------------------------------------
diff --git a/external/storm-kafka/src/jvm/storm/kafka/bolt/KafkaBolt.java b/external/storm-kafka/src/jvm/storm/kafka/bolt/KafkaBolt.java
index 2cca826..1ebe142 100644
--- a/external/storm-kafka/src/jvm/storm/kafka/bolt/KafkaBolt.java
+++ b/external/storm-kafka/src/jvm/storm/kafka/bolt/KafkaBolt.java
@@ -57,7 +57,6 @@ public class KafkaBolt<K, V> extends BaseRichBolt {
     private static final Logger LOG = LoggerFactory.getLogger(KafkaBolt.class);
 
     public static final String TOPIC = "topic";
-    public static final String KAFKA_BROKER_PROPERTIES = "kafka.broker.properties";
 
     private KafkaProducer<K, V> producer;
     private OutputCollector collector;
@@ -73,8 +72,7 @@ public class KafkaBolt<K, V> extends BaseRichBolt {
     private boolean fireAndForget = false;
     private boolean async = true;
 
-    public KafkaBolt() {
-    }
+    public KafkaBolt() {}
 
     public KafkaBolt<K,V> withTupleToKafkaMapper(TupleToKafkaMapper<K,V> mapper) {
         this.mapper = mapper;
@@ -103,14 +101,7 @@ public class KafkaBolt<K, V> extends BaseRichBolt {
             this.topicSelector = new DefaultTopicSelector((String) stormConf.get(TOPIC));
         }
 
-        Map configMap = (Map) stormConf.get(KAFKA_BROKER_PROPERTIES);
-        Properties properties = new Properties();
-        if(configMap!= null)
-            properties.putAll(configMap);
-        if(boltSpecfiedProperties != null)
-            properties.putAll(boltSpecfiedProperties);
-
-        producer = new KafkaProducer<K, V>(properties);
+        producer = new KafkaProducer<>(boltSpecfiedProperties);
         this.collector = collector;
     }
 

http://git-wip-us.apache.org/repos/asf/storm/blob/c1c52735/external/storm-kafka/src/jvm/storm/kafka/trident/TridentKafkaState.java
----------------------------------------------------------------------
diff --git a/external/storm-kafka/src/jvm/storm/kafka/trident/TridentKafkaState.java b/external/storm-kafka/src/jvm/storm/kafka/trident/TridentKafkaState.java
index 1ed61b1..84b6a6a 100644
--- a/external/storm-kafka/src/jvm/storm/kafka/trident/TridentKafkaState.java
+++ b/external/storm-kafka/src/jvm/storm/kafka/trident/TridentKafkaState.java
@@ -32,7 +32,6 @@ import storm.trident.state.State;
 import storm.trident.tuple.TridentTuple;
 
 import java.util.List;
-import java.util.Map;
 import java.util.Properties;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.Future;
@@ -40,8 +39,6 @@ import java.util.concurrent.Future;
 public class TridentKafkaState implements State {
     private static final Logger LOG = LoggerFactory.getLogger(TridentKafkaState.class);
 
-    public static final String KAFKA_BROKER_PROPERTIES = "kafka.broker.properties";
-
     private KafkaProducer producer;
     private OutputCollector collector;
 
@@ -68,13 +65,10 @@ public class TridentKafkaState implements State {
         LOG.debug("commit is Noop.");
     }
 
-    public void prepare(Map stormConf) {
+    public void prepare(Properties options) {
         Validate.notNull(mapper, "mapper can not be null");
         Validate.notNull(topicSelector, "topicSelector can not be null");
-        Map configMap = (Map) stormConf.get(KAFKA_BROKER_PROPERTIES);
-        Properties properties = new Properties();
-        properties.putAll(configMap);
-        producer = new KafkaProducer(properties);
+        producer = new KafkaProducer(options);
     }
 
     public void updateState(List<TridentTuple> tuples, TridentCollector collector) {

http://git-wip-us.apache.org/repos/asf/storm/blob/c1c52735/external/storm-kafka/src/jvm/storm/kafka/trident/TridentKafkaStateFactory.java
----------------------------------------------------------------------
diff --git a/external/storm-kafka/src/jvm/storm/kafka/trident/TridentKafkaStateFactory.java b/external/storm-kafka/src/jvm/storm/kafka/trident/TridentKafkaStateFactory.java
index adca69e..a5d9d42 100644
--- a/external/storm-kafka/src/jvm/storm/kafka/trident/TridentKafkaStateFactory.java
+++ b/external/storm-kafka/src/jvm/storm/kafka/trident/TridentKafkaStateFactory.java
@@ -26,6 +26,7 @@ import storm.trident.state.State;
 import storm.trident.state.StateFactory;
 
 import java.util.Map;
+import java.util.Properties;
 
 public class TridentKafkaStateFactory implements StateFactory {
 
@@ -33,7 +34,7 @@ public class TridentKafkaStateFactory implements StateFactory {
 
     private TridentTupleToKafkaMapper mapper;
     private KafkaTopicSelector topicSelector;
-
+    private Properties producerProperties = new Properties();
 
     public TridentKafkaStateFactory withTridentTupleToKafkaMapper(TridentTupleToKafkaMapper mapper) {
         this.mapper = mapper;
@@ -45,13 +46,18 @@ public class TridentKafkaStateFactory implements StateFactory {
         return this;
     }
 
+    public TridentKafkaStateFactory withProducerProperties(Properties props) {
+        this.producerProperties = props;
+        return this;
+    }
+
     @Override
     public State makeState(Map conf, IMetricsContext metrics, int partitionIndex, int numPartitions) {
         LOG.info("makeState(partitonIndex={}, numpartitions={}", partitionIndex, numPartitions);
         TridentKafkaState state = new TridentKafkaState()
                 .withKafkaTopicSelector(this.topicSelector)
                 .withTridentTupleToKafkaMapper(this.mapper);
-        state.prepare(conf);
+        state.prepare(producerProperties);
         return state;
     }
 }

http://git-wip-us.apache.org/repos/asf/storm/blob/c1c52735/external/storm-kafka/src/test/storm/kafka/TestUtils.java
----------------------------------------------------------------------
diff --git a/external/storm-kafka/src/test/storm/kafka/TestUtils.java b/external/storm-kafka/src/test/storm/kafka/TestUtils.java
index 839b691..3e69160 100644
--- a/external/storm-kafka/src/test/storm/kafka/TestUtils.java
+++ b/external/storm-kafka/src/test/storm/kafka/TestUtils.java
@@ -73,17 +73,13 @@ public class TestUtils {
         return new StaticHosts(globalPartitionInformation);
     }
 
-    public static Config getConfig(String brokerConnectionString) {
-        Config config = new Config();
+    public static Properties getProducerProperties(String brokerConnectionString) {
         Properties props = new Properties();
         props.put("bootstrap.servers", brokerConnectionString);
         props.put("acks", "1");
         props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
         props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
-        config.put(KafkaBolt.KAFKA_BROKER_PROPERTIES, props);
-        config.put(KafkaBolt.TOPIC, TOPIC);
-
-        return config;
+        return props;
     }
 
     public static boolean verifyMessage(String key, String message, KafkaTestBroker broker, SimpleConsumer simpleConsumer) {

http://git-wip-us.apache.org/repos/asf/storm/blob/c1c52735/external/storm-kafka/src/test/storm/kafka/TridentKafkaTest.java
----------------------------------------------------------------------
diff --git a/external/storm-kafka/src/test/storm/kafka/TridentKafkaTest.java b/external/storm-kafka/src/test/storm/kafka/TridentKafkaTest.java
index d8a5e24..8213b07 100644
--- a/external/storm-kafka/src/test/storm/kafka/TridentKafkaTest.java
+++ b/external/storm-kafka/src/test/storm/kafka/TridentKafkaTest.java
@@ -17,7 +17,6 @@
  */
 package storm.kafka;
 
-import backtype.storm.Config;
 import backtype.storm.tuple.Fields;
 import kafka.javaapi.consumer.SimpleConsumer;
 import org.junit.After;
@@ -37,22 +36,18 @@ import java.util.List;
 public class TridentKafkaTest {
     private KafkaTestBroker broker;
     private TridentKafkaState state;
-    private Config config;
     private SimpleConsumer simpleConsumer;
-    private TridentTupleToKafkaMapper mapper;
-    private KafkaTopicSelector topicSelector;
 
     @Before
     public void setup() {
         broker = new KafkaTestBroker();
         simpleConsumer = TestUtils.getKafkaConsumer(broker);
-        config = TestUtils.getConfig(broker.getBrokerConnectionString());
-        mapper = new FieldNameBasedTupleToKafkaMapper("key", "message");
-        topicSelector = new DefaultTopicSelector(TestUtils.TOPIC);
+        TridentTupleToKafkaMapper mapper = new FieldNameBasedTupleToKafkaMapper("key", "message");
+        KafkaTopicSelector topicSelector = new DefaultTopicSelector(TestUtils.TOPIC);
         state = new TridentKafkaState()
                 .withKafkaTopicSelector(topicSelector)
                 .withTridentTupleToKafkaMapper(mapper);
-        state.prepare(config);
+        state.prepare(TestUtils.getProducerProperties(broker.getBrokerConnectionString()));
     }
 
     @Test
@@ -71,7 +66,7 @@ public class TridentKafkaTest {
     }
 
     private List<TridentTuple> generateTupleBatch(String key, String message, int batchsize) {
-        List<TridentTuple> batch = new ArrayList<TridentTuple>();
+        List<TridentTuple> batch = new ArrayList<>();
         for(int i =0 ; i < batchsize; i++) {
             batch.add(TridentTupleView.createFreshTuple(new Fields("key", "message"), key, message));
         }

http://git-wip-us.apache.org/repos/asf/storm/blob/c1c52735/external/storm-kafka/src/test/storm/kafka/TridentKafkaTopology.java
----------------------------------------------------------------------
diff --git a/external/storm-kafka/src/test/storm/kafka/TridentKafkaTopology.java b/external/storm-kafka/src/test/storm/kafka/TridentKafkaTopology.java
index cade6df..b9e25e4 100644
--- a/external/storm-kafka/src/test/storm/kafka/TridentKafkaTopology.java
+++ b/external/storm-kafka/src/test/storm/kafka/TridentKafkaTopology.java
@@ -22,7 +22,7 @@ import backtype.storm.LocalCluster;
 import backtype.storm.generated.StormTopology;
 import backtype.storm.tuple.Fields;
 import backtype.storm.tuple.Values;
-import storm.kafka.trident.TridentKafkaState;
+import com.google.common.collect.ImmutableMap;
 import storm.kafka.trident.TridentKafkaStateFactory;
 import storm.kafka.trident.TridentKafkaUpdater;
 import storm.kafka.trident.mapper.FieldNameBasedTupleToKafkaMapper;
@@ -31,14 +31,11 @@ import storm.trident.Stream;
 import storm.trident.TridentTopology;
 import storm.trident.testing.FixedBatchSpout;
 
-import java.util.HashMap;
-import java.util.Map;
 import java.util.Properties;
 
-
 public class TridentKafkaTopology {
 
-    private static StormTopology buildTopology() {
+    private static StormTopology buildTopology(String brokerConnectionString) {
         Fields fields = new Fields("word", "count");
         FixedBatchSpout spout = new FixedBatchSpout(fields, 4,
                 new Values("storm", "1"),
@@ -51,9 +48,16 @@ public class TridentKafkaTopology {
         TridentTopology topology = new TridentTopology();
         Stream stream = topology.newStream("spout1", spout);
 
+        Properties props = new Properties();
+        props.put("bootstrap.servers", brokerConnectionString);
+        props.put("acks", "1");
+        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+
         TridentKafkaStateFactory stateFactory = new TridentKafkaStateFactory()
-                .withKafkaTopicSelector(new DefaultTopicSelector("test"))
-                .withTridentTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper("word", "count"));
+            .withProducerProperties(props)
+            .withKafkaTopicSelector(new DefaultTopicSelector("test"))
+            .withTridentTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper("word", "count"));
         stream.partitionPersist(stateFactory, fields, new TridentKafkaUpdater(), new Fields());
 
         return topology.build();
@@ -77,24 +81,11 @@ public class TridentKafkaTopology {
             System.out.println("Please provide kafka broker url ,e.g. localhost:9092");
         }
 
-        Config conf = getConfig(args[0]);
         LocalCluster cluster = new LocalCluster();
-        cluster.submitTopology("wordCounter", conf, buildTopology());
+        cluster.submitTopology("wordCounter", new Config(), buildTopology(args[0]));
         Thread.sleep(60 * 1000);
         cluster.killTopology("wordCounter");
 
         cluster.shutdown();
     }
-
-    private  static Config getConfig(String brokerConnectionString) {
-        Config conf = new Config();
-        Properties props = new Properties();
-        props.put("bootstrap.servers", brokerConnectionString);
-        props.put("acks", "1");
-        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
-        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
-        conf.put(TridentKafkaState.KAFKA_BROKER_PROPERTIES, props);
-        return conf;
-    }
-
 }

http://git-wip-us.apache.org/repos/asf/storm/blob/c1c52735/external/storm-kafka/src/test/storm/kafka/bolt/KafkaBoltTest.java
----------------------------------------------------------------------
diff --git a/external/storm-kafka/src/test/storm/kafka/bolt/KafkaBoltTest.java b/external/storm-kafka/src/test/storm/kafka/bolt/KafkaBoltTest.java
index 75c24d7..87daab0 100644
--- a/external/storm-kafka/src/test/storm/kafka/bolt/KafkaBoltTest.java
+++ b/external/storm-kafka/src/test/storm/kafka/bolt/KafkaBoltTest.java
@@ -183,21 +183,19 @@ public class KafkaBoltTest {
     }
 
     private KafkaBolt generateStringSerializerBolt() {
-        KafkaBolt bolt = new KafkaBolt();
         Properties props = new Properties();
         props.put("acks", "1");
         props.put("bootstrap.servers", broker.getBrokerConnectionString());
         props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
         props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
         props.put("metadata.fetch.timeout.ms", 1000);
-        config.put(KafkaBolt.KAFKA_BROKER_PROPERTIES, props);
+        KafkaBolt bolt = new KafkaBolt().withProducerProperties(props);
         bolt.prepare(config, null, new OutputCollector(collector));
         bolt.setAsync(false);
         return bolt;
     }
 
     private KafkaBolt generateDefaultSerializerBolt(boolean async, boolean fireAndForget) {
-        KafkaBolt bolt = new KafkaBolt();
         Properties props = new Properties();
         props.put("acks", "1");
         props.put("bootstrap.servers", broker.getBrokerConnectionString());
@@ -205,7 +203,7 @@ public class KafkaBoltTest {
         props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
         props.put("metadata.fetch.timeout.ms", 1000);
         props.put("linger.ms", 0);
-        config.put(KafkaBolt.KAFKA_BROKER_PROPERTIES, props);
+        KafkaBolt bolt = new KafkaBolt().withProducerProperties(props);
         bolt.prepare(config, null, new OutputCollector(collector));
         bolt.setAsync(async);
         bolt.setFireAndForget(fireAndForget);
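
Taken together, the STORM-1352 changes above move producer configuration off the topology-wide `Config` map (the removed `kafka.broker.properties` key) and onto each `KafkaBolt` / `TridentKafkaStateFactory` via `withProducerProperties()`, which is what allows bolts in one topology to write to different Kafka clusters. A minimal sketch of the idea, using only `java.util.Properties` (the cluster addresses are hypothetical, and the bolt wiring is shown in comments since it needs the storm-kafka classes from the diff):

```java
import java.util.Properties;

public class MultiClusterSketch {
    // Build one producer-properties object per target cluster. After
    // STORM-1352 each bolt carries its own Properties instead of all
    // bolts sharing a single kafka.broker.properties map in the Config.
    static Properties producerProps(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("acks", "1");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        // Hypothetical cluster addresses; each set would feed its own bolt:
        //   new KafkaBolt().withProducerProperties(clusterA)...
        //   new KafkaBolt().withProducerProperties(clusterB)...
        Properties clusterA = producerProps("kafka-a:9092");
        Properties clusterB = producerProps("kafka-b:9092");
        System.out.println(clusterA.getProperty("bootstrap.servers"));
        System.out.println(clusterB.getProperty("bootstrap.servers"));
    }
}
```

The same pattern applies on the Trident side through `TridentKafkaStateFactory.withProducerProperties(props)`, as the `TridentKafkaTopology` test diff above shows.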


[12/50] [abbrv] storm git commit: use storm-conf instead of conf

Posted by sr...@apache.org.
use storm-conf instead of conf


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/9bc9350f
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/9bc9350f
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/9bc9350f

Branch: refs/heads/STORM-1040
Commit: 9bc9350f613a75de566cf670874c6a037ecc5bfd
Parents: ccb8031
Author: Michael Schonfeld <mi...@schonfeld.org>
Authored: Mon Nov 23 15:15:01 2015 -0500
Committer: Michael Schonfeld <mi...@schonfeld.org>
Committed: Mon Nov 23 18:50:55 2015 -0500

----------------------------------------------------------------------
 storm-core/src/clj/backtype/storm/daemon/worker.clj | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/9bc9350f/storm-core/src/clj/backtype/storm/daemon/worker.clj
----------------------------------------------------------------------
diff --git a/storm-core/src/clj/backtype/storm/daemon/worker.clj b/storm-core/src/clj/backtype/storm/daemon/worker.clj
index 64670c2..d978315 100644
--- a/storm-core/src/clj/backtype/storm/daemon/worker.clj
+++ b/storm-core/src/clj/backtype/storm/daemon/worker.clj
@@ -551,7 +551,7 @@
 
 (defn run-worker-start-hooks [worker]
   (let [topology (:topology worker)
-        topo-conf (:conf worker)
+        topo-conf (:storm-conf worker)
         worker-topology-context (worker-context worker)
         hooks (.get_worker_hooks topology)]
     (dofor [hook hooks]