Posted to user@storm.apache.org by Marcelo Valle <mv...@redoop.org> on 2014/08/04 13:24:30 UTC
Re: kafka-spout running error
Hello,
You can check your application jar with the command "jar tf" to see whether the class kafka/api/OffsetRequest.class is part of the jar.
If it is not, you can try copying kafka-2.9.2-0.8.0.jar (or whichever version you are using) into the storm_lib directory.
Marcelo
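(Editor's note, not part of the original thread: since a jar is just a zip archive, the check Marcelo describes with `jar tf ... | grep` can also be scripted. This is an illustrative sketch only; the jar path in the usage note is the one from the thread, but the helper function name is made up.)

```python
import zipfile

def jar_contains(jar_path, class_entry):
    """Return True if class_entry (e.g. 'kafka/api/OffsetRequest.class')
    is listed in the jar, mirroring `jar tf <jar> | grep <entry>`."""
    with zipfile.ZipFile(jar_path) as jar:
        return class_entry in jar.namelist()
```

For example, `jar_contains('target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar', 'kafka/api/OffsetRequest.class')` should return True once the kafka classes are actually bundled into the fat jar.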
2014-07-31 23:33 GMT+02:00 Sa Li <sa...@gmail.com>:
> Hi, all
>
> I am running a kafka-spout topology on the storm server; the relevant pom section is
>
> <dependency>
> <groupId>org.apache.kafka</groupId>
> <artifactId>kafka_2.9.2</artifactId>
> <version>0.8.0</version>
> <scope>provided</scope>
>
> <exclusions>
> <exclusion>
> <groupId>org.apache.zookeeper</groupId>
> <artifactId>zookeeper</artifactId>
> </exclusion>
> <exclusion>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> </exclusion>
> </exclusions>
>
> </dependency>
>
> <!-- Storm-Kafka compiled -->
>
> <dependency>
> <artifactId>storm-kafka</artifactId>
> <groupId>org.apache.storm</groupId>
> <version>0.9.2-incubating</version>
> <scope>compile</scope>
> </dependency>
>
> I can build it with mvn package, but when I run it:
> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# storm jar
> target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar
> storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>
>
> I get this error:
>
> 1657 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1682 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor
> with id a66e0c61-a951-4c1b-a43f-3fb0d12cb226 at host DO-mq-dev
> 1698 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread
> Thread[main,5,main] died
> java.lang.NoClassDefFoundError: kafka/api/OffsetRequest
> at storm.artemis.kafka.KafkaConfig.<init>(KafkaConfig.java:26)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.trident.TridentKafkaConfig.<init>(TridentKafkaConfig.java:13)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.KafkaConsumerTopology.buildTopology(KafkaConsumerTopology.java:115)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.KafkaConsumerTopology.main(KafkaConsumerTopology.java:144)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> Caused by: java.lang.ClassNotFoundException: kafka.api.OffsetRequest
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_55]
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_55]
> at java.security.AccessController.doPrivileged(Native Method)
> ~[na:1.7.0_55]
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> ~[na:1.7.0_55]
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_55]
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> ~[na:1.7.0_55]
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_55]
>
>
>
>
> I tried to poke around online but could not find a solution. Any ideas
> about this?
>
>
> Thanks
>
> Alec
>
>
>
>
Re: kafka-spout running error
Posted by Sa Li <sa...@gmail.com>.
Hi, Kushan
I hate to be negative, but I recompiled it into the jar and am still getting the same error. I also noticed this message on the screen:
2909 [Thread-7-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
Since zookeeper reports a connection state of none, I suspect this really is a zookeeper problem. I tried zookeeper 3.4.6 instead of 3.3.6, but that produced an even stranger error.
Thanks
Alec
On Aug 5, 2014, at 3:13 PM, Kushan Maskey <ku...@mmillerassociates.com> wrote:
> Are you creating a jar to be deployed on your server? If so, you will have to give the kafka dependency compile scope so that it gets bundled into your storm jar.
> You can try this and see if that helps.
> <dependency>
>
> <groupId>org.apache.kafka</groupId>
>
> <artifactId>kafka_2.10</artifactId>
>
> <version>0.8.1.1</version>
>
> <scope>compile</scope>
>
> </dependency>
>
>
>
>
> --
> Kushan Maskey
> 817.403.7500
>
>
> On Tue, Aug 5, 2014 at 4:45 PM, Sa Li <sa...@gmail.com> wrote:
> This is my complete pom
>
> <dependency>
> <groupId>org.json</groupId>
> <artifactId>json</artifactId>
> <version>20140107</version>
> </dependency>
>
> <!-- Slf4j Logger -->
>
> <dependency>
> <groupId>org.slf4j</groupId>
> <artifactId>slf4j-simple</artifactId>
> <version>1.7.2</version>
> </dependency>
> <dependency>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> <version>1.2.17</version>
> </dependency>
>
> <!-- Scala 2.9.2 -->
> <dependency>
> <groupId>org.scala-lang</groupId>
> <artifactId>scala-library</artifactId>
> <version>2.9.2</version>
> </dependency>
>
> <dependency>
> <groupId>org.mockito</groupId>
> <artifactId>mockito-all</artifactId>
> <version>1.9.0</version>
> <scope>test</scope>
> </dependency>
> <dependency>
> <groupId>junit</groupId>
> <artifactId>junit</artifactId>
> <version>4.11</version>
> <scope>test</scope>
> </dependency>
>
> <dependency>
> <groupId>org.apache.curator</groupId>
> <artifactId>curator-framework</artifactId>
> <version>2.6.0</version>
> <exclusions>
> <exclusion>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> </exclusion>
> <exclusion>
> <groupId>org.slf4j</groupId>
> <artifactId>slf4j-log4j12</artifactId>
> </exclusion>
> </exclusions>
> </dependency>
> <dependency>
> <groupId>org.apache.curator</groupId>
> <artifactId>curator-recipes</artifactId>
> <version>2.6.0</version>
> <exclusions>
> <exclusion>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> </exclusion>
> </exclusions>
> <scope>test</scope>
> </dependency>
> <dependency>
> <groupId>org.apache.curator</groupId>
> <artifactId>curator-test</artifactId>
> <version>2.6.0</version>
> <exclusions>
> <exclusion>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> </exclusion>
> <exclusion>
> <groupId>org.testng</groupId>
> <artifactId>testng</artifactId>
> </exclusion>
> </exclusions>
> <scope>test</scope>
> </dependency>
>
>
> <dependency>
> <groupId>org.apache.zookeeper</groupId>
> <artifactId>zookeeper</artifactId>
> <version>3.3.6</version>
> <exclusions>
> <exclusion>
> <groupId>com.sun.jmx</groupId>
> <artifactId>jmxri</artifactId>
> </exclusion>
> <exclusion>
> <groupId>com.sun.jdmk</groupId>
> <artifactId>jmxtools</artifactId>
> </exclusion>
> <exclusion>
> <groupId>javax.jms</groupId>
> <artifactId>jms</artifactId>
> </exclusion>
> </exclusions>
> </dependency>
>
> <!-- Kafka 0.8.1.1 compiled for Scala 2.10 -->
>
> <dependency>
> <groupId>org.apache.kafka</groupId>
> <artifactId>kafka_2.10</artifactId>
> <version>0.8.1.1</version>
> <scope>provided</scope>
>
> <exclusions>
> <exclusion>
> <groupId>org.apache.zookeeper</groupId>
> <artifactId>zookeeper</artifactId>
> </exclusion>
> <exclusion>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> </exclusion>
> </exclusions>
>
> </dependency>
>
> <!-- Storm-Kafka compiled -->
>
> <dependency>
> <artifactId>storm-kafka</artifactId>
> <groupId>org.apache.storm</groupId>
> <version>0.9.2-incubating</version>
> <scope>compile</scope>
> </dependency>
> <!--
> <dependency>
> <groupId>storm</groupId>
> <artifactId>storm-kafka</artifactId>
> <version>0.9.0-wip16a-scala292</version>
> </dependency>
> -->
> <dependency>
> <groupId>org.testng</groupId>
> <artifactId>testng</artifactId>
> <version>6.8.5</version>
> <scope>test</scope>
> </dependency>
> <dependency>
> <groupId>org.easytesting</groupId>
> <artifactId>fest-assert-core</artifactId>
> <version>2.0M8</version>
> <scope>test</scope>
> </dependency>
> <dependency>
> <groupId>org.jmock</groupId>
> <artifactId>jmock</artifactId>
> <version>2.6.0</version>
> <scope>test</scope>
> </dependency>
>
> <dependency>
> <groupId>storm</groupId>
> <artifactId>storm</artifactId>
> <version>0.9.0.1</version>
> <!-- keep storm out of the jar-with-dependencies -->
> <scope>provided</scope>
> </dependency>
>
> <dependency>
> <groupId>commons-collections</groupId>
> <artifactId>commons-collections</artifactId>
> <version>3.2.1</version>
> </dependency>
> <dependency>
> <groupId>com.google.guava</groupId>
> <artifactId>guava</artifactId>
> <version>15.0</version>
> </dependency>
> </dependencies>
>
>
> On Aug 5, 2014, at 2:41 PM, Sa Li <sa...@gmail.com> wrote:
>
>> Thanks, Kushan and Parth, I tried to solve the problem as you two suggested, first I change the kafka version in pom, re-compile it, and also copy the kafka_2.10-0.8.1.1.jar into storm.lib directory from M2_REPO. Here is my pom
>>
>> <dependency>
>> <groupId>org.apache.curator</groupId>
>> <artifactId>curator-framework</artifactId>
>> <version>2.6.0</version>
>> <exclusions>
>> <exclusion>
>> <groupId>log4j</groupId>
>> <artifactId>log4j</artifactId>
>> </exclusion>
>> <exclusion>
>> <groupId>org.slf4j</groupId>
>> <artifactId>slf4j-log4j12</artifactId>
>> </exclusion>
>> </exclusions>
>> </dependency>
>> <dependency>
>> <groupId>org.apache.curator</groupId>
>> <artifactId>curator-recipes</artifactId>
>> <version>2.6.0</version>
>> <exclusions>
>> <exclusion>
>> <groupId>log4j</groupId>
>> <artifactId>log4j</artifactId>
>> </exclusion>
>> </exclusions>
>> <scope>test</scope>
>> </dependency>
>> <dependency>
>> <groupId>org.apache.curator</groupId>
>> <artifactId>curator-test</artifactId>
>> <version>2.6.0</version>
>> <exclusions>
>> <exclusion>
>> <groupId>log4j</groupId>
>> <artifactId>log4j</artifactId>
>> </exclusion>
>> <exclusion>
>> <groupId>org.testng</groupId>
>> <artifactId>testng</artifactId>
>> </exclusion>
>> </exclusions>
>> <scope>test</scope>
>> </dependency>
>>
>>
>> <dependency>
>> <groupId>org.apache.zookeeper</groupId>
>> <artifactId>zookeeper</artifactId>
>> <version>3.3.6</version>
>> <exclusions>
>> <exclusion>
>> <groupId>com.sun.jmx</groupId>
>> <artifactId>jmxri</artifactId>
>> </exclusion>
>> <exclusion>
>> <groupId>com.sun.jdmk</groupId>
>> <artifactId>jmxtools</artifactId>
>> </exclusion>
>> <exclusion>
>> <groupId>javax.jms</groupId>
>> <artifactId>jms</artifactId>
>> </exclusion>
>> </exclusions>
>> </dependency>
>>
>> <!-- Kafka 0.8.1.1 compiled for Scala 2.10 -->
>>
>> <dependency>
>> <groupId>org.apache.kafka</groupId>
>> <artifactId>kafka_2.10</artifactId>
>> <version>0.8.1.1</version>
>> <scope>provided</scope>
>>
>> <exclusions>
>> <exclusion>
>> <groupId>org.apache.zookeeper</groupId>
>> <artifactId>zookeeper</artifactId>
>> </exclusion>
>> <exclusion>
>> <groupId>log4j</groupId>
>> <artifactId>log4j</artifactId>
>> </exclusion>
>> </exclusions>
>>
>> </dependency>
>>
>>
>> Here the zookeeper version is 3.3.6 (it was downgraded because a "java.lang.ClassNotFoundException: org.apache.zookeeper.server.NIOServerCnxn$Factory" error came up otherwise), and the curator version is 2.6.0. I ran jar tf on the project jar to see the classes included:
>> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# jar tf target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar | grep zookeeper
>> org/apache/zookeeper/
>> org/apache/zookeeper/client/
>> org/apache/zookeeper/common/
>> org/apache/zookeeper/data/
>> org/apache/zookeeper/jmx/
>> org/apache/zookeeper/proto/
>> org/apache/zookeeper/server/
>> org/apache/zookeeper/server/auth/
>> org/apache/zookeeper/server/persistence/
>> org/apache/zookeeper/server/quorum/
>> org/apache/zookeeper/server/quorum/flexible/
>> org/apache/zookeeper/server/upgrade/
>> org/apache/zookeeper/server/util/
>> org/apache/zookeeper/txn/
>> org/apache/zookeeper/version/
>> org/apache/zookeeper/version/util/
>> org/apache/zookeeper/AsyncCallback$ACLCallback.class
>> org/apache/zookeeper/AsyncCallback$Children2Callback.class
>> org/apache/zookeeper/AsyncCallback$ChildrenCallback.class
>> org/apache/zookeeper/AsyncCallback$DataCallback.class
>> org/apache/zookeeper/AsyncCallback$StatCallback.class
>> org/apache/zookeeper/AsyncCallback$StringCallback.class
>> org/apache/zookeeper/AsyncCallback$VoidCallback.class
>> org/apache/zookeeper/AsyncCallback.class
>> org/apache/zookeeper/ClientCnxn$1.class
>> org/apache/zookeeper/ClientCnxn$2.class
>> org/apache/zookeeper/ClientCnxn$AuthData.class
>> org/apache/zookeeper/ClientCnxn$EndOfStreamException.class
>> org/apache/zookeeper/ClientCnxn$EventThread.class
>> org/apache/zookeeper/ClientCnxn$Packet.class
>> org/apache/zookeeper/ClientCnxn$SendThread.class
>> org/apache/zookeeper/ClientCnxn$SessionExpiredException.class
>> org/apache/zookeeper/ClientCnxn$SessionTimeoutException.class
>> org/apache/zookeeper/ClientCnxn$WatcherSetEventPair.class
>> org/apache/zookeeper/ClientCnxn.class
>> org/apache/zookeeper/ClientWatchManager.class
>> org/apache/zookeeper/CreateMode.class
>> org/apache/zookeeper/Environment$Entry.class
>> org/apache/zookeeper/Environment.class
>> org/apache/zookeeper/JLineZNodeCompletor.class
>> org/apache/zookeeper/KeeperException$1.class
>> org/apache/zookeeper/KeeperException$APIErrorException.class
>> org/apache/zookeeper/KeeperException$AuthFailedException.class
>> org/apache/zookeeper/KeeperException$BadArgumentsException.class
>> org/apache/zookeeper/KeeperException$BadVersionException.class
>> org/apache/zookeeper/KeeperException$Code.class
>> org/apache/zookeeper/KeeperException$CodeDeprecated.class
>> org/apache/zookeeper/KeeperException$ConnectionLossException.class
>> org/apache/zookeeper/KeeperException$DataInconsistencyException.class
>> org/apache/zookeeper/KeeperException$InvalidACLException.class
>> org/apache/zookeeper/KeeperException$InvalidCallbackException.class
>> org/apache/zookeeper/KeeperException$MarshallingErrorException.class
>> org/apache/zookeeper/KeeperException$NoAuthException.class
>> org/apache/zookeeper/KeeperException$NoChildrenForEphemeralsException.class
>> org/apache/zookeeper/KeeperException$NoNodeException.class
>> org/apache/zookeeper/KeeperException$NodeExistsException.class
>> org/apache/zookeeper/KeeperException$NotEmptyException.class
>> org/apache/zookeeper/KeeperException$OperationTimeoutException.class
>> org/apache/zookeeper/KeeperException$RuntimeInconsistencyException.class
>> org/apache/zookeeper/KeeperException$SessionExpiredException.class
>> org/apache/zookeeper/KeeperException$SessionMovedException.class
>> org/apache/zookeeper/KeeperException$SystemErrorException.class
>> org/apache/zookeeper/KeeperException$UnimplementedException.class
>> org/apache/zookeeper/KeeperException.class
>> org/apache/zookeeper/Quotas.class
>> org/apache/zookeeper/ServerAdminClient.class
>> org/apache/zookeeper/StatsTrack.class
>> org/apache/zookeeper/Version.class
>> org/apache/zookeeper/WatchedEvent.class
>> org/apache/zookeeper/Watcher$Event$EventType.class
>> org/apache/zookeeper/Watcher$Event$KeeperState.class
>> org/apache/zookeeper/Watcher$Event.class
>> org/apache/zookeeper/Watcher.class
>> org/apache/zookeeper/ZooDefs$Ids.class
>> org/apache/zookeeper/ZooDefs$OpCode.class
>> org/apache/zookeeper/ZooDefs$Perms.class
>> org/apache/zookeeper/ZooDefs.class
>> org/apache/zookeeper/ZooKeeper$1.class
>> org/apache/zookeeper/ZooKeeper$ChildWatchRegistration.class
>> org/apache/zookeeper/ZooKeeper$DataWatchRegistration.class
>> org/apache/zookeeper/ZooKeeper$ExistsWatchRegistration.class
>> org/apache/zookeeper/ZooKeeper$States.class
>> org/apache/zookeeper/ZooKeeper$WatchRegistration.class
>> org/apache/zookeeper/ZooKeeper$ZKWatchManager.class
>> org/apache/zookeeper/ZooKeeper.class
>> org/apache/zookeeper/ZooKeeperMain$1.class
>> org/apache/zookeeper/ZooKeeperMain$MyCommandOptions.class
>> org/apache/zookeeper/ZooKeeperMain$MyWatcher.class
>> org/apache/zookeeper/ZooKeeperMain.class
>> org/apache/zookeeper/client/FourLetterWordMain.class
>> org/apache/zookeeper/common/PathTrie$1.class
>> org/apache/zookeeper/common/PathTrie$TrieNode.class
>> org/apache/zookeeper/common/PathTrie.class
>> org/apache/zookeeper/common/PathUtils.class
>> org/apache/zookeeper/data/ACL.class
>> org/apache/zookeeper/data/Id.class
>> org/apache/zookeeper/data/Stat.class
>> org/apache/zookeeper/data/StatPersisted.class
>> org/apache/zookeeper/data/StatPersistedV1.class
>> org/apache/zookeeper/jmx/CommonNames.class
>> org/apache/zookeeper/jmx/MBeanRegistry.class
>> org/apache/zookeeper/jmx/ManagedUtil.class
>> org/apache/zookeeper/jmx/ZKMBeanInfo.class
>> org/apache/zookeeper/proto/AuthPacket.class
>> org/apache/zookeeper/proto/ConnectRequest.class
>> org/apache/zookeeper/proto/ConnectResponse.class
>> org/apache/zookeeper/proto/CreateRequest.class
>> org/apache/zookeeper/proto/CreateResponse.class
>> org/apache/zookeeper/proto/DeleteRequest.class
>> org/apache/zookeeper/proto/ExistsRequest.class
>> org/apache/zookeeper/proto/ExistsResponse.class
>> org/apache/zookeeper/proto/GetACLRequest.class
>> org/apache/zookeeper/proto/GetACLResponse.class
>> org/apache/zookeeper/proto/GetChildren2Request.class
>> org/apache/zookeeper/proto/GetChildren2Response.class
>> org/apache/zookeeper/proto/GetChildrenRequest.class
>> org/apache/zookeeper/proto/GetChildrenResponse.class
>> org/apache/zookeeper/proto/GetDataRequest.class
>> org/apache/zookeeper/proto/GetDataResponse.class
>> org/apache/zookeeper/proto/GetMaxChildrenRequest.class
>> org/apache/zookeeper/proto/GetMaxChildrenResponse.class
>> org/apache/zookeeper/proto/ReplyHeader.class
>> org/apache/zookeeper/proto/RequestHeader.class
>> org/apache/zookeeper/proto/SetACLRequest.class
>> org/apache/zookeeper/proto/SetACLResponse.class
>> org/apache/zookeeper/proto/SetDataRequest.class
>> org/apache/zookeeper/proto/SetDataResponse.class
>> org/apache/zookeeper/proto/SetMaxChildrenRequest.class
>> org/apache/zookeeper/proto/SetWatches.class
>> org/apache/zookeeper/proto/SyncRequest.class
>> org/apache/zookeeper/proto/SyncResponse.class
>> org/apache/zookeeper/proto/WatcherEvent.class
>> org/apache/zookeeper/proto/op_result_t.class
>> org/apache/zookeeper/server/ByteBufferInputStream.class
>> org/apache/zookeeper/server/ConnectionBean.class
>> org/apache/zookeeper/server/ConnectionMXBean.class
>> org/apache/zookeeper/server/DataNode.class
>> org/apache/zookeeper/server/DataTree$1.class
>> org/apache/zookeeper/server/DataTree$Counts.class
>> org/apache/zookeeper/server/DataTree$ProcessTxnResult.class
>> org/apache/zookeeper/server/DataTree.class
>> org/apache/zookeeper/server/DataTreeBean.class
>> org/apache/zookeeper/server/DataTreeMXBean.class
>> org/apache/zookeeper/server/FinalRequestProcessor.class
>> org/apache/zookeeper/server/LogFormatter.class
>> org/apache/zookeeper/server/NIOServerCnxn$1.class
>> org/apache/zookeeper/server/NIOServerCnxn$CloseRequestException.class
>> org/apache/zookeeper/server/NIOServerCnxn$CnxnStatResetCommand.class
>> org/apache/zookeeper/server/NIOServerCnxn$CnxnStats.class
>> org/apache/zookeeper/server/NIOServerCnxn$CommandThread.class
>> org/apache/zookeeper/server/NIOServerCnxn$ConfCommand.class
>> org/apache/zookeeper/server/NIOServerCnxn$ConsCommand.class
>> org/apache/zookeeper/server/NIOServerCnxn$DumpCommand.class
>> org/apache/zookeeper/server/NIOServerCnxn$EndOfStreamException.class
>> org/apache/zookeeper/server/NIOServerCnxn$EnvCommand.class
>> org/apache/zookeeper/server/NIOServerCnxn$Factory$1.class
>> org/apache/zookeeper/server/NIOServerCnxn$Factory.class
>> org/apache/zookeeper/server/NIOServerCnxn$RuokCommand.class
>> org/apache/zookeeper/server/NIOServerCnxn$SendBufferWriter.class
>> org/apache/zookeeper/server/NIOServerCnxn$SetTraceMaskCommand.class
>> org/apache/zookeeper/server/NIOServerCnxn$StatCommand.class
>> org/apache/zookeeper/server/NIOServerCnxn$StatResetCommand.class
>> org/apache/zookeeper/server/NIOServerCnxn$TraceMaskCommand.class
>> org/apache/zookeeper/server/NIOServerCnxn$WatchCommand.class
>> org/apache/zookeeper/server/NIOServerCnxn.class
>> org/apache/zookeeper/server/ObserverBean.class
>> org/apache/zookeeper/server/PrepRequestProcessor.class
>> org/apache/zookeeper/server/PurgeTxnLog$1MyFileFilter.class
>> org/apache/zookeeper/server/PurgeTxnLog.class
>> org/apache/zookeeper/server/Request.class
>> org/apache/zookeeper/server/RequestProcessor$RequestProcessorException.class
>> org/apache/zookeeper/server/RequestProcessor.class
>> org/apache/zookeeper/server/ServerCnxn$Stats.class
>> org/apache/zookeeper/server/ServerCnxn.class
>> org/apache/zookeeper/server/ServerConfig.class
>> org/apache/zookeeper/server/ServerStats$Provider.class
>> org/apache/zookeeper/server/ServerStats.class
>> org/apache/zookeeper/server/SessionTracker$Session.class
>> org/apache/zookeeper/server/SessionTracker$SessionExpirer.class
>> org/apache/zookeeper/server/SessionTracker.class
>> org/apache/zookeeper/server/SessionTrackerImpl$SessionImpl.class
>> org/apache/zookeeper/server/SessionTrackerImpl$SessionSet.class
>> org/apache/zookeeper/server/SessionTrackerImpl.class
>> org/apache/zookeeper/server/SyncRequestProcessor$1.class
>> org/apache/zookeeper/server/SyncRequestProcessor.class
>> org/apache/zookeeper/server/TraceFormatter.class
>> org/apache/zookeeper/server/WatchManager.class
>> org/apache/zookeeper/server/ZKDatabase$1.class
>> org/apache/zookeeper/server/ZKDatabase.class
>> org/apache/zookeeper/server/ZooKeeperServer$BasicDataTreeBuilder.class
>> org/apache/zookeeper/server/ZooKeeperServer$ChangeRecord.class
>> org/apache/zookeeper/server/ZooKeeperServer$DataTreeBuilder.class
>> org/apache/zookeeper/server/ZooKeeperServer$Factory.class
>> org/apache/zookeeper/server/ZooKeeperServer$MissingSessionException.class
>> org/apache/zookeeper/server/ZooKeeperServer.class
>> org/apache/zookeeper/server/ZooKeeperServerBean.class
>> org/apache/zookeeper/server/ZooKeeperServerMXBean.class
>> org/apache/zookeeper/server/ZooKeeperServerMain.class
>> org/apache/zookeeper/server/ZooTrace.class
>> org/apache/zookeeper/server/auth/AuthenticationProvider.class
>> org/apache/zookeeper/server/auth/DigestAuthenticationProvider.class
>> org/apache/zookeeper/server/auth/IPAuthenticationProvider.class
>> org/apache/zookeeper/server/auth/ProviderRegistry.class
>> org/apache/zookeeper/server/persistence/FileHeader.class
>> org/apache/zookeeper/server/persistence/FileSnap.class
>> org/apache/zookeeper/server/persistence/FileTxnLog$FileTxnIterator.class
>> org/apache/zookeeper/server/persistence/FileTxnLog$PositionInputStream.class
>> org/apache/zookeeper/server/persistence/FileTxnLog.class
>> org/apache/zookeeper/server/persistence/FileTxnSnapLog$PlayBackListener.class
>> org/apache/zookeeper/server/persistence/FileTxnSnapLog.class
>> org/apache/zookeeper/server/persistence/SnapShot.class
>> org/apache/zookeeper/server/persistence/TxnLog$TxnIterator.class
>> org/apache/zookeeper/server/persistence/TxnLog.class
>> org/apache/zookeeper/server/persistence/Util$DataDirFileComparator.class
>> org/apache/zookeeper/server/persistence/Util.class
>> org/apache/zookeeper/server/quorum/AckRequestProcessor.class
>> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$1.class
>> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$Messenger$WorkerReceiver.class
>> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$Messenger$WorkerSender.class
>> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$Messenger.class
>> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$Notification.class
>> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$ToSend$mType.class
>> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$ToSend.class
>> org/apache/zookeeper/server/quorum/AuthFastLeaderElection.class
>> org/apache/zookeeper/server/quorum/CommitProcessor.class
>> org/apache/zookeeper/server/quorum/Election.class
>> org/apache/zookeeper/server/quorum/FastLeaderElection$1.class
>> org/apache/zookeeper/server/quorum/FastLeaderElection$Messenger$WorkerReceiver.class
>> org/apache/zookeeper/server/quorum/FastLeaderElection$Messenger$WorkerSender.class
>> org/apache/zookeeper/server/quorum/FastLeaderElection$Messenger.class
>> org/apache/zookeeper/server/quorum/FastLeaderElection$Notification.class
>> org/apache/zookeeper/server/quorum/FastLeaderElection$ToSend$mType.class
>> org/apache/zookeeper/server/quorum/FastLeaderElection$ToSend.class
>> org/apache/zookeeper/server/quorum/FastLeaderElection.class
>> org/apache/zookeeper/server/quorum/Follower.class
>> org/apache/zookeeper/server/quorum/FollowerBean.class
>> org/apache/zookeeper/server/quorum/FollowerMXBean.class
>> org/apache/zookeeper/server/quorum/FollowerRequestProcessor.class
>> org/apache/zookeeper/server/quorum/FollowerZooKeeperServer.class
>> org/apache/zookeeper/server/quorum/Leader$LearnerCnxAcceptor.class
>> org/apache/zookeeper/server/quorum/Leader$Proposal.class
>> org/apache/zookeeper/server/quorum/Leader$ToBeAppliedRequestProcessor.class
>> org/apache/zookeeper/server/quorum/Leader$XidRolloverException.class
>> org/apache/zookeeper/server/quorum/Leader.class
>> org/apache/zookeeper/server/quorum/LeaderBean.class
>> org/apache/zookeeper/server/quorum/LeaderElection$ElectionResult.class
>> org/apache/zookeeper/server/quorum/LeaderElection.class
>> org/apache/zookeeper/server/quorum/LeaderElectionBean.class
>> org/apache/zookeeper/server/quorum/LeaderElectionMXBean.class
>> org/apache/zookeeper/server/quorum/LeaderMXBean.class
>> org/apache/zookeeper/server/quorum/LeaderZooKeeperServer.class
>> org/apache/zookeeper/server/quorum/Learner$PacketInFlight.class
>> org/apache/zookeeper/server/quorum/Learner.class
>> org/apache/zookeeper/server/quorum/LearnerHandler$1.class
>> org/apache/zookeeper/server/quorum/LearnerHandler.class
>> org/apache/zookeeper/server/quorum/LearnerSessionTracker.class
>> org/apache/zookeeper/server/quorum/LearnerSyncRequest.class
>> org/apache/zookeeper/server/quorum/LearnerZooKeeperServer.class
>> org/apache/zookeeper/server/quorum/LocalPeerBean.class
>> org/apache/zookeeper/server/quorum/LocalPeerMXBean.class
>> org/apache/zookeeper/server/quorum/Observer.class
>> org/apache/zookeeper/server/quorum/ObserverMXBean.class
>> org/apache/zookeeper/server/quorum/ObserverRequestProcessor.class
>> org/apache/zookeeper/server/quorum/ObserverZooKeeperServer.class
>> org/apache/zookeeper/server/quorum/ProposalRequestProcessor.class
>> org/apache/zookeeper/server/quorum/QuorumBean.class
>> org/apache/zookeeper/server/quorum/QuorumCnxManager$Listener.class
>> org/apache/zookeeper/server/quorum/QuorumCnxManager$Message.class
>> org/apache/zookeeper/server/quorum/QuorumCnxManager$RecvWorker.class
>> org/apache/zookeeper/server/quorum/QuorumCnxManager$SendWorker.class
>> org/apache/zookeeper/server/quorum/QuorumCnxManager.class
>> org/apache/zookeeper/server/quorum/QuorumMXBean.class
>> org/apache/zookeeper/server/quorum/QuorumPacket.class
>> org/apache/zookeeper/server/quorum/QuorumPeer$1.class
>> org/apache/zookeeper/server/quorum/QuorumPeer$Factory.class
>> org/apache/zookeeper/server/quorum/QuorumPeer$LearnerType.class
>> org/apache/zookeeper/server/quorum/QuorumPeer$QuorumServer.class
>> org/apache/zookeeper/server/quorum/QuorumPeer$ResponderThread.class
>> org/apache/zookeeper/server/quorum/QuorumPeer$ServerState.class
>> org/apache/zookeeper/server/quorum/QuorumPeer.class
>> org/apache/zookeeper/server/quorum/QuorumPeerConfig$ConfigException.class
>> org/apache/zookeeper/server/quorum/QuorumPeerConfig.class
>> org/apache/zookeeper/server/quorum/QuorumPeerMain.class
>> org/apache/zookeeper/server/quorum/QuorumStats$Provider.class
>> org/apache/zookeeper/server/quorum/QuorumStats.class
>> org/apache/zookeeper/server/quorum/QuorumZooKeeperServer.class
>> org/apache/zookeeper/server/quorum/RemotePeerBean.class
>> org/apache/zookeeper/server/quorum/RemotePeerMXBean.class
>> org/apache/zookeeper/server/quorum/SendAckRequestProcessor.class
>> org/apache/zookeeper/server/quorum/ServerBean.class
>> org/apache/zookeeper/server/quorum/ServerMXBean.class
>> org/apache/zookeeper/server/quorum/Vote.class
>> org/apache/zookeeper/server/quorum/flexible/QuorumHierarchical.class
>> org/apache/zookeeper/server/quorum/flexible/QuorumMaj.class
>> org/apache/zookeeper/server/quorum/flexible/QuorumVerifier.class
>> org/apache/zookeeper/server/upgrade/DataNodeV1.class
>> org/apache/zookeeper/server/upgrade/DataTreeV1$ProcessTxnResult.class
>> org/apache/zookeeper/server/upgrade/DataTreeV1.class
>> org/apache/zookeeper/server/upgrade/UpgradeMain.class
>> org/apache/zookeeper/server/upgrade/UpgradeSnapShot.class
>> org/apache/zookeeper/server/upgrade/UpgradeSnapShotV1.class
>> org/apache/zookeeper/server/util/Profiler$Operation.class
>> org/apache/zookeeper/server/util/Profiler.class
>> org/apache/zookeeper/server/util/SerializeUtils.class
>> org/apache/zookeeper/txn/CreateSessionTxn.class
>> org/apache/zookeeper/txn/CreateTxn.class
>> org/apache/zookeeper/txn/DeleteTxn.class
>> org/apache/zookeeper/txn/ErrorTxn.class
>> org/apache/zookeeper/txn/SetACLTxn.class
>> org/apache/zookeeper/txn/SetDataTxn.class
>> org/apache/zookeeper/txn/SetMaxChildrenTxn.class
>> org/apache/zookeeper/txn/TxnHeader.class
>> org/apache/zookeeper/version/Info.class
>> org/apache/zookeeper/version/util/VerGen$Version.class
>> org/apache/zookeeper/version/util/VerGen.class
>>
>>
>> It seems zookeeper is included, but I still get the same issue after changing all of the above; I really have no idea what to do now.
>>
>> thanks
>>
>> Alec
>>
>> On Aug 5, 2014, at 2:11 PM, Kushan Maskey <ku...@mmillerassociates.com> wrote:
>>
>>> You need to include kafka_2.10-0.8.1.1.jar into your project jar. I had this issue and that resolved it.
>>>
>>> --
>>> Kushan Maskey
>>> 817.403.7500
>>>
>>>
>>> On Tue, Aug 5, 2014 at 3:57 PM, Parth Brahmbhatt <pb...@hortonworks.com> wrote:
>>> I see a NoSuchMethodError; it seems there is some issue with your jar packaging. Can you confirm that the zookeeper dependency is packed into your jar? What versions of curator and zookeeper are you using?
>>>
>>> Thanks
>>> Parth
>>>
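(Editor's note, not part of the original thread, just an illustrative sketch of why Parth's version question matters: the JVM loads a class from the first classpath entry that provides it, so a zookeeper 3.3.6 bundled into the fat jar can shadow the newer zookeeper that curator 2.x expects; as far as I know, the four-argument ZooKeeper constructor named in the stack trace below was only added in the 3.4 line. A Python analogue of that first-match-wins lookup, with made-up jar names:)

```python
def first_match(classpath, class_file):
    """Return the first 'jar' (here: a (name, set-of-entries) pair) that
    provides class_file, the way the JVM walks the classpath in order."""
    for entry_name, entries in classpath:
        if class_file in entries:
            return entry_name
    return None

# Two jars both ship ZooKeeper.class; the old bundled copy is listed first:
classpath = [
    ("fat-jar (bundles zookeeper 3.3.6)", {"org/apache/zookeeper/ZooKeeper.class"}),
    ("zookeeper-3.4.x.jar",               {"org/apache/zookeeper/ZooKeeper.class"}),
]

winner = first_match(classpath, "org/apache/zookeeper/ZooKeeper.class")
# winner is the fat-jar's 3.3.6 copy: it shadows the newer one, so methods
# that exist only in the 3.4.x class throw NoSuchMethodError at runtime.
```

(So checking `mvn dependency:tree`, or which zookeeper ends up inside the jar-with-dependencies, is the natural next step here.)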
>>>
>>> On Tue, Aug 5, 2014 at 1:45 PM, Sa Li <sa...@gmail.com> wrote:
>>> Thanks, Parth. I increased the sleep time to Thread.sleep(150000000), but I still get the same async problem; it seems to be a problem with reading the kafka topic from zookeeper.
>>>
>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69) ~[storm-core-0.9.0.1.jar:na]
>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171) ~[storm-core-0.9.0.1.jar:na]
>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20) ~[storm-core-0.9.0.1.jar:na]
>>> at storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43) ~[storm-core-0.9.0.1.jar:na]
>>> at storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214) ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>> 3100 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.daemon.executor -
>>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27) ~[storm-core-0.9.0.1.jar:na]
>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166) ~[storm-core-0.9.0.1.jar:na]
>>> at storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38) ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26) ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>> 3101 [Thread-29-$mastercoord-bg0] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 3114 [Thread-10] INFO backtype.storm.daemon.worker - Worker has topology config {"storm.id" "kafka-1-1407271290", "dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/0610cc80-25a7-4304-acf0-9ead5f942429", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.kryo.decorators" (), "topology.name" "kafka", "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", 
"topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, "storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" 1, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (4 5 6), "topology.debug" true, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" 3}
>>> 3115 [Thread-10] INFO backtype.storm.daemon.worker - Worker ee9ec3b6-5e13-4329-b12a-c3cffdd7e997 for storm kafka-1-1407271290 on 3aff208c-d065-448d-9026-bf452151d546:4 has finished loading
>>> 3207 [Thread-25-spout0] INFO backtype.storm.util - Halting process: ("Worker died")
>>>
>>> Thanks
>>>
>>> Alec
>>>
>>>
>>> On Aug 5, 2014, at 1:32 PM, Parth Brahmbhatt <pb...@hortonworks.com> wrote:
>>>
>>>> Can you let the topology run for 120 seconds or so? In my experience the Kafka bolt/spout has a lot of latency initially, as it reads from and writes to ZooKeeper and initializes connections. On my Mac it takes about 15 seconds before the spout is actually opened.
>>>>
>>>> Thanks
>>>> Parth
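Parth's suggestion amounts to waiting on a readiness condition instead of sleeping a fixed, guessed interval. A minimal, Storm-independent sketch of such a bounded poll (the `AwaitUtil` class and its names are mine, not part of Storm's API):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class AwaitUtil {
    // Poll `condition` every 500 ms until it returns true or `timeoutMs` elapses.
    // Returns true if the condition became true within the deadline.
    static boolean awaitUntil(BooleanSupplier condition, long timeoutMs)
            throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        while (System.nanoTime() < deadline) {
            if (condition.getAsBoolean()) return true;
            Thread.sleep(500);
        }
        return condition.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        // In the topology's main(), the lambda would be a real readiness check,
        // e.g. a counter incremented from the filter's isKeep(); here it is a
        // stand-in that succeeds immediately.
        System.out.println(awaitUntil(() -> true, 120_000)); // prints true
    }
}
```

This way the local cluster is shut down as soon as the topology has done its work, but no later than the 120-second deadline Parth suggests.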
>>>> On Aug 5, 2014, at 1:11 PM, Sa Li <sa...@gmail.com> wrote:
>>>>
>>>>> If I set the sleep time to 1000 milliseconds, I get this error:
>>>>>
>>>>> 3067 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/0f1851f1-9499-48a5-817e-41712921d054
>>>>> 3163 [Thread-10-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
>>>>> 3163 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
>>>>> 3164 [Thread-10-EventThread] WARN backtype.storm.cluster - Received event :disconnected::none: with disconnected Zookeeper.
>>>>> 3636 [Thread-10-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server null, unexpected error, closing socket connection and attempting reconnect
>>>>> java.net.ConnectException: Connection refused
>>>>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_55]
>>>>> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) ~[na:1.7.0_55]
>>>>> at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) ~[zookeeper-3.3.3.jar:3.3.3-1073969]
>>>>> 4877 [Thread-10-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server null, unexpected error, closing socket connection and attempting reconnect
>>>>> java.net.ConnectException: Connection refused
>>>>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_55]
>>>>> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) ~[na:1.7.0_55]
>>>>> at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) ~[zookeeper-3.3.3.jar:3.3.3-1073969]
>>>>> 5566 [Thread-10-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server null, unexpected error, closing socket connection and attempting reconnect
>>>>> java.net.ConnectException: Connection refused
>>>>>
>>>>> It seems it never even connects to ZooKeeper. Is there a way to confirm the ZooKeeper connection?
>>>>>
>>>>> Thanks a lot
>>>>>
>>>>> Alec
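One way to answer Alec's question without Storm in the loop is ZooKeeper's "ruok" four-letter admin command: a healthy server answers "imok" on its client port. A small stdlib-only probe (class and method names are mine; the thread's logs show ZooKeeper expected on localhost:2000 for the in-process cluster and 2181 for Kafka):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ZkCheck {
    // Send ZooKeeper's "ruok" four-letter command; a healthy server replies "imok".
    static boolean ruok(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            s.getOutputStream().write("ruok".getBytes(StandardCharsets.US_ASCII));
            s.getOutputStream().flush();
            byte[] buf = new byte[4];
            int n = 0;
            while (n < 4) {                       // read the full 4-byte reply
                int r = s.getInputStream().read(buf, n, 4 - n);
                if (r < 0) break;
                n += r;
            }
            return n == 4 && "imok".equals(new String(buf, StandardCharsets.US_ASCII));
        } catch (IOException e) {
            return false; // connection refused, timed out, or not a ZooKeeper
        }
    }

    public static void main(String[] args) {
        System.out.println(ruok("localhost", 2181, 2000));
    }
}
```

If this returns false for the host/port the topology is configured with, the "Connection refused" errors above are a ZooKeeper availability problem, not a Storm one.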
>>>>>
>>>>> On Aug 5, 2014, at 12:58 PM, Sa Li <sa...@gmail.com> wrote:
>>>>>
>>>>>> Thank you very much for your reply, Taylor. I tried increasing the sleep time to 1 second and then to 10 seconds, but I got the error below; it looks like an async-loop error. Any idea what causes it?
>>>>>>
>>>>>> 3053 [Thread-19-$spoutcoord-spout0] INFO org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>> 3058 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.util - Async loop died!
>>>>>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>>>>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>>>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>>>>> 3058 [Thread-25-spout0] ERROR backtype.storm.util - Async loop died!
>>>>>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>>>>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>>>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>>>>> 3059 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.daemon.executor -
>>>>>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>>>>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>>>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>>>>> 3059 [Thread-25-spout0] ERROR backtype.storm.daemon.executor -
>>>>>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>>>>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>>>>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>>>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>>>>> 3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker has topology config {"storm.id" "kafka-1-1407268492", "dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/ca948198-69df-440b-8acb-6dfc4db6c288", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.kryo.decorators" (), "topology.name" "kafka", "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", 
"topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, "storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" true, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>>> 3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker 64335058-7f94-447f-bc0a-5107084789a0 for storm kafka-1-1407268492 on cf2964b3-7655-4a33-88a1-f6e0ceb6f9ed:1 has finished loading
>>>>>> 3164 [Thread-29-$mastercoord-bg0] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>> 3173 [Thread-25-spout0] INFO backtype.storm.util - Halting process: ("Worker died")
>>>>>> 3173 [Thread-19-$spoutcoord-spout0] INFO backtype.storm.util - Halting process: ("Worker died")
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> Alec
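The NoSuchMethodError above names a specific missing constructor, ZooKeeper(String, int, Watcher, boolean), which suggests the zookeeper jar on the classpath is older than the one Curator was compiled against (the later classpath dump shows Storm 0.9.0.1 shipping zookeeper-3.3.3.jar, while the fat jar bundles org.apache.curator). A small reflection probe can confirm which variant is actually visible at runtime; this sketch is mine, not part of either library:

```java
public class CtorProbe {
    // True if `className` is loadable AND declares a constructor with exactly
    // the named parameter types; false if the class or constructor is absent.
    static boolean hasCtor(String className, String... paramTypeNames) {
        try {
            Class<?>[] params = new Class<?>[paramTypeNames.length];
            for (int i = 0; i < params.length; i++) {
                params[i] = typeForName(paramTypeNames[i]);
            }
            Class.forName(className).getDeclaredConstructor(params);
            return true;
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return false;
        }
    }

    // Resolve primitive type names that Class.forName cannot handle.
    static Class<?> typeForName(String name) throws ClassNotFoundException {
        switch (name) {
            case "int":     return int.class;
            case "boolean": return boolean.class;
            default:        return Class.forName(name);
        }
    }

    public static void main(String[] args) {
        // With zookeeper <= 3.3.x on the classpath this prints false;
        // with 3.4+ it prints true, matching what Curator's factory calls.
        System.out.println(hasCtor("org.apache.zookeeper.ZooKeeper",
                "java.lang.String", "int", "org.apache.zookeeper.Watcher", "boolean"));
    }
}
```

Running this inside the worker's classpath (or simply aligning the zookeeper version in storm's lib directory with the one Curator expects) would rule the version mismatch in or out.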
>>>>>>
>>>>>> On Aug 5, 2014, at 10:26 AM, P. Taylor Goetz <pt...@gmail.com> wrote:
>>>>>>
>>>>>>> You are only sleeping for 100 milliseconds before shutting down the local cluster, which is probably not long enough for the topology to come up and start processing messages. Try increasing the sleep time to something like 10 seconds.
>>>>>>>
>>>>>>> You can also reduce startup time with the following JVM flag:
>>>>>>>
>>>>>>> -Djava.net.preferIPv4Stack=true
>>>>>>>
>>>>>>> - Taylor
>>>>>>>
>>>>>>> On Aug 5, 2014, at 1:16 PM, Sa Li <sa...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Sorry, here is the StormTopology:
>>>>>>>>
>>>>>>>>> TridentTopology topology = new TridentTopology();
>>>>>>>>> BrokerHosts zk = new ZkHosts("localhost");
>>>>>>>>> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "topictest");
>>>>>>>>> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>>>>>>>> OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Aug 5, 2014, at 9:56 AM, Sa Li <sa...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Thank you very much, Marcelo, it indeed worked; now I can run my code without errors. However, another thing keeps bothering me. Here is my code:
>>>>>>>>>
>>>>>>>>> public static class PrintStream implements Filter {
>>>>>>>>>
>>>>>>>>>     @SuppressWarnings("rawtypes")
>>>>>>>>>     @Override
>>>>>>>>>     public void prepare(Map conf, TridentOperationContext context) {
>>>>>>>>>     }
>>>>>>>>>
>>>>>>>>>     @Override
>>>>>>>>>     public void cleanup() {
>>>>>>>>>     }
>>>>>>>>>
>>>>>>>>>     @Override
>>>>>>>>>     public boolean isKeep(TridentTuple tuple) {
>>>>>>>>>         System.out.println(tuple);
>>>>>>>>>         return true;
>>>>>>>>>     }
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> public static StormTopology buildTopology(LocalDRPC drpc) throws IOException {
>>>>>>>>>
>>>>>>>>>     TridentTopology topology = new TridentTopology();
>>>>>>>>>     BrokerHosts zk = new ZkHosts("localhost");
>>>>>>>>>     TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "ingest_test");
>>>>>>>>>     spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>>>>>>>>     OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>>>>>>>>>
>>>>>>>>>     topology.newStream("kafka", spout)
>>>>>>>>>             .each(new Fields("str"),
>>>>>>>>>                   new PrintStream()
>>>>>>>>>             );
>>>>>>>>>
>>>>>>>>>     return topology.build();
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> public static void main(String[] args) throws Exception {
>>>>>>>>>
>>>>>>>>>     Config conf = new Config();
>>>>>>>>>     conf.setDebug(true);
>>>>>>>>>     conf.setMaxSpoutPending(1);
>>>>>>>>>     conf.setMaxTaskParallelism(3);
>>>>>>>>>     LocalDRPC drpc = new LocalDRPC();
>>>>>>>>>     LocalCluster cluster = new LocalCluster();
>>>>>>>>>     cluster.submitTopology("kafka", conf, buildTopology(drpc));
>>>>>>>>>     Thread.sleep(100);
>>>>>>>>>     cluster.shutdown();
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> What I expect is quite simple: print out the messages collected from a Kafka producer playback process that is running separately. The topic is listed as:
>>>>>>>>>
>>>>>>>>> root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper localhost:2181
>>>>>>>>> topic: topictest partition: 0 leader: 1 replicas: 1,3,2 isr: 1,3,2
>>>>>>>>> topic: topictest partition: 1 leader: 2 replicas: 2,1,3 isr: 2,1,3
>>>>>>>>> topic: topictest partition: 2 leader: 3 replicas: 3,2,1 isr: 3,2,1
>>>>>>>>> topic: topictest partition: 3 leader: 1 replicas: 1,2,3 isr: 1,2,3
>>>>>>>>> topic: topictest partition: 4 leader: 2 replicas: 2,3,1 isr: 2,3,1
>>>>>>>>>
>>>>>>>>> When I run the code, this is what I see on the screen; there seems to be no error, but no messages are printed either:
>>>>>>>>>
>>>>>>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>>>>>>> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>>>>>> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>>>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>>>>>>>> Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1 -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file= -cp /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.
0.1/lib/guava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/etc/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>>>>>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>>>>>>> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>>>>>> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>>>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>>>>>>>> 1113 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>>>>>>>> 1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>>>>>> 1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
>>>>>>>>> 1237 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>>> 1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>>>>> 1350 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>>> 1417 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>>> 1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>>>>> 1482 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>>> 1484 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>>> 1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>>>>> 1540 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>>> 1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>>>>>> 1576 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>>> 1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>>>>> 1590 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>>> 1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
>>>>>>>>> 1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>>>>>> 1638 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>>> 1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>>>>> 1690 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>>> 1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
>>>>>>>>> 1944 [main] INFO backtype.storm.daemon.nimbus - Received topology submission for kafka with conf {"topology.max.task.parallelism" nil, "topology.acker.executors" nil, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id" "kafka-1-1407257070", "topology.debug" true}
>>>>>>>>> 1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka: kafka-1-1407257070
>>>>>>>>> 2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 3] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
>>>>>>>>> 2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment for topology id kafka-1-1407257070: #backtype.storm.daemon.common.Assignment{:master-code-dir "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070", :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"}, :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3] 1407257070}}
>>>>>>>>> 2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
>>>>>>>>> 2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
>>>>>>>>> 2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
>>>>>>>>> 2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
>>>>>>>>> 2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
>>>>>>>>> 2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
>>>>>>>>> 2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
>>>>>>>>> 2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
>>>>>>>>> 2256 [main] INFO backtype.storm.testing - Shutting down in process zookeeper
>>>>>>>>> 2257 [main] INFO backtype.storm.testing - Done shutting down in process zookeeper
>>>>>>>>> 2258 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
>>>>>>>>> 2259 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>>>>>>>> 2260 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
>>>>>>>>> 2261 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
>>>>>>>>>
>>>>>>>>> Can anyone help me locate the problem? I really need to get past this step so that I can replace .each(printStream()) with other functions.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>> Alec
>>>>>>>>>
>>>>>>>>> On Aug 4, 2014, at 4:24 AM, Marcelo Valle <mv...@redoop.org> wrote:
>>>>>>>>>
>>>>>>>>>> hello,
>>>>>>>>>>
>>>>>>>>>> you can check your .jar application with the command " jar tf " to see if the class kafka/api/OffsetRequest.class is part of the jar.
>>>>>>>>>> If not, you can try copying kafka-2.9.2-0.8.0.jar (or the version you are using) into Storm's lib directory
>>>>>>>>>>
>>>>>>>>>> Marcelo
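Marcelo's "jar tf" check can also be scripted. A jar is just a zip archive, so the sketch below uses only Python's standard zipfile module; the helper name jar_contains is made up for illustration, and the jar path is whatever topology jar you built:

```python
# Scripted equivalent of `jar tf <jar> | grep OffsetRequest`:
# a jar is a zip archive, so zipfile can list its entries directly.
import zipfile

def jar_contains(jar_path, entry):
    """True if `entry` (e.g. 'kafka/api/OffsetRequest.class') is packaged in the jar."""
    with zipfile.ZipFile(jar_path) as jar:
        return entry in jar.namelist()
```

If jar_contains(your_jar, "kafka/api/OffsetRequest.class") comes back False, the kafka classes were not bundled, which matches the NoClassDefFoundError above.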
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> 2014-07-31 23:33 GMT+02:00 Sa Li <sa...@gmail.com>:
>>>>>>>>>> Hi, all
>>>>>>>>>>
>>>>>>>>>> I am running a kafka-spout code in storm-server, the pom is
>>>>>>>>>>
>>>>>>>>>> <groupId>org.apache.kafka</groupId>
>>>>>>>>>> <artifactId>kafka_2.9.2</artifactId>
>>>>>>>>>> <version>0.8.0</version>
>>>>>>>>>> <scope>provided</scope>
>>>>>>>>>>
>>>>>>>>>> <exclusions>
>>>>>>>>>> <exclusion>
>>>>>>>>>> <groupId>org.apache.zookeeper</groupId>
>>>>>>>>>> <artifactId>zookeeper</artifactId>
>>>>>>>>>> </exclusion>
>>>>>>>>>> <exclusion>
>>>>>>>>>> <groupId>log4j</groupId>
>>>>>>>>>> <artifactId>log4j</artifactId>
>>>>>>>>>> </exclusion>
>>>>>>>>>> </exclusions>
>>>>>>>>>>
>>>>>>>>>> </dependency>
>>>>>>>>>>
>>>>>>>>>> <!-- Storm-Kafka compiled -->
>>>>>>>>>>
>>>>>>>>>> <dependency>
>>>>>>>>>> <artifactId>storm-kafka</artifactId>
>>>>>>>>>> <groupId>org.apache.storm</groupId>
>>>>>>>>>> <version>0.9.2-incubating</version>
>>>>>>>>>> <scope>compile</scope>
>>>>>>>>>> </dependency>
>>>>>>>>>>
>>>>>>>>>> I can mvn package it, but when I run it
>>>>>>>>>> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# storm jar target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I am getting such error
>>>>>>>>>>
>>>>>>>>>> 1657 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>>>> 1682 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id a66e0c61-a951-4c1b-a43f-3fb0d12cb226 at host DO-mq-dev
>>>>>>>>>> 1698 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread Thread[main,5,main] died
>>>>>>>>>> java.lang.NoClassDefFoundError: kafka/api/OffsetRequest
>>>>>>>>>> at storm.artemis.kafka.KafkaConfig.<init>(KafkaConfig.java:26) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>>>>> at storm.artemis.kafka.trident.TridentKafkaConfig.<init>(TridentKafkaConfig.java:13) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>>>>> at storm.artemis.KafkaConsumerTopology.buildTopology(KafkaConsumerTopology.java:115) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>>>>> at storm.artemis.KafkaConsumerTopology.main(KafkaConsumerTopology.java:144) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>>>>> Caused by: java.lang.ClassNotFoundException: kafka.api.OffsetRequest
>>>>>>>>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_55]
>>>>>>>>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_55]
>>>>>>>>>> at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_55]
>>>>>>>>>> at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[na:1.7.0_55]
>>>>>>>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_55]
>>>>>>>>>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[na:1.7.0_55]
>>>>>>>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_55]
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I tried poking around online but could not find a solution. Any ideas?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Thanks
>>>>>>>>>>
>>>>>>>>>> Alec
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>> CONFIDENTIALITY NOTICE
>>>> NOTICE: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is confidential, privileged and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any printing, copying, dissemination, distribution, disclosure or forwarding of this communication is strictly prohibited. If you have received this communication in error, please contact the sender immediately and delete it from your system. Thank You.
>>>
>>>
>>>
>>>
>>> --
>>> Thanks
>>> Parth
>>>
>>>
>>
>
>
Re: kafka-spout running error
Posted by Kushan Maskey <ku...@mmillerassociates.com>.
Are you creating a jar to be deployed on your server? If so, you will
need to declare the kafka dependency with compile scope so that it gets
bundled into your storm jar.
You can try this and see if it helps:
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.10</artifactId>
<version>0.8.1.1</version>
<scope>compile</scope>
</dependency>
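The scope matters because Maven's assembly plugin leaves provided-scope artifacts out of the jar-with-dependencies, which is exactly how kafka/api/OffsetRequest goes missing at runtime. One rough way to audit this, sketched here with Python's standard xml.etree parser on a made-up minimal pom (a missing <scope> defaults to compile in Maven):

```python
# List each dependency's effective scope from a pom. The inline POM
# string below is an invented minimal example, not the poster's real pom.
import xml.etree.ElementTree as ET

POM = """<project xmlns="http://maven.apache.org/POM/4.0.0">
  <dependencies>
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka_2.10</artifactId>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.storm</groupId>
      <artifactId>storm-kafka</artifactId>
      <scope>compile</scope>
    </dependency>
  </dependencies>
</project>"""

def dependency_scopes(pom_xml):
    """Map artifactId -> scope; absent <scope> defaults to 'compile'."""
    ns = {"m": "http://maven.apache.org/POM/4.0.0"}
    root = ET.fromstring(pom_xml)
    scopes = {}
    for dep in root.findall(".//m:dependency", ns):
        artifact = dep.findtext("m:artifactId", namespaces=ns)
        scopes[artifact] = dep.findtext("m:scope", default="compile",
                                        namespaces=ns)
    return scopes
```

Anything reported as "provided" must already be on the server's classpath (e.g. in Storm's lib directory) or the topology will fail exactly as above.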
--
Kushan Maskey
817.403.7500
On Tue, Aug 5, 2014 at 4:45 PM, Sa Li <sa...@gmail.com> wrote:
> This is my complete pom
>
> <dependency>
> <groupId>org.json</groupId>
> <artifactId>json</artifactId>
> <version>20140107</version>
> </dependency>
>
>
> <!-- Slf4j Logger -->
>
> <dependency>
> <groupId>org.slf4j</groupId>
> <artifactId>slf4j-simple</artifactId>
> <version>1.7.2</version>
> </dependency>
> <dependency>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> <version>1.2.17</version>
> </dependency>
>
>
> <!-- Scala 2.9.2 -->
> <dependency>
> <groupId>org.scala-lang</groupId>
> <artifactId>scala-library</artifactId>
> <version>2.9.2</version>
> </dependency>
>
> <dependency>
> <groupId>org.mockito</groupId>
> <artifactId>mockito-all</artifactId>
> <version>1.9.0</version>
> <scope>test</scope>
> </dependency>
> <dependency>
> <groupId>junit</groupId>
> <artifactId>junit</artifactId>
> <version>4.11</version>
> <scope>test</scope>
> </dependency>
>
>
> <dependency>
> <groupId>org.apache.curator</groupId>
> <artifactId>curator-framework</artifactId>
> <version>2.6.0</version>
> <exclusions>
> <exclusion>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> </exclusion>
> <exclusion>
> <groupId>org.slf4j</groupId>
> <artifactId>slf4j-log4j12</artifactId>
> </exclusion>
> </exclusions>
> </dependency>
> <dependency>
> <groupId>org.apache.curator</groupId>
> <artifactId>curator-recipes</artifactId>
> <version>2.6.0</version>
> <exclusions>
> <exclusion>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> </exclusion>
> </exclusions>
> <scope>test</scope>
> </dependency>
> <dependency>
> <groupId>org.apache.curator</groupId>
> <artifactId>curator-test</artifactId>
> <version>2.6.0</version>
> <exclusions>
> <exclusion>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> </exclusion>
> <exclusion>
> <groupId>org.testng</groupId>
> <artifactId>testng</artifactId>
> </exclusion>
> </exclusions>
> <scope>test</scope>
> </dependency>
>
>
> <dependency>
> <groupId>org.apache.zookeeper</groupId>
> <artifactId>zookeeper</artifactId>
> <version>3.3.6</version>
> <exclusions>
> <exclusion>
> <groupId>com.sun.jmx</groupId>
> <artifactId>jmxri</artifactId>
> </exclusion>
> <exclusion>
> <groupId>com.sun.jdmk</groupId>
> <artifactId>jmxtools</artifactId>
> </exclusion>
> <exclusion>
> <groupId>javax.jms</groupId>
> <artifactId>jms</artifactId>
> </exclusion>
> </exclusions>
> </dependency>
>
>
> <!-- Kafka 0.8.1.1 compiled for Scala 2.10 -->
>
> <dependency>
> <groupId>org.apache.kafka</groupId>
> <artifactId>kafka_2.10</artifactId>
> <version>0.8.1.1</version>
> <scope>provided</scope>
>
> <exclusions>
> <exclusion>
> <groupId>org.apache.zookeeper</groupId>
> <artifactId>zookeeper</artifactId>
> </exclusion>
> <exclusion>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> </exclusion>
> </exclusions>
>
>
> </dependency>
>
>
> <!-- Storm-Kafka compiled -->
>
> <dependency>
> <artifactId>storm-kafka</artifactId>
> <groupId>org.apache.storm</groupId>
> <version>0.9.2-incubating</version>
> <scope>compile</scope>
> </dependency>
> <!--
> <dependency>
> <groupId>storm</groupId>
> <artifactId>storm-kafka</artifactId>
> <version>0.9.0-wip16a-scala292</version>
> </dependency>
> -->
> <dependency>
> <groupId>org.testng</groupId>
> <artifactId>testng</artifactId>
> <version>6.8.5</version>
> <scope>test</scope>
> </dependency>
> <dependency>
> <groupId>org.easytesting</groupId>
> <artifactId>fest-assert-core</artifactId>
> <version>2.0M8</version>
> <scope>test</scope>
> </dependency>
> <dependency>
> <groupId>org.jmock</groupId>
> <artifactId>jmock</artifactId>
> <version>2.6.0</version>
> <scope>test</scope>
> </dependency>
>
> <dependency>
> <groupId>storm</groupId>
> <artifactId>storm</artifactId>
> <version>0.9.0.1</version>
> <!-- keep storm out of the jar-with-dependencies -->
> <scope>provided</scope>
> </dependency>
>
> <dependency>
> <groupId>commons-collections</groupId>
> <artifactId>commons-collections</artifactId>
> <version>3.2.1</version>
> </dependency>
> <dependency>
> <groupId>com.google.guava</groupId>
> <artifactId>guava</artifactId>
> <version>15.0</version>
> </dependency>
> </dependencies>
>
>
> On Aug 5, 2014, at 2:41 PM, Sa Li <sa...@gmail.com> wrote:
>
> Thanks, Kushan and Parth. I tried to solve the problem as you both
> suggested: first I changed the kafka version in the pom and recompiled,
> and I also copied kafka_2.10-0.8.1.1.jar from M2_REPO into the storm lib
> directory. Here is my pom:
>
> <dependency>
> <groupId>org.apache.curator</groupId>
> <artifactId>curator-framework</artifactId>
> <version>2.6.0</version>
> <exclusions>
> <exclusion>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> </exclusion>
> <exclusion>
> <groupId>org.slf4j</groupId>
> <artifactId>slf4j-log4j12</artifactId>
> </exclusion>
> </exclusions>
> </dependency>
> <dependency>
> <groupId>org.apache.curator</groupId>
> <artifactId>curator-recipes</artifactId>
> <version>2.6.0</version>
> <exclusions>
> <exclusion>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> </exclusion>
> </exclusions>
> <scope>test</scope>
> </dependency>
> <dependency>
> <groupId>org.apache.curator</groupId>
> <artifactId>curator-test</artifactId>
> <version>2.6.0</version>
> <exclusions>
> <exclusion>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> </exclusion>
> <exclusion>
> <groupId>org.testng</groupId>
> <artifactId>testng</artifactId>
> </exclusion>
> </exclusions>
> <scope>test</scope>
> </dependency>
>
>
> <dependency>
> <groupId>org.apache.zookeeper</groupId>
> <artifactId>zookeeper</artifactId>
> <version>3.3.6</version>
> <exclusions>
> <exclusion>
> <groupId>com.sun.jmx</groupId>
> <artifactId>jmxri</artifactId>
> </exclusion>
> <exclusion>
> <groupId>com.sun.jdmk</groupId>
> <artifactId>jmxtools</artifactId>
> </exclusion>
> <exclusion>
> <groupId>javax.jms</groupId>
> <artifactId>jms</artifactId>
> </exclusion>
> </exclusions>
> </dependency>
>
> <!-- Kafka 0.8.1.1 compiled for Scala 2.10 -->
>
> <dependency>
> <groupId>org.apache.kafka</groupId>
> <artifactId>kafka_2.10</artifactId>
> <version>0.8.1.1</version>
> <scope>provided</scope>
>
> <exclusions>
> <exclusion>
> <groupId>org.apache.zookeeper</groupId>
> <artifactId>zookeeper</artifactId>
> </exclusion>
> <exclusion>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> </exclusion>
> </exclusions>
>
> </dependency>
>
>
> Here you can see the zookeeper version is 3.3.6 (I downgraded it because
> otherwise a "java.lang.ClassNotFoundException:
> org.apache.zookeeper.server.NIOServerCnxn$Factory at java.net" error came
> up), and the curator version is 2.6.0. I ran jar tf on the project jar to
> see the classes included:
> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# jar tf
> target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar | grep
> zookeeper
> org/apache/zookeeper/
> org/apache/zookeeper/client/
> org/apache/zookeeper/common/
> org/apache/zookeeper/data/
> org/apache/zookeeper/jmx/
> org/apache/zookeeper/proto/
> org/apache/zookeeper/server/
> org/apache/zookeeper/server/auth/
> org/apache/zookeeper/server/persistence/
> org/apache/zookeeper/server/quorum/
> org/apache/zookeeper/server/quorum/flexible/
> org/apache/zookeeper/server/upgrade/
> org/apache/zookeeper/server/util/
> org/apache/zookeeper/txn/
> org/apache/zookeeper/version/
> org/apache/zookeeper/version/util/
> org/apache/zookeeper/AsyncCallback$ACLCallback.class
> org/apache/zookeeper/AsyncCallback$Children2Callback.class
> org/apache/zookeeper/AsyncCallback$ChildrenCallback.class
> org/apache/zookeeper/AsyncCallback$DataCallback.class
> org/apache/zookeeper/AsyncCallback$StatCallback.class
> org/apache/zookeeper/AsyncCallback$StringCallback.class
> org/apache/zookeeper/AsyncCallback$VoidCallback.class
> org/apache/zookeeper/AsyncCallback.class
> org/apache/zookeeper/ClientCnxn$1.class
> org/apache/zookeeper/ClientCnxn$2.class
> org/apache/zookeeper/ClientCnxn$AuthData.class
> org/apache/zookeeper/ClientCnxn$EndOfStreamException.class
> org/apache/zookeeper/ClientCnxn$EventThread.class
> org/apache/zookeeper/ClientCnxn$Packet.class
> org/apache/zookeeper/ClientCnxn$SendThread.class
> org/apache/zookeeper/ClientCnxn$SessionExpiredException.class
> org/apache/zookeeper/ClientCnxn$SessionTimeoutException.class
> org/apache/zookeeper/ClientCnxn$WatcherSetEventPair.class
> org/apache/zookeeper/ClientCnxn.class
> org/apache/zookeeper/ClientWatchManager.class
> org/apache/zookeeper/CreateMode.class
> org/apache/zookeeper/Environment$Entry.class
> org/apache/zookeeper/Environment.class
> org/apache/zookeeper/JLineZNodeCompletor.class
> org/apache/zookeeper/KeeperException$1.class
> org/apache/zookeeper/KeeperException$APIErrorException.class
> org/apache/zookeeper/KeeperException$AuthFailedException.class
> org/apache/zookeeper/KeeperException$BadArgumentsException.class
> org/apache/zookeeper/KeeperException$BadVersionException.class
> org/apache/zookeeper/KeeperException$Code.class
> org/apache/zookeeper/KeeperException$CodeDeprecated.class
> org/apache/zookeeper/KeeperException$ConnectionLossException.class
> org/apache/zookeeper/KeeperException$DataInconsistencyException.class
> org/apache/zookeeper/KeeperException$InvalidACLException.class
> org/apache/zookeeper/KeeperException$InvalidCallbackException.class
> org/apache/zookeeper/KeeperException$MarshallingErrorException.class
> org/apache/zookeeper/KeeperException$NoAuthException.class
> org/apache/zookeeper/KeeperException$NoChildrenForEphemeralsException.class
> org/apache/zookeeper/KeeperException$NoNodeException.class
> org/apache/zookeeper/KeeperException$NodeExistsException.class
> org/apache/zookeeper/KeeperException$NotEmptyException.class
> org/apache/zookeeper/KeeperException$OperationTimeoutException.class
> org/apache/zookeeper/KeeperException$RuntimeInconsistencyException.class
> org/apache/zookeeper/KeeperException$SessionExpiredException.class
> org/apache/zookeeper/KeeperException$SessionMovedException.class
> org/apache/zookeeper/KeeperException$SystemErrorException.class
> org/apache/zookeeper/KeeperException$UnimplementedException.class
> org/apache/zookeeper/KeeperException.class
> org/apache/zookeeper/Quotas.class
> org/apache/zookeeper/ServerAdminClient.class
> org/apache/zookeeper/StatsTrack.class
> org/apache/zookeeper/Version.class
> org/apache/zookeeper/WatchedEvent.class
> org/apache/zookeeper/Watcher$Event$EventType.class
> org/apache/zookeeper/Watcher$Event$KeeperState.class
> org/apache/zookeeper/Watcher$Event.class
> org/apache/zookeeper/Watcher.class
> org/apache/zookeeper/ZooDefs$Ids.class
> org/apache/zookeeper/ZooDefs$OpCode.class
> org/apache/zookeeper/ZooDefs$Perms.class
> org/apache/zookeeper/ZooDefs.class
> org/apache/zookeeper/ZooKeeper$1.class
> org/apache/zookeeper/ZooKeeper$ChildWatchRegistration.class
> org/apache/zookeeper/ZooKeeper$DataWatchRegistration.class
> org/apache/zookeeper/ZooKeeper$ExistsWatchRegistration.class
> org/apache/zookeeper/ZooKeeper$States.class
> org/apache/zookeeper/ZooKeeper$WatchRegistration.class
> org/apache/zookeeper/ZooKeeper$ZKWatchManager.class
> org/apache/zookeeper/ZooKeeper.class
> org/apache/zookeeper/ZooKeeperMain$1.class
> org/apache/zookeeper/ZooKeeperMain$MyCommandOptions.class
> org/apache/zookeeper/ZooKeeperMain$MyWatcher.class
> org/apache/zookeeper/ZooKeeperMain.class
> org/apache/zookeeper/client/FourLetterWordMain.class
> org/apache/zookeeper/common/PathTrie$1.class
> org/apache/zookeeper/common/PathTrie$TrieNode.class
> org/apache/zookeeper/common/PathTrie.class
> org/apache/zookeeper/common/PathUtils.class
> org/apache/zookeeper/data/ACL.class
> org/apache/zookeeper/data/Id.class
> org/apache/zookeeper/data/Stat.class
> org/apache/zookeeper/data/StatPersisted.class
> org/apache/zookeeper/data/StatPersistedV1.class
> org/apache/zookeeper/jmx/CommonNames.class
> org/apache/zookeeper/jmx/MBeanRegistry.class
> org/apache/zookeeper/jmx/ManagedUtil.class
> org/apache/zookeeper/jmx/ZKMBeanInfo.class
> org/apache/zookeeper/proto/AuthPacket.class
> org/apache/zookeeper/proto/ConnectRequest.class
> org/apache/zookeeper/proto/ConnectResponse.class
> org/apache/zookeeper/proto/CreateRequest.class
> org/apache/zookeeper/proto/CreateResponse.class
> org/apache/zookeeper/proto/DeleteRequest.class
> org/apache/zookeeper/proto/ExistsRequest.class
> org/apache/zookeeper/proto/ExistsResponse.class
> org/apache/zookeeper/proto/GetACLRequest.class
> org/apache/zookeeper/proto/GetACLResponse.class
> org/apache/zookeeper/proto/GetChildren2Request.class
> org/apache/zookeeper/proto/GetChildren2Response.class
> org/apache/zookeeper/proto/GetChildrenRequest.class
> org/apache/zookeeper/proto/GetChildrenResponse.class
> org/apache/zookeeper/proto/GetDataRequest.class
> org/apache/zookeeper/proto/GetDataResponse.class
> org/apache/zookeeper/proto/GetMaxChildrenRequest.class
> org/apache/zookeeper/proto/GetMaxChildrenResponse.class
> org/apache/zookeeper/proto/ReplyHeader.class
> org/apache/zookeeper/proto/RequestHeader.class
> org/apache/zookeeper/proto/SetACLRequest.class
> org/apache/zookeeper/proto/SetACLResponse.class
> org/apache/zookeeper/proto/SetDataRequest.class
> org/apache/zookeeper/proto/SetDataResponse.class
> org/apache/zookeeper/proto/SetMaxChildrenRequest.class
> org/apache/zookeeper/proto/SetWatches.class
> org/apache/zookeeper/proto/SyncRequest.class
> org/apache/zookeeper/proto/SyncResponse.class
> org/apache/zookeeper/proto/WatcherEvent.class
> org/apache/zookeeper/proto/op_result_t.class
> org/apache/zookeeper/server/ByteBufferInputStream.class
> org/apache/zookeeper/server/ConnectionBean.class
> org/apache/zookeeper/server/ConnectionMXBean.class
> org/apache/zookeeper/server/DataNode.class
> org/apache/zookeeper/server/DataTree$1.class
> org/apache/zookeeper/server/DataTree$Counts.class
> org/apache/zookeeper/server/DataTree$ProcessTxnResult.class
> org/apache/zookeeper/server/DataTree.class
> org/apache/zookeeper/server/DataTreeBean.class
> org/apache/zookeeper/server/DataTreeMXBean.class
> org/apache/zookeeper/server/FinalRequestProcessor.class
> org/apache/zookeeper/server/LogFormatter.class
> org/apache/zookeeper/server/NIOServerCnxn$1.class
> org/apache/zookeeper/server/NIOServerCnxn$CloseRequestException.class
> org/apache/zookeeper/server/NIOServerCnxn$CnxnStatResetCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$CnxnStats.class
> org/apache/zookeeper/server/NIOServerCnxn$CommandThread.class
> org/apache/zookeeper/server/NIOServerCnxn$ConfCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$ConsCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$DumpCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$EndOfStreamException.class
> org/apache/zookeeper/server/NIOServerCnxn$EnvCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$Factory$1.class
> org/apache/zookeeper/server/NIOServerCnxn$Factory.class
> org/apache/zookeeper/server/NIOServerCnxn$RuokCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$SendBufferWriter.class
> org/apache/zookeeper/server/NIOServerCnxn$SetTraceMaskCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$StatCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$StatResetCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$TraceMaskCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$WatchCommand.class
> org/apache/zookeeper/server/NIOServerCnxn.class
> org/apache/zookeeper/server/ObserverBean.class
> org/apache/zookeeper/server/PrepRequestProcessor.class
> org/apache/zookeeper/server/PurgeTxnLog$1MyFileFilter.class
> org/apache/zookeeper/server/PurgeTxnLog.class
> org/apache/zookeeper/server/Request.class
> org/apache/zookeeper/server/RequestProcessor$RequestProcessorException.class
> org/apache/zookeeper/server/RequestProcessor.class
> org/apache/zookeeper/server/ServerCnxn$Stats.class
> org/apache/zookeeper/server/ServerCnxn.class
> org/apache/zookeeper/server/ServerConfig.class
> org/apache/zookeeper/server/ServerStats$Provider.class
> org/apache/zookeeper/server/ServerStats.class
> org/apache/zookeeper/server/SessionTracker$Session.class
> org/apache/zookeeper/server/SessionTracker$SessionExpirer.class
> org/apache/zookeeper/server/SessionTracker.class
> org/apache/zookeeper/server/SessionTrackerImpl$SessionImpl.class
> org/apache/zookeeper/server/SessionTrackerImpl$SessionSet.class
> org/apache/zookeeper/server/SessionTrackerImpl.class
> org/apache/zookeeper/server/SyncRequestProcessor$1.class
> org/apache/zookeeper/server/SyncRequestProcessor.class
> org/apache/zookeeper/server/TraceFormatter.class
> org/apache/zookeeper/server/WatchManager.class
> org/apache/zookeeper/server/ZKDatabase$1.class
> org/apache/zookeeper/server/ZKDatabase.class
> org/apache/zookeeper/server/ZooKeeperServer$BasicDataTreeBuilder.class
> org/apache/zookeeper/server/ZooKeeperServer$ChangeRecord.class
> org/apache/zookeeper/server/ZooKeeperServer$DataTreeBuilder.class
> org/apache/zookeeper/server/ZooKeeperServer$Factory.class
> org/apache/zookeeper/server/ZooKeeperServer$MissingSessionException.class
> org/apache/zookeeper/server/ZooKeeperServer.class
> org/apache/zookeeper/server/ZooKeeperServerBean.class
> org/apache/zookeeper/server/ZooKeeperServerMXBean.class
> org/apache/zookeeper/server/ZooKeeperServerMain.class
> org/apache/zookeeper/server/ZooTrace.class
> org/apache/zookeeper/server/auth/AuthenticationProvider.class
> org/apache/zookeeper/server/auth/DigestAuthenticationProvider.class
> org/apache/zookeeper/server/auth/IPAuthenticationProvider.class
> org/apache/zookeeper/server/auth/ProviderRegistry.class
> org/apache/zookeeper/server/persistence/FileHeader.class
> org/apache/zookeeper/server/persistence/FileSnap.class
> org/apache/zookeeper/server/persistence/FileTxnLog$FileTxnIterator.class
> org/apache/zookeeper/server/persistence/FileTxnLog$PositionInputStream.class
> org/apache/zookeeper/server/persistence/FileTxnLog.class
> org/apache/zookeeper/server/persistence/FileTxnSnapLog$PlayBackListener.class
> org/apache/zookeeper/server/persistence/FileTxnSnapLog.class
> org/apache/zookeeper/server/persistence/SnapShot.class
> org/apache/zookeeper/server/persistence/TxnLog$TxnIterator.class
> org/apache/zookeeper/server/persistence/TxnLog.class
> org/apache/zookeeper/server/persistence/Util$DataDirFileComparator.class
> org/apache/zookeeper/server/persistence/Util.class
> org/apache/zookeeper/server/quorum/AckRequestProcessor.class
> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$1.class
> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$Messenger$WorkerReceiver.class
> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$Messenger$WorkerSender.class
> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$Messenger.class
> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$Notification.class
> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$ToSend$mType.class
> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$ToSend.class
> org/apache/zookeeper/server/quorum/AuthFastLeaderElection.class
> org/apache/zookeeper/server/quorum/CommitProcessor.class
> org/apache/zookeeper/server/quorum/Election.class
> org/apache/zookeeper/server/quorum/FastLeaderElection$1.class
> org/apache/zookeeper/server/quorum/FastLeaderElection$Messenger$WorkerReceiver.class
> org/apache/zookeeper/server/quorum/FastLeaderElection$Messenger$WorkerSender.class
> org/apache/zookeeper/server/quorum/FastLeaderElection$Messenger.class
> org/apache/zookeeper/server/quorum/FastLeaderElection$Notification.class
> org/apache/zookeeper/server/quorum/FastLeaderElection$ToSend$mType.class
> org/apache/zookeeper/server/quorum/FastLeaderElection$ToSend.class
> org/apache/zookeeper/server/quorum/FastLeaderElection.class
> org/apache/zookeeper/server/quorum/Follower.class
> org/apache/zookeeper/server/quorum/FollowerBean.class
> org/apache/zookeeper/server/quorum/FollowerMXBean.class
> org/apache/zookeeper/server/quorum/FollowerRequestProcessor.class
> org/apache/zookeeper/server/quorum/FollowerZooKeeperServer.class
> org/apache/zookeeper/server/quorum/Leader$LearnerCnxAcceptor.class
> org/apache/zookeeper/server/quorum/Leader$Proposal.class
> org/apache/zookeeper/server/quorum/Leader$ToBeAppliedRequestProcessor.class
> org/apache/zookeeper/server/quorum/Leader$XidRolloverException.class
> org/apache/zookeeper/server/quorum/Leader.class
> org/apache/zookeeper/server/quorum/LeaderBean.class
> org/apache/zookeeper/server/quorum/LeaderElection$ElectionResult.class
> org/apache/zookeeper/server/quorum/LeaderElection.class
> org/apache/zookeeper/server/quorum/LeaderElectionBean.class
> org/apache/zookeeper/server/quorum/LeaderElectionMXBean.class
> org/apache/zookeeper/server/quorum/LeaderMXBean.class
> org/apache/zookeeper/server/quorum/LeaderZooKeeperServer.class
> org/apache/zookeeper/server/quorum/Learner$PacketInFlight.class
> org/apache/zookeeper/server/quorum/Learner.class
> org/apache/zookeeper/server/quorum/LearnerHandler$1.class
> org/apache/zookeeper/server/quorum/LearnerHandler.class
> org/apache/zookeeper/server/quorum/LearnerSessionTracker.class
> org/apache/zookeeper/server/quorum/LearnerSyncRequest.class
> org/apache/zookeeper/server/quorum/LearnerZooKeeperServer.class
> org/apache/zookeeper/server/quorum/LocalPeerBean.class
> org/apache/zookeeper/server/quorum/LocalPeerMXBean.class
> org/apache/zookeeper/server/quorum/Observer.class
> org/apache/zookeeper/server/quorum/ObserverMXBean.class
> org/apache/zookeeper/server/quorum/ObserverRequestProcessor.class
> org/apache/zookeeper/server/quorum/ObserverZooKeeperServer.class
> org/apache/zookeeper/server/quorum/ProposalRequestProcessor.class
> org/apache/zookeeper/server/quorum/QuorumBean.class
> org/apache/zookeeper/server/quorum/QuorumCnxManager$Listener.class
> org/apache/zookeeper/server/quorum/QuorumCnxManager$Message.class
> org/apache/zookeeper/server/quorum/QuorumCnxManager$RecvWorker.class
> org/apache/zookeeper/server/quorum/QuorumCnxManager$SendWorker.class
> org/apache/zookeeper/server/quorum/QuorumCnxManager.class
> org/apache/zookeeper/server/quorum/QuorumMXBean.class
> org/apache/zookeeper/server/quorum/QuorumPacket.class
> org/apache/zookeeper/server/quorum/QuorumPeer$1.class
> org/apache/zookeeper/server/quorum/QuorumPeer$Factory.class
> org/apache/zookeeper/server/quorum/QuorumPeer$LearnerType.class
> org/apache/zookeeper/server/quorum/QuorumPeer$QuorumServer.class
> org/apache/zookeeper/server/quorum/QuorumPeer$ResponderThread.class
> org/apache/zookeeper/server/quorum/QuorumPeer$ServerState.class
> org/apache/zookeeper/server/quorum/QuorumPeer.class
> org/apache/zookeeper/server/quorum/QuorumPeerConfig$ConfigException.class
> org/apache/zookeeper/server/quorum/QuorumPeerConfig.class
> org/apache/zookeeper/server/quorum/QuorumPeerMain.class
> org/apache/zookeeper/server/quorum/QuorumStats$Provider.class
> org/apache/zookeeper/server/quorum/QuorumStats.class
> org/apache/zookeeper/server/quorum/QuorumZooKeeperServer.class
> org/apache/zookeeper/server/quorum/RemotePeerBean.class
> org/apache/zookeeper/server/quorum/RemotePeerMXBean.class
> org/apache/zookeeper/server/quorum/SendAckRequestProcessor.class
> org/apache/zookeeper/server/quorum/ServerBean.class
> org/apache/zookeeper/server/quorum/ServerMXBean.class
> org/apache/zookeeper/server/quorum/Vote.class
> org/apache/zookeeper/server/quorum/flexible/QuorumHierarchical.class
> org/apache/zookeeper/server/quorum/flexible/QuorumMaj.class
> org/apache/zookeeper/server/quorum/flexible/QuorumVerifier.class
> org/apache/zookeeper/server/upgrade/DataNodeV1.class
> org/apache/zookeeper/server/upgrade/DataTreeV1$ProcessTxnResult.class
> org/apache/zookeeper/server/upgrade/DataTreeV1.class
> org/apache/zookeeper/server/upgrade/UpgradeMain.class
> org/apache/zookeeper/server/upgrade/UpgradeSnapShot.class
> org/apache/zookeeper/server/upgrade/UpgradeSnapShotV1.class
> org/apache/zookeeper/server/util/Profiler$Operation.class
> org/apache/zookeeper/server/util/Profiler.class
> org/apache/zookeeper/server/util/SerializeUtils.class
> org/apache/zookeeper/txn/CreateSessionTxn.class
> org/apache/zookeeper/txn/CreateTxn.class
> org/apache/zookeeper/txn/DeleteTxn.class
> org/apache/zookeeper/txn/ErrorTxn.class
> org/apache/zookeeper/txn/SetACLTxn.class
> org/apache/zookeeper/txn/SetDataTxn.class
> org/apache/zookeeper/txn/SetMaxChildrenTxn.class
> org/apache/zookeeper/txn/TxnHeader.class
> org/apache/zookeeper/version/Info.class
> org/apache/zookeeper/version/util/VerGen$Version.class
> org/apache/zookeeper/version/util/VerGen.class
>
>
> It seems zookeeper is included; however, I still get the same issue after
> changing all of the above. I really have no idea what to do now.
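(A minimal, self-contained sketch of the "is this class actually packed in my jar" check discussed in this thread. A jar is just a zip archive, so `jar tf my.jar | grep OffsetRequest` can also be scripted with Python's zipfile; the jar built below is a throwaway stand-in, and the entry names are only illustrative.)

```python
import io
import zipfile

# Build a throwaway in-memory "jar" (a jar is just a zip archive) so this
# sketch runs anywhere; with a real shaded jar you would open the file path.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("org/apache/zookeeper/ZooKeeper.class", b"")
    jar.writestr("kafka/api/OffsetRequest.class", b"")

def jar_contains(jar_bytes, entry):
    """Return True if the given class entry is packed in the jar."""
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as jar:
        return entry in jar.namelist()

print(jar_contains(buf.getvalue(), "kafka/api/OffsetRequest.class"))  # True
```

Against a real artifact, `zipfile.ZipFile("target/my-topology-jar-with-dependencies.jar")` does the same listing as `jar tf`.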
>
> thanks
>
> Alec
>
> On Aug 5, 2014, at 2:11 PM, Kushan Maskey <
> kushan.maskey@mmillerassociates.com> wrote:
>
> You need to include kafka_2.10-0.8.1.1.jar into your project jar. I had
> this issue and that resolved it.
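(A hedged sketch of what "include the kafka jar into your project jar" can look like in the pom: one common cause of the missing-class error is marking the kafka dependency `<scope>provided</scope>`, which keeps it out of a jar-with-dependencies build. Dropping that scope lets the assembly/shade build pack it. Artifact and version numbers below are illustrative; match them to your cluster, and the zookeeper exclusion mirrors the pom earlier in this thread.)

```xml
<!-- Illustrative: use the kafka artifact/version that matches your cluster. -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.10</artifactId>
  <version>0.8.1.1</version>
  <!-- no <scope>provided</scope>: default compile scope lets the
       jar-with-dependencies build include the kafka classes -->
  <exclusions>
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```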
>
> --
> Kushan Maskey
> 817.403.7500
>
>
> On Tue, Aug 5, 2014 at 3:57 PM, Parth Brahmbhatt <
> pbrahmbhatt@hortonworks.com> wrote:
>
>> I see a NoSuchMethodError; it seems like there is some issue with your jar
>> packing. Can you confirm that you have the zookeeper dependency packed in
>> your jar? What versions of curator and zookeeper are you using?
>>
>> Thanks
>> Parth
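(Context for the version question, hedged: the NoSuchMethodError in this thread names the constructor ZooKeeper(String, int, Watcher, boolean); that read-only-mode variant only exists in ZooKeeper 3.4+, while the stack traces show a zookeeper-3.3.3 jar on the classpath, so a Curator built against 3.4 cannot find it. One way to sketch a fix in Maven is pinning a single zookeeper version for the whole build; the version below is an assumption, so match it to what your Curator release requires.)

```xml
<dependencyManagement>
  <dependencies>
    <!-- Illustrative pin: force one zookeeper version everywhere so Curator
         and Storm agree on the ZooKeeper constructor signatures. -->
    <dependency>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
      <version>3.4.5</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```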
>>
>>
>> On Tue, Aug 5, 2014 at 1:45 PM, Sa Li <sa...@gmail.com> wrote:
>>
>>> Thanks, Parth. I increased the sleep time to Thread.sleep(150000000)
>>> (150,000 seconds), but I still get the same Async problem; it seems to be
>>> a problem reading the kafka topic from zookeeper.
>>>
>>> at
>>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>> 3100 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.daemon.executor
>>> -
>>> java.lang.NoSuchMethodError:
>>> org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>> at
>>> org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>> 3101 [Thread-29-$mastercoord-bg0] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 3114 [Thread-10] INFO backtype.storm.daemon.worker - Worker has
>>> topology config {"storm.id" "kafka-1-1407271290", "dev.zookeeper.path"
>>> "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil,
>>> "topology.builtin.metrics.bucket.size.secs" 60,
>>> "topology.fall.back.on.java.serialization" true,
>>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>>> "topology.skip.missing.kryo.registrations" true,
>>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>>> "topology.trident.batch.emit.interval.millis" 50,
>>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>>> "/tmp/0610cc80-25a7-4304-acf0-9ead5f942429",
>>> "storm.messaging.netty.buffer_size" 5242880,
>>> "supervisor.worker.start.timeout.secs" 120,
>>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>>> "/transactional", "topology.acker.executors" nil,
>>> "topology.kryo.decorators" (), "topology.name" "kafka",
>>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>>> "supervisor.heartbeat.frequency.secs" 5,
>>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>>> "topology.spout.wait.strategy"
>>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>>> 1, "storm.zookeeper.retry.interval" 1000, "
>>> topology.sleep.spout.wait.strategy.time.ms" 1,
>>> "nimbus.topology.validator"
>>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>>> (4 5 6), "topology.debug" true, "nimbus.task.launch.secs" 120,
>>> "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register"
>>> {"storm.trident.topology.TransactionAttempt" nil},
>>> "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10,
>>> "topology.workers" 1, "supervisor.childopts" "-Xmx256m",
>>> "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
>>> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
>>> "backtype.storm.serialization.types.ListDelegateSerializer",
>>> "topology.disruptor.wait.strategy"
>>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>>> 5, "storm.thrift.transport"
>>> "backtype.storm.security.auth.SimpleTransportPlugin",
>>> "topology.state.synchronization.timeout.secs" 60,
>>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms"
>>> 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false,
>>> "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode"
>>> "local", "topology.optimize" true, "topology.max.task.parallelism" 3}
>>> 3115 [Thread-10] INFO backtype.storm.daemon.worker - Worker
>>> ee9ec3b6-5e13-4329-b12a-c3cffdd7e997 for storm kafka-1-1407271290 on
>>> 3aff208c-d065-448d-9026-bf452151d546:4 has finished loading
>>> 3207 [Thread-25-spout0] INFO backtype.storm.util - Halting process:
>>> ("Worker died")
>>>
>>> Thanks
>>>
>>> Alec
>>>
>>>
>>> On Aug 5, 2014, at 1:32 PM, Parth Brahmbhatt <
>>> pbrahmbhatt@hortonworks.com> wrote:
>>>
>>> Can you let the topology run for 120 seconds or so? In my experience the
>>> kafka bolt/spout takes a lot of latency initially as it tries to read/write
>>> from zookeeper and initialize connections. On my mac it takes about 15
>>> seconds before the spout is actually opened.
>>>
>>> Thanks
>>> Parth
>>> On Aug 5, 2014, at 1:11 PM, Sa Li <sa...@gmail.com> wrote:
>>>
>>> If I set the sleep time to 1000 milliseconds, I get this error:
>>>
>>> 3067 [main] INFO backtype.storm.testing - Deleting temporary path
>>> /tmp/0f1851f1-9499-48a5-817e-41712921d054
>>> 3163 [Thread-10-EventThread] INFO
>>> com.netflix.curator.framework.state.ConnectionStateManager - State change:
>>> SUSPENDED
>>> 3163 [ConnectionStateManager-0] WARN
>>> com.netflix.curator.framework.state.ConnectionStateManager - There are no
>>> ConnectionStateListeners registered.
>>> 3164 [Thread-10-EventThread] WARN backtype.storm.cluster - Received
>>> event :disconnected::none: with disconnected Zookeeper.
>>> 3636 [Thread-10-SendThread(localhost:2000)] WARN
>>> org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server
>>> null, unexpected error, closing socket connection and attempting reconnect
>>> java.net.ConnectException: Connection refused
>>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>> ~[na:1.7.0_55]
>>> at
>>> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
>>> ~[na:1.7.0_55]
>>> at
>>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
>>> ~[zookeeper-3.3.3.jar:3.3.3-1073969]
>>> 4877 [Thread-10-SendThread(localhost:2000)] WARN
>>> org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server
>>> null, unexpected error, closing socket connection and attempting reconnect
>>> java.net.ConnectException: Connection refused
>>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>> ~[na:1.7.0_55]
>>> at
>>> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
>>> ~[na:1.7.0_55]
>>> at
>>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
>>> ~[zookeeper-3.3.3.jar:3.3.3-1073969]
>>> 5566 [Thread-10-SendThread(localhost:2000)] WARN
>>> org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server
>>> null, unexpected error, closing socket connection and attempting reconnect
>>> java.net.ConnectException: Connection refused
>>>
>>> It seems it is not even connected to zookeeper. Is there any way to
>>> confirm the connection to zookeeper?
>>>
>>> Thanks a lot
>>>
>>> Alec
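(A minimal sketch of one way to confirm zookeeper is reachable: a ZooKeeper server answers the four-letter command `ruok` on its client port with `imok`. So the check can be scripted with a raw socket; the stand-in server below only makes the sketch self-contained and runnable anywhere. Against a real server you would call `zk_ruok` with your actual host and client port instead.)

```python
import socket
import threading

def zk_ruok(host, port, timeout=2.0):
    """Send ZooKeeper's 'ruok' four-letter command; a live server replies 'imok'."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"ruok")
        return s.recv(16).decode() == "imok"

# Stand-in server so the sketch runs without a real ZooKeeper; point
# zk_ruok at your actual host/port (e.g. localhost:2181) in practice.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

def fake_zk():
    conn, _ = srv.accept()
    if conn.recv(4) == b"ruok":
        conn.sendall(b"imok")
    conn.close()

threading.Thread(target=fake_zk, daemon=True).start()
print(zk_ruok("127.0.0.1", srv.getsockname()[1]))  # True
```

The same check works from a shell with `echo ruok | nc <host> <port>`.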
>>>
>>> On Aug 5, 2014, at 12:58 PM, Sa Li <sa...@gmail.com> wrote:
>>>
>>> Thank you very much for your reply, Taylor. I tried increasing the sleep
>>> time to 1 sec or 10 sec; however, I got the error below. It seems to be
>>> an Async loop error. Any idea about that?
>>>
>>> 3053 [Thread-19-$spoutcoord-spout0] INFO
>>> org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 3058 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.util - Async
>>> loop died!
>>> java.lang.NoSuchMethodError:
>>> org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>> at
>>> org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.ConnectionState.reset(ConnectionState.java:219)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.ConnectionState.start(ConnectionState.java:103)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>> 3058 [Thread-25-spout0] ERROR backtype.storm.util - Async loop died!
>>> java.lang.NoSuchMethodError:
>>> org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>> at
>>> org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.ConnectionState.reset(ConnectionState.java:219)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.ConnectionState.start(ConnectionState.java:103)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>> 3059 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.daemon.executor
>>> -
>>> java.lang.NoSuchMethodError:
>>> org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>> at
>>> org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.ConnectionState.reset(ConnectionState.java:219)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.ConnectionState.start(ConnectionState.java:103)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>> 3059 [Thread-25-spout0] ERROR backtype.storm.daemon.executor -
>>> java.lang.NoSuchMethodError:
>>> org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>> at
>>> org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.ConnectionState.reset(ConnectionState.java:219)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.ConnectionState.start(ConnectionState.java:103)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at
>>> backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401)
>>> ~[storm-core-0.9.0.1.jar:na]
>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>> 3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker has topology
>>> config {"storm.id" "kafka-1-1407268492", "dev.zookeeper.path"
>>> "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil,
>>> "topology.builtin.metrics.bucket.size.secs" 60,
>>> "topology.fall.back.on.java.serialization" true,
>>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>>> "topology.skip.missing.kryo.registrations" true,
>>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>>> "topology.trident.batch.emit.interval.millis" 50,
>>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>>> "/tmp/ca948198-69df-440b-8acb-6dfc4db6c288",
>>> "storm.messaging.netty.buffer_size" 5242880,
>>> "supervisor.worker.start.timeout.secs" 120,
>>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>>> "/transactional", "topology.acker.executors" nil,
>>> "topology.kryo.decorators" (), "topology.name" "kafka",
>>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>>> "supervisor.heartbeat.frequency.secs" 5,
>>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>>> "topology.spout.wait.strategy"
>>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>>> nil, "storm.zookeeper.retry.interval" 1000, "
>>> topology.sleep.spout.wait.strategy.time.ms" 1,
>>> "nimbus.topology.validator"
>>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>>> (1 2 3), "topology.debug" true, "nimbus.task.launch.secs" 120,
>>> "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register"
>>> {"storm.trident.topology.TransactionAttempt" nil},
>>> "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10,
>>> "topology.workers" 1, "supervisor.childopts" "-Xmx256m",
>>> "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
>>> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
>>> "backtype.storm.serialization.types.ListDelegateSerializer",
>>> "topology.disruptor.wait.strategy"
>>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>>> 5, "storm.thrift.transport"
>>> "backtype.storm.security.auth.SimpleTransportPlugin",
>>> "topology.state.synchronization.timeout.secs" 60,
>>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms"
>>> 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false,
>>> "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode"
>>> "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>> 3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker
>>> 64335058-7f94-447f-bc0a-5107084789a0 for storm kafka-1-1407268492 on
>>> cf2964b3-7655-4a33-88a1-f6e0ceb6f9ed:1 has finished loading
>>> 3164 [Thread-29-$mastercoord-bg0] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 3173 [Thread-25-spout0] INFO backtype.storm.util - Halting process:
>>> ("Worker died")
>>> 3173 [Thread-19-$spoutcoord-spout0] INFO backtype.storm.util - Halting
>>> process: ("Worker died")
>>>
>>> Thanks
>>>
>>> Alec
>>>
>>> On Aug 5, 2014, at 10:26 AM, P. Taylor Goetz <pt...@gmail.com> wrote:
>>>
>>> You are only sleeping for 100 milliseconds before shutting down the
>>> local cluster, which is probably not long enough for the topology to come
>>> up and start processing messages. Try increasing the sleep time to
>>> something like 10 seconds.
>>>
>>> You can also reduce startup time with the following JVM flag:
>>>
>>> -Djava.net.preferIPv4Stack=true
>>>
>>> - Taylor
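Taylor's point above is essentially a timing race: a fixed 100 ms sleep ends before the local topology has started processing, while a longer (or condition-based) wait does not. The sketch below illustrates that idea in plain Java with no Storm dependency; the `ShutdownTiming` class and the `AtomicBoolean` readiness flag are illustrative stand-ins for "the topology has started emitting tuples", not part of the Storm API.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ShutdownTiming {
    // Poll a readiness flag for up to maxWaitMs instead of sleeping blindly,
    // then report whether work was ever observed before the deadline.
    static boolean awaitProcessing(AtomicBoolean sawTuples, long maxWaitMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + maxWaitMs;
        while (System.currentTimeMillis() < deadline) {
            if (sawTuples.get()) {
                return true;   // work observed; safe to shut down now
            }
            Thread.sleep(200); // re-check every 200 ms
        }
        return false;          // deadline passed with nothing processed
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean sawTuples = new AtomicBoolean(false);
        // Simulate a topology that takes about 1 s to come up.
        new Thread(() -> {
            try { Thread.sleep(1000); } catch (InterruptedException ignored) {}
            sawTuples.set(true);
        }).start();
        // A 100 ms budget would miss the startup; a 10 s budget does not.
        System.out.println(awaitProcessing(sawTuples, 10_000)); // true
    }
}
```

In the actual topology code this corresponds to replacing `Thread.sleep(100);` before `cluster.shutdown();` with a sleep of several seconds, or with a flag set from the filter once tuples start arriving.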
>>>
>>> On Aug 5, 2014, at 1:16 PM, Sa Li <sa...@gmail.com> wrote:
>>>
>>> Sorry, the stormTopology:
>>>
>>> TridentTopology topology = new
>>> TridentTopology();
>>> BrokerHosts zk = new ZkHosts("localhost");
>>> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk,
>>> “topictest");
>>> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>> OpaqueTridentKafkaSpout spout = new
>>> OpaqueTridentKafkaSpout(spoutConf);
>>>
>>>
>>>
>>>
>>>
>>> On Aug 5, 2014, at 9:56 AM, Sa Li <sa...@gmail.com> wrote:
>>>
>>> Thank you very much, Marcelo, it indeed worked; now I can run my code
>>> without getting errors. However, another thing keeps bothering me. The
>>> following is my code:
>>>
>>> public static class PrintStream implements Filter {
>>>
>>> @SuppressWarnings("rawtypes")
>>> @Override
>>> public void prepare(Map conf, TridentOperationContext context) {
>>> }
>>> @Override
>>> public void cleanup() {
>>> }
>>> @Override
>>> public boolean isKeep(TridentTuple tuple) {
>>> System.out.println(tuple);
>>> return true;
>>> }
>>> }
>>> public static StormTopology buildTopology(LocalDRPC drpc) throws IOException
>>> {
>>>
>>> TridentTopology topology = new TridentTopology();
>>> BrokerHosts zk = new ZkHosts("localhost");
>>> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk,
>>> "ingest_test");
>>> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>> OpaqueTridentKafkaSpout spout = new
>>> OpaqueTridentKafkaSpout(spoutConf);
>>>
>>> topology.newStream("kafka", spout)
>>> .each(new Fields("str"),
>>> new PrintStream()
>>> );
>>>
>>> return topology.build();
>>> }
>>> public static void main(String[] args) throws Exception {
>>>
>>> Config conf = new Config();
>>> conf.setDebug(true);
>>> conf.setMaxSpoutPending(1);
>>> conf.setMaxTaskParallelism(3);
>>> LocalDRPC drpc = new LocalDRPC();
>>> LocalCluster cluster = new LocalCluster();
>>> cluster.submitTopology("kafka", conf, buildTopology(drpc));
>>>
>>> Thread.sleep(100);
>>> cluster.shutdown();
>>> }
>>>
>>> What I expect is quite simple: print out the messages I collect from a
>>> kafka producer playback process, which is running separately. The topic is
>>> listed as:
>>>
>>> root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper
>>> localhost:2181
>>> topic: topictest partition: 0 leader: 1 replicas: 1,3,2
>>> isr: 1,3,2
>>> topic: topictest partition: 1 leader: 2 replicas: 2,1,3
>>> isr: 2,1,3
>>> topic: topictest partition: 2 leader: 3 replicas: 3,2,1
>>> isr: 3,2,1
>>> topic: topictest partition: 3 leader: 1 replicas: 1,2,3
>>> isr: 1,2,3
>>> topic: topictest partition: 4 leader: 2 replicas: 2,3,1
>>> isr: 2,3,1
>>>
>>> When I run the code, this is what I see on the screen; there seems to be
>>> no error, but no messages are printed out either:
>>>
>>> SLF4J: Class path contains multiple SLF4J bindings.
>>> SLF4J: Found binding in
>>> [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in
>>> [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> explanation.
>>> Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1
>>> -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file=
>>> -cp
>>> /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.0.1/lib/guava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/
etc/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin
>>> -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar
>>> storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>> SLF4J: Class path contains multiple SLF4J bindings.
>>> SLF4J: Found binding in
>>> [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in
>>> [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> explanation.
>>> 1113 [main] INFO backtype.storm.zookeeper - Starting inprocess
>>> zookeeper at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>> 1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with
>>> conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>>> "topology.tick.tuple.freq.secs" nil,
>>> "topology.builtin.metrics.bucket.size.secs" 60,
>>> "topology.fall.back.on.java.serialization" true,
>>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>>> "topology.skip.missing.kryo.registrations" true,
>>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>>> "topology.trident.batch.emit.interval.millis" 50,
>>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>>> "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9",
>>> "storm.messaging.netty.buffer_size" 5242880,
>>> "supervisor.worker.start.timeout.secs" 120,
>>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>>> "/transactional", "topology.acker.executors" nil,
>>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>>> "supervisor.heartbeat.frequency.secs" 5,
>>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>>> "topology.spout.wait.strategy"
>>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>>> nil, "storm.zookeeper.retry.interval" 1000, "
>>> topology.sleep.spout.wait.strategy.time.ms" 1,
>>> "nimbus.topology.validator"
>>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>>> [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs"
>>> 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs"
>>> 30, "task.refresh.poll.secs" 10, "topology.workers" 1,
>>> "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627,
>>> "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1,
>>> "topology.tuple.serializer"
>>> "backtype.storm.serialization.types.ListDelegateSerializer",
>>> "topology.disruptor.wait.strategy"
>>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>>> 5, "storm.thrift.transport"
>>> "backtype.storm.security.auth.SimpleTransportPlugin",
>>> "topology.state.synchronization.timeout.secs" 60,
>>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms"
>>> 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false,
>>> "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode"
>>> "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>> 1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
>>> 1237 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>>> update: :connected:none
>>> 1350 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1417 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>>> update: :connected:none
>>> 1482 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1484 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>>> update: :connected:none
>>> 1540 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor
>>> with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>>> "topology.tick.tuple.freq.secs" nil,
>>> "topology.builtin.metrics.bucket.size.secs" 60,
>>> "topology.fall.back.on.java.serialization" true,
>>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>>> "topology.skip.missing.kryo.registrations" true,
>>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>>> "topology.trident.batch.emit.interval.millis" 50,
>>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>>> "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388",
>>> "storm.messaging.netty.buffer_size" 5242880,
>>> "supervisor.worker.start.timeout.secs" 120,
>>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>>> "/transactional", "topology.acker.executors" nil,
>>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>>> "supervisor.heartbeat.frequency.secs" 5,
>>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>>> "topology.spout.wait.strategy"
>>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>>> nil, "storm.zookeeper.retry.interval" 1000, "
>>> topology.sleep.spout.wait.strategy.time.ms" 1,
>>> "nimbus.topology.validator"
>>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>>> (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120,
>>> "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30,
>>> "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts"
>>> "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
>>> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
>>> "backtype.storm.serialization.types.ListDelegateSerializer",
>>> "topology.disruptor.wait.strategy"
>>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>>> 5, "storm.thrift.transport"
>>> "backtype.storm.security.auth.SimpleTransportPlugin",
>>> "topology.state.synchronization.timeout.secs" 60,
>>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms"
>>> 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false,
>>> "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode"
>>> "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>> 1576 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>>> update: :connected:none
>>> 1590 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor
>>> with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
>>> 1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor
>>> with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>>> "topology.tick.tuple.freq.secs" nil,
>>> "topology.builtin.metrics.bucket.size.secs" 60,
>>> "topology.fall.back.on.java.serialization" true,
>>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>>> "topology.skip.missing.kryo.registrations" true,
>>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>>> "topology.trident.batch.emit.interval.millis" 50,
>>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>>> "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912",
>>> "storm.messaging.netty.buffer_size" 5242880,
>>> "supervisor.worker.start.timeout.secs" 120,
>>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>>> "/transactional", "topology.acker.executors" nil,
>>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>>> "supervisor.heartbeat.frequency.secs" 5,
>>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>>> "topology.spout.wait.strategy"
>>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>>> nil, "storm.zookeeper.retry.interval" 1000, "
>>> topology.sleep.spout.wait.strategy.time.ms" 1,
>>> "nimbus.topology.validator"
>>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>>> (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120,
>>> "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30,
>>> "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts"
>>> "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
>>> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
>>> "backtype.storm.serialization.types.ListDelegateSerializer",
>>> "topology.disruptor.wait.strategy"
>>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>>> 5, "storm.thrift.transport"
>>> "backtype.storm.security.auth.SimpleTransportPlugin",
>>> "topology.state.synchronization.timeout.secs" 60,
>>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms"
>>> 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false,
>>> "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode"
>>> "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>> 1638 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>>> update: :connected:none
>>> 1690 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor
>>> with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
>>> 1944 [main] INFO backtype.storm.daemon.nimbus - Received topology
>>> submission for kafka with conf {"topology.max.task.parallelism" nil,
>>> "topology.acker.executors" nil, "topology.kryo.register"
>>> {"storm.trident.topology.TransactionAttempt" nil},
>>> "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id"
>>> "kafka-1-1407257070", "topology.debug" true}
>>> 1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka:
>>> kafka-1-1407257070
>>> 2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available
>>> slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1]
>>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 2]
>>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 3]
>>> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4]
>>> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5]
>>> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
>>> 2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment
>>> for topology id kafka-1-1407257070:
>>> #backtype.storm.daemon.common.Assignment{:master-code-dir
>>> "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070",
>>> :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"},
>>> :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5
>>> 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4]
>>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2]
>>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1]
>>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1
>>> 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3]
>>> 1407257070}}
>>> 2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
>>> 2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
>>> 2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down
>>> supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
>>> 2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
>>> 2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
>>> 2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down
>>> supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
>>> 2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
>>> 2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
>>> 2256 [main] INFO backtype.storm.testing - Shutting down in process
>>> zookeeper
>>> 2257 [main] INFO backtype.storm.testing - Done shutting down in process
>>> zookeeper
>>> 2258 [main] INFO backtype.storm.testing - Deleting temporary path
>>> /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
>>> 2259 [main] INFO backtype.storm.testing - Deleting temporary path
>>> /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>> 2260 [main] INFO backtype.storm.testing - Deleting temporary path
>>> /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
>>> 2261 [main] INFO backtype.storm.testing - Deleting temporary path
>>> /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
>>>
>>> Can anyone help me locate the problem? I really need to get past this
>>> step in order to be able to replace .each(printStream()) with
>>> other functions.
>>>
>>>
>>> Thanks
>>>
>>> Alec
>>>
>>> On Aug 4, 2014, at 4:24 AM, Marcelo Valle <mv...@redoop.org> wrote:
>>>
>>> hello,
>>>
>>> You can check your application .jar with the command "jar tf" to see
>>> whether the class kafka/api/OffsetRequest.class is part of the jar.
>>> If not, you can try copying kafka_2.9.2-0.8.0.jar (or the version you
>>> are using) into the storm lib directory
>>>
>>> Marcelo
>>>
>>>
>>> 2014-07-31 23:33 GMT+02:00 Sa Li <sa...@gmail.com>:
>>>
>>>> Hi, all
>>>>
>>>> I am running a kafka-spout code in storm-server, the pom is
>>>>
>>>> <groupId>org.apache.kafka</groupId>
>>>> <artifactId>kafka_2.9.2</artifactId>
>>>> <version>0.8.0</version>
>>>> <scope>provided</scope>
>>>>
>>>> <exclusions>
>>>> <exclusion>
>>>> <groupId>org.apache.zookeeper</groupId>
>>>> <artifactId>zookeeper</artifactId>
>>>> </exclusion>
>>>> <exclusion>
>>>> <groupId>log4j</groupId>
>>>> <artifactId>log4j</artifactId>
>>>> </exclusion>
>>>> </exclusions>
>>>>
>>>> </dependency>
>>>>
>>>> <!-- Storm-Kafka compiled -->
>>>>
>>>> <dependency>
>>>> <artifactId>storm-kafka</artifactId>
>>>> <groupId>org.apache.storm</groupId>
>>>> <version>0.9.2-incubating</version>
>>>> <scope>compile</scope>
>>>> </dependency>
>>>>
>>>> I can mvn package it, but when I run it
>>>> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# storm jar
>>>> target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar
>>>> storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>>>
>>>>
>>>> I am getting this error:
>>>>
>>>> 1657 [main]
>>>> INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>> 1682 [main] INFO backtype.storm.daemon.supervisor - Starting
>>>> supervisor with id a66e0c61-a951-4c1b-a43f-3fb0d12cb226 at host DO-mq-dev
>>>> 1698 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread
>>>> Thread[main,5,main] died
>>>> java.lang.NoClassDefFoundError: kafka/api/OffsetRequest
>>>> at storm.artemis.kafka.KafkaConfig.<init>(KafkaConfig.java:26)
>>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at
>>>> storm.artemis.kafka.trident.TridentKafkaConfig.<init>(TridentKafkaConfig.java:13)
>>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at
>>>> storm.artemis.KafkaConsumerTopology.buildTopology(KafkaConsumerTopology.java:115)
>>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at
>>>> storm.artemis.KafkaConsumerTopology.main(KafkaConsumerTopology.java:144)
>>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> Caused by: java.lang.ClassNotFoundException: kafka.api.OffsetRequest
>>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>>> ~[na:1.7.0_55]
>>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>>> ~[na:1.7.0_55]
>>>> at java.security.AccessController.doPrivileged(Native Method)
>>>> ~[na:1.7.0_55]
>>>> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>>> ~[na:1.7.0_55]
>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>>>> ~[na:1.7.0_55]
>>>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>>>> ~[na:1.7.0_55]
>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>>>> ~[na:1.7.0_55]
>>>>
>>>>
>>>>
>>>>
>>>> I tried to poke around online but could not find a solution for it;
>>>> any ideas?
>>>>
>>>>
>>>> Thanks
>>>>
>>>> Alec
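
[Editor's note: a likely cause of the NoClassDefFoundError above is the provided scope on the kafka dependency. Provided-scope artifacts are left out of the jar-with-dependencies built by the assembly plugin, so kafka/api/OffsetRequest is only found at runtime if Storm's lib directory supplies it. A minimal sketch of the same dependency with the default compile scope, assuming the cluster does not already ship a Kafka jar:]

```xml
<!-- Sketch: drop <scope>provided</scope> so the assembly plugin bundles
     the Kafka classes into the fat jar; assumes the Storm cluster does
     not already provide them on its classpath. -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.9.2</artifactId>
  <version>0.8.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
    <exclusion>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

[The alternative, as Marcelo suggests below, is to keep the provided scope and place a matching kafka jar in Storm's lib directory instead.]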
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> CONFIDENTIALITY NOTICE
>>> NOTICE: This message is intended for the use of the individual or entity
>>> to which it is addressed and may contain information that is confidential,
>>> privileged and exempt from disclosure under applicable law. If the reader
>>> of this message is not the intended recipient, you are hereby notified that
>>> any printing, copying, dissemination, distribution, disclosure or
>>> forwarding of this communication is strictly prohibited. If you have
>>> received this communication in error, please contact the sender immediately
>>> and delete it from your system. Thank You.
>>>
>>>
>>>
>>
>>
>> --
>> Thanks
>> Parth
>>
>>
>
>
>
>
Re: kafka-spout running error
Posted by Sa Li <sa...@gmail.com>.
This is my complete pom:
<dependencies>
<dependency>
<groupId>org.json</groupId>
<artifactId>json</artifactId>
<version>20140107</version>
</dependency>
<!-- Slf4j Logger -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.7.2</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
</dependency>
<!-- Scala 2.9.2 -->
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>2.9.2</version>
</dependency>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-all</artifactId>
<version>1.9.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.11</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.curator</groupId>
<artifactId>curator-framework</artifactId>
<version>2.6.0</version>
<exclusions>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.curator</groupId>
<artifactId>curator-recipes</artifactId>
<version>2.6.0</version>
<exclusions>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.curator</groupId>
<artifactId>curator-test</artifactId>
<version>2.6.0</version>
<exclusions>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
<exclusion>
<groupId>org.testng</groupId>
<artifactId>testng</artifactId>
</exclusion>
</exclusions>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
<version>3.3.6</version>
<exclusions>
<exclusion>
<groupId>com.sun.jmx</groupId>
<artifactId>jmxri</artifactId>
</exclusion>
<exclusion>
<groupId>com.sun.jdmk</groupId>
<artifactId>jmxtools</artifactId>
</exclusion>
<exclusion>
<groupId>javax.jms</groupId>
<artifactId>jms</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- Kafka 0.8.1.1 compiled against Scala 2.10 -->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.10</artifactId>
<version>0.8.1.1</version>
<scope>provided</scope>
<exclusions>
<exclusion>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- Storm-Kafka compiled -->
<dependency>
<artifactId>storm-kafka</artifactId>
<groupId>org.apache.storm</groupId>
<version>0.9.2-incubating</version>
<scope>compile</scope>
</dependency>
<!--
<dependency>
<groupId>storm</groupId>
<artifactId>storm-kafka</artifactId>
<version>0.9.0-wip16a-scala292</version>
</dependency>
-->
<dependency>
<groupId>org.testng</groupId>
<artifactId>testng</artifactId>
<version>6.8.5</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.easytesting</groupId>
<artifactId>fest-assert-core</artifactId>
<version>2.0M8</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.jmock</groupId>
<artifactId>jmock</artifactId>
<version>2.6.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>storm</groupId>
<artifactId>storm</artifactId>
<version>0.9.0.1</version>
<!-- keep storm out of the jar-with-dependencies -->
<scope>provided</scope>
</dependency>
<dependency>
<groupId>commons-collections</groupId>
<artifactId>commons-collections</artifactId>
<version>3.2.1</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>15.0</version>
</dependency>
</dependencies>
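
[Editor's note: two things stand out in the pom above, offered as a sketch rather than a definitive fix. First, the pinned scala-library is 2.9.2 while kafka_2.10 is built against Scala 2.10; second, the kafka dependency is still provided-scope, which keeps it out of the jar-with-dependencies. Aligning the Scala runtime might look like this, where the 2.10.4 patch version is an assumption, not taken from the thread:]

```xml
<!-- Sketch: match the Scala runtime to the kafka_2.10 artifact; the
     2.10.4 patch version is an assumed choice, not from the thread. -->
<dependency>
  <groupId>org.scala-lang</groupId>
  <artifactId>scala-library</artifactId>
  <version>2.10.4</version>
</dependency>
```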
On Aug 5, 2014, at 2:41 PM, Sa Li <sa...@gmail.com> wrote:
> Thanks, Kushan and Parth. I tried to solve the problem as you two suggested: first I changed the kafka version in the pom and re-compiled, and I also copied kafka_2.10-0.8.1.1.jar from M2_REPO into the storm lib directory. Here is my pom
>
> <dependency>
> <groupId>org.apache.curator</groupId>
> <artifactId>curator-framework</artifactId>
> <version>2.6.0</version>
> <exclusions>
> <exclusion>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> </exclusion>
> <exclusion>
> <groupId>org.slf4j</groupId>
> <artifactId>slf4j-log4j12</artifactId>
> </exclusion>
> </exclusions>
> </dependency>
> <dependency>
> <groupId>org.apache.curator</groupId>
> <artifactId>curator-recipes</artifactId>
> <version>2.6.0</version>
> <exclusions>
> <exclusion>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> </exclusion>
> </exclusions>
> <scope>test</scope>
> </dependency>
> <dependency>
> <groupId>org.apache.curator</groupId>
> <artifactId>curator-test</artifactId>
> <version>2.6.0</version>
> <exclusions>
> <exclusion>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> </exclusion>
> <exclusion>
> <groupId>org.testng</groupId>
> <artifactId>testng</artifactId>
> </exclusion>
> </exclusions>
> <scope>test</scope>
> </dependency>
>
>
> <dependency>
> <groupId>org.apache.zookeeper</groupId>
> <artifactId>zookeeper</artifactId>
> <version>3.3.6</version>
> <exclusions>
> <exclusion>
> <groupId>com.sun.jmx</groupId>
> <artifactId>jmxri</artifactId>
> </exclusion>
> <exclusion>
> <groupId>com.sun.jdmk</groupId>
> <artifactId>jmxtools</artifactId>
> </exclusion>
> <exclusion>
> <groupId>javax.jms</groupId>
> <artifactId>jms</artifactId>
> </exclusion>
> </exclusions>
> </dependency>
>
> <!-- Kafka 0.8.1.1 compiled against Scala 2.10 -->
>
> <dependency>
> <groupId>org.apache.kafka</groupId>
> <artifactId>kafka_2.10</artifactId>
> <version>0.8.1.1</version>
> <scope>provided</scope>
>
> <exclusions>
> <exclusion>
> <groupId>org.apache.zookeeper</groupId>
> <artifactId>zookeeper</artifactId>
> </exclusion>
> <exclusion>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> </exclusion>
> </exclusions>
>
> </dependency>
>
>
> Here the zookeeper version is 3.3.6 (the version was downgraded because a "java.lang.ClassNotFoundException: org.apache.zookeeper.server.NIOServerCnxn$Factory" error came out otherwise), and the curator version is 2.6.0. I ran jar tf on the project jar to see the classes included:
> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# jar tf target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar | grep zookeeper
> org/apache/zookeeper/
> org/apache/zookeeper/client/
> org/apache/zookeeper/common/
> org/apache/zookeeper/data/
> org/apache/zookeeper/jmx/
> org/apache/zookeeper/proto/
> org/apache/zookeeper/server/
> org/apache/zookeeper/server/auth/
> org/apache/zookeeper/server/persistence/
> org/apache/zookeeper/server/quorum/
> org/apache/zookeeper/server/quorum/flexible/
> org/apache/zookeeper/server/upgrade/
> org/apache/zookeeper/server/util/
> org/apache/zookeeper/txn/
> org/apache/zookeeper/version/
> org/apache/zookeeper/version/util/
> org/apache/zookeeper/AsyncCallback$ACLCallback.class
> org/apache/zookeeper/AsyncCallback$Children2Callback.class
> org/apache/zookeeper/AsyncCallback$ChildrenCallback.class
> org/apache/zookeeper/AsyncCallback$DataCallback.class
> org/apache/zookeeper/AsyncCallback$StatCallback.class
> org/apache/zookeeper/AsyncCallback$StringCallback.class
> org/apache/zookeeper/AsyncCallback$VoidCallback.class
> org/apache/zookeeper/AsyncCallback.class
> org/apache/zookeeper/ClientCnxn$1.class
> org/apache/zookeeper/ClientCnxn$2.class
> org/apache/zookeeper/ClientCnxn$AuthData.class
> org/apache/zookeeper/ClientCnxn$EndOfStreamException.class
> org/apache/zookeeper/ClientCnxn$EventThread.class
> org/apache/zookeeper/ClientCnxn$Packet.class
> org/apache/zookeeper/ClientCnxn$SendThread.class
> org/apache/zookeeper/ClientCnxn$SessionExpiredException.class
> org/apache/zookeeper/ClientCnxn$SessionTimeoutException.class
> org/apache/zookeeper/ClientCnxn$WatcherSetEventPair.class
> org/apache/zookeeper/ClientCnxn.class
> org/apache/zookeeper/ClientWatchManager.class
> org/apache/zookeeper/CreateMode.class
> org/apache/zookeeper/Environment$Entry.class
> org/apache/zookeeper/Environment.class
> org/apache/zookeeper/JLineZNodeCompletor.class
> org/apache/zookeeper/KeeperException$1.class
> org/apache/zookeeper/KeeperException$APIErrorException.class
> org/apache/zookeeper/KeeperException$AuthFailedException.class
> org/apache/zookeeper/KeeperException$BadArgumentsException.class
> org/apache/zookeeper/KeeperException$BadVersionException.class
> org/apache/zookeeper/KeeperException$Code.class
> org/apache/zookeeper/KeeperException$CodeDeprecated.class
> org/apache/zookeeper/KeeperException$ConnectionLossException.class
> org/apache/zookeeper/KeeperException$DataInconsistencyException.class
> org/apache/zookeeper/KeeperException$InvalidACLException.class
> org/apache/zookeeper/KeeperException$InvalidCallbackException.class
> org/apache/zookeeper/KeeperException$MarshallingErrorException.class
> org/apache/zookeeper/KeeperException$NoAuthException.class
> org/apache/zookeeper/KeeperException$NoChildrenForEphemeralsException.class
> org/apache/zookeeper/KeeperException$NoNodeException.class
> org/apache/zookeeper/KeeperException$NodeExistsException.class
> org/apache/zookeeper/KeeperException$NotEmptyException.class
> org/apache/zookeeper/KeeperException$OperationTimeoutException.class
> org/apache/zookeeper/KeeperException$RuntimeInconsistencyException.class
> org/apache/zookeeper/KeeperException$SessionExpiredException.class
> org/apache/zookeeper/KeeperException$SessionMovedException.class
> org/apache/zookeeper/KeeperException$SystemErrorException.class
> org/apache/zookeeper/KeeperException$UnimplementedException.class
> org/apache/zookeeper/KeeperException.class
> org/apache/zookeeper/Quotas.class
> org/apache/zookeeper/ServerAdminClient.class
> org/apache/zookeeper/StatsTrack.class
> org/apache/zookeeper/Version.class
> org/apache/zookeeper/WatchedEvent.class
> org/apache/zookeeper/Watcher$Event$EventType.class
> org/apache/zookeeper/Watcher$Event$KeeperState.class
> org/apache/zookeeper/Watcher$Event.class
> org/apache/zookeeper/Watcher.class
> org/apache/zookeeper/ZooDefs$Ids.class
> org/apache/zookeeper/ZooDefs$OpCode.class
> org/apache/zookeeper/ZooDefs$Perms.class
> org/apache/zookeeper/ZooDefs.class
> org/apache/zookeeper/ZooKeeper$1.class
> org/apache/zookeeper/ZooKeeper$ChildWatchRegistration.class
> org/apache/zookeeper/ZooKeeper$DataWatchRegistration.class
> org/apache/zookeeper/ZooKeeper$ExistsWatchRegistration.class
> org/apache/zookeeper/ZooKeeper$States.class
> org/apache/zookeeper/ZooKeeper$WatchRegistration.class
> org/apache/zookeeper/ZooKeeper$ZKWatchManager.class
> org/apache/zookeeper/ZooKeeper.class
> org/apache/zookeeper/ZooKeeperMain$1.class
> org/apache/zookeeper/ZooKeeperMain$MyCommandOptions.class
> org/apache/zookeeper/ZooKeeperMain$MyWatcher.class
> org/apache/zookeeper/ZooKeeperMain.class
> org/apache/zookeeper/client/FourLetterWordMain.class
> org/apache/zookeeper/common/PathTrie$1.class
> org/apache/zookeeper/common/PathTrie$TrieNode.class
> org/apache/zookeeper/common/PathTrie.class
> org/apache/zookeeper/common/PathUtils.class
> org/apache/zookeeper/data/ACL.class
> org/apache/zookeeper/data/Id.class
> org/apache/zookeeper/data/Stat.class
> org/apache/zookeeper/data/StatPersisted.class
> org/apache/zookeeper/data/StatPersistedV1.class
> org/apache/zookeeper/jmx/CommonNames.class
> org/apache/zookeeper/jmx/MBeanRegistry.class
> org/apache/zookeeper/jmx/ManagedUtil.class
> org/apache/zookeeper/jmx/ZKMBeanInfo.class
> org/apache/zookeeper/proto/AuthPacket.class
> org/apache/zookeeper/proto/ConnectRequest.class
> org/apache/zookeeper/proto/ConnectResponse.class
> org/apache/zookeeper/proto/CreateRequest.class
> org/apache/zookeeper/proto/CreateResponse.class
> org/apache/zookeeper/proto/DeleteRequest.class
> org/apache/zookeeper/proto/ExistsRequest.class
> org/apache/zookeeper/proto/ExistsResponse.class
> org/apache/zookeeper/proto/GetACLRequest.class
> org/apache/zookeeper/proto/GetACLResponse.class
> org/apache/zookeeper/proto/GetChildren2Request.class
> org/apache/zookeeper/proto/GetChildren2Response.class
> org/apache/zookeeper/proto/GetChildrenRequest.class
> org/apache/zookeeper/proto/GetChildrenResponse.class
> org/apache/zookeeper/proto/GetDataRequest.class
> org/apache/zookeeper/proto/GetDataResponse.class
> org/apache/zookeeper/proto/GetMaxChildrenRequest.class
> org/apache/zookeeper/proto/GetMaxChildrenResponse.class
> org/apache/zookeeper/proto/ReplyHeader.class
> org/apache/zookeeper/proto/RequestHeader.class
> org/apache/zookeeper/proto/SetACLRequest.class
> org/apache/zookeeper/proto/SetACLResponse.class
> org/apache/zookeeper/proto/SetDataRequest.class
> org/apache/zookeeper/proto/SetDataResponse.class
> org/apache/zookeeper/proto/SetMaxChildrenRequest.class
> org/apache/zookeeper/proto/SetWatches.class
> org/apache/zookeeper/proto/SyncRequest.class
> org/apache/zookeeper/proto/SyncResponse.class
> org/apache/zookeeper/proto/WatcherEvent.class
> org/apache/zookeeper/proto/op_result_t.class
> org/apache/zookeeper/server/ByteBufferInputStream.class
> org/apache/zookeeper/server/ConnectionBean.class
> org/apache/zookeeper/server/ConnectionMXBean.class
> org/apache/zookeeper/server/DataNode.class
> org/apache/zookeeper/server/DataTree$1.class
> org/apache/zookeeper/server/DataTree$Counts.class
> org/apache/zookeeper/server/DataTree$ProcessTxnResult.class
> org/apache/zookeeper/server/DataTree.class
> org/apache/zookeeper/server/DataTreeBean.class
> org/apache/zookeeper/server/DataTreeMXBean.class
> org/apache/zookeeper/server/FinalRequestProcessor.class
> org/apache/zookeeper/server/LogFormatter.class
> org/apache/zookeeper/server/NIOServerCnxn$1.class
> org/apache/zookeeper/server/NIOServerCnxn$CloseRequestException.class
> org/apache/zookeeper/server/NIOServerCnxn$CnxnStatResetCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$CnxnStats.class
> org/apache/zookeeper/server/NIOServerCnxn$CommandThread.class
> org/apache/zookeeper/server/NIOServerCnxn$ConfCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$ConsCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$DumpCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$EndOfStreamException.class
> org/apache/zookeeper/server/NIOServerCnxn$EnvCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$Factory$1.class
> org/apache/zookeeper/server/NIOServerCnxn$Factory.class
> org/apache/zookeeper/server/NIOServerCnxn$RuokCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$SendBufferWriter.class
> org/apache/zookeeper/server/NIOServerCnxn$SetTraceMaskCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$StatCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$StatResetCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$TraceMaskCommand.class
> org/apache/zookeeper/server/NIOServerCnxn$WatchCommand.class
> org/apache/zookeeper/server/NIOServerCnxn.class
> org/apache/zookeeper/server/ObserverBean.class
> org/apache/zookeeper/server/PrepRequestProcessor.class
> org/apache/zookeeper/server/PurgeTxnLog$1MyFileFilter.class
> org/apache/zookeeper/server/PurgeTxnLog.class
> org/apache/zookeeper/server/Request.class
> org/apache/zookeeper/server/RequestProcessor$RequestProcessorException.class
> org/apache/zookeeper/server/RequestProcessor.class
> org/apache/zookeeper/server/ServerCnxn$Stats.class
> org/apache/zookeeper/server/ServerCnxn.class
> org/apache/zookeeper/server/ServerConfig.class
> org/apache/zookeeper/server/ServerStats$Provider.class
> org/apache/zookeeper/server/ServerStats.class
> org/apache/zookeeper/server/SessionTracker$Session.class
> org/apache/zookeeper/server/SessionTracker$SessionExpirer.class
> org/apache/zookeeper/server/SessionTracker.class
> org/apache/zookeeper/server/SessionTrackerImpl$SessionImpl.class
> org/apache/zookeeper/server/SessionTrackerImpl$SessionSet.class
> org/apache/zookeeper/server/SessionTrackerImpl.class
> org/apache/zookeeper/server/SyncRequestProcessor$1.class
> org/apache/zookeeper/server/SyncRequestProcessor.class
> org/apache/zookeeper/server/TraceFormatter.class
> org/apache/zookeeper/server/WatchManager.class
> org/apache/zookeeper/server/ZKDatabase$1.class
> org/apache/zookeeper/server/ZKDatabase.class
> org/apache/zookeeper/server/ZooKeeperServer$BasicDataTreeBuilder.class
> org/apache/zookeeper/server/ZooKeeperServer$ChangeRecord.class
> org/apache/zookeeper/server/ZooKeeperServer$DataTreeBuilder.class
> org/apache/zookeeper/server/ZooKeeperServer$Factory.class
> org/apache/zookeeper/server/ZooKeeperServer$MissingSessionException.class
> org/apache/zookeeper/server/ZooKeeperServer.class
> org/apache/zookeeper/server/ZooKeeperServerBean.class
> org/apache/zookeeper/server/ZooKeeperServerMXBean.class
> org/apache/zookeeper/server/ZooKeeperServerMain.class
> org/apache/zookeeper/server/ZooTrace.class
> org/apache/zookeeper/server/auth/AuthenticationProvider.class
> org/apache/zookeeper/server/auth/DigestAuthenticationProvider.class
> org/apache/zookeeper/server/auth/IPAuthenticationProvider.class
> org/apache/zookeeper/server/auth/ProviderRegistry.class
> org/apache/zookeeper/server/persistence/FileHeader.class
> org/apache/zookeeper/server/persistence/FileSnap.class
> org/apache/zookeeper/server/persistence/FileTxnLog$FileTxnIterator.class
> org/apache/zookeeper/server/persistence/FileTxnLog$PositionInputStream.class
> org/apache/zookeeper/server/persistence/FileTxnLog.class
> org/apache/zookeeper/server/persistence/FileTxnSnapLog$PlayBackListener.class
> org/apache/zookeeper/server/persistence/FileTxnSnapLog.class
> org/apache/zookeeper/server/persistence/SnapShot.class
> org/apache/zookeeper/server/persistence/TxnLog$TxnIterator.class
> org/apache/zookeeper/server/persistence/TxnLog.class
> org/apache/zookeeper/server/persistence/Util$DataDirFileComparator.class
> org/apache/zookeeper/server/persistence/Util.class
> org/apache/zookeeper/server/quorum/AckRequestProcessor.class
> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$1.class
> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$Messenger$WorkerReceiver.class
> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$Messenger$WorkerSender.class
> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$Messenger.class
> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$Notification.class
> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$ToSend$mType.class
> org/apache/zookeeper/server/quorum/AuthFastLeaderElection$ToSend.class
> org/apache/zookeeper/server/quorum/AuthFastLeaderElection.class
> org/apache/zookeeper/server/quorum/CommitProcessor.class
> org/apache/zookeeper/server/quorum/Election.class
> org/apache/zookeeper/server/quorum/FastLeaderElection$1.class
> org/apache/zookeeper/server/quorum/FastLeaderElection$Messenger$WorkerReceiver.class
> org/apache/zookeeper/server/quorum/FastLeaderElection$Messenger$WorkerSender.class
> org/apache/zookeeper/server/quorum/FastLeaderElection$Messenger.class
> org/apache/zookeeper/server/quorum/FastLeaderElection$Notification.class
> org/apache/zookeeper/server/quorum/FastLeaderElection$ToSend$mType.class
> org/apache/zookeeper/server/quorum/FastLeaderElection$ToSend.class
> org/apache/zookeeper/server/quorum/FastLeaderElection.class
> org/apache/zookeeper/server/quorum/Follower.class
> org/apache/zookeeper/server/quorum/FollowerBean.class
> org/apache/zookeeper/server/quorum/FollowerMXBean.class
> org/apache/zookeeper/server/quorum/FollowerRequestProcessor.class
> org/apache/zookeeper/server/quorum/FollowerZooKeeperServer.class
> org/apache/zookeeper/server/quorum/Leader$LearnerCnxAcceptor.class
> org/apache/zookeeper/server/quorum/Leader$Proposal.class
> org/apache/zookeeper/server/quorum/Leader$ToBeAppliedRequestProcessor.class
> org/apache/zookeeper/server/quorum/Leader$XidRolloverException.class
> org/apache/zookeeper/server/quorum/Leader.class
> org/apache/zookeeper/server/quorum/LeaderBean.class
> org/apache/zookeeper/server/quorum/LeaderElection$ElectionResult.class
> org/apache/zookeeper/server/quorum/LeaderElection.class
> org/apache/zookeeper/server/quorum/LeaderElectionBean.class
> org/apache/zookeeper/server/quorum/LeaderElectionMXBean.class
> org/apache/zookeeper/server/quorum/LeaderMXBean.class
> org/apache/zookeeper/server/quorum/LeaderZooKeeperServer.class
> org/apache/zookeeper/server/quorum/Learner$PacketInFlight.class
> org/apache/zookeeper/server/quorum/Learner.class
> org/apache/zookeeper/server/quorum/LearnerHandler$1.class
> org/apache/zookeeper/server/quorum/LearnerHandler.class
> org/apache/zookeeper/server/quorum/LearnerSessionTracker.class
> org/apache/zookeeper/server/quorum/LearnerSyncRequest.class
> org/apache/zookeeper/server/quorum/LearnerZooKeeperServer.class
> org/apache/zookeeper/server/quorum/LocalPeerBean.class
> org/apache/zookeeper/server/quorum/LocalPeerMXBean.class
> org/apache/zookeeper/server/quorum/Observer.class
> org/apache/zookeeper/server/quorum/ObserverMXBean.class
> org/apache/zookeeper/server/quorum/ObserverRequestProcessor.class
> org/apache/zookeeper/server/quorum/ObserverZooKeeperServer.class
> org/apache/zookeeper/server/quorum/ProposalRequestProcessor.class
> org/apache/zookeeper/server/quorum/QuorumBean.class
> org/apache/zookeeper/server/quorum/QuorumCnxManager$Listener.class
> org/apache/zookeeper/server/quorum/QuorumCnxManager$Message.class
> org/apache/zookeeper/server/quorum/QuorumCnxManager$RecvWorker.class
> org/apache/zookeeper/server/quorum/QuorumCnxManager$SendWorker.class
> org/apache/zookeeper/server/quorum/QuorumCnxManager.class
> org/apache/zookeeper/server/quorum/QuorumMXBean.class
> org/apache/zookeeper/server/quorum/QuorumPacket.class
> org/apache/zookeeper/server/quorum/QuorumPeer$1.class
> org/apache/zookeeper/server/quorum/QuorumPeer$Factory.class
> org/apache/zookeeper/server/quorum/QuorumPeer$LearnerType.class
> org/apache/zookeeper/server/quorum/QuorumPeer$QuorumServer.class
> org/apache/zookeeper/server/quorum/QuorumPeer$ResponderThread.class
> org/apache/zookeeper/server/quorum/QuorumPeer$ServerState.class
> org/apache/zookeeper/server/quorum/QuorumPeer.class
> org/apache/zookeeper/server/quorum/QuorumPeerConfig$ConfigException.class
> org/apache/zookeeper/server/quorum/QuorumPeerConfig.class
> org/apache/zookeeper/server/quorum/QuorumPeerMain.class
> org/apache/zookeeper/server/quorum/QuorumStats$Provider.class
> org/apache/zookeeper/server/quorum/QuorumStats.class
> org/apache/zookeeper/server/quorum/QuorumZooKeeperServer.class
> org/apache/zookeeper/server/quorum/RemotePeerBean.class
> org/apache/zookeeper/server/quorum/RemotePeerMXBean.class
> org/apache/zookeeper/server/quorum/SendAckRequestProcessor.class
> org/apache/zookeeper/server/quorum/ServerBean.class
> org/apache/zookeeper/server/quorum/ServerMXBean.class
> org/apache/zookeeper/server/quorum/Vote.class
> org/apache/zookeeper/server/quorum/flexible/QuorumHierarchical.class
> org/apache/zookeeper/server/quorum/flexible/QuorumMaj.class
> org/apache/zookeeper/server/quorum/flexible/QuorumVerifier.class
> org/apache/zookeeper/server/upgrade/DataNodeV1.class
> org/apache/zookeeper/server/upgrade/DataTreeV1$ProcessTxnResult.class
> org/apache/zookeeper/server/upgrade/DataTreeV1.class
> org/apache/zookeeper/server/upgrade/UpgradeMain.class
> org/apache/zookeeper/server/upgrade/UpgradeSnapShot.class
> org/apache/zookeeper/server/upgrade/UpgradeSnapShotV1.class
> org/apache/zookeeper/server/util/Profiler$Operation.class
> org/apache/zookeeper/server/util/Profiler.class
> org/apache/zookeeper/server/util/SerializeUtils.class
> org/apache/zookeeper/txn/CreateSessionTxn.class
> org/apache/zookeeper/txn/CreateTxn.class
> org/apache/zookeeper/txn/DeleteTxn.class
> org/apache/zookeeper/txn/ErrorTxn.class
> org/apache/zookeeper/txn/SetACLTxn.class
> org/apache/zookeeper/txn/SetDataTxn.class
> org/apache/zookeeper/txn/SetMaxChildrenTxn.class
> org/apache/zookeeper/txn/TxnHeader.class
> org/apache/zookeeper/version/Info.class
> org/apache/zookeeper/version/util/VerGen$Version.class
> org/apache/zookeeper/version/util/VerGen.class
>
>
> It seems ZooKeeper is included, yet I still get the same error after making all the changes above; I really have no idea what to do now.
>
> thanks
>
> Alec
>
> On Aug 5, 2014, at 2:11 PM, Kushan Maskey <ku...@mmillerassociates.com> wrote:
>
>> You need to include kafka_2.10-0.8.1.1.jar into your project jar. I had this issue and that resolved it.
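In the pom shown earlier in the thread, the kafka dependency is marked `<scope>provided</scope>`, which keeps it out of a jar-with-dependencies build; switching it to the default compile scope is one way to get `kafka/api/OffsetRequest` packed in. A sketch, with the artifact and version names taken from this thread (adjust to your own setup):

```xml
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.10</artifactId>
  <version>0.8.1.1</version>
  <!-- no <scope>provided</scope>: default compile scope lets the
       assembly plugin pack the kafka classes into the fat jar -->
  <exclusions>
    <exclusion>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>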
>>
>> --
>> Kushan Maskey
>> 817.403.7500
>>
>>
>> On Tue, Aug 5, 2014 at 3:57 PM, Parth Brahmbhatt <pb...@hortonworks.com> wrote:
>> I see a NoSuchMethodError; it looks like there is an issue with how your jar is packed. Can you confirm that the ZooKeeper dependency is packed in your jar? What versions of Curator and ZooKeeper are you using?
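For what it's worth, the missing method in the trace, `org.apache.zookeeper.ZooKeeper.<init>(String, int, Watcher, boolean)`, is the four-argument constructor that first appeared in ZooKeeper 3.4.0, while the `ClientCnxn` warnings elsewhere in this thread show `zookeeper-3.3.3.jar` on the classpath. If that version mismatch is the cause, pinning a 3.4-line ZooKeeper in the pom should satisfy Curator (the exact version below is illustrative):

```xml
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <!-- illustrative: any 3.4.x release has the 4-arg constructor Curator calls -->
  <version>3.4.5</version>
</dependency>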
>>
>> Thanks
>> Parth
>>
>>
>> On Tue, Aug 5, 2014 at 1:45 PM, Sa Li <sa...@gmail.com> wrote:
>> Thanks, Parth. I increased the sleep time to Thread.sleep(150000), i.e. 150 seconds, but I still get the same async problem; it seems to be a problem reading the Kafka topic from ZooKeeper.
>>
>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69) ~[storm-core-0.9.0.1.jar:na]
>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171) ~[storm-core-0.9.0.1.jar:na]
>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20) ~[storm-core-0.9.0.1.jar:na]
>> at storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43) ~[storm-core-0.9.0.1.jar:na]
>> at storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214) ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>> 3100 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.daemon.executor -
>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27) ~[storm-core-0.9.0.1.jar:na]
>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166) ~[storm-core-0.9.0.1.jar:na]
>> at storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38) ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26) ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>> 3101 [Thread-29-$mastercoord-bg0] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>> 3114 [Thread-10] INFO backtype.storm.daemon.worker - Worker has topology config {"storm.id" "kafka-1-1407271290", "dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/0610cc80-25a7-4304-acf0-9ead5f942429", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.kryo.decorators" (), "topology.name" "kafka", "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", 
"topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, "storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" 1, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (4 5 6), "topology.debug" true, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" 3}
>> 3115 [Thread-10] INFO backtype.storm.daemon.worker - Worker ee9ec3b6-5e13-4329-b12a-c3cffdd7e997 for storm kafka-1-1407271290 on 3aff208c-d065-448d-9026-bf452151d546:4 has finished loading
>> 3207 [Thread-25-spout0] INFO backtype.storm.util - Halting process: ("Worker died")
>>
>> Thanks
>>
>> Alec
>>
>>
>> On Aug 5, 2014, at 1:32 PM, Parth Brahmbhatt <pb...@hortonworks.com> wrote:
>>
>>> Can you let the topology run for 120 seconds or so? In my experience the kafka bolt/spout takes a lot of latency initially as it tries to read/write from zookeeper and initialize connections. On my mac it takes about 15 seconds before the spout is actually opened.
>>>
>>> Thanks
>>> Parth
>>> On Aug 5, 2014, at 1:11 PM, Sa Li <sa...@gmail.com> wrote:
>>>
>>>> If I set the sleep time to 1000 milliseconds, I get this error:
>>>>
>>>> 3067 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/0f1851f1-9499-48a5-817e-41712921d054
>>>> 3163 [Thread-10-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
>>>> 3163 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
>>>> 3164 [Thread-10-EventThread] WARN backtype.storm.cluster - Received event :disconnected::none: with disconnected Zookeeper.
>>>> 3636 [Thread-10-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server null, unexpected error, closing socket connection and attempting reconnect
>>>> java.net.ConnectException: Connection refused
>>>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_55]
>>>> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) ~[na:1.7.0_55]
>>>> at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) ~[zookeeper-3.3.3.jar:3.3.3-1073969]
>>>> 4877 [Thread-10-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server null, unexpected error, closing socket connection and attempting reconnect
>>>> java.net.ConnectException: Connection refused
>>>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_55]
>>>> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) ~[na:1.7.0_55]
>>>> at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) ~[zookeeper-3.3.3.jar:3.3.3-1073969]
>>>> 5566 [Thread-10-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server null, unexpected error, closing socket connection and attempting reconnect
>>>> java.net.ConnectException: Connection refused
>>>>
>>>> It seems it never even connects to ZooKeeper. Is there a way to confirm the ZooKeeper connection?
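On the general question of confirming the connection: ZooKeeper answers the four-letter `ruok` command on its client port with `imok` when it is serving requests, so a small stdlib-only probe can tell you whether anything is listening at all. A sketch (note that the local-mode logs in this thread show `storm.zookeeper.port` 2000, not the usual 2181):

```python
import socket

def zk_ruok(host="localhost", port=2181, timeout=3.0):
    """Send ZooKeeper's four-letter 'ruok' command.

    Returns the server's reply as bytes (b'imok' for a healthy server),
    or None if nothing is listening or the connection fails.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"ruok")
            s.shutdown(socket.SHUT_WR)  # tell the server we are done sending
            chunks = []
            while True:
                data = s.recv(64)
                if not data:  # server closes the socket after replying
                    break
                chunks.append(data)
            return b"".join(chunks)
    except OSError:
        return None

if __name__ == "__main__":
    # Storm's in-process local-mode ZooKeeper in the logs above uses port 2000.
    print(zk_ruok(port=2000))
```

A `None` result means the port is not reachable, which would explain the repeated "Connection refused" warnings from `ClientCnxn`.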
>>>>
>>>> Thanks a lot
>>>>
>>>> Alec
>>>>
>>>> On Aug 5, 2014, at 12:58 PM, Sa Li <sa...@gmail.com> wrote:
>>>>
>>>>> Thank you very much for your reply, Taylor. I tried increasing the sleep time to 1 second and then 10 seconds, but I got the following error; it seems to be an async loop error. Any idea about that?
>>>>>
>>>>> 3053 [Thread-19-$spoutcoord-spout0] INFO org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 3058 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.util - Async loop died!
>>>>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>>>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27) ~[storm-core-0.9.0.1.jar:na]
>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166) ~[storm-core-0.9.0.1.jar:na]
>>>>> at storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38) ~[storm-core-0.9.0.1.jar:na]
>>>>> at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26) ~[storm-core-0.9.0.1.jar:na]
>>>>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>>>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>>>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>>>> 3058 [Thread-25-spout0] ERROR backtype.storm.util - Async loop died!
>>>>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>>>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69) ~[storm-core-0.9.0.1.jar:na]
>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171) ~[storm-core-0.9.0.1.jar:na]
>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20) ~[storm-core-0.9.0.1.jar:na]
>>>>> at storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43) ~[storm-core-0.9.0.1.jar:na]
>>>>> at storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214) ~[storm-core-0.9.0.1.jar:na]
>>>>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>>>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>>>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>>>> 3059 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.daemon.executor -
>>>>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>>>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27) ~[storm-core-0.9.0.1.jar:na]
>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166) ~[storm-core-0.9.0.1.jar:na]
>>>>> at storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38) ~[storm-core-0.9.0.1.jar:na]
>>>>> at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26) ~[storm-core-0.9.0.1.jar:na]
>>>>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>>>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>>>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>>>> 3059 [Thread-25-spout0] ERROR backtype.storm.daemon.executor -
>>>>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>>>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69) ~[storm-core-0.9.0.1.jar:na]
>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171) ~[storm-core-0.9.0.1.jar:na]
>>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20) ~[storm-core-0.9.0.1.jar:na]
>>>>> at storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43) ~[storm-core-0.9.0.1.jar:na]
>>>>> at storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214) ~[storm-core-0.9.0.1.jar:na]
>>>>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>>>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>>>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>>>> 3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker has topology config {"storm.id" "kafka-1-1407268492", "dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/ca948198-69df-440b-8acb-6dfc4db6c288", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.kryo.decorators" (), "topology.name" "kafka", "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", 
"topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, "storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" true, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>> 3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker 64335058-7f94-447f-bc0a-5107084789a0 for storm kafka-1-1407268492 on cf2964b3-7655-4a33-88a1-f6e0ceb6f9ed:1 has finished loading
>>>>> 3164 [Thread-29-$mastercoord-bg0] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 3173 [Thread-25-spout0] INFO backtype.storm.util - Halting process: ("Worker died")
>>>>> 3173 [Thread-19-$spoutcoord-spout0] INFO backtype.storm.util - Halting process: ("Worker died")
>>>>>
>>>>> Thanks
>>>>>
>>>>> Alec
>>>>>
>>>>> On Aug 5, 2014, at 10:26 AM, P. Taylor Goetz <pt...@gmail.com> wrote:
>>>>>
>>>>>> You are only sleeping for 100 milliseconds before shutting down the local cluster, which is probably not long enough for the topology to come up and start processing messages. Try increasing the sleep time to something like 10 seconds.
>>>>>>
>>>>>> You can also reduce startup time with the following JVM flag:
>>>>>>
>>>>>> -Djava.net.preferIPv4Stack=true
>>>>>>
>>>>>> - Taylor
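Taylor's point can be sketched without any Storm dependencies: rather than a blind Thread.sleep(100) before cluster.shutdown(), wait on a readiness signal with a generous deadline. This is a minimal pure-JDK illustration only; the class name, the 200 ms poll interval, and the simulated 1 s startup are all illustrative, and in a real test the flag would be flipped by the topology itself (e.g. from the PrintStream filter).

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class WaitForTopology {
    // Poll a readiness flag until it is set or the deadline passes.
    static boolean awaitReady(AtomicBoolean ready, long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (ready.get()) return true;
            Thread.sleep(200); // poll interval instead of one fixed sleep
        }
        return ready.get();
    }

    public static void main(String[] args) throws Exception {
        final AtomicBoolean ready = new AtomicBoolean(false);
        // Simulates the topology becoming ready after ~1 s of startup work.
        new Thread(new Runnable() {
            public void run() {
                try { Thread.sleep(1000); } catch (InterruptedException e) { }
                ready.set(true);
            }
        }).start();
        // 10000 ms plays the role of the "10 seconds" suggested in the thread.
        System.out.println(awaitReady(ready, 10000) ? "topology ready" : "timed out");
        // cluster.shutdown() would go here in the real test
    }
}
```

The advantage over a single long sleep is that the test returns as soon as the condition holds instead of always paying the full timeout.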
>>>>>>
>>>>>> On Aug 5, 2014, at 1:16 PM, Sa Li <sa...@gmail.com> wrote:
>>>>>>
>>>>>>> Sorry, the stormTopology:
>>>>>>>
>>>>>>>> TridentTopology topology = new TridentTopology();
>>>>>>>> BrokerHosts zk = new ZkHosts("localhost");
>>>>>>> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "topictest");
>>>>>>>> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>>>>>>> OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Aug 5, 2014, at 9:56 AM, Sa Li <sa...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Thank you very much, Marcelo, it indeed worked; now I can run my code without getting the error. However, another thing keeps bothering me. Following is my code:
>>>>>>>>
>>>>>>>> public static class PrintStream implements Filter {
>>>>>>>>
>>>>>>>> @SuppressWarnings("rawtypes")
>>>>>>>> @Override
>>>>>>>> public void prepare(Map conf, TridentOperationContext context) {
>>>>>>>> }
>>>>>>>> @Override
>>>>>>>> public void cleanup() {
>>>>>>>> }
>>>>>>>> @Override
>>>>>>>> public boolean isKeep(TridentTuple tuple) {
>>>>>>>> System.out.println(tuple);
>>>>>>>> return true;
>>>>>>>> }
>>>>>>>> }
>>>>>>>> public static StormTopology buildTopology(LocalDRPC drpc) throws IOException {
>>>>>>>>
>>>>>>>> TridentTopology topology = new TridentTopology();
>>>>>>>> BrokerHosts zk = new ZkHosts("localhost");
>>>>>>>> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "ingest_test");
>>>>>>>> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>>>>>>> OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>>>>>>>>
>>>>>>>> topology.newStream("kafka", spout)
>>>>>>>> .each(new Fields("str"),
>>>>>>>> new PrintStream()
>>>>>>>> );
>>>>>>>>
>>>>>>>> return topology.build();
>>>>>>>> }
>>>>>>>> public static void main(String[] args) throws Exception {
>>>>>>>>
>>>>>>>> Config conf = new Config();
>>>>>>>> conf.setDebug(true);
>>>>>>>> conf.setMaxSpoutPending(1);
>>>>>>>> conf.setMaxTaskParallelism(3);
>>>>>>>> LocalDRPC drpc = new LocalDRPC();
>>>>>>>> LocalCluster cluster = new LocalCluster();
>>>>>>>> cluster.submitTopology("kafka", conf, buildTopology(drpc));
>>>>>>>> Thread.sleep(100);
>>>>>>>> cluster.shutdown();
>>>>>>>> }
>>>>>>>>
>>>>>>>> What I expect is quite simple: print out the messages I collect from a kafka producer playback process that is running separately. The topic is listed as:
>>>>>>>>
>>>>>>>> root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper localhost:2181
>>>>>>>> topic: topictest partition: 0 leader: 1 replicas: 1,3,2 isr: 1,3,2
>>>>>>>> topic: topictest partition: 1 leader: 2 replicas: 2,1,3 isr: 2,1,3
>>>>>>>> topic: topictest partition: 2 leader: 3 replicas: 3,2,1 isr: 3,2,1
>>>>>>>> topic: topictest partition: 3 leader: 1 replicas: 1,2,3 isr: 1,2,3
>>>>>>>> topic: topictest partition: 4 leader: 2 replicas: 2,3,1 isr: 2,3,1
>>>>>>>>
>>>>>>>> When I run the code, this is what I see on the screen; there seems to be no error, but no messages are printed out either:
>>>>>>>>
>>>>>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>>>>>> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>>>>> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>>>>>>> Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1 -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file= -cp /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.0
.1/lib/guava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/etc/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>>>>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>>>>>> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>>>>> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>>>>>>> 1113 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>>>>>>> 1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>>>>> 1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
>>>>>>>> 1237 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>> 1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>>>> 1350 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>> 1417 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>> 1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>>>> 1482 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>> 1484 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>> 1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>>>> 1540 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>> 1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>>>>> 1576 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>> 1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>>>> 1590 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>> 1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
>>>>>>>> 1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>>>>> 1638 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>> 1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>>>> 1690 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>> 1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
>>>>>>>> 1944 [main] INFO backtype.storm.daemon.nimbus - Received topology submission for kafka with conf {"topology.max.task.parallelism" nil, "topology.acker.executors" nil, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id" "kafka-1-1407257070", "topology.debug" true}
>>>>>>>> 1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka: kafka-1-1407257070
>>>>>>>> 2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 3] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
>>>>>>>> 2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment for topology id kafka-1-1407257070: #backtype.storm.daemon.common.Assignment{:master-code-dir "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070", :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"}, :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3] 1407257070}}
>>>>>>>> 2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
>>>>>>>> 2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
>>>>>>>> 2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
>>>>>>>> 2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
>>>>>>>> 2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
>>>>>>>> 2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
>>>>>>>> 2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
>>>>>>>> 2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
>>>>>>>> 2256 [main] INFO backtype.storm.testing - Shutting down in process zookeeper
>>>>>>>> 2257 [main] INFO backtype.storm.testing - Done shutting down in process zookeeper
>>>>>>>> 2258 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
>>>>>>>> 2259 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>>>>>>> 2260 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
>>>>>>>> 2261 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
>>>>>>>>
>>>>>>>> Can anyone help me locate what the problem is? I really need to get through this step in order to be able to replace .each(new PrintStream()) with other functions.
>>>>>>>>
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>> Alec
>>>>>>>>
>>>>>>>> On Aug 4, 2014, at 4:24 AM, Marcelo Valle <mv...@redoop.org> wrote:
>>>>>>>>
>>>>>>>>> hello,
>>>>>>>>>
>>>>>>>>> You can check your .jar application with the command "jar tf" to see if the class kafka/api/OffsetRequest.class is part of the jar.
>>>>>>>>> If not, you can try copying kafka_2.9.2-0.8.0.jar (or the version you are using) into storm's lib directory.
>>>>>>>>>
>>>>>>>>> Marcelo
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2014-07-31 23:33 GMT+02:00 Sa Li <sa...@gmail.com>:
>>>>>>>>> Hi, all
>>>>>>>>>
>>>>>>>>> I am running a kafka-spout code in storm-server, the pom is
>>>>>>>>>
>>>>>>>>> <groupId>org.apache.kafka</groupId>
>>>>>>>>> <artifactId>kafka_2.9.2</artifactId>
>>>>>>>>> <version>0.8.0</version>
>>>>>>>>> <scope>provided</scope>
>>>>>>>>>
>>>>>>>>> <exclusions>
>>>>>>>>> <exclusion>
>>>>>>>>> <groupId>org.apache.zookeeper</groupId>
>>>>>>>>> <artifactId>zookeeper</artifactId>
>>>>>>>>> </exclusion>
>>>>>>>>> <exclusion>
>>>>>>>>> <groupId>log4j</groupId>
>>>>>>>>> <artifactId>log4j</artifactId>
>>>>>>>>> </exclusion>
>>>>>>>>> </exclusions>
>>>>>>>>>
>>>>>>>>> </dependency>
>>>>>>>>>
>>>>>>>>> <!-- Storm-Kafka compiled -->
>>>>>>>>>
>>>>>>>>> <dependency>
>>>>>>>>> <artifactId>storm-kafka</artifactId>
>>>>>>>>> <groupId>org.apache.storm</groupId>
>>>>>>>>> <version>0.9.2-incubating</version>
>>>>>>>>> <scope>*compile*</scope>
>>>>>>>>> </dependency>
>>>>>>>>>
>>>>>>>>> I can mvn package it, but when I run it
>>>>>>>>> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# storm jar target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I am getting such error
>>>>>>>>>
>>>>>>>>> 1657 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>>> 1682 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id a66e0c61-a951-4c1b-a43f-3fb0d12cb226 at host DO-mq-dev
>>>>>>>>> 1698 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread Thread[main,5,main] died
>>>>>>>>> java.lang.NoClassDefFoundError: kafka/api/OffsetRequest
>>>>>>>>> at storm.artemis.kafka.KafkaConfig.<init>(KafkaConfig.java:26) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>>>> at storm.artemis.kafka.trident.TridentKafkaConfig.<init>(TridentKafkaConfig.java:13) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>>>> at storm.artemis.KafkaConsumerTopology.buildTopology(KafkaConsumerTopology.java:115) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>>>> at storm.artemis.KafkaConsumerTopology.main(KafkaConsumerTopology.java:144) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>>>> Caused by: java.lang.ClassNotFoundException: kafka.api.OffsetRequest
>>>>>>>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_55]
>>>>>>>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_55]
>>>>>>>>> at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_55]
>>>>>>>>> at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[na:1.7.0_55]
>>>>>>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_55]
>>>>>>>>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[na:1.7.0_55]
>>>>>>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_55]
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I try to poke around online, could not find a solution for it, any idea about that?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>> Alec
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>>
>>> CONFIDENTIALITY NOTICE
>>> NOTICE: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is confidential, privileged and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any printing, copying, dissemination, distribution, disclosure or forwarding of this communication is strictly prohibited. If you have received this communication in error, please contact the sender immediately and delete it from your system. Thank You.
>>
>>
>>
>>
>> --
>> Thanks
>> Parth
>>
>>
>
Re: kafka-spout running error
Posted by Sa Li <sa...@gmail.com>.
Thanks, Kushan and Parth. I tried to solve the problem as you two suggested: first I changed the kafka version in the pom and recompiled it, and also copied kafka_2.10-0.8.1.1.jar from M2_REPO into storm's lib directory. Here is my pom:
<dependency>
<groupId>org.apache.curator</groupId>
<artifactId>curator-framework</artifactId>
<version>2.6.0</version>
<exclusions>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.curator</groupId>
<artifactId>curator-recipes</artifactId>
<version>2.6.0</version>
<exclusions>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.curator</groupId>
<artifactId>curator-test</artifactId>
<version>2.6.0</version>
<exclusions>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
<exclusion>
<groupId>org.testng</groupId>
<artifactId>testng</artifactId>
</exclusion>
</exclusions>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
<version>3.3.6</version>
<exclusions>
<exclusion>
<groupId>com.sun.jmx</groupId>
<artifactId>jmxri</artifactId>
</exclusion>
<exclusion>
<groupId>com.sun.jdmk</groupId>
<artifactId>jmxtools</artifactId>
</exclusion>
<exclusion>
<groupId>javax.jms</groupId>
<artifactId>jms</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- Kafka 0.8.1.1 compiled with Scala 2.10 -->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.10</artifactId>
<version>0.8.1.1</version>
<scope>provided</scope>
<exclusions>
<exclusion>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
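One thing worth noting about the kafka dependency above: `provided` scope tells Maven not to package kafka into the jar-with-dependencies, which is consistent with the NoClassDefFoundError earlier in the thread. A hypothetical alternative (not what the thread did, and assuming the assembly/shade plugin bundles compile-scope dependencies) is to drop the scope so the class ships inside the fat jar instead of being copied into storm's lib directory:

```xml
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.10</artifactId>
<version>0.8.1.1</version>
<!-- scope omitted: defaults to compile, so kafka/api/OffsetRequest ends up in the fat jar -->
<exclusions>
<!-- same zookeeper and log4j exclusions as in the original dependency -->
</exclusions>
</dependency>
```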
Here the zookeeper version is 3.3.6 (it was downgraded since a "java.lang.ClassNotFoundException: org.apache.zookeeper.server.NIOServerCnxn$Factory at java.net" error came out otherwise), and the curator version is 2.6.0. I ran jar tf on the project jar to see the classes included:
root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# jar tf target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar | grep zookeeper
org/apache/zookeeper/
org/apache/zookeeper/client/
org/apache/zookeeper/common/
org/apache/zookeeper/data/
org/apache/zookeeper/jmx/
org/apache/zookeeper/proto/
org/apache/zookeeper/server/
org/apache/zookeeper/server/auth/
org/apache/zookeeper/server/persistence/
org/apache/zookeeper/server/quorum/
org/apache/zookeeper/server/quorum/flexible/
org/apache/zookeeper/server/upgrade/
org/apache/zookeeper/server/util/
org/apache/zookeeper/txn/
org/apache/zookeeper/version/
org/apache/zookeeper/version/util/
org/apache/zookeeper/AsyncCallback$ACLCallback.class
org/apache/zookeeper/AsyncCallback$Children2Callback.class
org/apache/zookeeper/AsyncCallback$ChildrenCallback.class
org/apache/zookeeper/AsyncCallback$DataCallback.class
org/apache/zookeeper/AsyncCallback$StatCallback.class
org/apache/zookeeper/AsyncCallback$StringCallback.class
org/apache/zookeeper/AsyncCallback$VoidCallback.class
org/apache/zookeeper/AsyncCallback.class
org/apache/zookeeper/ClientCnxn$1.class
org/apache/zookeeper/ClientCnxn$2.class
org/apache/zookeeper/ClientCnxn$AuthData.class
org/apache/zookeeper/ClientCnxn$EndOfStreamException.class
org/apache/zookeeper/ClientCnxn$EventThread.class
org/apache/zookeeper/ClientCnxn$Packet.class
org/apache/zookeeper/ClientCnxn$SendThread.class
org/apache/zookeeper/ClientCnxn$SessionExpiredException.class
org/apache/zookeeper/ClientCnxn$SessionTimeoutException.class
org/apache/zookeeper/ClientCnxn$WatcherSetEventPair.class
org/apache/zookeeper/ClientCnxn.class
org/apache/zookeeper/ClientWatchManager.class
org/apache/zookeeper/CreateMode.class
org/apache/zookeeper/Environment$Entry.class
org/apache/zookeeper/Environment.class
org/apache/zookeeper/JLineZNodeCompletor.class
org/apache/zookeeper/KeeperException$1.class
org/apache/zookeeper/KeeperException$APIErrorException.class
org/apache/zookeeper/KeeperException$AuthFailedException.class
org/apache/zookeeper/KeeperException$BadArgumentsException.class
org/apache/zookeeper/KeeperException$BadVersionException.class
org/apache/zookeeper/KeeperException$Code.class
org/apache/zookeeper/KeeperException$CodeDeprecated.class
org/apache/zookeeper/KeeperException$ConnectionLossException.class
org/apache/zookeeper/KeeperException$DataInconsistencyException.class
org/apache/zookeeper/KeeperException$InvalidACLException.class
org/apache/zookeeper/KeeperException$InvalidCallbackException.class
org/apache/zookeeper/KeeperException$MarshallingErrorException.class
org/apache/zookeeper/KeeperException$NoAuthException.class
org/apache/zookeeper/KeeperException$NoChildrenForEphemeralsException.class
org/apache/zookeeper/KeeperException$NoNodeException.class
org/apache/zookeeper/KeeperException$NodeExistsException.class
org/apache/zookeeper/KeeperException$NotEmptyException.class
org/apache/zookeeper/KeeperException$OperationTimeoutException.class
org/apache/zookeeper/KeeperException$RuntimeInconsistencyException.class
org/apache/zookeeper/KeeperException$SessionExpiredException.class
org/apache/zookeeper/KeeperException$SessionMovedException.class
org/apache/zookeeper/KeeperException$SystemErrorException.class
org/apache/zookeeper/KeeperException$UnimplementedException.class
org/apache/zookeeper/KeeperException.class
org/apache/zookeeper/Quotas.class
org/apache/zookeeper/ServerAdminClient.class
org/apache/zookeeper/StatsTrack.class
org/apache/zookeeper/Version.class
org/apache/zookeeper/WatchedEvent.class
org/apache/zookeeper/Watcher$Event$EventType.class
org/apache/zookeeper/Watcher$Event$KeeperState.class
org/apache/zookeeper/Watcher$Event.class
org/apache/zookeeper/Watcher.class
org/apache/zookeeper/ZooDefs$Ids.class
org/apache/zookeeper/ZooDefs$OpCode.class
org/apache/zookeeper/ZooDefs$Perms.class
org/apache/zookeeper/ZooDefs.class
org/apache/zookeeper/ZooKeeper$1.class
org/apache/zookeeper/ZooKeeper$ChildWatchRegistration.class
org/apache/zookeeper/ZooKeeper$DataWatchRegistration.class
org/apache/zookeeper/ZooKeeper$ExistsWatchRegistration.class
org/apache/zookeeper/ZooKeeper$States.class
org/apache/zookeeper/ZooKeeper$WatchRegistration.class
org/apache/zookeeper/ZooKeeper$ZKWatchManager.class
org/apache/zookeeper/ZooKeeper.class
org/apache/zookeeper/ZooKeeperMain$1.class
org/apache/zookeeper/ZooKeeperMain$MyCommandOptions.class
org/apache/zookeeper/ZooKeeperMain$MyWatcher.class
org/apache/zookeeper/ZooKeeperMain.class
org/apache/zookeeper/client/FourLetterWordMain.class
org/apache/zookeeper/common/PathTrie$1.class
org/apache/zookeeper/common/PathTrie$TrieNode.class
org/apache/zookeeper/common/PathTrie.class
org/apache/zookeeper/common/PathUtils.class
org/apache/zookeeper/data/ACL.class
org/apache/zookeeper/data/Id.class
org/apache/zookeeper/data/Stat.class
org/apache/zookeeper/data/StatPersisted.class
org/apache/zookeeper/data/StatPersistedV1.class
org/apache/zookeeper/jmx/CommonNames.class
org/apache/zookeeper/jmx/MBeanRegistry.class
org/apache/zookeeper/jmx/ManagedUtil.class
org/apache/zookeeper/jmx/ZKMBeanInfo.class
org/apache/zookeeper/proto/AuthPacket.class
org/apache/zookeeper/proto/ConnectRequest.class
org/apache/zookeeper/proto/ConnectResponse.class
org/apache/zookeeper/proto/CreateRequest.class
org/apache/zookeeper/proto/CreateResponse.class
org/apache/zookeeper/proto/DeleteRequest.class
org/apache/zookeeper/proto/ExistsRequest.class
org/apache/zookeeper/proto/ExistsResponse.class
org/apache/zookeeper/proto/GetACLRequest.class
org/apache/zookeeper/proto/GetACLResponse.class
org/apache/zookeeper/proto/GetChildren2Request.class
org/apache/zookeeper/proto/GetChildren2Response.class
org/apache/zookeeper/proto/GetChildrenRequest.class
org/apache/zookeeper/proto/GetChildrenResponse.class
org/apache/zookeeper/proto/GetDataRequest.class
org/apache/zookeeper/proto/GetDataResponse.class
org/apache/zookeeper/proto/GetMaxChildrenRequest.class
org/apache/zookeeper/proto/GetMaxChildrenResponse.class
org/apache/zookeeper/proto/ReplyHeader.class
org/apache/zookeeper/proto/RequestHeader.class
org/apache/zookeeper/proto/SetACLRequest.class
org/apache/zookeeper/proto/SetACLResponse.class
org/apache/zookeeper/proto/SetDataRequest.class
org/apache/zookeeper/proto/SetDataResponse.class
org/apache/zookeeper/proto/SetMaxChildrenRequest.class
org/apache/zookeeper/proto/SetWatches.class
org/apache/zookeeper/proto/SyncRequest.class
org/apache/zookeeper/proto/SyncResponse.class
org/apache/zookeeper/proto/WatcherEvent.class
org/apache/zookeeper/proto/op_result_t.class
org/apache/zookeeper/server/ByteBufferInputStream.class
org/apache/zookeeper/server/ConnectionBean.class
org/apache/zookeeper/server/ConnectionMXBean.class
org/apache/zookeeper/server/DataNode.class
org/apache/zookeeper/server/DataTree$1.class
org/apache/zookeeper/server/DataTree$Counts.class
org/apache/zookeeper/server/DataTree$ProcessTxnResult.class
org/apache/zookeeper/server/DataTree.class
org/apache/zookeeper/server/DataTreeBean.class
org/apache/zookeeper/server/DataTreeMXBean.class
org/apache/zookeeper/server/FinalRequestProcessor.class
org/apache/zookeeper/server/LogFormatter.class
org/apache/zookeeper/server/NIOServerCnxn$1.class
org/apache/zookeeper/server/NIOServerCnxn$CloseRequestException.class
org/apache/zookeeper/server/NIOServerCnxn$CnxnStatResetCommand.class
org/apache/zookeeper/server/NIOServerCnxn$CnxnStats.class
org/apache/zookeeper/server/NIOServerCnxn$CommandThread.class
org/apache/zookeeper/server/NIOServerCnxn$ConfCommand.class
org/apache/zookeeper/server/NIOServerCnxn$ConsCommand.class
org/apache/zookeeper/server/NIOServerCnxn$DumpCommand.class
org/apache/zookeeper/server/NIOServerCnxn$EndOfStreamException.class
org/apache/zookeeper/server/NIOServerCnxn$EnvCommand.class
org/apache/zookeeper/server/NIOServerCnxn$Factory$1.class
org/apache/zookeeper/server/NIOServerCnxn$Factory.class
org/apache/zookeeper/server/NIOServerCnxn$RuokCommand.class
org/apache/zookeeper/server/NIOServerCnxn$SendBufferWriter.class
org/apache/zookeeper/server/NIOServerCnxn$SetTraceMaskCommand.class
org/apache/zookeeper/server/NIOServerCnxn$StatCommand.class
org/apache/zookeeper/server/NIOServerCnxn$StatResetCommand.class
org/apache/zookeeper/server/NIOServerCnxn$TraceMaskCommand.class
org/apache/zookeeper/server/NIOServerCnxn$WatchCommand.class
org/apache/zookeeper/server/NIOServerCnxn.class
org/apache/zookeeper/server/ObserverBean.class
org/apache/zookeeper/server/PrepRequestProcessor.class
org/apache/zookeeper/server/PurgeTxnLog$1MyFileFilter.class
org/apache/zookeeper/server/PurgeTxnLog.class
org/apache/zookeeper/server/Request.class
org/apache/zookeeper/server/RequestProcessor$RequestProcessorException.class
org/apache/zookeeper/server/RequestProcessor.class
org/apache/zookeeper/server/ServerCnxn$Stats.class
org/apache/zookeeper/server/ServerCnxn.class
org/apache/zookeeper/server/ServerConfig.class
org/apache/zookeeper/server/ServerStats$Provider.class
org/apache/zookeeper/server/ServerStats.class
org/apache/zookeeper/server/SessionTracker$Session.class
org/apache/zookeeper/server/SessionTracker$SessionExpirer.class
org/apache/zookeeper/server/SessionTracker.class
org/apache/zookeeper/server/SessionTrackerImpl$SessionImpl.class
org/apache/zookeeper/server/SessionTrackerImpl$SessionSet.class
org/apache/zookeeper/server/SessionTrackerImpl.class
org/apache/zookeeper/server/SyncRequestProcessor$1.class
org/apache/zookeeper/server/SyncRequestProcessor.class
org/apache/zookeeper/server/TraceFormatter.class
org/apache/zookeeper/server/WatchManager.class
org/apache/zookeeper/server/ZKDatabase$1.class
org/apache/zookeeper/server/ZKDatabase.class
org/apache/zookeeper/server/ZooKeeperServer$BasicDataTreeBuilder.class
org/apache/zookeeper/server/ZooKeeperServer$ChangeRecord.class
org/apache/zookeeper/server/ZooKeeperServer$DataTreeBuilder.class
org/apache/zookeeper/server/ZooKeeperServer$Factory.class
org/apache/zookeeper/server/ZooKeeperServer$MissingSessionException.class
org/apache/zookeeper/server/ZooKeeperServer.class
org/apache/zookeeper/server/ZooKeeperServerBean.class
org/apache/zookeeper/server/ZooKeeperServerMXBean.class
org/apache/zookeeper/server/ZooKeeperServerMain.class
org/apache/zookeeper/server/ZooTrace.class
org/apache/zookeeper/server/auth/AuthenticationProvider.class
org/apache/zookeeper/server/auth/DigestAuthenticationProvider.class
org/apache/zookeeper/server/auth/IPAuthenticationProvider.class
org/apache/zookeeper/server/auth/ProviderRegistry.class
org/apache/zookeeper/server/persistence/FileHeader.class
org/apache/zookeeper/server/persistence/FileSnap.class
org/apache/zookeeper/server/persistence/FileTxnLog$FileTxnIterator.class
org/apache/zookeeper/server/persistence/FileTxnLog$PositionInputStream.class
org/apache/zookeeper/server/persistence/FileTxnLog.class
org/apache/zookeeper/server/persistence/FileTxnSnapLog$PlayBackListener.class
org/apache/zookeeper/server/persistence/FileTxnSnapLog.class
org/apache/zookeeper/server/persistence/SnapShot.class
org/apache/zookeeper/server/persistence/TxnLog$TxnIterator.class
org/apache/zookeeper/server/persistence/TxnLog.class
org/apache/zookeeper/server/persistence/Util$DataDirFileComparator.class
org/apache/zookeeper/server/persistence/Util.class
org/apache/zookeeper/server/quorum/AckRequestProcessor.class
org/apache/zookeeper/server/quorum/AuthFastLeaderElection$1.class
org/apache/zookeeper/server/quorum/AuthFastLeaderElection$Messenger$WorkerReceiver.class
org/apache/zookeeper/server/quorum/AuthFastLeaderElection$Messenger$WorkerSender.class
org/apache/zookeeper/server/quorum/AuthFastLeaderElection$Messenger.class
org/apache/zookeeper/server/quorum/AuthFastLeaderElection$Notification.class
org/apache/zookeeper/server/quorum/AuthFastLeaderElection$ToSend$mType.class
org/apache/zookeeper/server/quorum/AuthFastLeaderElection$ToSend.class
org/apache/zookeeper/server/quorum/AuthFastLeaderElection.class
org/apache/zookeeper/server/quorum/CommitProcessor.class
org/apache/zookeeper/server/quorum/Election.class
org/apache/zookeeper/server/quorum/FastLeaderElection$1.class
org/apache/zookeeper/server/quorum/FastLeaderElection$Messenger$WorkerReceiver.class
org/apache/zookeeper/server/quorum/FastLeaderElection$Messenger$WorkerSender.class
org/apache/zookeeper/server/quorum/FastLeaderElection$Messenger.class
org/apache/zookeeper/server/quorum/FastLeaderElection$Notification.class
org/apache/zookeeper/server/quorum/FastLeaderElection$ToSend$mType.class
org/apache/zookeeper/server/quorum/FastLeaderElection$ToSend.class
org/apache/zookeeper/server/quorum/FastLeaderElection.class
org/apache/zookeeper/server/quorum/Follower.class
org/apache/zookeeper/server/quorum/FollowerBean.class
org/apache/zookeeper/server/quorum/FollowerMXBean.class
org/apache/zookeeper/server/quorum/FollowerRequestProcessor.class
org/apache/zookeeper/server/quorum/FollowerZooKeeperServer.class
org/apache/zookeeper/server/quorum/Leader$LearnerCnxAcceptor.class
org/apache/zookeeper/server/quorum/Leader$Proposal.class
org/apache/zookeeper/server/quorum/Leader$ToBeAppliedRequestProcessor.class
org/apache/zookeeper/server/quorum/Leader$XidRolloverException.class
org/apache/zookeeper/server/quorum/Leader.class
org/apache/zookeeper/server/quorum/LeaderBean.class
org/apache/zookeeper/server/quorum/LeaderElection$ElectionResult.class
org/apache/zookeeper/server/quorum/LeaderElection.class
org/apache/zookeeper/server/quorum/LeaderElectionBean.class
org/apache/zookeeper/server/quorum/LeaderElectionMXBean.class
org/apache/zookeeper/server/quorum/LeaderMXBean.class
org/apache/zookeeper/server/quorum/LeaderZooKeeperServer.class
org/apache/zookeeper/server/quorum/Learner$PacketInFlight.class
org/apache/zookeeper/server/quorum/Learner.class
org/apache/zookeeper/server/quorum/LearnerHandler$1.class
org/apache/zookeeper/server/quorum/LearnerHandler.class
org/apache/zookeeper/server/quorum/LearnerSessionTracker.class
org/apache/zookeeper/server/quorum/LearnerSyncRequest.class
org/apache/zookeeper/server/quorum/LearnerZooKeeperServer.class
org/apache/zookeeper/server/quorum/LocalPeerBean.class
org/apache/zookeeper/server/quorum/LocalPeerMXBean.class
org/apache/zookeeper/server/quorum/Observer.class
org/apache/zookeeper/server/quorum/ObserverMXBean.class
org/apache/zookeeper/server/quorum/ObserverRequestProcessor.class
org/apache/zookeeper/server/quorum/ObserverZooKeeperServer.class
org/apache/zookeeper/server/quorum/ProposalRequestProcessor.class
org/apache/zookeeper/server/quorum/QuorumBean.class
org/apache/zookeeper/server/quorum/QuorumCnxManager$Listener.class
org/apache/zookeeper/server/quorum/QuorumCnxManager$Message.class
org/apache/zookeeper/server/quorum/QuorumCnxManager$RecvWorker.class
org/apache/zookeeper/server/quorum/QuorumCnxManager$SendWorker.class
org/apache/zookeeper/server/quorum/QuorumCnxManager.class
org/apache/zookeeper/server/quorum/QuorumMXBean.class
org/apache/zookeeper/server/quorum/QuorumPacket.class
org/apache/zookeeper/server/quorum/QuorumPeer$1.class
org/apache/zookeeper/server/quorum/QuorumPeer$Factory.class
org/apache/zookeeper/server/quorum/QuorumPeer$LearnerType.class
org/apache/zookeeper/server/quorum/QuorumPeer$QuorumServer.class
org/apache/zookeeper/server/quorum/QuorumPeer$ResponderThread.class
org/apache/zookeeper/server/quorum/QuorumPeer$ServerState.class
org/apache/zookeeper/server/quorum/QuorumPeer.class
org/apache/zookeeper/server/quorum/QuorumPeerConfig$ConfigException.class
org/apache/zookeeper/server/quorum/QuorumPeerConfig.class
org/apache/zookeeper/server/quorum/QuorumPeerMain.class
org/apache/zookeeper/server/quorum/QuorumStats$Provider.class
org/apache/zookeeper/server/quorum/QuorumStats.class
org/apache/zookeeper/server/quorum/QuorumZooKeeperServer.class
org/apache/zookeeper/server/quorum/RemotePeerBean.class
org/apache/zookeeper/server/quorum/RemotePeerMXBean.class
org/apache/zookeeper/server/quorum/SendAckRequestProcessor.class
org/apache/zookeeper/server/quorum/ServerBean.class
org/apache/zookeeper/server/quorum/ServerMXBean.class
org/apache/zookeeper/server/quorum/Vote.class
org/apache/zookeeper/server/quorum/flexible/QuorumHierarchical.class
org/apache/zookeeper/server/quorum/flexible/QuorumMaj.class
org/apache/zookeeper/server/quorum/flexible/QuorumVerifier.class
org/apache/zookeeper/server/upgrade/DataNodeV1.class
org/apache/zookeeper/server/upgrade/DataTreeV1$ProcessTxnResult.class
org/apache/zookeeper/server/upgrade/DataTreeV1.class
org/apache/zookeeper/server/upgrade/UpgradeMain.class
org/apache/zookeeper/server/upgrade/UpgradeSnapShot.class
org/apache/zookeeper/server/upgrade/UpgradeSnapShotV1.class
org/apache/zookeeper/server/util/Profiler$Operation.class
org/apache/zookeeper/server/util/Profiler.class
org/apache/zookeeper/server/util/SerializeUtils.class
org/apache/zookeeper/txn/CreateSessionTxn.class
org/apache/zookeeper/txn/CreateTxn.class
org/apache/zookeeper/txn/DeleteTxn.class
org/apache/zookeeper/txn/ErrorTxn.class
org/apache/zookeeper/txn/SetACLTxn.class
org/apache/zookeeper/txn/SetDataTxn.class
org/apache/zookeeper/txn/SetMaxChildrenTxn.class
org/apache/zookeeper/txn/TxnHeader.class
org/apache/zookeeper/version/Info.class
org/apache/zookeeper/version/util/VerGen$Version.class
org/apache/zookeeper/version/util/VerGen.class
It seems ZooKeeper is included, but I still get the same issue after all the changes above; I really have no idea what to do now.
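By the way, the listing above shows that ZooKeeper classes are bundled, but not which ZooKeeper release actually went into the assembly. Assembled "with-dependencies" jars usually keep each dependency's META-INF/maven/<groupId>/<artifactId>/pom.properties entry, so a small script like this (an untested sketch; it assumes the assembly did not strip that metadata) can read the bundled version:

```python
import zipfile

def packed_version(jar_path, group_id, artifact_id):
    """Return the Maven version recorded for a bundled dependency, or None.

    Assembled jars normally retain each dependency's
    META-INF/maven/<groupId>/<artifactId>/pom.properties entry.
    """
    entry = "META-INF/maven/%s/%s/pom.properties" % (group_id, artifact_id)
    with zipfile.ZipFile(jar_path) as jar:  # a jar is just a zip archive
        if entry not in jar.namelist():
            return None  # not bundled, or the metadata was stripped
        for line in jar.read(entry).decode("utf-8").splitlines():
            if line.startswith("version="):
                return line.split("=", 1)[1].strip()
    return None
```

For example, packed_version("target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar", "org.apache.zookeeper", "zookeeper") should print the ZooKeeper version that was actually packed, which may differ from what the pom declares.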
thanks
Alec
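PS: for what it's worth, the constructor in the NoSuchMethodError below, ZooKeeper.<init>(String, int, Watcher, boolean), was only added in ZooKeeper 3.4.0, and Curator 2.6.0 builds against the 3.4.x client, so a 3.3.x ZooKeeper jar on the classpath would explain that error. If that is the cause, pinning the client version in the pom should help (a guess, not verified; adjust the version to match your cluster):

```xml
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <version>3.4.6</version>
</dependency>
```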
On Aug 5, 2014, at 2:11 PM, Kushan Maskey <ku...@mmillerassociates.com> wrote:
> You need to include kafka_2.10-0.8.1.1.jar in your project jar. I had this issue, and including it resolved it.
>
> --
> Kushan Maskey
> 817.403.7500
>
>
> On Tue, Aug 5, 2014 at 3:57 PM, Parth Brahmbhatt <pb...@hortonworks.com> wrote:
> I see a NoSuchMethodError; it seems like there is some issue with your jar packaging. Can you confirm that you have the ZooKeeper dependency packed in your jar? What versions of Curator and ZooKeeper are you using?
>
> Thanks
> Parth
>
>
> On Tue, Aug 5, 2014 at 1:45 PM, Sa Li <sa...@gmail.com> wrote:
> Thanks, Parth. I increased the sleep time to Thread.sleep(150000000), but I still get the async problem; it seems to be an issue reading the Kafka topic from ZooKeeper.
>
> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69) ~[storm-core-0.9.0.1.jar:na]
> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171) ~[storm-core-0.9.0.1.jar:na]
> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20) ~[storm-core-0.9.0.1.jar:na]
> at storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43) ~[storm-core-0.9.0.1.jar:na]
> at storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214) ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
> 3100 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.daemon.executor -
> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27) ~[storm-core-0.9.0.1.jar:na]
> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166) ~[storm-core-0.9.0.1.jar:na]
> at storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38) ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26) ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
> 3101 [Thread-29-$mastercoord-bg0] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 3114 [Thread-10] INFO backtype.storm.daemon.worker - Worker has topology config {"storm.id" "kafka-1-1407271290", "dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/0610cc80-25a7-4304-acf0-9ead5f942429", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.kryo.decorators" (), "topology.name" "kafka", "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", 
"topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, "storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" 1, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (4 5 6), "topology.debug" true, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" 3}
> 3115 [Thread-10] INFO backtype.storm.daemon.worker - Worker ee9ec3b6-5e13-4329-b12a-c3cffdd7e997 for storm kafka-1-1407271290 on 3aff208c-d065-448d-9026-bf452151d546:4 has finished loading
> 3207 [Thread-25-spout0] INFO backtype.storm.util - Halting process: ("Worker died")
>
> Thanks
>
> Alec
>
>
> On Aug 5, 2014, at 1:32 PM, Parth Brahmbhatt <pb...@hortonworks.com> wrote:
>
>> Can you let the topology run for 120 seconds or so? In my experience the Kafka bolt/spout has a lot of initial latency as it tries to read/write from ZooKeeper and initialize connections. On my Mac it takes about 15 seconds before the spout actually opens.
>>
>> Thanks
>> Parth
>> On Aug 5, 2014, at 1:11 PM, Sa Li <sa...@gmail.com> wrote:
>>
>>> If I set the sleep time to 1000 milliseconds, I get this error:
>>>
>>> 3067 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/0f1851f1-9499-48a5-817e-41712921d054
>>> 3163 [Thread-10-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
>>> 3163 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
>>> 3164 [Thread-10-EventThread] WARN backtype.storm.cluster - Received event :disconnected::none: with disconnected Zookeeper.
>>> 3636 [Thread-10-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server null, unexpected error, closing socket connection and attempting reconnect
>>> java.net.ConnectException: Connection refused
>>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_55]
>>> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) ~[na:1.7.0_55]
>>> at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) ~[zookeeper-3.3.3.jar:3.3.3-1073969]
>>> 4877 [Thread-10-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server null, unexpected error, closing socket connection and attempting reconnect
>>> java.net.ConnectException: Connection refused
>>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_55]
>>> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) ~[na:1.7.0_55]
>>> at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) ~[zookeeper-3.3.3.jar:3.3.3-1073969]
>>> 5566 [Thread-10-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server null, unexpected error, closing socket connection and attempting reconnect
>>> java.net.ConnectException: Connection refused
>>>
>>> It seems it never even connects to ZooKeeper. Is there any way to confirm the ZooKeeper connection?
>>>
>>> Thanks a lot
>>>
>>> Alec
>>>
>>> On Aug 5, 2014, at 12:58 PM, Sa Li <sa...@gmail.com> wrote:
>>>
>>>> Thank you very much for your reply, Taylor. I tried increasing the sleep time to 1 sec or 10 sec, but I got the following error; it seems to be an async loop error. Any idea about that?
>>>>
>>>> 3053 [Thread-19-$spoutcoord-spout0] INFO org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>> 3058 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.util - Async loop died!
>>>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27) ~[storm-core-0.9.0.1.jar:na]
>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166) ~[storm-core-0.9.0.1.jar:na]
>>>> at storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38) ~[storm-core-0.9.0.1.jar:na]
>>>> at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26) ~[storm-core-0.9.0.1.jar:na]
>>>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>>> 3058 [Thread-25-spout0] ERROR backtype.storm.util - Async loop died!
>>>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69) ~[storm-core-0.9.0.1.jar:na]
>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171) ~[storm-core-0.9.0.1.jar:na]
>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20) ~[storm-core-0.9.0.1.jar:na]
>>>> at storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43) ~[storm-core-0.9.0.1.jar:na]
>>>> at storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214) ~[storm-core-0.9.0.1.jar:na]
>>>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>>> 3059 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.daemon.executor -
>>>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27) ~[storm-core-0.9.0.1.jar:na]
>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166) ~[storm-core-0.9.0.1.jar:na]
>>>> at storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38) ~[storm-core-0.9.0.1.jar:na]
>>>> at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26) ~[storm-core-0.9.0.1.jar:na]
>>>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>>> 3059 [Thread-25-spout0] ERROR backtype.storm.daemon.executor -
>>>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69) ~[storm-core-0.9.0.1.jar:na]
>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171) ~[storm-core-0.9.0.1.jar:na]
>>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20) ~[storm-core-0.9.0.1.jar:na]
>>>> at storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43) ~[storm-core-0.9.0.1.jar:na]
>>>> at storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214) ~[storm-core-0.9.0.1.jar:na]
>>>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>>> 3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker has topology config {"storm.id" "kafka-1-1407268492", "dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/ca948198-69df-440b-8acb-6dfc4db6c288", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.kryo.decorators" (), "topology.name" "kafka", "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", 
"topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, "storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" true, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>> 3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker 64335058-7f94-447f-bc0a-5107084789a0 for storm kafka-1-1407268492 on cf2964b3-7655-4a33-88a1-f6e0ceb6f9ed:1 has finished loading
>>>> 3164 [Thread-29-$mastercoord-bg0] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>> 3173 [Thread-25-spout0] INFO backtype.storm.util - Halting process: ("Worker died")
>>>> 3173 [Thread-19-$spoutcoord-spout0] INFO backtype.storm.util - Halting process: ("Worker died")
>>>>
>>>> Thanks
>>>>
>>>> Alec
>>>>
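[Editor's note] The `NoSuchMethodError` in the log above is the classic symptom of two ZooKeeper versions on the classpath: Curator calls a four-argument `ZooKeeper` constructor (the one with the `canBeReadOnly` flag) that only exists in ZooKeeper 3.4+, while the Storm 0.9.0.1 `lib` directory visible later in this thread ships `zookeeper-3.3.3.jar`. A quick way to confirm a conflict is to check which jars on the classpath actually bundle the class. A minimal sketch (the jar paths in the comment are placeholders, not from the original thread):

```python
import zipfile

def jars_providing(jar_paths, class_entry):
    """Return the jars that bundle the given .class entry.

    More than one hit (e.g. the shaded topology jar plus Storm's own
    lib/zookeeper-3.3.3.jar) means the JVM may load whichever version
    comes first on the classpath -- the usual cause of a
    NoSuchMethodError like the one in the log above.
    """
    hits = []
    for path in jar_paths:
        with zipfile.ZipFile(path) as jar:
            if class_entry in jar.namelist():
                hits.append(path)
    return hits

# Hypothetical usage:
# jars_providing(
#     ["target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar",
#      "/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar"],
#     "org/apache/zookeeper/ZooKeeper.class")
```

If both the shaded topology jar and Storm's bundled jar report the class, excluding ZooKeeper from the topology jar (as the pom in this thread already attempts for the Kafka dependency) is the usual fix.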
>>>> On Aug 5, 2014, at 10:26 AM, P. Taylor Goetz <pt...@gmail.com> wrote:
>>>>
>>>>> You are only sleeping for 100 milliseconds before shutting down the local cluster, which is probably not long enough for the topology to come up and start processing messages. Try increasing the sleep time to something like 10 seconds.
>>>>>
>>>>> You can also reduce startup time with the following JVM flag:
>>>>>
>>>>> -Djava.net.preferIPv4Stack=true
>>>>>
>>>>> - Taylor
>>>>>
>>>>> On Aug 5, 2014, at 1:16 PM, Sa Li <sa...@gmail.com> wrote:
>>>>>
>>>>>> Sorry, the stormTopology:
>>>>>>
>>>>>>> TridentTopology topology = new TridentTopology();
>>>>>>> BrokerHosts zk = new ZkHosts("localhost");
>>>>>>> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "topictest");
>>>>>>> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>>>>>> OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Aug 5, 2014, at 9:56 AM, Sa Li <sa...@gmail.com> wrote:
>>>>>>
>>>>>>> Thank you very much, Marcelo, that indeed worked; now I can run my code without errors. However, another thing keeps bothering me. The following is my code:
>>>>>>>
>>>>>>> public static class PrintStream implements Filter {
>>>>>>>
>>>>>>>     @SuppressWarnings("rawtypes")
>>>>>>>     @Override
>>>>>>>     public void prepare(Map conf, TridentOperationContext context) {
>>>>>>>     }
>>>>>>>
>>>>>>>     @Override
>>>>>>>     public void cleanup() {
>>>>>>>     }
>>>>>>>
>>>>>>>     @Override
>>>>>>>     public boolean isKeep(TridentTuple tuple) {
>>>>>>>         System.out.println(tuple);
>>>>>>>         return true;
>>>>>>>     }
>>>>>>> }
>>>>>>>
>>>>>>> public static StormTopology buildTopology(LocalDRPC drpc) throws IOException {
>>>>>>>
>>>>>>>     TridentTopology topology = new TridentTopology();
>>>>>>>     BrokerHosts zk = new ZkHosts("localhost");
>>>>>>>     TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "ingest_test");
>>>>>>>     spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>>>>>>     OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>>>>>>>
>>>>>>>     topology.newStream("kafka", spout)
>>>>>>>             .each(new Fields("str"), new PrintStream());
>>>>>>>
>>>>>>>     return topology.build();
>>>>>>> }
>>>>>>>
>>>>>>> public static void main(String[] args) throws Exception {
>>>>>>>
>>>>>>>     Config conf = new Config();
>>>>>>>     conf.setDebug(true);
>>>>>>>     conf.setMaxSpoutPending(1);
>>>>>>>     conf.setMaxTaskParallelism(3);
>>>>>>>     LocalDRPC drpc = new LocalDRPC();
>>>>>>>     LocalCluster cluster = new LocalCluster();
>>>>>>>     cluster.submitTopology("kafka", conf, buildTopology(drpc));
>>>>>>>     Thread.sleep(100);
>>>>>>>     cluster.shutdown();
>>>>>>> }
>>>>>>>
>>>>>>> What I expect is quite simple: print out the messages I collect from a Kafka producer playback process that is running separately. The topic is listed as:
>>>>>>>
>>>>>>> root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper localhost:2181
>>>>>>> topic: topictest partition: 0 leader: 1 replicas: 1,3,2 isr: 1,3,2
>>>>>>> topic: topictest partition: 1 leader: 2 replicas: 2,1,3 isr: 2,1,3
>>>>>>> topic: topictest partition: 2 leader: 3 replicas: 3,2,1 isr: 3,2,1
>>>>>>> topic: topictest partition: 3 leader: 1 replicas: 1,2,3 isr: 1,2,3
>>>>>>> topic: topictest partition: 4 leader: 2 replicas: 2,3,1 isr: 2,3,1
>>>>>>>
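[Editor's note] Output like the `kafka-list-topic.sh` listing above is easy to sanity-check programmatically, e.g. to confirm every partition has a leader and a full ISR before blaming the consumer. A small sketch (not part of the original thread; the format string matches the listing above):

```python
import re

def parse_topic_listing(text):
    """Parse `kafka-list-topic.sh` output lines of the form
    'topic: topictest partition: 0 leader: 1 replicas: 1,3,2 isr: 1,3,2'
    into one dict per partition."""
    pattern = re.compile(
        r"topic:\s*(\S+)\s+partition:\s*(\d+)\s+leader:\s*(-?\d+)"
        r"\s+replicas:\s*([\d,]+)\s+isr:\s*([\d,]*)")
    rows = []
    for m in pattern.finditer(text):
        rows.append({
            "topic": m.group(1),
            "partition": int(m.group(2)),
            "leader": int(m.group(3)),
            "replicas": [int(x) for x in m.group(4).split(",")],
            "isr": [int(x) for x in m.group(5).split(",") if x],
        })
    return rows
```

With the listing above as input, each of the five partitions parses to a record with a valid leader, so the topic itself is healthy and the problem lies on the consuming side.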
>>>>>>> When I run the code, this is what I see on the screen; there seems to be no error, but no messages are printed out either:
>>>>>>>
>>>>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>>>>> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>>>> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>>>>>> Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1 -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file= -cp /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.0.
1/lib/guava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/etc/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>>>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>>>>> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>>>> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>>>>>> 1113 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>>>>>> 1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>>>> 1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
>>>>>>> 1237 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>> 1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>>> 1350 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>> 1417 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>> 1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>>> 1482 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>> 1484 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>> 1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>>> 1540 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>> 1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>>>> 1576 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>> 1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>>> 1590 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>> 1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
>>>>>>> 1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>>>> 1638 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>> 1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>>> 1690 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>> 1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
>>>>>>> 1944 [main] INFO backtype.storm.daemon.nimbus - Received topology submission for kafka with conf {"topology.max.task.parallelism" nil, "topology.acker.executors" nil, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id" "kafka-1-1407257070", "topology.debug" true}
>>>>>>> 1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka: kafka-1-1407257070
>>>>>>> 2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 3] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
>>>>>>> 2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment for topology id kafka-1-1407257070: #backtype.storm.daemon.common.Assignment{:master-code-dir "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070", :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"}, :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3] 1407257070}}
>>>>>>> 2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
>>>>>>> 2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
>>>>>>> 2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
>>>>>>> 2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
>>>>>>> 2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
>>>>>>> 2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
>>>>>>> 2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
>>>>>>> 2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
>>>>>>> 2256 [main] INFO backtype.storm.testing - Shutting down in process zookeeper
>>>>>>> 2257 [main] INFO backtype.storm.testing - Done shutting down in process zookeeper
>>>>>>> 2258 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
>>>>>>> 2259 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>>>>>> 2260 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
>>>>>>> 2261 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
>>>>>>>
>>>>>>> Can anyone help me locate the problem? I need to get past this step so I can replace .each(printStream()) with other functions.
>>>>>>>
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>> Alec
>>>>>>>
>>>>>>> On Aug 4, 2014, at 4:24 AM, Marcelo Valle <mv...@redoop.org> wrote:
>>>>>>>
>>>>>>>> hello,
>>>>>>>>
>>>>>>>> You can inspect your application jar with the `jar tf` command to see whether the class kafka/api/OffsetRequest.class is included.
>>>>>>>> If it is not, try copying kafka_2.9.2-0.8.0.jar (or whichever version you are using) into Storm's lib directory.
>>>>>>>>
>>>>>>>> Marcelo
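Marcelo's check can be scripted. The sketch below uses the JDK's `jar` tool as he suggests; the jar path is the one from this thread, and `check_class` is a hypothetical helper name, not a Storm utility:

```shell
# Sketch: grep a jar's table of contents for a class (needs the JDK's `jar`
# tool on PATH). check_class is a hypothetical helper name.
check_class() {
  jar tf "$1" 2>/dev/null | grep -q "$2" \
    && echo "FOUND: $2" \
    || echo "MISSING: $2 -- repack the jar or copy the kafka jar into storm/lib"
}

check_class target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar \
            kafka/api/OffsetRequest.class
```

If the class is missing, either fix the Maven packaging or fall back to dropping the Kafka jar into Storm's lib directory, as suggested above.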
>>>>>>>>
>>>>>>>>
>>>>>>>> 2014-07-31 23:33 GMT+02:00 Sa Li <sa...@gmail.com>:
>>>>>>>> Hi all,
>>>>>>>>
>>>>>>>> I am running kafka-spout code on the Storm server; the relevant part of the pom is:
>>>>>>>>
>>>>>>>> <dependency>
>>>>>>>> <groupId>org.apache.kafka</groupId>
>>>>>>>> <artifactId>kafka_2.9.2</artifactId>
>>>>>>>> <version>0.8.0</version>
>>>>>>>> <scope>provided</scope>
>>>>>>>>
>>>>>>>> <exclusions>
>>>>>>>> <exclusion>
>>>>>>>> <groupId>org.apache.zookeeper</groupId>
>>>>>>>> <artifactId>zookeeper</artifactId>
>>>>>>>> </exclusion>
>>>>>>>> <exclusion>
>>>>>>>> <groupId>log4j</groupId>
>>>>>>>> <artifactId>log4j</artifactId>
>>>>>>>> </exclusion>
>>>>>>>> </exclusions>
>>>>>>>>
>>>>>>>> </dependency>
>>>>>>>>
>>>>>>>> <!-- Storm-Kafka compiled -->
>>>>>>>>
>>>>>>>> <dependency>
>>>>>>>> <artifactId>storm-kafka</artifactId>
>>>>>>>> <groupId>org.apache.storm</groupId>
>>>>>>>> <version>0.9.2-incubating</version>
>>>>>>>> <scope>compile</scope>
>>>>>>>> </dependency>
>>>>>>>>
>>>>>>>> It builds fine with mvn package, but when I run it:
>>>>>>>> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# storm jar target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>>>>>>>
>>>>>>>>
>>>>>>>> I get this error:
>>>>>>>>
>>>>>>>> 1657 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>>> 1682 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id a66e0c61-a951-4c1b-a43f-3fb0d12cb226 at host DO-mq-dev
>>>>>>>> 1698 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread Thread[main,5,main] died
>>>>>>>> java.lang.NoClassDefFoundError: kafka/api/OffsetRequest
>>>>>>>> at storm.artemis.kafka.KafkaConfig.<init>(KafkaConfig.java:26) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>>> at storm.artemis.kafka.trident.TridentKafkaConfig.<init>(TridentKafkaConfig.java:13) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>>> at storm.artemis.KafkaConsumerTopology.buildTopology(KafkaConsumerTopology.java:115) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>>> at storm.artemis.KafkaConsumerTopology.main(KafkaConsumerTopology.java:144) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>>> Caused by: java.lang.ClassNotFoundException: kafka.api.OffsetRequest
>>>>>>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_55]
>>>>>>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_55]
>>>>>>>> at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_55]
>>>>>>>> at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[na:1.7.0_55]
>>>>>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_55]
>>>>>>>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[na:1.7.0_55]
>>>>>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_55]
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> I poked around online but could not find a solution. Any ideas?
>>>>>>>>
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>> Alec
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>>
>> CONFIDENTIALITY NOTICE
>> NOTICE: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is confidential, privileged and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any printing, copying, dissemination, distribution, disclosure or forwarding of this communication is strictly prohibited. If you have received this communication in error, please contact the sender immediately and delete it from your system. Thank You.
>
>
>
>
> --
> Thanks
> Parth
>
>
Re: kafka-spout running error
Posted by Kushan Maskey <ku...@mmillerassociates.com>.
You need to include kafka_2.10-0.8.1.1.jar in your project jar. I had
this issue, and including it resolved it.
--
Kushan Maskey
817.403.7500
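The advice above maps to a pom.xml change: the kafka dependency shown earlier in this thread was marked `provided`, which keeps it out of the jar-with-dependencies. A minimal sketch of a compile-scoped declaration follows; the artifact and version come from this reply, and the exclusions mirror the original pom, so adjust all three to your own cluster:

```xml
<!-- Sketch: declare kafka with compile scope so it is packed into the
     jar-with-dependencies. Artifact/version follow Kushan's reply and may
     need adjusting for your Kafka build. -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.10</artifactId>
  <version>0.8.1.1</version>
  <scope>compile</scope>
  <exclusions>
    <!-- same exclusions as the original pom, to avoid clashing with Storm -->
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
    <exclusion>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```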
On Tue, Aug 5, 2014 at 3:57 PM, Parth Brahmbhatt <
pbrahmbhatt@hortonworks.com> wrote:
> I see a NoSuchMethodError; it looks like there is an issue with how your jar
> is packed. Can you confirm that the zookeeper dependency is packed in your
> jar? What versions of Curator and ZooKeeper are you using?
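One way to answer the version question is to ask Maven which ZooKeeper and Curator artifacts it actually resolves. This is a sketch with a hypothetical helper name; run it from the project root, and it degrades gracefully where `mvn` is not on PATH:

```shell
# Sketch: print the zookeeper/curator versions Maven resolves for this build.
# print_zk_versions is a hypothetical helper name.
print_zk_versions() {
  if command -v mvn >/dev/null 2>&1; then
    mvn -q dependency:tree \
      -Dincludes=org.apache.zookeeper,com.netflix.curator,org.apache.curator \
      || echo "mvn failed (is there a pom.xml in this directory?)"
  else
    echo "mvn not found on PATH"
  fi
}

print_zk_versions
```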
>
> Thanks
> Parth
>
>
> On Tue, Aug 5, 2014 at 1:45 PM, Sa Li <sa...@gmail.com> wrote:
>
>> Thanks, Parth. I increased the sleep time with Thread.sleep(150000000), but
>> I still get the async problem; it seems to be an issue with reading the
>> Kafka topic from ZooKeeper.
>>
>> at
>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674)
>> ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401)
>> ~[storm-core-0.9.0.1.jar:na]
>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>> 3100 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.daemon.executor
>> -
>> java.lang.NoSuchMethodError:
>> org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>> at
>> org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674)
>> ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401)
>> ~[storm-core-0.9.0.1.jar:na]
>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>> 3101 [Thread-29-$mastercoord-bg0] INFO
>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>> 3114 [Thread-10] INFO backtype.storm.daemon.worker - Worker has topology
>> config {"storm.id" "kafka-1-1407271290", "dev.zookeeper.path"
>> "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil,
>> "topology.builtin.metrics.bucket.size.secs" 60,
>> "topology.fall.back.on.java.serialization" true,
>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>> "topology.skip.missing.kryo.registrations" true,
>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>> "topology.trident.batch.emit.interval.millis" 50,
>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>> "/tmp/0610cc80-25a7-4304-acf0-9ead5f942429",
>> "storm.messaging.netty.buffer_size" 5242880,
>> "supervisor.worker.start.timeout.secs" 120,
>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>> "/transactional", "topology.acker.executors" nil,
>> "topology.kryo.decorators" (), "topology.name" "kafka",
>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>> "supervisor.heartbeat.frequency.secs" 5,
>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>> "topology.spout.wait.strategy"
>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>> 1, "storm.zookeeper.retry.interval" 1000, "
>> topology.sleep.spout.wait.strategy.time.ms" 1,
>> "nimbus.topology.validator"
>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>> (4 5 6), "topology.debug" true, "nimbus.task.launch.secs" 120,
>> "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register"
>> {"storm.trident.topology.TransactionAttempt" nil},
>> "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10,
>> "topology.workers" 1, "supervisor.childopts" "-Xmx256m",
>> "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
>> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
>> "backtype.storm.serialization.types.ListDelegateSerializer",
>> "topology.disruptor.wait.strategy"
>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>> 5, "storm.thrift.transport"
>> "backtype.storm.security.auth.SimpleTransportPlugin",
>> "topology.state.synchronization.timeout.secs" 60,
>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000,
>> "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port"
>> 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local",
>> "topology.optimize" true, "topology.max.task.parallelism" 3}
>> 3115 [Thread-10] INFO backtype.storm.daemon.worker - Worker
>> ee9ec3b6-5e13-4329-b12a-c3cffdd7e997 for storm kafka-1-1407271290 on
>> 3aff208c-d065-448d-9026-bf452151d546:4 has finished loading
>> 3207 [Thread-25-spout0] INFO backtype.storm.util - Halting process:
>> ("Worker died")
>>
>> Thanks
>>
>> Alec
>>
>>
>> On Aug 5, 2014, at 1:32 PM, Parth Brahmbhatt <pb...@hortonworks.com>
>> wrote:
>>
>> Can you let the topology run for 120 seconds or so? In my experience the
>> Kafka bolt/spout has a lot of initial latency as it reads from and writes to
>> ZooKeeper and initializes connections. On my Mac it takes about 15 seconds
>> before the spout actually opens.
>>
>> Thanks
>> Parth
>> On Aug 5, 2014, at 1:11 PM, Sa Li <sa...@gmail.com> wrote:
>>
>> If I set the sleep time to 1000 milliseconds, I get this error:
>>
>> 3067 [main] INFO backtype.storm.testing - Deleting temporary path
>> /tmp/0f1851f1-9499-48a5-817e-41712921d054
>> 3163 [Thread-10-EventThread] INFO
>> com.netflix.curator.framework.state.ConnectionStateManager - State change:
>> SUSPENDED
>> 3163 [ConnectionStateManager-0] WARN
>> com.netflix.curator.framework.state.ConnectionStateManager - There are no
>> ConnectionStateListeners registered.
>> 3164 [Thread-10-EventThread] WARN backtype.storm.cluster - Received
>> event :disconnected::none: with disconnected Zookeeper.
>> 3636 [Thread-10-SendThread(localhost:2000)] WARN
>> org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server
>> null, unexpected error, closing socket connection and attempting reconnect
>> java.net.ConnectException: Connection refused
>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>> ~[na:1.7.0_55]
>> at
>> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
>> ~[na:1.7.0_55]
>> at
>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
>> ~[zookeeper-3.3.3.jar:3.3.3-1073969]
>> 4877 [Thread-10-SendThread(localhost:2000)] WARN
>> org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server
>> null, unexpected error, closing socket connection and attempting reconnect
>> java.net.ConnectException: Connection refused
>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>> ~[na:1.7.0_55]
>> at
>> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
>> ~[na:1.7.0_55]
>> at
>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
>> ~[zookeeper-3.3.3.jar:3.3.3-1073969]
>> 5566 [Thread-10-SendThread(localhost:2000)] WARN
>> org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server
>> null, unexpected error, closing socket connection and attempting reconnect
>> java.net.ConnectException: Connection refused
>>
>> It seems it is not even connecting to ZooKeeper. Is there a way to confirm
>> the ZooKeeper connection?
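One quick probe for the question above is ZooKeeper's "ruok" four-letter command: a healthy server answers "imok". The sketch below sends it with netcat; `zk_ok` is a hypothetical helper, and host/port are placeholders (the thread's local-mode cluster listens on port 2000):

```shell
# Sketch: send ZooKeeper's "ruok" four-letter command; a live server replies
# "imok". zk_ok is a hypothetical helper name; adjust host/port to your setup.
zk_ok() {
  resp=$( (echo ruok | nc -w 2 "$1" "$2") 2>/dev/null )
  if [ "$resp" = "imok" ]; then
    echo "zookeeper at $1:$2 is up"
  else
    echo "no answer from $1:$2"
  fi
}

zk_ok localhost 2000
```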
>>
>> Thanks a lot
>>
>> Alec
>>
>> On Aug 5, 2014, at 12:58 PM, Sa Li <sa...@gmail.com> wrote:
>>
>> Thank you very much for your reply, Taylor. I tried increasing the sleep
>> time to 1 second and then 10 seconds, but I still get the error below; it
>> seems to be an async loop error. Any idea about that?
>>
>> 3053 [Thread-19-$spoutcoord-spout0] INFO
>> org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
>> 3058 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.util - Async
>> loop died!
>> java.lang.NoSuchMethodError:
>> org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>> at
>> org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.ConnectionState.reset(ConnectionState.java:219)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.ConnectionState.start(ConnectionState.java:103)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674)
>> ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401)
>> ~[storm-core-0.9.0.1.jar:na]
>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>> 3058 [Thread-25-spout0] ERROR backtype.storm.util - Async loop died!
>> java.lang.NoSuchMethodError:
>> org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>> at
>> org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.ConnectionState.reset(ConnectionState.java:219)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.ConnectionState.start(ConnectionState.java:103)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674)
>> ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401)
>> ~[storm-core-0.9.0.1.jar:na]
>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>> 3059 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.daemon.executor -
>> java.lang.NoSuchMethodError:
>> org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>> at
>> org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.ConnectionState.reset(ConnectionState.java:219)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.ConnectionState.start(ConnectionState.java:103)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674)
>> ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401)
>> ~[storm-core-0.9.0.1.jar:na]
>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>> 3059 [Thread-25-spout0] ERROR backtype.storm.daemon.executor -
>> java.lang.NoSuchMethodError:
>> org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>> at
>> org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.ConnectionState.reset(ConnectionState.java:219)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.ConnectionState.start(ConnectionState.java:103)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214)
>> ~[storm-core-0.9.0.1.jar:na]
>> at
>> backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674)
>> ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401)
>> ~[storm-core-0.9.0.1.jar:na]
>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>> 3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker has topology
>> config {"storm.id" "kafka-1-1407268492", "dev.zookeeper.path"
>> "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil,
>> "topology.builtin.metrics.bucket.size.secs" 60,
>> "topology.fall.back.on.java.serialization" true,
>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>> "topology.skip.missing.kryo.registrations" true,
>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>> "topology.trident.batch.emit.interval.millis" 50,
>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>> "/tmp/ca948198-69df-440b-8acb-6dfc4db6c288",
>> "storm.messaging.netty.buffer_size" 5242880,
>> "supervisor.worker.start.timeout.secs" 120,
>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>> "/transactional", "topology.acker.executors" nil,
>> "topology.kryo.decorators" (), "topology.name" "kafka",
>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>> "supervisor.heartbeat.frequency.secs" 5,
>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>> "topology.spout.wait.strategy"
>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>> nil, "storm.zookeeper.retry.interval" 1000, "
>> topology.sleep.spout.wait.strategy.time.ms" 1,
>> "nimbus.topology.validator"
>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>> (1 2 3), "topology.debug" true, "nimbus.task.launch.secs" 120,
>> "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register"
>> {"storm.trident.topology.TransactionAttempt" nil},
>> "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10,
>> "topology.workers" 1, "supervisor.childopts" "-Xmx256m",
>> "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
>> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
>> "backtype.storm.serialization.types.ListDelegateSerializer",
>> "topology.disruptor.wait.strategy"
>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>> 5, "storm.thrift.transport"
>> "backtype.storm.security.auth.SimpleTransportPlugin",
>> "topology.state.synchronization.timeout.secs" 60,
>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000,
>> "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port"
>> 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local",
>> "topology.optimize" true, "topology.max.task.parallelism" nil}
>> 3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker
>> 64335058-7f94-447f-bc0a-5107084789a0 for storm kafka-1-1407268492 on
>> cf2964b3-7655-4a33-88a1-f6e0ceb6f9ed:1 has finished loading
>> 3164 [Thread-29-$mastercoord-bg0] INFO
>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>> 3173 [Thread-25-spout0] INFO backtype.storm.util - Halting process:
>> ("Worker died")
>> 3173 [Thread-19-$spoutcoord-spout0] INFO backtype.storm.util - Halting
>> process: ("Worker died")
>>
>> Thanks
>>
>> Alec
>>
>> On Aug 5, 2014, at 10:26 AM, P. Taylor Goetz <pt...@gmail.com> wrote:
>>
>> You are only sleeping for 100 milliseconds before shutting down the local
>> cluster, which is probably not long enough for the topology to come up and
>> start processing messages. Try increasing the sleep time to something like
>> 10 seconds.
>>
>> You can also reduce startup time with the following JVM flag:
>>
>> -Djava.net.preferIPv4Stack=true
>>
>> - Taylor
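
In other words, the fix is to widen the window between cluster.submitTopology(...) and cluster.shutdown(). A minimal pure-JDK sketch of that timing pattern (the ShutdownTiming class and its submit/shutdown runnables are illustrative stand-ins for the LocalCluster calls, which are omitted so the snippet stays self-contained and Storm-free):

```java
public class ShutdownTiming {
    /**
     * Submit a topology, let it run for runMs milliseconds, then shut it down.
     * With Storm's LocalCluster, submit/shutdown would wrap
     * cluster.submitTopology(...) and cluster.shutdown(); here they are plain
     * Runnables so this sketch carries no Storm dependency.
     */
    public static void runFor(Runnable submit, Runnable shutdown, long runMs)
            throws InterruptedException {
        submit.run();
        Thread.sleep(runMs); // e.g. 10_000 instead of 100, per the advice above
        shutdown.run();
    }

    public static void main(String[] args) throws InterruptedException {
        runFor(() -> System.out.println("submitted"),
               () -> System.out.println("shut down"),
               200);
    }
}
```

A fixed sleep is the quick fix suggested here; for tests that must not flake, signaling completion (e.g. via a CountDownLatch) is generally more robust than guessing a duration.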
>>
>> On Aug 5, 2014, at 1:16 PM, Sa Li <sa...@gmail.com> wrote:
>>
>> Sorry, here is the StormTopology:
>>
>> TridentTopology topology = new TridentTopology();
>> BrokerHosts zk = new ZkHosts("localhost");
>> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk,
>> "topictest");
>> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>> OpaqueTridentKafkaSpout spout = new
>> OpaqueTridentKafkaSpout(spoutConf);
>>
>>
>>
>>
>>
>> On Aug 5, 2014, at 9:56 AM, Sa Li <sa...@gmail.com> wrote:
>>
>> Thank you very much, Marcelo, it indeed worked; now I can run my code
>> without getting the error. However, another thing keeps bothering me.
>> Following is my code:
>>
>> public static class PrintStream implements Filter {
>>
>> @SuppressWarnings("rawtypes")
>> @Override
>> public void prepare(Map conf, TridentOperationContext context) {
>> }
>> @Override
>> public void cleanup() {
>> }
>> @Override
>> public boolean isKeep(TridentTuple tuple) {
>> System.out.println(tuple);
>> return true;
>> }
>> }
>> public static StormTopology buildTopology(LocalDRPC drpc) throws IOException
>> {
>>
>> TridentTopology topology = new TridentTopology();
>> BrokerHosts zk = new ZkHosts("localhost");
>> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk,
>> "ingest_test");
>> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>> OpaqueTridentKafkaSpout spout = new
>> OpaqueTridentKafkaSpout(spoutConf);
>>
>> topology.newStream("kafka", spout)
>> .each(new Fields("str"),
>> new PrintStream()
>> );
>>
>> return topology.build();
>> }
>> public static void main(String[] args) throws Exception {
>>
>> Config conf = new Config();
>> conf.setDebug(true);
>> conf.setMaxSpoutPending(1);
>> conf.setMaxTaskParallelism(3);
>> LocalDRPC drpc = new LocalDRPC();
>> LocalCluster cluster = new LocalCluster();
>> cluster.submitTopology("kafka", conf, buildTopology(drpc));
>>
>> Thread.sleep(100);
>> cluster.shutdown();
>> }
>>
>> What I expect is quite simple: print out the messages I collect from a
>> kafka producer playback process which is running separately. The topic is
>> listed as:
>>
>> root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper
>> localhost:2181
>> topic: topictest partition: 0 leader: 1 replicas: 1,3,2
>> isr: 1,3,2
>> topic: topictest partition: 1 leader: 2 replicas: 2,1,3
>> isr: 2,1,3
>> topic: topictest partition: 2 leader: 3 replicas: 3,2,1
>> isr: 3,2,1
>> topic: topictest partition: 3 leader: 1 replicas: 1,2,3
>> isr: 1,2,3
>> topic: topictest partition: 4 leader: 2 replicas: 2,3,1
>> isr: 2,3,1
>>
>> When I run the code, this is what I see on the screen; there seems to be
>> no error, but no messages are printed out either:
>>
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in
>> [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in
>> [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> explanation.
>> Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1
>> -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file=
>> -cp
>> /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.0.1/lib/guava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/e
tc/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin
>> -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar
>> storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in
>> [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in
>> [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> explanation.
>> 1113 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper
>> at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>> 1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with
>> conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>> "topology.tick.tuple.freq.secs" nil,
>> "topology.builtin.metrics.bucket.size.secs" 60,
>> "topology.fall.back.on.java.serialization" true,
>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>> "topology.skip.missing.kryo.registrations" true,
>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>> "topology.trident.batch.emit.interval.millis" 50,
>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>> "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9",
>> "storm.messaging.netty.buffer_size" 5242880,
>> "supervisor.worker.start.timeout.secs" 120,
>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>> "/transactional", "topology.acker.executors" nil,
>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>> "supervisor.heartbeat.frequency.secs" 5,
>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>> "topology.spout.wait.strategy"
>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>> nil, "storm.zookeeper.retry.interval" 1000, "
>> topology.sleep.spout.wait.strategy.time.ms" 1,
>> "nimbus.topology.validator"
>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>> [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs"
>> 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs"
>> 30, "task.refresh.poll.secs" 10, "topology.workers" 1,
>> "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627,
>> "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1,
>> "topology.tuple.serializer"
>> "backtype.storm.serialization.types.ListDelegateSerializer",
>> "topology.disruptor.wait.strategy"
>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>> 5, "storm.thrift.transport"
>> "backtype.storm.security.auth.SimpleTransportPlugin",
>> "topology.state.synchronization.timeout.secs" 60,
>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000,
>> "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port"
>> 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local",
>> "topology.optimize" true, "topology.max.task.parallelism" nil}
>> 1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
>> 1237 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>> update: :connected:none
>> 1350 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1417 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>> update: :connected:none
>> 1482 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1484 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>> update: :connected:none
>> 1540 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor
>> with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>> "topology.tick.tuple.freq.secs" nil,
>> "topology.builtin.metrics.bucket.size.secs" 60,
>> "topology.fall.back.on.java.serialization" true,
>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>> "topology.skip.missing.kryo.registrations" true,
>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>> "topology.trident.batch.emit.interval.millis" 50,
>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>> "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388",
>> "storm.messaging.netty.buffer_size" 5242880,
>> "supervisor.worker.start.timeout.secs" 120,
>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>> "/transactional", "topology.acker.executors" nil,
>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>> "supervisor.heartbeat.frequency.secs" 5,
>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>> "topology.spout.wait.strategy"
>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>> nil, "storm.zookeeper.retry.interval" 1000, "
>> topology.sleep.spout.wait.strategy.time.ms" 1,
>> "nimbus.topology.validator"
>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>> (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120,
>> "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30,
>> "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts"
>> "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
>> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
>> "backtype.storm.serialization.types.ListDelegateSerializer",
>> "topology.disruptor.wait.strategy"
>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>> 5, "storm.thrift.transport"
>> "backtype.storm.security.auth.SimpleTransportPlugin",
>> "topology.state.synchronization.timeout.secs" 60,
>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000,
>> "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port"
>> 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local",
>> "topology.optimize" true, "topology.max.task.parallelism" nil}
>> 1576 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>> update: :connected:none
>> 1590 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor
>> with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
>> 1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor
>> with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>> "topology.tick.tuple.freq.secs" nil,
>> "topology.builtin.metrics.bucket.size.secs" 60,
>> "topology.fall.back.on.java.serialization" true,
>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>> "topology.skip.missing.kryo.registrations" true,
>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>> "topology.trident.batch.emit.interval.millis" 50,
>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>> "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912",
>> "storm.messaging.netty.buffer_size" 5242880,
>> "supervisor.worker.start.timeout.secs" 120,
>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>> "/transactional", "topology.acker.executors" nil,
>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>> "supervisor.heartbeat.frequency.secs" 5,
>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>> "topology.spout.wait.strategy"
>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>> nil, "storm.zookeeper.retry.interval" 1000, "
>> topology.sleep.spout.wait.strategy.time.ms" 1,
>> "nimbus.topology.validator"
>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>> (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120,
>> "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30,
>> "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts"
>> "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
>> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
>> "backtype.storm.serialization.types.ListDelegateSerializer",
>> "topology.disruptor.wait.strategy"
>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>> 5, "storm.thrift.transport"
>> "backtype.storm.security.auth.SimpleTransportPlugin",
>> "topology.state.synchronization.timeout.secs" 60,
>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000,
>> "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port"
>> 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local",
>> "topology.optimize" true, "topology.max.task.parallelism" nil}
>> 1638 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>> update: :connected:none
>> 1690 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor
>> with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
>> 1944 [main] INFO backtype.storm.daemon.nimbus - Received topology
>> submission for kafka with conf {"topology.max.task.parallelism" nil,
>> "topology.acker.executors" nil, "topology.kryo.register"
>> {"storm.trident.topology.TransactionAttempt" nil},
>> "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id"
>> "kafka-1-1407257070", "topology.debug" true}
>> 1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka:
>> kafka-1-1407257070
>> 2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available
>> slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1]
>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 2]
>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 3]
>> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4]
>> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5]
>> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
>> 2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment
>> for topology id kafka-1-1407257070:
>> #backtype.storm.daemon.common.Assignment{:master-code-dir
>> "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070",
>> :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"},
>> :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5
>> 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4]
>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2]
>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1]
>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1
>> 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3]
>> 1407257070}}
>> 2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
>> 2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
>> 2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down
>> supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
>> 2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
>> 2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
>> 2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down
>> supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
>> 2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
>> 2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
>> 2256 [main] INFO backtype.storm.testing - Shutting down in process
>> zookeeper
>> 2257 [main] INFO backtype.storm.testing - Done shutting down in process
>> zookeeper
>> 2258 [main] INFO backtype.storm.testing - Deleting temporary path
>> /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
>> 2259 [main] INFO backtype.storm.testing - Deleting temporary path
>> /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>> 2260 [main] INFO backtype.storm.testing - Deleting temporary path
>> /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
>> 2261 [main] INFO backtype.storm.testing - Deleting temporary path
>> /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
>>
>> Can anyone help me locate what the problem is? I really need to walk
>> through this step in order to be able to replace .each(new PrintStream())
>> with other functions.
>>
>>
>> Thanks
>>
>> Alec
>>
>> On Aug 4, 2014, at 4:24 AM, Marcelo Valle <mv...@redoop.org> wrote:
>>
>> hello,
>>
>> you can check your .jar application with command " jar tf " to see if
>> class kafka/api/OffsetRequest.class is part of the jar.
>> If not you can try to copy kafka-2.9.2-0.8.0.jar (or version you are
>> using) in storm_lib directory
>>
>> Marcelo
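
Besides the `jar tf` command-line check, the same verification can be scripted with the JDK's java.util.jar API. A hedged sketch (the JarCheck class and containsClass method are illustrative names, not from this thread):

```java
import java.io.IOException;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class JarCheck {
    /** Return true if the jar at jarPath contains the given entry, e.g. "kafka/api/OffsetRequest.class". */
    public static boolean containsClass(String jarPath, String entryName) throws IOException {
        try (JarFile jar = new JarFile(jarPath)) {
            JarEntry entry = jar.getJarEntry(entryName);
            return entry != null;
        }
    }

    public static void main(String[] args) throws IOException {
        // usage: java JarCheck <path-to-jar> <entry-name>
        System.out.println(containsClass(args[0], args[1]));
    }
}
```

If the class is missing, it means the kafka dependency was marked `provided` (so it is excluded from the fat jar) and either needs to be on Storm's classpath, as Marcelo suggests, or repackaged with `compile` scope.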
>>
>>
>> 2014-07-31 23:33 GMT+02:00 Sa Li <sa...@gmail.com>:
>>
>>> Hi, all
>>>
>>> I am running a kafka-spout code in storm-server, the pom is
>>>
>>> <groupId>org.apache.kafka</groupId>
>>> <artifactId>kafka_2.9.2</artifactId>
>>> <version>0.8.0</version>
>>> <scope>provided</scope>
>>>
>>> <exclusions>
>>> <exclusion>
>>> <groupId>org.apache.zookeeper</groupId>
>>> <artifactId>zookeeper</artifactId>
>>> </exclusion>
>>> <exclusion>
>>> <groupId>log4j</groupId>
>>> <artifactId>log4j</artifactId>
>>> </exclusion>
>>> </exclusions>
>>>
>>> </dependency>
>>>
>>> <!-- Storm-Kafka compiled -->
>>>
>>> <dependency>
>>> <artifactId>storm-kafka</artifactId>
>>> <groupId>org.apache.storm</groupId>
>>> <version>0.9.2-incubating</version>
>>> <scope>*compile*</scope>
>>> </dependency>
>>>
>>> I can mvn package it, but when I run it
>>> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# storm jar
>>> target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar
>>> storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>>
>>>
>>> I am getting such error
>>>
>>> 1657 [main]
>>> INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1682 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor
>>> with id a66e0c61-a951-4c1b-a43f-3fb0d12cb226 at host DO-mq-dev
>>> 1698 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread
>>> Thread[main,5,main] died
>>> java.lang.NoClassDefFoundError: kafka/api/OffsetRequest
>>> at storm.artemis.kafka.KafkaConfig.<init>(KafkaConfig.java:26)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.kafka.trident.TridentKafkaConfig.<init>(TridentKafkaConfig.java:13)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.KafkaConsumerTopology.buildTopology(KafkaConsumerTopology.java:115)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at
>>> storm.artemis.KafkaConsumerTopology.main(KafkaConsumerTopology.java:144)
>>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> Caused by: java.lang.ClassNotFoundException: kafka.api.OffsetRequest
>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>> ~[na:1.7.0_55]
>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>> ~[na:1.7.0_55]
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> ~[na:1.7.0_55]
>>> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>> ~[na:1.7.0_55]
>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_55]
>>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>>> ~[na:1.7.0_55]
>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_55]
>>>
>>>
>>>
>>>
>>> I poked around online but could not find a solution. Any idea what is
>>> going on?
>>>
>>>
>>> Thanks
>>>
>>> Alec
>>>
>>
>> CONFIDENTIALITY NOTICE
>> NOTICE: This message is intended for the use of the individual or entity
>> to which it is addressed and may contain information that is confidential,
>> privileged and exempt from disclosure under applicable law. If the reader
>> of this message is not the intended recipient, you are hereby notified that
>> any printing, copying, dissemination, distribution, disclosure or
>> forwarding of this communication is strictly prohibited. If you have
>> received this communication in error, please contact the sender immediately
>> and delete it from your system. Thank You.
>>
>>
>>
>
>
> --
> Thanks
> Parth
>
>
Re: kafka-spout running error
Posted by Parth Brahmbhatt <pb...@hortonworks.com>.
I see a NoSuchMethodError; it looks like an issue with how your jar is
packaged. Can you confirm that the zookeeper dependency is packed in your
jar? What versions of curator and zookeeper are you using?
Thanks
Parth
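To answer Parth's question, the jar contents and the resolved dependency versions can be inspected directly. A minimal sketch, assuming the fat-jar path from the `storm jar` command earlier in the thread:

```shell
# Inspect the fat jar for the classes the errors complain about.
# The jar path is taken from the storm jar command in this thread.
JAR=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar
if [ -f "$JAR" ]; then
  # NoClassDefFoundError: is the Kafka API class packed in?
  jar tf "$JAR" | grep 'kafka/api/OffsetRequest'
  # NoSuchMethodError: which ZooKeeper classes were bundled?
  jar tf "$JAR" | grep 'org/apache/zookeeper/ZooKeeper.class'
  RESULT=checked
else
  RESULT="jar not found"
fi
echo "$RESULT"
```

`mvn dependency:tree -Dincludes=org.apache.zookeeper` will likewise show which zookeeper version Maven resolved; a NoSuchMethodError on `ZooKeeper.<init>` typically means an old zookeeper (e.g. 3.3.x, as the `zookeeper-3.3.3.jar` frames below suggest) was on the classpath while curator expects a newer one.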
On Tue, Aug 5, 2014 at 1:45 PM, Sa Li <sa...@gmail.com> wrote:
> Thanks, Parth. I increased the sleep time to Thread.sleep(150000000)
> (150,000 seconds), but I still get the async problem; it seems to be an
> issue reading the kafka topic from zookeeper.
>
> at
> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69)
> ~[storm-core-0.9.0.1.jar:na]
> at
> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171)
> ~[storm-core-0.9.0.1.jar:na]
> at
> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20)
> ~[storm-core-0.9.0.1.jar:na]
> at
> storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43)
> ~[storm-core-0.9.0.1.jar:na]
> at
> storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214)
> ~[storm-core-0.9.0.1.jar:na]
> at
> backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674)
> ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401)
> ~[storm-core-0.9.0.1.jar:na]
> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
> 3100 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.daemon.executor -
> java.lang.NoSuchMethodError:
> org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
> at
> org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.ConnectionState.start(ConnectionState.java:103)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27)
> ~[storm-core-0.9.0.1.jar:na]
> at
> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166)
> ~[storm-core-0.9.0.1.jar:na]
> at
> storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38)
> ~[storm-core-0.9.0.1.jar:na]
> at
> backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26)
> ~[storm-core-0.9.0.1.jar:na]
> at
> backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674)
> ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401)
> ~[storm-core-0.9.0.1.jar:na]
> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
> 3101 [Thread-29-$mastercoord-bg0] INFO
> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 3114 [Thread-10] INFO backtype.storm.daemon.worker - Worker has topology
> config {"storm.id" "kafka-1-1407271290", "dev.zookeeper.path"
> "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil,
> "topology.builtin.metrics.bucket.size.secs" 60,
> "topology.fall.back.on.java.serialization" true,
> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
> "topology.skip.missing.kryo.registrations" true,
> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
> "topology.trident.batch.emit.interval.millis" 50,
> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
> "/tmp/0610cc80-25a7-4304-acf0-9ead5f942429",
> "storm.messaging.netty.buffer_size" 5242880,
> "supervisor.worker.start.timeout.secs" 120,
> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
> "/transactional", "topology.acker.executors" nil,
> "topology.kryo.decorators" (), "topology.name" "kafka",
> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
> "supervisor.heartbeat.frequency.secs" 5,
> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
> "topology.spout.wait.strategy"
> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
> 1, "storm.zookeeper.retry.interval" 1000, "
> topology.sleep.spout.wait.strategy.time.ms" 1,
> "nimbus.topology.validator"
> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
> (4 5 6), "topology.debug" true, "nimbus.task.launch.secs" 120,
> "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register"
> {"storm.trident.topology.TransactionAttempt" nil},
> "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10,
> "topology.workers" 1, "supervisor.childopts" "-Xmx256m",
> "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
> "backtype.storm.serialization.types.ListDelegateSerializer",
> "topology.disruptor.wait.strategy"
> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
> 5, "storm.thrift.transport"
> "backtype.storm.security.auth.SimpleTransportPlugin",
> "topology.state.synchronization.timeout.secs" 60,
> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000,
> "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port"
> 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local",
> "topology.optimize" true, "topology.max.task.parallelism" 3}
> 3115 [Thread-10] INFO backtype.storm.daemon.worker - Worker
> ee9ec3b6-5e13-4329-b12a-c3cffdd7e997 for storm kafka-1-1407271290 on
> 3aff208c-d065-448d-9026-bf452151d546:4 has finished loading
> 3207 [Thread-25-spout0] INFO backtype.storm.util - Halting process:
> ("Worker died")
>
> Thanks
>
> Alec
>
>
> On Aug 5, 2014, at 1:32 PM, Parth Brahmbhatt <pb...@hortonworks.com>
> wrote:
>
> Can you let the topology run for 120 seconds or so? In my experience the
> kafka bolt/spout has a lot of latency initially as it reads/writes
> zookeeper and initializes connections. On my Mac it takes about 15
> seconds before the spout actually opens.
>
> Thanks
> Parth
> On Aug 5, 2014, at 1:11 PM, Sa Li <sa...@gmail.com> wrote:
>
> If I set the sleep time to 1000 milliseconds, I get this error:
>
> 3067 [main] INFO backtype.storm.testing - Deleting temporary path
> /tmp/0f1851f1-9499-48a5-817e-41712921d054
> 3163 [Thread-10-EventThread] INFO
> com.netflix.curator.framework.state.ConnectionStateManager - State change:
> SUSPENDED
> 3163 [ConnectionStateManager-0] WARN
> com.netflix.curator.framework.state.ConnectionStateManager - There are no
> ConnectionStateListeners registered.
> 3164 [Thread-10-EventThread] WARN backtype.storm.cluster - Received event
> :disconnected::none: with disconnected Zookeeper.
> 3636 [Thread-10-SendThread(localhost:2000)] WARN
> org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server
> null, unexpected error, closing socket connection and attempting reconnect
> java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> ~[na:1.7.0_55]
> at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
> ~[na:1.7.0_55]
> at
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
> ~[zookeeper-3.3.3.jar:3.3.3-1073969]
> 4877 [Thread-10-SendThread(localhost:2000)] WARN
> org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server
> null, unexpected error, closing socket connection and attempting reconnect
> java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> ~[na:1.7.0_55]
> at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
> ~[na:1.7.0_55]
> at
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
> ~[zookeeper-3.3.3.jar:3.3.3-1073969]
> 5566 [Thread-10-SendThread(localhost:2000)] WARN
> org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server
> null, unexpected error, closing socket connection and attempting reconnect
> java.net.ConnectException: Connection refused
>
> It seems it never even connects to zookeeper. Is there a way to confirm
> the zookeeper connection?
>
> Thanks a lot
>
> Alec
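One way to confirm whether anything is listening where the client is trying to connect: a healthy ZooKeeper answers the four-letter command `ruok` with `imok`. A sketch, assuming the localhost:2000 address shown in the log above:

```shell
# Probe the ZooKeeper port the topology is trying to reach.
# Host and port are taken from the log (the local-mode ZK runs on 2000).
ZK_HOST=localhost
ZK_PORT=2000
if RESP=$(echo ruok | nc -w 2 "$ZK_HOST" "$ZK_PORT" 2>/dev/null); then
  echo "zookeeper replied: $RESP"   # a healthy server answers "imok"
else
  RESP=""
  echo "nothing answered on $ZK_HOST:$ZK_PORT"
fi
```

Note that port 2000 is the in-process ZooKeeper that LocalCluster starts itself, so a refused connection during startup can simply mean the embedded server has not come up yet.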
>
> On Aug 5, 2014, at 12:58 PM, Sa Li <sa...@gmail.com> wrote:
>
> Thank you very much for your reply, Taylor. I tried increasing the sleep
> time to 1 second and then 10 seconds, but I get the error below; it seems
> to be an async-loop error. Any idea about that?
>
> 3053 [Thread-19-$spoutcoord-spout0] INFO
> org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
> 3058 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.util - Async loop
> died!
> java.lang.NoSuchMethodError:
> org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
> at
> org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.ConnectionState.reset(ConnectionState.java:219)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.ConnectionState.start(ConnectionState.java:103)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27)
> ~[storm-core-0.9.0.1.jar:na]
> at
> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166)
> ~[storm-core-0.9.0.1.jar:na]
> at
> storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38)
> ~[storm-core-0.9.0.1.jar:na]
> at
> backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26)
> ~[storm-core-0.9.0.1.jar:na]
> at
> backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674)
> ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401)
> ~[storm-core-0.9.0.1.jar:na]
> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
> 3058 [Thread-25-spout0] ERROR backtype.storm.util - Async loop died!
> java.lang.NoSuchMethodError:
> org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
> at
> org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.ConnectionState.reset(ConnectionState.java:219)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.ConnectionState.start(ConnectionState.java:103)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69)
> ~[storm-core-0.9.0.1.jar:na]
> at
> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171)
> ~[storm-core-0.9.0.1.jar:na]
> at
> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20)
> ~[storm-core-0.9.0.1.jar:na]
> at
> storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43)
> ~[storm-core-0.9.0.1.jar:na]
> at
> storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214)
> ~[storm-core-0.9.0.1.jar:na]
> at
> backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674)
> ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401)
> ~[storm-core-0.9.0.1.jar:na]
> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
> 3059 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.daemon.executor -
> java.lang.NoSuchMethodError:
> org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
> at
> org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.ConnectionState.reset(ConnectionState.java:219)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.ConnectionState.start(ConnectionState.java:103)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27)
> ~[storm-core-0.9.0.1.jar:na]
> at
> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166)
> ~[storm-core-0.9.0.1.jar:na]
> at
> storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38)
> ~[storm-core-0.9.0.1.jar:na]
> at
> backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26)
> ~[storm-core-0.9.0.1.jar:na]
> at
> backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674)
> ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401)
> ~[storm-core-0.9.0.1.jar:na]
> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
> 3059 [Thread-25-spout0] ERROR backtype.storm.daemon.executor -
> java.lang.NoSuchMethodError:
> org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
> at
> org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.ConnectionState.reset(ConnectionState.java:219)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.ConnectionState.start(ConnectionState.java:103)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24)
> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at
> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69)
> ~[storm-core-0.9.0.1.jar:na]
> at
> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171)
> ~[storm-core-0.9.0.1.jar:na]
> at
> storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20)
> ~[storm-core-0.9.0.1.jar:na]
> at
> storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43)
> ~[storm-core-0.9.0.1.jar:na]
> at
> storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214)
> ~[storm-core-0.9.0.1.jar:na]
> at
> backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674)
> ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401)
> ~[storm-core-0.9.0.1.jar:na]
> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
> 3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker has topology
> config {"storm.id" "kafka-1-1407268492", "dev.zookeeper.path"
> "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil,
> "topology.builtin.metrics.bucket.size.secs" 60,
> "topology.fall.back.on.java.serialization" true,
> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
> "topology.skip.missing.kryo.registrations" true,
> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
> "topology.trident.batch.emit.interval.millis" 50,
> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
> "/tmp/ca948198-69df-440b-8acb-6dfc4db6c288",
> "storm.messaging.netty.buffer_size" 5242880,
> "supervisor.worker.start.timeout.secs" 120,
> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
> "/transactional", "topology.acker.executors" nil,
> "topology.kryo.decorators" (), "topology.name" "kafka",
> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
> "supervisor.heartbeat.frequency.secs" 5,
> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
> "topology.spout.wait.strategy"
> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
> nil, "storm.zookeeper.retry.interval" 1000, "
> topology.sleep.spout.wait.strategy.time.ms" 1,
> "nimbus.topology.validator"
> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
> (1 2 3), "topology.debug" true, "nimbus.task.launch.secs" 120,
> "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register"
> {"storm.trident.topology.TransactionAttempt" nil},
> "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10,
> "topology.workers" 1, "supervisor.childopts" "-Xmx256m",
> "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
> "backtype.storm.serialization.types.ListDelegateSerializer",
> "topology.disruptor.wait.strategy"
> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
> 5, "storm.thrift.transport"
> "backtype.storm.security.auth.SimpleTransportPlugin",
> "topology.state.synchronization.timeout.secs" 60,
> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000,
> "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port"
> 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local",
> "topology.optimize" true, "topology.max.task.parallelism" nil}
> 3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker
> 64335058-7f94-447f-bc0a-5107084789a0 for storm kafka-1-1407268492 on
> cf2964b3-7655-4a33-88a1-f6e0ceb6f9ed:1 has finished loading
> 3164 [Thread-29-$mastercoord-bg0] INFO
> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 3173 [Thread-25-spout0] INFO backtype.storm.util - Halting process:
> ("Worker died")
> 3173 [Thread-19-$spoutcoord-spout0] INFO backtype.storm.util - Halting
> process: ("Worker died")
>
> Thanks
>
> Alec
>
> On Aug 5, 2014, at 10:26 AM, P. Taylor Goetz <pt...@gmail.com> wrote:
>
> You are only sleeping for 100 milliseconds before shutting down the local
> cluster, which is probably not long enough for the topology to come up and
> start processing messages. Try increasing the sleep time to something like
> 10 seconds.
>
> You can also reduce startup time with the following JVM flag:
>
> -Djava.net.preferIPv4Stack=true
>
> - Taylor
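In local mode the topology's own JVM is the one that needs the flag, so one way to apply Taylor's suggestion is to launch the main class directly under java. A sketch, with jar and class names taken from the thread; it assumes storm-core is packed into the fat jar, otherwise add it to the classpath:

```shell
# Run the local-mode topology directly under java so the JVM flag applies.
# Jar and class names are taken from this thread; adjust for your build.
JAR=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar
MAIN=storm.artemis.KafkaConsumerTopology
if [ -f "$JAR" ]; then
  java -Djava.net.preferIPv4Stack=true -cp "$JAR" "$MAIN" KafkaConsumerTopology
  STATUS=launched
else
  STATUS="jar missing"
  echo "build the jar first: mvn package"
fi
```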
>
> On Aug 5, 2014, at 1:16 PM, Sa Li <sa...@gmail.com> wrote:
>
> Sorry, here is the storm topology:
>
> TridentTopology topology = new TridentTopology();
> BrokerHosts zk = new ZkHosts("localhost");
> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk,
> “topictest");
> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
> OpaqueTridentKafkaSpout spout = new
> OpaqueTridentKafkaSpout(spoutConf);
>
>
>
>
>
> On Aug 5, 2014, at 9:56 AM, Sa Li <sa...@gmail.com> wrote:
>
> Thank you very much, Marcelo, that indeed worked; now I can run my code
> without errors. However, another thing keeps bothering me. Following is
> my code:
>
> public static class PrintStream implements Filter {
>
> @SuppressWarnings("rawtypes")
> @Override
> public void prepare(Map conf, TridentOperationContext context) {
> }
> @Override
> public void cleanup() {
> }
> @Override
> public boolean isKeep(TridentTuple tuple) {
> System.out.println(tuple);
> return true;
> }
> }
> public static StormTopology buildTopology(LocalDRPC drpc) throws IOException
> {
>
> TridentTopology topology = new TridentTopology();
> BrokerHosts zk = new ZkHosts("localhost");
> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk,
> "ingest_test");
> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
> OpaqueTridentKafkaSpout spout = new
> OpaqueTridentKafkaSpout(spoutConf);
>
> topology.newStream("kafka", spout)
> .each(new Fields("str"),
> new PrintStream()
> );
>
> return topology.build();
> }
> public static void main(String[] args) throws Exception {
>
> Config conf = new Config();
> conf.setDebug(true);
> conf.setMaxSpoutPending(1);
> conf.setMaxTaskParallelism(3);
> LocalDRPC drpc = new LocalDRPC();
> LocalCluster cluster = new LocalCluster();
> cluster.submitTopology("kafka", conf, buildTopology(drpc));
>
> Thread.sleep(100);
> cluster.shutdown();
> }
>
> What I expect is quite simple: print out the messages I collect from a
> kafka producer playback process, which is running separately. The topic is
> listed as:
>
> root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper
> localhost:2181
> topic: topictest partition: 0 leader: 1 replicas: 1,3,2
> isr: 1,3,2
> topic: topictest partition: 1 leader: 2 replicas: 2,1,3
> isr: 2,1,3
> topic: topictest partition: 2 leader: 3 replicas: 3,2,1
> isr: 3,2,1
> topic: topictest partition: 3 leader: 1 replicas: 1,2,3
> isr: 1,2,3
> topic: topictest partition: 4 leader: 2 replicas: 2,3,1
> isr: 2,3,1
>
> When I run the code, this is what I see on the screen; there seems to be
> no error, but no messages are printed out either:
>
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1
> -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file=
> -cp
> /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.0.1/lib/guava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/et
c/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin
> -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar
> storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> 1113 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper
> at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
> 1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf
> {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
> "topology.tick.tuple.freq.secs" nil,
> "topology.builtin.metrics.bucket.size.secs" 60,
> "topology.fall.back.on.java.serialization" true,
> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
> "topology.skip.missing.kryo.registrations" true,
> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
> "topology.trident.batch.emit.interval.millis" 50,
> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
> "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9",
> "storm.messaging.netty.buffer_size" 5242880,
> "supervisor.worker.start.timeout.secs" 120,
> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
> "/transactional", "topology.acker.executors" nil,
> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
> "supervisor.heartbeat.frequency.secs" 5,
> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
> "topology.spout.wait.strategy"
> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
> nil, "storm.zookeeper.retry.interval" 1000, "
> topology.sleep.spout.wait.strategy.time.ms" 1,
> "nimbus.topology.validator"
> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
> [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs"
> 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs"
> 30, "task.refresh.poll.secs" 10, "topology.workers" 1,
> "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627,
> "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1,
> "topology.tuple.serializer"
> "backtype.storm.serialization.types.ListDelegateSerializer",
> "topology.disruptor.wait.strategy"
> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
> 5, "storm.thrift.transport"
> "backtype.storm.security.auth.SimpleTransportPlugin",
> "topology.state.synchronization.timeout.secs" 60,
> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000,
> "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port"
> 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local",
> "topology.optimize" true, "topology.max.task.parallelism" nil}
> 1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
> 1237 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
> update: :connected:none
> 1350 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1417 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
> update: :connected:none
> 1482 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1484 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
> update: :connected:none
> 1540 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor
> with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
> "topology.tick.tuple.freq.secs" nil,
> "topology.builtin.metrics.bucket.size.secs" 60,
> "topology.fall.back.on.java.serialization" true,
> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
> "topology.skip.missing.kryo.registrations" true,
> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
> "topology.trident.batch.emit.interval.millis" 50,
> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
> "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388",
> "storm.messaging.netty.buffer_size" 5242880,
> "supervisor.worker.start.timeout.secs" 120,
> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
> "/transactional", "topology.acker.executors" nil,
> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
> "supervisor.heartbeat.frequency.secs" 5,
> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
> "topology.spout.wait.strategy"
> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
> nil, "storm.zookeeper.retry.interval" 1000, "
> topology.sleep.spout.wait.strategy.time.ms" 1,
> "nimbus.topology.validator"
> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
> (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120,
> "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30,
> "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts"
> "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
> "backtype.storm.serialization.types.ListDelegateSerializer",
> "topology.disruptor.wait.strategy"
> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
> 5, "storm.thrift.transport"
> "backtype.storm.security.auth.SimpleTransportPlugin",
> "topology.state.synchronization.timeout.secs" 60,
> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000,
> "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port"
> 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local",
> "topology.optimize" true, "topology.max.task.parallelism" nil}
> 1576 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
> update: :connected:none
> 1590 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor
> with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
> 1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor
> with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
> "topology.tick.tuple.freq.secs" nil,
> "topology.builtin.metrics.bucket.size.secs" 60,
> "topology.fall.back.on.java.serialization" true,
> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
> "topology.skip.missing.kryo.registrations" true,
> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
> "topology.trident.batch.emit.interval.millis" 50,
> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
> "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912",
> "storm.messaging.netty.buffer_size" 5242880,
> "supervisor.worker.start.timeout.secs" 120,
> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
> "/transactional", "topology.acker.executors" nil,
> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
> "supervisor.heartbeat.frequency.secs" 5,
> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
> "topology.spout.wait.strategy"
> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
> nil, "storm.zookeeper.retry.interval" 1000, "
> topology.sleep.spout.wait.strategy.time.ms" 1,
> "nimbus.topology.validator"
> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
> (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120,
> "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30,
> "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts"
> "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
> "backtype.storm.serialization.types.ListDelegateSerializer",
> "topology.disruptor.wait.strategy"
> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
> 5, "storm.thrift.transport"
> "backtype.storm.security.auth.SimpleTransportPlugin",
> "topology.state.synchronization.timeout.secs" 60,
> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000,
> "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port"
> 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local",
> "topology.optimize" true, "topology.max.task.parallelism" nil}
> 1638 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
> update: :connected:none
> 1690 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor
> with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
> 1944 [main] INFO backtype.storm.daemon.nimbus - Received topology
> submission for kafka with conf {"topology.max.task.parallelism" nil,
> "topology.acker.executors" nil, "topology.kryo.register"
> {"storm.trident.topology.TransactionAttempt" nil},
> "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id"
> "kafka-1-1407257070", "topology.debug" true}
> 1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka:
> kafka-1-1407257070
> 2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available
> slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1]
> ["944e6152-ca58-4d2b-8325-94ac98f43995" 2]
> ["944e6152-ca58-4d2b-8325-94ac98f43995" 3]
> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4]
> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5]
> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
> 2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment
> for topology id kafka-1-1407257070:
> #backtype.storm.daemon.common.Assignment{:master-code-dir
> "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070",
> :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"},
> :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5
> 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4]
> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2]
> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1]
> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1
> 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3]
> 1407257070}}
> 2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
> 2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
> 2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down
> supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
> 2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
> 2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
> 2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down
> supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
> 2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
> 2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
> 2256 [main] INFO backtype.storm.testing - Shutting down in process
> zookeeper
> 2257 [main] INFO backtype.storm.testing - Done shutting down in process
> zookeeper
> 2258 [main] INFO backtype.storm.testing - Deleting temporary path
> /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
> 2259 [main] INFO backtype.storm.testing - Deleting temporary path
> /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
> 2260 [main] INFO backtype.storm.testing - Deleting temporary path
> /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
> 2261 [main] INFO backtype.storm.testing - Deleting temporary path
> /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
>
> Can anyone help me locate the problem? I really need to walk through this
> step in order to be able to replace .each(new PrintStream()) with other
> functions.
>
>
> Thanks
>
> Alec
>
> On Aug 4, 2014, at 4:24 AM, Marcelo Valle <mv...@redoop.org> wrote:
>
> hello,
>
> you can check your .jar application with the command "jar tf" to see if
> the class kafka/api/OffsetRequest.class is part of the jar. If not, you
> can try to copy kafka_2.9.2-0.8.0.jar (or the version you are using) into
> storm's lib directory.
>
> Marcelo
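[Editor's sketch] Marcelo's `jar tf` check can also be done programmatically. The following stdlib-only sketch looks inside a shaded jar for the exact class named in the NoClassDefFoundError; the default jar path is the one from this thread, so adjust it to your own build output.

```java
import java.io.File;
import java.util.zip.ZipFile;

public class JarCheck {
    // True if the jar (a jar is just a zip) contains the class Storm fails to find.
    static boolean hasOffsetRequest(File jar) throws Exception {
        try (ZipFile zf = new ZipFile(jar)) {
            return zf.getEntry("kafka/api/OffsetRequest.class") != null;
        }
    }

    public static void main(String[] args) throws Exception {
        // Default path is the jar mentioned in this thread; pass your own as args[0].
        File jar = new File(args.length > 0 ? args[0]
                : "target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar");
        System.out.println(hasOffsetRequest(jar)
                ? "kafka classes are bundled"
                : "kafka classes missing: copy the kafka jar into storm's lib/ directory");
    }
}
```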
>
>
> 2014-07-31 23:33 GMT+02:00 Sa Li <sa...@gmail.com>:
>
>> Hi, all
>>
>> I am running a kafka-spout code in storm-server, the pom is
>>
>> <dependency>
>> <groupId>org.apache.kafka</groupId>
>> <artifactId>kafka_2.9.2</artifactId>
>> <version>0.8.0</version>
>> <scope>provided</scope>
>>
>> <exclusions>
>> <exclusion>
>> <groupId>org.apache.zookeeper</groupId>
>> <artifactId>zookeeper</artifactId>
>> </exclusion>
>> <exclusion>
>> <groupId>log4j</groupId>
>> <artifactId>log4j</artifactId>
>> </exclusion>
>> </exclusions>
>>
>> </dependency>
>>
>> <!-- Storm-Kafka compiled -->
>>
>> <dependency>
>> <artifactId>storm-kafka</artifactId>
>> <groupId>org.apache.storm</groupId>
>> <version>0.9.2-incubating</version>
>> <scope>*compile*</scope>
>> </dependency>
>>
>> I can mvn package it, but when I run it
>> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# storm jar
>> target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar
>> storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>
>>
>> I am getting this error:
>>
>> 1657 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1682 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor
>> with id a66e0c61-a951-4c1b-a43f-3fb0d12cb226 at host DO-mq-dev
>> 1698 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread
>> Thread[main,5,main] died
>> java.lang.NoClassDefFoundError: kafka/api/OffsetRequest
>> at storm.artemis.kafka.KafkaConfig.<init>(KafkaConfig.java:26)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.trident.TridentKafkaConfig.<init>(TridentKafkaConfig.java:13)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.KafkaConsumerTopology.buildTopology(KafkaConsumerTopology.java:115)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.KafkaConsumerTopology.main(KafkaConsumerTopology.java:144)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> Caused by: java.lang.ClassNotFoundException: kafka.api.OffsetRequest
>> at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_55]
>> at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_55]
>> at java.security.AccessController.doPrivileged(Native Method)
>> ~[na:1.7.0_55]
>> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>> ~[na:1.7.0_55]
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_55]
>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>> ~[na:1.7.0_55]
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_55]
>>
>>
>>
>>
>> I tried to poke around online but could not find a solution for it. Any
>> idea about this?
>>
>>
>> Thanks
>>
>> Alec
>>
>>
>>
>>
>
>
>
>
>
>
>
> CONFIDENTIALITY NOTICE
> NOTICE: This message is intended for the use of the individual or entity
> to which it is addressed and may contain information that is confidential,
> privileged and exempt from disclosure under applicable law. If the reader
> of this message is not the intended recipient, you are hereby notified that
> any printing, copying, dissemination, distribution, disclosure or
> forwarding of this communication is strictly prohibited. If you have
> received this communication in error, please contact the sender immediately
> and delete it from your system. Thank You.
>
>
>
--
Thanks
Parth
Re: kafka-spout running error
Posted by Sa Li <sa...@gmail.com>.
Thanks, Parth. I increased the sleep time to Thread.sleep(150000000), but I still get this async problem; it seems to be a problem with reading the kafka topic from zookeeper.
at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69) ~[storm-core-0.9.0.1.jar:na]
at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171) ~[storm-core-0.9.0.1.jar:na]
at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20) ~[storm-core-0.9.0.1.jar:na]
at storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43) ~[storm-core-0.9.0.1.jar:na]
at storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214) ~[storm-core-0.9.0.1.jar:na]
at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
3100 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.daemon.executor -
java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27) ~[storm-core-0.9.0.1.jar:na]
at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166) ~[storm-core-0.9.0.1.jar:na]
at storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38) ~[storm-core-0.9.0.1.jar:na]
at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26) ~[storm-core-0.9.0.1.jar:na]
at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
3101 [Thread-29-$mastercoord-bg0] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
3114 [Thread-10] INFO backtype.storm.daemon.worker - Worker has topology config {"storm.id" "kafka-1-1407271290", "dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/0610cc80-25a7-4304-acf0-9ead5f942429", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.kryo.decorators" (), "topology.name" "kafka", "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", 
"topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, "storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" 1, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (4 5 6), "topology.debug" true, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" 3}
3115 [Thread-10] INFO backtype.storm.daemon.worker - Worker ee9ec3b6-5e13-4329-b12a-c3cffdd7e997 for storm kafka-1-1407271290 on 3aff208c-d065-448d-9026-bf452151d546:4 has finished loading
3207 [Thread-25-spout0] INFO backtype.storm.util - Halting process: ("Worker died")
Thanks
Alec
On Aug 5, 2014, at 1:32 PM, Parth Brahmbhatt <pb...@hortonworks.com> wrote:
> Can you let the topology run for 120 seconds or so? In my experience the Kafka bolt/spout incurs a lot of latency initially as it reads from and writes to ZooKeeper and initializes its connections. On my Mac it takes about 15 seconds before the spout is actually opened.
>
> Thanks
> Parth
> On Aug 5, 2014, at 1:11 PM, Sa Li <sa...@gmail.com> wrote:
>
>> If I set the sleep time to 1000 milliseconds, I get this error:
>>
>> 3067 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/0f1851f1-9499-48a5-817e-41712921d054
>> 3163 [Thread-10-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
>> 3163 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
>> 3164 [Thread-10-EventThread] WARN backtype.storm.cluster - Received event :disconnected::none: with disconnected Zookeeper.
>> 3636 [Thread-10-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server null, unexpected error, closing socket connection and attempting reconnect
>> java.net.ConnectException: Connection refused
>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_55]
>> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) ~[na:1.7.0_55]
>> at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) ~[zookeeper-3.3.3.jar:3.3.3-1073969]
>> 4877 [Thread-10-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server null, unexpected error, closing socket connection and attempting reconnect
>> java.net.ConnectException: Connection refused
>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_55]
>> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) ~[na:1.7.0_55]
>> at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) ~[zookeeper-3.3.3.jar:3.3.3-1073969]
>> 5566 [Thread-10-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server null, unexpected error, closing socket connection and attempting reconnect
>> java.net.ConnectException: Connection refused
>>
>> It seems it never even connects to ZooKeeper. Is there a way to confirm the ZooKeeper connection?
>>
>> Thanks a lot
>>
>> Alec
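One quick sanity check for the ZooKeeper connection is ZooKeeper's four-letter-word protocol: open a socket to the port the client is using (the log above shows localhost:2000, the in-process port Storm's local mode starts; a standalone ZooKeeper would normally be on 2181) and send `ruok`; a healthy server replies `imok`. A minimal stdlib-only sketch, with host and port as assumptions to be adjusted for your setup:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ZkCheck {
    // Returns the server's reply to "ruok" ("imok" if healthy),
    // or null if nothing answered at host:port.
    public static String ruok(String host, int port, int timeoutMillis) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMillis);
            OutputStream out = s.getOutputStream();
            out.write("ruok".getBytes(StandardCharsets.US_ASCII));
            out.flush();
            s.shutdownOutput();
            InputStream in = s.getInputStream();
            byte[] buf = new byte[16];
            int n = in.read(buf);
            return n > 0 ? new String(buf, 0, n, StandardCharsets.US_ASCII) : null;
        } catch (Exception e) {
            return null; // connection refused or timed out: nothing listening
        }
    }

    public static void main(String[] args) {
        String reply = ruok("localhost", 2000, 2000); // adjust to your ZK port
        System.out.println(reply == null
                ? "no ZooKeeper answering at localhost:2000"
                : "reply: " + reply);
    }
}
```

The same check works from a shell with `echo ruok | nc <host> <port>`; if neither gets an `imok` back, the client's ConnectException above is simply telling the truth and the port/host configuration is the thing to fix.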
>>
>> On Aug 5, 2014, at 12:58 PM, Sa Li <sa...@gmail.com> wrote:
>>
>>> Thank you very much for your reply, Taylor. I tried increasing the sleep time to 1 sec and then 10 sec, but I get the following error; it seems to be an async loop error. Any idea about that?
>>>
>>> 3053 [Thread-19-$spoutcoord-spout0] INFO org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 3058 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.util - Async loop died!
>>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27) ~[storm-core-0.9.0.1.jar:na]
>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166) ~[storm-core-0.9.0.1.jar:na]
>>> at storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38) ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26) ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>> 3058 [Thread-25-spout0] ERROR backtype.storm.util - Async loop died!
>>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69) ~[storm-core-0.9.0.1.jar:na]
>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171) ~[storm-core-0.9.0.1.jar:na]
>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20) ~[storm-core-0.9.0.1.jar:na]
>>> at storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43) ~[storm-core-0.9.0.1.jar:na]
>>> at storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214) ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>> 3059 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.daemon.executor -
>>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27) ~[storm-core-0.9.0.1.jar:na]
>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166) ~[storm-core-0.9.0.1.jar:na]
>>> at storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38) ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26) ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>> 3059 [Thread-25-spout0] ERROR backtype.storm.daemon.executor -
>>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69) ~[storm-core-0.9.0.1.jar:na]
>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171) ~[storm-core-0.9.0.1.jar:na]
>>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20) ~[storm-core-0.9.0.1.jar:na]
>>> at storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43) ~[storm-core-0.9.0.1.jar:na]
>>> at storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214) ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>>> 3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker has topology config {"storm.id" "kafka-1-1407268492", "dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/ca948198-69df-440b-8acb-6dfc4db6c288", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.kryo.decorators" (), "topology.name" "kafka", "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", 
"topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, "storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" true, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>> 3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker 64335058-7f94-447f-bc0a-5107084789a0 for storm kafka-1-1407268492 on cf2964b3-7655-4a33-88a1-f6e0ceb6f9ed:1 has finished loading
>>> 3164 [Thread-29-$mastercoord-bg0] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 3173 [Thread-25-spout0] INFO backtype.storm.util - Halting process: ("Worker died")
>>> 3173 [Thread-19-$spoutcoord-spout0] INFO backtype.storm.util - Halting process: ("Worker died")
>>>
>>> Thanks
>>>
>>> Alec
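For what it's worth, this particular `NoSuchMethodError` usually indicates a ZooKeeper version conflict: Curator calls the `ZooKeeper(String, int, Watcher, boolean)` constructor, which only exists in ZooKeeper 3.4.x and later, while the stack frames above show zookeeper-3.3.3.jar on the classpath. A small stdlib-only reflection probe (a hypothetical helper, not part of the project) can show which ZooKeeper jar and constructor set actually win at runtime:

```java
import java.lang.reflect.Constructor;
import java.security.CodeSource;

public class ZkVersionProbe {
    // Report whether org.apache.zookeeper.ZooKeeper is on the classpath,
    // where it was loaded from, and whether it has the 4-arg constructor
    // (String, int, Watcher, boolean) that Curator needs.
    public static String probe() {
        try {
            Class<?> zk = Class.forName("org.apache.zookeeper.ZooKeeper");
            CodeSource src = zk.getProtectionDomain().getCodeSource();
            StringBuilder sb = new StringBuilder("ZooKeeper loaded from ")
                    .append(src == null ? "<bootstrap>" : src.getLocation());
            for (Constructor<?> c : zk.getConstructors()) {
                Class<?>[] p = c.getParameterTypes();
                if (p.length == 4 && p[3] == boolean.class) {
                    return sb.append("; 4-arg constructor present (3.4.x+)").toString();
                }
            }
            return sb.append("; 4-arg constructor MISSING (pre-3.4, e.g. 3.3.3)").toString();
        } catch (ClassNotFoundException e) {
            return "ZooKeeper class not on classpath";
        }
    }

    public static void main(String[] args) {
        System.out.println(probe());
    }
}
```

If the probe reports the 4-arg constructor missing, the fix is to align versions, e.g. exclude or replace the old zookeeper jar so the 3.4.x line that the bundled Curator expects is the one on the classpath.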
>>>
>>> On Aug 5, 2014, at 10:26 AM, P. Taylor Goetz <pt...@gmail.com> wrote:
>>>
>>>> You are only sleeping for 100 milliseconds before shutting down the local cluster, which is probably not long enough for the topology to come up and start processing messages. Try increasing the sleep time to something like 10 seconds.
>>>>
>>>> You can also reduce startup time with the following JVM flag:
>>>>
>>>> -Djava.net.preferIPv4Stack=true
>>>>
>>>> - Taylor
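Taylor's point can be illustrated with a minimal, self-contained sketch (plain Java, no Storm dependencies; the worker thread below is only a hypothetical stand-in for a topology whose spout takes a while to open). Shutting down after 100 ms races against initialization, while waiting past the startup latency lets work actually happen:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class SleepDemo {
    // Stand-in for a topology whose spout needs some time before it is opened.
    static Thread startWorker(long startupMillis, AtomicInteger processed, CountDownLatch done) {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(startupMillis);  // simulated startup latency
                processed.incrementAndGet();  // "first tuple processed"
            } catch (InterruptedException e) {
                // interrupted before startup finished: nothing gets processed
            } finally {
                done.countDown();
            }
        });
        t.start();
        return t;
    }

    // Run the "cluster" for waitMillis, then shut it down; return tuples processed.
    static int runFor(long waitMillis) throws InterruptedException {
        AtomicInteger processed = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(1);
        Thread worker = startWorker(500, processed, done); // 500 ms "startup"
        Thread.sleep(waitMillis);  // analogous to Thread.sleep(...) before shutdown
        worker.interrupt();        // analogous to cluster.shutdown()
        done.await();
        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("wait 100 ms  -> tuples processed: " + runFor(100));
        System.out.println("wait 2000 ms -> tuples processed: " + runFor(2000));
    }
}
```

With the real topology the same pattern applies: replace `Thread.sleep(100)` with a wait of 10+ seconds (or a loop that polls for the expected output) before calling `cluster.shutdown()`.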
>>>>
>>>> On Aug 5, 2014, at 1:16 PM, Sa Li <sa...@gmail.com> wrote:
>>>>
>>>>> Sorry, here is the storm topology:
>>>>>
>>>>>> TridentTopology topology = new TridentTopology();
>>>>>> BrokerHosts zk = new ZkHosts("localhost");
>>>>>> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "topictest");
>>>>>> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>>>>> OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Aug 5, 2014, at 9:56 AM, Sa Li <sa...@gmail.com> wrote:
>>>>>
>>>>>> Thank you very much, Marcelo, it indeed worked; now I can run my code without getting an error. However, another thing keeps bothering me. The following is my code:
>>>>>>
>>>>>> public static class PrintStream implements Filter {
>>>>>>
>>>>>>     @SuppressWarnings("rawtypes")
>>>>>>     @Override
>>>>>>     public void prepare(Map conf, TridentOperationContext context) {
>>>>>>     }
>>>>>>
>>>>>>     @Override
>>>>>>     public void cleanup() {
>>>>>>     }
>>>>>>
>>>>>>     @Override
>>>>>>     public boolean isKeep(TridentTuple tuple) {
>>>>>>         System.out.println(tuple);
>>>>>>         return true;
>>>>>>     }
>>>>>> }
>>>>>>
>>>>>> public static StormTopology buildTopology(LocalDRPC drpc) throws IOException {
>>>>>>
>>>>>>     TridentTopology topology = new TridentTopology();
>>>>>>     BrokerHosts zk = new ZkHosts("localhost");
>>>>>>     TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "ingest_test");
>>>>>>     spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>>>>>     OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>>>>>>
>>>>>>     topology.newStream("kafka", spout)
>>>>>>             .each(new Fields("str"), new PrintStream());
>>>>>>
>>>>>>     return topology.build();
>>>>>> }
>>>>>>
>>>>>> public static void main(String[] args) throws Exception {
>>>>>>
>>>>>>     Config conf = new Config();
>>>>>>     conf.setDebug(true);
>>>>>>     conf.setMaxSpoutPending(1);
>>>>>>     conf.setMaxTaskParallelism(3);
>>>>>>
>>>>>>     LocalDRPC drpc = new LocalDRPC();
>>>>>>     LocalCluster cluster = new LocalCluster();
>>>>>>     cluster.submitTopology("kafka", conf, buildTopology(drpc));
>>>>>>     Thread.sleep(100);
>>>>>>     cluster.shutdown();
>>>>>> }
>>>>>>
>>>>>> What I expect is quite simple: print out the messages collected from a Kafka producer playback process that is running separately. The topic is listed as:
>>>>>>
>>>>>> root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper localhost:2181
>>>>>> topic: topictest partition: 0 leader: 1 replicas: 1,3,2 isr: 1,3,2
>>>>>> topic: topictest partition: 1 leader: 2 replicas: 2,1,3 isr: 2,1,3
>>>>>> topic: topictest partition: 2 leader: 3 replicas: 3,2,1 isr: 3,2,1
>>>>>> topic: topictest partition: 3 leader: 1 replicas: 1,2,3 isr: 1,2,3
>>>>>> topic: topictest partition: 4 leader: 2 replicas: 2,3,1 isr: 2,3,1
>>>>>>
>>>>>> When I run the code, this is what I see on the screen; there seems to be no error, but no messages are printed out either:
>>>>>>
>>>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>>>> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>>> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>>>>> Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1 -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file= -cp /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.0.1
/lib/guava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/etc/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>>>> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>>> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>>>>> 1113 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>>>>> 1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>>> 1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
>>>>>> 1237 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>> 1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>> 1350 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>> 1417 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>> 1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>> 1482 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>> 1484 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>> 1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>> 1540 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>> 1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>>> 1576 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>> 1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>> 1590 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>> 1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
>>>>>> 1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>>> 1638 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>> 1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>>> 1690 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>> 1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
>>>>>> 1944 [main] INFO backtype.storm.daemon.nimbus - Received topology submission for kafka with conf {"topology.max.task.parallelism" nil, "topology.acker.executors" nil, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id" "kafka-1-1407257070", "topology.debug" true}
>>>>>> 1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka: kafka-1-1407257070
>>>>>> 2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 3] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
>>>>>> 2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment for topology id kafka-1-1407257070: #backtype.storm.daemon.common.Assignment{:master-code-dir "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070", :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"}, :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3] 1407257070}}
>>>>>> 2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
>>>>>> 2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
>>>>>> 2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
>>>>>> 2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
>>>>>> 2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
>>>>>> 2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
>>>>>> 2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
>>>>>> 2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
>>>>>> 2256 [main] INFO backtype.storm.testing - Shutting down in process zookeeper
>>>>>> 2257 [main] INFO backtype.storm.testing - Done shutting down in process zookeeper
>>>>>> 2258 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
>>>>>> 2259 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>>>>> 2260 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
>>>>>> 2261 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
>>>>>>
>>>>>> Can anyone help me locate the problem? I really need to get past this step in order to replace .each(printStream()) with other functions.
>>>>>>
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> Alec
>>>>>>
>>>>>> On Aug 4, 2014, at 4:24 AM, Marcelo Valle <mv...@redoop.org> wrote:
>>>>>>
>>>>>>> hello,
>>>>>>>
>>>>>>> you can check your .jar application with the command "jar tf" to see if the class kafka/api/OffsetRequest.class is part of the jar.
>>>>>>> If not, you can try copying kafka-2.9.2-0.8.0.jar (or the version you are using) into the storm lib directory.
>>>>>>>
>>>>>>> Marcelo
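
[Editor's note: Marcelo's "jar tf" check can be sketched as below. The jar path is the one used elsewhere in this thread; adjust it to your build.]

```shell
# Check whether the class named in the NoClassDefFoundError was packaged
# into the fat jar produced by "mvn package".
JAR=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar
if [ -f "$JAR" ]; then
  jar tf "$JAR" | grep 'kafka/api/OffsetRequest' \
    || echo "kafka/api/OffsetRequest.class is NOT in the jar"
else
  echo "jar not found: $JAR (run mvn package first)"
fi
```

If the grep prints nothing, the Kafka classes were not bundled — consistent with the kafka_2.9.2 dependency being declared with scope "provided" in the pom above.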
>>>>>>>
>>>>>>>
>>>>>>> 2014-07-31 23:33 GMT+02:00 Sa Li <sa...@gmail.com>:
>>>>>>> Hi, all
>>>>>>>
>>>>>>> I am running kafka-spout code on the storm server; the pom is:
>>>>>>>
>>>>>>> <dependency>
>>>>>>> <groupId>org.apache.kafka</groupId>
>>>>>>> <artifactId>kafka_2.9.2</artifactId>
>>>>>>> <version>0.8.0</version>
>>>>>>> <scope>provided</scope>
>>>>>>>
>>>>>>> <exclusions>
>>>>>>> <exclusion>
>>>>>>> <groupId>org.apache.zookeeper</groupId>
>>>>>>> <artifactId>zookeeper</artifactId>
>>>>>>> </exclusion>
>>>>>>> <exclusion>
>>>>>>> <groupId>log4j</groupId>
>>>>>>> <artifactId>log4j</artifactId>
>>>>>>> </exclusion>
>>>>>>> </exclusions>
>>>>>>>
>>>>>>> </dependency>
>>>>>>>
>>>>>>> <!-- Storm-Kafka compiled -->
>>>>>>>
>>>>>>> <dependency>
>>>>>>> <artifactId>storm-kafka</artifactId>
>>>>>>> <groupId>org.apache.storm</groupId>
>>>>>>> <version>0.9.2-incubating</version>
>>>>>>> <scope>compile</scope>
>>>>>>> </dependency>
>>>>>>>
>>>>>>> I can build it with mvn package, but when I run it:
>>>>>>> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# storm jar target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>>>>>>
>>>>>>>
>>>>>>> I am getting this error:
>>>>>>>
>>>>>>> 1657 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>> 1682 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id a66e0c61-a951-4c1b-a43f-3fb0d12cb226 at host DO-mq-dev
>>>>>>> 1698 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread Thread[main,5,main] died
>>>>>>> java.lang.NoClassDefFoundError: kafka/api/OffsetRequest
>>>>>>> at storm.artemis.kafka.KafkaConfig.<init>(KafkaConfig.java:26) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>> at storm.artemis.kafka.trident.TridentKafkaConfig.<init>(TridentKafkaConfig.java:13) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>> at storm.artemis.KafkaConsumerTopology.buildTopology(KafkaConsumerTopology.java:115) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>> at storm.artemis.KafkaConsumerTopology.main(KafkaConsumerTopology.java:144) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>> Caused by: java.lang.ClassNotFoundException: kafka.api.OffsetRequest
>>>>>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_55]
>>>>>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_55]
>>>>>>> at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_55]
>>>>>>> at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[na:1.7.0_55]
>>>>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_55]
>>>>>>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[na:1.7.0_55]
>>>>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_55]
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> I tried poking around online but could not find a solution. Any idea what is going on?
>>>>>>>
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>> Alec
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
>
Re: kafka-spout running error
Posted by Parth Brahmbhatt <pb...@hortonworks.com>.
Can you let the topology run for 120 seconds or so? In my experience the kafka bolt/spout incurs a lot of latency initially, as it reads/writes from zookeeper and initializes connections. On my mac it takes about 15 seconds before the spout is actually opened.
Thanks
Parth
On Aug 5, 2014, at 1:11 PM, Sa Li <sa...@gmail.com> wrote:
> If I set the sleep time to 1000 milliseconds, I get this error:
>
> 3067 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/0f1851f1-9499-48a5-817e-41712921d054
> 3163 [Thread-10-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
> 3163 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
> 3164 [Thread-10-EventThread] WARN backtype.storm.cluster - Received event :disconnected::none: with disconnected Zookeeper.
> 3636 [Thread-10-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server null, unexpected error, closing socket connection and attempting reconnect
> java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_55]
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) ~[na:1.7.0_55]
> at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) ~[zookeeper-3.3.3.jar:3.3.3-1073969]
> 4877 [Thread-10-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server null, unexpected error, closing socket connection and attempting reconnect
> java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_55]
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) ~[na:1.7.0_55]
> at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) ~[zookeeper-3.3.3.jar:3.3.3-1073969]
> 5566 [Thread-10-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server null, unexpected error, closing socket connection and attempting reconnect
> java.net.ConnectException: Connection refused
>
> It seems it never even connected to ZooKeeper. Is there any way to confirm the ZooKeeper connection?
>
> Thanks a lot
>
> Alec
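
[Editor's note: one quick way to confirm a ZooKeeper connection is the "ruok" four-letter-word check sketched below. Port 2000 matches storm.zookeeper.port in the local-mode logs above; a standalone ZooKeeper usually listens on 2181 — adjust to your setup.]

```shell
# Ask ZooKeeper "ruok"; a healthy server replies "imok".
ZK_HOST=localhost
ZK_PORT=2000
if command -v nc >/dev/null 2>&1; then
  reply=$(echo ruok | nc -w 2 "$ZK_HOST" "$ZK_PORT" 2>/dev/null || true)
  if [ "$reply" = "imok" ]; then
    echo "ZooKeeper is up on $ZK_HOST:$ZK_PORT"
  else
    echo "no ZooKeeper answering on $ZK_HOST:$ZK_PORT"
  fi
else
  echo "nc not installed; try: echo ruok | telnet $ZK_HOST $ZK_PORT"
fi
```

"echo stat | nc $ZK_HOST $ZK_PORT" gives more detail (mode, connected clients) once "ruok" answers.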
>
> On Aug 5, 2014, at 12:58 PM, Sa Li <sa...@gmail.com> wrote:
>
>> Thank you very much for your reply, Taylor. I tried increasing the sleep time to 1 sec and then 10 sec; however, I got the following error. It seems to be an async-loop error. Any idea about that?
>>
>> 3053 [Thread-19-$spoutcoord-spout0] INFO org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
>> 3058 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.util - Async loop died!
>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27) ~[storm-core-0.9.0.1.jar:na]
>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166) ~[storm-core-0.9.0.1.jar:na]
>> at storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38) ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26) ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>> 3058 [Thread-25-spout0] ERROR backtype.storm.util - Async loop died!
>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69) ~[storm-core-0.9.0.1.jar:na]
>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171) ~[storm-core-0.9.0.1.jar:na]
>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20) ~[storm-core-0.9.0.1.jar:na]
>> at storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43) ~[storm-core-0.9.0.1.jar:na]
>> at storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214) ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>> 3059 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.daemon.executor -
>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27) ~[storm-core-0.9.0.1.jar:na]
>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166) ~[storm-core-0.9.0.1.jar:na]
>> at storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38) ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26) ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>> 3059 [Thread-25-spout0] ERROR backtype.storm.daemon.executor -
>> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69) ~[storm-core-0.9.0.1.jar:na]
>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171) ~[storm-core-0.9.0.1.jar:na]
>> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20) ~[storm-core-0.9.0.1.jar:na]
>> at storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43) ~[storm-core-0.9.0.1.jar:na]
>> at storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214) ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
>> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
>> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
>> 3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker has topology config {"storm.id" "kafka-1-1407268492", "dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/ca948198-69df-440b-8acb-6dfc4db6c288", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.kryo.decorators" (), "topology.name" "kafka", "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", 
"topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, "storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" true, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>> 3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker 64335058-7f94-447f-bc0a-5107084789a0 for storm kafka-1-1407268492 on cf2964b3-7655-4a33-88a1-f6e0ceb6f9ed:1 has finished loading
>> 3164 [Thread-29-$mastercoord-bg0] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>> 3173 [Thread-25-spout0] INFO backtype.storm.util - Halting process: ("Worker died")
>> 3173 [Thread-19-$spoutcoord-spout0] INFO backtype.storm.util - Halting process: ("Worker died")
>>
>> Thanks
>>
>> Alec
>>
>> On Aug 5, 2014, at 10:26 AM, P. Taylor Goetz <pt...@gmail.com> wrote:
>>
>>> You are only sleeping for 100 milliseconds before shutting down the local cluster, which is probably not long enough for the topology to come up and start processing messages. Try increasing the sleep time to something like 10 seconds.
>>>
>>> You can also reduce startup time with the following JVM flag:
>>>
>>> -Djava.net.preferIPv4Stack=true
>>>
>>> - Taylor
>>>
>>> On Aug 5, 2014, at 1:16 PM, Sa Li <sa...@gmail.com> wrote:
>>>
>>>> Sorry, the stormTopology:
>>>>
>>>>> TridentTopology topology = new TridentTopology();
>>>>> BrokerHosts zk = new ZkHosts("localhost");
>>>>> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "topictest");
>>>>> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>>>> OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>>>>
>>>>
>>>>
>>>>
>>>> On Aug 5, 2014, at 9:56 AM, Sa Li <sa...@gmail.com> wrote:
>>>>
>>>>> Thank you very much, Marcelo, it indeed worked; now I can run my code without getting that error. However, another thing keeps bothering me. The following is my code:
>>>>>
>>>>> public static class PrintStream implements Filter {
>>>>>
>>>>> @SuppressWarnings("rawtypes")
>>>>> @Override
>>>>> public void prepare(Map conf, TridentOperationContext context) {
>>>>> }
>>>>> @Override
>>>>> public void cleanup() {
>>>>> }
>>>>> @Override
>>>>> public boolean isKeep(TridentTuple tuple) {
>>>>> System.out.println(tuple);
>>>>> return true;
>>>>> }
>>>>> }
>>>>> public static StormTopology buildTopology(LocalDRPC drpc) throws IOException {
>>>>>
>>>>> TridentTopology topology = new TridentTopology();
>>>>> BrokerHosts zk = new ZkHosts("localhost");
>>>>> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "ingest_test");
>>>>> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>>>> OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>>>>>
>>>>> topology.newStream("kafka", spout)
>>>>> .each(new Fields("str"),
>>>>> new PrintStream()
>>>>> );
>>>>>
>>>>> return topology.build();
>>>>> }
>>>>> public static void main(String[] args) throws Exception {
>>>>>
>>>>> Config conf = new Config();
>>>>> conf.setDebug(true);
>>>>> conf.setMaxSpoutPending(1);
>>>>> conf.setMaxTaskParallelism(3);
>>>>> LocalDRPC drpc = new LocalDRPC();
>>>>> LocalCluster cluster = new LocalCluster();
>>>>> cluster.submitTopology("kafka", conf, buildTopology(drpc));
>>>>> Thread.sleep(100);
>>>>> cluster.shutdown();
>>>>> }
>>>>>
>>>>> What I expect is quite simple: print out the messages I collect from a Kafka producer playback process, which is running separately. The topic is listed as:
>>>>>
>>>>> root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper localhost:2181
>>>>> topic: topictest partition: 0 leader: 1 replicas: 1,3,2 isr: 1,3,2
>>>>> topic: topictest partition: 1 leader: 2 replicas: 2,1,3 isr: 2,1,3
>>>>> topic: topictest partition: 2 leader: 3 replicas: 3,2,1 isr: 3,2,1
>>>>> topic: topictest partition: 3 leader: 1 replicas: 1,2,3 isr: 1,2,3
>>>>> topic: topictest partition: 4 leader: 2 replicas: 2,3,1 isr: 2,3,1
>>>>>
>>>>> When I run the code, this is what I see on the screen: seemingly no error, but no messages are printed out either:
>>>>>
>>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>>> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>>>> Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1 -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file= -cp /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.0.1/
lib/guava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/etc/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>>> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>>>> 1113 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>>>> 1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>> 1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
>>>>> 1237 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>> 1350 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1417 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>> 1482 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1484 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>> 1540 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>> 1576 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>> 1590 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
>>>>> 1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>> 1638 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>> 1690 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
>>>>> 1944 [main] INFO backtype.storm.daemon.nimbus - Received topology submission for kafka with conf {"topology.max.task.parallelism" nil, "topology.acker.executors" nil, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id" "kafka-1-1407257070", "topology.debug" true}
>>>>> 1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka: kafka-1-1407257070
>>>>> 2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 3] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
>>>>> 2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment for topology id kafka-1-1407257070: #backtype.storm.daemon.common.Assignment{:master-code-dir "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070", :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"}, :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3] 1407257070}}
>>>>> 2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
>>>>> 2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
>>>>> 2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
>>>>> 2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
>>>>> 2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
>>>>> 2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
>>>>> 2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
>>>>> 2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
>>>>> 2256 [main] INFO backtype.storm.testing - Shutting down in process zookeeper
>>>>> 2257 [main] INFO backtype.storm.testing - Done shutting down in process zookeeper
>>>>> 2258 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
>>>>> 2259 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>>>> 2260 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
>>>>> 2261 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
>>>>>
>>>>> Can anyone help me locate what the problem is? I really need to get through this step in order to be able to replace .each(new PrintStream()) with other functions.
>>>>>
>>>>>
>>>>> Thanks
>>>>>
>>>>> Alec
>>>>>
>>>>> On Aug 4, 2014, at 4:24 AM, Marcelo Valle <mv...@redoop.org> wrote:
>>>>>
>>>>>> hello,
>>>>>>
>>>>>> you can check your .jar application with the command "jar tf" to see if the class kafka/api/OffsetRequest.class is part of the jar.
>>>>>> If not, you can try copying kafka_2.9.2-0.8.0.jar (or the version you are using) into Storm's lib directory
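The jar check above can be scripted. This is a minimal sketch: the jar path below matches the fat-jar name used elsewhere in this thread, so adjust it to your own build, and it assumes the JDK `jar` tool is on the PATH:

```shell
# Verify whether kafka/api/OffsetRequest made it into the shaded jar.
JAR=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar
if command -v jar >/dev/null && [ -f "$JAR" ] \
    && jar tf "$JAR" | grep -q 'kafka/api/OffsetRequest.class'; then
  result="bundled"       # the class ships inside the fat jar
else
  result="missing"       # bundle kafka, or copy the kafka jar into Storm's lib dir
fi
echo "OffsetRequest: $result"
```

If the class is missing, either give the kafka dependency compile scope so it is shaded into the fat jar, or copy the kafka jar into Storm's lib directory as Marcelo suggests.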
>>>>>>
>>>>>> Marcelo
>>>>>>
>>>>>>
>>>>>> 2014-07-31 23:33 GMT+02:00 Sa Li <sa...@gmail.com>:
>>>>>> Hi, all
>>>>>>
>>>>>> I am running a kafka-spout code in storm-server, the pom is
>>>>>>
>>>>>> <groupId>org.apache.kafka</groupId>
>>>>>> <artifactId>kafka_2.9.2</artifactId>
>>>>>> <version>0.8.0</version>
>>>>>> <scope>provided</scope>
>>>>>>
>>>>>> <exclusions>
>>>>>> <exclusion>
>>>>>> <groupId>org.apache.zookeeper</groupId>
>>>>>> <artifactId>zookeeper</artifactId>
>>>>>> </exclusion>
>>>>>> <exclusion>
>>>>>> <groupId>log4j</groupId>
>>>>>> <artifactId>log4j</artifactId>
>>>>>> </exclusion>
>>>>>> </exclusions>
>>>>>>
>>>>>> </dependency>
>>>>>>
>>>>>> <!-- Storm-Kafka compiled -->
>>>>>>
>>>>>> <dependency>
>>>>>> <artifactId>storm-kafka</artifactId>
>>>>>> <groupId>org.apache.storm</groupId>
>>>>>> <version>0.9.2-incubating</version>
>>>>>> <scope>*compile*</scope>
>>>>>> </dependency>
>>>>>>
>>>>>> I can mvn package it, but when I run it
>>>>>> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# storm jar target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>>>>>
>>>>>>
>>>>>> I am getting such error
>>>>>>
>>>>>> 1657 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>> 1682 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id a66e0c61-a951-4c1b-a43f-3fb0d12cb226 at host DO-mq-dev
>>>>>> 1698 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread Thread[main,5,main] died
>>>>>> java.lang.NoClassDefFoundError: kafka/api/OffsetRequest
>>>>>> at storm.artemis.kafka.KafkaConfig.<init>(KafkaConfig.java:26) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.kafka.trident.TridentKafkaConfig.<init>(TridentKafkaConfig.java:13) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.KafkaConsumerTopology.buildTopology(KafkaConsumerTopology.java:115) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> at storm.artemis.KafkaConsumerTopology.main(KafkaConsumerTopology.java:144) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>> Caused by: java.lang.ClassNotFoundException: kafka.api.OffsetRequest
>>>>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_55]
>>>>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_55]
>>>>>> at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_55]
>>>>>> at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[na:1.7.0_55]
>>>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_55]
>>>>>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[na:1.7.0_55]
>>>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_55]
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> I try to poke around online, could not find a solution for it, any idea about that?
>>>>>>
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> Alec
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
Re: kafka-spout running error
Posted by Sa Li <sa...@gmail.com>.
If I set the sleep time to 1000 milliseconds, I get this error:
3067 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/0f1851f1-9499-48a5-817e-41712921d054
3163 [Thread-10-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
3163 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
3164 [Thread-10-EventThread] WARN backtype.storm.cluster - Received event :disconnected::none: with disconnected Zookeeper.
3636 [Thread-10-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_55]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) ~[na:1.7.0_55]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) ~[zookeeper-3.3.3.jar:3.3.3-1073969]
4877 [Thread-10-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_55]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) ~[na:1.7.0_55]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119) ~[zookeeper-3.3.3.jar:3.3.3-1073969]
5566 [Thread-10-SendThread(localhost:2000)] WARN org.apache.zookeeper.ClientCnxn - Session 0x147a7c868ef000b for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
It seems the process is not even connecting to ZooKeeper. Is there any way to confirm the ZooKeeper connection?
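One quick liveness probe is ZooKeeper's four-letter `ruok` command, which a healthy server answers with `imok`. This sketch assumes `nc` is available; note that the in-process ZooKeeper in these logs listens on port 2000, not the default 2181:

```shell
# Probe ZooKeeper liveness; a healthy server replies "imok".
# Port 2000 matches the in-process ZooKeeper started in the logs above.
reply=$(echo ruok | nc -w 2 localhost 2000 2>/dev/null || echo "no-zookeeper")
echo "$reply"
```

Alternatively, `bin/zkCli.sh -server localhost:2000` from the ZooKeeper distribution will show whether a client session can be established at all.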
Thanks a lot
Alec
On Aug 5, 2014, at 12:58 PM, Sa Li <sa...@gmail.com> wrote:
> Thank you very much for your reply, Taylor. I tried increasing the sleep time to 1 sec and then 10 sec, but I got the following error; it seems to be an async loop error. Any idea about that?
>
> 3053 [Thread-19-$spoutcoord-spout0] INFO org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
> 3058 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.util - Async loop died!
> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27) ~[storm-core-0.9.0.1.jar:na]
> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166) ~[storm-core-0.9.0.1.jar:na]
> at storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38) ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26) ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
> 3058 [Thread-25-spout0] ERROR backtype.storm.util - Async loop died!
> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69) ~[storm-core-0.9.0.1.jar:na]
> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171) ~[storm-core-0.9.0.1.jar:na]
> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20) ~[storm-core-0.9.0.1.jar:na]
> at storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43) ~[storm-core-0.9.0.1.jar:na]
> at storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214) ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
> 3059 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.daemon.executor -
> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27) ~[storm-core-0.9.0.1.jar:na]
> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166) ~[storm-core-0.9.0.1.jar:na]
> at storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38) ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26) ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
> 3059 [Thread-25-spout0] ERROR backtype.storm.daemon.executor -
> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
> at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69) ~[storm-core-0.9.0.1.jar:na]
> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171) ~[storm-core-0.9.0.1.jar:na]
> at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20) ~[storm-core-0.9.0.1.jar:na]
> at storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43) ~[storm-core-0.9.0.1.jar:na]
> at storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214) ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
> at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
> 3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker has topology config {"storm.id" "kafka-1-1407268492", "dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/ca948198-69df-440b-8acb-6dfc4db6c288", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.kryo.decorators" (), "topology.name" "kafka", "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", 
"topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, "storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" true, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
> 3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker 64335058-7f94-447f-bc0a-5107084789a0 for storm kafka-1-1407268492 on cf2964b3-7655-4a33-88a1-f6e0ceb6f9ed:1 has finished loading
> 3164 [Thread-29-$mastercoord-bg0] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 3173 [Thread-25-spout0] INFO backtype.storm.util - Halting process: ("Worker died")
> 3173 [Thread-19-$spoutcoord-spout0] INFO backtype.storm.util - Halting process: ("Worker died")
>
> Thanks
>
> Alec
>
> On Aug 5, 2014, at 10:26 AM, P. Taylor Goetz <pt...@gmail.com> wrote:
>
>> You are only sleeping for 100 milliseconds before shutting down the local cluster, which is probably not long enough for the topology to come up and start processing messages. Try increasing the sleep time to something like 10 seconds.
>>
>> You can also reduce startup time with the following JVM flag:
>>
>> -Djava.net.preferIPv4Stack=true
>>
>> - Taylor
>>
>> On Aug 5, 2014, at 1:16 PM, Sa Li <sa...@gmail.com> wrote:
>>
>>> Sorry, the stormTopology:
>>>
>>>> TridentTopology topology = new TridentTopology();
>>>> BrokerHosts zk = new ZkHosts("localhost");
>>>> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "topictest");
>>>> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>>> OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>>>
>>>
>>>
>>>
>>> On Aug 5, 2014, at 9:56 AM, Sa Li <sa...@gmail.com> wrote:
>>>
>>>> Thank you very much, Marcelo, that indeed worked; now I can run my code without errors. However, another thing keeps bothering me. The following is my code:
>>>>
>>>> public static class PrintStream implements Filter {
>>>>
>>>>     @SuppressWarnings("rawtypes")
>>>>     @Override
>>>>     public void prepare(Map conf, TridentOperationContext context) {
>>>>     }
>>>>
>>>>     @Override
>>>>     public void cleanup() {
>>>>     }
>>>>
>>>>     @Override
>>>>     public boolean isKeep(TridentTuple tuple) {
>>>>         System.out.println(tuple);
>>>>         return true;
>>>>     }
>>>> }
>>>>
>>>> public static StormTopology buildTopology(LocalDRPC drpc) throws IOException {
>>>>
>>>>     TridentTopology topology = new TridentTopology();
>>>>     BrokerHosts zk = new ZkHosts("localhost");
>>>>     TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "ingest_test");
>>>>     spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>>>     OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>>>>
>>>>     topology.newStream("kafka", spout)
>>>>             .each(new Fields("str"), new PrintStream());
>>>>
>>>>     return topology.build();
>>>> }
>>>>
>>>> public static void main(String[] args) throws Exception {
>>>>
>>>>     Config conf = new Config();
>>>>     conf.setDebug(true);
>>>>     conf.setMaxSpoutPending(1);
>>>>     conf.setMaxTaskParallelism(3);
>>>>     LocalDRPC drpc = new LocalDRPC();
>>>>     LocalCluster cluster = new LocalCluster();
>>>>     cluster.submitTopology("kafka", conf, buildTopology(drpc));
>>>>     Thread.sleep(100);
>>>>     cluster.shutdown();
>>>> }
>>>>
>>>> What I expect is quite simple: print each message collected from a Kafka producer playback process that runs separately. The topic is listed as:
>>>>
>>>> root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper localhost:2181
>>>> topic: topictest partition: 0 leader: 1 replicas: 1,3,2 isr: 1,3,2
>>>> topic: topictest partition: 1 leader: 2 replicas: 2,1,3 isr: 2,1,3
>>>> topic: topictest partition: 2 leader: 3 replicas: 3,2,1 isr: 3,2,1
>>>> topic: topictest partition: 3 leader: 1 replicas: 1,2,3 isr: 1,2,3
>>>> topic: topictest partition: 4 leader: 2 replicas: 2,3,1 isr: 2,3,1
>>>>
>>>> When I run the code, this is what I see on the screen: there seems to be no error, but no messages are printed out either:
>>>>
>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>>> Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1 -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file= -cp /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.0.1/l
ib/guava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/etc/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>>> 1113 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>>> 1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>> 1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
>>>> 1237 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>> 1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>> 1350 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>> 1417 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>> 1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>> 1482 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>> 1484 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>> 1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>> 1540 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>> 1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>> 1576 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>> 1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>> 1590 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>> 1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
>>>> 1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>> 1638 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>> 1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>> 1690 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>> 1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
>>>> 1944 [main] INFO backtype.storm.daemon.nimbus - Received topology submission for kafka with conf {"topology.max.task.parallelism" nil, "topology.acker.executors" nil, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id" "kafka-1-1407257070", "topology.debug" true}
>>>> 1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka: kafka-1-1407257070
>>>> 2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 3] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
>>>> 2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment for topology id kafka-1-1407257070: #backtype.storm.daemon.common.Assignment{:master-code-dir "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070", :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"}, :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3] 1407257070}}
>>>> 2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
>>>> 2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
>>>> 2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
>>>> 2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
>>>> 2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
>>>> 2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
>>>> 2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
>>>> 2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
>>>> 2256 [main] INFO backtype.storm.testing - Shutting down in process zookeeper
>>>> 2257 [main] INFO backtype.storm.testing - Done shutting down in process zookeeper
>>>> 2258 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
>>>> 2259 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>>> 2260 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
>>>> 2261 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
>>>>
>>>> Can anyone help me locate the problem? I really need to get past this step so that I can replace .each(printStream()) with other functions.
>>>>
>>>>
>>>> Thanks
>>>>
>>>> Alec
>>>>
>>>> On Aug 4, 2014, at 4:24 AM, Marcelo Valle <mv...@redoop.org> wrote:
>>>>
>>>>> hello,
>>>>>
>>>>> you can check your .jar application with command " jar tf " to see if class kafka/api/OffsetRequest.class is part of the jar.
>>>>> If not you can try to copy kafka-2.9.2-0.8.0.jar (or version you are using) in storm_lib directory
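The check described above can be run from the project directory; a minimal sketch, taking the jar name from the `storm jar` command shown earlier in the thread:

```shell
# List the fat jar's contents and look for the class named in the
# NoClassDefFoundError. No output means the class was not bundled.
jar tf target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar \
  | grep 'kafka/api/OffsetRequest'
```

If grep prints nothing, the kafka dependency (marked `provided` in the pom) was excluded from the assembly, which matches the error below.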
>>>>>
>>>>> Marcelo
>>>>>
>>>>>
>>>>> 2014-07-31 23:33 GMT+02:00 Sa Li <sa...@gmail.com>:
>>>>> Hi all,
>>>>>
>>>>> I am running kafka-spout code on the Storm server; the relevant part of the pom is:
>>>>>
>>>>> <dependency>
>>>>> <groupId>org.apache.kafka</groupId>
>>>>> <artifactId>kafka_2.9.2</artifactId>
>>>>> <version>0.8.0</version>
>>>>> <scope>provided</scope>
>>>>>
>>>>> <exclusions>
>>>>> <exclusion>
>>>>> <groupId>org.apache.zookeeper</groupId>
>>>>> <artifactId>zookeeper</artifactId>
>>>>> </exclusion>
>>>>> <exclusion>
>>>>> <groupId>log4j</groupId>
>>>>> <artifactId>log4j</artifactId>
>>>>> </exclusion>
>>>>> </exclusions>
>>>>>
>>>>> </dependency>
>>>>>
>>>>> <!-- Storm-Kafka compiled -->
>>>>>
>>>>> <dependency>
>>>>> <artifactId>storm-kafka</artifactId>
>>>>> <groupId>org.apache.storm</groupId>
>>>>> <version>0.9.2-incubating</version>
>>>>> <scope>*compile*</scope>
>>>>> </dependency>
>>>>>
>>>>> I can build it with mvn package, but when I run it:
>>>>> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# storm jar target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>>>>
>>>>>
>>>>> I get the following error:
>>>>>
>>>>> 1657 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1682 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id a66e0c61-a951-4c1b-a43f-3fb0d12cb226 at host DO-mq-dev
>>>>> 1698 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread Thread[main,5,main] died
>>>>> java.lang.NoClassDefFoundError: kafka/api/OffsetRequest
>>>>> at storm.artemis.kafka.KafkaConfig.<init>(KafkaConfig.java:26) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.kafka.trident.TridentKafkaConfig.<init>(TridentKafkaConfig.java:13) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.KafkaConsumerTopology.buildTopology(KafkaConsumerTopology.java:115) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> at storm.artemis.KafkaConsumerTopology.main(KafkaConsumerTopology.java:144) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>> Caused by: java.lang.ClassNotFoundException: kafka.api.OffsetRequest
>>>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_55]
>>>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_55]
>>>>> at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_55]
>>>>> at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[na:1.7.0_55]
>>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_55]
>>>>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[na:1.7.0_55]
>>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_55]
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> I tried to poke around online but could not find a solution. Any idea what is wrong?
>>>>>
>>>>>
>>>>> Thanks
>>>>>
>>>>> Alec
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
Re: kafka-spout running error
Posted by Sa Li <sa...@gmail.com>.
Thank you very much for your reply, Taylor. I tried increasing the sleep time to 1 second and then to 10 seconds, but I got the following error; it seems to be an async-loop error. Any idea what the problem is?
3053 [Thread-19-$spoutcoord-spout0] INFO org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
3058 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.util - Async loop died!
java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27) ~[storm-core-0.9.0.1.jar:na]
at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166) ~[storm-core-0.9.0.1.jar:na]
at storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38) ~[storm-core-0.9.0.1.jar:na]
at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26) ~[storm-core-0.9.0.1.jar:na]
at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
3058 [Thread-25-spout0] ERROR backtype.storm.util - Async loop died!
java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69) ~[storm-core-0.9.0.1.jar:na]
at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171) ~[storm-core-0.9.0.1.jar:na]
at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20) ~[storm-core-0.9.0.1.jar:na]
at storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43) ~[storm-core-0.9.0.1.jar:na]
at storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214) ~[storm-core-0.9.0.1.jar:na]
at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
3059 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.daemon.executor -
java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.trident.Coordinator.<init>(Coordinator.java:16) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getCoordinator(OpaqueTridentKafkaSpout.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Coordinator.<init>(OpaquePartitionedTridentSpoutExecutor.java:27) ~[storm-core-0.9.0.1.jar:na]
at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getCoordinator(OpaquePartitionedTridentSpoutExecutor.java:166) ~[storm-core-0.9.0.1.jar:na]
at storm.trident.spout.TridentSpoutCoordinator.prepare(TridentSpoutCoordinator.java:38) ~[storm-core-0.9.0.1.jar:na]
at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:26) ~[storm-core-0.9.0.1.jar:na]
at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
3059 [Thread-25-spout0] ERROR backtype.storm.daemon.executor -
java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.framework.imps.CuratorFrameworkImpl$2.newZooKeeper(CuratorFrameworkImpl.java:154) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.HandleHolder$1.getZooKeeper(HandleHolder.java:94) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:190) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:256) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.DynamicBrokersReader.<init>(DynamicBrokersReader.java:36) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:40) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.trident.TridentKafkaEmitter.<init>(TridentKafkaEmitter.java:44) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.artemis.kafka.trident.OpaqueTridentKafkaSpout.getEmitter(OpaqueTridentKafkaSpout.java:24) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.<init>(OpaquePartitionedTridentSpoutExecutor.java:69) ~[storm-core-0.9.0.1.jar:na]
at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:171) ~[storm-core-0.9.0.1.jar:na]
at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor.getEmitter(OpaquePartitionedTridentSpoutExecutor.java:20) ~[storm-core-0.9.0.1.jar:na]
at storm.trident.spout.TridentSpoutExecutor.prepare(TridentSpoutExecutor.java:43) ~[storm-core-0.9.0.1.jar:na]
at storm.trident.topology.TridentBoltExecutor.prepare(TridentBoltExecutor.java:214) ~[storm-core-0.9.0.1.jar:na]
at backtype.storm.daemon.executor$fn__3498$fn__3510.invoke(executor.clj:674) ~[storm-core-0.9.0.1.jar:na]
at backtype.storm.util$async_loop$fn__444.invoke(util.clj:401) ~[storm-core-0.9.0.1.jar:na]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.4.0.jar:na]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker has topology config {"storm.id" "kafka-1-1407268492", "dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/ca948198-69df-440b-8acb-6dfc4db6c288", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.kryo.decorators" (), "topology.name" "kafka", "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", 
"topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, "storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" true, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
3059 [Thread-7] INFO backtype.storm.daemon.worker - Worker 64335058-7f94-447f-bc0a-5107084789a0 for storm kafka-1-1407268492 on cf2964b3-7655-4a33-88a1-f6e0ceb6f9ed:1 has finished loading
3164 [Thread-29-$mastercoord-bg0] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
3173 [Thread-25-spout0] INFO backtype.storm.util - Halting process: ("Worker died")
3173 [Thread-19-$spoutcoord-spout0] INFO backtype.storm.util - Halting process: ("Worker died")
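For reference, a NoSuchMethodError on a ZooKeeper constructor like the one above typically means two different ZooKeeper versions ended up on the classpath (the classpath later in this thread shows Storm shipping zookeeper-3.3.3, while org.apache.curator expects a newer constructor). A diagnostic sketch, not a confirmed fix, assuming a Maven build:

```shell
# Print the build's resolved dependency tree and keep only zookeeper lines.
# Two different versions appearing here would explain the missing
# four-argument ZooKeeper constructor.
mvn dependency:tree | grep -i zookeeper
```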
Thanks
Alec
On Aug 5, 2014, at 10:26 AM, P. Taylor Goetz <pt...@gmail.com> wrote:
> You are only sleeping for 100 milliseconds before shutting down the local cluster, which is probably not long enough for the topology to come up and start processing messages. Try increasing the sleep time to something like 10 seconds.
>
> You can also reduce startup time with the following JVM flag:
>
> -Djava.net.preferIPv4Stack=true
>
> - Taylor
>
> On Aug 5, 2014, at 1:16 PM, Sa Li <sa...@gmail.com> wrote:
>
>> Sorry, here is the storm topology:
>>
>>> TridentTopology topology = new TridentTopology();
>>> BrokerHosts zk = new ZkHosts("localhost");
>>> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "topictest");
>>> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>> OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>>
>>
>>
>>
>> On Aug 5, 2014, at 9:56 AM, Sa Li <sa...@gmail.com> wrote:
>>
>>> Thank you very much, Marcelo, that indeed worked; I can now run my code without errors. However, another thing keeps bothering me. The following is my code:
>>>
>>> public static class PrintStream implements Filter {
>>>
>>> @SuppressWarnings("rawtypes")
>>> @Override
>>> public void prepare(Map conf, TridentOperationContext context) {
>>> }
>>> @Override
>>> public void cleanup() {
>>> }
>>> @Override
>>> public boolean isKeep(TridentTuple tuple) {
>>> System.out.println(tuple);
>>> return true;
>>> }
>>> }
>>> public static StormTopology buildTopology(LocalDRPC drpc) throws IOException {
>>>
>>> TridentTopology topology = new TridentTopology();
>>> BrokerHosts zk = new ZkHosts("localhost");
>>> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "ingest_test");
>>> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>> OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>>>
>>> topology.newStream("kafka", spout)
>>> .each(new Fields("str"),
>>> new PrintStream()
>>> );
>>>
>>> return topology.build();
>>> }
>>> public static void main(String[] args) throws Exception {
>>>
>>> Config conf = new Config();
>>> conf.setDebug(true);
>>> conf.setMaxSpoutPending(1);
>>> conf.setMaxTaskParallelism(3);
>>> LocalDRPC drpc = new LocalDRPC();
>>> LocalCluster cluster = new LocalCluster();
>>> cluster.submitTopology("kafka", conf, buildTopology(drpc));
>>> Thread.sleep(100);
>>> cluster.shutdown();
>>> }
>>>
>>> What I expect is quite simple: print out the messages I collect from a Kafka producer playback process that is running separately. The topic is listed as:
>>>
>>> root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper localhost:2181
>>> topic: topictest partition: 0 leader: 1 replicas: 1,3,2 isr: 1,3,2
>>> topic: topictest partition: 1 leader: 2 replicas: 2,1,3 isr: 2,1,3
>>> topic: topictest partition: 2 leader: 3 replicas: 3,2,1 isr: 3,2,1
>>> topic: topictest partition: 3 leader: 1 replicas: 1,2,3 isr: 1,2,3
>>> topic: topictest partition: 4 leader: 2 replicas: 2,3,1 isr: 2,3,1
>>>
>>> When I run the code, this is what I see on the screen. There seems to be no error, but no messages are printed out either:
>>>
>>> SLF4J: Class path contains multiple SLF4J bindings.
>>> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>> Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1 -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file= -cp /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.0.1/li
b/guava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/etc/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>> SLF4J: Class path contains multiple SLF4J bindings.
>>> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>> 1113 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>> 1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>> 1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
>>> 1237 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>> 1350 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1417 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>> 1482 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1484 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>> 1540 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>> 1576 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>> 1590 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
>>> 1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>> 1638 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>> 1690 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
>>> 1944 [main] INFO backtype.storm.daemon.nimbus - Received topology submission for kafka with conf {"topology.max.task.parallelism" nil, "topology.acker.executors" nil, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id" "kafka-1-1407257070", "topology.debug" true}
>>> 1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka: kafka-1-1407257070
>>> 2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 3] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
>>> 2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment for topology id kafka-1-1407257070: #backtype.storm.daemon.common.Assignment{:master-code-dir "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070", :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"}, :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3] 1407257070}}
>>> 2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
>>> 2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
>>> 2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
>>> 2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
>>> 2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
>>> 2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
>>> 2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
>>> 2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
>>> 2256 [main] INFO backtype.storm.testing - Shutting down in process zookeeper
>>> 2257 [main] INFO backtype.storm.testing - Done shutting down in process zookeeper
>>> 2258 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
>>> 2259 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>> 2260 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
>>> 2261 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
>>>
>>> Can anyone help me locate the problem? I really need to get past this step to be able to replace .each(printStream()) with other functions.
>>>
>>>
>>> Thanks
>>>
>>> Alec
>>>
>>> On Aug 4, 2014, at 4:24 AM, Marcelo Valle <mv...@redoop.org> wrote:
>>>
>>>> hello,
>>>>
>>>> you can check your application .jar with the command "jar tf" to see whether the class kafka/api/OffsetRequest.class is part of the jar.
>>>> If not, you can try copying kafka_2.9.2-0.8.0.jar (or the version you are using) into the storm lib directory
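(As a sketch of the alternative to copying the jar by hand: kafka/api/OffsetRequest lives in the kafka_2.9.2 artifact, so it also ends up inside the jar-with-dependencies if that dependency is not marked "provided". Assuming the same coordinates as the pom quoted below, the block might look like this:)

```xml
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.9.2</artifactId>
  <version>0.8.0</version>
  <!-- default (compile) scope, so the assembly plugin bundles the
       Kafka classes into the fat jar; "provided" keeps them out -->
  <exclusions>
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
    <exclusion>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```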
>>>>
>>>> Marcelo
>>>>
>>>>
>>>> 2014-07-31 23:33 GMT+02:00 Sa Li <sa...@gmail.com>:
>>>> Hi, all
>>>>
>>>> I am running kafka-spout code on the Storm server; the pom is
>>>>
>>>> <dependency>
>>>> <groupId>org.apache.kafka</groupId>
>>>> <artifactId>kafka_2.9.2</artifactId>
>>>> <version>0.8.0</version>
>>>> <scope>provided</scope>
>>>>
>>>> <exclusions>
>>>> <exclusion>
>>>> <groupId>org.apache.zookeeper</groupId>
>>>> <artifactId>zookeeper</artifactId>
>>>> </exclusion>
>>>> <exclusion>
>>>> <groupId>log4j</groupId>
>>>> <artifactId>log4j</artifactId>
>>>> </exclusion>
>>>> </exclusions>
>>>>
>>>> </dependency>
>>>>
>>>> <!-- Storm-Kafka compiled -->
>>>>
>>>> <dependency>
>>>> <artifactId>storm-kafka</artifactId>
>>>> <groupId>org.apache.storm</groupId>
>>>> <version>0.9.2-incubating</version>
>>>> <scope>compile</scope>
>>>> </dependency>
>>>>
>>>> I can build it with mvn package, but when I run it
>>>> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# storm jar target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>>>
>>>>
>>>> I am getting this error:
>>>>
>>>> 1657 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>> 1682 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id a66e0c61-a951-4c1b-a43f-3fb0d12cb226 at host DO-mq-dev
>>>> 1698 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread Thread[main,5,main] died
>>>> java.lang.NoClassDefFoundError: kafka/api/OffsetRequest
>>>> at storm.artemis.kafka.KafkaConfig.<init>(KafkaConfig.java:26) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.kafka.trident.TridentKafkaConfig.<init>(TridentKafkaConfig.java:13) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.KafkaConsumerTopology.buildTopology(KafkaConsumerTopology.java:115) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> at storm.artemis.KafkaConsumerTopology.main(KafkaConsumerTopology.java:144) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>> Caused by: java.lang.ClassNotFoundException: kafka.api.OffsetRequest
>>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_55]
>>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_55]
>>>> at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_55]
>>>> at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[na:1.7.0_55]
>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_55]
>>>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[na:1.7.0_55]
>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_55]
>>>>
>>>>
>>>>
>>>>
>>>> I tried poking around online but could not find a solution for it. Any ideas?
>>>>
>>>>
>>>> Thanks
>>>>
>>>> Alec
>>>>
>>>>
>>>>
>>>>
>>>
>>
>
Re: kafka-spout running error
Posted by "P. Taylor Goetz" <pt...@gmail.com>.
You are only sleeping for 100 milliseconds before shutting down the local cluster, which is probably not long enough for the topology to come up and start processing messages. Try increasing the sleep time to something like 10 seconds.
You can also reduce startup time with the following JVM flag:
-Djava.net.preferIPv4Stack=true
- Taylor
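(Concretely, the change would be just the sleep duration in the main method quoted below; 10 seconds is an illustrative value, not a requirement:)

```java
cluster.submitTopology("kafka", conf, buildTopology(drpc));
// Give the in-process cluster time to start the topology and begin
// polling Kafka before tearing everything down; 100 ms was too short.
Thread.sleep(10000);
cluster.shutdown();
```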
On Aug 5, 2014, at 1:16 PM, Sa Li <sa...@gmail.com> wrote:
> Sorry, here is the storm topology:
>
>> TridentTopology topology = new TridentTopology();
>> BrokerHosts zk = new ZkHosts("localhost");
>> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "topictest");
>> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>> OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>
>
>
>
> On Aug 5, 2014, at 9:56 AM, Sa Li <sa...@gmail.com> wrote:
>
>> Thank you very much, Marcelo, it indeed worked; now I can run my code without getting an error. However, another thing keeps bothering me. The following is my code:
>>
>> public static class PrintStream implements Filter {
>>
>>     @SuppressWarnings("rawtypes")
>>     @Override
>>     public void prepare(Map conf, TridentOperationContext context) {
>>     }
>>
>>     @Override
>>     public void cleanup() {
>>     }
>>
>>     @Override
>>     public boolean isKeep(TridentTuple tuple) {
>>         System.out.println(tuple);
>>         return true;
>>     }
>> }
>>
>> public static StormTopology buildTopology(LocalDRPC drpc) throws IOException {
>>
>>     TridentTopology topology = new TridentTopology();
>>     BrokerHosts zk = new ZkHosts("localhost");
>>     TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "ingest_test");
>>     spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>     OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>>
>>     topology.newStream("kafka", spout)
>>             .each(new Fields("str"), new PrintStream());
>>
>>     return topology.build();
>> }
>>
>> public static void main(String[] args) throws Exception {
>>
>>     Config conf = new Config();
>>     conf.setDebug(true);
>>     conf.setMaxSpoutPending(1);
>>     conf.setMaxTaskParallelism(3);
>>     LocalDRPC drpc = new LocalDRPC();
>>     LocalCluster cluster = new LocalCluster();
>>     cluster.submitTopology("kafka", conf, buildTopology(drpc));
>>     Thread.sleep(100);
>>     cluster.shutdown();
>> }
>>
>> What I expect is quite simple: print out the messages I collect from a kafka producer playback process, which is running separately. The topic is listed as:
>>
>> root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper localhost:2181
>> topic: topictest partition: 0 leader: 1 replicas: 1,3,2 isr: 1,3,2
>> topic: topictest partition: 1 leader: 2 replicas: 2,1,3 isr: 2,1,3
>> topic: topictest partition: 2 leader: 3 replicas: 3,2,1 isr: 3,2,1
>> topic: topictest partition: 3 leader: 1 replicas: 1,2,3 isr: 1,2,3
>> topic: topictest partition: 4 leader: 2 replicas: 2,3,1 isr: 2,3,1
>>
>> When I run the code, this is what I see on the screen; there seems to be no error, but no messages are printed out either:
>>
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>> Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1 -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file= -cp /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.0.1/lib
/guava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/etc/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>> 1113 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>> 1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>> 1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
>> 1237 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>> 1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>> 1350 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>> 1417 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>> 1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>> 1482 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>> 1484 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>> 1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>> 1540 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>> 1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>> 1576 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>> 1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>> 1590 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>> 1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
>> 1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>> 1638 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>> 1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>> 1690 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>> 1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
>> 1944 [main] INFO backtype.storm.daemon.nimbus - Received topology submission for kafka with conf {"topology.max.task.parallelism" nil, "topology.acker.executors" nil, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id" "kafka-1-1407257070", "topology.debug" true}
>> 1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka: kafka-1-1407257070
>> 2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 3] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
>> 2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment for topology id kafka-1-1407257070: #backtype.storm.daemon.common.Assignment{:master-code-dir "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070", :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"}, :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3] 1407257070}}
>> 2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
>> 2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
>> 2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
>> 2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
>> 2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
>> 2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
>> 2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
>> 2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
>> 2256 [main] INFO backtype.storm.testing - Shutting down in process zookeeper
>> 2257 [main] INFO backtype.storm.testing - Done shutting down in process zookeeper
>> 2258 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
>> 2259 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>> 2260 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
>> 2261 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
>>
>> Can anyone help me locate the problem? I really need to get through this step so that I can replace .each(printStream()) with other functions.
>>
>>
>> Thanks
>>
>> Alec
>>
>> On Aug 4, 2014, at 4:24 AM, Marcelo Valle <mv...@redoop.org> wrote:
>>
>>> hello,
>>>
>>> You can check your application .jar with the command "jar tf" to see whether the class kafka/api/OffsetRequest.class is part of the jar.
>>> If not, you can try copying kafka_2.9.2-0.8.0.jar (or the version you are using) into the Storm lib directory.
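As an illustration of the jar check suggested above (the jar path is the one from the storm jar command later in the thread):

```shell
# List the shaded jar's contents and count matches for the missing class.
# A count of 0 means Kafka was not bundled into the fat jar, so the
# NoClassDefFoundError below is expected.
jar tf target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar \
  | grep -c 'kafka/api/OffsetRequest.class'
```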
>>>
>>> Marcelo
>>>
>>>
>>> 2014-07-31 23:33 GMT+02:00 Sa Li <sa...@gmail.com>:
>>> Hi, all
>>>
>>> I am running kafka-spout code on the Storm server; the pom is:
>>>
>>> <dependency>
>>> <groupId>org.apache.kafka</groupId>
>>> <artifactId>kafka_2.9.2</artifactId>
>>> <version>0.8.0</version>
>>> <scope>provided</scope>
>>>
>>> <exclusions>
>>> <exclusion>
>>> <groupId>org.apache.zookeeper</groupId>
>>> <artifactId>zookeeper</artifactId>
>>> </exclusion>
>>> <exclusion>
>>> <groupId>log4j</groupId>
>>> <artifactId>log4j</artifactId>
>>> </exclusion>
>>> </exclusions>
>>>
>>> </dependency>
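Note that <scope>provided</scope> on the kafka_2.9.2 dependency keeps it out of the jar-with-dependencies, which is consistent with the NoClassDefFoundError further down. A sketch of the alternative to copying the jar into Storm's lib directory, assuming the assembly plugin builds the fat jar, is to let Maven bundle Kafka by using the default compile scope:

```xml
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.9.2</artifactId>
  <version>0.8.0</version>
  <!-- no <scope>provided</scope>: the jar-with-dependencies will include Kafka -->
  <exclusions>
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
    <exclusion>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```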
>>>
>>> <!-- Storm-Kafka compiled -->
>>>
>>> <dependency>
>>> <artifactId>storm-kafka</artifactId>
>>> <groupId>org.apache.storm</groupId>
>>> <version>0.9.2-incubating</version>
>>> <scope>*compile*</scope>
>>> </dependency>
>>>
>>> I can build it with mvn package, but when I run it:
>>> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# storm jar target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>>
>>>
>>> I am getting this error:
>>>
>>> 1657 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1682 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id a66e0c61-a951-4c1b-a43f-3fb0d12cb226 at host DO-mq-dev
>>> 1698 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread Thread[main,5,main] died
>>> java.lang.NoClassDefFoundError: kafka/api/OffsetRequest
>>> at storm.artemis.kafka.KafkaConfig.<init>(KafkaConfig.java:26) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.kafka.trident.TridentKafkaConfig.<init>(TridentKafkaConfig.java:13) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.KafkaConsumerTopology.buildTopology(KafkaConsumerTopology.java:115) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> at storm.artemis.KafkaConsumerTopology.main(KafkaConsumerTopology.java:144) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>> Caused by: java.lang.ClassNotFoundException: kafka.api.OffsetRequest
>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_55]
>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_55]
>>> at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_55]
>>> at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[na:1.7.0_55]
>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_55]
>>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[na:1.7.0_55]
>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_55]
>>>
>>>
>>>
>>>
>>> I tried poking around online but could not find a solution. Any ideas about this?
>>>
>>>
>>> Thanks
>>>
>>> Alec
>>>
>>>
>>>
>>>
>>
>
Re: kafka-spout running error
Posted by Sa Li <sa...@gmail.com>.
Sorry, here is the StormTopology:
> TridentTopology topology = new TridentTopology();
> BrokerHosts zk = new ZkHosts("localhost");
> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "topictest");
> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
> OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
On Aug 5, 2014, at 9:56 AM, Sa Li <sa...@gmail.com> wrote:
> Thank you very much, Marcelo, that indeed worked; now I can run my code without getting the error. However, another thing keeps bothering me. The following is my code:
>
> public static class PrintStream implements Filter {
>
> @SuppressWarnings("rawtypes")
> @Override
> public void prepare(Map conf, TridentOperationContext context) {
> }
> @Override
> public void cleanup() {
> }
> @Override
> public boolean isKeep(TridentTuple tuple) {
> System.out.println(tuple);
> return true;
> }
> }
> public static StormTopology buildTopology(LocalDRPC drpc) throws IOException {
>
> TridentTopology topology = new TridentTopology();
> BrokerHosts zk = new ZkHosts("localhost");
> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "ingest_test");
> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
> OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>
> topology.newStream("kafka", spout)
> .each(new Fields("str"),
> new PrintStream()
> );
>
> return topology.build();
> }
> public static void main(String[] args) throws Exception {
>
> Config conf = new Config();
> conf.setDebug(true);
> conf.setMaxSpoutPending(1);
> conf.setMaxTaskParallelism(3);
> LocalDRPC drpc = new LocalDRPC();
> LocalCluster cluster = new LocalCluster();
> cluster.submitTopology("kafka", conf, buildTopology(drpc));
> Thread.sleep(100);
> cluster.shutdown();
> }
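One thing worth noting in main() above: the local cluster is shut down 100 ms after the topology is submitted, which may not leave the spout time to connect to Kafka and emit anything (the log below shows the whole in-process cluster tearing down about two seconds after startup). A sketch of the same shutdown sequence with a longer window, using the objects already defined above:

```java
// Give the spout time to fetch and print tuples before tearing down
// the in-process cluster; 100 ms is almost certainly too short.
cluster.submitTopology("kafka", conf, buildTopology(drpc));
Thread.sleep(60 * 1000);  // run for a minute instead of 100 ms
cluster.shutdown();
drpc.shutdown();
```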
>
> What I expect is quite simple: print out the messages I collect from a Kafka producer playback process, which is running separately. The topic is listed as:
>
> root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper localhost:2181
> topic: topictest partition: 0 leader: 1 replicas: 1,3,2 isr: 1,3,2
> topic: topictest partition: 1 leader: 2 replicas: 2,1,3 isr: 2,1,3
> topic: topictest partition: 2 leader: 3 replicas: 3,2,1 isr: 3,2,1
> topic: topictest partition: 3 leader: 1 replicas: 1,2,3 isr: 1,2,3
> topic: topictest partition: 4 leader: 2 replicas: 2,3,1 isr: 2,3,1
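Before debugging the topology, it may help to confirm that messages are actually on the topic; with Kafka 0.8 and the same ZooKeeper address as the topic listing above, a quick check could be:

```shell
# Read everything currently on the topic; if this prints nothing,
# the problem is upstream of the Storm spout.
bin/kafka-console-consumer.sh --zookeeper localhost:2181 \
  --topic topictest --from-beginning
```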
>
> When I run the code, this is what I see on the screen; there seems to be no error, but no messages are printed either:
>
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1 -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file= -cp /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.0.1/lib/
guava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/etc/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> 1113 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
> 1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
> 1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
> 1237 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
> 1350 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1417 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
> 1482 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1484 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
> 1540 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
> 1576 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
> 1590 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
> 1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
> 1638 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
> 1690 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
> 1944 [main] INFO backtype.storm.daemon.nimbus - Received topology submission for kafka with conf {"topology.max.task.parallelism" nil, "topology.acker.executors" nil, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id" "kafka-1-1407257070", "topology.debug" true}
> 1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka: kafka-1-1407257070
> 2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 3] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
> 2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment for topology id kafka-1-1407257070: #backtype.storm.daemon.common.Assignment{:master-code-dir "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070", :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"}, :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3] 1407257070}}
> 2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
> 2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
> 2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
> 2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
> 2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
> 2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
> 2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
> 2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
> 2256 [main] INFO backtype.storm.testing - Shutting down in process zookeeper
> 2257 [main] INFO backtype.storm.testing - Done shutting down in process zookeeper
> 2258 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
> 2259 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
> 2260 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
> 2261 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
>
> Can anyone help me locate the problem? I really need to get through this step in order to be able to replace .each(new PrintStream()) with other functions.
>
>
> Thanks
>
> Alec
>
> On Aug 4, 2014, at 4:24 AM, Marcelo Valle <mv...@redoop.org> wrote:
>
>> hello,
>>
>> You can check your .jar application with the command "jar tf" to see if the class kafka/api/OffsetRequest.class is part of the jar.
>> If not, you can try copying kafka-2.9.2-0.8.0.jar (or the version you are using) into the storm_lib directory.
>>
>> Marcelo
>>
>>
>> 2014-07-31 23:33 GMT+02:00 Sa Li <sa...@gmail.com>:
>> Hi, all
>>
>> I am running kafka-spout code on the storm server; the pom is:
>>
>> <dependency>
>> <groupId>org.apache.kafka</groupId>
>> <artifactId>kafka_2.9.2</artifactId>
>> <version>0.8.0</version>
>> <scope>provided</scope>
>>
>> <exclusions>
>> <exclusion>
>> <groupId>org.apache.zookeeper</groupId>
>> <artifactId>zookeeper</artifactId>
>> </exclusion>
>> <exclusion>
>> <groupId>log4j</groupId>
>> <artifactId>log4j</artifactId>
>> </exclusion>
>> </exclusions>
>>
>> </dependency>
>>
>> <!-- Storm-Kafka compiled -->
>>
>> <dependency>
>> <artifactId>storm-kafka</artifactId>
>> <groupId>org.apache.storm</groupId>
>> <version>0.9.2-incubating</version>
>> <scope>compile</scope>
>> </dependency>
>>
>> I can mvn package it, but when I run it
>> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# storm jar target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>
>>
>> I am getting such error
>>
>> 1657 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>> 1682 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id a66e0c61-a951-4c1b-a43f-3fb0d12cb226 at host DO-mq-dev
>> 1698 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread Thread[main,5,main] died
>> java.lang.NoClassDefFoundError: kafka/api/OffsetRequest
>> at storm.artemis.kafka.KafkaConfig.<init>(KafkaConfig.java:26) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.kafka.trident.TridentKafkaConfig.<init>(TridentKafkaConfig.java:13) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.KafkaConsumerTopology.buildTopology(KafkaConsumerTopology.java:115) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at storm.artemis.KafkaConsumerTopology.main(KafkaConsumerTopology.java:144) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> Caused by: java.lang.ClassNotFoundException: kafka.api.OffsetRequest
>> at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_55]
>> at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_55]
>> at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_55]
>> at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[na:1.7.0_55]
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_55]
>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[na:1.7.0_55]
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_55]
>>
>>
>>
>>
>> I try to poke around online, could not find a solution for it, any idea about that?
>>
>>
>> Thanks
>>
>> Alec
>>
>>
>>
>>
>
Re: kafka-spout running error
Posted by "P. Taylor Goetz" <pt...@gmail.com>.
You have two different versions of zookeeper on the classpath (or in your topology jar).
You need to find out where the conflicting zookeeper dependency is sneaking in and exclude it.
If you are using Maven, 'mvn dependency:tree' and exclusions will help.
-Taylor
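The two checks suggested in this thread can be sketched in shell. The jar path is the one from Alec's build (substitute your own artifact), and the here-doc stands in for a real `jar tf` listing so the pipeline can be run anywhere:

```shell
# Sketch of the two diagnostic checks from this thread.
JAR=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar

# 1) Is the missing class actually packaged? (real command: jar tf "$JAR")
#    A here-doc simulates the listing so the grep logic is runnable as-is.
listing=$(cat <<'EOF'
storm/artemis/KafkaConsumerTopology.class
kafka/api/OffsetRequest.class
org/apache/zookeeper/ZooKeeper.class
EOF
)
if printf '%s\n' "$listing" | grep -q '^kafka/api/OffsetRequest.class$'; then
  echo "OffsetRequest: packaged"
else
  echo "OffsetRequest: missing - bundle the kafka jar or drop it in storm's lib"
fi
# → OffsetRequest: packaged

# 2) Where does the conflicting zookeeper come from? (run in the project dir)
# mvn dependency:tree -Dincludes=org.apache.zookeeper
```

Step 1 addresses the earlier NoClassDefFoundError; step 2 is the usual way to track down which dependency drags in the mismatched zookeeper jar behind the NoSuchMethodError.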
> On Aug 6, 2014, at 6:14 PM, Sa Li <sa...@gmail.com> wrote:
>
> Thanks, Taylor, that makes sense. I checked my Kafka config, where host.name=10.100.70.128, and correspondingly changed the spout config to:
> BrokerHosts zk = new ZkHosts("10.100.70.128");
> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "topictest");
>
> It used to be localhost (actually localhost = 10.100.70.128), so the spout listens to 10.100.70.128 and collects from topictest, but I still get the same error:
>
> 3237 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.util - Async loop died!
> java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
>
> thanks
>
> Alec
>
>
>> On Wed, Aug 6, 2014 at 1:27 PM, P. Taylor Goetz <pt...@gmail.com> wrote:
>> You are running in local mode, so Storm will start an in-process zookeeper for its own use (usually on port 2000). In distributed mode, Storm will connect to the zookeeper quorum specified in your storm.yaml.
>>
>> In local mode, you would only need the external zookeeper for kafka and the kafka spout. When configuring the kafka spout, point it to the zookeeper used by kafka.
>>
>> - Taylor
>>
>>
>>> On Aug 6, 2014, at 3:34 PM, Sa Li <sa...@gmail.com> wrote:
>>>
>>> Hi, Kushan
>>>
>>> You are completely right; I noticed this after you mentioned it. Apparently I am able to consume the messages with kafka-console-consumer.sh, which listens on 2181, but Storm goes to 2000 instead.
>>>
>>> 1319 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper at port 2000 and dir /tmp/f41ad971-9f6b-433f-9dc9-9797afcc2e46
>>> 1425 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>>>
>>> I spent the whole morning walking through my configuration; this is the zoo.cfg:
>>>
>>> # The number of milliseconds of each tick
>>> tickTime=2000
>>> # The number of ticks that the initial
>>> # synchronization phase can take
>>> initLimit=5
>>> # The number of ticks that can pass between
>>> # sending a request and getting an acknowledgement
>>> syncLimit=2
>>> # the directory where the snapshot is stored.
>>> dataDir=/var/lib/zookeeper
>>> # Place the dataLogDir to a separate physical disc for better performance
>>> # dataLogDir=/disk2/zookeeper
>>> # the port at which the clients will connect
>>> clientPort=2181
>>> # specify all zookeeper servers
>>> # The first port is used by followers to connect to the leader
>>> # The second one is used for leader election
>>> #server.1=zookeeper1:2888:3888
>>> #server.2=zookeeper2:2888:3888
>>> #server.3=zookeeper3:2888:3888
>>>
>>> # To avoid seeks ZooKeeper allocates space in the transaction log file in
>>> # blocks of preAllocSize kilobytes. The default block size is 64M. One reason
>>> # for changing the size of the blocks is to reduce the block size if snapshots
>>> # are taken more often. (Also, see snapCount).
>>> #preAllocSize=65536
>>> # Clients can submit requests faster than ZooKeeper can process them,
>>> # especially if there are a lot of clients. To prevent ZooKeeper from running
>>> # out of memory due to queued requests, ZooKeeper will throttle clients so that
>>> # there is no more than globalOutstandingLimit outstanding requests in the
>>> # system. The default limit is 1,000.ZooKeeper logs transactions to a
>>> # transaction log. After snapCount transactions are written to a log file a
>>> # snapshot is started and a new transaction log file is started. The default
>>> # snapCount is 10,000.
>>> #snapCount=1000
>>>
>>> # If this option is defined, requests will be logged to a trace file named
>>> # traceFile.year.month.day.
>>> #traceFile=
>>> # Leader accepts client connections. Default value is "yes". The leader machine
>>> # coordinates updates. For higher update throughput at the slight expense of
>>> # read throughput the leader can be configured to not accept clients and focus
>>> # on coordination.
>>> leaderServes=yes
>>> # Enable regular purging of old data and transaction logs every 24 hours
>>> autopurge.purgeInterval=24
>>> autopurge.snapRetainCount=5
>>>
>>> The only thing I thought to change was to make a "multi-server" setup by uncommenting server.1, server.2, and server.3, but that didn't help. And this is the storm.yaml sitting in ~/.storm:
>>>
>>> storm.zookeeper.servers:
>>> - "10.100.70.128"
>>> # - "server2"
>>> storm.zookeeper.port: 2181
>>> nimbus.host: "10.100.70.128"
>>> nimbus.childopts: "-Xmx1024m"
>>> storm.local.dir: "/app/storm"
>>> java.library.path: "/usr/lib/jvm/java-7-openjdk-amd64"
>>> supervisor.slots.ports:
>>> - 6700
>>> - 6701
>>> - 6702
>>> - 6703
>>> # ##### These may optionally be filled in:
>>> #
>>> ## List of custom serializations
>>> # topology.kryo.register:
>>> # - org.mycompany.MyType
>>> # - org.mycompany.MyType2: org.mycompany.MyType2Serializer
>>> #
>>> ## List of custom kryo decorators
>>> # topology.kryo.decorators:
>>> # - org.mycompany.MyDecorator
>>> #
>>> ## Locations of the drpc servers
>>> drpc.servers:
>>> - "10.100.70.128"
>>> # - "server2"
>>> drpc.port: 3772
>>> drpc.worker.threads: 64
>>> drpc.queue.size: 128
>>> drpc.invocations.port: 3773
>>> drpc.request.timeout.secs: 600
>>> drpc.childopts: "-Xmx768m"
>>> ## Metrics Consumers
>>> # topology.metrics.consumer.register:
>>> # - class: "backtype.storm.metrics.LoggingMetricsConsumer"
>>> # parallelism.hint: 1
>>> # - class: "org.mycompany.MyMetricsConsumer"
>>> # parallelism.hint: 1
>>> # argument:
>>> # - endpoint: "metrics-collector.mycompany.org"
>>>
>>> I really couldn't figure out the trick to configuring the ZooKeeper and Storm cluster, or why ZooKeeper listens on port 2000, which is really weird.
>>>
>>> thanks
>>>
>>> Alec
>>>
>>>
>>>
>>>> On Wed, Aug 6, 2014 at 6:48 AM, Kushan Maskey <ku...@mmillerassociates.com> wrote:
>>>> I see that your zookeeper is listening on port 2000. Is that how you have configured the zookeeper?
>>>>
>>>> --
>>>> Kushan Maskey
>>>> 817.403.7500
>>>>
>>>>
>>>>> On Tue, Aug 5, 2014 at 11:56 AM, Sa Li <sa...@gmail.com> wrote:
>>>>> Thank you very much, Marcelo, it indeed worked; now I can run my code without getting the error. However, another thing keeps bothering me. The following is my code:
>>>>>
>>>>> public static class PrintStream implements Filter {
>>>>>
>>>>>     @SuppressWarnings("rawtypes")
>>>>>     @Override
>>>>>     public void prepare(Map conf, TridentOperationContext context) {
>>>>>     }
>>>>>
>>>>>     @Override
>>>>>     public void cleanup() {
>>>>>     }
>>>>>
>>>>>     @Override
>>>>>     public boolean isKeep(TridentTuple tuple) {
>>>>>         System.out.println(tuple);
>>>>>         return true;
>>>>>     }
>>>>> }
>>>>>
>>>>> public static StormTopology buildTopology(LocalDRPC drpc) throws IOException {
>>>>>
>>>>>     TridentTopology topology = new TridentTopology();
>>>>>     BrokerHosts zk = new ZkHosts("localhost");
>>>>>     TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "ingest_test");
>>>>>     spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>>>>     OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>>>>>
>>>>>     topology.newStream("kafka", spout)
>>>>>             .each(new Fields("str"), new PrintStream());
>>>>>
>>>>>     return topology.build();
>>>>> }
>>>>>
>>>>> public static void main(String[] args) throws Exception {
>>>>>
>>>>>     Config conf = new Config();
>>>>>     conf.setDebug(true);
>>>>>     conf.setMaxSpoutPending(1);
>>>>>     conf.setMaxTaskParallelism(3);
>>>>>     LocalDRPC drpc = new LocalDRPC();
>>>>>     LocalCluster cluster = new LocalCluster();
>>>>>     cluster.submitTopology("kafka", conf, buildTopology(drpc));
>>>>>     Thread.sleep(100);
>>>>>     cluster.shutdown();
>>>>> }
>>>>>
>>>>> What I expect is quite simple: print out the messages I collect from a Kafka producer playback process that is running separately. The topic is listed as:
>>>>>
>>>>> root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper localhost:2181
>>>>> topic: topictest partition: 0 leader: 1 replicas: 1,3,2 isr: 1,3,2
>>>>> topic: topictest partition: 1 leader: 2 replicas: 2,1,3 isr: 2,1,3
>>>>> topic: topictest partition: 2 leader: 3 replicas: 3,2,1 isr: 3,2,1
>>>>> topic: topictest partition: 3 leader: 1 replicas: 1,2,3 isr: 1,2,3
>>>>> topic: topictest partition: 4 leader: 2 replicas: 2,3,1 isr: 2,3,1
>>>>>
>>>>> When I run the code, this is what I see on the screen; there seems to be no error, but no messages are printed out either:
>>>>>
>>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>>> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>>>> Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1 -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file= -cp /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.0.1/
lib/guava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/etc/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>>> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>>>>> 1113 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>>>> 1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>> 1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
>>>>> 1237 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>> 1350 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1417 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>> 1482 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1484 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>> 1540 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>> 1576 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>> 1590 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
>>>>> 1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>>>> 1638 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
>>>>> 1690 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>> 1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
>>>>> 1944 [main] INFO backtype.storm.daemon.nimbus - Received topology submission for kafka with conf {"topology.max.task.parallelism" nil, "topology.acker.executors" nil, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id" "kafka-1-1407257070", "topology.debug" true}
>>>>> 1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka: kafka-1-1407257070
>>>>> 2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 3] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
>>>>> 2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment for topology id kafka-1-1407257070: #backtype.storm.daemon.common.Assignment{:master-code-dir "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070", :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"}, :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3] 1407257070}}
>>>>> 2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
>>>>> 2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
>>>>> 2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
>>>>> 2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
>>>>> 2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
>>>>> 2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
>>>>> 2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
>>>>> 2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
>>>>> 2256 [main] INFO backtype.storm.testing - Shutting down in process zookeeper
>>>>> 2257 [main] INFO backtype.storm.testing - Done shutting down in process zookeeper
>>>>> 2258 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
>>>>> 2259 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>>>> 2260 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
>>>>> 2261 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
>>>>>
>>>>> Can anyone help me locate the problem? I really need to get this step working so that I can replace .each(printStream()) with other functions.
>>>>>
>>>>>
>>>>> Thanks
>>>>>
>>>>> Alec
>>>>>
>>>>>> On Aug 4, 2014, at 4:24 AM, Marcelo Valle <mv...@redoop.org> wrote:
>>>>>>
>>>>>> hello,
>>>>>>
>>>>>> you can check your .jar application with command " jar tf " to see if class kafka/api/OffsetRequest.class is part of the jar.
>>>>>> If not you can try to copy kafka-2.9.2-0.8.0.jar (or version you are using) in storm_lib directory
>>>>>>
>>>>>> Marcelo
>>>>>>
>>>>>>
>>>>>> 2014-07-31 23:33 GMT+02:00 Sa Li <sa...@gmail.com>:
>>>>>>> Hi, all
>>>>>>>
>>>>>>> I am running a kafka-spout code in storm-server, the pom is
>>>>>>>
>>>>>>> <dependency>
>>>>>>> <groupId>org.apache.kafka</groupId>
>>>>>>> <artifactId>kafka_2.9.2</artifactId>
>>>>>>> <version>0.8.0</version>
>>>>>>> <scope>provided</scope>
>>>>>>>
>>>>>>> <exclusions>
>>>>>>> <exclusion>
>>>>>>> <groupId>org.apache.zookeeper</groupId>
>>>>>>> <artifactId>zookeeper</artifactId>
>>>>>>> </exclusion>
>>>>>>> <exclusion>
>>>>>>> <groupId>log4j</groupId>
>>>>>>> <artifactId>log4j</artifactId>
>>>>>>> </exclusion>
>>>>>>> </exclusions>
>>>>>>>
>>>>>>> </dependency>
>>>>>>>
>>>>>>> <!-- Storm-Kafka compiled -->
>>>>>>>
>>>>>>> <dependency>
>>>>>>> <artifactId>storm-kafka</artifactId>
>>>>>>> <groupId>org.apache.storm</groupId>
>>>>>>> <version>0.9.2-incubating</version>
>>>>>>> <scope>compile</scope>
>>>>>>> </dependency>
>>>>>>>
>>>>>>> I can mvn package it, but when I run it
>>>>>>> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# storm jar target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>>>>>>
>>>>>>>
>>>>>>> I am getting such error
>>>>>>>
>>>>>>> 1657 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>>>>>> 1682 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id a66e0c61-a951-4c1b-a43f-3fb0d12cb226 at host DO-mq-dev
>>>>>>> 1698 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread Thread[main,5,main] died
>>>>>>> java.lang.NoClassDefFoundError: kafka/api/OffsetRequest
>>>>>>> at storm.artemis.kafka.KafkaConfig.<init>(KafkaConfig.java:26) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>> at storm.artemis.kafka.trident.TridentKafkaConfig.<init>(TridentKafkaConfig.java:13) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>> at storm.artemis.KafkaConsumerTopology.buildTopology(KafkaConsumerTopology.java:115) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>> at storm.artemis.KafkaConsumerTopology.main(KafkaConsumerTopology.java:144) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>>>>>>> Caused by: java.lang.ClassNotFoundException: kafka.api.OffsetRequest
>>>>>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_55]
>>>>>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_55]
>>>>>>> at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_55]
>>>>>>> at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[na:1.7.0_55]
>>>>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_55]
>>>>>>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[na:1.7.0_55]
>>>>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_55]
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> I try to poke around online, could not find a solution for it, any idea about that?
>>>>>>>
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>> Alec
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
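Marcelo's "jar tf" check, quoted above, answers the question from outside the JVM; the same NoClassDefFoundError can also be reproduced from inside it with Class.forName. A self-contained sketch (ClassCheck is an illustrative name, not from the thread; kafka.api.OffsetRequest is the class named in the stack trace):

```java
// Probe the runtime classpath the same way the failing topology does:
// a NoClassDefFoundError / ClassNotFoundException just means the class is
// not visible to the classloader. java.lang.String is always present;
// kafka.api.OffsetRequest is only present if the kafka jar is bundled in
// the fat jar or sits in storm's lib directory.
public class ClassCheck {
    public static void main(String[] args) {
        String[] names = { "java.lang.String", "kafka.api.OffsetRequest" };
        for (String name : names) {
            try {
                Class.forName(name);
                System.out.println(name + " FOUND");
            } catch (ClassNotFoundException e) {
                System.out.println(name + " MISSING");
            }
        }
    }
}
```

Run it with the same -cp that "storm jar" uses; if OffsetRequest prints MISSING there, it will also be missing at topology start-up.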
Re: kafka-spout running error
Posted by Sa Li <sa...@gmail.com>.
Thanks, Taylor, that makes sense. I checked my Kafka config, which has
host.name=10.100.70.128,
and changed the spout config accordingly:
BrokerHosts zk = new ZkHosts("10.100.70.128");
TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "topictest");
It used to be localhost; since localhost resolves to 10.100.70.128, the spout
should listen on 10.100.70.128 and consume topictest, but I still get the same error:
3237 [Thread-19-$spoutcoord-spout0] ERROR backtype.storm.util - Async loop died!
java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
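A NoSuchMethodError (rather than a ClassNotFoundException) on that ZooKeeper constructor usually means an older ZooKeeper jar is shadowing the one Curator was compiled against: the four-argument constructor with the boolean canBeReadOnly flag only exists in ZooKeeper 3.4+, while the storm-0.9.0.1 classpath shown later in this thread carries zookeeper-3.3.3.jar. A hedged shell sketch for checking both copies (paths are taken from this thread; adjust to your install):

```shell
# Which ZooKeeper does storm itself ship? (a 3.3.x jar lacks the 4-arg ctor)
ls /etc/storm-0.9.0.1/lib | grep -i zookeeper

# Does the fat jar bundle a second copy of ZooKeeper?
jar tf target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar \
  | grep 'org/apache/zookeeper/ZooKeeper.class'
```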
thanks
Alec
On Wed, Aug 6, 2014 at 1:27 PM, P. Taylor Goetz <pt...@gmail.com> wrote:
> You are running in local mode. So storm will start an in-process zookeeper
> for it’s own use (usually on port 2000). In distributed mode, Storm will
> connect to the zookeeper quorum specified in your storm.yaml.
>
> In local mode, you would only need the external zookeeper for kafka and
> the kafka spout. When configuring the kafka spout, point it to the
> zookeeper used by kafka.
>
> - Taylor
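Concretely, "point it to the zookeeper used by kafka" would look like this in the topology code already posted in this thread (a sketch, not a tested fix; 2181 is an assumption taken from the zoo.cfg clientPort below, and ZkHosts accepts a host:port connect string):

```java
// Point the spout at Kafka's ZooKeeper (clientPort 2181), not at the
// in-process ZooKeeper that LocalCluster starts on port 2000.
BrokerHosts zk = new ZkHosts("10.100.70.128:2181");
TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "topictest");
spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
```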
>
>
> On Aug 6, 2014, at 3:34 PM, Sa Li <sa...@gmail.com> wrote:
>
> Hi, Kushan
>
> You are completely right; I noticed this after you mentioned it. Apparently
> I am able to consume the messages with kafka-console-consumer.sh, which
> listens on port 2181, but Storm goes to 2000 instead.
>
> 1319 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper
> at port 2000 and dir /tmp/f41ad971-9f6b-433f-9dc9-9797afcc2e46
> 1425 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf
> {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>
> I spent the whole morning walking through my configuration; this is the zoo.cfg:
>
> # The number of milliseconds of each tick
> tickTime=2000
> # The number of ticks that the initial
> # synchronization phase can take
> initLimit=5
> # The number of ticks that can pass between
> # sending a request and getting an acknowledgement
> syncLimit=2
> # the directory where the snapshot is stored.
> dataDir=/var/lib/zookeeper
> # Place the dataLogDir to a separate physical disc for better performance
> # dataLogDir=/disk2/zookeeper
> # the port at which the clients will connect
> clientPort=2181
> # specify all zookeeper servers
> # The fist port is used by followers to connect to the leader
> # The second one is used for leader election
> #server.1=zookeeper1:2888:3888
> #server.2=zookeeper2:2888:3888
> #server.3=zookeeper3:2888:3888
>
> # To avoid seeks ZooKeeper allocates space in the transaction log file in
> # blocks of preAllocSize kilobytes. The default block size is 64M. One
> reason
> # for changing the size of the blocks is to reduce the block size if
> snapshots
> # are taken more often. (Also, see snapCount).
> #preAllocSize=65536
> # Clients can submit requests faster than ZooKeeper can process them,
> # especially if there are a lot of clients. To prevent ZooKeeper from
> running
> # out of memory due to queued requests, ZooKeeper will throttle clients so
> that
> # there is no more than globalOutstandingLimit outstanding requests in the
> # system. The default limit is 1,000.ZooKeeper logs transactions to a
> # transaction log. After snapCount transactions are written to a log file a
> # snapshot is started and a new transaction log file is started. The
> default
> # snapCount is 10,000.
> #snapCount=1000
>
> # If this option is defined, requests will be logged to a trace file named
> named
> # traceFile.year.month.day.
> #traceFile=
> # Leader accepts client connections. Default value is "yes". The leader
> machine
> # coordinates updates. For higher update throughput at the slight expense
> of
> # read throughput the leader can be configured to not accept clients and
> focus
> # on coordination.
> leaderServes=yes
> # Enable regular purging of old data and transaction logs every 24 hours
> autopurge.purgeInterval=24
> autopurge.snapRetainCount=5
>
> The only thing I thought to change was to make a "multi-server" setup by
> uncommenting server.1, server.2, server.3, but that didn't help. And this is
> the storm.yaml sitting in ~/.storm:
>
> storm.zookeeper.servers:
> - "10.100.70.128"
> # - "server2"
> storm.zookeeper.port: 2181
> nimbus.host: "10.100.70.128"
> nimbus.childopts: "-Xmx1024m"
> storm.local.dir: "/app/storm"
> java.library.path: "/usr/lib/jvm/java-7-openjdk-amd64"
> supervisor.slots.ports:
> - 6700
> - 6701
> - 6702
> - 6703
> # ##### These may optionally be filled in:
> #
> ## List of custom serializations
> # topology.kryo.register:
> # - org.mycompany.MyType
> # - org.mycompany.MyType2: org.mycompany.MyType2Serializer
> #
> ## List of custom kryo decorators
> # topology.kryo.decorators:
> # - org.mycompany.MyDecorator
> #
> ## Locations of the drpc servers
> drpc.servers:
> - "10.100.70.128"
> # - "server2"
> drpc.port: 3772
> drpc.worker.threads: 64
> drpc.queue.size: 128
> drpc.invocations.port: 3773
> drpc.request.timeout.secs: 600
> drpc.childopts: "-Xmx768m"
> ## Metrics Consumers
> # topology.metrics.consumer.register:
> # - class: "backtype.storm.metrics.LoggingMetricsConsumer"
> # parallelism.hint: 1
> # - class: "org.mycompany.MyMetricsConsumer"
> # parallelism.hint: 1
> # argument:
> # - endpoint: "metrics-collector.mycompany.org"
>
> I really couldn't figure out the trick to configuring the ZooKeeper and
> Storm cluster, or why ZooKeeper listens on port 2000, which seems really weird.
>
> thanks
>
> Alec
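One note on the "weird" port 2000: LocalCluster always starts its own in-process ZooKeeper (the dev.zookeeper.path entry in the log above), so storm.yaml's storm.zookeeper.port is ignored in local mode. If the local cluster should reuse an external ZooKeeper instead, Storm 0.9.x exposes a two-argument LocalCluster constructor; a hedged sketch, untested against this setup:

```java
// Reuse an already-running ZooKeeper (e.g. Kafka's at 2181) instead of
// letting LocalCluster spin up its own on port 2000.
LocalCluster cluster = new LocalCluster("10.100.70.128", 2181L);
cluster.submitTopology("kafka", conf, buildTopology(drpc));
```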
>
>
>
> On Wed, Aug 6, 2014 at 6:48 AM, Kushan Maskey <
> kushan.maskey@mmillerassociates.com> wrote:
>
>> I see that your zookeeper is listening on port 2000. Is that how you have
>> configured the zookeeper?
>>
>> --
>> Kushan Maskey
>> 817.403.7500
>>
>>
>> On Tue, Aug 5, 2014 at 11:56 AM, Sa Li <sa...@gmail.com> wrote:
>>
>>> Thank you very much, Marcelo, that indeed worked; now I can run my code
>>> without errors. However, another thing keeps bothering me. Following is
>>> my code:
>>>
>>> public static class PrintStream implements Filter {
>>>
>>> @SuppressWarnings("rawtypes")
>>> @Override
>>> public void prepare(Map conf, TridentOperationContext context) {
>>> }
>>> @Override
>>> public void cleanup() {
>>> }
>>> @Override
>>> public boolean isKeep(TridentTuple tuple) {
>>> System.out.println(tuple);
>>> return true;
>>> }
>>> }
>>> public static StormTopology buildTopology(LocalDRPC drpc) throws IOException
>>> {
>>>
>>> TridentTopology topology = new TridentTopology();
>>> BrokerHosts zk = new ZkHosts("localhost");
>>> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk,
>>> "ingest_test");
>>> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>> OpaqueTridentKafkaSpout spout = new
>>> OpaqueTridentKafkaSpout(spoutConf);
>>>
>>> topology.newStream("kafka", spout)
>>> .each(new Fields("str"),
>>> new PrintStream()
>>> );
>>>
>>> return topology.build();
>>> }
>>> public static void main(String[] args) throws Exception {
>>>
>>> Config conf = new Config();
>>> conf.setDebug(true);
>>> conf.setMaxSpoutPending(1);
>>> conf.setMaxTaskParallelism(3);
>>> LocalDRPC drpc = new LocalDRPC();
>>> LocalCluster cluster = new LocalCluster();
>>> cluster.submitTopology("kafka", conf, buildTopology(drpc));
>>>
>>> Thread.sleep(100);
>>> cluster.shutdown();
>>> }
>>>
>>> What I expect is quite simple: print out the messages I collect from a
>>> kafka producer playback process that runs separately. The topic is
>>> listed as:
>>>
>>> root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper
>>> localhost:2181
>>> topic: topictest partition: 0 leader: 1 replicas: 1,3,2
>>> isr: 1,3,2
>>> topic: topictest partition: 1 leader: 2 replicas: 2,1,3
>>> isr: 2,1,3
>>> topic: topictest partition: 2 leader: 3 replicas: 3,2,1
>>> isr: 3,2,1
>>> topic: topictest partition: 3 leader: 1 replicas: 1,2,3
>>> isr: 1,2,3
>>> topic: topictest partition: 4 leader: 2 replicas: 2,3,1
>>> isr: 2,3,1
>>>
>>> When I run the code, this is what I see on the screen; there seems to be
>>> no error, but no messages are printed out either:
>>>
>>> SLF4J: Class path contains multiple SLF4J bindings.
>>> SLF4J: Found binding in
>>> [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in
>>> [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> explanation.
>>> Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1
>>> -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file=
>>> -cp
>>> /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.0.1/lib/guava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/
etc/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin
>>> -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar
>>> storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>> SLF4J: Class path contains multiple SLF4J bindings.
>>> SLF4J: Found binding in
>>> [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in
>>> [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> explanation.
>>> 1113 [main] INFO backtype.storm.zookeeper - Starting inprocess
>>> zookeeper at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>> 1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with
>>> conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>>> "topology.tick.tuple.freq.secs" nil,
>>> "topology.builtin.metrics.bucket.size.secs" 60,
>>> "topology.fall.back.on.java.serialization" true,
>>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>>> "topology.skip.missing.kryo.registrations" true,
>>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>>> "topology.trident.batch.emit.interval.millis" 50,
>>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>>> "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9",
>>> "storm.messaging.netty.buffer_size" 5242880,
>>> "supervisor.worker.start.timeout.secs" 120,
>>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>>> "/transactional", "topology.acker.executors" nil,
>>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>>> "supervisor.heartbeat.frequency.secs" 5,
>>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>>> "topology.spout.wait.strategy"
>>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>>> nil, "storm.zookeeper.retry.interval" 1000, "
>>> topology.sleep.spout.wait.strategy.time.ms" 1,
>>> "nimbus.topology.validator"
>>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>>> [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs"
>>> 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs"
>>> 30, "task.refresh.poll.secs" 10, "topology.workers" 1,
>>> "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627,
>>> "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1,
>>> "topology.tuple.serializer"
>>> "backtype.storm.serialization.types.ListDelegateSerializer",
>>> "topology.disruptor.wait.strategy"
>>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>>> 5, "storm.thrift.transport"
>>> "backtype.storm.security.auth.SimpleTransportPlugin",
>>> "topology.state.synchronization.timeout.secs" 60,
>>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms"
>>> 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false,
>>> "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode"
>>> "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>> 1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
>>> 1237 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>>> update: :connected:none
>>> 1350 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1417 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>>> update: :connected:none
>>> 1482 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1484 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>>> update: :connected:none
>>> 1540 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor
>>> with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>>> "topology.tick.tuple.freq.secs" nil,
>>> "topology.builtin.metrics.bucket.size.secs" 60,
>>> "topology.fall.back.on.java.serialization" true,
>>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>>> "topology.skip.missing.kryo.registrations" true,
>>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>>> "topology.trident.batch.emit.interval.millis" 50,
>>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>>> "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388",
>>> "storm.messaging.netty.buffer_size" 5242880,
>>> "supervisor.worker.start.timeout.secs" 120,
>>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>>> "/transactional", "topology.acker.executors" nil,
>>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>>> "supervisor.heartbeat.frequency.secs" 5,
>>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>>> "topology.spout.wait.strategy"
>>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>>> nil, "storm.zookeeper.retry.interval" 1000, "
>>> topology.sleep.spout.wait.strategy.time.ms" 1,
>>> "nimbus.topology.validator"
>>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>>> (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120,
>>> "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30,
>>> "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts"
>>> "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
>>> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
>>> "backtype.storm.serialization.types.ListDelegateSerializer",
>>> "topology.disruptor.wait.strategy"
>>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>>> 5, "storm.thrift.transport"
>>> "backtype.storm.security.auth.SimpleTransportPlugin",
>>> "topology.state.synchronization.timeout.secs" 60,
>>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms"
>>> 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false,
>>> "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode"
>>> "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>> 1576 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>>> update: :connected:none
>>> 1590 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor
>>> with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
>>> 1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor
>>> with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>>> "topology.tick.tuple.freq.secs" nil,
>>> "topology.builtin.metrics.bucket.size.secs" 60,
>>> "topology.fall.back.on.java.serialization" true,
>>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>>> "topology.skip.missing.kryo.registrations" true,
>>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>>> "topology.trident.batch.emit.interval.millis" 50,
>>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>>> "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912",
>>> "storm.messaging.netty.buffer_size" 5242880,
>>> "supervisor.worker.start.timeout.secs" 120,
>>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>>> "/transactional", "topology.acker.executors" nil,
>>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>>> "supervisor.heartbeat.frequency.secs" 5,
>>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>>> "topology.spout.wait.strategy"
>>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>>> nil, "storm.zookeeper.retry.interval" 1000, "
>>> topology.sleep.spout.wait.strategy.time.ms" 1,
>>> "nimbus.topology.validator"
>>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>>> (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120,
>>> "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30,
>>> "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts"
>>> "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
>>> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
>>> "backtype.storm.serialization.types.ListDelegateSerializer",
>>> "topology.disruptor.wait.strategy"
>>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>>> 5, "storm.thrift.transport"
>>> "backtype.storm.security.auth.SimpleTransportPlugin",
>>> "topology.state.synchronization.timeout.secs" 60,
>>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms"
>>> 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false,
>>> "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode"
>>> "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>> 1638 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>>> update: :connected:none
>>> 1690 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor
>>> with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
>>> 1944 [main] INFO backtype.storm.daemon.nimbus - Received topology
>>> submission for kafka with conf {"topology.max.task.parallelism" nil,
>>> "topology.acker.executors" nil, "topology.kryo.register"
>>> {"storm.trident.topology.TransactionAttempt" nil},
>>> "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id"
>>> "kafka-1-1407257070", "topology.debug" true}
>>> 1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka:
>>> kafka-1-1407257070
>>> 2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available
>>> slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1]
>>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 2]
>>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 3]
>>> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4]
>>> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5]
>>> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
>>> 2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment
>>> for topology id kafka-1-1407257070:
>>> #backtype.storm.daemon.common.Assignment{:master-code-dir
>>> "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070",
>>> :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"},
>>> :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5
>>> 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4]
>>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2]
>>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1]
>>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1
>>> 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3]
>>> 1407257070}}
>>> 2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
>>> 2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
>>> 2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down
>>> supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
>>> 2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
>>> 2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
>>> 2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down
>>> supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
>>> 2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
>>> 2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
>>> 2256 [main] INFO backtype.storm.testing - Shutting down in process
>>> zookeeper
>>> 2257 [main] INFO backtype.storm.testing - Done shutting down in process
>>> zookeeper
>>> 2258 [main] INFO backtype.storm.testing - Deleting temporary path
>>> /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
>>> 2259 [main] INFO backtype.storm.testing - Deleting temporary path
>>> /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>> 2260 [main] INFO backtype.storm.testing - Deleting temporary path
>>> /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
>>> 2261 [main] INFO backtype.storm.testing - Deleting temporary path
>>> /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
>>>
>>> Can anyone help me locate the problem? I really need to get past this
>>> step before I can replace .each(new PrintStream()) with other
>>> functions.
>>>
>>>
>>> Thanks
>>>
>>> Alec
>>>
Re: kafka-spout running error
Posted by "P. Taylor Goetz" <pt...@gmail.com>.
You are running in local mode, so Storm will start an in-process zookeeper for its own use (usually on port 2000). In distributed mode, Storm connects to the zookeeper quorum specified in your storm.yaml.
In local mode you only need the external zookeeper for Kafka and the kafka spout. When configuring the kafka spout, point it at the zookeeper used by Kafka.
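Following the buildTopology() code posted earlier in the thread, a minimal sketch of what this looks like in practice. The connect string "localhost:2181" and the topic name are assumptions for illustration; use the zookeeper host:port your Kafka brokers actually register with:

```java
// Point the spout at Kafka's zookeeper (assumed here on localhost:2181),
// not at Storm's in-process zookeeper.
BrokerHosts kafkaZk = new ZkHosts("localhost:2181");
TridentKafkaConfig spoutConf = new TridentKafkaConfig(kafkaZk, "ingest_test");
spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
// The LocalCluster will still start its own zookeeper on port 2000;
// that is expected, and unrelated to the spout's connection above.
```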
- Taylor
On Aug 6, 2014, at 3:34 PM, Sa Li <sa...@gmail.com> wrote:
> Hi, Kushan
>
> You are completely right; I noticed this after you mentioned it. Apparently I am able to consume the messages with kafka-console-consumer.sh, which listens on 2181, but storm goes to 2000 instead.
>
> 1319 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper at port 2000 and dir /tmp/f41ad971-9f6b-433f-9dc9-9797afcc2e46
> 1425 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>
> I spent the whole morning walking through my configuration; this is the zoo.cfg:
>
> # The number of milliseconds of each tick
> tickTime=2000
> # The number of ticks that the initial
> # synchronization phase can take
> initLimit=5
> # The number of ticks that can pass between
> # sending a request and getting an acknowledgement
> syncLimit=2
> # the directory where the snapshot is stored.
> dataDir=/var/lib/zookeeper
> # Place the dataLogDir to a separate physical disc for better performance
> # dataLogDir=/disk2/zookeeper
> # the port at which the clients will connect
> clientPort=2181
> # specify all zookeeper servers
> # The first port is used by followers to connect to the leader
> # The second one is used for leader election
> #server.1=zookeeper1:2888:3888
> #server.2=zookeeper2:2888:3888
> #server.3=zookeeper3:2888:3888
>
> # To avoid seeks ZooKeeper allocates space in the transaction log file in
> # blocks of preAllocSize kilobytes. The default block size is 64M. One reason
> # for changing the size of the blocks is to reduce the block size if snapshots
> # are taken more often. (Also, see snapCount).
> #preAllocSize=65536
> # Clients can submit requests faster than ZooKeeper can process them,
> # especially if there are a lot of clients. To prevent ZooKeeper from running
> # out of memory due to queued requests, ZooKeeper will throttle clients so that
> # there is no more than globalOutstandingLimit outstanding requests in the
> # system. The default limit is 1,000. ZooKeeper logs transactions to a
> # transaction log. After snapCount transactions are written to a log file a
> # snapshot is started and a new transaction log file is started. The default
> # snapCount is 10,000.
> #snapCount=1000
>
> # If this option is defined, requests will be logged to a trace file named
> # traceFile.year.month.day.
> #traceFile=
> # Leader accepts client connections. Default value is "yes". The leader machine
> # coordinates updates. For higher update throughput at the slight expense of
> # read throughput the leader can be configured to not accept clients and focus
> # on coordination.
> leaderServes=yes
> # Enable regular purging of old data and transaction logs every 24 hours
> autopurge.purgeInterval=24
> autopurge.snapRetainCount=5
>
> The only thing I thought to change was a "multi-server" setup (uncommenting server.1, server.2, server.3), but that didn't help. And this is the storm.yaml sitting in ~/.storm:
>
> storm.zookeeper.servers:
> - "10.100.70.128"
> # - "server2"
> storm.zookeeper.port: 2181
> nimbus.host: "10.100.70.128"
> nimbus.childopts: "-Xmx1024m"
> storm.local.dir: "/app/storm"
> java.library.path: "/usr/lib/jvm/java-7-openjdk-amd64"
> supervisor.slots.ports:
> - 6700
> - 6701
> - 6702
> - 6703
> # ##### These may optionally be filled in:
> #
> ## List of custom serializations
> # topology.kryo.register:
> # - org.mycompany.MyType
> # - org.mycompany.MyType2: org.mycompany.MyType2Serializer
> #
> ## List of custom kryo decorators
> # topology.kryo.decorators:
> # - org.mycompany.MyDecorator
> #
> ## Locations of the drpc servers
> drpc.servers:
> - "10.100.70.128"
> # - "server2"
> drpc.port: 3772
> drpc.worker.threads: 64
> drpc.queue.size: 128
> drpc.invocations.port: 3773
> drpc.request.timeout.secs: 600
> drpc.childopts: "-Xmx768m"
> ## Metrics Consumers
> # topology.metrics.consumer.register:
> # - class: "backtype.storm.metrics.LoggingMetricsConsumer"
> # parallelism.hint: 1
> # - class: "org.mycompany.MyMetricsConsumer"
> # parallelism.hint: 1
> # argument:
> # - endpoint: "metrics-collector.mycompany.org"
>
> I really couldn't figure out the trick to configuring the ZK and storm cluster, and why zookeeper listens on 2000, which is really weird.
>
> thanks
>
> Alec
>
>
>
> On Wed, Aug 6, 2014 at 6:48 AM, Kushan Maskey <ku...@mmillerassociates.com> wrote:
> I see that your zookeeper is listening on port 2000. Is that how you have configured the zookeeper?
>
> --
> Kushan Maskey
> 817.403.7500
>
>
> On Tue, Aug 5, 2014 at 11:56 AM, Sa Li <sa...@gmail.com> wrote:
> Thank you very much, Marcelo, it indeed worked; now I can run my code without getting the error. However, another thing keeps bothering me. The following is my code:
>
> public static class PrintStream implements Filter {
>
> @SuppressWarnings("rawtypes")
> @Override
> public void prepare(Map conf, TridentOperationContext context) {
> }
> @Override
> public void cleanup() {
> }
> @Override
> public boolean isKeep(TridentTuple tuple) {
> System.out.println(tuple);
> return true;
> }
> }
> public static StormTopology buildTopology(LocalDRPC drpc) throws IOException {
>
> TridentTopology topology = new TridentTopology();
> BrokerHosts zk = new ZkHosts("localhost");
> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "ingest_test");
> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
> OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
>
> topology.newStream("kafka", spout)
> .each(new Fields("str"),
> new PrintStream()
> );
>
> return topology.build();
> }
> public static void main(String[] args) throws Exception {
>
> Config conf = new Config();
> conf.setDebug(true);
> conf.setMaxSpoutPending(1);
> conf.setMaxTaskParallelism(3);
> LocalDRPC drpc = new LocalDRPC();
> LocalCluster cluster = new LocalCluster();
> cluster.submitTopology("kafka", conf, buildTopology(drpc));
> Thread.sleep(100);
> cluster.shutdown();
> }
>
> What I expect is quite simple: print out the messages I collect from a kafka producer playback process, which is running separately. The topic is listed as:
>
> root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper localhost:2181
> topic: topictest partition: 0 leader: 1 replicas: 1,3,2 isr: 1,3,2
> topic: topictest partition: 1 leader: 2 replicas: 2,1,3 isr: 2,1,3
> topic: topictest partition: 2 leader: 3 replicas: 3,2,1 isr: 3,2,1
> topic: topictest partition: 3 leader: 1 replicas: 1,2,3 isr: 1,2,3
> topic: topictest partition: 4 leader: 2 replicas: 2,3,1 isr: 2,3,1
>
> When I run the code, this is what I see on the screen; there seems to be no error, but no messages are printed out either:
>
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1 -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file= -cp /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.0.1/lib/
guava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/etc/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> 1113 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
> 1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
> 1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
> 1237 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
> 1350 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1417 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
> 1482 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1484 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
> 1540 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
> 1576 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
> 1590 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
> 1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
> 1638 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
> 1690 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
> 1944 [main] INFO backtype.storm.daemon.nimbus - Received topology submission for kafka with conf {"topology.max.task.parallelism" nil, "topology.acker.executors" nil, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id" "kafka-1-1407257070", "topology.debug" true}
> 1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka: kafka-1-1407257070
> 2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 3] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
> 2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment for topology id kafka-1-1407257070: #backtype.storm.daemon.common.Assignment{:master-code-dir "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070", :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"}, :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3] 1407257070}}
> 2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
> 2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
> 2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
> 2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
> 2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
> 2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
> 2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
> 2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
> 2256 [main] INFO backtype.storm.testing - Shutting down in process zookeeper
> 2257 [main] INFO backtype.storm.testing - Done shutting down in process zookeeper
> 2258 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
> 2259 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
> 2260 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
> 2261 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
>
> Can anyone help me locate the problem? I really need to get past this step before I can replace .each(new PrintStream()) with other functions.
>
>
> Thanks
>
> Alec
>
>> at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_55]
>> at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_55]
>> at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_55]
>> at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[na:1.7.0_55]
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_55]
>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[na:1.7.0_55]
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_55]
>>
>>
>>
>>
>> I tried to poke around online but could not find a solution for it. Any idea
>> about that?
>>
>> Thanks
>>
>> Alec
>>
>>
>>
>>
>
>
>
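Marcelo's `jar tf` check works because a jar is just a zip archive of `.class` entries. As a minimal stand-in (pure Python rather than the thread's Java, with a fabricated toy jar name and entry), the same membership test can be scripted:

```python
import zipfile

def class_in_jar(jar_path: str, class_entry: str) -> bool:
    # Equivalent to `jar tf <jar> | grep <entry>`: list the archive's
    # entries and test membership of the fully qualified .class path.
    with zipfile.ZipFile(jar_path) as jar:
        return class_entry in jar.namelist()

# Build a toy jar containing one class entry, then probe it.
with zipfile.ZipFile("toy.jar", "w") as jar:
    jar.writestr("kafka/api/OffsetRequest.class", b"\xca\xfe\xba\xbe")

print(class_in_jar("toy.jar", "kafka/api/OffsetRequest.class"))  # True
print(class_in_jar("toy.jar", "kafka/api/Missing.class"))        # False
```

Run against the real shaded jar, an empty result for `kafka/api/OffsetRequest.class` means the Kafka dependency (marked `provided` in the pom above) was left out of the fat jar, which matches the `NoClassDefFoundError` in the original post.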
Re: kafka-spout running error
Posted by Kushan Maskey <ku...@mmillerassociates.com>.
Can you set zkPort in SpoutConfig to 2181 in your topology builder and see
if that helps?
--
Kushan Maskey
817.403.7500
On Wed, Aug 6, 2014 at 2:34 PM, Sa Li <sa...@gmail.com> wrote:
> Hi, Kushan
>
> You are completely right; I noticed this after you mentioned it. Apparently
> I am able to consume the messages with kafka-console-consumer.sh, which
> listens on 2181, but storm goes to 2000 instead.
>
> 1319 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper
> at port 2000 and dir /tmp/f41ad971-9f6b-433f-9dc9-9797afcc2e46
> 1425 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf
> {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>
> I spent the whole morning walking through my configuration; this is the zoo.cfg:
>
> # The number of milliseconds of each tick
> tickTime=2000
> # The number of ticks that the initial
> # synchronization phase can take
> initLimit=5
> # The number of ticks that can pass between
> # sending a request and getting an acknowledgement
> syncLimit=2
> # the directory where the snapshot is stored.
> dataDir=/var/lib/zookeeper
> # Place the dataLogDir to a separate physical disc for better performance
> # dataLogDir=/disk2/zookeeper
> # the port at which the clients will connect
> clientPort=2181
> # specify all zookeeper servers
> # The first port is used by followers to connect to the leader
> # The second one is used for leader election
> #server.1=zookeeper1:2888:3888
> #server.2=zookeeper2:2888:3888
> #server.3=zookeeper3:2888:3888
>
> # To avoid seeks ZooKeeper allocates space in the transaction log file in
> # blocks of preAllocSize kilobytes. The default block size is 64M. One
> reason
> # for changing the size of the blocks is to reduce the block size if
> snapshots
> # are taken more often. (Also, see snapCount).
> #preAllocSize=65536
> # Clients can submit requests faster than ZooKeeper can process them,
> # especially if there are a lot of clients. To prevent ZooKeeper from
> running
> # out of memory due to queued requests, ZooKeeper will throttle clients so
> that
> # there is no more than globalOutstandingLimit outstanding requests in the
> # system. The default limit is 1,000.ZooKeeper logs transactions to a
> # transaction log. After snapCount transactions are written to a log file a
> # snapshot is started and a new transaction log file is started. The
> default
> # snapCount is 10,000.
> #snapCount=1000
>
> # If this option is defined, requests will be logged to a trace file
> named
> # traceFile.year.month.day.
> #traceFile=
> # Leader accepts client connections. Default value is "yes". The leader
> machine
> # coordinates updates. For higher update throughput at the slight expense
> of
> # read throughput the leader can be configured to not accept clients and
> focus
> # on coordination.
> leaderServes=yes
> # Enable regular purging of old data and transaction logs every 24 hours
> autopurge.purgeInterval=24
> autopurge.snapRetainCount=5
>
> The only thing I thought to change was to make a "multi-server" setup by
> uncommenting server.1, server.2, and server.3, but that didn't help. And this
> is the storm.yaml sitting in ~/.storm:
>
> storm.zookeeper.servers:
> - "10.100.70.128"
> # - "server2"
> storm.zookeeper.port: 2181
> nimbus.host: "10.100.70.128"
> nimbus.childopts: "-Xmx1024m"
> storm.local.dir: "/app/storm"
> java.library.path: "/usr/lib/jvm/java-7-openjdk-amd64"
> supervisor.slots.ports:
> - 6700
> - 6701
> - 6702
> - 6703
> # ##### These may optionally be filled in:
> #
> ## List of custom serializations
> # topology.kryo.register:
> # - org.mycompany.MyType
> # - org.mycompany.MyType2: org.mycompany.MyType2Serializer
> #
> ## List of custom kryo decorators
> # topology.kryo.decorators:
> # - org.mycompany.MyDecorator
> #
> ## Locations of the drpc servers
> drpc.servers:
> - "10.100.70.128"
> # - "server2"
> drpc.port: 3772
> drpc.worker.threads: 64
> drpc.queue.size: 128
> drpc.invocations.port: 3773
> drpc.request.timeout.secs: 600
> drpc.childopts: "-Xmx768m"
> ## Metrics Consumers
> # topology.metrics.consumer.register:
> # - class: "backtype.storm.metrics.LoggingMetricsConsumer"
> # parallelism.hint: 1
> # - class: "org.mycompany.MyMetricsConsumer"
> # parallelism.hint: 1
> # argument:
> # - endpoint: "metrics-collector.mycompany.org"
>
> I really couldn't figure out the trick to configuring the ZooKeeper and
> storm cluster, or why zookeeper listens on 2000, which is really weird.
>
> thanks
>
> Alec
>
>
>
> On Wed, Aug 6, 2014 at 6:48 AM, Kushan Maskey <
> kushan.maskey@mmillerassociates.com> wrote:
>
>> I see that your zookeeper is listening on port 2000. Is that how you have
>> configured the zookeeper?
>>
>> --
>> Kushan Maskey
>> 817.403.7500
>>
>>
>> On Tue, Aug 5, 2014 at 11:56 AM, Sa Li <sa...@gmail.com> wrote:
>>
>>> Thank you very much, Marcelo, it indeed worked; now I can run my code
>>> without getting the error. However, another thing keeps bothering me. The
>>> following is my code:
>>>
>>> public static class PrintStream implements Filter {
>>>
>>> @SuppressWarnings("rawtypes")
>>> @Override
>>> public void prepare(Map conf, TridentOperationContext context) {
>>> }
>>> @Override
>>> public void cleanup() {
>>> }
>>> @Override
>>> public boolean isKeep(TridentTuple tuple) {
>>> System.out.println(tuple);
>>> return true;
>>> }
>>> }
>>> public static StormTopology buildTopology(LocalDRPC drpc) throws IOException
>>> {
>>>
>>> TridentTopology topology = new TridentTopology();
>>> BrokerHosts zk = new ZkHosts("localhost");
>>> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk,
>>> "ingest_test");
>>> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>>> OpaqueTridentKafkaSpout spout = new
>>> OpaqueTridentKafkaSpout(spoutConf);
>>>
>>> topology.newStream("kafka", spout)
>>> .each(new Fields("str"),
>>> new PrintStream()
>>> );
>>>
>>> return topology.build();
>>> }
>>> public static void main(String[] args) throws Exception {
>>>
>>> Config conf = new Config();
>>> conf.setDebug(true);
>>> conf.setMaxSpoutPending(1);
>>> conf.setMaxTaskParallelism(3);
>>> LocalDRPC drpc = new LocalDRPC();
>>> LocalCluster cluster = new LocalCluster();
>>> cluster.submitTopology("kafka", conf, buildTopology(drpc));
>>>
>>> Thread.sleep(100);
>>> cluster.shutdown();
>>> }
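One plausible reason nothing prints (an assumption on my part, not something confirmed in the thread): the main() above sleeps for only Thread.sleep(100), i.e. 100 ms, before cluster.shutdown(), which gives the spout almost no time to connect to ZooKeeper and emit a batch. The race is easy to reproduce outside Storm; here is a toy pure-Python sketch (run_pipeline and its timings are invented for the demo) with a "spout" thread that needs ~0.3 s to warm up:

```python
import queue
import threading
import time

def run_pipeline(run_for_secs: float):
    """Start a 'spout' thread that needs ~0.3 s before its first message,
    let the 'cluster' run for run_for_secs, then shut down and return
    whatever was consumed -- mirroring submitTopology/sleep/shutdown."""
    out: "queue.Queue[str]" = queue.Queue()
    stop = threading.Event()

    def spout():
        time.sleep(0.3)            # startup latency (ZK connect, fetch, ...)
        while not stop.is_set():
            out.put("message")
            time.sleep(0.05)

    threading.Thread(target=spout, daemon=True).start()
    time.sleep(run_for_secs)       # analogous to Thread.sleep(100) in main()
    stop.set()                     # analogous to cluster.shutdown()
    return [out.get() for _ in range(out.qsize())]

print(len(run_pipeline(0.1)))      # 0 -- shut down before the spout warmed up
print(len(run_pipeline(1.0)) > 0)  # True -- enough time to see output
```

If this is the issue, raising the sleep (or keeping the local cluster alive until interrupted) should make the PrintStream output appear, independent of any ZooKeeper port question.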
>>>
>>> What I expect is quite simple: print out the messages I collect from a
>>> kafka producer playback process that is running separately. The topics are
>>> listed as:
>>>
>>> root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper
>>> localhost:2181
>>> topic: topictest partition: 0 leader: 1 replicas: 1,3,2
>>> isr: 1,3,2
>>> topic: topictest partition: 1 leader: 2 replicas: 2,1,3
>>> isr: 2,1,3
>>> topic: topictest partition: 2 leader: 3 replicas: 3,2,1
>>> isr: 3,2,1
>>> topic: topictest partition: 3 leader: 1 replicas: 1,2,3
>>> isr: 1,2,3
>>> topic: topictest partition: 4 leader: 2 replicas: 2,3,1
>>> isr: 2,3,1
>>>
>>> When I run the code, this is what I see on the screen; there seems to be
>>> no error, but no messages are printed out either:
>>>
>>> SLF4J: Class path contains multiple SLF4J bindings.
>>> SLF4J: Found binding in
>>> [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in
>>> [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> explanation.
>>> Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1
>>> -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file=
>>> -cp
>>> /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.0.1/lib/guava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/
etc/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin
>>> -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar
>>> storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>> SLF4J: Class path contains multiple SLF4J bindings.
>>> SLF4J: Found binding in
>>> [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in
>>> [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> explanation.
>>> 1113 [main] INFO backtype.storm.zookeeper - Starting inprocess
>>> zookeeper at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>> 1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with
>>> conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>>> "topology.tick.tuple.freq.secs" nil,
>>> "topology.builtin.metrics.bucket.size.secs" 60,
>>> "topology.fall.back.on.java.serialization" true,
>>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>>> "topology.skip.missing.kryo.registrations" true,
>>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>>> "topology.trident.batch.emit.interval.millis" 50,
>>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>>> "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9",
>>> "storm.messaging.netty.buffer_size" 5242880,
>>> "supervisor.worker.start.timeout.secs" 120,
>>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>>> "/transactional", "topology.acker.executors" nil,
>>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>>> "supervisor.heartbeat.frequency.secs" 5,
>>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>>> "topology.spout.wait.strategy"
>>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>>> nil, "storm.zookeeper.retry.interval" 1000, "
>>> topology.sleep.spout.wait.strategy.time.ms" 1,
>>> "nimbus.topology.validator"
>>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>>> [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs"
>>> 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs"
>>> 30, "task.refresh.poll.secs" 10, "topology.workers" 1,
>>> "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627,
>>> "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1,
>>> "topology.tuple.serializer"
>>> "backtype.storm.serialization.types.ListDelegateSerializer",
>>> "topology.disruptor.wait.strategy"
>>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>>> 5, "storm.thrift.transport"
>>> "backtype.storm.security.auth.SimpleTransportPlugin",
>>> "topology.state.synchronization.timeout.secs" 60,
>>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms"
>>> 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false,
>>> "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode"
>>> "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>> 1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
>>> 1237 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>>> update: :connected:none
>>> 1350 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1417 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>>> update: :connected:none
>>> 1482 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1484 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>>> update: :connected:none
>>> 1540 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor
>>> with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>>> "topology.tick.tuple.freq.secs" nil,
>>> "topology.builtin.metrics.bucket.size.secs" 60,
>>> "topology.fall.back.on.java.serialization" true,
>>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>>> "topology.skip.missing.kryo.registrations" true,
>>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>>> "topology.trident.batch.emit.interval.millis" 50,
>>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>>> "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388",
>>> "storm.messaging.netty.buffer_size" 5242880,
>>> "supervisor.worker.start.timeout.secs" 120,
>>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>>> "/transactional", "topology.acker.executors" nil,
>>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>>> "supervisor.heartbeat.frequency.secs" 5,
>>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>>> "topology.spout.wait.strategy"
>>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>>> nil, "storm.zookeeper.retry.interval" 1000, "
>>> topology.sleep.spout.wait.strategy.time.ms" 1,
>>> "nimbus.topology.validator"
>>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>>> (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120,
>>> "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30,
>>> "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts"
>>> "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
>>> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
>>> "backtype.storm.serialization.types.ListDelegateSerializer",
>>> "topology.disruptor.wait.strategy"
>>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>>> 5, "storm.thrift.transport"
>>> "backtype.storm.security.auth.SimpleTransportPlugin",
>>> "topology.state.synchronization.timeout.secs" 60,
>>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms"
>>> 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false,
>>> "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode"
>>> "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>> 1576 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>>> update: :connected:none
>>> 1590 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor
>>> with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
>>> 1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor
>>> with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>>> "topology.tick.tuple.freq.secs" nil,
>>> "topology.builtin.metrics.bucket.size.secs" 60,
>>> "topology.fall.back.on.java.serialization" true,
>>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>>> "topology.skip.missing.kryo.registrations" true,
>>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>>> "topology.trident.batch.emit.interval.millis" 50,
>>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>>> "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912",
>>> "storm.messaging.netty.buffer_size" 5242880,
>>> "supervisor.worker.start.timeout.secs" 120,
>>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>>> "/transactional", "topology.acker.executors" nil,
>>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>>> "supervisor.heartbeat.frequency.secs" 5,
>>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>>> "topology.spout.wait.strategy"
>>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>>> nil, "storm.zookeeper.retry.interval" 1000, "
>>> topology.sleep.spout.wait.strategy.time.ms" 1,
>>> "nimbus.topology.validator"
>>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>>> (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120,
>>> "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30,
>>> "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts"
>>> "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
>>> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
>>> "backtype.storm.serialization.types.ListDelegateSerializer",
>>> "topology.disruptor.wait.strategy"
>>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>>> 5, "storm.thrift.transport"
>>> "backtype.storm.security.auth.SimpleTransportPlugin",
>>> "topology.state.synchronization.timeout.secs" 60,
>>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms"
>>> 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false,
>>> "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode"
>>> "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
>>> 1638 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>>> update: :connected:none
>>> 1690 [main] INFO
>>> com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
>>> 1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor
>>> with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
>>> 1944 [main] INFO backtype.storm.daemon.nimbus - Received topology
>>> submission for kafka with conf {"topology.max.task.parallelism" nil,
>>> "topology.acker.executors" nil, "topology.kryo.register"
>>> {"storm.trident.topology.TransactionAttempt" nil},
>>> "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id"
>>> "kafka-1-1407257070", "topology.debug" true}
>>> 1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka:
>>> kafka-1-1407257070
>>> 2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available
>>> slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1]
>>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 2]
>>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 3]
>>> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4]
>>> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5]
>>> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
>>> 2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment
>>> for topology id kafka-1-1407257070:
>>> #backtype.storm.daemon.common.Assignment{:master-code-dir
>>> "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070",
>>> :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"},
>>> :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5
>>> 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4]
>>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2]
>>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1]
>>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1
>>> 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3]
>>> 1407257070}}
>>> 2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
>>> 2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
>>> 2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down
>>> supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
>>> 2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
>>> 2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
>>> 2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down
>>> supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
>>> 2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
>>> 2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
>>> 2256 [main] INFO backtype.storm.testing - Shutting down in process
>>> zookeeper
>>> 2257 [main] INFO backtype.storm.testing - Done shutting down in process
>>> zookeeper
>>> 2258 [main] INFO backtype.storm.testing - Deleting temporary path
>>> /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
>>> 2259 [main] INFO backtype.storm.testing - Deleting temporary path
>>> /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>>> 2260 [main] INFO backtype.storm.testing - Deleting temporary path
>>> /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
>>> 2261 [main] INFO backtype.storm.testing - Deleting temporary path
>>> /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
>>>
>>> Can anyone help me locate the problem? I really need to get past this step
>>> so that I can replace .each(printStream()) with other functions.
>>>
>>>
>>> Thanks
>>>
>>> Alec
>>>
>
Re: kafka-spout running error
Posted by Sa Li <sa...@gmail.com>.
Hi, Kushan
You are completely right; I noticed this after you mentioned it. Apparently
I am able to consume the messages with kafka-console-consumer.sh, which
listens on 2181, but storm goes to 2000 instead.
1319 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper
at port 2000 and dir /tmp/f41ad971-9f6b-433f-9dc9-9797afcc2e46
1425 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf
{"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
I spent the whole morning walking through my configuration; this is the zoo.cfg:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=5
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=2
# the directory where the snapshot is stored.
dataDir=/var/lib/zookeeper
# Place the dataLogDir to a separate physical disc for better performance
# dataLogDir=/disk2/zookeeper
# the port at which the clients will connect
clientPort=2181
# specify all zookeeper servers
# The first port is used by followers to connect to the leader
# The second one is used for leader election
#server.1=zookeeper1:2888:3888
#server.2=zookeeper2:2888:3888
#server.3=zookeeper3:2888:3888
# To avoid seeks ZooKeeper allocates space in the transaction log file in
# blocks of preAllocSize kilobytes. The default block size is 64M. One reason
# for changing the size of the blocks is to reduce the block size if snapshots
# are taken more often. (Also, see snapCount).
#preAllocSize=65536
# Clients can submit requests faster than ZooKeeper can process them,
# especially if there are a lot of clients. To prevent ZooKeeper from running
# out of memory due to queued requests, ZooKeeper will throttle clients so that
# there is no more than globalOutstandingLimit outstanding requests in the
# system. The default limit is 1,000. ZooKeeper logs transactions to a
# transaction log. After snapCount transactions are written to a log file a
# snapshot is started and a new transaction log file is started. The default
# snapCount is 10,000.
#snapCount=1000
# If this option is defined, requests will be logged to a trace file named
# traceFile.year.month.day.
#traceFile=
# Leader accepts client connections. Default value is "yes". The leader machine
# coordinates updates. For higher update throughput at the slight expense of
# read throughput the leader can be configured to not accept clients and focus
# on coordination.
leaderServes=yes
# Enable regular purging of old data and transaction logs every 24 hours
autopurge.purgeInterval=24
autopurge.snapRetainCount=5
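One easy thing to rule out when reading the zoo.cfg above: the 2000 in it is tickTime, a duration in milliseconds, not a port; the port the external ensemble serves clients on is clientPort=2181. Since zoo.cfg is java.util.Properties-compatible, a quick self-contained check looks like the sketch below (the class name is made up, and the config values are inlined from the paste above):

```java
import java.io.StringReader;
import java.util.Properties;

public class ZooCfgCheck {
    public static void main(String[] args) throws Exception {
        // A few lines copied from the zoo.cfg quoted above
        String cfg = "tickTime=2000\nclientPort=2181\ndataDir=/var/lib/zookeeper\n";
        Properties p = new Properties();
        p.load(new StringReader(cfg));
        // tickTime is a duration (ms); clientPort is what clients connect to
        System.out.println("tickTime (ms) = " + p.getProperty("tickTime"));
        System.out.println("clientPort    = " + p.getProperty("clientPort"));
    }
}
```

So nothing in this zoo.cfg explains a ZooKeeper on port 2000; that instance has to come from somewhere else, and the pasted log ("Starting inprocess zookeeper at port 2000") says Storm itself started it in-process.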
The only thing I thought to change was a "multi-server" setup, uncommenting
server.1, server.2, and server.3, but that didn't help. And this is the
storm.yaml sitting in ~/.storm:
storm.zookeeper.servers:
- "10.100.70.128"
# - "server2"
storm.zookeeper.port: 2181
nimbus.host: "10.100.70.128"
nimbus.childopts: "-Xmx1024m"
storm.local.dir: "/app/storm"
java.library.path: "/usr/lib/jvm/java-7-openjdk-amd64"
supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
# ##### These may optionally be filled in:
#
## List of custom serializations
# topology.kryo.register:
# - org.mycompany.MyType
# - org.mycompany.MyType2: org.mycompany.MyType2Serializer
#
## List of custom kryo decorators
# topology.kryo.decorators:
# - org.mycompany.MyDecorator
#
## Locations of the drpc servers
drpc.servers:
- "10.100.70.128"
# - "server2"
drpc.port: 3772
drpc.worker.threads: 64
drpc.queue.size: 128
drpc.invocations.port: 3773
drpc.request.timeout.secs: 600
drpc.childopts: "-Xmx768m"
## Metrics Consumers
# topology.metrics.consumer.register:
# - class: "backtype.storm.metrics.LoggingMetricsConsumer"
# parallelism.hint: 1
# - class: "org.mycompany.MyMetricsConsumer"
# parallelism.hint: 1
# argument:
# - endpoint: "metrics-collector.mycompany.org"
I really couldn't figure out the trick to configuring the ZooKeeper and storm
cluster, or why zookeeper listens on 2000, which is really weird.
thanks
Alec
On Wed, Aug 6, 2014 at 6:48 AM, Kushan Maskey <
kushan.maskey@mmillerassociates.com> wrote:
> I see that your zookeeper is listening on port 2000. Is that how you have
> configured the zookeeper?
>
> --
> Kushan Maskey
> 817.403.7500
>
>
> On Tue, Aug 5, 2014 at 11:56 AM, Sa Li <sa...@gmail.com> wrote:
>
>> Thank you very much, Marcelo, it indeed worked; now I can run my code
>> without getting the error. However, another thing keeps bothering me.
>> The following is my code:
>>
>> public static class PrintStream implements Filter {
>>
>> @SuppressWarnings("rawtypes")
>> @Override
>> public void prepare(Map conf, TridentOperationContext context) {
>> }
>> @Override
>> public void cleanup() {
>> }
>> @Override
>> public boolean isKeep(TridentTuple tuple) {
>> System.out.println(tuple);
>> return true;
>> }
>> }
>> public static StormTopology buildTopology(LocalDRPC drpc) throws IOException
>> {
>>
>> TridentTopology topology = new TridentTopology();
>> BrokerHosts zk = new ZkHosts("localhost");
>> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk,
>> "ingest_test");
>> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
>> OpaqueTridentKafkaSpout spout = new
>> OpaqueTridentKafkaSpout(spoutConf);
>>
>> topology.newStream("kafka", spout)
>> .each(new Fields("str"),
>> new PrintStream()
>> );
>>
>> return topology.build();
>> }
>> public static void main(String[] args) throws Exception {
>>
>> Config conf = new Config();
>> conf.setDebug(true);
>> conf.setMaxSpoutPending(1);
>> conf.setMaxTaskParallelism(3);
>> LocalDRPC drpc = new LocalDRPC();
>> LocalCluster cluster = new LocalCluster();
>> cluster.submitTopology("kafka", conf, buildTopology(drpc));
>>
>> Thread.sleep(100);
>> cluster.shutdown();
>> }
>>
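A side note on the main() just quoted: there are only 100 ms between submitTopology and cluster.shutdown(), which is usually far less than a Kafka spout needs to connect and emit its first batch, so an empty screen does not necessarily mean tuples were lost. The race can be sketched without any Storm dependency (class name and the 500 ms "spout warm-up" below are arbitrary stand-ins, not anything from the thread):

```java
public class ShutdownRaceDemo {
    // Simulates a spout that needs warmUpMillis before its first emit,
    // torn down after graceMillis (like cluster.shutdown() after Thread.sleep).
    static boolean emitsBeforeShutdown(long warmUpMillis, long graceMillis)
            throws InterruptedException {
        final boolean[] emitted = {false};
        Thread spout = new Thread(() -> {
            try {
                Thread.sleep(warmUpMillis); // connecting, fetching offsets...
                emitted[0] = true;          // the first tuple would print here
            } catch (InterruptedException e) {
                // torn down before the first emit
            }
        });
        spout.start();
        Thread.sleep(graceMillis);  // how long main() waits
        spout.interrupt();          // stand-in for cluster.shutdown()
        spout.join();
        return emitted[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("grace 100 ms:  emitted = " + emitsBeforeShutdown(500, 100));
        System.out.println("grace 2000 ms: emitted = " + emitsBeforeShutdown(500, 2000));
    }
}
```

With the quoted code, raising the Thread.sleep(100) before cluster.shutdown() to several seconds is the first thing worth trying.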
>> What I expect is quite simple: print out the messages I collect from a
>> Kafka producer playback process that is running separately. The topic is
>> listed as:
>>
>> root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper
>> localhost:2181
>> topic: topictest partition: 0 leader: 1 replicas: 1,3,2
>> isr: 1,3,2
>> topic: topictest partition: 1 leader: 2 replicas: 2,1,3
>> isr: 2,1,3
>> topic: topictest partition: 2 leader: 3 replicas: 3,2,1
>> isr: 3,2,1
>> topic: topictest partition: 3 leader: 1 replicas: 1,2,3
>> isr: 1,2,3
>> topic: topictest partition: 4 leader: 2 replicas: 2,3,1
>> isr: 2,3,1
>>
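Another mismatch worth flagging between the quoted code and this topic listing: the spout is constructed with TridentKafkaConfig(zk, "ingest_test"), but kafka-list-topic.sh only reports partitions for topictest, so the topology may simply be reading a topic that nothing writes to. The check itself is trivial (class name is made up; topic names are copied from the thread):

```java
import java.util.Arrays;
import java.util.List;

public class TopicMismatchCheck {
    public static void main(String[] args) {
        // Topics reported by kafka-list-topic.sh in the thread
        List<String> topicsOnBroker = Arrays.asList("topictest");
        // Topic the Trident spout was configured with
        String spoutTopic = "ingest_test";
        System.out.println("spout topic present on broker: "
                + topicsOnBroker.contains(spoutTopic));
    }
}
```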
>> When I run the code, this is what I see on the screen; there seems to be
>> no error, but no messages are printed either:
>>
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in
>> [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in
>> [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> explanation.
>> Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1
>> -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file=
>> -cp
>> /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.0.1/lib/guava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/e
tc/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin
>> -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar
>> storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in
>> [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in
>> [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> explanation.
>> 1113 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper
>> at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>> 1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with
>> conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>> "topology.tick.tuple.freq.secs" nil,
>> "topology.builtin.metrics.bucket.size.secs" 60,
>> "topology.fall.back.on.java.serialization" true,
>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>> "topology.skip.missing.kryo.registrations" true,
>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>> "topology.trident.batch.emit.interval.millis" 50,
>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>> "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9",
>> "storm.messaging.netty.buffer_size" 5242880,
>> "supervisor.worker.start.timeout.secs" 120,
>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>> "/transactional", "topology.acker.executors" nil,
>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>> "supervisor.heartbeat.frequency.secs" 5,
>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>> "topology.spout.wait.strategy"
>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>> nil, "storm.zookeeper.retry.interval" 1000, "
>> topology.sleep.spout.wait.strategy.time.ms" 1,
>> "nimbus.topology.validator"
>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>> [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs"
>> 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs"
>> 30, "task.refresh.poll.secs" 10, "topology.workers" 1,
>> "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627,
>> "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1,
>> "topology.tuple.serializer"
>> "backtype.storm.serialization.types.ListDelegateSerializer",
>> "topology.disruptor.wait.strategy"
>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>> 5, "storm.thrift.transport"
>> "backtype.storm.security.auth.SimpleTransportPlugin",
>> "topology.state.synchronization.timeout.secs" 60,
>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000,
>> "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port"
>> 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local",
>> "topology.optimize" true, "topology.max.task.parallelism" nil}
>> 1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
>> 1237 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>> update: :connected:none
>> 1350 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1417 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>> update: :connected:none
>> 1482 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1484 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>> update: :connected:none
>> 1540 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor
>> with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>> "topology.tick.tuple.freq.secs" nil,
>> "topology.builtin.metrics.bucket.size.secs" 60,
>> "topology.fall.back.on.java.serialization" true,
>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>> "topology.skip.missing.kryo.registrations" true,
>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>> "topology.trident.batch.emit.interval.millis" 50,
>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>> "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388",
>> "storm.messaging.netty.buffer_size" 5242880,
>> "supervisor.worker.start.timeout.secs" 120,
>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>> "/transactional", "topology.acker.executors" nil,
>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>> "supervisor.heartbeat.frequency.secs" 5,
>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>> "topology.spout.wait.strategy"
>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>> nil, "storm.zookeeper.retry.interval" 1000, "
>> topology.sleep.spout.wait.strategy.time.ms" 1,
>> "nimbus.topology.validator"
>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>> (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120,
>> "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30,
>> "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts"
>> "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
>> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
>> "backtype.storm.serialization.types.ListDelegateSerializer",
>> "topology.disruptor.wait.strategy"
>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>> 5, "storm.thrift.transport"
>> "backtype.storm.security.auth.SimpleTransportPlugin",
>> "topology.state.synchronization.timeout.secs" 60,
>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000,
>> "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port"
>> 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local",
>> "topology.optimize" true, "topology.max.task.parallelism" nil}
>> 1576 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>> update: :connected:none
>> 1590 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor
>> with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
>> 1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor
>> with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>> "topology.tick.tuple.freq.secs" nil,
>> "topology.builtin.metrics.bucket.size.secs" 60,
>> "topology.fall.back.on.java.serialization" true,
>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
>> "topology.skip.missing.kryo.registrations" true,
>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>> "topology.trident.batch.emit.interval.millis" 50,
>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>> "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912",
>> "storm.messaging.netty.buffer_size" 5242880,
>> "supervisor.worker.start.timeout.secs" 120,
>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
>> "/transactional", "topology.acker.executors" nil,
>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>> "supervisor.heartbeat.frequency.secs" 5,
>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>> "topology.spout.wait.strategy"
>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>> nil, "storm.zookeeper.retry.interval" 1000, "
>> topology.sleep.spout.wait.strategy.time.ms" 1,
>> "nimbus.topology.validator"
>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>> (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120,
>> "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30,
>> "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts"
>> "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
>> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
>> "backtype.storm.serialization.types.ListDelegateSerializer",
>> "topology.disruptor.wait.strategy"
>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>> 5, "storm.thrift.transport"
>> "backtype.storm.security.auth.SimpleTransportPlugin",
>> "topology.state.synchronization.timeout.secs" 60,
>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000,
>> "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port"
>> 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local",
>> "topology.optimize" true, "topology.max.task.parallelism" nil}
>> 1638 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
>> update: :connected:none
>> 1690 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor
>> with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
>> 1944 [main] INFO backtype.storm.daemon.nimbus - Received topology
>> submission for kafka with conf {"topology.max.task.parallelism" nil,
>> "topology.acker.executors" nil, "topology.kryo.register"
>> {"storm.trident.topology.TransactionAttempt" nil},
>> "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id"
>> "kafka-1-1407257070", "topology.debug" true}
>> 1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka:
>> kafka-1-1407257070
>> 2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available
>> slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1]
>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 2]
>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 3]
>> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4]
>> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5]
>> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
>> 2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment
>> for topology id kafka-1-1407257070:
>> #backtype.storm.daemon.common.Assignment{:master-code-dir
>> "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070",
>> :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"},
>> :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5
>> 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4]
>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2]
>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1]
>> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1
>> 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3]
>> 1407257070}}
>> 2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
>> 2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
>> 2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down
>> supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
>> 2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
>> 2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
>> 2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down
>> supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
>> 2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
>> 2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
>> 2256 [main] INFO backtype.storm.testing - Shutting down in process
>> zookeeper
>> 2257 [main] INFO backtype.storm.testing - Done shutting down in process
>> zookeeper
>> 2258 [main] INFO backtype.storm.testing - Deleting temporary path
>> /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
>> 2259 [main] INFO backtype.storm.testing - Deleting temporary path
>> /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
>> 2260 [main] INFO backtype.storm.testing - Deleting temporary path
>> /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
>> 2261 [main] INFO backtype.storm.testing - Deleting temporary path
>> /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
>>
>> Can anyone help me locate the problem? I really need to get through
>> this step in order to be able to replace .each(printStream()) with
>> other functions.
>>
>>
>> Thanks
>>
>> Alec
Re: kafka-spout running error
Posted by Kushan Maskey <ku...@mmillerassociates.com>.
I see that your zookeeper is listening on port 2000. Is that how you have
configured the zookeeper?
--
Kushan Maskey
817.403.7500
On Tue, Aug 5, 2014 at 11:56 AM, Sa Li <sa...@gmail.com> wrote:
> Thank you very much, Marcelo, it indeed worked; now I can run my code
> without getting an error. However, another thing keeps bothering me.
> Following is my code:
>
> public static class PrintStream implements Filter {
>
> @SuppressWarnings("rawtypes")
> @Override
> public void prepare(Map conf, TridentOperationContext context) {
> }
> @Override
> public void cleanup() {
> }
> @Override
> public boolean isKeep(TridentTuple tuple) {
> System.out.println(tuple);
> return true;
> }
> }
> public static StormTopology buildTopology(LocalDRPC drpc) throws IOException
> {
>
> TridentTopology topology = new TridentTopology();
> BrokerHosts zk = new ZkHosts("localhost");
> TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk,
> "ingest_test");
> spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
> OpaqueTridentKafkaSpout spout = new
> OpaqueTridentKafkaSpout(spoutConf);
>
> topology.newStream("kafka", spout)
> .each(new Fields("str"),
> new PrintStream()
> );
>
> return topology.build();
> }
> public static void main(String[] args) throws Exception {
>
> Config conf = new Config();
> conf.setDebug(true);
> conf.setMaxSpoutPending(1);
> conf.setMaxTaskParallelism(3);
> LocalDRPC drpc = new LocalDRPC();
> LocalCluster cluster = new LocalCluster();
> cluster.submitTopology("kafka", conf, buildTopology(drpc));
>
> Thread.sleep(100);
> cluster.shutdown();
> }
>
> What I expect is quite simple: print out the messages I collect from a
> kafka producer playback process that is running separately. The topic is
> listed as:
>
> root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper
> localhost:2181
> topic: topictest partition: 0 leader: 1 replicas: 1,3,2 isr: 1,3,2
> topic: topictest partition: 1 leader: 2 replicas: 2,1,3 isr: 2,1,3
> topic: topictest partition: 2 leader: 3 replicas: 3,2,1 isr: 3,2,1
> topic: topictest partition: 3 leader: 1 replicas: 1,2,3 isr: 1,2,3
> topic: topictest partition: 4 leader: 2 replicas: 2,3,1 isr: 2,3,1
>
> When I run the code, this is what I see on the screen; there seems to be
> no error, but no messages are printed out either:
>
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1
> -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file=
> -cp
> /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.0.1/lib/guava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/et
c/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin
> -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar
> storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> 1113 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper
> at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
> 1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf
> {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
> "topology.tick.tuple.freq.secs" nil,
> "topology.builtin.metrics.bucket.size.secs" 60,
> "topology.fall.back.on.java.serialization" true,
> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
> "topology.skip.missing.kryo.registrations" true,
> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
> "topology.trident.batch.emit.interval.millis" 50,
> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
> "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9",
> "storm.messaging.netty.buffer_size" 5242880,
> "supervisor.worker.start.timeout.secs" 120,
> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
> "/transactional", "topology.acker.executors" nil,
> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
> "supervisor.heartbeat.frequency.secs" 5,
> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
> "topology.spout.wait.strategy"
> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
> nil, "storm.zookeeper.retry.interval" 1000, "
> topology.sleep.spout.wait.strategy.time.ms" 1,
> "nimbus.topology.validator"
> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
> [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs"
> 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs"
> 30, "task.refresh.poll.secs" 10, "topology.workers" 1,
> "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627,
> "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1,
> "topology.tuple.serializer"
> "backtype.storm.serialization.types.ListDelegateSerializer",
> "topology.disruptor.wait.strategy"
> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
> 5, "storm.thrift.transport"
> "backtype.storm.security.auth.SimpleTransportPlugin",
> "topology.state.synchronization.timeout.secs" 60,
> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000,
> "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port"
> 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local",
> "topology.optimize" true, "topology.max.task.parallelism" nil}
> 1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
> 1237 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
> update: :connected:none
> 1350 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1417 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
> update: :connected:none
> 1482 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1484 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
> update: :connected:none
> 1540 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor
> with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
> "topology.tick.tuple.freq.secs" nil,
> "topology.builtin.metrics.bucket.size.secs" 60,
> "topology.fall.back.on.java.serialization" true,
> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
> "topology.skip.missing.kryo.registrations" true,
> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
> "topology.trident.batch.emit.interval.millis" 50,
> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
> "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388",
> "storm.messaging.netty.buffer_size" 5242880,
> "supervisor.worker.start.timeout.secs" 120,
> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
> "/transactional", "topology.acker.executors" nil,
> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
> "supervisor.heartbeat.frequency.secs" 5,
> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
> "topology.spout.wait.strategy"
> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
> nil, "storm.zookeeper.retry.interval" 1000, "
> topology.sleep.spout.wait.strategy.time.ms" 1,
> "nimbus.topology.validator"
> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
> (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120,
> "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30,
> "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts"
> "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
> "backtype.storm.serialization.types.ListDelegateSerializer",
> "topology.disruptor.wait.strategy"
> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
> 5, "storm.thrift.transport"
> "backtype.storm.security.auth.SimpleTransportPlugin",
> "topology.state.synchronization.timeout.secs" 60,
> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000,
> "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port"
> 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local",
> "topology.optimize" true, "topology.max.task.parallelism" nil}
> 1576 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
> update: :connected:none
> 1590 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor
> with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
> 1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor
> with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
> "topology.tick.tuple.freq.secs" nil,
> "topology.builtin.metrics.bucket.size.secs" 60,
> "topology.fall.back.on.java.serialization" true,
> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0,
> "topology.skip.missing.kryo.registrations" true,
> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
> "topology.trident.batch.emit.interval.millis" 50,
> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
> "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64",
> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
> "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912",
> "storm.messaging.netty.buffer_size" 5242880,
> "supervisor.worker.start.timeout.secs" 120,
> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128",
> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000,
> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
> "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root"
> "/transactional", "topology.acker.executors" nil,
> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
> "supervisor.heartbeat.frequency.secs" 5,
> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
> "topology.spout.wait.strategy"
> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
> nil, "storm.zookeeper.retry.interval" 1000, "
> topology.sleep.spout.wait.strategy.time.ms" 1,
> "nimbus.topology.validator"
> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
> (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120,
> "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30,
> "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts"
> "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05,
> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
> "backtype.storm.serialization.types.ListDelegateSerializer",
> "topology.disruptor.wait.strategy"
> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
> 5, "storm.thrift.transport"
> "backtype.storm.security.auth.SimpleTransportPlugin",
> "topology.state.synchronization.timeout.secs" 60,
> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000,
> "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port"
> 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local",
> "topology.optimize" true, "topology.max.task.parallelism" nil}
> 1638 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state
> update: :connected:none
> 1690 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
> - Starting
> 1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor
> with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
> 1944 [main] INFO backtype.storm.daemon.nimbus - Received topology
> submission for kafka with conf {"topology.max.task.parallelism" nil,
> "topology.acker.executors" nil, "topology.kryo.register"
> {"storm.trident.topology.TransactionAttempt" nil},
> "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id"
> "kafka-1-1407257070", "topology.debug" true}
> 1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka:
> kafka-1-1407257070
> 2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available
> slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1]
> ["944e6152-ca58-4d2b-8325-94ac98f43995" 2]
> ["944e6152-ca58-4d2b-8325-94ac98f43995" 3]
> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4]
> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5]
> ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
> 2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment
> for topology id kafka-1-1407257070:
> #backtype.storm.daemon.common.Assignment{:master-code-dir
> "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070",
> :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"},
> :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5
> 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4]
> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2]
> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1]
> ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1
> 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3]
> 1407257070}}
> 2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
> 2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
> 2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down
> supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
> 2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
> 2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
> 2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down
> supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
> 2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
> 2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
> 2256 [main] INFO backtype.storm.testing - Shutting down in process
> zookeeper
> 2257 [main] INFO backtype.storm.testing - Done shutting down in process
> zookeeper
> 2258 [main] INFO backtype.storm.testing - Deleting temporary path
> /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
> 2259 [main] INFO backtype.storm.testing - Deleting temporary path
> /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
> 2260 [main] INFO backtype.storm.testing - Deleting temporary path
> /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
> 2261 [main] INFO backtype.storm.testing - Deleting temporary path
> /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
>
> Can anyone help me locate the problem? I really need to get through this
> step in order to be able to replace the PrintStream() filter with other
> functions.
>
>
> Thanks
>
> Alec
>
> On Aug 4, 2014, at 4:24 AM, Marcelo Valle <mv...@redoop.org> wrote:
>
> hello,
>
> You can check your application .jar with the command "jar tf" to see if
> the class kafka/api/OffsetRequest.class is part of the jar.
> If not, you can try copying kafka_2.9.2-0.8.0.jar (or the version you are
> using) into Storm's lib directory.
>
> Marcelo
>
>
> 2014-07-31 23:33 GMT+02:00 Sa Li <sa...@gmail.com>:
>
>> Hi all,
>>
>> I am running kafka-spout code on the Storm server; the pom is:
>>
>> <dependency>
>> <groupId>org.apache.kafka</groupId>
>> <artifactId>kafka_2.9.2</artifactId>
>> <version>0.8.0</version>
>> <scope>provided</scope>
>>
>> <exclusions>
>> <exclusion>
>> <groupId>org.apache.zookeeper</groupId>
>> <artifactId>zookeeper</artifactId>
>> </exclusion>
>> <exclusion>
>> <groupId>log4j</groupId>
>> <artifactId>log4j</artifactId>
>> </exclusion>
>> </exclusions>
>>
>> </dependency>
>>
>> <!-- Storm-Kafka compiled -->
>>
>> <dependency>
>> <artifactId>storm-kafka</artifactId>
>> <groupId>org.apache.storm</groupId>
>> <version>0.9.2-incubating</version>
>> <scope>compile</scope>
>> </dependency>
>>
>> I can mvn package it, but when I run it
>> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# storm jar
>> target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar
>> storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>>
>>
>> I am getting this error:
>>
>> 1657 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl
>> - Starting
>> 1682 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor
>> with id a66e0c61-a951-4c1b-a43f-3fb0d12cb226 at host DO-mq-dev
>> 1698 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread
>> Thread[main,5,main] died
>> java.lang.NoClassDefFoundError: kafka/api/OffsetRequest
>> at storm.artemis.kafka.KafkaConfig.<init>(KafkaConfig.java:26)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.kafka.trident.TridentKafkaConfig.<init>(TridentKafkaConfig.java:13)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.KafkaConsumerTopology.buildTopology(KafkaConsumerTopology.java:115)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> at
>> storm.artemis.KafkaConsumerTopology.main(KafkaConsumerTopology.java:144)
>> ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
>> Caused by: java.lang.ClassNotFoundException: kafka.api.OffsetRequest
>> at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_55]
>> at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_55]
>> at java.security.AccessController.doPrivileged(Native Method)
>> ~[na:1.7.0_55]
>> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>> ~[na:1.7.0_55]
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_55]
>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>> ~[na:1.7.0_55]
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_55]
>>
>>
>>
>>
>> I tried to poke around online but could not find a solution for it; any
>> ideas?
>>
>>
>> Thanks
>>
>> Alec
>>
>>
>>
>>
>
>
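[Editor's note: the thread never spells out why copying the jar helps. The pom in the original message marks the kafka_2.9.2 dependency as `<scope>provided</scope>`, which is exactly what keeps the Kafka classes out of the jar-with-dependencies. Copying the kafka jar into Storm's lib directory, as Marcelo suggests, works around that; a sketch of the alternative, assuming the same pom, is to let Maven bundle the dependency by using compile scope:]

```xml
<!-- Sketch: the same dependency as in the original pom, but with compile
     scope so the assembly/shade plugin packs the Kafka classes into the
     fat jar. -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.9.2</artifactId>
  <version>0.8.0</version>
  <scope>compile</scope>
  <exclusions>
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
    <exclusion>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```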
Re: kafka-spout running error
Posted by Sa Li <sa...@gmail.com>.
Thank you very much, Marcelo, it indeed worked; now I can run my code without getting an error. However, another thing keeps bothering me. Following is my code:
public static class PrintStream implements Filter {
@SuppressWarnings("rawtypes")
@Override
public void prepare(Map conf, TridentOperationContext context) {
}
@Override
public void cleanup() {
}
@Override
public boolean isKeep(TridentTuple tuple) {
System.out.println(tuple);
return true;
}
}
public static StormTopology buildTopology(LocalDRPC drpc) throws IOException {
TridentTopology topology = new TridentTopology();
BrokerHosts zk = new ZkHosts("localhost");
TridentKafkaConfig spoutConf = new TridentKafkaConfig(zk, "ingest_test");
spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());
OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConf);
topology.newStream("kafka", spout)
.each(new Fields("str"),
new PrintStream()
);
return topology.build();
}
public static void main(String[] args) throws Exception {
Config conf = new Config();
conf.setDebug(true);
conf.setMaxSpoutPending(1);
conf.setMaxTaskParallelism(3);
LocalDRPC drpc = new LocalDRPC();
LocalCluster cluster = new LocalCluster();
cluster.submitTopology("kafka", conf, buildTopology(drpc));
Thread.sleep(100);
cluster.shutdown();
}
What I expect is quite simple: print out the messages I collect from a kafka producer playback process that is running separately. The topic is listed as:
root@DO-mq-dev:/etc/kafka# bin/kafka-list-topic.sh --zookeeper localhost:2181
topic: topictest partition: 0 leader: 1 replicas: 1,3,2 isr: 1,3,2
topic: topictest partition: 1 leader: 2 replicas: 2,1,3 isr: 2,1,3
topic: topictest partition: 2 leader: 3 replicas: 3,2,1 isr: 3,2,1
topic: topictest partition: 3 leader: 1 replicas: 1,2,3 isr: 1,2,3
topic: topictest partition: 4 leader: 2 replicas: 2,3,1 isr: 2,3,1
When I run the code, this is what I see on the screen; there seems to be no error, but no messages are printed out either:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
Running: java -client -Dstorm.options= -Dstorm.home=/etc/storm-0.9.0.1 -Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file= -cp /etc/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-core-0.9.0.1.jar:/etc/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/etc/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/etc/storm-0.9.0.1/lib/commons-io-1.4.jar:/etc/storm-0.9.0.1/lib/joda-time-2.0.jar:/etc/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5.jar:/etc/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/etc/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/etc/storm-0.9.0.1/lib/jetty-6.1.26.jar:/etc/storm-0.9.0.1/lib/commons-codec-1.4.jar:/etc/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/etc/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/etc/storm-0.9.0.1/lib/commons-exec-1.1.jar:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/etc/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/etc/storm-0.9.0.1/lib/minlog-1.2.jar:/etc/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/etc/storm-0.9.0.1/lib/clojure-1.4.0.jar:/etc/storm-0.9.0.1/lib/asm-4.0.jar:/etc/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/etc/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/etc/storm-0.9.0.1/lib/clout-1.0.1.jar:/etc/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/etc/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/etc/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/etc/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/etc/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/etc/storm-0.9.0.1/lib/kryo-2.17.jar:/etc/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/etc/storm-0.9.0.1/lib/compojure-1.1.3.jar:/etc/storm-0.9.0.1/lib/objenesis-1.2.jar:/etc/storm-0.9.0.1/lib/commons-logging-1.1.1.jar:/etc/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/etc/storm-0.9.0.1/lib/junit-3.8.1.jar:/etc/storm-0.9.0.1/lib/json-simple-1.1.jar:/etc/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/etc/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/etc/storm-0.9.0.1/lib/jline-0.9.94.jar:/etc/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/etc/storm-0.9.0.1/lib/gu
ava-13.0.jar:/etc/storm-0.9.0.1/lib/commons-lang-2.5.jar:/etc/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/etc/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/etc/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/etc/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/etc/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/etc/storm-0.9.0.1/lib/kafka_2.9.2-0.8.0.jar:/etc/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/etc/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/etc/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/etc/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/etc/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/etc/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/etc/storm-0.9.0.1/lib/httpcore-4.1.jar:/etc/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/etc/storm-0.9.0.1/lib/clj-time-0.4.1.jar:target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/etc/storm-0.9.0.1/conf:/etc/storm-0.9.0.1/bin -Dstorm.jar=target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/etc/storm-0.9.0.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/stuser/kafkaprj/kafka-storm-bitmap/target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
1113 [main] INFO backtype.storm.zookeeper - Starting inprocess zookeeper at port 2000 and dir /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
1216 [main] INFO backtype.storm.daemon.nimbus - Starting Nimbus with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" [6700 6701 6702 6703], "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
1219 [main] INFO backtype.storm.daemon.nimbus - Using default scheduler
1237 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
1303 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
1350 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
1417 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
1432 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
1482 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
1484 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
1532 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
1540 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
1568 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1 2 3), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
1576 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
1582 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
1590 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
1632 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id 944e6152-ca58-4d2b-8325-94ac98f43995 at host DO-mq-dev
1636 [main] INFO backtype.storm.daemon.supervisor - Starting Supervisor with conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/lib/jvm/java-7-openjdk-amd64", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "10.100.70.128", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, 
"storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (4 5 6), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
1638 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
1648 [main-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
1690 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
1740 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id e8303ca7-9cc4-4551-8387-7559fc3c53fc at host DO-mq-dev
1944 [main] INFO backtype.storm.daemon.nimbus - Received topology submission for kafka with conf {"topology.max.task.parallelism" nil, "topology.acker.executors" nil, "topology.kryo.register" {"storm.trident.topology.TransactionAttempt" nil}, "topology.kryo.decorators" (), "topology.name" "kafka", "storm.id" "kafka-1-1407257070", "topology.debug" true}
1962 [main] INFO backtype.storm.daemon.nimbus - Activating kafka: kafka-1-1407257070
2067 [main] INFO backtype.storm.scheduler.EvenScheduler - Available slots: (["944e6152-ca58-4d2b-8325-94ac98f43995" 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 3] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 4] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 5] ["e8303ca7-9cc4-4551-8387-7559fc3c53fc" 6])
2088 [main] INFO backtype.storm.daemon.nimbus - Setting new assignment for topology id kafka-1-1407257070: #backtype.storm.daemon.common.Assignment{:master-code-dir "/tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9/nimbus/stormdist/kafka-1-1407257070", :node->host {"944e6152-ca58-4d2b-8325-94ac98f43995" "DO-mq-dev"}, :executor->node+port {[3 3] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [5 5] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [4 4] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [2 2] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1], [1 1] ["944e6152-ca58-4d2b-8325-94ac98f43995" 1]}, :executor->start-time-secs {[1 1] 1407257070, [2 2] 1407257070, [4 4] 1407257070, [5 5] 1407257070, [3 3] 1407257070}}
2215 [main] INFO backtype.storm.daemon.nimbus - Shutting down master
2223 [main] INFO backtype.storm.daemon.nimbus - Shut down master
2239 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor 944e6152-ca58-4d2b-8325-94ac98f43995
2240 [Thread-6] INFO backtype.storm.event - Event manager interrupted
2241 [Thread-7] INFO backtype.storm.event - Event manager interrupted
2248 [main] INFO backtype.storm.daemon.supervisor - Shutting down supervisor e8303ca7-9cc4-4551-8387-7559fc3c53fc
2248 [Thread-9] INFO backtype.storm.event - Event manager interrupted
2248 [Thread-10] INFO backtype.storm.event - Event manager interrupted
2256 [main] INFO backtype.storm.testing - Shutting down in process zookeeper
2257 [main] INFO backtype.storm.testing - Done shutting down in process zookeeper
2258 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/cf44f174-2cda-4e67-8c85-e9f96897fcd9
2259 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/dd37d0cc-79b3-4f23-b6a5-3bcf5a9f0879
2260 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/3e515769-ebf5-4085-a6bf-35f4ad8be388
2261 [main] INFO backtype.storm.testing - Deleting temporary path /tmp/d0aeb5f4-0830-4efd-be7f-bc40d5b66912
Can anyone help me locate the problem? I really need to get past this step so that I can replace .each(printStream()) with other functions.
Thanks
Alec
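For concreteness, the check suggested in the reply below can be run against the shaded jar like this (a sketch; the jar path is the one from this build, adjust as needed):

```shell
# List the entries of the jar-with-dependencies and look for the class
# named in the NoClassDefFoundError. "jar tf" prints one entry per line,
# so piping through grep gives a quick yes/no answer.
jar tf target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar \
  | grep 'kafka/api/OffsetRequest' \
  || echo "kafka classes are NOT in the jar"
```

If the grep prints nothing and the "NOT in the jar" line appears, the Kafka classes were never packaged, which matches the runtime error below.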
On Aug 4, 2014, at 4:24 AM, Marcelo Valle <mv...@redoop.org> wrote:
> hello,
>
> You can check your application .jar with the command "jar tf" to see whether the class kafka/api/OffsetRequest.class is part of the jar.
> If it is not, you can try copying kafka_2.9.2-0.8.0.jar (or whichever version you are using) into Storm's lib directory.
>
> Marcelo
>
>
> 2014-07-31 23:33 GMT+02:00 Sa Li <sa...@gmail.com>:
> Hi all,
>
> I am running kafka-spout code on the Storm server; the relevant POM section is:
>
> <dependency>
> <groupId>org.apache.kafka</groupId>
> <artifactId>kafka_2.9.2</artifactId>
> <version>0.8.0</version>
> <scope>provided</scope>
>
> <exclusions>
> <exclusion>
> <groupId>org.apache.zookeeper</groupId>
> <artifactId>zookeeper</artifactId>
> </exclusion>
> <exclusion>
> <groupId>log4j</groupId>
> <artifactId>log4j</artifactId>
> </exclusion>
> </exclusions>
>
> </dependency>
>
> <!-- Storm-Kafka compiled -->
>
> <dependency>
> <artifactId>storm-kafka</artifactId>
> <groupId>org.apache.storm</groupId>
> <version>0.9.2-incubating</version>
> <scope>compile</scope>
> </dependency>
>
> I can build it with "mvn package", but when I run it:
> root@DO-mq-dev:/home/stuser/kafkaprj/kafka-storm-bitmap# storm jar target/kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.artemis.KafkaConsumerTopology KafkaConsumerTopology
>
>
> I get this error:
>
> 1657 [main] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
> 1682 [main] INFO backtype.storm.daemon.supervisor - Starting supervisor with id a66e0c61-a951-4c1b-a43f-3fb0d12cb226 at host DO-mq-dev
> 1698 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread Thread[main,5,main] died
> java.lang.NoClassDefFoundError: kafka/api/OffsetRequest
> at storm.artemis.kafka.KafkaConfig.<init>(KafkaConfig.java:26) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.kafka.trident.TridentKafkaConfig.<init>(TridentKafkaConfig.java:13) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.KafkaConsumerTopology.buildTopology(KafkaConsumerTopology.java:115) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> at storm.artemis.KafkaConsumerTopology.main(KafkaConsumerTopology.java:144) ~[kafka-storm-bitmap-0.0.1-SNAPSHOT-jar-with-dependencies.jar:na]
> Caused by: java.lang.ClassNotFoundException: kafka.api.OffsetRequest
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_55]
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_55]
> at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_55]
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[na:1.7.0_55]
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_55]
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[na:1.7.0_55]
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_55]
>
> I tried poking around online but could not find a solution. Any idea what is wrong?
>
>
> Thanks
>
> Alec
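A note on the likely cause: the kafka_2.9.2 dependency in the POM above is declared with scope "provided", which tells Maven to leave it out of the jar-with-dependencies — consistent with the NoClassDefFoundError at runtime. Changing the scope to "compile" and rebuilding should bundle the classes. The alternative Marcelo suggested, copying the Kafka jar into Storm's lib directory, would look roughly like this (both paths are assumptions; adjust STORM_HOME and the local Maven repository location to your install):

```shell
# Copy the Kafka jar from the local Maven repository into Storm's lib/
# directory so it ends up on the daemon/worker classpath.
# STORM_HOME and the repository path below are assumptions.
STORM_HOME=/usr/lib/storm
cp ~/.m2/repository/org/apache/kafka/kafka_2.9.2/0.8.0/kafka_2.9.2-0.8.0.jar \
   "$STORM_HOME/lib/"
```

Of the two options, fixing the scope is usually preferable, since it keeps the topology jar self-contained instead of depending on the contents of each Storm node's lib directory.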