Posted to user@storm.apache.org by Sa Li <sa...@gmail.com> on 2014/10/29 22:07:44 UTC
not writing data into DB in storm cluster, but does in localcluster
Hi, All
I am running a KafkaSpout to consume data from Kafka and write it into a
PostgreSQL DB. It works in LocalCluster, even though it is slow (we need to
diagnose why). When I submitted it to the Storm cluster, it doesn't throw
any exceptions, and I can see the topology is alive in the Storm UI, but no
data is being written into the DB. Why does that happen?
thanks
Alec
Re: not writing data into DB in storm cluster, but does in localcluster
Posted by Stephen Armstrong <st...@linqia.com>.
Have you looked in the worker logs on the machines in the cluster? The logs
you are showing us seem to be from the nimbus, which will only show errors
that happen during submission. An error that happens in a bolt during
processing will show up in the worker log of whichever machine is running
the bolt.
Also, you should exclude logback from the jar-with-dependencies you are
submitting. The error "SLF4J: Class path contains multiple SLF4J bindings."
means that Storm's provided copy of logback is conflicting with the copy in
your jar. Run "mvn dependency:tree" to show where all the dependencies are
coming from, then add an <exclusion> to whatever is pulling it in:
http://maven.apache.org/guides/introduction/introduction-to-optional-and-excludes-dependencies.html
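For example, the exclusion in your pom.xml would look roughly like this
(the "some.group:some-library" coordinates below are placeholders for
whichever dependency "mvn dependency:tree" shows pulling in logback):

```xml
<dependency>
  <groupId>some.group</groupId>          <!-- placeholder: the dependency that -->
  <artifactId>some-library</artifactId>  <!-- dependency:tree shows pulling in logback -->
  <version>1.0</version>
  <exclusions>
    <exclusion>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

It can also help to mark storm-core itself as <scope>provided</scope>, so
Storm's own jars stay out of the fat jar entirely.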
Re: not writing data into DB in storm cluster, but does in localcluster
Posted by Sa Li <sa...@gmail.com>.
What I can see on the screen after submitting the topology:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/etc/apache-storm-0.9.2-incubating/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/home/stuser/kafkaprj/kafka-storm-ingress/target/kafka-storm-ingress-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
Running: /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java -client
-Dstorm.options= -Dstorm.home=/etc/apache-storm-0.9.2-incubating
-Djava.library.path=/usr/lib/jvm/java-7-openjdk-amd64 -Dstorm.conf.file=
-cp
/etc/apache-storm-0.9.2-incubating/lib/log4j-over-slf4j-1.6.6.jar:/etc/apache-storm-0.9.2-incubating/lib/joda-time-2.0.jar:/etc/apache-storm-0.9.2-incubating/lib/commons-codec-1.6.jar:/etc/apache-storm-0.9.2-incubating/lib/curator-framework-2.4.0.jar:/etc/apache-storm-0.9.2-incubating/lib/servlet-api-2.5.jar:/etc/apache-storm-0.9.2-incubating/lib/core.incubator-0.1.0.jar:/etc/apache-storm-0.9.2-incubating/lib/jetty-6.1.26.jar:/etc/apache-storm-0.9.2-incubating/lib/httpcore-4.3.2.jar:/etc/apache-storm-0.9.2-incubating/lib/servlet-api-2.5-20081211.jar:/etc/apache-storm-0.9.2-incubating/lib/commons-exec-1.1.jar:/etc/apache-storm-0.9.2-incubating/lib/logback-classic-1.0.6.jar:/etc/apache-storm-0.9.2-incubating/lib/minlog-1.2.jar:/etc/apache-storm-0.9.2-incubating/lib/asm-4.0.jar:/etc/apache-storm-0.9.2-incubating/lib/clojure-1.5.1.jar:/etc/apache-storm-0.9.2-incubating/lib/jline-2.11.jar:/etc/apache-storm-0.9.2-incubating/lib/clj-stacktrace-0.2.4.jar:/etc/apache-storm-0.9.2-incubating/lib/netty-3.2.2.Final.jar:/etc/apache-storm-0.9.2-incubating/lib/commons-fileupload-1.2.1.jar:/etc/apache-storm-0.9.2-incubating/lib/clout-1.0.1.jar:/etc/apache-storm-0.9.2-incubating/lib/curator-client-2.4.0.jar:/etc/apache-storm-0.9.2-incubating/lib/ring-servlet-0.3.11.jar:/etc/apache-storm-0.9.2-incubating/lib/commons-io-2.4.jar:/etc/apache-storm-0.9.2-incubating/lib/ring-devel-0.3.11.jar:/etc/apache-storm-0.9.2-incubating/lib/snakeyaml-1.11.jar:/etc/apache-storm-0.9.2-incubating/lib/reflectasm-1.07-shaded.jar:/etc/apache-storm-0.9.2-incubating/lib/chill-java-0.3.5.jar:/etc/apache-storm-0.9.2-incubating/lib/ring-jetty-adapter-0.3.11.jar:/etc/apache-storm-0.9.2-incubating/lib/compojure-1.1.3.jar:/etc/apache-storm-0.9.2-incubating/lib/objenesis-1.2.jar:/etc/apache-storm-0.9.2-incubating/lib/tools.macro-0.1.0.jar:/etc/apache-storm-0.9.2-incubating/lib/httpclient-4.3.3.jar:/etc/apache-storm-0.9.2-incubating/lib/json-simple-1.1.jar:/etc/apache-storm-0.9.2-incubating/lib/guava-13.0.jar:/etc/a
pache-storm-0.9.2-incubating/lib/commons-lang-2.5.jar:/etc/apache-storm-0.9.2-incubating/lib/storm-core-0.9.2-incubating.jar:/etc/apache-storm-0.9.2-incubating/lib/ring-core-1.1.5.jar:/etc/apache-storm-0.9.2-incubating/lib/hiccup-0.3.6.jar:/etc/apache-storm-0.9.2-incubating/lib/tools.logging-0.2.3.jar:/etc/apache-storm-0.9.2-incubating/lib/carbonite-1.4.0.jar:/etc/apache-storm-0.9.2-incubating/lib/math.numeric-tower-0.0.1.jar:/etc/apache-storm-0.9.2-incubating/lib/slf4j-api-1.6.5.jar:/etc/apache-storm-0.9.2-incubating/lib/tools.cli-0.2.4.jar:/etc/apache-storm-0.9.2-incubating/lib/netty-3.6.3.Final.jar:/etc/apache-storm-0.9.2-incubating/lib/disruptor-2.10.1.jar:/etc/apache-storm-0.9.2-incubating/lib/jetty-util-6.1.26.jar:/etc/apache-storm-0.9.2-incubating/lib/commons-logging-1.1.3.jar:/etc/apache-storm-0.9.2-incubating/lib/jgrapht-core-0.9.0.jar:/etc/apache-storm-0.9.2-incubating/lib/zookeeper-3.4.5.jar:/etc/apache-storm-0.9.2-incubating/lib/logback-core-1.0.6.jar:/etc/apache-storm-0.9.2-incubating/lib/clj-time-0.4.1.jar:/etc/apache-storm-0.9.2-incubating/lib/kryo-2.21.jar:target/kafka-storm-ingress-0.0.1-SNAPSHOT-jar-with-dependencies.jar:/root/.storm:/etc/apache-storm-0.9.2-incubating/bin
-Dstorm.jar=target/kafka-storm-ingress-0.0.1-SNAPSHOT-jar-with-dependencies.jar
storm.ingress.KafkaIngressTopology topictest
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/etc/apache-storm-0.9.2-incubating/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/home/stuser/kafkaprj/kafka-storm-ingress/target/kafka-storm-ingress-0.0.1-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
Storm cluster....
487 [main] INFO backtype.storm.StormSubmitter - Jar not uploaded to
master yet. Submitting jar...
491 [main] INFO backtype.storm.StormSubmitter - Uploading topology jar
target/kafka-storm-ingress-0.0.1-SNAPSHOT-jar-with-dependencies.jar to
assigned location:
/app/storm/nimbus/inbox/stormjar-0364215c-c1e4-4405-b682-8b02cfea03ca.jar
639 [main] INFO backtype.storm.StormSubmitter - Successfully uploaded
topology jar to assigned location:
/app/storm/nimbus/inbox/stormjar-0364215c-c1e4-4405-b682-8b02cfea03ca.jar
639 [main] INFO backtype.storm.StormSubmitter - Submitting topology
topictest in distributed mode with conf {"topology.workers":10}
746 [main] INFO backtype.storm.StormSubmitter - Finished submitting
topology: topictest
Seems to be OK though... I really can't diagnose what the problem is.
Thanks
Alec
Re: not writing data into DB in storm cluster, but does in localcluster
Posted by Sa Li <sa...@gmail.com>.
Thanks, Bill. I turned on debug mode by:
Config conf = new Config();
conf.setDebug(true);
This is what I get from the logs:
root@DO-mq-dev:/etc/storm/logs# ll
total 708
drwxr-xr-x 2 root root 4096 Aug 11 13:40 ./
drwxr-xr-x 10 root root 4096 Aug 11 13:33 ../
-rw-r--r-- 1 root root 0 Aug 11 13:33 access.log
-rw-r--r-- 1 root root 0 Aug 11 13:33 metrics.log
-rw-r--r-- 1 root root 407392 Oct 28 16:08 nimbus.log
-rw-r--r-- 1 root root 270708 Oct 28 15:45 supervisor.log
-rw-r--r-- 1 root root 19965 Oct 28 16:02 ui.log
root@DO-mq-dev:/etc/storm/logs# tail nimbus.log
        at org.apache.thrift7.ProcessFunction.process(ProcessFunction.java:32) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
        at org.apache.thrift7.TBaseProcessor.process(TBaseProcessor.java:34) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
        at org.apache.thrift7.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:632) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
        at org.apache.thrift7.server.THsHaServer$Invocation.run(THsHaServer.java:201) [storm-core-0.9.2-incubating.jar:0.9.2-incubating]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_55]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_55]
        at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
2014-10-28 16:08:38 o.a.z.ZooKeeper [INFO] Session: 0x14915a47cdde1f1 closed
2014-10-28 16:08:38 o.a.z.ClientCnxn [INFO] EventThread shut down
2014-10-28 16:08:38 b.s.d.nimbus [INFO] Shut down master
This doesn't look right: I have submitted the topology to the cluster many
times, but that doesn't seem to be recorded in the log files. Any clues?
thanks
Alec
RE: not writing data into DB in storm cluster, but does in localcluster
Posted by "Brunner, Bill" <bi...@baml.com>.
Turn on debug mode and tail the log
----------------------------------------------------------------------
This message, and any attachments, is for the intended recipient(s) only, may contain information that is privileged, confidential and/or proprietary and subject to important terms and conditions available at http://www.bankofamerica.com/emaildisclaimer. If you are not the intended recipient, please delete this message.
Re: not writing data into DB in storm cluster, but does in localcluster
Posted by Sa Li <sa...@gmail.com>.
I compile the code as
mvn clean package -P cluster
and run as
storm jar target/kafka-storm-ingress-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.ingress.KafkaIngressTopology
It is running, but nothing is populated into the DB. How can I debug it in
cluster mode?
thanks
Alec
Re: not writing data into DB in storm cluster, but does in localcluster
Posted by Sa Li <sa...@gmail.com>.
Thanks for the reply, Bill. Here is how I submit it:
if (args != null && args.length > 0) {
    System.out.println("local mode....");
    cluster.submitTopology("topictest", conf, buildTridentKafkaTopology());
    Thread.sleep(1500);

    //cluster.shutdown();
    //drpc.shutdown();
}
else {
    System.out.println("Storm cluster....");
    conf.setNumWorkers(10);
    StormSubmitter.submitTopology("topictest", conf, buildTridentKafkaTopology());
}
So the number of workers is set to 10.
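As a self-contained sketch of just the mode-selection branch in that snippet
(the class and method names here are mine, and the actual Storm calls are
omitted since they need a live cluster): any command-line argument selects
the LocalCluster path, while no arguments selects StormSubmitter.

```java
public class ModeChooser {
    // Mirrors the branch above: any argument selects the LocalCluster
    // ("local mode....") path; no arguments selects the StormSubmitter
    // ("Storm cluster....") path.
    static String chooseMode(String[] args) {
        return (args != null && args.length > 0) ? "local mode...." : "Storm cluster....";
    }

    public static void main(String[] args) {
        System.out.println(chooseMode(args));
    }
}
```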
Thanks
Alec
RE: not writing data into DB in storm cluster, but does in localcluster
Posted by "Brunner, Bill" <bi...@baml.com>.
Do you have at least 1 worker defined in your topology?