Posted to user@metron.apache.org by updates on tube <ab...@gmail.com> on 2019/11/01 15:41:35 UTC

apache storm error

worker1.sip.com:6700
<http://worker1.sip.com:8000/log?file=squid-66-1572614993%2F6700%2Fworker.log>

java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
    at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:730)
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:483)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:430)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:353)
    at org.apache.metron.writer.kafka.KafkaWriter.write(KafkaWriter.java:258)
    at org.apache.metron.writer.BulkWriterComponent.flush(BulkWriterComponent.java:123)
    at org.apache.metron.writer.BulkWriterComponent.applyShouldFlush(BulkWriterComponent.java:179)
    at org.apache.metron.writer.BulkWriterComponent.write(BulkWriterComponent.java:99)
    at org.apache.metron.parsers.bolt.WriterHandler.write(WriterHandler.java:90)
    at org.apache.metron.parsers.bolt.WriterBolt.execute(WriterBolt.java:90)
    at org.apache.storm.daemon.executor$fn__10195$tuple_action_fn__10197.invoke(executor.clj:735)
    at org.apache.storm.daemon.executor$mk_task_receiver$fn__10114.invoke(executor.clj:466)
    at org.apache.storm.disruptor$clojure_handler$reify__4137.onEvent(disruptor.clj:40)
    at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:472)
    at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:451)
    at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
    at org.apache.storm.daemon.executor$fn__10195$fn__10208$fn__10263.invoke(executor.clj:855)
    at org.apache.storm.util$async_loop$fn__1221.invoke(util.clj:484)
    at clojure.lang.AFn.run(AFn.java:22)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
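
A note on the root cause: "Failed to update metadata after 60000 ms" is raised by the Kafka producer when it cannot fetch metadata for the target topic within max.block.ms (60000 ms by default). That usually means the broker is unreachable from the worker node, the broker list is wrong, or the topic does not exist, rather than anything Metron-specific. One quick check is to produce a test message from the worker host; the broker address and the topic name below are assumptions for a typical HCP install (Metron parsers write to the "enrichments" topic by default), so substitute your own:

    # Send one test message straight to the broker. HDP/HCP Kafka usually
    # listens on port 6667; a hang followed by the same TimeoutException
    # points at connectivity or a missing topic.
    echo test | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
        --broker-list node1:6667 --topic enrichments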

Re: apache storm error

Posted by updates on tube <ab...@gmail.com>.
We were trying to add a new telemetry data source by following the https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source guide, but I see this error on the ParserBolt in the Storm UI. I installed using Ambari and HCP.
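
That guide has you create a Kafka topic for the new sensor before starting its parser, so it is worth confirming the topic actually exists and has a leader. A sketch, with the sensor name taken from the "squid-66" topology in the log above and the ZooKeeper host as an assumption:

    # Describe the sensor topic created in the telemetry guide; a missing
    # topic or "Leader: none" in the output would explain the metadata timeout.
    /usr/hdp/current/kafka-broker/bin/kafka-topics.sh \
        --zookeeper node1:2181 --describe --topic squid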

On 2019/11/01 15:41:35, updates on tube <ab...@gmail.com> wrote: 
> [stack trace snipped; see the original message above]

Re: apache storm error

Posted by Nick Allen <ni...@nickallen.org>.
This indicates that your Parser topology is not keeping up with the amount of telemetry that it is consuming. You need to do some performance tuning of the topology.

   - How much telemetry are you trying to parse (in events per second)?
   - If you just send a low volume of telemetry does it work as-is?
   - What are your current Parser settings?
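
On that last question: the parser settings live in the sensor parser configuration stored in ZooKeeper, and the parallelism and worker counts there are the usual tuning knobs. A minimal sketch for the squid sensor, assuming a Metron release with these top-level tuning fields (field names as in the Metron parsers documentation; the values are illustrative, not recommendations):

    {
      "parserClassName": "org.apache.metron.parsers.GrokParser",
      "sensorTopic": "squid",
      "spoutParallelism": 1,
      "parserParallelism": 4,
      "parserNumTasks": 4,
      "numWorkers": 2,
      "parserConfig": {
        "grokPath": "/patterns/squid",
        "patternLabel": "SQUID_DELIMITED"
      }
    }

A changed config can be pushed back to ZooKeeper with $METRON_HOME/bin/zk_load_configs.sh -m PUSH before restarting the topology.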

On Mon, Nov 4, 2019 at 1:05 AM updates on tube <ab...@gmail.com> wrote:

> still the same
>
> On 2019/11/01 16:52:08, "Yerex, Tom" <to...@ubc.ca> wrote:
> > [earlier reply and stack trace snipped; see Tom Yerex's message below]

Re: apache storm error

Posted by updates on tube <ab...@gmail.com>.
still the same

On 2019/11/01 16:52:08, "Yerex, Tom" <to...@ubc.ca> wrote: 
> [Tom's reply snipped; quoted in full in his message below]

Re: apache storm error

Posted by "Yerex, Tom" <to...@ubc.ca>.
I am working from memory so I am not entirely certain, but I think we had a similar error that was resolved by increasing the JVM heap for Elasticsearch from the default. In Ambari, the setting is “heap_size” under “Advanced elastic-jvm-options”; in our environment it is set to 2048m.
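
For context, that Ambari “heap_size” value ends up as the minimum and maximum heap flags in Elasticsearch's jvm.options (a sketch, assuming the stock HCP management pack template; Elasticsearch recommends keeping the two equal):

    # jvm.options entries generated from heap_size = 2048m
    -Xms2048m
    -Xmx2048m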

From: updates on tube <ab...@gmail.com>
Reply-To: "user@metron.apache.org" <us...@metron.apache.org>
Date: Friday, November 1, 2019 at 8:42 AM
To: "user@metron.apache.org" <us...@metron.apache.org>
Subject: apache storm error

[stack trace snipped; see the original message at the top of the thread]