Posted to user@metron.apache.org by Gaurav Bapat <ga...@gmail.com> on 2018/01/11 06:43:54 UTC

Getting Syslogs to Metron

Hello everyone, I have deployed Metron on a single-node machine and I would
like to know how to get syslogs from NiFi into the Kibana dashboard.

I have created a Kafka topic named "cef" and I can see that the topic exists
in the Metron configuration, but I am unable to connect it to Kibana.
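
In case it helps, one way to sanity-check the Kafka side of the pipeline is to list the topics and push a single test event. This is a sketch assuming an HDP-style install; the ZooKeeper/broker addresses (`node1:2181`, `node1:6667`) and the sample CEF line are placeholders, not values from this thread:

```shell
# Confirm the "cef" topic exists (ZooKeeper address is a placeholder):
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh \
  --zookeeper node1:2181 --list

# Publish one sample CEF-formatted syslog event to exercise the parser
# (broker address is a placeholder for your own node):
echo 'CEF:0|Vendor|Product|1.0|100|test event|5|src=10.0.0.1 dst=10.0.0.2' | \
  /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
    --broker-list node1:6667 --topic cef
```

If the test event reaches the cef parser topology in Storm but never shows up in Kibana, the problem is usually downstream, in the indexing topology or the Elasticsearch index itself.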

Need Help!!

Re: Getting Syslogs to Metron

Posted by Farrukh Naveed Anjum <an...@gmail.com>.
I can confirm every parameter is intact, yet I am unable to get any syslog
in, as the indexing bolt is not creating the Elasticsearch index using the
CEF parser.

Kindly advise what I can do to get rid of that exception and get it going.
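
For what it's worth, the "Default and (likely) unoptimized writer config" warning in the logs below typically means no per-sensor indexing configuration has been pushed to ZooKeeper, so the writers fall back to defaults. A minimal sketch of what an indexing config for the cef sensor might look like (the batch sizes here are illustrative, not tuned values):

```json
{
  "elasticsearch": {
    "index": "cef",
    "batchSize": 5,
    "enabled": true
  },
  "hdfs": {
    "index": "cef",
    "batchSize": 5,
    "enabled": true
  }
}
```

This would be saved as the cef sensor's indexing config and pushed to ZooKeeper (e.g. with Metron's zk_load_configs.sh script); the metron-indexing page linked later in this thread describes the exact fields.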

On Mon, Jan 22, 2018 at 5:05 PM, Otto Fowler <ot...@gmail.com>
wrote:

> https://metron.apache.org/current-book/metron-platform/
> metron-indexing/index.html
>
>
> On January 22, 2018 at 02:48:20, Farrukh Naveed Anjum (
> anjum.farrukh@gmail.com) wrote:
>
> Hi,
>
> It seems like the indexing topic is giving the following errors.
>
> Any idea?
>
> On Mon, Jan 22, 2018 at 12:40 PM, Farrukh Naveed Anjum <
> anjum.farrukh@gmail.com> wrote:
>
>> Hi,
>>
>> I looked into the indexing topic; it seems like it's giving the following errors:
>>
>>        at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
>> 2018-01-16 02:34:16.543 o.a.s.d.executor [ERROR]
>> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
>>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
>> 2018-01-16 02:34:16.543 o.a.s.d.executor [ERROR]
>> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
>>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
>> [The same hdfs and elasticsearch writer warnings, with identical stack traces, repeat ten more times between 02:34:16 and 03:04:16 on 2018-01-16.]
>> 2018-01-16 04:07:19.924 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-16 04:07:19.956 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-16 04:07:19.956 o.a.m.w.h.SourceHandler [INFO] File rotation took 32 ms
>> 2018-01-16 04:07:23.544 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-16 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-16 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] File rotation took 17 ms
>> 2018-01-16 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-16 04:07:36.409 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-16 04:07:36.409 o.a.m.w.h.SourceHandler [INFO] File rotation took 3 ms
>> 2018-01-16 04:08:02.265 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-16 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-16 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] File rotation took 24 ms
>> 2018-01-17 01:24:20.876 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-17 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-17 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] File rotation took 3 ms
>> 2018-01-17 04:07:19.923 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-17 04:07:19.958 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-17 04:07:19.958 o.a.m.w.h.SourceHandler [INFO] File rotation took 35 ms
>> 2018-01-17 04:07:23.544 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-17 04:07:23.546 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-17 04:07:23.546 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
>> 2018-01-17 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-17 04:07:36.422 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-17 04:07:36.422 o.a.m.w.h.SourceHandler [INFO] File rotation took 16 ms
>> 2018-01-17 04:08:02.264 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-17 04:08:02.265 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-17 04:08:02.266 o.a.m.w.h.SourceHandler [INFO] File rotation took 1 ms
>> [The same hdfs and elasticsearch writer warnings, with identical stack traces, repeat four more times at 2018-01-17 09:49:16.]
>> 2018-01-18 01:24:20.876 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-18 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-18 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] File rotation took 3 ms
>> 2018-01-18 04:07:19.923 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-18 04:07:19.945 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-18 04:07:19.946 o.a.m.w.h.SourceHandler [INFO] File rotation took 23 ms
>> 2018-01-18 04:07:23.544 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-18 04:07:23.570 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-18 04:07:23.570 o.a.m.w.h.SourceHandler [INFO] File rotation took 26 ms
>> 2018-01-18 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-18 04:07:36.407 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-18 04:07:36.407 o.a.m.w.h.SourceHandler [INFO] File rotation took 1 ms
>> 2018-01-18 04:08:02.264 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-18 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-18 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] File rotation took 25 ms
>> 2018-01-18 09:46:50.425 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-18 09:46:50.460 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-18 09:46:50.460 o.a.m.w.h.SourceHandler [INFO] File rotation took 35 ms
>> 2018-01-18 09:49:16.568 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-18 09:49:16.614 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-18 09:49:16.614 o.a.m.w.h.SourceHandler [INFO] File rotation took 46 ms
>> [The same hdfs and elasticsearch writer warnings repeat twice more at 2018-01-18 17:19:16; the paste ends mid-trace.]
>>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
>> 2018-01-19 01:24:20.877 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-19 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-19 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
>> 2018-01-19 04:07:19.923 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-19 04:07:19.939 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-19 04:07:19.939 o.a.m.w.h.SourceHandler [INFO] File rotation took 16 ms
>> 2018-01-19 04:07:23.545 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-19 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-19 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] File rotation took 16 ms
>> 2018-01-19 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-19 04:07:36.429 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-19 04:07:36.429 o.a.m.w.h.SourceHandler [INFO] File rotation took 23 ms
>> 2018-01-19 04:08:02.264 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-19 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-19 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] File rotation took 25 ms
>> 2018-01-19 09:46:50.425 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-19 09:46:50.442 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-19 09:46:50.442 o.a.m.w.h.SourceHandler [INFO] File rotation took 17 ms
>> 2018-01-19 09:49:16.568 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-19 09:49:16.586 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-19 09:49:16.586 o.a.m.w.h.SourceHandler [INFO] File rotation took 18 ms
>> 2018-01-19 17:08:22.126 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-19 17:08:22.142 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-19 17:08:22.142 o.a.m.w.h.SourceHandler [INFO] File rotation took 16 ms
>> 2018-01-19 17:19:16.556 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-19 17:19:16.582 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-19 17:19:16.582 o.a.m.w.h.SourceHandler [INFO] File rotation took 26 ms
>> 2018-01-20 01:24:20.876 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-20 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-20 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] File rotation took 3 ms
>> 2018-01-20 04:07:19.923 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-20 04:07:19.962 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-20 04:07:19.962 o.a.m.w.h.SourceHandler [INFO] File rotation took 38 ms
>> 2018-01-20 04:07:23.544 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-20 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-20 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] File rotation took 17 ms
>> 2018-01-20 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-20 04:07:36.407 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-20 04:07:36.408 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
>> 2018-01-20 04:08:02.264 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-20 04:08:02.290 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-20 04:08:02.290 o.a.m.w.h.SourceHandler [INFO] File rotation took 26 ms
>> 2018-01-20 09:34:16.559 o.a.s.d.executor [ERROR]
>> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
>>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
>> 2018-01-20 09:34:16.559 o.a.s.d.executor [ERROR]
>> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
>>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
>> 2018-01-20 09:46:50.425 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-20 09:46:50.445 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-20 09:46:50.446 o.a.m.w.h.SourceHandler [INFO] File rotation took 21 ms
>> 2018-01-20 09:49:16.568 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-20 09:49:16.570 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-20 09:49:16.570 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
>> 2018-01-20 10:19:16.560 o.a.s.d.executor [ERROR]
>> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
>>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
>> 2018-01-20 10:19:16.560 o.a.s.d.executor [ERROR]
>> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
>>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
>> 2018-01-20 17:08:22.127 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-20 17:08:22.129 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-20 17:08:22.129 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
>> 2018-01-20 17:19:16.556 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-20 17:19:16.558 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-20 17:19:16.558 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
>> 2018-01-21 01:24:20.876 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-21 01:24:20.912 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-21 01:24:20.912 o.a.m.w.h.SourceHandler [INFO] File rotation took 32 ms
>> 2018-01-21 04:07:19.923 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-21 04:07:19.949 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-21 04:07:19.950 o.a.m.w.h.SourceHandler [INFO] File rotation took 26 ms
>> 2018-01-21 04:07:23.544 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-21 04:07:23.545 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-21 04:07:23.545 o.a.m.w.h.SourceHandler [INFO] File rotation took 1 ms
>> 2018-01-21 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-21 04:07:36.429 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-21 04:07:36.429 o.a.m.w.h.SourceHandler [INFO] File rotation took 23 ms
>> 2018-01-21 04:08:02.264 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
>> 2018-01-21 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
>> 2018-01-21 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] File rotation took 25 ms
>> 2018-01-21 07:34:16.569 o.a.s.d.executor [ERROR]
>> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
>>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
>> 2018-01-21 07:34:16.573 o.a.s.d.executor [ERROR]
>> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
>>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
>> 2018-01-21 07:34:16.593 o.a.s.d.executor [ERROR]
>> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
>>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
>> 2018-01-21 07:34:16.773 o.a.s.d.executor [ERROR]
>> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
>>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>>
>>
>> On Mon, Jan 22, 2018 at 12:36 PM, Farrukh Naveed Anjum <
>> anjum.farrukh@gmail.com> wrote:
>>
>>> Hi Guys,
>>>
>> It seems we are able to make the NiFi connection, and data is indeed
>> going through the Kafka topic, yet using the CEF parser (syslogs) we are
>> unable to create the Elasticsearch index.
>>>
>>>
>>>
>>>
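To confirm whether the index is really missing rather than just not visible in Kibana, Elasticsearch can be queried directly. A sketch only: the host and the `cef_index_*` naming assume Metron full-dev defaults (Elasticsearch on node1:9200), so adjust both for a different deployment.

```shell
# A sketch: host and index naming assume Metron full-dev defaults
# (Elasticsearch on node1:9200, indices named like cef_index_<timestamp>).
ES=node1:9200

# List every index; a working CEF pipeline shows cef_index_* entries here.
curl -s "http://$ES/_cat/indices?v" || echo "Elasticsearch unreachable at $ES"

# Count documents in any cef index. An index_not_found error confirms the
# indexing topology never wrote a single document.
curl -s "http://$ES/cef_index_*/_count?pretty" || echo "query failed"
```

If the index list is empty of `cef_index_*` entries, the problem is in or before the indexing topology, not in Kibana.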
>>> On Mon, Jan 22, 2018 at 12:32 PM, Farrukh Naveed Anjum <
>>> anjum.farrukh@gmail.com> wrote:
>>>
>>>> Hi, Gaurav,
>>>>
>>>> Did you solve it? I am also following the same use case for syslog over
>>>> UDP (rsyslog).
>>>>
>>>> It seems like data is coming into the Kafka topic; as you can see, it is
>>>> showing up.
>>>>
>>>> But the Elasticsearch index is not created.
>>>>
>>>>
>>>>
>>>> On Tue, Jan 16, 2018 at 12:37 PM, Gaurav Bapat <ga...@gmail.com>
>>>> wrote:
>>>>
>>>>> But I cant find how to configure it
>>>>>
>>>>> On 16 January 2018 at 11:38, Farrukh Naveed Anjum <
>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>
>>>>>> Yes, do configure it as per the Metron reference use case.
>>>>>>
>>>>>> On Tue, Jan 16, 2018 at 8:35 AM, Gaurav Bapat <ga...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Kyle,
>>>>>>>
>>>>>>> I saw that I can ping from my OS to the VM and from the VM to the OS.
>>>>>>> It looks like this is a Kafka or ZooKeeper environment variable setup
>>>>>>> issue; do I need to configure that over vagrant ssh?
>>>>>>>
>>>>>>> On 16 January 2018 at 08:59, Gaurav Bapat <ga...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hey Kyle,
>>>>>>>>
>>>>>>>> I am running NiFi not on Ambari but on localhost:8089. I can ping
>>>>>>>> from my OS terminal to node1 but can't ping from node1 to my OS
>>>>>>>> terminal. I have attached a few screenshots and the contents of
>>>>>>>> /etc/hosts.
>>>>>>>>
>>>>>>>> Thank You!
>>>>>>>>
>>>>>>>> On 15 January 2018 at 20:04, Kyle Richardson <
>>>>>>>> kylerichardson2@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> It looks like your NiFi instance is running on your laptop/desktop
>>>>>>>>> (e.g. the VM host). My guess would be that name resolution or
>>>>>>>>> networking is not properly configured between the host and the
>>>>>>>>> guest, preventing the data from getting from NiFi to Kafka. What are
>>>>>>>>> the contents of /etc/hosts on the VM host? Can you ping node1 from
>>>>>>>>> the VM host by name and by IP address?
>>>>>>>>>
>>>>>>>>> -Kyle
>>>>>>>>>
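Concretely, these checks look roughly like the sketch below, run on the VM host. The IP 192.168.66.121 is an assumption (the usual full-dev Vagrant default for node1); confirm yours with `vagrant ssh-config` before trusting it.

```shell
# A sketch: 192.168.66.121 is assumed to be the full-dev Vagrant default for
# node1 -- confirm with `vagrant ssh-config` first.
NODE1_IP=192.168.66.121

# The VM host needs a hosts entry so "node1" resolves:
grep -w node1 /etc/hosts || echo "$NODE1_IP  node1   # add this line (sudo)"

# Then check reachability by name and by IP, from the host:
ping -c 2 -W 2 node1        || echo "name resolution or route to node1 broken"
ping -c 2 -W 2 "$NODE1_IP"  || echo "no route to the VM at all"
```

If ping by IP works but ping by name fails, only the hosts entry is missing; if both fail, the Vagrant private network itself is the problem.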
>>>>>>>>> On Mon, Jan 15, 2018 at 6:55 AM, Gaurav Bapat <
>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> "Failed while waiting for acks from Kafka" is what I am getting;
>>>>>>>>>> am I missing some Kafka configuration?
>>>>>>>>>>
>>>>>>>>>> On 15 January 2018 at 16:50, Gaurav Bapat <ga...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Farrukh,
>>>>>>>>>>>
>>>>>>>>>>> I can't find any folder for my topic.
>>>>>>>>>>>
>>>>>>>>>>> On 15 January 2018 at 16:33, Farrukh Naveed Anjum <
>>>>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Can you check /kafka-logs on your VM box? It should have a
>>>>>>>>>>>> folder named after your topic. Can you check if it is there?
>>>>>>>>>>>>
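That check can be run as a couple of commands. A sketch only: the log directory and Kafka script paths assume the HDP install on the full-dev VM, so adjust them for your layout.

```shell
# A sketch: paths assume the HDP Kafka install on the full-dev VM; adjust them.
TOPIC=cef

# 1. Each topic gets one directory per partition under the Kafka log dir:
ls -d /kafka-logs/${TOPIC}-* 2>/dev/null || echo "no on-disk data for '$TOPIC' yet"

# 2. Cross-check that the topic is registered with ZooKeeper at all:
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh \
  --zookeeper node1:2181 --list 2>/dev/null | grep -w "$TOPIC" \
  || echo "topic '$TOPIC' not registered in ZooKeeper"
```

A topic that is registered but has no partition directories with data means nothing has ever been produced to it.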
>>>>>>>>>>>> On Mon, Jan 15, 2018 at 3:49 PM, Gaurav Bapat <
>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> I am not getting data into my Kafka topic
>>>>>>>>>>>>>
>>>>>>>>>>>>> I am using an i5 4-core processor with 16 GB RAM, and I have
>>>>>>>>>>>>> allocated 12 GB RAM to my Vagrant VM.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I don't understand how to configure the Kafka broker, because
>>>>>>>>>>>>> it is giving me "failed while waiting for acks from Kafka".
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
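To take NiFi out of the picture, the broker can be exercised directly with the console producer and consumer. A sketch only: the broker and ZooKeeper addresses assume full-dev defaults (node1:6667 and node1:2181), and the script paths assume an HDP install.

```shell
# A sketch: broker/ZooKeeper addresses assume full-dev defaults; adjust them.
BROKER=node1:6667
TOPIC=cef
KAFKA_BIN=/usr/hdp/current/kafka-broker/bin

# Produce one message straight to the broker, bypassing NiFi entirely...
echo "test message" | "$KAFKA_BIN/kafka-console-producer.sh" \
  --broker-list "$BROKER" --topic "$TOPIC" \
  || echo "produce failed: broker unreachable, or acks never arrive"

# ...then read it back. If producing works here but NiFi still times out,
# look at advertised.listeners and host name resolution on the NiFi side.
"$KAFKA_BIN/kafka-console-consumer.sh" --zookeeper node1:2181 \
  --topic "$TOPIC" --from-beginning --max-messages 1 --timeout-ms 10000 \
  || echo "consume failed"
```

"Failed while waiting for acks" from an external producer usually points at the broker advertising a hostname the producer cannot resolve or reach.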
>>>>>>>>>>>>> On 15 January 2018 at 16:10, Farrukh Naveed Anjum <
>>>>>>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Can you tell me, is your Kafka topic getting data? What are
>>>>>>>>>>>>>> your machine specifications?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Jan 15, 2018 at 2:56 PM, Gaurav Bapat <
>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks Farrukh,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I am not getting data in my Kafka topic even after creating
>>>>>>>>>>>>>>> one. The issue seems to be with the broker config; how do I
>>>>>>>>>>>>>>> configure the Kafka and ZooKeeper ports?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On 15 January 2018 at 13:23, Farrukh Naveed Anjum <
>>>>>>>>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I had a similar issue; it turned out to be an issue in Storm.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> No worker was assigned to the topology. All you need is to
>>>>>>>>>>>>>>>> add an additional port in
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>  Ambari -> Storm -> Configs -> supervisor.slots.ports by
>>>>>>>>>>>>>>>> appending an additional port to the list
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> https://community.hortonworks.com/questions/32499/no-workers-in-storm-for-squid-topology.html
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I had a similar issue and finally got it fixed.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mon, Jan 15, 2018 at 8:45 AM, Gaurav Bapat <
>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Storm UI
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On 15 January 2018 at 08:59, Gaurav Bapat <
>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Hey Jon,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I have the Storm UI, and the logs are coming from
>>>>>>>>>>>>>>>>>> firewalls, servers, etc. on other machines (HP ArcSight
>>>>>>>>>>>>>>>>>> Logger).
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I have attached the NiFi screenshots; my logs are coming
>>>>>>>>>>>>>>>>>> in, but there is some error with Kafka and I am having
>>>>>>>>>>>>>>>>>> issues configuring the Kafka broker.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On 12 January 2018 at 18:14, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> In Ambari, under Storm, you can find the UI under quick
>>>>>>>>>>>>>>>>>>> links at the top.  That said, the issue seems to be
>>>>>>>>>>>>>>>>>>> upstream of Metron, in NiFi.  That is something I can't
>>>>>>>>>>>>>>>>>>> help with as much, but if you can share the ListenSyslog
>>>>>>>>>>>>>>>>>>> processor config, that would be a start.  Also, share the
>>>>>>>>>>>>>>>>>>> config of the thing that is sending the syslog as well
>>>>>>>>>>>>>>>>>>> (are these local syslogs, is that machine aggregating
>>>>>>>>>>>>>>>>>>> syslog from other machines, etc.).  Thanks,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <
>>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I have created a Kafka topic "cef", but my ListenSyslog
>>>>>>>>>>>>>>>>>>>> processor is not getting any logs.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Also, I checked using tcpdump -i and the logs are
>>>>>>>>>>>>>>>>>>>> reaching my machine, but ListenSyslog is not getting them.
>>>>>>>>>>>>>>>>>>>>
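When tcpdump sees traffic that the processor never receives, comparing what is on the wire with what is actually bound to the port usually narrows it down. A sketch only: UDP port 514 is an assumption matching a default rsyslog forward, so match it to the ListenSyslog processor's configured port and protocol.

```shell
# A sketch: assumes syslog arrives over UDP on port 514; match this to your
# rsyslog forward rule and the ListenSyslog processor's configured port.
PORT=514

# 1. Are packets reaching this host on that port? (needs root)
timeout 10 tcpdump -i any -n -c 5 "udp port $PORT" \
  || echo "no packets captured (or tcpdump needs root)"

# 2. Is anything bound to that port? If ListenSyslog is on a different port,
#    or on TCP instead of UDP, the packets arrive but nothing consumes them.
netstat -lnup 2>/dev/null | grep ":$PORT " || echo "nothing listening on UDP $PORT"
```

A mismatch between the captured port/protocol and the listening socket explains exactly this "tcpdump sees it, the processor does not" symptom.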
>>>>>>>>>>>>>>>>>>>> On 12 January 2018 at 11:13, Gaurav Bapat <
>>>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> [root@metron incubator-metron]#
>>>>>>>>>>>>>>>>>>>>> ./metron-deployment/scripts/platform-info.sh
>>>>>>>>>>>>>>>>>>>>> Metron 0.4.3
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>> * master
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>>>>>>>>>>>>>>>>>>>> Author: cstella <ce...@gmail.com>
>>>>>>>>>>>>>>>>>>>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>     METRON-1379: Add an OBJECT_GET stellar function
>>>>>>>>>>>>>>>>>>>>> closes apache/incubator-metron#880
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile
>>>>>>>>>>>>>>>>>>>>> | 2 +-
>>>>>>>>>>>>>>>>>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>> ansible 2.0.0.2
>>>>>>>>>>>>>>>>>>>>>   config file =
>>>>>>>>>>>>>>>>>>>>>   configured module search path = Default w/o overrides
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>> Vagrant 1.9.6
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>> Python 2.7.5
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>>>>>>>>>>>>>>>>>>>> 2015-11-10T22:11:47+05:30)
>>>>>>>>>>>>>>>>>>>>> Maven home: /opt/maven/current
>>>>>>>>>>>>>>>>>>>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>>>>>>>>>>>>>>>>>>>> Java home: /opt/jdk1.8.0_151/jre
>>>>>>>>>>>>>>>>>>>>> Default locale: en_US, platform encoding: UTF-8
>>>>>>>>>>>>>>>>>>>>> OS name: "linux", version:
>>>>>>>>>>>>>>>>>>>>> "3.10.0-693.11.6.el7.x86_64", arch: "amd64", family: "unix"
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>> Docker version 1.12.6, build ec8512b/1.12.6
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>> node
>>>>>>>>>>>>>>>>>>>>> v8.9.3
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>> npm
>>>>>>>>>>>>>>>>>>>>> 5.5.1
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>>>>>>>>>>>>>>>>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>>>>>>>>>>>>>>>>>> This is free software; see the source for copying
>>>>>>>>>>>>>>>>>>>>> conditions.  There is NO
>>>>>>>>>>>>>>>>>>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR
>>>>>>>>>>>>>>>>>>>>> A PARTICULAR PURPOSE.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>> Compiler is C++11 compliant
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP
>>>>>>>>>>>>>>>>>>>>> Thu Jan 4 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>> Total System Memory = 15773.3 MB
>>>>>>>>>>>>>>>>>>>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @
>>>>>>>>>>>>>>>>>>>>> 3.10GHz
>>>>>>>>>>>>>>>>>>>>> Processor Speed: 3320.875 MHz
>>>>>>>>>>>>>>>>>>>>> Processor Speed: 3307.191 MHz
>>>>>>>>>>>>>>>>>>>>> Processor Speed: 3376.699 MHz
>>>>>>>>>>>>>>>>>>>>> Processor Speed: 3338.917 MHz
>>>>>>>>>>>>>>>>>>>>> Total Physical Processors: 4
>>>>>>>>>>>>>>>>>>>>> Total cores: 16
>>>>>>>>>>>>>>>>>>>>> Disk information:
>>>>>>>>>>>>>>>>>>>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>>>>>>>>>>>>>>>>>>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>>>>>>>>>>>>>>>>>>>> /dev/sda2               1022M   12K 1022M   1%
>>>>>>>>>>>>>>>>>>>>> /boot/efi
>>>>>>>>>>>>>>>>>>>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>>>>>>>>>>>>>>>>>>>> This CPU appears to support virtualization
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On 12 January 2018 at 09:25, Gaurav Bapat <
>>>>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Hey Jon,
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Appreciate your timely reply.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> I have gone through your answer, but I still can't
>>>>>>>>>>>>>>>>>>>>>> figure out how to do parsing/indexing in the Storm UI,
>>>>>>>>>>>>>>>>>>>>>> as I can't find any option for it.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Is there any other UI to do parsing/indexing?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> So, you created a new cef topic, and set up the
>>>>>>>>>>>>>>>>>>>>>>> appropriate parser config for it (if not, this
>>>>>>>>>>>>>>>>>>>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>>>>>>>>>>>>>>>>>>>> may be helpful)?  If so:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Here are some basic troubleshooting steps:
>>>>>>>>>>>>>>>>>>>>>>> 1.  Validate that the logs are getting onto the
>>>>>>>>>>>>>>>>>>>>>>> kafka topic that you are sending to.  If they aren't there, the problem is
>>>>>>>>>>>>>>>>>>>>>>> upstream from Metron.
>>>>>>>>>>>>>>>>>>>>>>> 2.  If they are getting onto the kafka topic they
>>>>>>>>>>>>>>>>>>>>>>> are being directly sent to, check the indexing kafka topic for an enriched
>>>>>>>>>>>>>>>>>>>>>>> version of those same logs.
>>>>>>>>>>>>>>>>>>>>>>> 3.  Do a binary search of the various components
>>>>>>>>>>>>>>>>>>>>>>> involved with ingest.
>>>>>>>>>>>>>>>>>>>>>>>     a. If the logs are *not* on the indexing kafka
>>>>>>>>>>>>>>>>>>>>>>> topic, check the enrichments topic for those logs.
>>>>>>>>>>>>>>>>>>>>>>>     b. If the logs are *not* on the enrichments
>>>>>>>>>>>>>>>>>>>>>>> topic, check the parser storm topology.
>>>>>>>>>>>>>>>>>>>>>>>     c. If the logs are on the enrichments topic but
>>>>>>>>>>>>>>>>>>>>>>> *not* the indexing topic, check the enrichments storm
>>>>>>>>>>>>>>>>>>>>>>> topology.
>>>>>>>>>>>>>>>>>>>>>>>     d. If the logs are on the indexing topic but *not*
>>>>>>>>>>>>>>>>>>>>>>> in Kibana, check the indexing storm topology.
>>>>>>>>>>>>>>>>>>>>>>>     e. If the logs are on the indexing topic and the
>>>>>>>>>>>>>>>>>>>>>>> indexing storm topology is in good shape, check
>>>>>>>>>>>>>>>>>>>>>>> Elasticsearch directly.
>>>>>>>>>>>>>>>>>>>>>>> 4.  You should have identified where the issue is at
>>>>>>>>>>>>>>>>>>>>>>> this point.  Report back here with what you observed, any relevant error
>>>>>>>>>>>>>>>>>>>>>>> messages, etc.
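As a concrete sketch of steps 1-3 above, the topic checks can be done with the console consumer that ships with Kafka. The install path, broker address/port, and topic names below are assumptions based on a default HDP/Metron install ("enrichments" and "indexing" are Metron's default topic names) and may differ on your cluster:

```shell
# Assumed HDP install path and broker port (6667); adjust for your environment.
KAFKA_BIN=/usr/hdp/current/kafka-broker/bin
BROKER=node1:6667

# Step 1: are raw events arriving on the sensor topic ("cef" here)?
$KAFKA_BIN/kafka-console-consumer.sh --bootstrap-server $BROKER --topic cef

# Steps 2/3: look for the enriched copies on the downstream topics.
$KAFKA_BIN/kafka-console-consumer.sh --bootstrap-server $BROKER --topic enrichments
$KAFKA_BIN/kafka-console-consumer.sh --bootstrap-server $BROKER --topic indexing
```

If messages appear on one topic but not the next, the component between them is where to focus.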
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Side note:  We should document a decision tree for
>>>>>>>>>>>>>>>>>>>>>>> troubleshooting data ingest.  It is fairly straightforward and makes me
>>>>>>>>>>>>>>>>>>>>>>> wonder if we already have this somewhere and I'm not aware of it?  It would
>>>>>>>>>>>>>>>>>>>>>>> also be a good place to put pointers to some common errors.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <
>>>>>>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Hello everyone, I have deployed Metron on a
>>>>>>>>>>>>>>>>>>>>>>>> single-node machine, and I would like to know how I
>>>>>>>>>>>>>>>>>>>>>>>> can get syslogs from NiFi into the Kibana dashboard.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> I have created a Kafka topic named "cef", and I can
>>>>>>>>>>>>>>>>>>>>>>>> see that the topic exists in the Metron
>>>>>>>>>>>>>>>>>>>>>>>> configuration, but I am unable to connect it to
>>>>>>>>>>>>>>>>>>>>>>>> Kibana.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Need Help!!
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> With Regards
>>>>>>>>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>> With Regards
>>>>>>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> With Regards
>>>>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> With Regards
>>>>>> Farrukh Naveed Anjum
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> With Regards
>>>> Farrukh Naveed Anjum
>>>>
>>>
>>>
>>>
>>> --
>>> With Regards
>>> Farrukh Naveed Anjum
>>>
>>
>>
>>
>> --
>> With Regards
>> Farrukh Naveed Anjum
>>
>
>
>
> --
> With Regards
> Farrukh Naveed Anjum
>
>


-- 
With Regards
Farrukh Naveed Anjum

Re: Getting Syslogs to Metron

Posted by Otto Fowler <ot...@gmail.com>.
https://metron.apache.org/current-book/metron-platform/metron-indexing/index.html
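The repeated "Default and (likely) unoptimized writer config" warnings in the logs below mean the indexing topology found no per-sensor indexing configuration and fell back to defaults. Per the metron-indexing docs linked above, the per-sensor indexing config is a JSON file keyed by writer name; the sketch below is illustrative only (the index name and batch sizes are assumptions, not values taken from this thread):

```json
{
  "hdfs": {
    "index": "cef",
    "batchSize": 5,
    "enabled": true
  },
  "elasticsearch": {
    "index": "cef",
    "batchSize": 5,
    "enabled": true
  }
}
```

Such a file would typically be pushed to ZooKeeper with Metron's zk_load_configs.sh before restarting the indexing topology.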


On January 22, 2018 at 02:48:20, Farrukh Naveed Anjum (
anjum.farrukh@gmail.com) wrote:

Hi,

It seems like the indexing topic is giving the following errors.

Any ideas?

On Mon, Jan 22, 2018 at 12:40 PM, Farrukh Naveed Anjum <
anjum.farrukh@gmail.com> wrote:

> Hi,
>
> I looked into the indexing topic; it seems like it's giving the following errors:
>
>        at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 02:34:16.543 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 02:34:16.543 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 02:34:16.547 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 02:34:16.581 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 02:49:16.516 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 02:49:16.516 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 02:49:16.520 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 02:49:16.521 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 03:04:16.518 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 03:04:16.518 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 03:04:16.525 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 03:04:16.555 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 04:07:19.924 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-16 04:07:19.956 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-16 04:07:19.956 o.a.m.w.h.SourceHandler [INFO] File rotation took 32 ms
> 2018-01-16 04:07:23.544 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-16 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-16 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] File rotation took 17 ms
> 2018-01-16 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-16 04:07:36.409 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-16 04:07:36.409 o.a.m.w.h.SourceHandler [INFO] File rotation took 3 ms
> 2018-01-16 04:08:02.265 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-16 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-16 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] File rotation took 24 ms
> 2018-01-17 01:24:20.876 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-17 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-17 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] File rotation took 3 ms
> 2018-01-17 04:07:19.923 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-17 04:07:19.958 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-17 04:07:19.958 o.a.m.w.h.SourceHandler [INFO] File rotation took 35 ms
> 2018-01-17 04:07:23.544 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-17 04:07:23.546 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-17 04:07:23.546 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
> 2018-01-17 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-17 04:07:36.422 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-17 04:07:36.422 o.a.m.w.h.SourceHandler [INFO] File rotation took 16 ms
> 2018-01-17 04:08:02.264 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-17 04:08:02.265 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-17 04:08:02.266 o.a.m.w.h.SourceHandler [INFO] File rotation took 1 ms
> 2018-01-17 09:49:16.529 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-17 09:49:16.529 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-17 09:49:16.572 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-17 09:49:16.594 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
>         at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
>         at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
>         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-18 01:24:20.876 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-18 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-18 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] File rotation took 3 ms
> [... the same "Default and (likely) unoptimized writer config" stack
> traces for the hdfs and elasticsearch writers, and further SourceHandler
> file-rotation messages, repeat through 2018-01-21 ...]
>
>
> On Mon, Jan 22, 2018 at 12:36 PM, Farrukh Naveed Anjum <
> anjum.farrukh@gmail.com> wrote:
>
>> Hi Guys,
>>
>> It seems we are able to make the NiFi connection and data is indeed
>> going through the Kafka topic, yet using the CEF parser (syslogs) we are
>> unable to create the Elasticsearch index.
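When the CEF parser produces no indexed documents, one common cause is that the messages on the topic are not actually well-formed CEF. As a quick sanity check outside Metron, a sample line can be split by hand with a small script. This is a minimal illustrative sketch of the CEF header layout, not Metron's actual parser, and it does not handle escaped pipe characters:

```python
# Minimal CEF header sanity check (illustrative only, not Metron's parser).
# A CEF line looks like:
#   CEF:0|Vendor|Product|Version|SignatureID|Name|Severity|key=value ...
# Note: this sketch does not handle escaped pipes ("\|") in header fields.
def split_cef_header(line):
    start = line.find("CEF:")
    if start == -1:
        raise ValueError("no CEF: prefix found")
    # The first 7 pipe-separated fields are the header; the rest is the
    # key=value extension string.
    parts = line[start + len("CEF:"):].split("|", 7)
    if len(parts) < 8:
        raise ValueError("incomplete CEF header")
    version, vendor, product, dev_version, sig_id, name, severity, ext = parts
    return {
        "version": version,
        "deviceVendor": vendor,
        "deviceProduct": product,
        "deviceVersion": dev_version,
        "signatureId": sig_id,
        "name": name,
        "severity": severity,
        "extension": ext,
    }

sample = "CEF:0|ArcSight|Logger|6.0|100|Syslog received|3|src=10.0.0.1 msg=test"
print(split_cef_header(sample)["deviceVendor"])  # -> ArcSight
```

If a line pulled off the "cef" topic fails a check like this, the problem is upstream of the indexing topology.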
>>
>>
>>
>>
>> On Mon, Jan 22, 2018 at 12:32 PM, Farrukh Naveed Anjum <
>> anjum.farrukh@gmail.com> wrote:
>>
>>> Hi, Gaurav,
>>>
>>> Did you solve it? I am also following the same use case for syslog over
>>> UDP (rsyslog).
>>>
>>> It seems data is coming into the Kafka topic; as you can see, it's
>>> showing up.
>>>
>>> But the Elasticsearch index is not created.
>>>
>>>
>>>
>>> On Tue, Jan 16, 2018 at 12:37 PM, Gaurav Bapat <ga...@gmail.com>
>>> wrote:
>>>
>>>> But I can't find how to configure it.
>>>>
>>>> On 16 January 2018 at 11:38, Farrukh Naveed Anjum <
>>>> anjum.farrukh@gmail.com> wrote:
>>>>
>>>>> Yes, configure it as per the Metron reference use case.
>>>>>
>>>>> On Tue, Jan 16, 2018 at 8:35 AM, Gaurav Bapat <ga...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Kyle,
>>>>>>
>>>>>> I saw that I can ping from my OS to the VM and from the VM to the OS.
>>>>>> This looks like a Kafka or ZooKeeper environment-variable setup issue;
>>>>>> do I need to configure that inside vagrant ssh?
>>>>>>
>>>>>> On 16 January 2018 at 08:59, Gaurav Bapat <ga...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hey Kyle,
>>>>>>>
>>>>>>> I am running NiFi on localhost:8089, not under Ambari. I can ping
>>>>>>> node1 from my OS terminal, but I can't ping my OS from node1. I have
>>>>>>> attached a few screenshots and the contents of /etc/hosts.
>>>>>>>
>>>>>>> Thank You!
>>>>>>>
>>>>>>> On 15 January 2018 at 20:04, Kyle Richardson <
>>>>>>> kylerichardson2@gmail.com> wrote:
>>>>>>>
>>>>>>>> It looks like your NiFi instance is running on your laptop/desktop
>>>>>>>> (i.e. the VM host). My guess would be that name resolution or
>>>>>>>> networking is not properly configured between the host and the guest,
>>>>>>>> preventing the data from getting from NiFi to Kafka. What are the
>>>>>>>> contents of /etc/hosts on the VM host? Can you ping node1 from the VM
>>>>>>>> host by name and by IP address?
>>>>>>>>
>>>>>>>> -Kyle
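For reference, a working /etc/hosts entry on the VM host typically maps the guest's name to the Vagrant private-network IP. The address below is the common full-dev default and is an assumption; check the IP assigned in your own Vagrantfile:

```
# /etc/hosts on the VM host (illustrative; verify the IP in your Vagrantfile)
192.168.66.121   node1
```

With an entry like this in place, `ping node1` from the host should resolve and reach the guest.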
>>>>>>>>
>>>>>>>> On Mon, Jan 15, 2018 at 6:55 AM, Gaurav Bapat <
>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> "Failed while waiting for acks from Kafka" is what I am getting;
>>>>>>>>> am I missing some Kafka configuration?
>>>>>>>>>
>>>>>>>>> On 15 January 2018 at 16:50, Gaurav Bapat <ga...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Farrukh,
>>>>>>>>>>
>>>>>>>>>> I can't find any folder for my topic.
>>>>>>>>>>
>>>>>>>>>> On 15 January 2018 at 16:33, Farrukh Naveed Anjum <
>>>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Can you check /kafka-logs on your VM box? It should have a
>>>>>>>>>>> folder named after your topic. Can you check if it is there?
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Jan 15, 2018 at 3:49 PM, Gaurav Bapat <
>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> I am not getting data into my Kafka topic
>>>>>>>>>>>>
>>>>>>>>>>>> I have used i5 4 Core Processor with 16 GB RAM and I have
>>>>>>>>>>>> allocated 12 GB RAM to my vagrant VM.
>>>>>>>>>>>>
>>>>>>>>>>>> I don't understand how to configure the Kafka broker, because it
>>>>>>>>>>>> is giving me "failed while waiting for acks" from Kafka.
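A "failed while waiting for acks" error from a producer usually means the client can reach the bootstrap broker but not the address the broker advertises back to clients. A sketch of the relevant broker settings in `server.properties` follows; the hostname and the HDP default port 6667 are assumptions for this full-dev setup:

```
# server.properties fragment (illustrative)
listeners=PLAINTEXT://0.0.0.0:6667
# Must be a name/IP that is resolvable and reachable from the NiFi host:
advertised.listeners=PLAINTEXT://node1:6667
```

If NiFi runs outside the VM, `node1` must resolve to the VM's IP on the NiFi host, or the producer will time out waiting for acks even though the topic exists.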
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 15 January 2018 at 16:10, Farrukh Naveed Anjum <
>>>>>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Can you tell me, is your Kafka topic getting data? What are
>>>>>>>>>>>>> your machine specifications?
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Jan 15, 2018 at 2:56 PM, Gaurav Bapat <
>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks Farrukh,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I am not getting data in my Kafka topic even after creating
>>>>>>>>>>>>>> one. The issue seems to be with the broker config; how do I
>>>>>>>>>>>>>> configure the Kafka and ZooKeeper ports?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 15 January 2018 at 13:23, Farrukh Naveed Anjum <
>>>>>>>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I had a similar issue; it turned out to be an issue in Storm.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> No worker was assigned to the topology. All you need is to add
>>>>>>>>>>>>>>> an additional port in
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>  Ambari -> Storm -> Configs -> supervisor.slot.ports by
>>>>>>>>>>>>>>> assigning an additional port to the list.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> https://community.hortonworks.com/questions/32499/no-workers
>>>>>>>>>>>>>>> -in-storm-for-squid-topology.html
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I had a similar issue and finally got it fixed.
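The supervisor.slot.ports change described above corresponds to the following storm.yaml form; each listed port allows one more worker slot per supervisor. The ports shown are the common defaults plus one extra slot and may differ on your cluster:

```
# storm.yaml fragment (illustrative): one worker slot per listed port
supervisor.slot.ports:
    - 6700
    - 6701
    - 6702
    - 6703
    - 6704   # extra slot so an additional topology can get a worker
```

After adding a port via Ambari, the Storm supervisors need a restart for the new slot to appear in the Storm UI.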
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Jan 15, 2018 at 8:45 AM, Gaurav Bapat <
>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Storm UI
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On 15 January 2018 at 08:59, Gaurav Bapat <
>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Hey Jon,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I have the Storm UI, and the logs are coming from firewalls,
>>>>>>>>>>>>>>>>> servers, etc. on other machines (HP ArcSight Logger).
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I have attached the NiFi screenshots. My logs are arriving,
>>>>>>>>>>>>>>>>> but there is some error with Kafka, and I am having issues
>>>>>>>>>>>>>>>>> configuring the Kafka broker.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On 12 January 2018 at 18:14, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> In Ambari under storm you can find the UI under quick
>>>>>>>>>>>>>>>>>> links at the top.  That said, the issue seems to be upstream of Metron, in
>>>>>>>>>>>>>>>>>> NiFi.  That is something I can't help with as much, but if you can share
>>>>>>>>>>>>>>>>>> the ListenSyslog processor config that would be a start.  Also, share the
>>>>>>>>>>>>>>>>>> config of the thing that is sending syslog as well (are these local syslog,
>>>>>>>>>>>>>>>>>> is that machine aggregating syslog from other machines, etc.).  Thanks,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <
>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I have created a Kafka topic "cef", but my ListenSyslog
>>>>>>>>>>>>>>>>>>> processor is not getting logs.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I also checked using tcpdump -i, and the logs are reaching
>>>>>>>>>>>>>>>>>>> my machine, but ListenSyslog is not getting them.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On 12 January 2018 at 11:13, Gaurav Bapat <
>>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> [root@metron incubator-metron]#
>>>>>>>>>>>>>>>>>>>> ./metron-deployment/scripts/platform-info.sh
>>>>>>>>>>>>>>>>>>>> Metron 0.4.3
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> * master
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>>>>>>>>>>>>>>>>>>> Author: cstella <ce...@gmail.com>
>>>>>>>>>>>>>>>>>>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>     METRON-1379: Add an OBJECT_GET stellar function
>>>>>>>>>>>>>>>>>>>> closes apache/incubator-metron#880
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile
>>>>>>>>>>>>>>>>>>>> | 2 +-
>>>>>>>>>>>>>>>>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> ansible 2.0.0.2
>>>>>>>>>>>>>>>>>>>>   config file =
>>>>>>>>>>>>>>>>>>>>   configured module search path = Default w/o overrides
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> Vagrant 1.9.6
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> Python 2.7.5
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>>>>>>>>>>>>>>>>>>> 2015-11-10T22:11:47+05:30)
>>>>>>>>>>>>>>>>>>>> Maven home: /opt/maven/current
>>>>>>>>>>>>>>>>>>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>>>>>>>>>>>>>>>>>>> Java home: /opt/jdk1.8.0_151/jre
>>>>>>>>>>>>>>>>>>>> Default locale: en_US, platform encoding: UTF-8
>>>>>>>>>>>>>>>>>>>> OS name: "linux", version:
>>>>>>>>>>>>>>>>>>>> "3.10.0-693.11.6.el7.x86_64", arch: "amd64", family: "unix"
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> Docker version 1.12.6, build ec8512b/1.12.6
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> node
>>>>>>>>>>>>>>>>>>>> v8.9.3
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> npm
>>>>>>>>>>>>>>>>>>>> 5.5.1
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>>>>>>>>>>>>>>>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>>>>>>>>>>>>>>>>> This is free software; see the source for copying
>>>>>>>>>>>>>>>>>>>> conditions.  There is NO
>>>>>>>>>>>>>>>>>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A
>>>>>>>>>>>>>>>>>>>> PARTICULAR PURPOSE.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> Compiler is C++11 compliant
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu
>>>>>>>>>>>>>>>>>>>> Jan 4 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> Total System Memory = 15773.3 MB
>>>>>>>>>>>>>>>>>>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>>>>>>>>>>>>>>>>>>> Processor Speed: 3320.875 MHz
>>>>>>>>>>>>>>>>>>>> Processor Speed: 3307.191 MHz
>>>>>>>>>>>>>>>>>>>> Processor Speed: 3376.699 MHz
>>>>>>>>>>>>>>>>>>>> Processor Speed: 3338.917 MHz
>>>>>>>>>>>>>>>>>>>> Total Physical Processors: 4
>>>>>>>>>>>>>>>>>>>> Total cores: 16
>>>>>>>>>>>>>>>>>>>> Disk information:
>>>>>>>>>>>>>>>>>>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>>>>>>>>>>>>>>>>>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>>>>>>>>>>>>>>>>>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>>>>>>>>>>>>>>>>>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>>>>>>>>>>>>>>>>>>> This CPU appears to support virtualization
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On 12 January 2018 at 09:25, Gaurav Bapat <
>>>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Hey Jon,
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Appreciate your timely reply.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> I have gone through your answer, but I still can't figure
>>>>>>>>>>>>>>>>>>>>> out how to do parsing/indexing in the Storm UI, as I can't find any option
>>>>>>>>>>>>>>>>>>>>> for the same.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Is there any other UI to do parsing/indexing?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> So, you created a new cef topic, and set up the
>>>>>>>>>>>>>>>>>>>>>> appropriate parser config for it (if not, this
>>>>>>>>>>>>>>>>>>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>>>>>>>>>>>>>>>>>>> may be helpful)?  If so:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Here are some basic troubleshooting steps:
>>>>>>>>>>>>>>>>>>>>>> 1.  Validate that the logs are getting onto the
>>>>>>>>>>>>>>>>>>>>>> kafka topic that you are sending to.  If they aren't there, the problem is
>>>>>>>>>>>>>>>>>>>>>> upstream from Metron.
>>>>>>>>>>>>>>>>>>>>>> 2.  If they are getting onto the kafka topic they are
>>>>>>>>>>>>>>>>>>>>>> being directly sent to, check the indexing kafka topic for an enriched
>>>>>>>>>>>>>>>>>>>>>> version of those same logs.
>>>>>>>>>>>>>>>>>>>>>> 3.  Do a binary search of the various components
>>>>>>>>>>>>>>>>>>>>>> involved with ingest.
>>>>>>>>>>>>>>>>>>>>>>     a. If the logs are *not* on the indexing kafka
>>>>>>>>>>>>>>>>>>>>>> topic, check the enrichments topic for those logs.
>>>>>>>>>>>>>>>>>>>>>>     b. If the logs are *not* on the enrichments
>>>>>>>>>>>>>>>>>>>>>> topic, check the parser storm topology.
>>>>>>>>>>>>>>>>>>>>>>     c. If the logs are on the enrichments topic, but
>>>>>>>>>>>>>>>>>>>>>> *not* indexing, check the enrichments storm topology.
>>>>>>>>>>>>>>>>>>>>>>     d. If the logs are on the indexing topic but *not*
>>>>>>>>>>>>>>>>>>>>>> in Kibana, check the indexing storm topology.
>>>>>>>>>>>>>>>>>>>>>>     e. If the logs are on the indexing topic and the
>>>>>>>>>>>>>>>>>>>>>> indexing storm topology is in good shape, check
>>>>>>>>>>>>>>>>>>>>>> elasticsearch directly.
>>>>>>>>>>>>>>>>>>>>>> 4.  You should have identified where the issue is at
>>>>>>>>>>>>>>>>>>>>>> this point.  Report back here with what you observed, any relevant error
>>>>>>>>>>>>>>>>>>>>>> messages, etc.
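[Editor's note: for step 1 above, the quickest check is the console consumer shipped with Kafka. A sketch for an HDP-style install — the path, the ZooKeeper address, and the topic name "cef" are assumptions; adjust them to your cluster:]

```shell
# Read a few records from the parser input topic; if nothing prints,
# the problem is upstream of Metron (NiFi or the syslog source itself).
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
    --zookeeper localhost:2181 \
    --topic cef \
    --from-beginning \
    --max-messages 5
```

Repeating the same command against the "enrichments" and "indexing" topics walks the binary search described in step 3.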
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Side note:  We should document a decision tree for
>>>>>>>>>>>>>>>>>>>>>> troubleshooting data ingest.  It is fairly straightforward and makes me
>>>>>>>>>>>>>>>>>>>>>> wonder if we already have this somewhere and I'm not aware of it?  It would
>>>>>>>>>>>>>>>>>>>>>> also be a good place to put pointers to some common errors.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <
>>>>>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Hello everyone, I have deployed Metron on a single
>>>>>>>>>>>>>>>>>>>>>>> node machine and I would like to know how do I get Syslogs from NiFi into
>>>>>>>>>>>>>>>>>>>>>>> Kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> I have created a Kafka topic by the name "cef" and I
>>>>>>>>>>>>>>>>>>>>>>> can see that the topic exists in
>>>>>>>>>>>>>>>>>>>>>>> Metron Configuration but I am unable to connect it
>>>>>>>>>>>>>>>>>>>>>>> with Kibana
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Need Help!!
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>> With Regards
>>>>>>>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>> With Regards
>>>>>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> With Regards
>>>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> With Regards
>>>>> Farrukh Naveed Anjum
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> With Regards
>>> Farrukh Naveed Anjum
>>>
>>
>>
>>
>> --
>> With Regards
>> Farrukh Naveed Anjum
>>
>
>
>
> --
> With Regards
> Farrukh Naveed Anjum
>



--
With Regards
Farrukh Naveed Anjum

Re: Getting Syslogs to Metron

Posted by Farrukh Naveed Anjum <an...@gmail.com>.
Hi,

It seems like the indexing topic is giving the following errors.

Any idea?
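[Editor's note: the "Default and (likely) unoptimized writer config" warnings below are raised when the indexing topology finds no per-sensor indexing configuration in ZooKeeper for a sensor (here "profiler") and falls back to defaults. A sketch of such a config, following the Metron 0.4.x layout from the metron-indexing docs linked in Otto's reply — the batch sizes are illustrative:]

```json
{
  "hdfs": {
    "index": "profiler",
    "batchSize": 5,
    "enabled": true
  },
  "elasticsearch": {
    "index": "profiler",
    "batchSize": 5,
    "enabled": true
  }
}
```

Saved as an indexing config for the sensor and pushed to ZooKeeper (e.g. with $METRON_HOME/bin/zk_load_configs.sh in PUSH mode), this should silence the warnings; it does not by itself explain a missing Elasticsearch index, so the CEF parser output still needs to be verified on the topics first.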

On Mon, Jan 22, 2018 at 12:40 PM, Farrukh Naveed Anjum <
anjum.farrukh@gmail.com> wrote:

> Hi,
>
> I looked into the indexing topic; it seems like it's giving the following errors
>
> 	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> 	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 02:34:16.543 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
> 	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
> 	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> 	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 02:34:16.543 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
> 	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
> 	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> 	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 02:34:16.547 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
> 	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
> 	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> 	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 02:34:16.581 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
> 	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
> 	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> 	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 02:49:16.516 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
> 	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
> 	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> 	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 02:49:16.516 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
> 	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
> 	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> 	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 02:49:16.520 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
> 	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
> 	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> 	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 02:49:16.521 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
> 	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
> 	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> 	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 03:04:16.518 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
> 	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
> 	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> 	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 03:04:16.518 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
> 	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
> 	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> 	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 03:04:16.525 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
> 	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
> 	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> 	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 03:04:16.555 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
> 	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
> 	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> 	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-16 04:07:19.924 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-16 04:07:19.956 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-16 04:07:19.956 o.a.m.w.h.SourceHandler [INFO] File rotation took 32 ms
> 2018-01-16 04:07:23.544 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-16 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-16 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] File rotation took 17 ms
> 2018-01-16 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-16 04:07:36.409 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-16 04:07:36.409 o.a.m.w.h.SourceHandler [INFO] File rotation took 3 ms
> 2018-01-16 04:08:02.265 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-16 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-16 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] File rotation took 24 ms
> 2018-01-17 01:24:20.876 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-17 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-17 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] File rotation took 3 ms
> 2018-01-17 04:07:19.923 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-17 04:07:19.958 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-17 04:07:19.958 o.a.m.w.h.SourceHandler [INFO] File rotation took 35 ms
> 2018-01-17 04:07:23.544 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-17 04:07:23.546 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-17 04:07:23.546 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
> 2018-01-17 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-17 04:07:36.422 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-17 04:07:36.422 o.a.m.w.h.SourceHandler [INFO] File rotation took 16 ms
> 2018-01-17 04:08:02.264 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-17 04:08:02.265 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-17 04:08:02.266 o.a.m.w.h.SourceHandler [INFO] File rotation took 1 ms
> 2018-01-17 09:49:16.529 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor profiler
> 	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
> 	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> 	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-17 09:49:16.529 o.a.s.d.executor [ERROR]
> java.lang.Exception: WARNING: Default and (likely) unoptimized writer config used for elasticsearch writer and sensor profiler
> 	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234) [stormjar.jar:?]
> 	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
> 	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> 	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
> 2018-01-18 01:24:20.876 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-18 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-18 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] File rotation took 3 ms
> 2018-01-18 04:07:19.923 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-18 04:07:19.945 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-18 04:07:19.946 o.a.m.w.h.SourceHandler [INFO] File rotation took 23 ms
> 2018-01-18 04:07:23.544 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-18 04:07:23.570 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-18 04:07:23.570 o.a.m.w.h.SourceHandler [INFO] File rotation took 26 ms
> 2018-01-18 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-18 04:07:36.407 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-18 04:07:36.407 o.a.m.w.h.SourceHandler [INFO] File rotation took 1 ms
> 2018-01-18 04:08:02.264 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-18 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-18 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] File rotation took 25 ms
> 2018-01-18 09:46:50.425 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-18 09:46:50.460 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-18 09:46:50.460 o.a.m.w.h.SourceHandler [INFO] File rotation took 35 ms
> 2018-01-18 09:49:16.568 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-18 09:49:16.614 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-18 09:49:16.614 o.a.m.w.h.SourceHandler [INFO] File rotation took 46 ms
> 2018-01-19 01:24:20.877 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-19 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-19 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
> 2018-01-19 04:07:19.923 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-19 04:07:19.939 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-19 04:07:19.939 o.a.m.w.h.SourceHandler [INFO] File rotation took 16 ms
> 2018-01-19 04:07:23.545 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-19 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-19 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] File rotation took 16 ms
> 2018-01-19 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-19 04:07:36.429 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-19 04:07:36.429 o.a.m.w.h.SourceHandler [INFO] File rotation took 23 ms
> 2018-01-19 04:08:02.264 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-19 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-19 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] File rotation took 25 ms
> 2018-01-19 09:46:50.425 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-19 09:46:50.442 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-19 09:46:50.442 o.a.m.w.h.SourceHandler [INFO] File rotation took 17 ms
> 2018-01-19 09:49:16.568 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-19 09:49:16.586 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-19 09:49:16.586 o.a.m.w.h.SourceHandler [INFO] File rotation took 18 ms
> 2018-01-19 17:08:22.126 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-19 17:08:22.142 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-19 17:08:22.142 o.a.m.w.h.SourceHandler [INFO] File rotation took 16 ms
> 2018-01-19 17:19:16.556 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-19 17:19:16.582 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-19 17:19:16.582 o.a.m.w.h.SourceHandler [INFO] File rotation took 26 ms
> 2018-01-20 01:24:20.876 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-20 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-20 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] File rotation took 3 ms
> 2018-01-20 04:07:19.923 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-20 04:07:19.962 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-20 04:07:19.962 o.a.m.w.h.SourceHandler [INFO] File rotation took 38 ms
> 2018-01-20 04:07:23.544 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-20 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-20 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] File rotation took 17 ms
> 2018-01-20 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-20 04:07:36.407 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-20 04:07:36.408 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
> 2018-01-20 04:08:02.264 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-20 04:08:02.290 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-20 04:08:02.290 o.a.m.w.h.SourceHandler [INFO] File rotation took 26 ms
> 2018-01-20 09:46:50.425 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-20 09:46:50.445 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-20 09:46:50.446 o.a.m.w.h.SourceHandler [INFO] File rotation took 21 ms
> 2018-01-20 09:49:16.568 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-20 09:49:16.570 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-20 09:49:16.570 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
> 2018-01-20 17:08:22.127 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-20 17:08:22.129 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-20 17:08:22.129 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
> 2018-01-20 17:19:16.556 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-20 17:19:16.558 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-20 17:19:16.558 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
> 2018-01-21 01:24:20.876 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-21 01:24:20.912 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-21 01:24:20.912 o.a.m.w.h.SourceHandler [INFO] File rotation took 32 ms
> 2018-01-21 04:07:19.923 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-21 04:07:19.949 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-21 04:07:19.950 o.a.m.w.h.SourceHandler [INFO] File rotation took 26 ms
> 2018-01-21 04:07:23.544 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-21 04:07:23.545 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-21 04:07:23.545 o.a.m.w.h.SourceHandler [INFO] File rotation took 1 ms
> 2018-01-21 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-21 04:07:36.429 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-21 04:07:36.429 o.a.m.w.h.SourceHandler [INFO] File rotation took 23 ms
> 2018-01-21 04:08:02.264 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
> 2018-01-21 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] Performing 0 file rotation actions.
> 2018-01-21 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] File rotation took 25 ms
>
>
> On Mon, Jan 22, 2018 at 12:36 PM, Farrukh Naveed Anjum <
> anjum.farrukh@gmail.com> wrote:
>
>> Hi Guys,
>>
>> It seems we are able to make the NiFi connection, and data is indeed
>> going through the Kafka topic, yet using the CEF parser (syslogs) we are
>> unable to create the Elasticsearch index.
>>
>>
>>
>>
>> On Mon, Jan 22, 2018 at 12:32 PM, Farrukh Naveed Anjum <
>> anjum.farrukh@gmail.com> wrote:
>>
>>> Hi Gaurav,
>>>
>>> Did you solve it? I am also following the same use case for syslog over
>>> UDP (rsyslog).
>>>
>>> It seems the data is coming into the Kafka topic. As you can see, it is
>>> showing up.
>>>
>>> But the Elasticsearch index is not created.
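One quick check here: Metron's Elasticsearch writer creates indices named `<sensor>_index_yyyy.MM.dd.HH` by default (one per sensor per hour), so you can compute the name that should exist and then look for it. A sketch, assuming the full-dev defaults (Elasticsearch on node1:9200):

```shell
# Compute the index name the cef sensor should be writing to right now.
expected="cef_index_$(date +%Y.%m.%d.%H)"
echo "$expected"

# Then ask Elasticsearch which indices actually exist (run this against
# your cluster):
#   curl -s 'http://node1:9200/_cat/indices?v' | grep cef_index
```

If the topic has data but the index never appears, the failure is in the indexing topology, not in NiFi.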
>>>
>>>
>>>
>>> On Tue, Jan 16, 2018 at 12:37 PM, Gaurav Bapat <ga...@gmail.com>
>>> wrote:
>>>
>>>> But I can't find how to configure it.
>>>>
>>>> On 16 January 2018 at 11:38, Farrukh Naveed Anjum <
>>>> anjum.farrukh@gmail.com> wrote:
>>>>
>>>>> Yes, do configure it as per the Metron reference use case.
>>>>>
>>>>> On Tue, Jan 16, 2018 at 8:35 AM, Gaurav Bapat <ga...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Kyle,
>>>>>>
>>>>>> I saw that I can ping from my OS to the VM and from the VM to the OS.
>>>>>> This looks like a Kafka or ZooKeeper environment variable setup issue;
>>>>>> do I need to configure that inside vagrant ssh?
>>>>>>
>>>>>> On 16 January 2018 at 08:59, Gaurav Bapat <ga...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hey Kyle,
>>>>>>>
>>>>>>> I am running NiFi not on Ambari but on localhost:8089. I can ping
>>>>>>> from my OS terminal to node1 but can't ping from node1 to my OS
>>>>>>> terminal. I have attached a few screenshots and the contents of
>>>>>>> /etc/hosts.
>>>>>>>
>>>>>>> Thank You!
>>>>>>>
>>>>>>> On 15 January 2018 at 20:04, Kyle Richardson <
>>>>>>> kylerichardson2@gmail.com> wrote:
>>>>>>>
>>>>>>>> It looks like your Nifi instance is running on your laptop/desktop
>>>>>>>> (e.g. the VM host). My guess would be that name resolution or networking is
>>>>>>>> not properly configured between the host and the guest preventing the data
>>>>>>>> from getting from Nifi to Kafka. What's the contents of /etc/hosts on the
>>>>>>>> VM host? Can you ping node1 from the VM host by name and by IP address?
>>>>>>>>
>>>>>>>> -Kyle
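To make Kyle's check concrete, here is a minimal sketch. The IP 192.168.66.121 is the full-dev Vagrant default for node1 and is an assumption; take the real address from your Vagrantfile or `vagrant ssh-config`. The example writes to a scratch file so it is self-contained; on the real host the entry goes in /etc/hosts itself:

```shell
# Example host-side hosts entry, written to a scratch file for illustration.
cat > /tmp/hosts.example <<'EOF'
127.0.0.1        localhost
192.168.66.121   node1
EOF

# The VM's hostname must resolve from the machine running NiFi:
grep -w node1 /tmp/hosts.example

# On the real system, verify both directions:
#   ping -c 1 node1
#   ping -c 1 192.168.66.121
```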
>>>>>>>>
>>>>>>>> On Mon, Jan 15, 2018 at 6:55 AM, Gaurav Bapat <
>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> "Failed while waiting for acks from Kafka" is what I am getting;
>>>>>>>>> am I missing some Kafka configuration?
>>>>>>>>>
>>>>>>>>> On 15 January 2018 at 16:50, Gaurav Bapat <ga...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Farrukh,
>>>>>>>>>>
>>>>>>>>>> I can't find any folder for my topic.
>>>>>>>>>>
>>>>>>>>>> On 15 January 2018 at 16:33, Farrukh Naveed Anjum <
>>>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Can you check /kafka-logs on your VM box? It should have a
>>>>>>>>>>> folder named after your topic. Can you check if it is there?
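That check can be scripted. /kafka-logs is the HDP default log.dirs location (an assumption; confirm it under Ambari -> Kafka -> Configs). Kafka keeps one directory per topic partition, so a topic named cef with one partition appears as cef-0:

```shell
# Report whether the broker ever materialized the topic on disk.
if ls -d /kafka-logs/cef-* >/dev/null 2>&1; then
  echo "topic directory present"
else
  echo "no topic directory -- messages never reached the broker"
fi
```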
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Jan 15, 2018 at 3:49 PM, Gaurav Bapat <
>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> I am not getting data into my Kafka topic
>>>>>>>>>>>>
>>>>>>>>>>>> I am using an i5 4-core processor with 16 GB of RAM, and I have
>>>>>>>>>>>> allocated 12 GB of RAM to my Vagrant VM.
>>>>>>>>>>>>
>>>>>>>>>>>> I don't understand how to configure the Kafka broker because it
>>>>>>>>>>>> is giving me "failed while waiting for acks from Kafka".
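"Failed while waiting for acks" from NiFi's Kafka processor usually means the broker is unreachable from the NiFi machine, or the broker's advertised listener resolves to a host NiFi cannot reach. A quick reachability sketch from the NiFi host (6667 is the HDP broker default port, an assumption):

```shell
# Probe the broker port; capture output either way so a missing nc or an
# unresolvable host still produces a readable message.
out=$( (nc -zv node1 6667) 2>&1 || echo "broker port unreachable from this host" )
echo "$out"
```

If the port is unreachable, fix /etc/hosts or the broker's listeners/advertised.listeners before touching the NiFi processor config.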
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 15 January 2018 at 16:10, Farrukh Naveed Anjum <
>>>>>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Can you tell me, is your Kafka topic getting data? What are
>>>>>>>>>>>>> your machine specifications?
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Jan 15, 2018 at 2:56 PM, Gaurav Bapat <
>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks Farrukh,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I am not getting data in my Kafka topic even after creating
>>>>>>>>>>>>>> one. The issue seems to be with the broker config; how do I
>>>>>>>>>>>>>> configure the Kafka and ZooKeeper ports?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 15 January 2018 at 13:23, Farrukh Naveed Anjum <
>>>>>>>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I had a similar issue; it turned out to be an issue in Storm.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> No worker was assigned to the topology; all you need is to
>>>>>>>>>>>>>>> add an additional port in
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Ambari -> Storm -> Configs -> supervisor.slot.ports by
>>>>>>>>>>>>>>> assigning an additional port to the list.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> https://community.hortonworks.com/questions/32499/no-workers
>>>>>>>>>>>>>>> -in-storm-for-squid-topology.html
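The fix referenced above comes down to a single Storm setting. A sketch of the resulting storm.yaml fragment (the port numbers are the conventional defaults and are assumptions; always change this through Ambari on a managed cluster):

```shell
# Each entry in supervisor.slot.ports is one worker slot. With a single
# slot, a second topology (e.g. a newly added parser) never gets a worker.
#
#   supervisor.slot.ports: [6700, 6701, 6702, 6703]
#
# After restarting Storm, "Free slots" in the Storm UI should be > 0
# before you deploy the next topology.
```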
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I had a similar issue and finally got it fixed.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Jan 15, 2018 at 8:45 AM, Gaurav Bapat <
>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Storm UI
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On 15 January 2018 at 08:59, Gaurav Bapat <
>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Hey Jon,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I have the Storm UI, and the logs are coming from
>>>>>>>>>>>>>>>>> firewalls, servers, etc. from other machines (HP ArcSight
>>>>>>>>>>>>>>>>> Logger).
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I have attached the NiFi screenshots; my logs are coming,
>>>>>>>>>>>>>>>>> but there is some error with Kafka and I am having issues
>>>>>>>>>>>>>>>>> configuring the Kafka broker.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On 12 January 2018 at 18:14, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> In Ambari under storm you can find the UI under quick
>>>>>>>>>>>>>>>>>> links at the top.  That said, the issue seems to be upstream of Metron, in
>>>>>>>>>>>>>>>>>> NiFi.  That is something I can't help with as much, but if you can share
>>>>>>>>>>>>>>>>>> the listensyslog processor config that would be a start.  Also, share the
>>>>>>>>>>>>>>>>>> config of the thing that is sending syslog as well (are these local syslog,
>>>>>>>>>>>>>>>>>> is that machine aggregating syslog from other machines, etc.).  Thanks,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <
>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I have created a Kafka topic "cef", but my ListenSyslog
>>>>>>>>>>>>>>>>>>> processor is not receiving any logs.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Also, I checked using tcpdump -i and the logs are reaching
>>>>>>>>>>>>>>>>>>> my machine, but ListenSyslog is not picking them up.
>>>>>>>>>>>>>>>>>>>
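A quick way to separate "packets reach the machine" from "NiFi is listening" is to check the bound port and fire a test message at it. The port (514) and UDP transport below are assumptions; match them to the ListenSyslog processor settings.

```shell
# Assumed: ListenSyslog is configured for UDP on port 514; adjust to match.
PORT=514
# 1. Is any process (i.e. NiFi) actually bound to the port?
netstat -tulnp 2>/dev/null | grep ":$PORT " || echo "nothing listening on $PORT"
# 2. Send one RFC3164-style test message over UDP (uses bash's /dev/udp).
MSG="<13>Jan 12 01:00:00 testhost test: hello metron"
echo "$MSG" > /dev/udp/127.0.0.1/"$PORT" || echo "send failed (bash /dev/udp required)"
```

If tcpdump shows the message but ListenSyslog still reports nothing, the processor is bound to a different port, protocol, or interface.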
>>>>>>>>>>>>>>>>>>> On 12 January 2018 at 11:13, Gaurav Bapat <
>>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> [root@metron incubator-metron]#
>>>>>>>>>>>>>>>>>>>> ./metron-deployment/scripts/platform-info.sh
>>>>>>>>>>>>>>>>>>>> Metron 0.4.3
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> * master
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>>>>>>>>>>>>>>>>>>> Author: cstella <ce...@gmail.com>
>>>>>>>>>>>>>>>>>>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>     METRON-1379: Add an OBJECT_GET stellar function
>>>>>>>>>>>>>>>>>>>> closes apache/incubator-metron#880
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile
>>>>>>>>>>>>>>>>>>>> | 2 +-
>>>>>>>>>>>>>>>>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> ansible 2.0.0.2
>>>>>>>>>>>>>>>>>>>>   config file =
>>>>>>>>>>>>>>>>>>>>   configured module search path = Default w/o overrides
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> Vagrant 1.9.6
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> Python 2.7.5
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>>>>>>>>>>>>>>>>>>> 2015-11-10T22:11:47+05:30)
>>>>>>>>>>>>>>>>>>>> Maven home: /opt/maven/current
>>>>>>>>>>>>>>>>>>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>>>>>>>>>>>>>>>>>>> Java home: /opt/jdk1.8.0_151/jre
>>>>>>>>>>>>>>>>>>>> Default locale: en_US, platform encoding: UTF-8
>>>>>>>>>>>>>>>>>>>> OS name: "linux", version:
>>>>>>>>>>>>>>>>>>>> "3.10.0-693.11.6.el7.x86_64", arch: "amd64", family: "unix"
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> Docker version 1.12.6, build ec8512b/1.12.6
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> node
>>>>>>>>>>>>>>>>>>>> v8.9.3
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> npm
>>>>>>>>>>>>>>>>>>>> 5.5.1
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>>>>>>>>>>>>>>>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>>>>>>>>>>>>>>>>> This is free software; see the source for copying
>>>>>>>>>>>>>>>>>>>> conditions.  There is NO
>>>>>>>>>>>>>>>>>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A
>>>>>>>>>>>>>>>>>>>> PARTICULAR PURPOSE.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> Compiler is C++11 compliant
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu
>>>>>>>>>>>>>>>>>>>> Jan 4 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> Total System Memory = 15773.3 MB
>>>>>>>>>>>>>>>>>>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>>>>>>>>>>>>>>>>>>> Processor Speed: 3320.875 MHz
>>>>>>>>>>>>>>>>>>>> Processor Speed: 3307.191 MHz
>>>>>>>>>>>>>>>>>>>> Processor Speed: 3376.699 MHz
>>>>>>>>>>>>>>>>>>>> Processor Speed: 3338.917 MHz
>>>>>>>>>>>>>>>>>>>> Total Physical Processors: 4
>>>>>>>>>>>>>>>>>>>> Total cores: 16
>>>>>>>>>>>>>>>>>>>> Disk information:
>>>>>>>>>>>>>>>>>>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>>>>>>>>>>>>>>>>>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>>>>>>>>>>>>>>>>>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>>>>>>>>>>>>>>>>>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>>>>>>>>>>>>>>>>>>> This CPU appears to support virtualization
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On 12 January 2018 at 09:25, Gaurav Bapat <
>>>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Hey Jon,
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Appreciate your timely reply.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> I have gone through your answer, but I still can't figure
>>>>>>>>>>>>>>>>>>>>> out how to do parsing/indexing in the Storm UI, as I can't
>>>>>>>>>>>>>>>>>>>>> find any option for it.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Is there any other UI to do parsing/indexing?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> So, you created a new cef topic, and set up the
>>>>>>>>>>>>>>>>>>>>>> appropriate parser config for it (if not, this
>>>>>>>>>>>>>>>>>>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>>>>>>>>>>>>>>>>>>> may be helpful)?  If so:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Here are some basic troubleshooting steps:
>>>>>>>>>>>>>>>>>>>>>> 1.  Validate that the logs are getting onto the
>>>>>>>>>>>>>>>>>>>>>> kafka topic that you are sending to.  If they aren't there, the problem is
>>>>>>>>>>>>>>>>>>>>>> upstream from Metron.
>>>>>>>>>>>>>>>>>>>>>> 2.  If they are getting onto the kafka topic they are
>>>>>>>>>>>>>>>>>>>>>> being directly sent to, check the indexing kafka topic for an enriched
>>>>>>>>>>>>>>>>>>>>>> version of those same logs.
>>>>>>>>>>>>>>>>>>>>>> 3.  Do a binary search of the various components
>>>>>>>>>>>>>>>>>>>>>> involved with ingest.
>>>>>>>>>>>>>>>>>>>>>>     a. If the logs are *not* on the indexing kafka
>>>>>>>>>>>>>>>>>>>>>> topic, check the enrichments topic for those logs.
>>>>>>>>>>>>>>>>>>>>>>     b. If the logs are *not* on the enrichments
>>>>>>>>>>>>>>>>>>>>>> topic, check the parser storm topology.
>>>>>>>>>>>>>>>>>>>>>>     c. If the logs are on the enrichments topic, but
>>>>>>>>>>>>>>>>>>>>>> *not* indexing, check the enrichments storm topology.
>>>>>>>>>>>>>>>>>>>>>>     d. If the logs are on the indexing topic but *not*
>>>>>>>>>>>>>>>>>>>>>> in Kibana, check the indexing storm topology.
>>>>>>>>>>>>>>>>>>>>>>     e. If the logs are on the indexing topic and the
>>>>>>>>>>>>>>>>>>>>>> indexing storm topology is in good shape, check
>>>>>>>>>>>>>>>>>>>>>> elasticsearch directly.
>>>>>>>>>>>>>>>>>>>>>> 4.  You should have identified where the issue is at
>>>>>>>>>>>>>>>>>>>>>> this point.  Report back here with what you observed, any relevant error
>>>>>>>>>>>>>>>>>>>>>> messages, etc.
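The binary search above can be driven from the command line with Kafka's console consumer. The paths, broker address, and the enrichments/indexing topic names below assume a stock HDP/Metron install; adjust them to your cluster.

```shell
# Assumed HDP layout and broker address; change KAFKA_BIN and BROKER as needed.
KAFKA_BIN=/usr/hdp/current/kafka-broker/bin
BROKER=localhost:6667
# Check each hop of the pipeline in ingest order:
# parser input ("cef") -> enrichments -> indexing
for topic in cef enrichments indexing; do
  echo "--- $topic ---"
  "$KAFKA_BIN/kafka-console-consumer.sh" \
    --bootstrap-server "$BROKER" \
    --topic "$topic" --max-messages 5 --timeout-ms 10000 \
    || echo "no messages read from $topic (is Kafka reachable?)"
done
```

The first topic where no messages come back marks the broken hop.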
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Side note:  We should document a decision tree for
>>>>>>>>>>>>>>>>>>>>>> troubleshooting data ingest.  It is fairly straightforward and makes me
>>>>>>>>>>>>>>>>>>>>>> wonder if we already have this somewhere and I'm not aware of it?  It would
>>>>>>>>>>>>>>>>>>>>>> also be a good place to put pointers to some common errors.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <
>>>>>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Hello everyone, I have deployed Metron on a single
>>>>>>>>>>>>>>>>>>>>>>> node machine and I would like to know how do I get Syslogs from NiFi into
>>>>>>>>>>>>>>>>>>>>>>> Kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> I have created a Kafka topic by the name "cef" and I
>>>>>>>>>>>>>>>>>>>>>>> can see that the topic exists in
>>>>>>>>>>>>>>>>>>>>>>> Metron Configuration but I am unable to connect it
>>>>>>>>>>>>>>>>>>>>>>> with Kibana
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Need Help!!
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>> With Regards
>>>>>>>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>> With Regards
>>>>>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> With Regards
>>>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> With Regards
>>>>> Farrukh Naveed Anjum
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> With Regards
>>> Farrukh Naveed Anjum
>>>
>>
>>
>>
>> --
>> With Regards
>> Farrukh Naveed Anjum
>>
>
>
>
> --
> With Regards
> Farrukh Naveed Anjum
>



-- 
With Regards
Farrukh Naveed Anjum

Re: Getting Syslogs to Metron

Posted by Otto Fowler <ot...@gmail.com>.
https://metron.apache.org/current-book/metron-platform/metron-indexing/index.html


On January 22, 2018 at 02:41:14, Farrukh Naveed Anjum (
anjum.farrukh@gmail.com) wrote:

Default and (likely) unoptimized writer config used for hdfs writer
and sensor profiler
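That WARNING means the sensor (here "profiler") has no explicit indexing configuration, so Metron falls back to defaults. A minimal sketch of a per-sensor indexing config; the batch sizes, index name, and the ZooKeeper address are illustrative, and the push command assumes a standard $METRON_HOME layout:

```shell
# Illustrative indexing config for the "profiler" sensor, whose missing
# config triggers the WARNING above.
cat > /tmp/profiler.json <<'EOF'
{
  "hdfs":          { "batchSize": 5, "enabled": true, "index": "profiler" },
  "elasticsearch": { "batchSize": 5, "enabled": true, "index": "profiler" }
}
EOF
# Sanity-check the JSON before pushing it:
python3 -m json.tool < /tmp/profiler.json > /dev/null && echo "profiler.json is valid"
# Then place it under $METRON_HOME/config/zookeeper/indexing/ and push with
# Metron's loader (ZooKeeper address is an assumption):
#   $METRON_HOME/bin/zk_load_configs.sh -m PUSH -z localhost:2181 \
#       -i "$METRON_HOME/config/zookeeper"
```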

Re: Getting Syslogs to Metron

Posted by Farrukh Naveed Anjum <an...@gmail.com>.
Hi,

I looked into the indexing topic; it seems to be giving the following errors

	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
2018-01-16 02:34:16.543 o.a.s.d.executor [ERROR]
java.lang.Exception: WARNING: Default and (likely) unoptimized writer
config used for hdfs writer and sensor profiler
	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234)
[stormjar.jar:?]
	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
2018-01-16 02:34:16.543 o.a.s.d.executor [ERROR]
java.lang.Exception: WARNING: Default and (likely) unoptimized writer
config used for elasticsearch writer and sensor profiler
	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234)
[stormjar.jar:?]
	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
[the same WARNING and stack trace repeat for the hdfs and elasticsearch writers at 02:34, 02:49, and 03:04]
2018-01-16 04:07:19.924 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-16 04:07:19.956 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-16 04:07:19.956 o.a.m.w.h.SourceHandler [INFO] File rotation took 32 ms
2018-01-16 04:07:23.544 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-16 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-16 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] File rotation took 17 ms
2018-01-16 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-16 04:07:36.409 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-16 04:07:36.409 o.a.m.w.h.SourceHandler [INFO] File rotation took 3 ms
2018-01-16 04:08:02.265 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-16 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-16 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] File rotation took 24 ms
2018-01-17 01:24:20.876 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-17 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-17 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] File rotation took 3 ms
2018-01-17 04:07:19.923 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-17 04:07:19.958 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-17 04:07:19.958 o.a.m.w.h.SourceHandler [INFO] File rotation took 35 ms
2018-01-17 04:07:23.544 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-17 04:07:23.546 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-17 04:07:23.546 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
2018-01-17 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-17 04:07:36.422 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-17 04:07:36.422 o.a.m.w.h.SourceHandler [INFO] File rotation took 16 ms
2018-01-17 04:08:02.264 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-17 04:08:02.265 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-17 04:08:02.266 o.a.m.w.h.SourceHandler [INFO] File rotation took 1 ms
[the same WARNING and stack trace repeat for both writers at 09:49]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
2018-01-17 09:49:16.594 o.a.s.d.executor [ERROR]
java.lang.Exception: WARNING: Default and (likely) unoptimized writer
config used for elasticsearch writer and sensor profiler
	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234)
[stormjar.jar:?]
	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
2018-01-18 01:24:20.876 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-18 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-18 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] File rotation took 3 ms
2018-01-18 04:07:19.923 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-18 04:07:19.945 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-18 04:07:19.946 o.a.m.w.h.SourceHandler [INFO] File rotation took 23 ms
2018-01-18 04:07:23.544 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-18 04:07:23.570 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-18 04:07:23.570 o.a.m.w.h.SourceHandler [INFO] File rotation took 26 ms
2018-01-18 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-18 04:07:36.407 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-18 04:07:36.407 o.a.m.w.h.SourceHandler [INFO] File rotation took 1 ms
2018-01-18 04:08:02.264 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-18 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-18 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] File rotation took 25 ms
2018-01-18 09:46:50.425 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-18 09:46:50.460 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-18 09:46:50.460 o.a.m.w.h.SourceHandler [INFO] File rotation took 35 ms
2018-01-18 09:49:16.568 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-18 09:49:16.614 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-18 09:49:16.614 o.a.m.w.h.SourceHandler [INFO] File rotation took 46 ms
2018-01-18 17:19:16.540 o.a.s.d.executor [ERROR]
java.lang.Exception: WARNING: Default and (likely) unoptimized writer
config used for hdfs writer and sensor profiler
	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234)
[stormjar.jar:?]
	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
2018-01-18 17:19:16.540 o.a.s.d.executor [ERROR]
java.lang.Exception: WARNING: Default and (likely) unoptimized writer
config used for elasticsearch writer and sensor profiler
	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234)
[stormjar.jar:?]
	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
2018-01-19 01:24:20.877 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-19 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-19 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
2018-01-19 04:07:19.923 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-19 04:07:19.939 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-19 04:07:19.939 o.a.m.w.h.SourceHandler [INFO] File rotation took 16 ms
2018-01-19 04:07:23.545 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-19 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-19 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] File rotation took 16 ms
2018-01-19 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-19 04:07:36.429 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-19 04:07:36.429 o.a.m.w.h.SourceHandler [INFO] File rotation took 23 ms
2018-01-19 04:08:02.264 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-19 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-19 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] File rotation took 25 ms
2018-01-19 09:46:50.425 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-19 09:46:50.442 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-19 09:46:50.442 o.a.m.w.h.SourceHandler [INFO] File rotation took 17 ms
2018-01-19 09:49:16.568 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-19 09:49:16.586 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-19 09:49:16.586 o.a.m.w.h.SourceHandler [INFO] File rotation took 18 ms
2018-01-19 17:08:22.126 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-19 17:08:22.142 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-19 17:08:22.142 o.a.m.w.h.SourceHandler [INFO] File rotation took 16 ms
2018-01-19 17:19:16.556 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-19 17:19:16.582 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-19 17:19:16.582 o.a.m.w.h.SourceHandler [INFO] File rotation took 26 ms
2018-01-20 01:24:20.876 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-20 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-20 01:24:20.879 o.a.m.w.h.SourceHandler [INFO] File rotation took 3 ms
2018-01-20 04:07:19.923 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-20 04:07:19.962 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-20 04:07:19.962 o.a.m.w.h.SourceHandler [INFO] File rotation took 38 ms
2018-01-20 04:07:23.544 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-20 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-20 04:07:23.561 o.a.m.w.h.SourceHandler [INFO] File rotation took 17 ms
2018-01-20 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-20 04:07:36.407 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-20 04:07:36.408 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
2018-01-20 04:08:02.264 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-20 04:08:02.290 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-20 04:08:02.290 o.a.m.w.h.SourceHandler [INFO] File rotation took 26 ms
2018-01-20 09:34:16.559 o.a.s.d.executor [ERROR]
java.lang.Exception: WARNING: Default and (likely) unoptimized writer
config used for hdfs writer and sensor profiler
	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234)
[stormjar.jar:?]
	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
2018-01-20 09:34:16.559 o.a.s.d.executor [ERROR]
java.lang.Exception: WARNING: Default and (likely) unoptimized writer
config used for elasticsearch writer and sensor profiler
	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234)
[stormjar.jar:?]
	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
2018-01-20 09:46:50.425 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-20 09:46:50.445 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-20 09:46:50.446 o.a.m.w.h.SourceHandler [INFO] File rotation took 21 ms
2018-01-20 09:49:16.568 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-20 09:49:16.570 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-20 09:49:16.570 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
2018-01-20 10:19:16.560 o.a.s.d.executor [ERROR]
java.lang.Exception: WARNING: Default and (likely) unoptimized writer
config used for hdfs writer and sensor profiler
	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234)
[stormjar.jar:?]
	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
2018-01-20 10:19:16.560 o.a.s.d.executor [ERROR]
java.lang.Exception: WARNING: Default and (likely) unoptimized writer
config used for elasticsearch writer and sensor profiler
	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234)
[stormjar.jar:?]
	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
2018-01-20 17:08:22.127 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-20 17:08:22.129 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-20 17:08:22.129 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
2018-01-20 17:19:16.556 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-20 17:19:16.558 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-20 17:19:16.558 o.a.m.w.h.SourceHandler [INFO] File rotation took 2 ms
2018-01-21 01:24:20.876 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-21 01:24:20.912 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-21 01:24:20.912 o.a.m.w.h.SourceHandler [INFO] File rotation took 32 ms
2018-01-21 04:07:19.923 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-21 04:07:19.949 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-21 04:07:19.950 o.a.m.w.h.SourceHandler [INFO] File rotation took 26 ms
2018-01-21 04:07:23.544 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-21 04:07:23.545 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-21 04:07:23.545 o.a.m.w.h.SourceHandler [INFO] File rotation took 1 ms
2018-01-21 04:07:36.406 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-21 04:07:36.429 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-21 04:07:36.429 o.a.m.w.h.SourceHandler [INFO] File rotation took 23 ms
2018-01-21 04:08:02.264 o.a.m.w.h.SourceHandler [INFO] Rotating output file...
2018-01-21 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] Performing 0
file rotation actions.
2018-01-21 04:08:02.289 o.a.m.w.h.SourceHandler [INFO] File rotation took 25 ms
2018-01-21 07:34:16.569 o.a.s.d.executor [ERROR]
java.lang.Exception: WARNING: Default and (likely) unoptimized writer
config used for elasticsearch writer and sensor profiler
	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234)
[stormjar.jar:?]
	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
2018-01-21 07:34:16.573 o.a.s.d.executor [ERROR]
java.lang.Exception: WARNING: Default and (likely) unoptimized writer
config used for hdfs writer and sensor profiler
	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234)
[stormjar.jar:?]
	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
2018-01-21 07:34:16.593 o.a.s.d.executor [ERROR]
java.lang.Exception: WARNING: Default and (likely) unoptimized writer
config used for hdfs writer and sensor profiler
	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234)
[stormjar.jar:?]
	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
2018-01-21 07:34:16.773 o.a.s.d.executor [ERROR]
java.lang.Exception: WARNING: Default and (likely) unoptimized writer
config used for elasticsearch writer and sensor profiler
	at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:234)
[stormjar.jar:?]
	at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
	at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484)
[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
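
For what it is worth, the repeated ERROR entries above are just Storm logging a warning that the indexing topology is running with the default, untuned writer configuration; they do not by themselves explain a missing index. A sketch of a per-sensor indexing config (along the lines of the Metron indexing documentation linked earlier) that sets the writer settings explicitly might look like the following. The sensor name "cef", the index name, and the batch size here are assumptions for illustration, not values taken from this thread:

```json
{
  "hdfs": {
    "index": "cef",
    "batchSize": 5,
    "enabled": true
  },
  "elasticsearch": {
    "index": "cef",
    "batchSize": 5,
    "enabled": true
  }
}
```

In a full-dev install this would typically be pushed to ZooKeeper with the zk_load_configs.sh utility so the indexing topology picks it up.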


On Mon, Jan 22, 2018 at 12:36 PM, Farrukh Naveed Anjum <
anjum.farrukh@gmail.com> wrote:

> Hi Guys,
>
> It seems like we are able to make the NiFi connection, and data is indeed
> going through the Kafka topic, yet using the CEF parser (syslogs) we are
> unable to create the Elasticsearch index.
>
>
>
>
> On Mon, Jan 22, 2018 at 12:32 PM, Farrukh Naveed Anjum <
> anjum.farrukh@gmail.com> wrote:
>
>> Hi, Gaurav,
>>
>> Did you solve it? I am also following the same use case for syslog over UDP
>> (rsyslog).
>>
>> It seems like data is coming into the Kafka topic. As you can see, it is
>> showing up.
>>
>> But the Elasticsearch index is not created.
>>
>>
>>
>> On Tue, Jan 16, 2018 at 12:37 PM, Gaurav Bapat <ga...@gmail.com>
>> wrote:
>>
>>> But I can't find how to configure it.
>>>
>>> On 16 January 2018 at 11:38, Farrukh Naveed Anjum <
>>> anjum.farrukh@gmail.com> wrote:
>>>
>>>> Yes, do configure it as per the Metron reference use case.
>>>>
>>>> On Tue, Jan 16, 2018 at 8:35 AM, Gaurav Bapat <ga...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi Kyle,
>>>>>
>>>>> I saw that I can ping from my OS to the VM and from the VM to the OS. It
>>>>> looks like this is some Kafka or ZooKeeper environment variable setup issue;
>>>>> do I need to configure that in vagrant ssh?
>>>>>
>>>>> On 16 January 2018 at 08:59, Gaurav Bapat <ga...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hey Kyle,
>>>>>>
>>>>>> I am running NiFi not under Ambari but on localhost:8089. I can ping
>>>>>> from my OS terminal to node1 but can't ping from node1 to my OS terminal. I
>>>>>> have attached a few screenshots and the contents of /etc/hosts.
>>>>>>
>>>>>> Thank You!
>>>>>>
>>>>>> On 15 January 2018 at 20:04, Kyle Richardson <
>>>>>> kylerichardson2@gmail.com> wrote:
>>>>>>
>>>>>>> It looks like your NiFi instance is running on your laptop/desktop
>>>>>>> (e.g. the VM host). My guess would be that name resolution or networking is
>>>>>>> not properly configured between the host and the guest, preventing the data
>>>>>>> from getting from NiFi to Kafka. What are the contents of /etc/hosts on the
>>>>>>> VM host? Can you ping node1 from the VM host by name and by IP address?
>>>>>>>
>>>>>>> -Kyle
>>>>>>>
>>>>>>> On Mon, Jan 15, 2018 at 6:55 AM, Gaurav Bapat <gauravb3007@gmail.com
>>>>>>> > wrote:
>>>>>>>
>>>>>>>> "Failed while waiting for acks from Kafka" is what I am getting; am I
>>>>>>>> missing some Kafka configuration?
>>>>>>>>
>>>>>>>> On 15 January 2018 at 16:50, Gaurav Bapat <ga...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hi Farrukh,
>>>>>>>>>
>>>>>>>>> I can't find any folder for my topic.
>>>>>>>>>
>>>>>>>>> On 15 January 2018 at 16:33, Farrukh Naveed Anjum <
>>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Can you check /kafka-logs on your VM box? (It should have a
>>>>>>>>>> folder named after your topic.) Can you check if it is there?
>>>>>>>>>>
>>>>>>>>>> On Mon, Jan 15, 2018 at 3:49 PM, Gaurav Bapat <
>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> I am not getting data into my Kafka topic
>>>>>>>>>>>
>>>>>>>>>>> I am using an i5 4-core processor with 16 GB of RAM, and I have
>>>>>>>>>>> allocated 12 GB of RAM to my Vagrant VM.
>>>>>>>>>>>
>>>>>>>>>>> I don't understand how to configure the Kafka broker because it is
>>>>>>>>>>> giving me "failed while waiting for acks from Kafka".
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
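
[Editor's note: as a quick way to verify whether messages are actually landing in the topic, one can list the topics and attach a console consumer on the sensor node. This is a sketch; the HDP-style install path, the ZooKeeper address node1:2181, and the topic name "cef" are assumptions based on this thread, not confirmed values.]

```shell
# List the topics the broker knows about (ZooKeeper assumed at node1:2181)
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh \
  --zookeeper node1:2181 --list

# Tail the "cef" topic; if NiFi is publishing, raw syslog lines should scroll by
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
  --zookeeper node1:2181 --topic cef
```

If the consumer shows nothing while NiFi reports successful sends, the problem is usually broker advertised listeners or host name resolution between NiFi and the broker.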
>>>>>>>>>>> On 15 January 2018 at 16:10, Farrukh Naveed Anjum <
>>>>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Can you tell me, is your Kafka topic getting data? What are your
>>>>>>>>>>>> machine specifications?
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Jan 15, 2018 at 2:56 PM, Gaurav Bapat <
>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks Farrukh,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I am not getting data in my Kafka topic even after creating
>>>>>>>>>>>>> one. The issue seems to be with the broker config; how do I configure the
>>>>>>>>>>>>> Kafka and ZooKeeper ports?
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 15 January 2018 at 13:23, Farrukh Naveed Anjum <
>>>>>>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I had a similar issue; it turned out to be an issue in Storm.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> No worker is assigned to the topology. All you need is to add an
>>>>>>>>>>>>>> additional port in
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>  Ambari -> Storm -> Configs -> supervisor.slot.ports by
>>>>>>>>>>>>>> assigning an additional port to the list.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> https://community.hortonworks.com/questions/32499/no-workers
>>>>>>>>>>>>>> -in-storm-for-squid-topology.html
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I had a similar issue and finally got it fixed.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Jan 15, 2018 at 8:45 AM, Gaurav Bapat <
>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Storm UI
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On 15 January 2018 at 08:59, Gaurav Bapat <
>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hey Jon,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I have the Storm UI, and the logs are coming from firewalls,
>>>>>>>>>>>>>>>> servers, etc. from other machines (HP ArcSight Logger).
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I have attached the NiFi screenshots. My logs are coming in,
>>>>>>>>>>>>>>>> but there is some error with Kafka, and I am having issues configuring
>>>>>>>>>>>>>>>> the Kafka broker.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On 12 January 2018 at 18:14, Zeolla@GMail.com <
>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> In Ambari under storm you can find the UI under quick
>>>>>>>>>>>>>>>>> links at the top.  That said, the issue seems to be upstream of Metron, in
>>>>>>>>>>>>>>>>> NiFi.  That is something I can't help with as much, but if you can share
>>>>>>>>>>>>>>>>> the listensyslog processor config that would be a start.  Also, share the
>>>>>>>>>>>>>>>>> config of the thing that is sending syslog as well (are these local syslog,
>>>>>>>>>>>>>>>>> is that machine aggregating syslog from other machines, etc.).  Thanks,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <
>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I have created a Kafka topic "cef", but my ListenSyslog
>>>>>>>>>>>>>>>>>> processor is not receiving any logs.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Also, I checked using tcpdump -i and the logs are reaching
>>>>>>>>>>>>>>>>>> my machine, but ListenSyslog is not receiving them
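One way to isolate the NiFi side is to hand-craft a single syslog line and push it at the ListenSyslog processor yourself; a minimal sketch, where the host, port, and CEF payload are all assumptions to adjust to your flow:

```python
import socket

# Hypothetical endpoint: wherever the ListenSyslog processor is bound (UDP).
NIFI_HOST = "127.0.0.1"
NIFI_PORT = 5140

# A minimal RFC 3164-style syslog line carrying a made-up CEF payload.
payload = "<13>Jan 12 11:13:00 testhost CEF:0|Vendor|Product|1.0|100|test event|5|src=10.0.0.1"

# A UDP send succeeds locally even if nothing is listening, so this only
# proves the datagram left this machine; watch the processor's queue in NiFi.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sock.sendto(payload.encode("utf-8"), (NIFI_HOST, NIFI_PORT))
sock.close()
print(sent)
```

If this test message shows up in the processor's output queue but real traffic does not, suspect the network path or port binding rather than the processor configuration.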
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On 12 January 2018 at 11:13, Gaurav Bapat <
>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> [root@metron incubator-metron]#
>>>>>>>>>>>>>>>>>>> ./metron-deployment/scripts/platform-info.sh
>>>>>>>>>>>>>>>>>>> Metron 0.4.3
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>> * master
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>>>>>>>>>>>>>>>>>> Author: cstella <ce...@gmail.com>
>>>>>>>>>>>>>>>>>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>     METRON-1379: Add an OBJECT_GET stellar function
>>>>>>>>>>>>>>>>>>> closes apache/incubator-metron#880
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile
>>>>>>>>>>>>>>>>>>> | 2 +-
>>>>>>>>>>>>>>>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>> ansible 2.0.0.2
>>>>>>>>>>>>>>>>>>>   config file =
>>>>>>>>>>>>>>>>>>>   configured module search path = Default w/o overrides
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>> Vagrant 1.9.6
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>> Python 2.7.5
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>>>>>>>>>>>>>>>>>> 2015-11-10T22:11:47+05:30)
>>>>>>>>>>>>>>>>>>> Maven home: /opt/maven/current
>>>>>>>>>>>>>>>>>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>>>>>>>>>>>>>>>>>> Java home: /opt/jdk1.8.0_151/jre
>>>>>>>>>>>>>>>>>>> Default locale: en_US, platform encoding: UTF-8
>>>>>>>>>>>>>>>>>>> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64",
>>>>>>>>>>>>>>>>>>> arch: "amd64", family: "unix"
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>> Docker version 1.12.6, build ec8512b/1.12.6
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>> node
>>>>>>>>>>>>>>>>>>> v8.9.3
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>> npm
>>>>>>>>>>>>>>>>>>> 5.5.1
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>>>>>>>>>>>>>>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>>>>>>>>>>>>>>>> This is free software; see the source for copying
>>>>>>>>>>>>>>>>>>> conditions.  There is NO
>>>>>>>>>>>>>>>>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A
>>>>>>>>>>>>>>>>>>> PARTICULAR PURPOSE.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>> Compiler is C++11 compliant
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu
>>>>>>>>>>>>>>>>>>> Jan 4 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>> Total System Memory = 15773.3 MB
>>>>>>>>>>>>>>>>>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>>>>>>>>>>>>>>>>>> Processor Speed: 3320.875 MHz
>>>>>>>>>>>>>>>>>>> Processor Speed: 3307.191 MHz
>>>>>>>>>>>>>>>>>>> Processor Speed: 3376.699 MHz
>>>>>>>>>>>>>>>>>>> Processor Speed: 3338.917 MHz
>>>>>>>>>>>>>>>>>>> Total Physical Processors: 4
>>>>>>>>>>>>>>>>>>> Total cores: 16
>>>>>>>>>>>>>>>>>>> Disk information:
>>>>>>>>>>>>>>>>>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>>>>>>>>>>>>>>>>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>>>>>>>>>>>>>>>>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>>>>>>>>>>>>>>>>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>>>>>>>>>>>>>>>>>> This CPU appears to support virtualization
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On 12 January 2018 at 09:25, Gaurav Bapat <
>>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Hey Jon,
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Appreciate your timely reply.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I have gone through your answer, but I still can't figure out
>>>>>>>>>>>>>>>>>>>> how to do parsing/indexing in Storm UI, as I can't find any
>>>>>>>>>>>>>>>>>>>> option for it.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Is there any other UI to do parsing/indexing?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> So, you created a new cef topic, and set up the
>>>>>>>>>>>>>>>>>>>>> appropriate parser config for it (if not, this
>>>>>>>>>>>>>>>>>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>>>>>>>>>>>>>>>>>> may be helpful)?  If so:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Here are some basic troubleshooting steps:
>>>>>>>>>>>>>>>>>>>>> 1.  Validate that the logs are getting onto the
>>>>>>>>>>>>>>>>>>>>> kafka topic that you are sending to.  If they aren't there, the problem is
>>>>>>>>>>>>>>>>>>>>> upstream from Metron.
>>>>>>>>>>>>>>>>>>>>> 2.  If they are getting onto the kafka topic they are
>>>>>>>>>>>>>>>>>>>>> being directly sent to, check the indexing kafka topic for an enriched
>>>>>>>>>>>>>>>>>>>>> version of those same logs.
>>>>>>>>>>>>>>>>>>>>> 3.  Do a binary search of the various components
>>>>>>>>>>>>>>>>>>>>> involved with ingest.
>>>>>>>>>>>>>>>>>>>>>     a. If the logs are *not* on the indexing kafka
>>>>>>>>>>>>>>>>>>>>> topic, check the enrichments topic for those logs.
>>>>>>>>>>>>>>>>>>>>>     b. If the logs are *not* on the enrichments
>>>>>>>>>>>>>>>>>>>>> topic, check the parser storm topology.
>>>>>>>>>>>>>>>>>>>>>     c. If the logs are on the enrichments topic, but
>>>>>>>>>>>>>>>>>>>>> *not* indexing, check the enrichments storm topology.
>>>>>>>>>>>>>>>>>>>>>     d. If the logs are on the indexing topic but *not*
>>>>>>>>>>>>>>>>>>>>> in Kibana, check the indexing storm topology.
>>>>>>>>>>>>>>>>>>>>>     e. If the logs are on the indexing topic and the
>>>>>>>>>>>>>>>>>>>>> indexing storm topology is in good shape, check
>>>>>>>>>>>>>>>>>>>>> elasticsearch directly.
>>>>>>>>>>>>>>>>>>>>> 4.  You should have identified where the issue is at
>>>>>>>>>>>>>>>>>>>>> this point.  Report back here with what you observed, any relevant error
>>>>>>>>>>>>>>>>>>>>> messages, etc.
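The topic checks in steps 1-3 can be done from the command line with the console consumer; a sketch, where the install path, ZooKeeper address, and topic names are assumptions based on a typical HDP/full-dev layout:

```shell
# Hypothetical paths and addresses; adjust to your cluster.
KAFKA_BIN=/usr/hdp/current/kafka-broker/bin
ZK=node1:2181

# Step 1: raw events should appear here if NiFi is publishing.
"$KAFKA_BIN/kafka-console-consumer.sh" --zookeeper "$ZK" --topic cef

# Steps 2-3: enriched copies should then flow through these topics in turn.
"$KAFKA_BIN/kafka-console-consumer.sh" --zookeeper "$ZK" --topic enrichments
"$KAFKA_BIN/kafka-console-consumer.sh" --zookeeper "$ZK" --topic indexing
```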
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Side note:  We should document a decision tree for
>>>>>>>>>>>>>>>>>>>>> troubleshooting data ingest.  It is fairly straightforward and makes me
>>>>>>>>>>>>>>>>>>>>> wonder if we already have this somewhere and I'm not aware of it?  It would
>>>>>>>>>>>>>>>>>>>>> also be a good place to put pointers to some common errors.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <
>>>>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Hello everyone, I have deployed Metron on a single
>>>>>>>>>>>>>>>>>>>>>> node machine and I would like to know how do I get Syslogs from NiFi into
>>>>>>>>>>>>>>>>>>>>>> Kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> I have created a Kafka topic by the name "cef" and I
>>>>>>>>>>>>>>>>>>>>>> can see that the topic exists in
>>>>>>>>>>>>>>>>>>>>>> Metron Configuration but I am unable to connect it
>>>>>>>>>>>>>>>>>>>>>> with Kibana
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Need Help!!
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>> With Regards
>>>>>>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> With Regards
>>>>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> With Regards
>>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> With Regards
>>>> Farrukh Naveed Anjum
>>>>
>>>
>>>
>>
>>
>> --
>> With Regards
>> Farrukh Naveed Anjum
>>
>
>
>
> --
> With Regards
> Farrukh Naveed Anjum
>



-- 
With Regards
Farrukh Naveed Anjum

Re: Getting Syslogs to Metron

Posted by Farrukh Naveed Anjum <an...@gmail.com>.
Hi Guys,

It seems like we are able to make the NiFi connection, and data is indeed
going through the Kafka topic, yet with the CEF parser (syslogs) we are
unable to create the Elasticsearch index.
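To confirm whether the index was ever created, Elasticsearch can be queried directly; a sketch, with the host, port, and index-name pattern all assumed (Metron typically writes date-suffixed indices per sensor):

```shell
# Hypothetical ES endpoint; adjust host/port to your install.
curl -s 'http://node1:9200/_cat/indices?v'
# Look for an entry resembling cef_index_<date> in the output.
```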




On Mon, Jan 22, 2018 at 12:32 PM, Farrukh Naveed Anjum <
anjum.farrukh@gmail.com> wrote:

> Hi, Gaurav,
>
> Did you solve it? I am also following the same use case for syslog over UDP
> (rsyslog)
>
> It seems like data is coming to the Kafka topic. As you can see, it's showing up.
>
> But Elasticsearch index is not created.
>
>
>
> On Tue, Jan 16, 2018 at 12:37 PM, Gaurav Bapat <ga...@gmail.com>
> wrote:
>
>> But I cant find how to configure it
>>
>> On 16 January 2018 at 11:38, Farrukh Naveed Anjum <
>> anjum.farrukh@gmail.com> wrote:
>>
>>> yes, do configure it as per metron reference usecase
>>>
>>> On Tue, Jan 16, 2018 at 8:35 AM, Gaurav Bapat <ga...@gmail.com>
>>> wrote:
>>>
>>>> Hi Kyle,
>>>>
>>>> I saw that I can ping from my OS to the VM and from the VM to the OS. It
>>>> looks like this is some Kafka or ZooKeeper environment-variable setup
>>>> issue; do I need to configure that via vagrant ssh?
>>>>
>>>> On 16 January 2018 at 08:59, Gaurav Bapat <ga...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hey Kyle,
>>>>>
>>>>> I am running NiFi not under Ambari but on localhost:8089. I can ping from
>>>>> my OS terminal to node1 but can't ping from node1 to my OS terminal. I have
>>>>> attached a few screenshots and the contents of /etc/hosts
>>>>>
>>>>> Thank You!
>>>>>
>>>>> On 15 January 2018 at 20:04, Kyle Richardson <
>>>>> kylerichardson2@gmail.com> wrote:
>>>>>
>>>>>> It looks like your Nifi instance is running on your laptop/desktop
>>>>>> (e.g. the VM host). My guess would be that name resolution or networking is
>>>>>> not properly configured between the host and the guest preventing the data
>>>>>> from getting from Nifi to Kafka. What's the contents of /etc/hosts on the
>>>>>> VM host? Can you ping node1 from the VM host by name and by IP address?
>>>>>>
>>>>>> -Kyle
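If name resolution turns out to be the problem, the VM host needs an /etc/hosts entry for the guest; a sketch, where the IP address is an assumption taken from a typical full-dev Vagrantfile (use the address defined in yours):

```shell
# Hypothetical guest address; check the Vagrantfile for the real one.
# Append to /etc/hosts on the VM host:
#   192.168.66.121   node1

ping -c 1 node1        # name resolution and reachability
nc -vz node1 6667      # Kafka broker port (HDP default)
nc -vz node1 2181      # ZooKeeper port
```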
>>>>>>
>>>>>> On Mon, Jan 15, 2018 at 6:55 AM, Gaurav Bapat <ga...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> "Failed while waiting for acks from Kafka" is what I am getting; am I
>>>>>>> missing some Kafka configuration?
>>>>>>>
>>>>>>> On 15 January 2018 at 16:50, Gaurav Bapat <ga...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi Farrukh,
>>>>>>>>
>>>>>>>> I can't find any folder for my topic
>>>>>>>>
>>>>>>>> On 15 January 2018 at 16:33, Farrukh Naveed Anjum <
>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Can you check /kafka-logs on your VM box? (It should have a folder
>>>>>>>>> named after your topic.) Can you check if it is there?
>>>>>>>>>
>>>>>>>>> On Mon, Jan 15, 2018 at 3:49 PM, Gaurav Bapat <
>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> I am not getting data into my Kafka topic
>>>>>>>>>>
>>>>>>>>>> I have used an i5 4-core processor with 16 GB RAM, and I have
>>>>>>>>>> allocated 12 GB RAM to my Vagrant VM.
>>>>>>>>>>
>>>>>>>>>> I don't understand how to configure the Kafka broker, because it is
>>>>>>>>>> giving me "failed while waiting for acks" from Kafka
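One common cause of "failed while waiting for acks" when the producer (NiFi) runs outside the VM is the broker advertising a hostname the producer cannot resolve; a hedged sketch of the relevant broker settings (Ambari: Kafka -> Configs), with the listener values assumed:

```
# server.properties fragment; hostnames and ports are assumptions.
listeners=PLAINTEXT://0.0.0.0:6667
advertised.listeners=PLAINTEXT://node1:6667
```

The advertised name must resolve from the machine running NiFi, which ties back to Kyle's /etc/hosts question above.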
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>



-- 
With Regards
Farrukh Naveed Anjum

Re: Getting Syslogs to Metron

Posted by Gaurav Bapat <ga...@gmail.com>.
Mine isn't coming into Alerts UI

Did you configure Kafka or Zookeeper?

On 22 January 2018 at 13:02, Farrukh Naveed Anjum <an...@gmail.com>
wrote:

> Hi, Gaurav,
>
> Did you solve it? I am also following the same use case for syslog over UDP
> (rsyslog)
>
> It seems like data is coming to the Kafka topic. As you can see, it's showing up.
>
> But Elasticsearch index is not created.
>
>
>
> On Tue, Jan 16, 2018 at 12:37 PM, Gaurav Bapat <ga...@gmail.com>
> wrote:
>
>> But I cant find how to configure it
>>
>> On 16 January 2018 at 11:38, Farrukh Naveed Anjum <
>> anjum.farrukh@gmail.com> wrote:
>>
>>> yes, do configure it as per metron reference usecase
>>>
>>> On Tue, Jan 16, 2018 at 8:35 AM, Gaurav Bapat <ga...@gmail.com>
>>> wrote:
>>>
>>>> Hi Kyle,
>>>>
>>>> I saw that I can ping from my OS to the VM and from the VM to the OS. It
>>>> looks like this is some Kafka or ZooKeeper environment-variable setup
>>>> issue; do I need to configure that via vagrant ssh?
>>>>
>>>> On 16 January 2018 at 08:59, Gaurav Bapat <ga...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hey Kyle,
>>>>>
>>>>> I am running NiFi not under Ambari but on localhost:8089. I can ping from
>>>>> my OS terminal to node1 but can't ping from node1 to my OS terminal. I have
>>>>> attached a few screenshots and the contents of /etc/hosts
>>>>>
>>>>> Thank You!
>>>>>
>>>>> On 15 January 2018 at 20:04, Kyle Richardson <
>>>>> kylerichardson2@gmail.com> wrote:
>>>>>
>>>>>> It looks like your Nifi instance is running on your laptop/desktop
>>>>>> (e.g. the VM host). My guess would be that name resolution or networking is
>>>>>> not properly configured between the host and the guest preventing the data
>>>>>> from getting from Nifi to Kafka. What's the contents of /etc/hosts on the
>>>>>> VM host? Can you ping node1 from the VM host by name and by IP address?
>>>>>>
>>>>>> -Kyle
>>>>>>
>>>>>> On Mon, Jan 15, 2018 at 6:55 AM, Gaurav Bapat <ga...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> "Failed while waiting for acks from Kafka" is what I am getting; am I
>>>>>>> missing some Kafka configuration?
>>>>>>>
>>>>>>> On 15 January 2018 at 16:50, Gaurav Bapat <ga...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi Farrukh,
>>>>>>>>
>>>>>>>> I can't find any folder for my topic
>>>>>>>>
>>>>>>>> On 15 January 2018 at 16:33, Farrukh Naveed Anjum <
>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Can you check /kafka-logs on your VM box? (It should have a folder
>>>>>>>>> named after your topic.) Can you check if it is there?
>>>>>>>>>
>>>>>>>>> On Mon, Jan 15, 2018 at 3:49 PM, Gaurav Bapat <
>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> I am not getting data into my Kafka topic
>>>>>>>>>>
>>>>>>>>>> I have used an i5 4-core processor with 16 GB RAM, and I have
>>>>>>>>>> allocated 12 GB RAM to my Vagrant VM.
>>>>>>>>>>
>>>>>>>>>> I don't understand how to configure the Kafka broker, because it is
>>>>>>>>>> giving me "failed while waiting for acks" from Kafka
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 15 January 2018 at 16:10, Farrukh Naveed Anjum <
>>>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Can you tell me, is your Kafka topic getting data? What are your
>>>>>>>>>>> machine specifications?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Jan 15, 2018 at 2:56 PM, Gaurav Bapat <
>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Thanks Farrukh,
>>>>>>>>>>>>
>>>>>>>>>>>> I am not getting data in my Kafka topic even after creating
>>>>>>>>>>>> one; the issue seems to be with the broker config. How do I
>>>>>>>>>>>> configure the Kafka and ZooKeeper ports?
>>>>>>>>>>>>
>>>>>>>>>>>> On 15 January 2018 at 13:23, Farrukh Naveed Anjum <
>>>>>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I had a similar issue; it turned out to be an issue in Storm.
>>>>>>>>>>>>>
>>>>>>>>>>>>> No worker was assigned to the topology; all you need to do is add
>>>>>>>>>>>>> an additional port in
>>>>>>>>>>>>>
>>>>>>>>>>>>>  Ambari -> Storm -> Configs -> supervisor.slots.ports by
>>>>>>>>>>>>> assigning an additional port to the list
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> https://community.hortonworks.com/questions/32499/no-workers
>>>>>>>>>>>>> -in-storm-for-squid-topology.html
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> I had similar issue and finally got it fixed
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Jan 15, 2018 at 8:45 AM, Gaurav Bapat <
>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Storm UI
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 15 January 2018 at 08:59, Gaurav Bapat <
>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hey Jon,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I have the Storm UI, and the logs are coming from firewalls,
>>>>>>>>>>>>>>> servers, etc. from other machines (HP ArcSight Logger).
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I have attached the NiFi screenshots; my logs are coming in,
>>>>>>>>>>>>>>> but there is some error with Kafka and I am having issues
>>>>>>>>>>>>>>> configuring the Kafka broker
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On 12 January 2018 at 18:14, Zeolla@GMail.com <
>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> In Ambari under storm you can find the UI under quick links
>>>>>>>>>>>>>>>> at the top.  That said, the issue seems to be upstream of Metron, in NiFi.
>>>>>>>>>>>>>>>> That is something I can't help with as much, but if you can share the
>>>>>>>>>>>>>>>> listensyslog processor config that would be a start.  Also, share the
>>>>>>>>>>>>>>>> config of the thing that is sending syslog as well (are these local syslog,
>>>>>>>>>>>>>>>> is that machine aggregating syslog from other machines, etc.).  Thanks,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <
>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I have created a Kafka topic "cef", but my ListenSyslog
>>>>>>>>>>>>>>>>> processor is not receiving any logs.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Also, I checked using tcpdump -i and the logs are reaching
>>>>>>>>>>>>>>>>> my machine, but ListenSyslog is not receiving them
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On 12 January 2018 at 11:13, Gaurav Bapat <
>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> [root@metron incubator-metron]#
>>>>>>>>>>>>>>>>>> ./metron-deployment/scripts/platform-info.sh
>>>>>>>>>>>>>>>>>> Metron 0.4.3
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>> * master
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>>>>>>>>>>>>>>>>> Author: cstella <ce...@gmail.com>
>>>>>>>>>>>>>>>>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>     METRON-1379: Add an OBJECT_GET stellar function
>>>>>>>>>>>>>>>>>> closes apache/incubator-metron#880
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile
>>>>>>>>>>>>>>>>>> | 2 +-
>>>>>>>>>>>>>>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>> ansible 2.0.0.2
>>>>>>>>>>>>>>>>>>   config file =
>>>>>>>>>>>>>>>>>>   configured module search path = Default w/o overrides
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>> Vagrant 1.9.6
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>> Python 2.7.5
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>>>>>>>>>>>>>>>>> 2015-11-10T22:11:47+05:30)
>>>>>>>>>>>>>>>>>> Maven home: /opt/maven/current
>>>>>>>>>>>>>>>>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>>>>>>>>>>>>>>>>> Java home: /opt/jdk1.8.0_151/jre
>>>>>>>>>>>>>>>>>> Default locale: en_US, platform encoding: UTF-8
>>>>>>>>>>>>>>>>>> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64",
>>>>>>>>>>>>>>>>>> arch: "amd64", family: "unix"
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>> Docker version 1.12.6, build ec8512b/1.12.6
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>> node
>>>>>>>>>>>>>>>>>> v8.9.3
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>> npm
>>>>>>>>>>>>>>>>>> 5.5.1
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>>>>>>>>>>>>>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>>>>>>>>>>>>>>> This is free software; see the source for copying
>>>>>>>>>>>>>>>>>> conditions.  There is NO
>>>>>>>>>>>>>>>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A
>>>>>>>>>>>>>>>>>> PARTICULAR PURPOSE.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>> Compiler is C++11 compliant
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu
>>>>>>>>>>>>>>>>>> Jan 4 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>> Total System Memory = 15773.3 MB
>>>>>>>>>>>>>>>>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>>>>>>>>>>>>>>>>> Processor Speed: 3320.875 MHz
>>>>>>>>>>>>>>>>>> Processor Speed: 3307.191 MHz
>>>>>>>>>>>>>>>>>> Processor Speed: 3376.699 MHz
>>>>>>>>>>>>>>>>>> Processor Speed: 3338.917 MHz
>>>>>>>>>>>>>>>>>> Total Physical Processors: 4
>>>>>>>>>>>>>>>>>> Total cores: 16
>>>>>>>>>>>>>>>>>> Disk information:
>>>>>>>>>>>>>>>>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>>>>>>>>>>>>>>>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>>>>>>>>>>>>>>>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>>>>>>>>>>>>>>>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>>>>>>>>>>>>>>>>> This CPU appears to support virtualization
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On 12 January 2018 at 09:25, Gaurav Bapat <
>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Hey Jon,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Appreciate your timely reply.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I have gone through your answer, but I still can't figure
>>>>>>>>>>>>>>>>>>>> out how to do parsing/indexing in the Storm UI, as I can't find any option
>>>>>>>>>>>>>>>>>>>> for it.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Is there any other UI to do parsing/indexing?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> So, you created a new cef topic, and set up the
>>>>>>>>>>>>>>>>>>>> appropriate parser config for it (if not, this
>>>>>>>>>>>>>>>>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>>>>>>>>>>>>>>>>> may be helpful)?  If so:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Here are some basic troubleshooting steps:
>>>>>>>>>>>>>>>>>>>> 1.  Validate that the logs are getting onto the
>>>>>>>>>>>>>>>>>>>> kafka topic that you are sending to.  If they aren't there, the problem is
>>>>>>>>>>>>>>>>>>>> upstream from Metron.
>>>>>>>>>>>>>>>>>>>> 2.  If they are getting onto the kafka topic they are
>>>>>>>>>>>>>>>>>>>> being directly sent to, check the indexing kafka topic for an enriched
>>>>>>>>>>>>>>>>>>>> version of those same logs.
>>>>>>>>>>>>>>>>>>>> 3.  Do a binary search of the various components
>>>>>>>>>>>>>>>>>>>> involved with ingest.
>>>>>>>>>>>>>>>>>>>>     a. If the logs are *not* on the indexing kafka
>>>>>>>>>>>>>>>>>>>> topic, check the enrichments topic for those logs.
>>>>>>>>>>>>>>>>>>>>     b. If the logs are *not* on the enrichments topic,
>>>>>>>>>>>>>>>>>>>> check the parser storm topology.
>>>>>>>>>>>>>>>>>>>>     c. If the logs are on the enrichments topic, but
>>>>>>>>>>>>>>>>>>>> *not* indexing, check the enrichments storm topology.
>>>>>>>>>>>>>>>>>>>>     d. If the logs are on the indexing but *not*
>>>>>>>>>>>>>>>>>>>> Kibana, check the indexing storm topic.
>>>>>>>>>>>>>>>>>>>>     e. If the logs are in on the indexing topic and
>>>>>>>>>>>>>>>>>>>> indexing storm topic is in good shape, check
>>>>>>>>>>>>>>>>>>>> elasticsearch directly.
>>>>>>>>>>>>>>>>>>>> 4.  You should have identified where the issue is at
>>>>>>>>>>>>>>>>>>>> this point.  Report back here with what you observed, any relevant error
>>>>>>>>>>>>>>>>>>>> messages, etc.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Side note:  We should document a decision tree for
>>>>>>>>>>>>>>>>>>>> troubleshooting data ingest.  It is fairly straightforward and makes me
>>>>>>>>>>>>>>>>>>>> wonder if we already have this somewhere and I'm not aware of it?  It would
>>>>>>>>>>>>>>>>>>>> also be a good place to put pointers to some common errors.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <
>>>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Hello everyone, I have deployed Metron on a single
>>>>>>>>>>>>>>>>>>>>> node machine and I would like to know how do I get Syslogs from NiFi into
>>>>>>>>>>>>>>>>>>>>> Kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> I have created a Kafka topic by the name "cef" and I
>>>>>>>>>>>>>>>>>>>>> can see that the topic exists in
>>>>>>>>>>>>>>>>>>>>> Metron Configuration but I am unable to connect it
>>>>>>>>>>>>>>>>>>>>> with Kibana
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Need Help!!
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>> With Regards
>>>>>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> With Regards
>>>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> With Regards
>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> With Regards
>>> Farrukh Naveed Anjum
>>>
>>
>>
>
>
> --
> With Regards
> Farrukh Naveed Anjum
>
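The binary-search troubleshooting steps quoted in this thread amount to a small decision tree, which could be sketched as a shell helper. This is purely illustrative (the `locate_ingest_fault` name and yes/no convention are invented here, not part of Metron): each argument answers "are the logs present at that hop?"

```shell
# Sketch of the quoted ingest-troubleshooting decision tree.
# Arguments are yes/no answers to: logs on the parser topic (e.g. "cef")?
# on the enrichments topic? on the indexing topic? in Elasticsearch?
locate_ingest_fault() {
  on_parser="$1"; on_enrich="$2"; on_index="$3"; in_es="$4"
  if [ "$on_parser" != "yes" ]; then
    echo "upstream of Metron (NiFi / syslog source)"
  elif [ "$on_enrich" != "yes" ]; then
    echo "parser storm topology"
  elif [ "$on_index" != "yes" ]; then
    echo "enrichment storm topology"
  elif [ "$in_es" != "yes" ]; then
    echo "indexing storm topology"
  else
    echo "ingest path looks healthy; check the Kibana index pattern"
  fi
}
```

For example, `locate_ingest_fault yes yes no no` points at the enrichment storm topology: the logs made it through parsing but never reached the indexing topic.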

Re: Getting Syslogs to Metron

Posted by Farrukh Naveed Anjum <an...@gmail.com>.
Hi Gaurav,

Did you solve it? I am also working on the same use case, ingesting syslog
over UDP (rsyslog).

It seems like data is coming into the Kafka topic; as you can see, it is
showing up.

But the Elasticsearch index is not created.
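A quick way to confirm where the data stops is to check each hop in turn. The commands below are a sketch against a full-dev style install where Kafka and Elasticsearch run on node1; adjust host names, paths, and the topic name to your own setup:

```shell
# 1. Is the raw data on the parser topic?
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
  --zookeeper node1:2181 --topic cef --from-beginning | head

# 2. Did an enriched copy reach the indexing topic?
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
  --zookeeper node1:2181 --topic indexing --from-beginning | head

# 3. Was an index actually created in Elasticsearch?
curl -s 'http://node1:9200/_cat/indices?v'
```

If step 1 shows data but step 3 shows no index, the fault is somewhere in the Storm topologies in between, which matches the indexing-bolt exception described above.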



On Tue, Jan 16, 2018 at 12:37 PM, Gaurav Bapat <ga...@gmail.com>
wrote:

> But I can't find how to configure it
>
> On 16 January 2018 at 11:38, Farrukh Naveed Anjum <anjum.farrukh@gmail.com
> > wrote:
>
>> Yes, do configure it as per the Metron reference use case
>>
>> On Tue, Jan 16, 2018 at 8:35 AM, Gaurav Bapat <ga...@gmail.com>
>> wrote:
>>
>>> Hi Kyle,
>>>
>>> I saw that I can ping from my OS to VM and from VM to OS. Looks like
>>> this is some Kafka or Zookeeper environment variables setup issue, do I
>>> need to configure that in vagrant ssh?
>>>
>>> On 16 January 2018 at 08:59, Gaurav Bapat <ga...@gmail.com> wrote:
>>>
>>>> Hey Kyle,
>>>>
>>>> I am running NiFi not on Ambari but on localhost:8089, I can ping from
>>>> my OS terminal to node1 but can't ping from node1 to my OS terminal, I have
>>>> attached few screenshots and the contents of /etc/hosts
>>>>
>>>> Thank You!
>>>>
>>>> On 15 January 2018 at 20:04, Kyle Richardson <kylerichardson2@gmail.com
>>>> > wrote:
>>>>
>>>>> It looks like your Nifi instance is running on your laptop/desktop
>>>>> (e.g. the VM host). My guess would be that name resolution or networking is
>>>>> not properly configured between the host and the guest preventing the data
>>>>> from getting from Nifi to Kafka. What's the contents of /etc/hosts on the
>>>>> VM host? Can you ping node1 from the VM host by name and by IP address?
>>>>>
>>>>> -Kyle
>>>>>
>>>>> On Mon, Jan 15, 2018 at 6:55 AM, Gaurav Bapat <ga...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> "Failed while waiting for acks from Kafka" is what I am getting.
>>>>>> Am I missing some Kafka configuration?
>>>>>>
>>>>>> On 15 January 2018 at 16:50, Gaurav Bapat <ga...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Farrukh,
>>>>>>>
>>>>>>> I cant find any folder by my topic
>>>>>>>
>>>>>>> On 15 January 2018 at 16:33, Farrukh Naveed Anjum <
>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>
>>>>>>>> Can you check /kafka-logs on your VM box (it should have a folder
>>>>>>>> named after your topic)? Can you check if it is there?
>>>>>>>>
>>>>>>>> On Mon, Jan 15, 2018 at 3:49 PM, Gaurav Bapat <
>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> I am not getting data into my Kafka topic
>>>>>>>>>
>>>>>>>>> I am using an i5 4-core processor with 16 GB RAM, and I have
>>>>>>>>> allocated 12 GB RAM to my Vagrant VM.
>>>>>>>>>
>>>>>>>>> I don't understand how to configure the Kafka broker, because it is
>>>>>>>>> giving me "failed while waiting for acks" from Kafka
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 15 January 2018 at 16:10, Farrukh Naveed Anjum <
>>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Can you tell me is your KAFKA Topic getting data ? What are you
>>>>>>>>>> machine specifications ?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, Jan 15, 2018 at 2:56 PM, Gaurav Bapat <
>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Thanks Farrukh,
>>>>>>>>>>>
>>>>>>>>>>> I am not getting data in my kafka topic even after creating one,
>>>>>>>>>>> the issue seems to be with broker config, how to configure Kafka and
>>>>>>>>>>> Zookeeper port?
>>>>>>>>>>>
>>>>>>>>>>> On 15 January 2018 at 13:23, Farrukh Naveed Anjum <
>>>>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>
>>>>>>>>>>>> I had a similar issue; it turned out to be an issue in Storm.
>>>>>>>>>>>>
>>>>>>>>>>>> No worker was assigned to the topology; all you need to do is add
>>>>>>>>>>>> an additional port in
>>>>>>>>>>>>
>>>>>>>>>>>>  Ambari -> Storm -> Configs -> supervisor.slots.ports by
>>>>>>>>>>>> assigning an additional port to the list
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> https://community.hortonworks.com/questions/32499/no-workers
>>>>>>>>>>>> -in-storm-for-squid-topology.html
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I had similar issue and finally got it fixed
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Jan 15, 2018 at 8:45 AM, Gaurav Bapat <
>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Storm UI
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 15 January 2018 at 08:59, Gaurav Bapat <
>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hey Jon,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I have Storm UI and the logs are coming from firewalls,
>>>>>>>>>>>>>> servers, etc from other machines(HP ArcSight Logger).
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I have attached the NiFi screenshots, my logs are coming but
>>>>>>>>>>>>>> there is some error with Kafka and I am having issues with configuring
>>>>>>>>>>>>>> Kafka broker
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 12 January 2018 at 18:14, Zeolla@GMail.com <
>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> In Ambari under storm you can find the UI under quick links
>>>>>>>>>>>>>>> at the top.  That said, the issue seems to be upstream of Metron, in NiFi.
>>>>>>>>>>>>>>> That is something I can't help with as much, but if you can share the
>>>>>>>>>>>>>>> listensyslog processor config that would be a start.  Also, share the
>>>>>>>>>>>>>>> config of the thing that is sending syslog as well (are these local syslog,
>>>>>>>>>>>>>>> is that machine aggregating syslog from other machines, etc.).  Thanks,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <
>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I have created a Kafka topic "cef" but my ListenSyslog
>>>>>>>>>>>>>>>> processor is not getting any logs.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Also, I checked using tcpdump -i and the logs are reaching
>>>>>>>>>>>>>>>> my machine, but ListenSyslog is not receiving them
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On 12 January 2018 at 11:13, Gaurav Bapat <
>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [root@metron incubator-metron]#
>>>>>>>>>>>>>>>>> ./metron-deployment/scripts/platform-info.sh
>>>>>>>>>>>>>>>>> Metron 0.4.3
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>> * master
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>>>>>>>>>>>>>>>> Author: cstella <ce...@gmail.com>
>>>>>>>>>>>>>>>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>     METRON-1379: Add an OBJECT_GET stellar function closes
>>>>>>>>>>>>>>>>> apache/incubator-metron#880
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile
>>>>>>>>>>>>>>>>> | 2 +-
>>>>>>>>>>>>>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>> ansible 2.0.0.2
>>>>>>>>>>>>>>>>>   config file =
>>>>>>>>>>>>>>>>>   configured module search path = Default w/o overrides
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>> Vagrant 1.9.6
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>> Python 2.7.5
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>>>>>>>>>>>>>>>> 2015-11-10T22:11:47+05:30)
>>>>>>>>>>>>>>>>> Maven home: /opt/maven/current
>>>>>>>>>>>>>>>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>>>>>>>>>>>>>>>> Java home: /opt/jdk1.8.0_151/jre
>>>>>>>>>>>>>>>>> Default locale: en_US, platform encoding: UTF-8
>>>>>>>>>>>>>>>>> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64",
>>>>>>>>>>>>>>>>> arch: "amd64", family: "unix"
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>> Docker version 1.12.6, build ec8512b/1.12.6
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>> node
>>>>>>>>>>>>>>>>> v8.9.3
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>> npm
>>>>>>>>>>>>>>>>> 5.5.1
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>>>>>>>>>>>>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>>>>>>>>>>>>>> This is free software; see the source for copying
>>>>>>>>>>>>>>>>> conditions.  There is NO
>>>>>>>>>>>>>>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A
>>>>>>>>>>>>>>>>> PARTICULAR PURPOSE.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>> Compiler is C++11 compliant
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu
>>>>>>>>>>>>>>>>> Jan 4 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>> Total System Memory = 15773.3 MB
>>>>>>>>>>>>>>>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>>>>>>>>>>>>>>>> Processor Speed: 3320.875 MHz
>>>>>>>>>>>>>>>>> Processor Speed: 3307.191 MHz
>>>>>>>>>>>>>>>>> Processor Speed: 3376.699 MHz
>>>>>>>>>>>>>>>>> Processor Speed: 3338.917 MHz
>>>>>>>>>>>>>>>>> Total Physical Processors: 4
>>>>>>>>>>>>>>>>> Total cores: 16
>>>>>>>>>>>>>>>>> Disk information:
>>>>>>>>>>>>>>>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>>>>>>>>>>>>>>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>>>>>>>>>>>>>>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>>>>>>>>>>>>>>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>>>>>>>>>>>>>>>> This CPU appears to support virtualization
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On 12 January 2018 at 09:25, Gaurav Bapat <
>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Hey Jon,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Appreciate your timely reply.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I have gone through your answer, but I still can't figure out
>>>>>>>>>>>>>>>>>> how to do parsing/indexing in the Storm UI, as I can't find any option for
>>>>>>>>>>>>>>>>>> it.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Is there any other UI to do parsing/indexing?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> So, you created a new cef topic, and set up the
>>>>>>>>>>>>>>>>>>> appropriate parser config for it (if not, this
>>>>>>>>>>>>>>>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>>>>>>>>>>>>>>>> may be helpful)?  If so:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Here are some basic troubleshooting steps:
>>>>>>>>>>>>>>>>>>> 1.  Validate that the logs are getting onto the
>>>>>>>>>>>>>>>>>>> kafka topic that you are sending to.  If they aren't there, the problem is
>>>>>>>>>>>>>>>>>>> upstream from Metron.
>>>>>>>>>>>>>>>>>>> 2.  If they are getting onto the kafka topic they are
>>>>>>>>>>>>>>>>>>> being directly sent to, check the indexing kafka topic for an enriched
>>>>>>>>>>>>>>>>>>> version of those same logs.
>>>>>>>>>>>>>>>>>>> 3.  Do a binary search of the various components
>>>>>>>>>>>>>>>>>>> involved with ingest.
>>>>>>>>>>>>>>>>>>>     a. If the logs are *not* on the indexing kafka
>>>>>>>>>>>>>>>>>>> topic, check the enrichments topic for those logs.
>>>>>>>>>>>>>>>>>>>     b. If the logs are *not* on the enrichments topic,
>>>>>>>>>>>>>>>>>>> check the parser storm topology.
>>>>>>>>>>>>>>>>>>>     c. If the logs are on the enrichments topic, but
>>>>>>>>>>>>>>>>>>> *not* indexing, check the enrichments storm topology.
>>>>>>>>>>>>>>>>>>>     d. If the logs are on the indexing but *not*
>>>>>>>>>>>>>>>>>>> Kibana, check the indexing storm topic.
>>>>>>>>>>>>>>>>>>>     e. If the logs are in on the indexing topic and
>>>>>>>>>>>>>>>>>>> indexing storm topic is in good shape, check
>>>>>>>>>>>>>>>>>>> elasticsearch directly.
>>>>>>>>>>>>>>>>>>> 4.  You should have identified where the issue is at
>>>>>>>>>>>>>>>>>>> this point.  Report back here with what you observed, any relevant error
>>>>>>>>>>>>>>>>>>> messages, etc.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Side note:  We should document a decision tree for
>>>>>>>>>>>>>>>>>>> troubleshooting data ingest.  It is fairly straightforward and makes me
>>>>>>>>>>>>>>>>>>> wonder if we already have this somewhere and I'm not aware of it?  It would
>>>>>>>>>>>>>>>>>>> also be a good place to put pointers to some common errors.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <
>>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Hello everyone, I have deployed Metron on a single node
>>>>>>>>>>>>>>>>>>>> machine and I would like to know how do I get Syslogs from NiFi into Kibana
>>>>>>>>>>>>>>>>>>>> dashboard?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I have created a Kafka topic by the name "cef" and I
>>>>>>>>>>>>>>>>>>>> can see that the topic exists in
>>>>>>>>>>>>>>>>>>>> Metron Configuration but I am unable to connect it with
>>>>>>>>>>>>>>>>>>>> Kibana
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Need Help!!
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> With Regards
>>>>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> With Regards
>>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> With Regards
>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>>
>> --
>> With Regards
>> Farrukh Naveed Anjum
>>
>
>


-- 
With Regards
Farrukh Naveed Anjum
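For the NiFi side of this thread, a quick end-to-end check is to hand-craft a syslog datagram at the ListenSyslog port. The host, port, and interface below are illustrative; substitute the values from your own ListenSyslog processor configuration:

```shell
# Is anything listening on the ListenSyslog UDP port?
ss -lnu | grep 5140

# Send one RFC 3164-style test message to the processor.
printf '<13>Jan 22 10:00:00 testhost test: hello metron\n' | nc -u -w1 127.0.0.1 5140

# Confirm the datagram arrives on the interface NiFi listens on.
tcpdump -i eth0 -n udp port 5140
```

If tcpdump sees the packet but ListenSyslog does not, check that the processor's port matches and that no local firewall rule is dropping the traffic.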

Re: Metron Install - Vagrant provision error.

Posted by Otto Fowler <ot...@gmail.com>.
If the newest Java 8 doesn’t work, that would be a bug, imho


On January 17, 2018 at 07:20:35, Srikanth Nagarajan (sri@gandivanetworks.com)
wrote:

What is the highest version of Java supported?

______________________
*Srikanth Nagarajan *
President
*Gandiva Networks Inc*
*732.690.1884 <732.690.1884>* Mobile
sri@gandivanetworks.com
www.gandivanetworks.com

On Jan 17, 2018, at 5:22 PM, Otto Fowler <ot...@gmail.com> wrote:

We do not support Java 9 yet.



On January 17, 2018 at 04:25:29, Srikanth Nagarajan (sri@gandivanetworks.com)
wrote:

InvocationTargetException: java.nio.file.NotDirectoryException:
/Library/Java/JavaVirtualMachines/jdk-9.0.1.jdk/Contents/Home/lib/modules

Re: Metron Install - Vagrant provision error.

Posted by Srikanth Nagarajan <sr...@gandivanetworks.com>.
What is the highest version of Java supported? 

______________________
Srikanth Nagarajan 
President 
Gandiva Networks Inc
732.690.1884 Mobile
sri@gandivanetworks.com
www.gandivanetworks.com

> On Jan 17, 2018, at 5:22 PM, Otto Fowler <ot...@gmail.com> wrote:
> 
> We do not support Java 9 yet.
> 
> 
> 
>> On January 17, 2018 at 04:25:29, Srikanth Nagarajan (sri@gandivanetworks.com) wrote:
>> 
>> InvocationTargetException: java.nio.file.NotDirectoryException: /Library/Java/JavaVirtualMachines/jdk-9.0.1.jdk/Contents/Home/lib/modules 

Re: Metron Install - Vagrant provision error.

Posted by Otto Fowler <ot...@gmail.com>.
We do not support Java 9 yet.



On January 17, 2018 at 04:25:29, Srikanth Nagarajan (sri@gandivanetworks.com)
wrote:

InvocationTargetException: java.nio.file.NotDirectoryException:
/Library/Java/JavaVirtualMachines/jdk-9.0.1.jdk/Contents/Home/lib/modules
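Since Java 9 is not yet supported, it can help to guard build scripts with a version check before invoking Maven. The `check_java_version` helper below is a hypothetical sketch, not part of Metron:

```shell
# Accepts the version string printed by `java -version` (e.g. 1.8.0_151)
# and reports whether this Metron build chain supports it.
check_java_version() {
  case "$1" in
    1.8.*) echo "supported" ;;
    *)     echo "unsupported (build with JDK 1.8)" ;;
  esac
}
```

It could be wired up as `check_java_version "$(java -version 2>&1 | awk -F '"' '/version/ {print $2}')"` at the top of a build wrapper.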

Re: Metron Install - Vagrant provision error.

Posted by Otto Fowler <ot...@gmail.com>.
   - Is this the complete error? Can you post the ansible.log in that
   directory?
   - Do you have docker installed and running?
   - Can you run METRON_SRC_DIR/metron-deployment/scripts/platform-info.sh
   and put the output in a mail?

ottO




On January 16, 2018 at 02:42:39, Srikanth Nagarajan (sri@gandivanetworks.com)
wrote:

Hi,

I am getting the following error in the full development install (single
box on Mac OSX ) while following the development install procedure.

vagrant provision gives the below error and more below that.

fatal: [node1 -> localhost]: FAILED! => { "changed": true, "cmd": "cd
/Users/sri/metron/metron-deployment/playbooks/../.. && mvn clean package
-DskipTests -T 2C -P HDP-2.5.0.0,mpack", "delta": "0:00:02.478441", "end":
"2018-01-16 13:09:22.953422", "failed": true, "invocation": {
"module_args": { "_raw_params": "cd
/Users/sri/metron/metron-deployment/playbooks/../.. && mvn clean package
-DskipTests -T 2C -P HDP-2.5.0.0,mpack", "_uses_shell": true, "chdir":
null, "creates": null, "executable": null, "removes": null, "warn": true },
"module_name": "command" }, "rc": 1, "start": "2018-01-16 13:09:20.474981",
"stderr": ""

Any help would be appreciated.

Thanks

Srikanth

______________________

*Srikanth Nagarajan*
*Principal*

*Gandiva Networks Inc*

*732.690.1884* Mobile

sri@gandivanetworks.com

www.gandivanetworks.com

Please consider the environment before printing this. NOTICE: The
information contained in this e-mail message is intended for addressee(s)
only. If you have received this message in error please notify the sender.

Re: Getting Syslogs to Metron

Posted by Gaurav Bapat <ga...@gmail.com>.
But I can't find how to configure it
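One frequent cause of the "failed while waiting for acks" error when NiFi runs on the VM host is that the Kafka broker advertises a hostname the host cannot resolve or reach. A hypothetical sketch of the relevant broker settings (set in Ambari under Kafka -> Configs; the host name and the HDP default port 6667 are illustrative):

```properties
# Bind on all interfaces, but advertise an address that is resolvable
# from the machine running NiFi (e.g. via an /etc/hosts entry for node1).
listeners=PLAINTEXT://0.0.0.0:6667
advertised.listeners=PLAINTEXT://node1:6667
```

Also make sure the NiFi PublishKafka processor points at node1:6667 rather than localhost, since localhost on the host machine is not the VM.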

On 16 January 2018 at 11:38, Farrukh Naveed Anjum <an...@gmail.com>
wrote:

> Yes, do configure it as per the Metron reference use case
>
> On Tue, Jan 16, 2018 at 8:35 AM, Gaurav Bapat <ga...@gmail.com>
> wrote:
>
>> Hi Kyle,
>>
>> I saw that I can ping from my OS to VM and from VM to OS. Looks like this
>> is some Kafka or Zookeeper environment variables setup issue, do I need to
>> configure that in vagrant ssh?
>>
>> On 16 January 2018 at 08:59, Gaurav Bapat <ga...@gmail.com> wrote:
>>
>>> Hey Kyle,
>>>
>>> I am running NiFi not on Ambari but on localhost:8089, I can ping from
>>> my OS terminal to node1 but can't ping from node1 to my OS terminal, I have
>>> attached few screenshots and the contents of /etc/hosts
>>>
>>> Thank You!
>>>
>>> On 15 January 2018 at 20:04, Kyle Richardson <ky...@gmail.com>
>>> wrote:
>>>
>>>> It looks like your Nifi instance is running on your laptop/desktop
>>>> (e.g. the VM host). My guess would be that name resolution or networking is
>>>> not properly configured between the host and the guest preventing the data
>>>> from getting from Nifi to Kafka. What's the contents of /etc/hosts on the
>>>> VM host? Can you ping node1 from the VM host by name and by IP address?
>>>>
>>>> -Kyle
>>>>
>>>> On Mon, Jan 15, 2018 at 6:55 AM, Gaurav Bapat <ga...@gmail.com>
>>>> wrote:
>>>>
>>>>> "Failed while waiting for acks from Kafka" is what I am getting.
>>>>> Am I missing some Kafka configuration?
>>>>>
>>>>> On 15 January 2018 at 16:50, Gaurav Bapat <ga...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Farrukh,
>>>>>>
>>>>>> I cant find any folder by my topic
>>>>>>
>>>>>> On 15 January 2018 at 16:33, Farrukh Naveed Anjum <
>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>
>>>>>>> Can you check /kafka-logs on your VM box (it should have a folder
>>>>>>> named after your topic)? Can you check if it is there?
>>>>>>>
>>>>>>> On Mon, Jan 15, 2018 at 3:49 PM, Gaurav Bapat <gauravb3007@gmail.com
>>>>>>> > wrote:
>>>>>>>
>>>>>>>> I am not getting data into my Kafka topic
>>>>>>>>
>>>>>>>> I am using an i5 4-core processor with 16 GB RAM, and I have
>>>>>>>> allocated 12 GB RAM to my Vagrant VM.
>>>>>>>>
>>>>>>>> I don't understand how to configure the Kafka broker, because it is
>>>>>>>> giving me "failed while waiting for acks" from Kafka
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 15 January 2018 at 16:10, Farrukh Naveed Anjum <
>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Can you tell me is your KAFKA Topic getting data ? What are you
>>>>>>>>> machine specifications ?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Jan 15, 2018 at 2:56 PM, Gaurav Bapat <
>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Thanks Farrukh,
>>>>>>>>>>
>>>>>>>>>> I am not getting data in my kafka topic even after creating one,
>>>>>>>>>> the issue seems to be with broker config, how to configure Kafka and
>>>>>>>>>> Zookeeper port?
>>>>>>>>>>
>>>>>>>>>> On 15 January 2018 at 13:23, Farrukh Naveed Anjum <
>>>>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>> I had a similar issue; it turned out to be an issue in Storm.
>>>>>>>>>>>
>>>>>>>>>>> No worker was assigned to the topology; all you need to do is add
>>>>>>>>>>> an additional port in
>>>>>>>>>>>
>>>>>>>>>>>  Ambari -> Storm -> Configs -> supervisor.slots.ports by
>>>>>>>>>>> assigning an additional port to the list
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> https://community.hortonworks.com/questions/32499/no-workers
>>>>>>>>>>> -in-storm-for-squid-topology.html
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> I had similar issue and finally got it fixed
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Jan 15, 2018 at 8:45 AM, Gaurav Bapat <
>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Storm UI
>>>>>>>>>>>>
>>>>>>>>>>>> On 15 January 2018 at 08:59, Gaurav Bapat <
>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hey Jon,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I have Storm UI and the logs are coming from firewalls,
>>>>>>>>>>>>> servers, etc from other machines(HP ArcSight Logger).
>>>>>>>>>>>>>
>>>>>>>>>>>>> I have attached the NiFi screenshots, my logs are coming but
>>>>>>>>>>>>> there is some error with Kafka and I am having issues with configuring
>>>>>>>>>>>>> Kafka broker
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 12 January 2018 at 18:14, Zeolla@GMail.com <
>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> In Ambari under storm you can find the UI under quick links
>>>>>>>>>>>>>> at the top.  That said, the issue seems to be upstream of Metron, in NiFi.
>>>>>>>>>>>>>> That is something I can't help with as much, but if you can share the
>>>>>>>>>>>>>> listensyslog processor config that would be a start.  Also, share the
>>>>>>>>>>>>>> config of the thing that is sending syslog as well (are these local syslog,
>>>>>>>>>>>>>> is that machine aggregating syslog from other machines, etc.).  Thanks,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <
>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I have created a Kafka topic "cef" but my ListenSyslog
>>>>>>>>>>>>>>> processor is not getting any logs.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Also, I checked using tcpdump -i and the logs are reaching my
>>>>>>>>>>>>>>> machine, but ListenSyslog is not receiving them
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On 12 January 2018 at 11:13, Gaurav Bapat <
>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [root@metron incubator-metron]#
>>>>>>>>>>>>>>>> ./metron-deployment/scripts/platform-info.sh
>>>>>>>>>>>>>>>> Metron 0.4.3
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> * master
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>>>>>>>>>>>>>>> Author: cstella <ce...@gmail.com>
>>>>>>>>>>>>>>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>     METRON-1379: Add an OBJECT_GET stellar function closes
>>>>>>>>>>>>>>>> apache/incubator-metron#880
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile |
>>>>>>>>>>>>>>>> 2 +-
>>>>>>>>>>>>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> ansible 2.0.0.2
>>>>>>>>>>>>>>>>   config file =
>>>>>>>>>>>>>>>>   configured module search path = Default w/o overrides
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> Vagrant 1.9.6
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> Python 2.7.5
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>>>>>>>>>>>>>>> 2015-11-10T22:11:47+05:30)
>>>>>>>>>>>>>>>> Maven home: /opt/maven/current
>>>>>>>>>>>>>>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>>>>>>>>>>>>>>> Java home: /opt/jdk1.8.0_151/jre
>>>>>>>>>>>>>>>> Default locale: en_US, platform encoding: UTF-8
>>>>>>>>>>>>>>>> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64",
>>>>>>>>>>>>>>>> arch: "amd64", family: "unix"
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> Docker version 1.12.6, build ec8512b/1.12.6
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> node
>>>>>>>>>>>>>>>> v8.9.3
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> npm
>>>>>>>>>>>>>>>> 5.5.1
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>>>>>>>>>>>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>>>>>>>>>>>>> This is free software; see the source for copying
>>>>>>>>>>>>>>>> conditions.  There is NO
>>>>>>>>>>>>>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A
>>>>>>>>>>>>>>>> PARTICULAR PURPOSE.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> Compiler is C++11 compliant
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan
>>>>>>>>>>>>>>>> 4 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> Total System Memory = 15773.3 MB
>>>>>>>>>>>>>>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>>>>>>>>>>>>>>> Processor Speed: 3320.875 MHz
>>>>>>>>>>>>>>>> Processor Speed: 3307.191 MHz
>>>>>>>>>>>>>>>> Processor Speed: 3376.699 MHz
>>>>>>>>>>>>>>>> Processor Speed: 3338.917 MHz
>>>>>>>>>>>>>>>> Total Physical Processors: 4
>>>>>>>>>>>>>>>> Total cores: 16
>>>>>>>>>>>>>>>> Disk information:
>>>>>>>>>>>>>>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>>>>>>>>>>>>>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>>>>>>>>>>>>>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>>>>>>>>>>>>>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>>>>>>>>>>>>>>> This CPU appears to support virtualization
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On 12 January 2018 at 09:25, Gaurav Bapat <
>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Hey Jon,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Appreciate your timely reply.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I have gone through your answer, but I still can't figure out
>>>>>>>>>>>>>>>>> how to do parsing/indexing in the Storm UI, as I can't find any option for
>>>>>>>>>>>>>>>>> it.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Is there any other UI to do parsing/indexing?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> So, you created a new cef topic, and set up the
>>>>>>>>>>>>>>>>>> appropriate parser config for it (if not, this
>>>>>>>>>>>>>>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>>>>>>>>>>>>>>> may be helpful)?  If so:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Here are some basic troubleshooting steps:
>>>>>>>>>>>>>>>>>> 1.  Validate that the logs are getting onto the
>>>>>>>>>>>>>>>>>> kafka topic that you are sending to.  If they aren't there, the problem is
>>>>>>>>>>>>>>>>>> upstream from Metron.
>>>>>>>>>>>>>>>>>> 2.  If they are getting onto the kafka topic they are
>>>>>>>>>>>>>>>>>> being directly sent to, check the indexing kafka topic for an enriched
>>>>>>>>>>>>>>>>>> version of those same logs.
>>>>>>>>>>>>>>>>>> 3.  Do a binary search of the various components involved
>>>>>>>>>>>>>>>>>> with ingest.
>>>>>>>>>>>>>>>>>>     a. If the logs are *not* on the indexing kafka
>>>>>>>>>>>>>>>>>> topic, check the enrichments topic for those logs.
>>>>>>>>>>>>>>>>>>     b. If the logs are *not* on the enrichments topic,
>>>>>>>>>>>>>>>>>> check the parser storm topology.
>>>>>>>>>>>>>>>>>>     c. If the logs are on the enrichments topic, but
>>>>>>>>>>>>>>>>>> *not* indexing, check the enrichments storm topology.
>>>>>>>>>>>>>>>>>>     d. If the logs are on the indexing but *not* Kibana,
>>>>>>>>>>>>>>>>>> check the indexing storm topic.
>>>>>>>>>>>>>>>>>>     e. If the logs are in on the indexing topic and
>>>>>>>>>>>>>>>>>> indexing storm topic is in good shape, check
>>>>>>>>>>>>>>>>>> elasticsearch directly.
>>>>>>>>>>>>>>>>>> 4.  You should have identified where the issue is at this
>>>>>>>>>>>>>>>>>> point.  Report back here with what you observed, any relevant error
>>>>>>>>>>>>>>>>>> messages, etc.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Side note:  We should document a decision tree for
>>>>>>>>>>>>>>>>>> troubleshooting data ingest.  It is fairly straightforward and makes me
>>>>>>>>>>>>>>>>>> wonder if we already have this somewhere and I'm not aware of it?  It would
>>>>>>>>>>>>>>>>>> also be a good place to put pointers to some common errors.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <
>>>>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Hello everyone, I have deployed Metron on a single node
>>>>>>>>>>>>>>>>>>> machine and I would like to know how do I get Syslogs from NiFi into Kibana
>>>>>>>>>>>>>>>>>>> dashboard?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I have created a Kafka topic by the name "cef" and I can
>>>>>>>>>>>>>>>>>>> see that the topic exists in
>>>>>>>>>>>>>>>>>>> Metron Configuration but I am unable to connect it with
>>>>>>>>>>>>>>>>>>> Kibana
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Need Help!!
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> With Regards
>>>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> With Regards
>>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> With Regards
>>>>>>> Farrukh Naveed Anjum
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
>
> --
> With Regards
> Farrukh Naveed Anjum
>

Re: Getting Syslogs to Metron

Posted by Farrukh Naveed Anjum <an...@gmail.com>.
Yes, do configure it as per the Metron reference use case.
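For reference, the use-case steps boil down to pushing the sensor config to Zookeeper and starting a parser topology for the topic. A minimal sketch follows; the paths, hosts, and ports are assumptions for a typical full-dev install (METRON_HOME under /usr/metron/0.4.3, Zookeeper on node1:2181, HDP Kafka on node1:6667):

```shell
# Hedged sketch: wire up a "cef" sensor per the reference use case.
# All paths/hosts below are assumptions for the full-dev image.
export METRON_HOME="${METRON_HOME:-/usr/metron/0.4.3}"
export ZOOKEEPER="${ZOOKEEPER:-node1:2181}"
export BROKERLIST="${BROKERLIST:-node1:6667}"

if [ -x "$METRON_HOME/bin/zk_load_configs.sh" ]; then
  # Push the sensor configs (including parsers/cef.json) to Zookeeper
  "$METRON_HOME/bin/zk_load_configs.sh" --mode PUSH \
    -i "$METRON_HOME/config/zookeeper" -z "$ZOOKEEPER"

  # Start the parser topology that reads from the "cef" Kafka topic
  "$METRON_HOME/bin/start_parser_topology.sh" \
    -k "$BROKERLIST" -z "$ZOOKEEPER" -s cef
else
  echo "Metron scripts not found; run this on the Metron VM" >&2
fi
```

Once the parser topology is up, messages on "cef" should flow through to the enrichments and indexing topics.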


-- 
With Regards
Farrukh Naveed Anjum

Re: Getting Syslogs to Metron

Posted by Gaurav Bapat <ga...@gmail.com>.
Hi Kyle,

I saw that I can ping from my OS to the VM and from the VM to the OS. It looks like this is some Kafka or Zookeeper environment-variable setup issue; do I need to configure that inside vagrant ssh?
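A quick way to rule the broker in or out is to check the topic from inside the VM (run `vagrant ssh` first). The sketch below assumes an HDP-style install, where the Kafka CLI lives under /usr/hdp/current/kafka-broker/bin and the broker listens on 6667 rather than the stock 9092:

```shell
# Hedged check: verify the "cef" topic exists and is receiving data.
# Host/port values are assumptions for the full-dev image.
KAFKA_BIN="/usr/hdp/current/kafka-broker/bin"
ZOOKEEPER="${ZOOKEEPER:-node1:2181}"
BROKERLIST="${BROKERLIST:-node1:6667}"

if [ -x "$KAFKA_BIN/kafka-topics.sh" ]; then
  # The "cef" topic should appear in this listing...
  "$KAFKA_BIN/kafka-topics.sh" --zookeeper "$ZOOKEEPER" --list

  # ...and this should print messages as NiFi delivers them
  "$KAFKA_BIN/kafka-console-consumer.sh" \
    --bootstrap-server "$BROKERLIST" --topic cef --from-beginning
else
  echo "Kafka CLI not found; run this inside the Metron VM" >&2
fi
```

If the consumer prints nothing while NiFi reports "failed while waiting for acks", the broker's advertised listener is usually the culprit.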


Re: Getting Syslogs to Metron

Posted by Gaurav Bapat <ga...@gmail.com>.
Hey Kyle,

I am running NiFi on localhost:8089, not through Ambari. I can ping node1 from my OS terminal, but I can't ping my OS terminal from node1. I have attached a few screenshots and the contents of /etc/hosts.
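On the host side, the usual fix is an /etc/hosts entry mapping node1 to the VM's private address. A diagnostic sketch, where 192.168.66.121 is the full-dev default and an assumption (confirm the real address with `vagrant ssh -c 'ip addr'`):

```shell
# Hedged host-side check: does node1 resolve, and does it answer?
NODE1_IP="${NODE1_IP:-192.168.66.121}"  # full-dev default; an assumption

if grep -q 'node1' /etc/hosts 2>/dev/null; then
  echo "node1 is already mapped in /etc/hosts"
else
  echo "add to /etc/hosts on the host machine: $NODE1_IP  node1"
fi

# Connectivity must work in both directions before NiFi can reach Kafka
ping -c 3 node1 2>/dev/null || echo "host -> guest ping failed"
# and from inside the VM:  ping -c 3 <your host machine's IP>
```

Ping working host-to-guest but not guest-to-host often points at the host firewall, which would also block Kafka acks coming back to NiFi.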

Thank You!

On 15 January 2018 at 20:04, Kyle Richardson <ky...@gmail.com>
wrote:

> It looks like your Nifi instance is running on your laptop/desktop (e.g.
> the VM host). My guess would be that name resolution or networking is not
> properly configured between the host and the guest preventing the data from
> getting from Nifi to Kafka. What's the contents of /etc/hosts on the VM
> host? Can you ping node1 from the VM host by name and by IP address?
>
> -Kyle
>
> On Mon, Jan 15, 2018 at 6:55 AM, Gaurav Bapat <ga...@gmail.com>
> wrote:
>
>> Failed while waiting for acks from Kafka is what I am getting in Kafka,
>> am I missing some configuration with Kafka?
>>
>> On 15 January 2018 at 16:50, Gaurav Bapat <ga...@gmail.com> wrote:
>>
>>> Hi Farrukh,
>>>
>>> I cant find any folder by my topic
>>>
>>> On 15 January 2018 at 16:33, Farrukh Naveed Anjum <
>>> anjum.farrukh@gmail.com> wrote:
>>>
>>>> Can you check /kafaka-logs on your VM box (It should have a folder
>>>> named your topic). Can you check if it is there ?
>>>>
>>>> On Mon, Jan 15, 2018 at 3:49 PM, Gaurav Bapat <ga...@gmail.com>
>>>> wrote:
>>>>
>>>>> I am not getting data into my Kafka topic
>>>>>
>>>>> I have used i5 4 Core Processor with 16 GB RAM and I have allocated 12
>>>>> GB RAM to my vagrant VM.
>>>>>
>>>>> I dont understand how to configure Kafka broker because it is giving
>>>>> me failed while waiting for acks to Kafka
>>>>>
>>>>>
>>>>>
>>>>> On 15 January 2018 at 16:10, Farrukh Naveed Anjum <
>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>
>>>>>> Can you tell me is your KAFKA Topic getting data ? What are you
>>>>>> machine specifications ?
>>>>>>
>>>>>>
>>>>>> On Mon, Jan 15, 2018 at 2:56 PM, Gaurav Bapat <ga...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Thanks Farrukh,
>>>>>>>
>>>>>>> I am not getting data in my kafka topic even after creating one, the
>>>>>>> issue seems to be with broker config, how to configure Kafka and Zookeeper
>>>>>>> port?
>>>>>>>
>>>>>>> On 15 January 2018 at 13:23, Farrukh Naveed Anjum <
>>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I had similar issue it turned out to be the issue in STROM
>>>>>>>>
>>>>>>>> No worker is assigned to togolgoy all you need is to add additional
>>>>>>>> port in
>>>>>>>>
>>>>>>>>  Ambari -> Storm -> Configs -> supervisor.slot.ports by assigning
>>>>>>>> an additional port to the list
>>>>>>>>
>>>>>>>>
>>>>>>>> https://community.hortonworks.com/questions/32499/no-workers
>>>>>>>> -in-storm-for-squid-topology.html
>>>>>>>>
>>>>>>>>
>>>>>>>> I had similar issue and finally got it fixed
>>>>>>>>
>>>>>>>> On Mon, Jan 15, 2018 at 8:45 AM, Gaurav Bapat <
>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Storm UI
>>>>>>>>>
>>>>>>>>> On 15 January 2018 at 08:59, Gaurav Bapat <ga...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Hey Jon,
>>>>>>>>>>
>>>>>>>>>> I have Storm UI and the logs are coming from firewalls, servers,
>>>>>>>>>> etc from other machines(HP ArcSight Logger).
>>>>>>>>>>
>>>>>>>>>> I have attached the NiFi screenshots, my logs are coming but
>>>>>>>>>> there is some error with Kafka and I am having issues with configuring
>>>>>>>>>> Kafka broker
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 12 January 2018 at 18:14, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> In Ambari under storm you can find the UI under quick links at
>>>>>>>>>>> the top.  That said, the issue seems to be upstream of Metron, in NiFi.
>>>>>>>>>>> That is something I can't help with as much, but if you can share the
>>>>>>>>>>> listensyslog processor config that would be a start.  Also, share the
>>>>>>>>>>> config of the thing that is sending syslog as well (are these local syslog,
>>>>>>>>>>> is that machine aggregating syslog from other machines, etc.).  Thanks,
>>>>>>>>>>>
>>>>>>>>>>> Jon
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <ga...@gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> I have created a Kafka topic "cef" but my Listen Syslogs is not
>>>>>>>>>>>> getting logs in the processor.
>>>>>>>>>>>>
>>>>>>>>>>>> Also I checked using tcpdump -i and it is getting logs in my
>>>>>>>>>>>> machine but ListenSyslogs is not getting the logs
>>>>>>>>>>>>
>>>>>>>>>>>> On 12 January 2018 at 11:13, Gaurav Bapat <
>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> [root@metron incubator-metron]# ./metron-deployment/scripts/platform-info.sh
>>>>>>>>>>>>> Metron 0.4.3
>>>>>>>>>>>>> --
>>>>>>>>>>>>> * master
>>>>>>>>>>>>> --
>>>>>>>>>>>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>>>>>>>>>>>> Author: cstella <ce...@gmail.com>
>>>>>>>>>>>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>>>>>>>>>>>
>>>>>>>>>>>>>     METRON-1379: Add an OBJECT_GET stellar function closes
>>>>>>>>>>>>> apache/incubator-metron#880
>>>>>>>>>>>>> --
>>>>>>>>>>>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile | 2
>>>>>>>>>>>>> +-
>>>>>>>>>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>>>>>> --
>>>>>>>>>>>>> ansible 2.0.0.2
>>>>>>>>>>>>>   config file =
>>>>>>>>>>>>>   configured module search path = Default w/o overrides
>>>>>>>>>>>>> --
>>>>>>>>>>>>> Vagrant 1.9.6
>>>>>>>>>>>>> --
>>>>>>>>>>>>> Python 2.7.5
>>>>>>>>>>>>> --
>>>>>>>>>>>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>>>>>>>>>>>> 2015-11-10T22:11:47+05:30)
>>>>>>>>>>>>> Maven home: /opt/maven/current
>>>>>>>>>>>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>>>>>>>>>>>> Java home: /opt/jdk1.8.0_151/jre
>>>>>>>>>>>>> Default locale: en_US, platform encoding: UTF-8
>>>>>>>>>>>>> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch:
>>>>>>>>>>>>> "amd64", family: "unix"
>>>>>>>>>>>>> --
>>>>>>>>>>>>> Docker version 1.12.6, build ec8512b/1.12.6
>>>>>>>>>>>>> --
>>>>>>>>>>>>> node
>>>>>>>>>>>>> v8.9.3
>>>>>>>>>>>>> --
>>>>>>>>>>>>> npm
>>>>>>>>>>>>> 5.5.1
>>>>>>>>>>>>> --
>>>>>>>>>>>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>>>>>>>>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>>>>>>>>>> This is free software; see the source for copying conditions.
>>>>>>>>>>>>> There is NO
>>>>>>>>>>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A
>>>>>>>>>>>>> PARTICULAR PURPOSE.
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>> Compiler is C++11 compliant
>>>>>>>>>>>>> --
>>>>>>>>>>>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4
>>>>>>>>>>>>> 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>>>>>>>>>>>> --
>>>>>>>>>>>>> Total System Memory = 15773.3 MB
>>>>>>>>>>>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>>>>>>>>>>>> Processor Speed: 3320.875 MHz
>>>>>>>>>>>>> Processor Speed: 3307.191 MHz
>>>>>>>>>>>>> Processor Speed: 3376.699 MHz
>>>>>>>>>>>>> Processor Speed: 3338.917 MHz
>>>>>>>>>>>>> Total Physical Processors: 4
>>>>>>>>>>>>> Total cores: 16
>>>>>>>>>>>>> Disk information:
>>>>>>>>>>>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>>>>>>>>>>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>>>>>>>>>>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>>>>>>>>>>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>>>>>>>>>>>> This CPU appears to support virtualization
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 12 January 2018 at 09:25, Gaurav Bapat <
>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hey Jon,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Appreciate your timely reply.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I went through your answer, but I still can't figure out how
>>>>>>>>>>>>>> to do parsing/indexing in the Storm UI, as I can't find any option for it.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Is there any other UI to do parsing/indexing?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <
>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> So, you created a new cef topic, and set up the appropriate
>>>>>>>>>>>>>>> parser config for it (if not, this
>>>>>>>>>>>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>>>>>>>>>>>> may be helpful)?  If so:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Here are some basic troubleshooting steps:
>>>>>>>>>>>>>>> 1.  Validate that the logs are getting onto the kafka topic
>>>>>>>>>>>>>>> that you are sending to.  If they aren't there, the problem is upstream
>>>>>>>>>>>>>>> from Metron.
>>>>>>>>>>>>>>> 2.  If they are getting onto the kafka topic they are being
>>>>>>>>>>>>>>> directly sent to, check the indexing kafka topic for an enriched version of
>>>>>>>>>>>>>>> those same logs.
>>>>>>>>>>>>>>> 3.  Do a binary search of the various components involved
>>>>>>>>>>>>>>> with ingest.
>>>>>>>>>>>>>>>     a. If the logs are *not* on the indexing kafka topic,
>>>>>>>>>>>>>>> check the enrichments topic for those logs.
>>>>>>>>>>>>>>>     b. If the logs are *not* on the enrichments topic,
>>>>>>>>>>>>>>> check the parser storm topology.
>>>>>>>>>>>>>>>     c. If the logs are on the enrichments topic, but *not*
>>>>>>>>>>>>>>> indexing, check the enrichments storm topology.
>>>>>>>>>>>>>>>     d. If the logs are on the indexing but *not* Kibana,
>>>>>>>>>>>>>>> check the indexing storm topic.
>>>>>>>>>>>>>>>     e. If the logs are in on the indexing topic and indexing
>>>>>>>>>>>>>>> storm topic is in good shape, check elasticsearch directly.
>>>>>>>>>>>>>>> 4.  You should have identified where the issue is at this
>>>>>>>>>>>>>>> point.  Report back here with what you observed, any relevant error
>>>>>>>>>>>>>>> messages, etc.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Side note:  We should document a decision tree for
>>>>>>>>>>>>>>> troubleshooting data ingest.  It is fairly straightforward and makes me
>>>>>>>>>>>>>>> wonder if we already have this somewhere and I'm not aware of it?  It would
>>>>>>>>>>>>>>> also be a good place to put pointers to some common errors.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <
>>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hello everyone, I have deployed Metron on a single node
>>>>>>>>>>>>>>>> machine and I would like to know how do I get Syslogs from NiFi into Kibana
>>>>>>>>>>>>>>>> dashboard?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I have created a Kafka topic by the name "cef" and I can
>>>>>>>>>>>>>>>> see that the topic exists in
>>>>>>>>>>>>>>>> Metron Configuration but I am unable to connect it with
>>>>>>>>>>>>>>>> Kibana
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Need Help!!
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>
>>>>>>>>>>> Jon
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> With Regards
>>>>>>>> Farrukh Naveed Anjum
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> With Regards
>>>>>> Farrukh Naveed Anjum
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> With Regards
>>>> Farrukh Naveed Anjum
>>>>
>>>
>>>
>>
>

Re: Getting Syslogs to Metron

Posted by Kyle Richardson <ky...@gmail.com>.
It looks like your Nifi instance is running on your laptop/desktop (e.g.
the VM host). My guess would be that name resolution or networking is not
properly configured between the host and the guest preventing the data from
getting from Nifi to Kafka. What's the contents of /etc/hosts on the VM
host? Can you ping node1 from the VM host by name and by IP address?
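
For reference, a minimal sketch of what that would look like, assuming the
full-dev default guest IP of 192.168.66.121 (an assumption here; substitute
whatever IP your VM actually has):

```shell
# /etc/hosts on the VM host -- hypothetical entry, adjust the IP
# 192.168.66.121   node1

# quick checks from the VM host
ping -c 1 node1
ping -c 1 192.168.66.121
```

If the check by name fails but the check by IP succeeds, it is purely a
name-resolution problem and the hosts entry above should fix it.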

-Kyle

On Mon, Jan 15, 2018 at 6:55 AM, Gaurav Bapat <ga...@gmail.com> wrote:

> "Failed while waiting for acks from Kafka" is what I am getting in Kafka; am
> I missing some Kafka configuration?
>
> On 15 January 2018 at 16:50, Gaurav Bapat <ga...@gmail.com> wrote:
>
>> Hi Farrukh,
>>
>> I can't find any folder for my topic
>>
>> On 15 January 2018 at 16:33, Farrukh Naveed Anjum <
>> anjum.farrukh@gmail.com> wrote:
>>
>>> Can you check /kafka-logs on your VM box (it should have a folder named
>>> after your topic)? Can you check if it is there?
>>>
>>> On Mon, Jan 15, 2018 at 3:49 PM, Gaurav Bapat <ga...@gmail.com>
>>> wrote:
>>>
>>>> I am not getting data into my Kafka topic
>>>>
>>>> I have used an i5 4-core processor with 16 GB RAM, and I have allocated
>>>> 12 GB RAM to my vagrant VM.
>>>>
>>>> I don't understand how to configure the Kafka broker, because it is
>>>> giving me "failed while waiting for acks" to Kafka
>>>>
>>>>
>>>>
>>>> On 15 January 2018 at 16:10, Farrukh Naveed Anjum <
>>>> anjum.farrukh@gmail.com> wrote:
>>>>
>>>>> Can you tell me, is your Kafka topic getting data? What are your
>>>>> machine specifications?
>>>>>
>>>>>
>>>>> On Mon, Jan 15, 2018 at 2:56 PM, Gaurav Bapat <ga...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Thanks Farrukh,
>>>>>>
>>>>>> I am not getting data in my kafka topic even after creating one; the
>>>>>> issue seems to be with the broker config. How do I configure the Kafka
>>>>>> and Zookeeper ports?
>>>>>>
>>>>>> On 15 January 2018 at 13:23, Farrukh Naveed Anjum <
>>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I had a similar issue; it turned out to be an issue in Storm.
>>>>>>>
>>>>>>> No worker was assigned to the topology. All you need to do is add an
>>>>>>> additional port in
>>>>>>>
>>>>>>>  Ambari -> Storm -> Configs -> supervisor.slot.ports by assigning an
>>>>>>> additional port to the list
>>>>>>>
>>>>>>>
>>>>>>> https://community.hortonworks.com/questions/32499/no-workers
>>>>>>> -in-storm-for-squid-topology.html
>>>>>>>
>>>>>>>
>>>>>>> I had similar issue and finally got it fixed
>>>>>>>
>>>>>>> On Mon, Jan 15, 2018 at 8:45 AM, Gaurav Bapat <gauravb3007@gmail.com
>>>>>>> > wrote:
>>>>>>>
>>>>>>>> Storm UI
>>>>>>>>
>>>>>>>> On 15 January 2018 at 08:59, Gaurav Bapat <ga...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hey Jon,
>>>>>>>>>
>>>>>>>>> I have Storm UI and the logs are coming from firewalls, servers,
>>>>>>>>> etc from other machines(HP ArcSight Logger).
>>>>>>>>>
>>>>>>>>> I have attached the NiFi screenshots, my logs are coming but there
>>>>>>>>> is some error with Kafka and I am having issues with configuring Kafka
>>>>>>>>> broker
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 12 January 2018 at 18:14, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> In Ambari under storm you can find the UI under quick links at
>>>>>>>>>> the top.  That said, the issue seems to be upstream of Metron, in NiFi.
>>>>>>>>>> That is something I can't help with as much, but if you can share the
>>>>>>>>>> listensyslog processor config that would be a start.  Also, share the
>>>>>>>>>> config of the thing that is sending syslog as well (are these local syslog,
>>>>>>>>>> is that machine aggregating syslog from other machines, etc.).  Thanks,
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <ga...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> I have created a Kafka topic "cef" but my Listen Syslogs is not
>>>>>>>>>>> getting logs in the processor.
>>>>>>>>>>>
>>>>>>>>>>> Also I checked using tcpdump -i and it is getting logs in my
>>>>>>>>>>> machine but ListenSyslogs is not getting the logs
>>>>>>>>>>>
>>>>>>>>>>> On 12 January 2018 at 11:13, Gaurav Bapat <gauravb3007@gmail.com
>>>>>>>>>>> > wrote:
>>>>>>>>>>>
>>>>>>>>>>>> [root@metron incubator-metron]# ./metron-deployment/scripts/platform-info.sh
>>>>>>>>>>>> Metron 0.4.3
>>>>>>>>>>>> --
>>>>>>>>>>>> * master
>>>>>>>>>>>> --
>>>>>>>>>>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>>>>>>>>>>> Author: cstella <ce...@gmail.com>
>>>>>>>>>>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>>>>>>>>>>
>>>>>>>>>>>>     METRON-1379: Add an OBJECT_GET stellar function closes
>>>>>>>>>>>> apache/incubator-metron#880
>>>>>>>>>>>> --
>>>>>>>>>>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile | 2 +-
>>>>>>>>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>>>>> --
>>>>>>>>>>>> ansible 2.0.0.2
>>>>>>>>>>>>   config file =
>>>>>>>>>>>>   configured module search path = Default w/o overrides
>>>>>>>>>>>> --
>>>>>>>>>>>> Vagrant 1.9.6
>>>>>>>>>>>> --
>>>>>>>>>>>> Python 2.7.5
>>>>>>>>>>>> --
>>>>>>>>>>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>>>>>>>>>>> 2015-11-10T22:11:47+05:30)
>>>>>>>>>>>> Maven home: /opt/maven/current
>>>>>>>>>>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>>>>>>>>>>> Java home: /opt/jdk1.8.0_151/jre
>>>>>>>>>>>> Default locale: en_US, platform encoding: UTF-8
>>>>>>>>>>>> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch:
>>>>>>>>>>>> "amd64", family: "unix"
>>>>>>>>>>>> --
>>>>>>>>>>>> Docker version 1.12.6, build ec8512b/1.12.6
>>>>>>>>>>>> --
>>>>>>>>>>>> node
>>>>>>>>>>>> v8.9.3
>>>>>>>>>>>> --
>>>>>>>>>>>> npm
>>>>>>>>>>>> 5.5.1
>>>>>>>>>>>> --
>>>>>>>>>>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>>>>>>>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>>>>>>>>> This is free software; see the source for copying conditions.
>>>>>>>>>>>> There is NO
>>>>>>>>>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A
>>>>>>>>>>>> PARTICULAR PURPOSE.
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Compiler is C++11 compliant
>>>>>>>>>>>> --
>>>>>>>>>>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4
>>>>>>>>>>>> 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>>>>>>>>>>> --
>>>>>>>>>>>> Total System Memory = 15773.3 MB
>>>>>>>>>>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>>>>>>>>>>> Processor Speed: 3320.875 MHz
>>>>>>>>>>>> Processor Speed: 3307.191 MHz
>>>>>>>>>>>> Processor Speed: 3376.699 MHz
>>>>>>>>>>>> Processor Speed: 3338.917 MHz
>>>>>>>>>>>> Total Physical Processors: 4
>>>>>>>>>>>> Total cores: 16
>>>>>>>>>>>> Disk information:
>>>>>>>>>>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>>>>>>>>>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>>>>>>>>>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>>>>>>>>>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>>>>>>>>>>> This CPU appears to support virtualization
>>>>>>>>>>>>
>>>>>>>>>>>> On 12 January 2018 at 09:25, Gaurav Bapat <
>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hey Jon,
>>>>>>>>>>>>>
>>>>>>>>>>>>> Appreciate your timely reply.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I went through your answer, but I still can't figure out how to
>>>>>>>>>>>>> do parsing/indexing in the Storm UI, as I can't find any option for it.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Is there any other UI to do parsing/indexing?
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <
>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> So, you created a new cef topic, and set up the appropriate
>>>>>>>>>>>>>> parser config for it (if not, this
>>>>>>>>>>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>>>>>>>>>>> may be helpful)?  If so:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Here are some basic troubleshooting steps:
>>>>>>>>>>>>>> 1.  Validate that the logs are getting onto the kafka topic
>>>>>>>>>>>>>> that you are sending to.  If they aren't there, the problem is upstream
>>>>>>>>>>>>>> from Metron.
>>>>>>>>>>>>>> 2.  If they are getting onto the kafka topic they are being
>>>>>>>>>>>>>> directly sent to, check the indexing kafka topic for an enriched version of
>>>>>>>>>>>>>> those same logs.
>>>>>>>>>>>>>> 3.  Do a binary search of the various components involved
>>>>>>>>>>>>>> with ingest.
>>>>>>>>>>>>>>     a. If the logs are *not* on the indexing kafka topic,
>>>>>>>>>>>>>> check the enrichments topic for those logs.
>>>>>>>>>>>>>>     b. If the logs are *not* on the enrichments topic, check
>>>>>>>>>>>>>> the parser storm topology.
>>>>>>>>>>>>>>     c. If the logs are on the enrichments topic, but *not*
>>>>>>>>>>>>>> indexing, check the enrichments storm topology.
>>>>>>>>>>>>>>     d. If the logs are on the indexing but *not* Kibana,
>>>>>>>>>>>>>> check the indexing storm topic.
>>>>>>>>>>>>>>     e. If the logs are in on the indexing topic and indexing
>>>>>>>>>>>>>> storm topic is in good shape, check elasticsearch directly.
>>>>>>>>>>>>>> 4.  You should have identified where the issue is at this
>>>>>>>>>>>>>> point.  Report back here with what you observed, any relevant error
>>>>>>>>>>>>>> messages, etc.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Side note:  We should document a decision tree for
>>>>>>>>>>>>>> troubleshooting data ingest.  It is fairly straightforward and makes me
>>>>>>>>>>>>>> wonder if we already have this somewhere and I'm not aware of it?  It would
>>>>>>>>>>>>>> also be a good place to put pointers to some common errors.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <
>>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hello everyone, I have deployed Metron on a single node
>>>>>>>>>>>>>>> machine and I would like to know how do I get Syslogs from NiFi into Kibana
>>>>>>>>>>>>>>> dashboard?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I have created a Kafka topic by the name "cef" and I can see
>>>>>>>>>>>>>>> that the topic exists in
>>>>>>>>>>>>>>> Metron Configuration but I am unable to connect it with
>>>>>>>>>>>>>>> Kibana
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Need Help!!
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> With Regards
>>>>>>> Farrukh Naveed Anjum
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> With Regards
>>>>> Farrukh Naveed Anjum
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> With Regards
>>> Farrukh Naveed Anjum
>>>
>>
>>
>

Re: Getting Syslogs to Metron

Posted by Gaurav Bapat <ga...@gmail.com>.
"Failed while waiting for acks from Kafka" is what I am getting in Kafka; am
I missing some Kafka configuration?
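
One way to narrow that down is to take NiFi out of the loop and exercise the
broker with the Kafka console tools. A sketch, assuming the HDP default
install path and broker address (node1:6667, both assumptions; adjust to
your environment):

```shell
# confirm the "cef" topic exists
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh \
    --zookeeper node1:2181 --list

# push a test message; if this also times out waiting for acks,
# the problem is broker config/networking rather than NiFi
echo "test message" | \
    /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
    --broker-list node1:6667 --topic cef

# read it back
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
    --zookeeper node1:2181 --topic cef --from-beginning
```

If the console producer fails the same way, check that the broker's
advertised listener is reachable from wherever NiFi runs.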

On 15 January 2018 at 16:50, Gaurav Bapat <ga...@gmail.com> wrote:

> Hi Farrukh,
>
> I can't find any folder for my topic
>
> On 15 January 2018 at 16:33, Farrukh Naveed Anjum <anjum.farrukh@gmail.com
> > wrote:
>
>> Can you check /kafka-logs on your VM box (it should have a folder named
>> after your topic)? Can you check if it is there?
>>
>> On Mon, Jan 15, 2018 at 3:49 PM, Gaurav Bapat <ga...@gmail.com>
>> wrote:
>>
>>> I am not getting data into my Kafka topic
>>>
>>> I have used an i5 4-core processor with 16 GB RAM, and I have allocated
>>> 12 GB RAM to my vagrant VM.
>>>
>>> I don't understand how to configure the Kafka broker, because it is
>>> giving me "failed while waiting for acks" to Kafka
>>>
>>>
>>>
>>> On 15 January 2018 at 16:10, Farrukh Naveed Anjum <
>>> anjum.farrukh@gmail.com> wrote:
>>>
>>>> Can you tell me, is your Kafka topic getting data? What are your
>>>> machine specifications?
>>>>
>>>>
>>>> On Mon, Jan 15, 2018 at 2:56 PM, Gaurav Bapat <ga...@gmail.com>
>>>> wrote:
>>>>
>>>>> Thanks Farrukh,
>>>>>
>>>>> I am not getting data in my kafka topic even after creating one; the
>>>>> issue seems to be with the broker config. How do I configure the Kafka
>>>>> and Zookeeper ports?
>>>>>
>>>>> On 15 January 2018 at 13:23, Farrukh Naveed Anjum <
>>>>> anjum.farrukh@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I had a similar issue; it turned out to be an issue in Storm.
>>>>>>
>>>>>> No worker was assigned to the topology. All you need to do is add an
>>>>>> additional port in
>>>>>>
>>>>>>  Ambari -> Storm -> Configs -> supervisor.slot.ports by assigning an
>>>>>> additional port to the list
>>>>>>
>>>>>>
>>>>>> https://community.hortonworks.com/questions/32499/no-workers
>>>>>> -in-storm-for-squid-topology.html
>>>>>>
>>>>>>
>>>>>> I had similar issue and finally got it fixed
>>>>>>
>>>>>> On Mon, Jan 15, 2018 at 8:45 AM, Gaurav Bapat <ga...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Storm UI
>>>>>>>
>>>>>>> On 15 January 2018 at 08:59, Gaurav Bapat <ga...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hey Jon,
>>>>>>>>
>>>>>>>> I have Storm UI and the logs are coming from firewalls, servers,
>>>>>>>> etc from other machines(HP ArcSight Logger).
>>>>>>>>
>>>>>>>> I have attached the NiFi screenshots, my logs are coming but there
>>>>>>>> is some error with Kafka and I am having issues with configuring Kafka
>>>>>>>> broker
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 12 January 2018 at 18:14, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> In Ambari under storm you can find the UI under quick links at the
>>>>>>>>> top.  That said, the issue seems to be upstream of Metron, in NiFi.  That
>>>>>>>>> is something I can't help with as much, but if you can share the
>>>>>>>>> listensyslog processor config that would be a start.  Also, share the
>>>>>>>>> config of the thing that is sending syslog as well (are these local syslog,
>>>>>>>>> is that machine aggregating syslog from other machines, etc.).  Thanks,
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <ga...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> I have created a Kafka topic "cef" but my Listen Syslogs is not
>>>>>>>>>> getting logs in the processor.
>>>>>>>>>>
>>>>>>>>>> Also I checked using tcpdump -i and it is getting logs in my
>>>>>>>>>> machine but ListenSyslogs is not getting the logs
>>>>>>>>>>
>>>>>>>>>> On 12 January 2018 at 11:13, Gaurav Bapat <ga...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> [root@metron incubator-metron]# ./metron-deployment/scripts/platform-info.sh
>>>>>>>>>>> Metron 0.4.3
>>>>>>>>>>> --
>>>>>>>>>>> * master
>>>>>>>>>>> --
>>>>>>>>>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>>>>>>>>>> Author: cstella <ce...@gmail.com>
>>>>>>>>>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>>>>>>>>>
>>>>>>>>>>>     METRON-1379: Add an OBJECT_GET stellar function closes
>>>>>>>>>>> apache/incubator-metron#880
>>>>>>>>>>> --
>>>>>>>>>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile | 2 +-
>>>>>>>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>>>> --
>>>>>>>>>>> ansible 2.0.0.2
>>>>>>>>>>>   config file =
>>>>>>>>>>>   configured module search path = Default w/o overrides
>>>>>>>>>>> --
>>>>>>>>>>> Vagrant 1.9.6
>>>>>>>>>>> --
>>>>>>>>>>> Python 2.7.5
>>>>>>>>>>> --
>>>>>>>>>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>>>>>>>>>> 2015-11-10T22:11:47+05:30)
>>>>>>>>>>> Maven home: /opt/maven/current
>>>>>>>>>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>>>>>>>>>> Java home: /opt/jdk1.8.0_151/jre
>>>>>>>>>>> Default locale: en_US, platform encoding: UTF-8
>>>>>>>>>>> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch:
>>>>>>>>>>> "amd64", family: "unix"
>>>>>>>>>>> --
>>>>>>>>>>> Docker version 1.12.6, build ec8512b/1.12.6
>>>>>>>>>>> --
>>>>>>>>>>> node
>>>>>>>>>>> v8.9.3
>>>>>>>>>>> --
>>>>>>>>>>> npm
>>>>>>>>>>> 5.5.1
>>>>>>>>>>> --
>>>>>>>>>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>>>>>>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>>>>>>>> This is free software; see the source for copying conditions.
>>>>>>>>>>> There is NO
>>>>>>>>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A
>>>>>>>>>>> PARTICULAR PURPOSE.
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Compiler is C++11 compliant
>>>>>>>>>>> --
>>>>>>>>>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4
>>>>>>>>>>> 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>>>>>>>>>> --
>>>>>>>>>>> Total System Memory = 15773.3 MB
>>>>>>>>>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>>>>>>>>>> Processor Speed: 3320.875 MHz
>>>>>>>>>>> Processor Speed: 3307.191 MHz
>>>>>>>>>>> Processor Speed: 3376.699 MHz
>>>>>>>>>>> Processor Speed: 3338.917 MHz
>>>>>>>>>>> Total Physical Processors: 4
>>>>>>>>>>> Total cores: 16
>>>>>>>>>>> Disk information:
>>>>>>>>>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>>>>>>>>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>>>>>>>>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>>>>>>>>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>>>>>>>>>> This CPU appears to support virtualization
>>>>>>>>>>>
>>>>>>>>>>> On 12 January 2018 at 09:25, Gaurav Bapat <gauravb3007@gmail.com
>>>>>>>>>>> > wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hey Jon,
>>>>>>>>>>>>
>>>>>>>>>>>> Appreciate your timely reply.
>>>>>>>>>>>>
>>>>>>>>>>>> I went through your answer, but I still can't figure out how to
>>>>>>>>>>>> do parsing/indexing in the Storm UI, as I can't find any option for it.
>>>>>>>>>>>>
>>>>>>>>>>>> Is there any other UI to do parsing/indexing?
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <zeolla@gmail.com
>>>>>>>>>>>> > wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> So, you created a new cef topic, and set up the appropriate
>>>>>>>>>>>>> parser config for it (if not, this
>>>>>>>>>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>>>>>>>>>> may be helpful)?  If so:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Here are some basic troubleshooting steps:
>>>>>>>>>>>>> 1.  Validate that the logs are getting onto the kafka topic
>>>>>>>>>>>>> that you are sending to.  If they aren't there, the problem is upstream
>>>>>>>>>>>>> from Metron.
>>>>>>>>>>>>> 2.  If they are getting onto the kafka topic they are being
>>>>>>>>>>>>> directly sent to, check the indexing kafka topic for an enriched version of
>>>>>>>>>>>>> those same logs.
>>>>>>>>>>>>> 3.  Do a binary search of the various components involved with
>>>>>>>>>>>>> ingest.
>>>>>>>>>>>>>     a. If the logs are *not* on the indexing kafka topic,
>>>>>>>>>>>>> check the enrichments topic for those logs.
>>>>>>>>>>>>>     b. If the logs are *not* on the enrichments topic, check
>>>>>>>>>>>>> the parser storm topology.
>>>>>>>>>>>>>     c. If the logs are on the enrichments topic, but *not*
>>>>>>>>>>>>> indexing, check the enrichments storm topology.
>>>>>>>>>>>>>     d. If the logs are on the indexing but *not* Kibana,
>>>>>>>>>>>>> check the indexing storm topic.
>>>>>>>>>>>>>     e. If the logs are in on the indexing topic and indexing
>>>>>>>>>>>>> storm topic is in good shape, check elasticsearch directly.
>>>>>>>>>>>>> 4.  You should have identified where the issue is at this
>>>>>>>>>>>>> point.  Report back here with what you observed, any relevant error
>>>>>>>>>>>>> messages, etc.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Side note:  We should document a decision tree for
>>>>>>>>>>>>> troubleshooting data ingest.  It is fairly straightforward and makes me
>>>>>>>>>>>>> wonder if we already have this somewhere and I'm not aware of it?  It would
>>>>>>>>>>>>> also be a good place to put pointers to some common errors.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <
>>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hello everyone, I have deployed Metron on a single node
>>>>>>>>>>>>>> machine and I would like to know how do I get Syslogs from NiFi into Kibana
>>>>>>>>>>>>>> dashboard?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I have created a Kafka topic by the name "cef" and I can see
>>>>>>>>>>>>>> that the topic exists in
>>>>>>>>>>>>>> Metron Configuration but I am unable to connect it with Kibana
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Need Help!!
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> With Regards
>>>>>> Farrukh Naveed Anjum
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> With Regards
>>>> Farrukh Naveed Anjum
>>>>
>>>
>>>
>>
>>
>> --
>> With Regards
>> Farrukh Naveed Anjum
>>
>
>

Re: Getting Syslogs to Metron

Posted by Gaurav Bapat <ga...@gmail.com>.
Hi Farrukh,

I can't find any folder for my topic
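
For context, Kafka creates one directory per topic partition, named
<topic>-<partition> (e.g. cef-0), under the broker's log.dirs setting. A
small sketch of the check, with /kafka-logs as the assumed HDP default log
dir (adjust to whatever log.dirs is set to on your broker):

```shell
# List the partition directories Kafka created for a topic.
# $1 = log dir, $2 = topic name; prints nothing if none exist.
list_topic_dirs() {
    ls -d "$1/$2"-[0-9]* 2>/dev/null
}

# e.g. list_topic_dirs /kafka-logs cef
# no output at all means the broker never created the topic's partitions
```

An empty result here, combined with the ack timeouts, would point at the
broker never accepting the topic writes in the first place.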

On 15 January 2018 at 16:33, Farrukh Naveed Anjum <an...@gmail.com>
wrote:

> Can you check /kafka-logs on your VM box (it should have a folder named
> after your topic)? Can you check if it is there?
>
> On Mon, Jan 15, 2018 at 3:49 PM, Gaurav Bapat <ga...@gmail.com>
> wrote:
>
>> I am not getting data into my Kafka topic
>>
>> I have used an i5 4-core processor with 16 GB RAM, and I have allocated
>> 12 GB RAM to my vagrant VM.
>>
>> I don't understand how to configure the Kafka broker, because it is
>> giving me "failed while waiting for acks" to Kafka
>>
>>
>>
>> On 15 January 2018 at 16:10, Farrukh Naveed Anjum <
>> anjum.farrukh@gmail.com> wrote:
>>
>>> Can you tell me, is your Kafka topic getting data? What are your
>>> machine specifications?
>>>
>>>
>>> On Mon, Jan 15, 2018 at 2:56 PM, Gaurav Bapat <ga...@gmail.com>
>>> wrote:
>>>
>>>> Thanks Farrukh,
>>>>
>>>> I am not getting data in my kafka topic even after creating one; the
>>>> issue seems to be with the broker config. How do I configure the Kafka
>>>> and Zookeeper ports?
>>>>
>>>> On 15 January 2018 at 13:23, Farrukh Naveed Anjum <
>>>> anjum.farrukh@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I had a similar issue; it turned out to be an issue in Storm.
>>>>>
>>>>> No worker was assigned to the topology. All you need to do is add an
>>>>> additional port in
>>>>>
>>>>>  Ambari -> Storm -> Configs -> supervisor.slot.ports by assigning an
>>>>> additional port to the list
>>>>>
>>>>>
>>>>> https://community.hortonworks.com/questions/32499/no-workers
>>>>> -in-storm-for-squid-topology.html
>>>>>
>>>>>
>>>>> I had similar issue and finally got it fixed
>>>>>
>>>>> On Mon, Jan 15, 2018 at 8:45 AM, Gaurav Bapat <ga...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Storm UI
>>>>>>
>>>>>> On 15 January 2018 at 08:59, Gaurav Bapat <ga...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hey Jon,
>>>>>>>
>>>>>>> I have Storm UI and the logs are coming from firewalls, servers, etc
>>>>>>> from other machines(HP ArcSight Logger).
>>>>>>>
>>>>>>> I have attached the NiFi screenshots, my logs are coming but there
>>>>>>> is some error with Kafka and I am having issues with configuring Kafka
>>>>>>> broker
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 12 January 2018 at 18:14, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> In Ambari under storm you can find the UI under quick links at the
>>>>>>>> top.  That said, the issue seems to be upstream of Metron, in NiFi.  That
>>>>>>>> is something I can't help with as much, but if you can share the
>>>>>>>> listensyslog processor config that would be a start.  Also, share the
>>>>>>>> config of the thing that is sending syslog as well (are these local syslog,
>>>>>>>> is that machine aggregating syslog from other machines, etc.).  Thanks,
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <ga...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> I have created a Kafka topic "cef" but my Listen Syslogs is not
>>>>>>>>> getting logs in the processor.
>>>>>>>>>
>>>>>>>>> Also I checked using tcpdump -i and it is getting logs in my
>>>>>>>>> machine but ListenSyslogs is not getting the logs
>>>>>>>>>
>>>>>>>>> On 12 January 2018 at 11:13, Gaurav Bapat <ga...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> [root@metron incubator-metron]# ./metron-deployment/scripts/platform-info.sh
>>>>>>>>>> Metron 0.4.3
>>>>>>>>>> --
>>>>>>>>>> * master
>>>>>>>>>> --
>>>>>>>>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>>>>>>>>> Author: cstella <ce...@gmail.com>
>>>>>>>>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>>>>>>>>
>>>>>>>>>>     METRON-1379: Add an OBJECT_GET stellar function closes
>>>>>>>>>> apache/incubator-metron#880
>>>>>>>>>> --
>>>>>>>>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile | 2 +-
>>>>>>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>>> --
>>>>>>>>>> ansible 2.0.0.2
>>>>>>>>>>   config file =
>>>>>>>>>>   configured module search path = Default w/o overrides
>>>>>>>>>> --
>>>>>>>>>> Vagrant 1.9.6
>>>>>>>>>> --
>>>>>>>>>> Python 2.7.5
>>>>>>>>>> --
>>>>>>>>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>>>>>>>>> 2015-11-10T22:11:47+05:30)
>>>>>>>>>> Maven home: /opt/maven/current
>>>>>>>>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>>>>>>>>> Java home: /opt/jdk1.8.0_151/jre
>>>>>>>>>> Default locale: en_US, platform encoding: UTF-8
>>>>>>>>>> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch:
>>>>>>>>>> "amd64", family: "unix"
>>>>>>>>>> --
>>>>>>>>>> Docker version 1.12.6, build ec8512b/1.12.6
>>>>>>>>>> --
>>>>>>>>>> node
>>>>>>>>>> v8.9.3
>>>>>>>>>> --
>>>>>>>>>> npm
>>>>>>>>>> 5.5.1
>>>>>>>>>> --
>>>>>>>>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>>>>>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>>>>>>> This is free software; see the source for copying conditions.
>>>>>>>>>> There is NO
>>>>>>>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A
>>>>>>>>>> PARTICULAR PURPOSE.
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Compiler is C++11 compliant
>>>>>>>>>> --
>>>>>>>>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4
>>>>>>>>>> 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>>>>>>>>> --
>>>>>>>>>> Total System Memory = 15773.3 MB
>>>>>>>>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>>>>>>>>> Processor Speed: 3320.875 MHz
>>>>>>>>>> Processor Speed: 3307.191 MHz
>>>>>>>>>> Processor Speed: 3376.699 MHz
>>>>>>>>>> Processor Speed: 3338.917 MHz
>>>>>>>>>> Total Physical Processors: 4
>>>>>>>>>> Total cores: 16
>>>>>>>>>> Disk information:
>>>>>>>>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>>>>>>>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>>>>>>>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>>>>>>>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>>>>>>>>> This CPU appears to support virtualization
>>>>>>>>>>
>>>>>>>>>> On 12 January 2018 at 09:25, Gaurav Bapat <ga...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hey Jon,
>>>>>>>>>>>
>>>>>>>>>>> Appreciate your timely reply.
>>>>>>>>>>>
>>>>>>>>>>> I've gone through your answer, but I still can't figure out how to
>>>>>>>>>>> do parsing/indexing in the Storm UI, as I can't find any option for it.
>>>>>>>>>>>
>>>>>>>>>>> Is there any other UI to do parsing/indexing?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> So, you created a new cef topic, and set up the appropriate
>>>>>>>>>>>> parser config for it (if not, this
>>>>>>>>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>>>>>>>>> may be helpful)?  If so:
>>>>>>>>>>>>
>>>>>>>>>>>> Here are some basic troubleshooting steps:
>>>>>>>>>>>> 1.  Validate that the logs are getting onto the kafka topic
>>>>>>>>>>>> that you are sending to.  If they aren't there, the problem is upstream
>>>>>>>>>>>> from Metron.
>>>>>>>>>>>> 2.  If they are getting onto the kafka topic they are being
>>>>>>>>>>>> directly sent to, check the indexing kafka topic for an enriched version of
>>>>>>>>>>>> those same logs.
>>>>>>>>>>>> 3.  Do a binary search of the various components involved with
>>>>>>>>>>>> ingest.
>>>>>>>>>>>>     a. If the logs are *not* on the indexing kafka topic,
>>>>>>>>>>>> check the enrichments topic for those logs.
>>>>>>>>>>>>     b. If the logs are *not* on the enrichments topic, check
>>>>>>>>>>>> the parser storm topology.
>>>>>>>>>>>>     c. If the logs are on the enrichments topic, but *not*
>>>>>>>>>>>> indexing, check the enrichments storm topology.
>>>>>>>>>>>>     d. If the logs are on the indexing but *not* Kibana, check
>>>>>>>>>>>> the indexing storm topic.
>>>>>>>>>>>>     e. If the logs are on the indexing topic and indexing
>>>>>>>>>>>> storm topic is in good shape, check elasticsearch directly.
>>>>>>>>>>>> 4.  You should have identified where the issue is at this
>>>>>>>>>>>> point.  Report back here with what you observed, any relevant error
>>>>>>>>>>>> messages, etc.
>>>>>>>>>>>>
>>>>>>>>>>>> Side note:  We should document a decision tree for
>>>>>>>>>>>> troubleshooting data ingest.  It is fairly straightforward and makes me
>>>>>>>>>>>> wonder if we already have this somewhere and I'm not aware of it?  It would
>>>>>>>>>>>> also be a good place to put pointers to some common errors.
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <
>>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hello everyone, I have deployed Metron on a single node
>>>>>>>>>>>>> machine and I would like to know how do I get Syslogs from NiFi into Kibana
>>>>>>>>>>>>> dashboard?
>>>>>>>>>>>>>
>>>>>>>>>>>>> I have created a Kafka topic by the name "cef" and I can see
>>>>>>>>>>>>> that the topic exists in
>>>>>>>>>>>>> Metron Configuration but I am unable to connect it with Kibana
>>>>>>>>>>>>>
>>>>>>>>>>>>> Need Help!!
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>> --
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> With Regards
>>>>> Farrukh Naveed Anjum
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> With Regards
>>> Farrukh Naveed Anjum
>>>
>>
>>
>
>
> --
> With Regards
> Farrukh Naveed Anjum
>

Re: Getting Syslogs to Metron

Posted by Farrukh Naveed Anjum <an...@gmail.com>.
Can you check /kafka-logs on your VM box (it should have a folder named
after your topic). Can you check if it is there?
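As a concrete check (paths below assume a typical HDP-based Metron install; adjust them to your own layout), something like this would confirm both the on-disk topic directory and the topic registration:

```shell
# Look for a partition directory for the "cef" topic in the broker's log dir
ls /kafka-logs | grep cef

# Cross-check against the topics registered in Zookeeper
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh \
  --zookeeper localhost:2181 --list
```

If the directory exists but stays empty, the producer side (NiFi) is the place to look.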

On Mon, Jan 15, 2018 at 3:49 PM, Gaurav Bapat <ga...@gmail.com> wrote:

> I am not getting data into my Kafka topic
>
> I am using an i5 4-core processor with 16 GB RAM, and I have allocated 12 GB
> of RAM to my Vagrant VM.
>
> I don't understand how to configure the Kafka broker because it is giving me
> "failed while waiting for acks" to Kafka
>
>
>
> On 15 January 2018 at 16:10, Farrukh Naveed Anjum <anjum.farrukh@gmail.com
> > wrote:
>
>> Can you tell me, is your Kafka topic getting data? What are your machine
>> specifications?
>>
>>
>> On Mon, Jan 15, 2018 at 2:56 PM, Gaurav Bapat <ga...@gmail.com>
>> wrote:
>>
>>> Thanks Farrukh,
>>>
>>> I am not getting data in my Kafka topic even after creating one; the
>>> issue seems to be with the broker config. How do I configure the Kafka and
>>> Zookeeper ports?
>>>
>>> On 15 January 2018 at 13:23, Farrukh Naveed Anjum <
>>> anjum.farrukh@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> I had a similar issue; it turned out to be an issue in Storm
>>>>
>>>> No worker is assigned to the topology; all you need is to add an
>>>> additional port in
>>>>
>>>>  Ambari -> Storm -> Configs -> supervisor.slot.ports by assigning an
>>>> additional port to the list
>>>>
>>>>
>>>> https://community.hortonworks.com/questions/32499/no-workers
>>>> -in-storm-for-squid-topology.html
>>>>
>>>>
>>>> I had a similar issue and finally got it fixed
>>>>
>>>> On Mon, Jan 15, 2018 at 8:45 AM, Gaurav Bapat <ga...@gmail.com>
>>>> wrote:
>>>>
>>>>> Storm UI
>>>>>
>>>>> On 15 January 2018 at 08:59, Gaurav Bapat <ga...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hey Jon,
>>>>>>
>>>>>> I have Storm UI and the logs are coming from firewalls, servers, etc
>>>>>> from other machines (HP ArcSight Logger).
>>>>>>
>>>>>> I have attached the NiFi screenshots, my logs are coming but there is
>>>>>> some error with Kafka and I am having issues with configuring Kafka broker
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 12 January 2018 at 18:14, Zeolla@GMail.com <ze...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> In Ambari under storm you can find the UI under quick links at the
>>>>>>> top.  That said, the issue seems to be upstream of Metron, in NiFi.  That
>>>>>>> is something I can't help with as much, but if you can share the
>>>>>>> listensyslog processor config that would be a start.  Also, share the
>>>>>>> config of the thing that is sending syslog as well (are these local syslog,
>>>>>>> is that machine aggregating syslog from other machines, etc.).  Thanks,
>>>>>>>
>>>>>>> Jon
>>>>>>>
>>>>>>> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <ga...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> I have created a Kafka topic "cef", but my ListenSyslog processor is
>>>>>>>> not getting any logs.
>>>>>>>>
>>>>>>>> Also, I checked using tcpdump -i and the logs are reaching my
>>>>>>>> machine, but ListenSyslog is not receiving them
>>>>>>>>
>>>>>>>> On 12 January 2018 at 11:13, Gaurav Bapat <ga...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> [root@metron incubator-metron]# ./metron-deployment/scripts/pl
>>>>>>>>> atform-info.sh
>>>>>>>>> Metron 0.4.3
>>>>>>>>> --
>>>>>>>>> * master
>>>>>>>>> --
>>>>>>>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>>>>>>>> Author: cstella <ce...@gmail.com>
>>>>>>>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>>>>>>>
>>>>>>>>>     METRON-1379: Add an OBJECT_GET stellar function closes
>>>>>>>>> apache/incubator-metron#880
>>>>>>>>> --
>>>>>>>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile | 2 +-
>>>>>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>>> --
>>>>>>>>> ansible 2.0.0.2
>>>>>>>>>   config file =
>>>>>>>>>   configured module search path = Default w/o overrides
>>>>>>>>> --
>>>>>>>>> Vagrant 1.9.6
>>>>>>>>> --
>>>>>>>>> Python 2.7.5
>>>>>>>>> --
>>>>>>>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>>>>>>>> 2015-11-10T22:11:47+05:30)
>>>>>>>>> Maven home: /opt/maven/current
>>>>>>>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>>>>>>>> Java home: /opt/jdk1.8.0_151/jre
>>>>>>>>> Default locale: en_US, platform encoding: UTF-8
>>>>>>>>> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch:
>>>>>>>>> "amd64", family: "unix"
>>>>>>>>> --
>>>>>>>>> Docker version 1.12.6, build ec8512b/1.12.6
>>>>>>>>> --
>>>>>>>>> node
>>>>>>>>> v8.9.3
>>>>>>>>> --
>>>>>>>>> npm
>>>>>>>>> 5.5.1
>>>>>>>>> --
>>>>>>>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>>>>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>>>>>> This is free software; see the source for copying conditions.
>>>>>>>>> There is NO
>>>>>>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
>>>>>>>>> PURPOSE.
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Compiler is C++11 compliant
>>>>>>>>> --
>>>>>>>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4
>>>>>>>>> 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>>>>>>>> --
>>>>>>>>> Total System Memory = 15773.3 MB
>>>>>>>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>>>>>>>> Processor Speed: 3320.875 MHz
>>>>>>>>> Processor Speed: 3307.191 MHz
>>>>>>>>> Processor Speed: 3376.699 MHz
>>>>>>>>> Processor Speed: 3338.917 MHz
>>>>>>>>> Total Physical Processors: 4
>>>>>>>>> Total cores: 16
>>>>>>>>> Disk information:
>>>>>>>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>>>>>>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>>>>>>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>>>>>>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>>>>>>>> This CPU appears to support virtualization
>>>>>>>>>
>>>>>>>>> On 12 January 2018 at 09:25, Gaurav Bapat <ga...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Hey Jon,
>>>>>>>>>>
>>>>>>>>>> Appreciate your timely reply.
>>>>>>>>>>
>>>>>>>>>> I've gone through your answer, but I still can't figure out how to
>>>>>>>>>> do parsing/indexing in the Storm UI, as I can't find any option for it.
>>>>>>>>>>
>>>>>>>>>> Is there any other UI to do parsing/indexing?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> So, you created a new cef topic, and set up the appropriate
>>>>>>>>>>> parser config for it (if not, this
>>>>>>>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>>>>>>>> may be helpful)?  If so:
>>>>>>>>>>>
>>>>>>>>>>> Here are some basic troubleshooting steps:
>>>>>>>>>>> 1.  Validate that the logs are getting onto the kafka topic that
>>>>>>>>>>> you are sending to.  If they aren't there, the problem is upstream from
>>>>>>>>>>> Metron.
>>>>>>>>>>> 2.  If they are getting onto the kafka topic they are being
>>>>>>>>>>> directly sent to, check the indexing kafka topic for an enriched version of
>>>>>>>>>>> those same logs.
>>>>>>>>>>> 3.  Do a binary search of the various components involved with
>>>>>>>>>>> ingest.
>>>>>>>>>>>     a. If the logs are *not* on the indexing kafka topic, check
>>>>>>>>>>> the enrichments topic for those logs.
>>>>>>>>>>>     b. If the logs are *not* on the enrichments topic, check
>>>>>>>>>>> the parser storm topology.
>>>>>>>>>>>     c. If the logs are on the enrichments topic, but *not*
>>>>>>>>>>> indexing, check the enrichments storm topology.
>>>>>>>>>>>     d. If the logs are on the indexing but *not* Kibana, check
>>>>>>>>>>> the indexing storm topic.
>>>>>>>>>>>     e. If the logs are on the indexing topic and indexing
>>>>>>>>>>> storm topic is in good shape, check elasticsearch directly.
>>>>>>>>>>> 4.  You should have identified where the issue is at this
>>>>>>>>>>> point.  Report back here with what you observed, any relevant error
>>>>>>>>>>> messages, etc.
>>>>>>>>>>>
>>>>>>>>>>> Side note:  We should document a decision tree for
>>>>>>>>>>> troubleshooting data ingest.  It is fairly straightforward and makes me
>>>>>>>>>>> wonder if we already have this somewhere and I'm not aware of it?  It would
>>>>>>>>>>> also be a good place to put pointers to some common errors.
>>>>>>>>>>>
>>>>>>>>>>> Jon
>>>>>>>>>>>
>>>>>>>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <
>>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hello everyone, I have deployed Metron on a single node machine
>>>>>>>>>>>> and I would like to know how do I get Syslogs from NiFi into Kibana
>>>>>>>>>>>> dashboard?
>>>>>>>>>>>>
>>>>>>>>>>>> I have created a Kafka topic by the name "cef" and I can see
>>>>>>>>>>>> that the topic exists in
>>>>>>>>>>>> Metron Configuration but I am unable to connect it with Kibana
>>>>>>>>>>>>
>>>>>>>>>>>> Need Help!!
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>>
>>>>>>>>>>> Jon
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>> --
>>>>>>>
>>>>>>> Jon
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> With Regards
>>>> Farrukh Naveed Anjum
>>>>
>>>
>>>
>>
>>
>> --
>> With Regards
>> Farrukh Naveed Anjum
>>
>
>


-- 
With Regards
Farrukh Naveed Anjum

Re: Getting Syslogs to Metron

Posted by Gaurav Bapat <ga...@gmail.com>.
I am not getting data into my Kafka topic

I am using an i5 4-core processor with 16 GB RAM, and I have allocated 12 GB
of RAM to my Vagrant VM.

I don't understand how to configure the Kafka broker because it is giving me
"failed while waiting for acks" to Kafka
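For what it's worth, a "failed while waiting for acks" error from NiFi's PublishKafka processor usually means the producer can reach Zookeeper but not the broker address it was handed back. One thing worth verifying under Ambari -> Kafka -> Configs is the listener setup; a sketch of what it might look like (the hostname below is taken from the platform-info output earlier in this thread and is only an assumption; substitute whatever name the NiFi host can actually resolve):

```properties
# server.properties (managed by Ambari) -- the advertised listener must be a
# hostname or IP reachable from the machine running NiFi, not "localhost":
listeners=PLAINTEXT://0.0.0.0:6667
advertised.listeners=PLAINTEXT://metron.com:6667
```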



On 15 January 2018 at 16:10, Farrukh Naveed Anjum <an...@gmail.com>
wrote:

> Can you tell me, is your Kafka topic getting data? What are your machine
> specifications?
>
>
> On Mon, Jan 15, 2018 at 2:56 PM, Gaurav Bapat <ga...@gmail.com>
> wrote:
>
>> Thanks Farrukh,
>>
>> I am not getting data in my Kafka topic even after creating one; the
>> issue seems to be with the broker config. How do I configure the Kafka and
>> Zookeeper ports?
>>
>> On 15 January 2018 at 13:23, Farrukh Naveed Anjum <
>> anjum.farrukh@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I had a similar issue; it turned out to be an issue in Storm
>>>
>>> No worker is assigned to the topology; all you need is to add an additional
>>> port in
>>>
>>>  Ambari -> Storm -> Configs -> supervisor.slot.ports by assigning an
>>> additional port to the list
>>>
>>>
>>> https://community.hortonworks.com/questions/32499/no-workers
>>> -in-storm-for-squid-topology.html
>>>
>>>
>>> I had a similar issue and finally got it fixed
>>>
>>> On Mon, Jan 15, 2018 at 8:45 AM, Gaurav Bapat <ga...@gmail.com>
>>> wrote:
>>>
>>>> Storm UI
>>>>
>>>> On 15 January 2018 at 08:59, Gaurav Bapat <ga...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hey Jon,
>>>>>
>>>>> I have Storm UI and the logs are coming from firewalls, servers, etc
>>>>> from other machines (HP ArcSight Logger).
>>>>>
>>>>> I have attached the NiFi screenshots, my logs are coming but there is
>>>>> some error with Kafka and I am having issues with configuring Kafka broker
>>>>>
>>>>>
>>>>>
>>>>> On 12 January 2018 at 18:14, Zeolla@GMail.com <ze...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> In Ambari under storm you can find the UI under quick links at the
>>>>>> top.  That said, the issue seems to be upstream of Metron, in NiFi.  That
>>>>>> is something I can't help with as much, but if you can share the
>>>>>> listensyslog processor config that would be a start.  Also, share the
>>>>>> config of the thing that is sending syslog as well (are these local syslog,
>>>>>> is that machine aggregating syslog from other machines, etc.).  Thanks,
>>>>>>
>>>>>> Jon
>>>>>>
>>>>>> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <ga...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> I have created a Kafka topic "cef", but my ListenSyslog processor is
>>>>>>> not getting any logs.
>>>>>>>
>>>>>>> Also, I checked using tcpdump -i and the logs are reaching my machine,
>>>>>>> but ListenSyslog is not receiving them
>>>>>>>
>>>>>>> On 12 January 2018 at 11:13, Gaurav Bapat <ga...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> [root@metron incubator-metron]# ./metron-deployment/scripts/pl
>>>>>>>> atform-info.sh
>>>>>>>> Metron 0.4.3
>>>>>>>> --
>>>>>>>> * master
>>>>>>>> --
>>>>>>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>>>>>>> Author: cstella <ce...@gmail.com>
>>>>>>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>>>>>>
>>>>>>>>     METRON-1379: Add an OBJECT_GET stellar function closes
>>>>>>>> apache/incubator-metron#880
>>>>>>>> --
>>>>>>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile | 2 +-
>>>>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>> --
>>>>>>>> ansible 2.0.0.2
>>>>>>>>   config file =
>>>>>>>>   configured module search path = Default w/o overrides
>>>>>>>> --
>>>>>>>> Vagrant 1.9.6
>>>>>>>> --
>>>>>>>> Python 2.7.5
>>>>>>>> --
>>>>>>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>>>>>>> 2015-11-10T22:11:47+05:30)
>>>>>>>> Maven home: /opt/maven/current
>>>>>>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>>>>>>> Java home: /opt/jdk1.8.0_151/jre
>>>>>>>> Default locale: en_US, platform encoding: UTF-8
>>>>>>>> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch:
>>>>>>>> "amd64", family: "unix"
>>>>>>>> --
>>>>>>>> Docker version 1.12.6, build ec8512b/1.12.6
>>>>>>>> --
>>>>>>>> node
>>>>>>>> v8.9.3
>>>>>>>> --
>>>>>>>> npm
>>>>>>>> 5.5.1
>>>>>>>> --
>>>>>>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>>>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>>>>> This is free software; see the source for copying conditions.
>>>>>>>> There is NO
>>>>>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
>>>>>>>> PURPOSE.
>>>>>>>>
>>>>>>>> --
>>>>>>>> Compiler is C++11 compliant
>>>>>>>> --
>>>>>>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4
>>>>>>>> 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>>>>>>> --
>>>>>>>> Total System Memory = 15773.3 MB
>>>>>>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>>>>>>> Processor Speed: 3320.875 MHz
>>>>>>>> Processor Speed: 3307.191 MHz
>>>>>>>> Processor Speed: 3376.699 MHz
>>>>>>>> Processor Speed: 3338.917 MHz
>>>>>>>> Total Physical Processors: 4
>>>>>>>> Total cores: 16
>>>>>>>> Disk information:
>>>>>>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>>>>>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>>>>>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>>>>>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>>>>>>> This CPU appears to support virtualization
>>>>>>>>
>>>>>>>> On 12 January 2018 at 09:25, Gaurav Bapat <ga...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hey Jon,
>>>>>>>>>
>>>>>>>>> Appreciate your timely reply.
>>>>>>>>>
>>>>>>>>> I've gone through your answer, but I still can't figure out how to
>>>>>>>>> do parsing/indexing in the Storm UI, as I can't find any option for it.
>>>>>>>>>
>>>>>>>>> Is there any other UI to do parsing/indexing?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> So, you created a new cef topic, and set up the appropriate
>>>>>>>>>> parser config for it (if not, this
>>>>>>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>>>>>>> may be helpful)?  If so:
>>>>>>>>>>
>>>>>>>>>> Here are some basic troubleshooting steps:
>>>>>>>>>> 1.  Validate that the logs are getting onto the kafka topic that
>>>>>>>>>> you are sending to.  If they aren't there, the problem is upstream from
>>>>>>>>>> Metron.
>>>>>>>>>> 2.  If they are getting onto the kafka topic they are being
>>>>>>>>>> directly sent to, check the indexing kafka topic for an enriched version of
>>>>>>>>>> those same logs.
>>>>>>>>>> 3.  Do a binary search of the various components involved with
>>>>>>>>>> ingest.
>>>>>>>>>>     a. If the logs are *not* on the indexing kafka topic, check
>>>>>>>>>> the enrichments topic for those logs.
>>>>>>>>>>     b. If the logs are *not* on the enrichments topic, check the
>>>>>>>>>> parser storm topology.
>>>>>>>>>>     c. If the logs are on the enrichments topic, but *not*
>>>>>>>>>> indexing, check the enrichments storm topology.
>>>>>>>>>>     d. If the logs are on the indexing but *not* Kibana, check
>>>>>>>>>> the indexing storm topic.
>>>>>>>>>>     e. If the logs are on the indexing topic and indexing
>>>>>>>>>> storm topic is in good shape, check elasticsearch directly.
>>>>>>>>>> 4.  You should have identified where the issue is at this point.
>>>>>>>>>> Report back here with what you observed, any relevant error messages, etc.
>>>>>>>>>>
>>>>>>>>>> Side note:  We should document a decision tree for
>>>>>>>>>> troubleshooting data ingest.  It is fairly straightforward and makes me
>>>>>>>>>> wonder if we already have this somewhere and I'm not aware of it?  It would
>>>>>>>>>> also be a good place to put pointers to some common errors.
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <
>>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hello everyone, I have deployed Metron on a single node machine
>>>>>>>>>>> and I would like to know how do I get Syslogs from NiFi into Kibana
>>>>>>>>>>> dashboard?
>>>>>>>>>>>
>>>>>>>>>>> I have created a Kafka topic by the name "cef" and I can see
>>>>>>>>>>> that the topic exists in
>>>>>>>>>>> Metron Configuration but I am unable to connect it with Kibana
>>>>>>>>>>>
>>>>>>>>>>> Need Help!!
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>> --
>>>>>>
>>>>>> Jon
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> With Regards
>>> Farrukh Naveed Anjum
>>>
>>
>>
>
>
> --
> With Regards
> Farrukh Naveed Anjum
>

Re: Getting Syslogs to Metron

Posted by Farrukh Naveed Anjum <an...@gmail.com>.
Can you tell me, is your Kafka topic getting data? What are your machine
specifications?
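One quick way to answer that question yourself (the path below assumes an HDP-style install; adjust for your box) is to consume the topic directly:

```shell
# Print anything that has ever landed on the "cef" topic; Ctrl-C to stop.
# If nothing shows up, the problem is upstream of Kafka (NiFi / network),
# not in Metron's Storm topologies.
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
  --zookeeper localhost:2181 \
  --topic cef --from-beginning
```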


On Mon, Jan 15, 2018 at 2:56 PM, Gaurav Bapat <ga...@gmail.com> wrote:

> Thanks Farrukh,
>
> I am not getting data in my Kafka topic even after creating one; the issue
> seems to be with the broker config. How do I configure the Kafka and Zookeeper ports?
>
> On 15 January 2018 at 13:23, Farrukh Naveed Anjum <anjum.farrukh@gmail.com
> > wrote:
>
>> Hi,
>>
>> I had a similar issue; it turned out to be an issue in Storm
>>
>> No worker is assigned to the topology; all you need is to add an additional
>> port in
>>
>>  Ambari -> Storm -> Configs -> supervisor.slot.ports by assigning an
>> additional port to the list
>>
>>
>> https://community.hortonworks.com/questions/32499/no-workers
>> -in-storm-for-squid-topology.html
>>
>>
>> I had a similar issue and finally got it fixed
>>
>> On Mon, Jan 15, 2018 at 8:45 AM, Gaurav Bapat <ga...@gmail.com>
>> wrote:
>>
>>> Storm UI
>>>
>>> On 15 January 2018 at 08:59, Gaurav Bapat <ga...@gmail.com> wrote:
>>>
>>>> Hey Jon,
>>>>
>>>> I have Storm UI and the logs are coming from firewalls, servers, etc
>>>> from other machines (HP ArcSight Logger).
>>>>
>>>> I have attached the NiFi screenshots, my logs are coming but there is
>>>> some error with Kafka and I am having issues with configuring Kafka broker
>>>>
>>>>
>>>>
>>>> On 12 January 2018 at 18:14, Zeolla@GMail.com <ze...@gmail.com> wrote:
>>>>
>>>>> In Ambari under storm you can find the UI under quick links at the
>>>>> top.  That said, the issue seems to be upstream of Metron, in NiFi.  That
>>>>> is something I can't help with as much, but if you can share the
>>>>> listensyslog processor config that would be a start.  Also, share the
>>>>> config of the thing that is sending syslog as well (are these local syslog,
>>>>> is that machine aggregating syslog from other machines, etc.).  Thanks,
>>>>>
>>>>> Jon
>>>>>
>>>>> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <ga...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> I have created a Kafka topic "cef", but my ListenSyslog processor is
>>>>>> not getting any logs.
>>>>>>
>>>>>> Also, I checked using tcpdump -i and the logs are reaching my machine,
>>>>>> but ListenSyslog is not receiving them
>>>>>>
>>>>>> On 12 January 2018 at 11:13, Gaurav Bapat <ga...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> [root@metron incubator-metron]# ./metron-deployment/scripts/pl
>>>>>>> atform-info.sh
>>>>>>> Metron 0.4.3
>>>>>>> --
>>>>>>> * master
>>>>>>> --
>>>>>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>>>>>> Author: cstella <ce...@gmail.com>
>>>>>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>>>>>
>>>>>>>     METRON-1379: Add an OBJECT_GET stellar function closes
>>>>>>> apache/incubator-metron#880
>>>>>>> --
>>>>>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile | 2 +-
>>>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>> --
>>>>>>> ansible 2.0.0.2
>>>>>>>   config file =
>>>>>>>   configured module search path = Default w/o overrides
>>>>>>> --
>>>>>>> Vagrant 1.9.6
>>>>>>> --
>>>>>>> Python 2.7.5
>>>>>>> --
>>>>>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>>>>>> 2015-11-10T22:11:47+05:30)
>>>>>>> Maven home: /opt/maven/current
>>>>>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>>>>>> Java home: /opt/jdk1.8.0_151/jre
>>>>>>> Default locale: en_US, platform encoding: UTF-8
>>>>>>> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch:
>>>>>>> "amd64", family: "unix"
>>>>>>> --
>>>>>>> Docker version 1.12.6, build ec8512b/1.12.6
>>>>>>> --
>>>>>>> node
>>>>>>> v8.9.3
>>>>>>> --
>>>>>>> npm
>>>>>>> 5.5.1
>>>>>>> --
>>>>>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>>>> This is free software; see the source for copying conditions.  There
>>>>>>> is NO
>>>>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
>>>>>>> PURPOSE.
>>>>>>>
>>>>>>> --
>>>>>>> Compiler is C++11 compliant
>>>>>>> --
>>>>>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4
>>>>>>> 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>>>>>> --
>>>>>>> Total System Memory = 15773.3 MB
>>>>>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>>>>>> Processor Speed: 3320.875 MHz
>>>>>>> Processor Speed: 3307.191 MHz
>>>>>>> Processor Speed: 3376.699 MHz
>>>>>>> Processor Speed: 3338.917 MHz
>>>>>>> Total Physical Processors: 4
>>>>>>> Total cores: 16
>>>>>>> Disk information:
>>>>>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>>>>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>>>>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>>>>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>>>>>> This CPU appears to support virtualization
>>>>>>>
>>>>>>> On 12 January 2018 at 09:25, Gaurav Bapat <ga...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hey Jon,
>>>>>>>>
>>>>>>>> Appreciate your timely reply.
>>>>>>>>
>>>>>>>> I've gone through your answer, but I still can't figure out how to do
>>>>>>>> parsing/indexing in the Storm UI, as I can't find any option for it.
>>>>>>>>
>>>>>>>> Is there any other UI to do parsing/indexing?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> So, you created a new cef topic, and set up the appropriate parser
>>>>>>>>> config for it (if not, this
>>>>>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>>>>>> may be helpful)?  If so:
>>>>>>>>>
>>>>>>>>> Here are some basic troubleshooting steps:
>>>>>>>>> 1.  Validate that the logs are getting onto the kafka topic that
>>>>>>>>> you are sending to.  If they aren't there, the problem is upstream from
>>>>>>>>> Metron.
>>>>>>>>> 2.  If they are getting onto the kafka topic they are being
>>>>>>>>> directly sent to, check the indexing kafka topic for an enriched version of
>>>>>>>>> those same logs.
>>>>>>>>> 3.  Do a binary search of the various components involved with
>>>>>>>>> ingest.
>>>>>>>>>     a. If the logs are *not* on the indexing kafka topic, check
>>>>>>>>> the enrichments topic for those logs.
>>>>>>>>>     b. If the logs are *not* on the enrichments topic, check the
>>>>>>>>> parser storm topology.
>>>>>>>>>     c. If the logs are on the enrichments topic, but *not*
>>>>>>>>> indexing, check the enrichments storm topology.
>>>>>>>>>     d. If the logs are on the indexing but *not* Kibana, check
>>>>>>>>> the indexing storm topic.
>>>>>>>>>     e. If the logs are on the indexing topic and indexing storm
>>>>>>>>> topic is in good shape, check elasticsearch directly.
>>>>>>>>> 4.  You should have identified where the issue is at this point.
>>>>>>>>> Report back here with what you observed, any relevant error messages, etc.
>>>>>>>>>
>>>>>>>>> Side note:  We should document a decision tree for troubleshooting
>>>>>>>>> data ingest.  It is fairly straightforward and makes me wonder if we
>>>>>>>>> already have this somewhere and I'm not aware of it?  It would also be a
>>>>>>>>> good place to put pointers to some common errors.
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <
>>>>>>>>> gauravb3007@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hello everyone, I have deployed Metron on a single node machine
>>>>>>>>>> and I would like to know how do I get Syslogs from NiFi into Kibana
>>>>>>>>>> dashboard?
>>>>>>>>>>
>>>>>>>>>> I have created a Kafka topic by the name "cef" and I can see that
>>>>>>>>>> the topic exists in
>>>>>>>>>> Metron Configuration but I am unable to connect it with Kibana
>>>>>>>>>>
>>>>>>>>>> Need Help!!
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>> --
>>>>>
>>>>> Jon
>>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> With Regards
>> Farrukh Naveed Anjum
>>
>
>


-- 
With Regards
Farrukh Naveed Anjum

Re: Getting Syslogs to Metron

Posted by Gaurav Bapat <ga...@gmail.com>.
Thanks Farrukh,

I am not getting data in my Kafka topic even after creating one; the issue
seems to be with the broker config. How do I configure the Kafka and Zookeeper ports?
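On a default HDP deployment the Kafka broker listens on 6667 and Zookeeper on 2181, and both are set through Ambari rather than edited by hand. A sketch of how you might verify them from inside the VM (commands and paths assume a stock install):

```shell
# Confirm the broker and Zookeeper processes are listening on expected ports
netstat -tlnp | grep -E ':6667|:2181'

# Try producing a test message straight to the topic, bypassing NiFi
echo 'hello metron' | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
  --broker-list localhost:6667 --topic cef
```

If the hand-produced message shows up in the topic but NiFi's messages do not, the broker config is fine and the problem is in the PublishKafka processor settings.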

On 15 January 2018 at 13:23, Farrukh Naveed Anjum <an...@gmail.com>
wrote:

> Hi,
>
> I had a similar issue; it turned out to be an issue in Storm
>
> No worker is assigned to the topology; all you need is to add an additional port in
>
>  Ambari -> Storm -> Configs -> supervisor.slot.ports by assigning an
> additional port to the list
>
>
> https://community.hortonworks.com/questions/32499/no-
> workers-in-storm-for-squid-topology.html
>
>
> I had a similar issue and finally got it fixed.
>
> On Mon, Jan 15, 2018 at 8:45 AM, Gaurav Bapat <ga...@gmail.com>
> wrote:
>
>> Storm UI
>>
>> On 15 January 2018 at 08:59, Gaurav Bapat <ga...@gmail.com> wrote:
>>
>>> Hey Jon,
>>>
>>> I have the Storm UI, and the logs are coming from firewalls, servers,
>>> etc. from other machines (HP ArcSight Logger).
>>>
>>> I have attached the NiFi screenshots, my logs are coming but there is
>>> some error with Kafka and I am having issues with configuring Kafka broker
>>>
>>>
>>>
>>> On 12 January 2018 at 18:14, Zeolla@GMail.com <ze...@gmail.com> wrote:
>>>
>>>> In Ambari under storm you can find the UI under quick links at the
>>>> top.  That said, the issue seems to be upstream of Metron, in NiFi.  That
>>>> is something I can't help with as much, but if you can share the
>>>> listensyslog processor config that would be a start.  Also, share the
>>>> config of the thing that is sending syslog as well (are these local syslog,
>>>> is that machine aggregating syslog from other machines, etc.).  Thanks,
>>>>
>>>> Jon
>>>>
>>>> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <ga...@gmail.com> wrote:
>>>>
>>>>> I have created a Kafka topic "cef", but my ListenSyslog processor is
>>>>> not getting any logs.
>>>>>
>>>>> I also checked using tcpdump -i, and the logs are reaching my machine,
>>>>> but ListenSyslog is not picking them up.
>>>>>
>>>>> On 12 January 2018 at 11:13, Gaurav Bapat <ga...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> [root@metron incubator-metron]# ./metron-deployment/scripts/pl
>>>>>> atform-info.sh
>>>>>> Metron 0.4.3
>>>>>> --
>>>>>> * master
>>>>>> --
>>>>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>>>>> Author: cstella <ce...@gmail.com>
>>>>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>>>>
>>>>>>     METRON-1379: Add an OBJECT_GET stellar function closes
>>>>>> apache/incubator-metron#880
>>>>>> --
>>>>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile | 2 +-
>>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>> --
>>>>>> ansible 2.0.0.2
>>>>>>   config file =
>>>>>>   configured module search path = Default w/o overrides
>>>>>> --
>>>>>> Vagrant 1.9.6
>>>>>> --
>>>>>> Python 2.7.5
>>>>>> --
>>>>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>>>>> 2015-11-10T22:11:47+05:30)
>>>>>> Maven home: /opt/maven/current
>>>>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>>>>> Java home: /opt/jdk1.8.0_151/jre
>>>>>> Default locale: en_US, platform encoding: UTF-8
>>>>>> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch:
>>>>>> "amd64", family: "unix"
>>>>>> --
>>>>>> Docker version 1.12.6, build ec8512b/1.12.6
>>>>>> --
>>>>>> node
>>>>>> v8.9.3
>>>>>> --
>>>>>> npm
>>>>>> 5.5.1
>>>>>> --
>>>>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>>> This is free software; see the source for copying conditions.  There
>>>>>> is NO
>>>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
>>>>>> PURPOSE.
>>>>>>
>>>>>> --
>>>>>> Compiler is C++11 compliant
>>>>>> --
>>>>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4
>>>>>> 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>>>>> --
>>>>>> Total System Memory = 15773.3 MB
>>>>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>>>>> Processor Speed: 3320.875 MHz
>>>>>> Processor Speed: 3307.191 MHz
>>>>>> Processor Speed: 3376.699 MHz
>>>>>> Processor Speed: 3338.917 MHz
>>>>>> Total Physical Processors: 4
>>>>>> Total cores: 16
>>>>>> Disk information:
>>>>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>>>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>>>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>>>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>>>>> This CPU appears to support virtualization
>>>>>>
>>>>>> On 12 January 2018 at 09:25, Gaurav Bapat <ga...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hey Jon,
>>>>>>>
>>>>>>> Appreciate your timely reply.
>>>>>>>
>>>>>>> I have gone through your answer but still can't figure out how to do
>>>>>>> parsing/indexing in the Storm UI, as I can't find any option for it.
>>>>>>>
>>>>>>> Is there any other UI to do parsing/indexing?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> So, you created a new cef topic, and set up the appropriate parser
>>>>>>>> config for it (if not, this
>>>>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>>>>> may be helpful)?  If so:
>>>>>>>>
>>>>>>>> Here are some basic troubleshooting steps:
>>>>>>>> 1.  Validate that the logs are getting onto the kafka topic that
>>>>>>>> you are sending to.  If they aren't there, the problem is upstream from
>>>>>>>> Metron.
>>>>>>>> 2.  If they are getting onto the kafka topic they are being
>>>>>>>> directly sent to, check the indexing kafka topic for an enriched version of
>>>>>>>> those same logs.
>>>>>>>> 3.  Do a binary search of the various components involved with
>>>>>>>> ingest.
>>>>>>>>     a. If the logs are *not* on the indexing kafka topic, check
>>>>>>>> the enrichments topic for those logs.
>>>>>>>>     b. If the logs are *not* on the enrichments topic, check the
>>>>>>>> parser storm topology.
>>>>>>>>     c. If the logs are on the enrichments topic, but *not*
>>>>>>>> indexing, check the enrichments storm topology.
>>>>>>>>     d. If the logs are on the indexing topic but *not* in Kibana,
>>>>>>>> check the indexing storm topology.
>>>>>>>>     e. If the logs are on the indexing topic and the indexing storm
>>>>>>>> topology is in good shape, check elasticsearch directly.
>>>>>>>> 4.  You should have identified where the issue is at this point.
>>>>>>>> Report back here with what you observed, any relevant error messages, etc.
>>>>>>>>
>>>>>>>> Side note:  We should document a decision tree for troubleshooting
>>>>>>>> data ingest.  It is fairly straightforward and makes me wonder if we
>>>>>>>> already have this somewhere and I'm not aware of it?  It would also be a
>>>>>>>> good place to put pointers to some common errors.
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <ga...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hello everyone, I have deployed Metron on a single node machine
>>>>>>>>> and I would like to know how do I get Syslogs from NiFi into Kibana
>>>>>>>>> dashboard?
>>>>>>>>>
>>>>>>>>> I have created a Kafka topic by the name "cef" and I can see that
>>>>>>>>> the topic exists in
>>>>>>>>> Metron Configuration but I am unable to connect it with Kibana
>>>>>>>>>
>>>>>>>>> Need Help!!
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>> --
>>>>
>>>> Jon
>>>>
>>>
>>>
>>
>
>
> --
> With Regards
> Farrukh Naveed Anjum
>

Re: Getting Syslogs to Metron

Posted by Farrukh Naveed Anjum <an...@gmail.com>.
Hi,

I had a similar issue; it turned out to be an issue in Storm.

No worker was assigned to the topology; all you need to do is add an
additional port in Ambari -> Storm -> Configs -> supervisor.slot.ports by
appending it to the list.


https://community.hortonworks.com/questions/32499/no-workers-in-storm-for-squid-topology.html


I had a similar issue and finally got it fixed.
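
To make the slot arithmetic concrete: each entry in supervisor.slot.ports is
one worker slot, and every running topology needs at least one free slot, so a
single-node install can easily run out once the cef parser is added. A minimal
sketch (the port list and the topology worker counts below are illustrative,
not read from a live cluster):

```python
# One worker slot per configured port on the supervisor.
slot_ports = [6700, 6701, 6702, 6703]            # Ambari -> Storm -> Configs

# Illustrative topology -> worker counts; a real cluster's numbers come
# from the Storm UI.
workers_wanted = {"cef": 1, "enrichment": 1, "indexing": 1,
                  "profiler": 1, "squid": 1}

free = len(slot_ports) - sum(workers_wanted.values())
if free < 0:
    # -> short 1 slot(s): add ports to supervisor.slot.ports
    print("short %d slot(s): add ports to supervisor.slot.ports" % -free)
else:
    print("%d slot(s) free" % free)
```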

On Mon, Jan 15, 2018 at 8:45 AM, Gaurav Bapat <ga...@gmail.com> wrote:

> Storm UI
>
> On 15 January 2018 at 08:59, Gaurav Bapat <ga...@gmail.com> wrote:
>
>> Hey Jon,
>>
>> I have the Storm UI, and the logs are coming from firewalls, servers,
>> etc. from other machines (HP ArcSight Logger).
>>
>> I have attached the NiFi screenshots, my logs are coming but there is
>> some error with Kafka and I am having issues with configuring Kafka broker
>>
>>
>>
>> On 12 January 2018 at 18:14, Zeolla@GMail.com <ze...@gmail.com> wrote:
>>
>>> In Ambari under storm you can find the UI under quick links at the top.
>>> That said, the issue seems to be upstream of Metron, in NiFi.  That is
>>> something I can't help with as much, but if you can share the listensyslog
>>> processor config that would be a start.  Also, share the config of the
>>> thing that is sending syslog as well (are these local syslog, is that
>>> machine aggregating syslog from other machines, etc.).  Thanks,
>>>
>>> Jon
>>>
>>> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <ga...@gmail.com> wrote:
>>>
>>>> I have created a Kafka topic "cef", but my ListenSyslog processor is
>>>> not getting any logs.
>>>>
>>>> I also checked using tcpdump -i, and the logs are reaching my machine,
>>>> but ListenSyslog is not picking them up.
>>>>
>>>> On 12 January 2018 at 11:13, Gaurav Bapat <ga...@gmail.com>
>>>> wrote:
>>>>
>>>>> [root@metron incubator-metron]# ./metron-deployment/scripts/pl
>>>>> atform-info.sh
>>>>> Metron 0.4.3
>>>>> --
>>>>> * master
>>>>> --
>>>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>>>> Author: cstella <ce...@gmail.com>
>>>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>>>
>>>>>     METRON-1379: Add an OBJECT_GET stellar function closes
>>>>> apache/incubator-metron#880
>>>>> --
>>>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile | 2 +-
>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>> --
>>>>> ansible 2.0.0.2
>>>>>   config file =
>>>>>   configured module search path = Default w/o overrides
>>>>> --
>>>>> Vagrant 1.9.6
>>>>> --
>>>>> Python 2.7.5
>>>>> --
>>>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>>>> 2015-11-10T22:11:47+05:30)
>>>>> Maven home: /opt/maven/current
>>>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>>>> Java home: /opt/jdk1.8.0_151/jre
>>>>> Default locale: en_US, platform encoding: UTF-8
>>>>> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch:
>>>>> "amd64", family: "unix"
>>>>> --
>>>>> Docker version 1.12.6, build ec8512b/1.12.6
>>>>> --
>>>>> node
>>>>> v8.9.3
>>>>> --
>>>>> npm
>>>>> 5.5.1
>>>>> --
>>>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>> This is free software; see the source for copying conditions.  There
>>>>> is NO
>>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
>>>>> PURPOSE.
>>>>>
>>>>> --
>>>>> Compiler is C++11 compliant
>>>>> --
>>>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4 01:06:37
>>>>> UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>>>> --
>>>>> Total System Memory = 15773.3 MB
>>>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>>>> Processor Speed: 3320.875 MHz
>>>>> Processor Speed: 3307.191 MHz
>>>>> Processor Speed: 3376.699 MHz
>>>>> Processor Speed: 3338.917 MHz
>>>>> Total Physical Processors: 4
>>>>> Total cores: 16
>>>>> Disk information:
>>>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>>>> This CPU appears to support virtualization
>>>>>
>>>>> On 12 January 2018 at 09:25, Gaurav Bapat <ga...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hey Jon,
>>>>>>
>>>>>> Appreciate your timely reply.
>>>>>>
>>>>>> I have gone through your answer but still can't figure out how to do
>>>>>> parsing/indexing in the Storm UI, as I can't find any option for it.
>>>>>>
>>>>>> Is there any other UI to do parsing/indexing?
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <ze...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> So, you created a new cef topic, and set up the appropriate parser
>>>>>>> config for it (if not, this
>>>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>>>> may be helpful)?  If so:
>>>>>>>
>>>>>>> Here are some basic troubleshooting steps:
>>>>>>> 1.  Validate that the logs are getting onto the kafka topic that you
>>>>>>> are sending to.  If they aren't there, the problem is upstream from Metron.
>>>>>>> 2.  If they are getting onto the kafka topic they are being directly
>>>>>>> sent to, check the indexing kafka topic for an enriched version of those
>>>>>>> same logs.
>>>>>>> 3.  Do a binary search of the various components involved with
>>>>>>> ingest.
>>>>>>>     a. If the logs are *not* on the indexing kafka topic, check the
>>>>>>> enrichments topic for those logs.
>>>>>>>     b. If the logs are *not* on the enrichments topic, check the
>>>>>>> parser storm topology.
>>>>>>>     c. If the logs are on the enrichments topic, but *not*
>>>>>>> indexing, check the enrichments storm topology.
>>>>>>>     d. If the logs are on the indexing topic but *not* in Kibana,
>>>>>>> check the indexing storm topology.
>>>>>>>     e. If the logs are on the indexing topic and the indexing storm
>>>>>>> topology is in good shape, check elasticsearch directly.
>>>>>>> 4.  You should have identified where the issue is at this point.
>>>>>>> Report back here with what you observed, any relevant error messages, etc.
>>>>>>>
>>>>>>> Side note:  We should document a decision tree for troubleshooting
>>>>>>> data ingest.  It is fairly straightforward and makes me wonder if we
>>>>>>> already have this somewhere and I'm not aware of it?  It would also be a
>>>>>>> good place to put pointers to some common errors.
>>>>>>>
>>>>>>> Jon
>>>>>>>
>>>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <ga...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hello everyone, I have deployed Metron on a single node machine and
>>>>>>>> I would like to know how do I get Syslogs from NiFi into Kibana dashboard?
>>>>>>>>
>>>>>>>> I have created a Kafka topic by the name "cef" and I can see that
>>>>>>>> the topic exists in
>>>>>>>> Metron Configuration but I am unable to connect it with Kibana
>>>>>>>>
>>>>>>>> Need Help!!
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>>
>>>>>>> Jon
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>> --
>>>
>>> Jon
>>>
>>
>>
>


-- 
With Regards
Farrukh Naveed Anjum

Re: Getting Syslogs to Metron

Posted by Gaurav Bapat <ga...@gmail.com>.
Storm UI

On 15 January 2018 at 08:59, Gaurav Bapat <ga...@gmail.com> wrote:

> Hey Jon,
>
> I have the Storm UI, and the logs are coming from firewalls, servers,
> etc. from other machines (HP ArcSight Logger).
>
> I have attached the NiFi screenshots, my logs are coming but there is some
> error with Kafka and I am having issues with configuring Kafka broker
>
>
>
> On 12 January 2018 at 18:14, Zeolla@GMail.com <ze...@gmail.com> wrote:
>
>> In Ambari under storm you can find the UI under quick links at the top.
>> That said, the issue seems to be upstream of Metron, in NiFi.  That is
>> something I can't help with as much, but if you can share the listensyslog
>> processor config that would be a start.  Also, share the config of the
>> thing that is sending syslog as well (are these local syslog, is that
>> machine aggregating syslog from other machines, etc.).  Thanks,
>>
>> Jon
>>
>> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <ga...@gmail.com> wrote:
>>
>>> I have created a Kafka topic "cef", but my ListenSyslog processor is
>>> not getting any logs.
>>>
>>> I also checked using tcpdump -i, and the logs are reaching my machine,
>>> but ListenSyslog is not picking them up.
>>>
>>> On 12 January 2018 at 11:13, Gaurav Bapat <ga...@gmail.com> wrote:
>>>
>>>> [root@metron incubator-metron]# ./metron-deployment/scripts/pl
>>>> atform-info.sh
>>>> Metron 0.4.3
>>>> --
>>>> * master
>>>> --
>>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>>> Author: cstella <ce...@gmail.com>
>>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>>
>>>>     METRON-1379: Add an OBJECT_GET stellar function closes
>>>> apache/incubator-metron#880
>>>> --
>>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile | 2 +-
>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>> --
>>>> ansible 2.0.0.2
>>>>   config file =
>>>>   configured module search path = Default w/o overrides
>>>> --
>>>> Vagrant 1.9.6
>>>> --
>>>> Python 2.7.5
>>>> --
>>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>>> 2015-11-10T22:11:47+05:30)
>>>> Maven home: /opt/maven/current
>>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>>> Java home: /opt/jdk1.8.0_151/jre
>>>> Default locale: en_US, platform encoding: UTF-8
>>>> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch: "amd64",
>>>> family: "unix"
>>>> --
>>>> Docker version 1.12.6, build ec8512b/1.12.6
>>>> --
>>>> node
>>>> v8.9.3
>>>> --
>>>> npm
>>>> 5.5.1
>>>> --
>>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>> This is free software; see the source for copying conditions.  There is
>>>> NO
>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
>>>> PURPOSE.
>>>>
>>>> --
>>>> Compiler is C++11 compliant
>>>> --
>>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4 01:06:37
>>>> UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>>> --
>>>> Total System Memory = 15773.3 MB
>>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>>> Processor Speed: 3320.875 MHz
>>>> Processor Speed: 3307.191 MHz
>>>> Processor Speed: 3376.699 MHz
>>>> Processor Speed: 3338.917 MHz
>>>> Total Physical Processors: 4
>>>> Total cores: 16
>>>> Disk information:
>>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>>> This CPU appears to support virtualization
>>>>
>>>> On 12 January 2018 at 09:25, Gaurav Bapat <ga...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hey Jon,
>>>>>
>>>>> Appreciate your timely reply.
>>>>>
>>>>> I have gone through your answer but still can't figure out how to do
>>>>> parsing/indexing in the Storm UI, as I can't find any option for it.
>>>>>
>>>>> Is there any other UI to do parsing/indexing?
>>>>>
>>>>>
>>>>>
>>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <ze...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> So, you created a new cef topic, and set up the appropriate parser
>>>>>> config for it (if not, this
>>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>>> may be helpful)?  If so:
>>>>>>
>>>>>> Here are some basic troubleshooting steps:
>>>>>> 1.  Validate that the logs are getting onto the kafka topic that you
>>>>>> are sending to.  If they aren't there, the problem is upstream from Metron.
>>>>>> 2.  If they are getting onto the kafka topic they are being directly
>>>>>> sent to, check the indexing kafka topic for an enriched version of those
>>>>>> same logs.
>>>>>> 3.  Do a binary search of the various components involved with ingest.
>>>>>>     a. If the logs are *not* on the indexing kafka topic, check the
>>>>>> enrichments topic for those logs.
>>>>>>     b. If the logs are *not* on the enrichments topic, check the
>>>>>> parser storm topology.
>>>>>>     c. If the logs are on the enrichments topic, but *not* indexing,
>>>>>> check the enrichments storm topology.
>>>>>>     d. If the logs are on the indexing topic but *not* in Kibana,
>>>>>> check the indexing storm topology.
>>>>>>     e. If the logs are on the indexing topic and the indexing storm
>>>>>> topology is in good shape, check elasticsearch directly.
>>>>>> 4.  You should have identified where the issue is at this point.
>>>>>> Report back here with what you observed, any relevant error messages, etc.
>>>>>>
>>>>>> Side note:  We should document a decision tree for troubleshooting
>>>>>> data ingest.  It is fairly straightforward and makes me wonder if we
>>>>>> already have this somewhere and I'm not aware of it?  It would also be a
>>>>>> good place to put pointers to some common errors.
>>>>>>
>>>>>> Jon
>>>>>>
>>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <ga...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hello everyone, I have deployed Metron on a single node machine and
>>>>>>> I would like to know how do I get Syslogs from NiFi into Kibana dashboard?
>>>>>>>
>>>>>>> I have created a Kafka topic by the name "cef" and I can see that
>>>>>>> the topic exists in
>>>>>>> Metron Configuration but I am unable to connect it with Kibana
>>>>>>>
>>>>>>> Need Help!!
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>>
>>>>>> Jon
>>>>>>
>>>>>
>>>>>
>>>>
>>> --
>>
>> Jon
>>
>
>

Re: Getting Syslogs to Metron

Posted by Gaurav Bapat <ga...@gmail.com>.
Hey Laurens,

My Kafka processor says "Failed while waiting for acks from Kafka".
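
"Failed while waiting for acks from Kafka" in NiFi usually means PublishKafka
cannot reach, or cannot complete the protocol with, the broker. A minimal
reachability probe; the host and the HDP default broker port 6667 are
assumptions to adjust for your cluster:

```python
import socket

broker_host, broker_port = "127.0.0.1", 6667   # HDP default; upstream uses 9092

# Plain TCP connect: only proves the port is open, not that Kafka is healthy.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(2)
try:
    sock.connect((broker_host, broker_port))
    status = "reachable"
except OSError as exc:
    status = "unreachable (%s)" % exc
finally:
    sock.close()
print("broker %s:%d %s" % (broker_host, broker_port, status))
```

Even when the port is reachable, acks can still time out if the broker
advertises a hostname NiFi cannot resolve, so the listeners /
advertised.listeners values in Ambari are worth checking too.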

On 15 January 2018 at 21:00, Laurens Vets <la...@daemon.be> wrote:

> Hi Gaurav,
>
> If you click on the red squares in the upper right corners of your
> processors, what error messages do you see?
>
> On 2018-01-14 19:29, Gaurav Bapat wrote:
>
> Hey Jon,
>
> I have the Storm UI, and the logs are coming from firewalls, servers,
> etc. from other machines (HP ArcSight Logger).
>
> I have attached the NiFi screenshots, my logs are coming but there is some
> error with Kafka and I am having issues with configuring Kafka broker
>
>
>
> On 12 January 2018 at 18:14, Zeolla@GMail.com <ze...@gmail.com> wrote:
>
>> In Ambari under storm you can find the UI under quick links at the top.
>> That said, the issue seems to be upstream of Metron, in NiFi.  That is
>> something I can't help with as much, but if you can share the listensyslog
>> processor config that would be a start.  Also, share the config of the
>> thing that is sending syslog as well (are these local syslog, is that
>> machine aggregating syslog from other machines, etc.).  Thanks,
>>
>> Jon
>>
>> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <ga...@gmail.com> wrote:
>>
>>> I have created a Kafka topic "cef", but my ListenSyslog processor is
>>> not getting any logs.
>>>
>>> I also checked using tcpdump -i, and the logs are reaching my machine,
>>> but ListenSyslog is not picking them up.
>>>
>>> On 12 January 2018 at 11:13, Gaurav Bapat <ga...@gmail.com> wrote:
>>>
>>>> [root@metron incubator-metron]# ./metron-deployment/scripts/pl
>>>> atform-info.sh
>>>> Metron 0.4.3
>>>> --
>>>> * master
>>>> --
>>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>>> Author: cstella <ce...@gmail.com>
>>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>>
>>>>     METRON-1379: Add an OBJECT_GET stellar function closes
>>>> apache/incubator-metron#880
>>>> --
>>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile | 2 +-
>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>> --
>>>> ansible 2.0.0.2
>>>>   config file =
>>>>   configured module search path = Default w/o overrides
>>>> --
>>>> Vagrant 1.9.6
>>>> --
>>>> Python 2.7.5
>>>> --
>>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>>> 2015-11-10T22:11:47+05:30)
>>>> Maven home: /opt/maven/current
>>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>>> Java home: /opt/jdk1.8.0_151/jre
>>>> Default locale: en_US, platform encoding: UTF-8
>>>> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch: "amd64",
>>>> family: "unix"
>>>> --
>>>> Docker version 1.12.6, build ec8512b/1.12.6
>>>> --
>>>> node
>>>> v8.9.3
>>>> --
>>>> npm
>>>> 5.5.1
>>>> --
>>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>> This is free software; see the source for copying conditions.  There is
>>>> NO
>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
>>>> PURPOSE.
>>>>
>>>> --
>>>> Compiler is C++11 compliant
>>>> --
>>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4 01:06:37
>>>> UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>>> --
>>>> Total System Memory = 15773.3 MB
>>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>>> Processor Speed: 3320.875 MHz
>>>> Processor Speed: 3307.191 MHz
>>>> Processor Speed: 3376.699 MHz
>>>> Processor Speed: 3338.917 MHz
>>>> Total Physical Processors: 4
>>>> Total cores: 16
>>>> Disk information:
>>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>>> This CPU appears to support virtualization
>>>>
>>>> On 12 January 2018 at 09:25, Gaurav Bapat <ga...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hey Jon,
>>>>>
>>>>> Appreciate your timely reply.
>>>>>
>>>>> I have gone through your answer but still can't figure out how to do
>>>>> parsing/indexing in the Storm UI, as I can't find any option for it.
>>>>>
>>>>> Is there any other UI to do parsing/indexing?
>>>>>
>>>>>
>>>>>
>>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <ze...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> So, you created a new cef topic, and set up the appropriate parser
>>>>>> config for it (if not, this
>>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>>> may be helpful)?  If so:
>>>>>>
>>>>>> Here are some basic troubleshooting steps:
>>>>>> 1.  Validate that the logs are getting onto the kafka topic that you
>>>>>> are sending to.  If they aren't there, the problem is upstream from Metron.
>>>>>> 2.  If they are getting onto the kafka topic they are being directly
>>>>>> sent to, check the indexing kafka topic for an enriched version of those
>>>>>> same logs.
>>>>>> 3.  Do a binary search of the various components involved with ingest.
>>>>>>     a. If the logs are *not* on the indexing kafka topic, check the
>>>>>> enrichments topic for those logs.
>>>>>>     b. If the logs are *not* on the enrichments topic, check the
>>>>>> parser storm topology.
>>>>>>     c. If the logs are on the enrichments topic, but *not* indexing,
>>>>>> check the enrichments storm topology.
>>>>>>     d. If the logs are on the indexing topic but *not* in Kibana,
>>>>>> check the indexing storm topology.
>>>>>>     e. If the logs are on the indexing topic and the indexing storm
>>>>>> topology is in good shape, check elasticsearch directly.
>>>>>> 4.  You should have identified where the issue is at this point.
>>>>>> Report back here with what you observed, any relevant error messages, etc.
>>>>>>
>>>>>> Side note:  We should document a decision tree for troubleshooting
>>>>>> data ingest.  It is fairly straightforward and makes me wonder if we
>>>>>> already have this somewhere and I'm not aware of it?  It would also be a
>>>>>> good place to put pointers to some common errors.
>>>>>>
>>>>>> Jon
>>>>>>
>>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <ga...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hello everyone, I have deployed Metron on a single node machine and
>>>>>>> I would like to know how do I get Syslogs from NiFi into Kibana dashboard?
>>>>>>>
>>>>>>> I have created a Kafka topic by the name "cef" and I can see that
>>>>>>> the topic exists in
>>>>>>> Metron Configuration but I am unable to connect it with Kibana
>>>>>>>
>>>>>>> Need Help!!
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>>
>>>>>> Jon
>>>>>>
>>>>> --
>>
>> Jon
>>
>
>

Re: Getting Syslogs to Metron

Posted by Laurens Vets <la...@daemon.be>.
Hi Gaurav, 

If you click on the red squares in the upper right corners of your
processors, what error messages do you see? 

On 2018-01-14 19:29, Gaurav Bapat wrote:

> Hey Jon,
> 
> I have the Storm UI, and the logs are coming from firewalls, servers, etc. from other machines (HP ArcSight Logger).
> 
> I have attached the NiFi screenshots, my logs are coming but there is some error with Kafka and I am having issues with configuring Kafka broker
> 
> On 12 January 2018 at 18:14, Zeolla@GMail.com <ze...@gmail.com> wrote:
> 
> In Ambari under storm you can find the UI under quick links at the top.  That said, the issue seems to be upstream of Metron, in NiFi.  That is something I can't help with as much, but if you can share the listensyslog processor config that would be a start.  Also, share the config of the thing that is sending syslog as well (are these local syslog, is that machine aggregating syslog from other machines, etc.).  Thanks, 
> 
> Jon
> 
> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <ga...@gmail.com> wrote: 
> 
> I have created a Kafka topic "cef", but my ListenSyslog processor is not getting any logs.
> 
> I also checked using tcpdump -i, and the logs are reaching my machine, but ListenSyslog is not picking them up.
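
A quick way to test the ListenSyslog processor independently of the upstream
sender is to fire a hand-built syslog message at it. This is a hedged sketch:
localhost, port 514, and UDP are assumptions; use whatever host, port, and
protocol the processor is actually configured with.

```python
import socket

# RFC 3164-style test message: PRI 14 = facility user(1)*8 + severity info(6).
msg = "<14>Jan 12 01:00:00 testhost metron-test: hello metron"

# UDP is fire-and-forget, so the send succeeds even if nothing is listening;
# watch the ListenSyslog processor (or tcpdump) to see whether it arrives.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg.encode("utf-8"), ("127.0.0.1", 514))
sock.close()
print("sent:", msg)
```

If this test message shows up in the processor but the real feed does not,
the problem is network or firewall rather than NiFi configuration.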
> 
> On 12 January 2018 at 11:13, Gaurav Bapat <ga...@gmail.com> wrote:
> 
> [root@metron incubator-metron]# ./metron-deployment/scripts/platform-info.sh
> Metron 0.4.3
> --
> * master
> --
> commit c559ed7e1838ec71344eae3d9e37771db2641635
> Author: cstella <ce...@gmail.com>
> Date:   Tue Jan 9 15:28:47 2018 -0500
> 
> METRON-1379: Add an OBJECT_GET stellar function closes apache/incubator-metron#880
> --
> metron-deployment/vagrant/full-dev-platform/Vagrantfile | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> --
> ansible 2.0.0.2
> config file = 
> configured module search path = Default w/o overrides
> --
> Vagrant 1.9.6
> --
> Python 2.7.5
> --
> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T22:11:47+05:30)
> Maven home: /opt/maven/current
> Java version: 1.8.0_151, vendor: Oracle Corporation
> Java home: /opt/jdk1.8.0_151/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch: "amd64", family: "unix"
> --
> Docker version 1.12.6, build ec8512b/1.12.6
> --
> node
> v8.9.3
> --
> npm
> 5.5.1
> --
> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
> Copyright (C) 2015 Free Software Foundation, Inc.
> This is free software; see the source for copying conditions.  There is NO
> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
> 
> --
> Compiler is C++11 compliant
> --
> Linux metron.com [1] 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
> --
> Total System Memory = 15773.3 MB
> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
> Processor Speed: 3320.875 MHz
> Processor Speed: 3307.191 MHz
> Processor Speed: 3376.699 MHz
> Processor Speed: 3338.917 MHz
> Total Physical Processors: 4
> Total cores: 16
> Disk information:
> /dev/mapper/centos-root  200G   22G  179G  11% /
> /dev/sda1                2.0G  224M  1.8G  11% /boot
> /dev/sda2               1022M   12K 1022M   1% /boot/efi
> /dev/mapper/centos-home  247G   10G  237G   5% /home
> This CPU appears to support virtualization 
> 
> On 12 January 2018 at 09:25, Gaurav Bapat <ga...@gmail.com> wrote:
> 
> Hey Jon,
> 
> Appreciate your timely reply.
> 
> I have gone through your answer but still can't figure out how to do parsing/indexing in the Storm UI, as I can't find any option for it.
> 
> Is there any other UI to do parsing/indexing?
> 
> On 11 January 2018 at 21:22, Zeolla@GMail.com <ze...@gmail.com> wrote:
> 
> So, you created a new cef topic, and set up the appropriate parser config for it (if not, this [2] may be helpful)?  If so: 
> Here are some basic troubleshooting steps: 
> 1.  Validate that the logs are getting onto the kafka topic that you are sending to.  If they aren't there, the problem is upstream from Metron. 
> 2.  If they are getting onto the kafka topic they are being directly sent to, check the indexing kafka topic for an enriched version of those same logs. 
> 3.  Do a binary search of the various components involved with ingest. 
> a. If the logs are NOT on the indexing kafka topic, check the enrichments topic for those logs. 
> b. If the logs are NOT on the enrichments topic, check the parser storm topology. 
> c. If the logs are on the enrichments topic, but NOT indexing, check the enrichments storm topology. 
> d. If the logs are on the indexing topic but NOT in Kibana, check the indexing storm topology. 
> e. If the logs are on the indexing topic and the indexing storm topology is in good shape, check elasticsearch directly. 
> 4.  You should have identified where the issue is at this point.  Report back here with what you observed, any relevant error messages, etc. 
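
The Kafka checks in steps 1-3 amount to consuming a few messages from each
topic in turn. A sketch that shells out to the console consumer; the HDP
install path, the ZooKeeper address, and the Kafka 0.10-era --zookeeper flag
are all assumptions (newer clients take --bootstrap-server instead):

```python
import subprocess

# Paths assume an HDP install; the ZooKeeper address is a placeholder.
KAFKA_BIN = "/usr/hdp/current/kafka-broker/bin"
ZK = "127.0.0.1:2181"
TOPICS = ["cef", "enrichments", "indexing"]

def consumer_cmd(topic, n=5):
    """Build the Kafka 0.10-era console-consumer command for one topic."""
    return [KAFKA_BIN + "/kafka-console-consumer.sh",
            "--zookeeper", ZK, "--topic", topic,
            "--from-beginning", "--max-messages", str(n)]

for topic in TOPICS:
    print("--- %s ---" % topic)
    try:
        subprocess.run(consumer_cmd(topic), timeout=30)  # prints messages, if any
    except (OSError, subprocess.TimeoutExpired) as exc:
        print("could not run consumer: %s" % exc)
```

If messages appear on cef but never on enrichments or indexing, the fault is
in the corresponding Storm topology rather than in NiFi.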
> 
> Side note:  We should document a decision tree for troubleshooting data ingest.  It is fairly straightforward and makes me wonder if we already have this somewhere and I'm not aware of it?  It would also be a good place to put pointers to some common errors. 
> 
> Jon 
> 
> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <ga...@gmail.com> wrote: 
> 
> Hello everyone, I have deployed Metron on a single node machine and I would like to know how do I get Syslogs from NiFi into Kibana dashboard?
> 
> I have created a Kafka topic by the name "cef" and I can see that the topic exists in Metron Configuration but I am unable to connect it with Kibana
> 
> Need Help!! 
> 
> -- 
> 
> Jon

-- 

Jon 

 

Links:
------
[2]
https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source

Re: Getting Syslogs to Metron

Posted by Gaurav Bapat <ga...@gmail.com>.
Hey Jon,

I have the Storm UI up, and the logs are coming from firewalls, servers,
etc. on other machines (an HP ArcSight Logger).

I have attached the NiFi screenshots. My logs are coming in, but there is
some error with Kafka, and I am having trouble configuring the Kafka broker



On 12 January 2018 at 18:14, Zeolla@GMail.com <ze...@gmail.com> wrote:

> In Ambari under storm you can find the UI under quick links at the top.
> That said, the issue seems to be upstream of Metron, in NiFi.  That is
> something I can't help with as much, but if you can share the listensyslog
> processor config that would be a start.  Also, share the config of the
> thing that is sending syslog as well (are these local syslog, is that
> machine aggregating syslog from other machines, etc.).  Thanks,
>
> Jon
>
> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <ga...@gmail.com> wrote:
>
>> I have created a Kafka topic "cef" but my Listen Syslogs is not getting
>> logs in the processor.
>>
>> Also I checked using tcpdump -i and it is getting logs in my machine but
>> ListenSyslogs is not getting the logs
>>
>> On 12 January 2018 at 11:13, Gaurav Bapat <ga...@gmail.com> wrote:
>>
>>> [root@metron incubator-metron]# ./metron-deployment/scripts/
>>> platform-info.sh
>>> Metron 0.4.3
>>> --
>>> * master
>>> --
>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>> Author: cstella <ce...@gmail.com>
>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>
>>>     METRON-1379: Add an OBJECT_GET stellar function closes
>>> apache/incubator-metron#880
>>> --
>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>> --
>>> ansible 2.0.0.2
>>>   config file =
>>>   configured module search path = Default w/o overrides
>>> --
>>> Vagrant 1.9.6
>>> --
>>> Python 2.7.5
>>> --
>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>> 2015-11-10T22:11:47+05:30)
>>> Maven home: /opt/maven/current
>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>> Java home: /opt/jdk1.8.0_151/jre
>>> Default locale: en_US, platform encoding: UTF-8
>>> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch: "amd64",
>>> family: "unix"
>>> --
>>> Docker version 1.12.6, build ec8512b/1.12.6
>>> --
>>> node
>>> v8.9.3
>>> --
>>> npm
>>> 5.5.1
>>> --
>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>> This is free software; see the source for copying conditions.  There is
>>> NO
>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
>>> PURPOSE.
>>>
>>> --
>>> Compiler is C++11 compliant
>>> --
>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4 01:06:37
>>> UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>> --
>>> Total System Memory = 15773.3 MB
>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>> Processor Speed: 3320.875 MHz
>>> Processor Speed: 3307.191 MHz
>>> Processor Speed: 3376.699 MHz
>>> Processor Speed: 3338.917 MHz
>>> Total Physical Processors: 4
>>> Total cores: 16
>>> Disk information:
>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>> This CPU appears to support virtualization
>>>
>>> On 12 January 2018 at 09:25, Gaurav Bapat <ga...@gmail.com> wrote:
>>>
>>>> Hey Jon,
>>>>
>>>> Appreciate your timely reply.
>>>>
>>>> I gone through your answer but still I can't figure out how do I do
>>>> parsing/indexing in Storm UI as I cant find any option for the same.
>>>>
>>>> Is there any other UI to do parsing/indexing?
>>>>
>>>>
>>>>
>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <ze...@gmail.com> wrote:
>>>>
>>>>> So, you created a new cef topic, and set up the appropriate parser
>>>>> config for it (if not, this
>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>> may be helpful)?  If so:
>>>>>
>>>>> Here are some basic troubleshooting steps:
>>>>> 1.  Validate that the logs are getting onto the kafka topic that you
>>>>> are sending to.  If they aren't there, the problem is upstream from Metron.
>>>>> 2.  If they are getting onto the kafka topic they are being directly
>>>>> sent to, check the indexing kafka topic for an enriched version of those
>>>>> same logs.
>>>>> 3.  Do a binary search of the various components involved with ingest.
>>>>>     a. If the logs are *not* on the indexing kafka topic, check the
>>>>> enrichments topic for those logs.
>>>>>     b. If the logs are *not* on the enrichments topic, check the
>>>>> parser storm topology.
>>>>>     c. If the logs are on the enrichments topic, but *not* indexing,
>>>>> check the enrichments storm topology.
>>>>>     d. If the logs are on the indexing but *not* Kibana, check the
>>>>> indexing storm topic.
>>>>>     e. If the logs are in on the indexing topic and indexing storm
>>>>> topic is in good shape, check elasticsearch directly.
>>>>> 4.  You should have identified where the issue is at this point.
>>>>> Report back here with what you observed, any relevant error messages, etc.
>>>>>
>>>>> Side note:  We should document a decision tree for troubleshooting
>>>>> data ingest.  It is fairly straightforward and makes me wonder if we
>>>>> already have this somewhere and I'm not aware of it?  It would also be a
>>>>> good place to put pointers to some common errors.
>>>>>
>>>>> Jon
>>>>>
>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <ga...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hello everyone, I have deployed Metron on a single node machine and I
>>>>>> would like to know how do I get Syslogs from NiFi into Kibana dashboard?
>>>>>>
>>>>>> I have created a Kafka topic by the name "cef" and I can see that the
>>>>>> topic exists in
>>>>>> Metron Configuration but I am unable to connect it with Kibana
>>>>>>
>>>>>> Need Help!!
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Jon
>>>>>
>>>>
>>>>
>>>
>> --
>
> Jon
>

Re: Getting Syslogs to Metron

Posted by Gaurav Bapat <ga...@gmail.com>.
This is a screenshot of my Storm UI. I have stopped bro and snort, but then
my Metron parser goes red.

Also, I am struggling to configure Kafka and ZooKeeper.

I don't know why, but after setting Known Brokers to node1:6667 it is
giving me an acks Kafka error
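
One way to rule out basic connectivity problems before digging into the acks
error is to check whether the broker address you configured is reachable at
all. This is a hypothetical sketch (the function name is made up, and
node1:6667 is just the address from your message):

```python
import socket

def broker_reachable(broker, timeout=3):
    """Return True if a TCP connection to 'host:port' succeeds."""
    host, port = broker.rsplit(":", 1)
    try:
        with socket.create_connection((host, int(port)), timeout=timeout):
            return True
    except OSError:
        # DNS failure, connection refused, or timeout
        return False
```

If `broker_reachable("node1:6667")` returns False, the problem is name
resolution or the broker not listening, not the producer's acks setting.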

On 12 January 2018 at 18:14, Zeolla@GMail.com <ze...@gmail.com> wrote:

> In Ambari under storm you can find the UI under quick links at the top.
> That said, the issue seems to be upstream of Metron, in NiFi.  That is
> something I can't help with as much, but if you can share the listensyslog
> processor config that would be a start.  Also, share the config of the
> thing that is sending syslog as well (are these local syslog, is that
> machine aggregating syslog from other machines, etc.).  Thanks,
>
> Jon
>
> On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <ga...@gmail.com> wrote:
>
>> I have created a Kafka topic "cef" but my Listen Syslogs is not getting
>> logs in the processor.
>>
>> Also I checked using tcpdump -i and it is getting logs in my machine but
>> ListenSyslogs is not getting the logs
>>
>> On 12 January 2018 at 11:13, Gaurav Bapat <ga...@gmail.com> wrote:
>>
>>> [root@metron incubator-metron]# ./metron-deployment/scripts/
>>> platform-info.sh
>>> Metron 0.4.3
>>> --
>>> * master
>>> --
>>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>>> Author: cstella <ce...@gmail.com>
>>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>>
>>>     METRON-1379: Add an OBJECT_GET stellar function closes
>>> apache/incubator-metron#880
>>> --
>>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>> --
>>> ansible 2.0.0.2
>>>   config file =
>>>   configured module search path = Default w/o overrides
>>> --
>>> Vagrant 1.9.6
>>> --
>>> Python 2.7.5
>>> --
>>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>>> 2015-11-10T22:11:47+05:30)
>>> Maven home: /opt/maven/current
>>> Java version: 1.8.0_151, vendor: Oracle Corporation
>>> Java home: /opt/jdk1.8.0_151/jre
>>> Default locale: en_US, platform encoding: UTF-8
>>> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch: "amd64",
>>> family: "unix"
>>> --
>>> Docker version 1.12.6, build ec8512b/1.12.6
>>> --
>>> node
>>> v8.9.3
>>> --
>>> npm
>>> 5.5.1
>>> --
>>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>> This is free software; see the source for copying conditions.  There is
>>> NO
>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
>>> PURPOSE.
>>>
>>> --
>>> Compiler is C++11 compliant
>>> --
>>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4 01:06:37
>>> UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>>> --
>>> Total System Memory = 15773.3 MB
>>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>>> Processor Speed: 3320.875 MHz
>>> Processor Speed: 3307.191 MHz
>>> Processor Speed: 3376.699 MHz
>>> Processor Speed: 3338.917 MHz
>>> Total Physical Processors: 4
>>> Total cores: 16
>>> Disk information:
>>> /dev/mapper/centos-root  200G   22G  179G  11% /
>>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>>> This CPU appears to support virtualization
>>>
>>> On 12 January 2018 at 09:25, Gaurav Bapat <ga...@gmail.com> wrote:
>>>
>>>> Hey Jon,
>>>>
>>>> Appreciate your timely reply.
>>>>
>>>> I gone through your answer but still I can't figure out how do I do
>>>> parsing/indexing in Storm UI as I cant find any option for the same.
>>>>
>>>> Is there any other UI to do parsing/indexing?
>>>>
>>>>
>>>>
>>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <ze...@gmail.com> wrote:
>>>>
>>>>> So, you created a new cef topic, and set up the appropriate parser
>>>>> config for it (if not, this
>>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>>> may be helpful)?  If so:
>>>>>
>>>>> Here are some basic troubleshooting steps:
>>>>> 1.  Validate that the logs are getting onto the kafka topic that you
>>>>> are sending to.  If they aren't there, the problem is upstream from Metron.
>>>>> 2.  If they are getting onto the kafka topic they are being directly
>>>>> sent to, check the indexing kafka topic for an enriched version of those
>>>>> same logs.
>>>>> 3.  Do a binary search of the various components involved with ingest.
>>>>>     a. If the logs are *not* on the indexing kafka topic, check the
>>>>> enrichments topic for those logs.
>>>>>     b. If the logs are *not* on the enrichments topic, check the
>>>>> parser storm topology.
>>>>>     c. If the logs are on the enrichments topic, but *not* indexing,
>>>>> check the enrichments storm topology.
>>>>>     d. If the logs are on the indexing but *not* Kibana, check the
>>>>> indexing storm topic.
>>>>>     e. If the logs are in on the indexing topic and indexing storm
>>>>> topic is in good shape, check elasticsearch directly.
>>>>> 4.  You should have identified where the issue is at this point.
>>>>> Report back here with what you observed, any relevant error messages, etc.
>>>>>
>>>>> Side note:  We should document a decision tree for troubleshooting
>>>>> data ingest.  It is fairly straightforward and makes me wonder if we
>>>>> already have this somewhere and I'm not aware of it?  It would also be a
>>>>> good place to put pointers to some common errors.
>>>>>
>>>>> Jon
>>>>>
>>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <ga...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hello everyone, I have deployed Metron on a single node machine and I
>>>>>> would like to know how do I get Syslogs from NiFi into Kibana dashboard?
>>>>>>
>>>>>> I have created a Kafka topic by the name "cef" and I can see that the
>>>>>> topic exists in
>>>>>> Metron Configuration but I am unable to connect it with Kibana
>>>>>>
>>>>>> Need Help!!
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Jon
>>>>>
>>>>
>>>>
>>>
>> --
>
> Jon
>

Re: Getting Syslogs to Metron

Posted by "Zeolla@GMail.com" <ze...@gmail.com>.
In Ambari, under Storm, you can find the UI under Quick Links at the top.
That said, the issue seems to be upstream of Metron, in NiFi.  That is
something I can't help with as much, but if you can share the ListenSyslog
processor config, that would be a start.  Also share the config of the
thing that is sending syslog (are these local syslogs, is that
machine aggregating syslog from other machines, etc.).  Thanks,

Jon

On Fri, Jan 12, 2018, 01:00 Gaurav Bapat <ga...@gmail.com> wrote:

> I have created a Kafka topic "cef" but my Listen Syslogs is not getting
> logs in the processor.
>
> Also I checked using tcpdump -i and it is getting logs in my machine but
> ListenSyslogs is not getting the logs
>
> On 12 January 2018 at 11:13, Gaurav Bapat <ga...@gmail.com> wrote:
>
>> [root@metron incubator-metron]#
>> ./metron-deployment/scripts/platform-info.sh
>> Metron 0.4.3
>> --
>> * master
>> --
>> commit c559ed7e1838ec71344eae3d9e37771db2641635
>> Author: cstella <ce...@gmail.com>
>> Date:   Tue Jan 9 15:28:47 2018 -0500
>>
>>     METRON-1379: Add an OBJECT_GET stellar function closes
>> apache/incubator-metron#880
>> --
>>  metron-deployment/vagrant/full-dev-platform/Vagrantfile | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>> --
>> ansible 2.0.0.2
>>   config file =
>>   configured module search path = Default w/o overrides
>> --
>> Vagrant 1.9.6
>> --
>> Python 2.7.5
>> --
>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>> 2015-11-10T22:11:47+05:30)
>> Maven home: /opt/maven/current
>> Java version: 1.8.0_151, vendor: Oracle Corporation
>> Java home: /opt/jdk1.8.0_151/jre
>> Default locale: en_US, platform encoding: UTF-8
>> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch: "amd64",
>> family: "unix"
>> --
>> Docker version 1.12.6, build ec8512b/1.12.6
>> --
>> node
>> v8.9.3
>> --
>> npm
>> 5.5.1
>> --
>> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
>> Copyright (C) 2015 Free Software Foundation, Inc.
>> This is free software; see the source for copying conditions.  There is NO
>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
>> PURPOSE.
>>
>> --
>> Compiler is C++11 compliant
>> --
>> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4 01:06:37
>> UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>> --
>> Total System Memory = 15773.3 MB
>> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
>> Processor Speed: 3320.875 MHz
>> Processor Speed: 3307.191 MHz
>> Processor Speed: 3376.699 MHz
>> Processor Speed: 3338.917 MHz
>> Total Physical Processors: 4
>> Total cores: 16
>> Disk information:
>> /dev/mapper/centos-root  200G   22G  179G  11% /
>> /dev/sda1                2.0G  224M  1.8G  11% /boot
>> /dev/sda2               1022M   12K 1022M   1% /boot/efi
>> /dev/mapper/centos-home  247G   10G  237G   5% /home
>> This CPU appears to support virtualization
>>
>> On 12 January 2018 at 09:25, Gaurav Bapat <ga...@gmail.com> wrote:
>>
>>> Hey Jon,
>>>
>>> Appreciate your timely reply.
>>>
>>> I gone through your answer but still I can't figure out how do I do
>>> parsing/indexing in Storm UI as I cant find any option for the same.
>>>
>>> Is there any other UI to do parsing/indexing?
>>>
>>>
>>>
>>> On 11 January 2018 at 21:22, Zeolla@GMail.com <ze...@gmail.com> wrote:
>>>
>>>> So, you created a new cef topic, and set up the appropriate parser
>>>> config for it (if not, this
>>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>>> may be helpful)?  If so:
>>>>
>>>> Here are some basic troubleshooting steps:
>>>> 1.  Validate that the logs are getting onto the kafka topic that you
>>>> are sending to.  If they aren't there, the problem is upstream from Metron.
>>>> 2.  If they are getting onto the kafka topic they are being directly
>>>> sent to, check the indexing kafka topic for an enriched version of those
>>>> same logs.
>>>> 3.  Do a binary search of the various components involved with ingest.
>>>>     a. If the logs are *not* on the indexing kafka topic, check the
>>>> enrichments topic for those logs.
>>>>     b. If the logs are *not* on the enrichments topic, check the
>>>> parser storm topology.
>>>>     c. If the logs are on the enrichments topic, but *not* indexing,
>>>> check the enrichments storm topology.
>>>>     d. If the logs are on the indexing but *not* Kibana, check the
>>>> indexing storm topic.
>>>>     e. If the logs are in on the indexing topic and indexing storm
>>>> topic is in good shape, check elasticsearch directly.
>>>> 4.  You should have identified where the issue is at this point.
>>>> Report back here with what you observed, any relevant error messages, etc.
>>>>
>>>> Side note:  We should document a decision tree for troubleshooting data
>>>> ingest.  It is fairly straightforward and makes me wonder if we already
>>>> have this somewhere and I'm not aware of it?  It would also be a good place
>>>> to put pointers to some common errors.
>>>>
>>>> Jon
>>>>
>>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <ga...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hello everyone, I have deployed Metron on a single node machine and I
>>>>> would like to know how do I get Syslogs from NiFi into Kibana dashboard?
>>>>>
>>>>> I have created a Kafka topic by the name "cef" and I can see that the
>>>>> topic exists in
>>>>> Metron Configuration but I am unable to connect it with Kibana
>>>>>
>>>>> Need Help!!
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Jon
>>>>
>>>
>>>
>>
> --

Jon

Re: Getting Syslogs to Metron

Posted by Gaurav Bapat <ga...@gmail.com>.
I have created a Kafka topic "cef", but my ListenSyslog processor is not
receiving any logs.

Also, I checked using tcpdump -i, and the logs are reaching my machine, but
ListenSyslog is not picking them up
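
As a quick sanity check, you can send a single test message at the
ListenSyslog port yourself and see whether the processor picks it up. This
is a hypothetical sketch (port 5140 and the message contents are made up;
use whatever port and protocol your ListenSyslog processor is configured
for):

```python
import socket

def send_test_syslog(host="127.0.0.1", port=5140):
    """Send one RFC 3164-style syslog message over UDP and return it."""
    # <134> = priority for facility local0 (16) and severity info (6): 16*8 + 6
    msg = "<134>Jan 12 11:13:00 testhost metron-test: hello from nifi test"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg.encode("utf-8"), (host, port))
    return msg
```

If this message shows up in the processor but the firewall logs do not, the
problem is likely the sender's destination port/protocol rather than NiFi.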

On 12 January 2018 at 11:13, Gaurav Bapat <ga...@gmail.com> wrote:

> [root@metron incubator-metron]# ./metron-deployment/scripts/
> platform-info.sh
> Metron 0.4.3
> --
> * master
> --
> commit c559ed7e1838ec71344eae3d9e37771db2641635
> Author: cstella <ce...@gmail.com>
> Date:   Tue Jan 9 15:28:47 2018 -0500
>
>     METRON-1379: Add an OBJECT_GET stellar function closes
> apache/incubator-metron#880
> --
>  metron-deployment/vagrant/full-dev-platform/Vagrantfile | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> --
> ansible 2.0.0.2
>   config file =
>   configured module search path = Default w/o overrides
> --
> Vagrant 1.9.6
> --
> Python 2.7.5
> --
> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
> 2015-11-10T22:11:47+05:30)
> Maven home: /opt/maven/current
> Java version: 1.8.0_151, vendor: Oracle Corporation
> Java home: /opt/jdk1.8.0_151/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch: "amd64",
> family: "unix"
> --
> Docker version 1.12.6, build ec8512b/1.12.6
> --
> node
> v8.9.3
> --
> npm
> 5.5.1
> --
> g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
> Copyright (C) 2015 Free Software Foundation, Inc.
> This is free software; see the source for copying conditions.  There is NO
> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
>
> --
> Compiler is C++11 compliant
> --
> Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4 01:06:37 UTC
> 2018 x86_64 x86_64 x86_64 GNU/Linux
> --
> Total System Memory = 15773.3 MB
> Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
> Processor Speed: 3320.875 MHz
> Processor Speed: 3307.191 MHz
> Processor Speed: 3376.699 MHz
> Processor Speed: 3338.917 MHz
> Total Physical Processors: 4
> Total cores: 16
> Disk information:
> /dev/mapper/centos-root  200G   22G  179G  11% /
> /dev/sda1                2.0G  224M  1.8G  11% /boot
> /dev/sda2               1022M   12K 1022M   1% /boot/efi
> /dev/mapper/centos-home  247G   10G  237G   5% /home
> This CPU appears to support virtualization
>
> On 12 January 2018 at 09:25, Gaurav Bapat <ga...@gmail.com> wrote:
>
>> Hey Jon,
>>
>> Appreciate your timely reply.
>>
>> I gone through your answer but still I can't figure out how do I do
>> parsing/indexing in Storm UI as I cant find any option for the same.
>>
>> Is there any other UI to do parsing/indexing?
>>
>>
>>
>> On 11 January 2018 at 21:22, Zeolla@GMail.com <ze...@gmail.com> wrote:
>>
>>> So, you created a new cef topic, and set up the appropriate parser
>>> config for it (if not, this
>>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>>> may be helpful)?  If so:
>>>
>>> Here are some basic troubleshooting steps:
>>> 1.  Validate that the logs are getting onto the kafka topic that you are
>>> sending to.  If they aren't there, the problem is upstream from Metron.
>>> 2.  If they are getting onto the kafka topic they are being directly
>>> sent to, check the indexing kafka topic for an enriched version of those
>>> same logs.
>>> 3.  Do a binary search of the various components involved with ingest.
>>>     a. If the logs are *not* on the indexing kafka topic, check the
>>> enrichments topic for those logs.
>>>     b. If the logs are *not* on the enrichments topic, check the parser
>>> storm topology.
>>>     c. If the logs are on the enrichments topic, but *not* indexing,
>>> check the enrichments storm topology.
>>>     d. If the logs are on the indexing but *not* Kibana, check the
>>> indexing storm topic.
>>>     e. If the logs are in on the indexing topic and indexing storm topic
>>> is in good shape, check elasticsearch directly.
>>> 4.  You should have identified where the issue is at this point.  Report
>>> back here with what you observed, any relevant error messages, etc.
>>>
>>> Side note:  We should document a decision tree for troubleshooting data
>>> ingest.  It is fairly straightforward and makes me wonder if we already
>>> have this somewhere and I'm not aware of it?  It would also be a good place
>>> to put pointers to some common errors.
>>>
>>> Jon
>>>
>>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <ga...@gmail.com>
>>> wrote:
>>>
>>>> Hello everyone, I have deployed Metron on a single node machine and I
>>>> would like to know how do I get Syslogs from NiFi into Kibana dashboard?
>>>>
>>>> I have created a Kafka topic by the name "cef" and I can see that the
>>>> topic exists in
>>>> Metron Configuration but I am unable to connect it with Kibana
>>>>
>>>> Need Help!!
>>>>
>>>
>>>
>>> --
>>>
>>> Jon
>>>
>>
>>
>

Re: Getting Syslogs to Metron

Posted by Gaurav Bapat <ga...@gmail.com>.
[root@metron incubator-metron]# ./metron-deployment/scripts/platform-info.sh
Metron 0.4.3
--
* master
--
commit c559ed7e1838ec71344eae3d9e37771db2641635
Author: cstella <ce...@gmail.com>
Date:   Tue Jan 9 15:28:47 2018 -0500

    METRON-1379: Add an OBJECT_GET stellar function closes
apache/incubator-metron#880
--
 metron-deployment/vagrant/full-dev-platform/Vagrantfile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--
ansible 2.0.0.2
  config file =
  configured module search path = Default w/o overrides
--
Vagrant 1.9.6
--
Python 2.7.5
--
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
2015-11-10T22:11:47+05:30)
Maven home: /opt/maven/current
Java version: 1.8.0_151, vendor: Oracle Corporation
Java home: /opt/jdk1.8.0_151/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch: "amd64",
family: "unix"
--
Docker version 1.12.6, build ec8512b/1.12.6
--
node
v8.9.3
--
npm
5.5.1
--
g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

--
Compiler is C++11 compliant
--
Linux metron.com 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4 01:06:37 UTC
2018 x86_64 x86_64 x86_64 GNU/Linux
--
Total System Memory = 15773.3 MB
Processor Model: Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
Processor Speed: 3320.875 MHz
Processor Speed: 3307.191 MHz
Processor Speed: 3376.699 MHz
Processor Speed: 3338.917 MHz
Total Physical Processors: 4
Total cores: 16
Disk information:
/dev/mapper/centos-root  200G   22G  179G  11% /
/dev/sda1                2.0G  224M  1.8G  11% /boot
/dev/sda2               1022M   12K 1022M   1% /boot/efi
/dev/mapper/centos-home  247G   10G  237G   5% /home
This CPU appears to support virtualization

On 12 January 2018 at 09:25, Gaurav Bapat <ga...@gmail.com> wrote:

> Hey Jon,
>
> Appreciate your timely reply.
>
> I gone through your answer but still I can't figure out how do I do
> parsing/indexing in Storm UI as I cant find any option for the same.
>
> Is there any other UI to do parsing/indexing?
>
>
>
> On 11 January 2018 at 21:22, Zeolla@GMail.com <ze...@gmail.com> wrote:
>
>> So, you created a new cef topic, and set up the appropriate parser config
>> for it (if not, this
>> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
>> may be helpful)?  If so:
>>
>> Here are some basic troubleshooting steps:
>> 1.  Validate that the logs are getting onto the kafka topic that you are
>> sending to.  If they aren't there, the problem is upstream from Metron.
>> 2.  If they are getting onto the kafka topic they are being directly sent
>> to, check the indexing kafka topic for an enriched version of those same
>> logs.
>> 3.  Do a binary search of the various components involved with ingest.
>>     a. If the logs are *not* on the indexing kafka topic, check the
>> enrichments topic for those logs.
>>     b. If the logs are *not* on the enrichments topic, check the parser
>> storm topology.
>>     c. If the logs are on the enrichments topic, but *not* indexing,
>> check the enrichments storm topology.
>>     d. If the logs are on the indexing but *not* Kibana, check the
>> indexing storm topic.
>>     e. If the logs are in on the indexing topic and indexing storm topic
>> is in good shape, check elasticsearch directly.
>> 4.  You should have identified where the issue is at this point.  Report
>> back here with what you observed, any relevant error messages, etc.
>>
>> Side note:  We should document a decision tree for troubleshooting data
>> ingest.  It is fairly straightforward and makes me wonder if we already
>> have this somewhere and I'm not aware of it?  It would also be a good place
>> to put pointers to some common errors.
>>
>> Jon
>>
>> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <ga...@gmail.com>
>> wrote:
>>
>>> Hello everyone, I have deployed Metron on a single node machine and I
>>> would like to know how do I get Syslogs from NiFi into Kibana dashboard?
>>>
>>> I have created a Kafka topic by the name "cef" and I can see that the
>>> topic exists in
>>> Metron Configuration but I am unable to connect it with Kibana
>>>
>>> Need Help!!
>>>
>>
>>
>> --
>>
>> Jon
>>
>
>

Re: Getting Syslogs to Metron

Posted by Gaurav Bapat <ga...@gmail.com>.
Hey Jon,

Appreciate your timely reply.

I have gone through your answer, but I still can't figure out how to do
parsing/indexing in the Storm UI, as I can't find any option for it.

Is there any other UI to do parsing/indexing?



On 11 January 2018 at 21:22, Zeolla@GMail.com <ze...@gmail.com> wrote:

> So, you created a new cef topic, and set up the appropriate parser config
> for it (if not, this
> <https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
> may be helpful)?  If so:
>
> Here are some basic troubleshooting steps:
> 1.  Validate that the logs are getting onto the kafka topic that you are
> sending to.  If they aren't there, the problem is upstream from Metron.
> 2.  If they are getting onto the kafka topic they are being directly sent
> to, check the indexing kafka topic for an enriched version of those same
> logs.
> 3.  Do a binary search of the various components involved with ingest.
>     a. If the logs are *not* on the indexing kafka topic, check the
> enrichments topic for those logs.
>     b. If the logs are *not* on the enrichments topic, check the parser
> storm topology.
>     c. If the logs are on the enrichments topic, but *not* indexing,
> check the enrichments storm topology.
>     d. If the logs are on the indexing but *not* Kibana, check the
> indexing storm topic.
>     e. If the logs are in on the indexing topic and indexing storm topic
> is in good shape, check elasticsearch directly.
> 4.  You should have identified where the issue is at this point.  Report
> back here with what you observed, any relevant error messages, etc.
>
> Side note:  We should document a decision tree for troubleshooting data
> ingest.  It is fairly straightforward and makes me wonder if we already
> have this somewhere and I'm not aware of it?  It would also be a good place
> to put pointers to some common errors.
>
> Jon
>
> On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <ga...@gmail.com>
> wrote:
>
>> Hello everyone, I have deployed Metron on a single node machine and I
>> would like to know how do I get Syslogs from NiFi into Kibana dashboard?
>>
>> I have created a Kafka topic by the name "cef" and I can see that the
>> topic exists in
>> Metron Configuration but I am unable to connect it with Kibana
>>
>> Need Help!!
>>
>
>
> --
>
> Jon
>

Re: Getting Syslogs to Metron

Posted by "Zeolla@GMail.com" <ze...@gmail.com>.
So, you created a new cef topic, and set up the appropriate parser config
for it (if not, this
<https://cwiki.apache.org/confluence/display/METRON/Adding+a+New+Telemetry+Data+Source>
may be helpful)?  If so:

Here are some basic troubleshooting steps:
1.  Validate that the logs are getting onto the kafka topic that you are
sending to.  If they aren't there, the problem is upstream from Metron.
2.  If they are getting onto the kafka topic they are being directly sent
to, check the indexing kafka topic for an enriched version of those same
logs.
3.  Do a binary search of the various components involved with ingest.
    a. If the logs are *not* on the indexing kafka topic, check the
enrichments topic for those logs.
    b. If the logs are *not* on the enrichments topic, check the parser
storm topology.
    c. If the logs are on the enrichments topic, but *not* indexing, check
the enrichments storm topology.
    d. If the logs are on the indexing topic but *not* in Kibana, check the
indexing storm topology.
    e. If the logs are on the indexing topic and the indexing storm topology
is in good shape, check elasticsearch directly.
4.  You should have identified where the issue is at this point.  Report
back here with what you observed, any relevant error messages, etc.
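
The branching above can be restated as a small helper, just to make the
decision tree explicit (the function and label names are hypothetical; the
booleans correspond to what you observe on each Kafka topic):

```python
def next_place_to_check(on_input_topic, on_enrichments_topic,
                        on_indexing_topic, in_elasticsearch):
    """Map observations from the steps above to the component to inspect."""
    if not on_input_topic:
        return "upstream of Metron (NiFi / syslog source)"
    if not on_enrichments_topic:
        return "parser storm topology"
    if not on_indexing_topic:
        return "enrichments storm topology"
    if not in_elasticsearch:
        return "indexing storm topology"
    return "elasticsearch / kibana configuration"
```

For example, logs on the input topic but missing from the enrichments topic
point at the parser storm topology.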

Side note:  We should document a decision tree for troubleshooting data
ingest.  It is fairly straightforward and makes me wonder if we already
have this somewhere and I'm not aware of it?  It would also be a good place
to put pointers to some common errors.

Jon

On Thu, Jan 11, 2018 at 1:44 AM Gaurav Bapat <ga...@gmail.com> wrote:

> Hello everyone, I have deployed Metron on a single node machine and I
> would like to know how do I get Syslogs from NiFi into Kibana dashboard?
>
> I have created a Kafka topic by the name "cef" and I can see that the
> topic exists in
> Metron Configuration but I am unable to connect it with Kibana
>
> Need Help!!
>


-- 

Jon