Posted to user@metron.apache.org by Syed Hammad Tahir <ms...@itu.edu.pk> on 2017/11/01 08:17:40 UTC

Re: Snort Logs

How do I make these messages, sent via the Kafka producer, show up in the
Kibana dashboard or any Metron-related UI?

[image: Inline image 1]

On Tue, Oct 31, 2017 at 12:50 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> OK, so finally the message is going (the formatted one) but I still can't
> see it in the Kibana dashboard under snort's label. I had stopped all the stub
> sensors from monit before doing it. What am I doing wrong here?
>
> On Tue, Oct 31, 2017 at 11:28 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> same thing even when I send a formatted message.
>>
>> [image: Inline image 1]
>>
>> On Tue, Oct 31, 2017 at 10:57 AM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>> > wrote:
>>
>>> I sent a random message to that kafka topic and got this
>>>
>>> [image: Inline image 1]
>>>
>>> I guess this is because I am not following the format of the message I
>>> should send? Like those snort logs you showed.
>>>
>>> On Mon, Oct 30, 2017 at 5:24 PM, Zeolla@GMail.com <ze...@gmail.com>
>>> wrote:
>>>
>>>> They need to meet the format of the logs I sent earlier.  Look into the
>>>> snort output options - it may require you to rerun snort, depending on your
>>>> situation
>>>>
>>>> Jon
>>>>
>>>> On Mon, Oct 30, 2017, 06:53 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>> wrote:
>>>>
>>>>> Yes, I have converted them to text but those logs are simply captured
>>>>> packet headers over the local network. Now I just push them via that kafka
>>>>> producer command under the topic name of snort, and they will be visible in
>>>>> metron?
>>>>>
>>>>> On Mon, Oct 30, 2017 at 2:41 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> You need text logs. Here's an example of some properly formatted logs
>>>>>> - https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>
>>>>>> Jon
>>>>>>
>>>>>> On Mon, Oct 30, 2017, 01:34 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>> wrote:
>>>>>>
>>>>>>> I have found the kafka-console-producer.sh but I need to know how
>>>>>>> to make it read the snort.log (tcpdump format) file. Maybe I am missing
>>>>>>> something in plain sight, but it would be awesome if you could tell me.
>>>>>>>
>>>>>>> Regards.
>>>>>>>
>>>>>>> On Fri, Oct 27, 2017 at 5:09 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> On the 25th I said:
>>>>>>>>
>>>>>>>>      It should be in /usr/hdp/current/kafka-broker/bin/ or similar
>>>>>>>> (from memory) on node1, assuming you are running full dev.
>>>>>>>>
>>>>>>>>      Jon
>>>>>>>>
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>> On Fri, Oct 27, 2017 at 6:25 AM Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> snort logs are in tcp dump format. I may have to convert them.
>>>>>>>>>
>>>>>>>>> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic
>>>>>>>>> test
>>>>>>>>>
>>>>>>>>> How to give file name or path in this command?
>>>>>>>>>
>>>>>>>>> On Fri, Oct 27, 2017 at 2:53 PM, Zeolla@GMail.com <
>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> If you have text snort logs you can use Apache nifi or the Kafka
>>>>>>>>>> producer script as described in step 4 here[1] to push them to Metron's
>>>>>>>>>> snort topic.  You may also want to look at this [2].
>>>>>>>>>>
>>>>>>>>>> 1: https://kafka.apache.org/quickstart
>>>>>>>>>> 2: https://stackoverflow.com/questions/38701179/kafka-console-producer-and-bash-script
>>>>>>>>>>
>>>>>>>>>> Jon
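[Editorial note: Jon's step 4 above uses the Kafka console producer script; for illustration, the same push can be sketched from Python. This is a minimal sketch only: it assumes the third-party kafka-python package and full-dev's broker at node1:6667, and the helper names are invented for this example.]

```python
def read_snort_lines(path):
    """Yield the non-empty lines of a snort text output file."""
    with open(path) as f:
        for line in f:
            line = line.rstrip("\n")
            if line.strip():
                yield line

def send_to_kafka(lines, topic="snort", servers="node1:6667"):
    """Push each log line to the given Kafka topic.

    Assumes the kafka-python package and a reachable broker; the
    rough equivalent of: cat snort.out | kafka-console-producer.sh ...
    """
    from kafka import KafkaProducer
    producer = KafkaProducer(bootstrap_servers=servers)
    for line in lines:
        producer.send(topic, line.encode("utf-8"))
    producer.flush()
```

send_to_kafka is not run here, since it needs a live broker; only the file-reading helper is self-contained.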
>>>>>>>>>>
>>>>>>>>>> On Fri, Oct 27, 2017, 02:15 Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hello everyone,
>>>>>>>>>>>
>>>>>>>>>>> I have run snort independently on vagrant ssh and dumped the
>>>>>>>>>>> logs in tcpdump format. Now I want to bring them into Metron to play with
>>>>>>>>>>> them a bit. Some of you already replied to me with solutions, but that's
>>>>>>>>>>> lost in the inbox somewhere, engulfed by the elasticsearch issue that I
>>>>>>>>>>> had. Please give me an easy-to-understand solution for this problem.
>>>>>>>>>>>
>>>>>>>>>>> Regards.
>>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>
>>>>>> Jon
>>>>>>
>>>>>
>>>>> --
>>>>
>>>> Jon
>>>>
>>>
>>>
>>
>

Re: Snort Logs

Posted by "Zeolla@GMail.com" <ze...@gmail.com>.
Yeah, I think you're right.  I'm not as familiar with that code, though;
I've never had to touch it.

Given all of the lessons learned up to this point, maybe spinning full-dev
up from scratch again makes sense?  Typically I am able to spin up full-dev
pretty hands-off without hitting these types of issues, as long as
my machine has enough resources to give.

Jon

On Mon, Nov 13, 2017 at 3:49 PM Otto Fowler <ot...@gmail.com> wrote:

> I guess I am wrong.
> But from looking at the output, it looks like this is error topic stuff
> that is failing, doesn't it?
>
>
>
> On November 13, 2017 at 15:06:20, Zeolla@GMail.com (zeolla@gmail.com)
> wrote:
>
> Isn't sending indexing errors to the indexing topic intentional?  I may
> need to refresh myself on the below conversation, but I recall it coming up
> in conversation on the mailing lists in the past.
>
>
> https://github.com/apache/metron/blob/master/metron-platform/metron-elasticsearch/src/main/config/elasticsearch.properties#L33
>
> https://lists.apache.org/thread.html/01e4ed416bda8d1057f09f7717809d2802ae1de3035dc42f001d7bbe@%3Cdev.metron.apache.org%3E
>
> Jon
>
> On Mon, Nov 13, 2017 at 2:59 PM Otto Fowler <ot...@gmail.com>
> wrote:
>
>> OK.
>>
>> I think you're sending errors to your indexing topic instead of the error
>> topic.
>> I think you posted your config before, but I don’t remember off the top
>> of my head
>> where the error topic is configured.
>>
>> If the error topic is the same as the indexing topic, and you ‘have
>> errors’  I think you may see this.
>>
>>
>>
>> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (mscs16059@itu.edu.pk)
>> wrote:
>>
>> Here we go. This is what I see when I do kafka client on indexing topic.
>>
>> [image: Inline image 1]
>>
>> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>> > wrote:
>>
>>> ok, I will try it again and report results
>>>
>>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <ot...@gmail.com>
>>> wrote:
>>>
>>>> You have to be seeing data in the indexing topic, you have errors in
>>>> the indexing topology that reads from it.
>>>>
>>>>
>>>>
>>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>>> mscs16059@itu.edu.pk) wrote:
>>>>
>>>> So you are saying:
>>>>
>>>> * when you do the kafka client on the enrichment topic things are in
>>>> json
>>>> * when you do the kafka client on the indexing topic they are csv
>>>>
>>>> 1- Yes, kafka client on enrichment shows json
>>>>
>>>> 2- No, I dont see anything in kafka client on indexing topic
>>>>
>>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <ot...@gmail.com>
>>>> wrote:
>>>>
>>>>> So you are saying:
>>>>>
>>>>> * when you do the kafka client on the enrichment topic things are in
>>>>> json
>>>>> * when you do the kafka client on the indexing topic they are csv
>>>>>
>>>>> ???
>>>>>
>>>>>
>>>>>
>>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>
>>>>> From one of your earlier messages, This is what I have figured out so
>>>>> far.
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> The issue is indicated by the red-marked portion of the flow.
>>>>>
>>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> Which .java file is causing the issue in this hdfsindexbolt? I mean
>>>>>> which one should I look at because there are so many listed here.
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser This one is parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it to JSON?
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> No, I am not seeing it under indexing topic as JSON. I can only
>>>>>>>>> see JSON objects of stub sensor logs but not from those pushed by me via
>>>>>>>>> kafka producer.
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <
>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Please use kafka-console-consumer.sh (same folder as the producer
>>>>>>>>>> script) and pull from the indexing topic.  Are you seeing it in JSON there?
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> Kindly give me the mechanism implemented in metron through which
>>>>>>>>>>> a line such as this
>>>>>>>>>>>
>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>
>>>>>>>>>>> is converted into a json object. Maybe what I am missing here is the formatting.
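[Editorial note: the conversion asked about above can be sketched as zipping the comma-separated values with a list of column names. The field names below are assumptions for illustration only; the authoritative column list lives in Metron's BasicSnortParser.]

```python
import csv
import io
import json

# Illustrative column names only -- a guess at the first ten snort CSV
# fields, not necessarily BasicSnortParser's exact schema.
FIELDS = ["timestamp", "sig_generator", "sig_id", "sig_rev", "msg",
          "protocol", "src", "srcport", "dst", "dstport"]

def snort_line_to_json(line):
    """Parse one comma-separated snort alert line into a JSON object."""
    values = next(csv.reader(io.StringIO(line)))  # handles quoted fields
    return json.dumps(dict(zip(FIELDS, (v.strip() for v in values))))

line = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",'
        'TCP,192.168.66.1,49581,192.168.66.121,22')
doc = json.loads(snort_line_to_json(line))
```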
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Restarted snort, still giving me an error for the indexing topologies
>>>>>>>>>>>> even though I haven't even pushed any data to the snort topic yet. I have
>>>>>>>>>>>> not run the kafka-producer command but it's still giving an error for
>>>>>>>>>>>> something.
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are calling
>>>>>>>>>>>>>>>>>>>>> the script with?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> The metron installation I have (single node based vm
>>>>>>>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already been
>>>>>>>>>>>>>>>>>>>>> done for those stub sensors to push the canned data. I am doing a similar
>>>>>>>>>>>>>>>>>>>>> thing, directly pushing the preformatted canned data to the kafka topic. I can
>>>>>>>>>>>>>>>>>>>>> see the logs in the kibana dashboard when I start the stub sensor from monit, but
>>>>>>>>>>>>>>>>>>>>> when I push the same logs myself, those errors pop up, as I have shown
>>>>>>>>>>>>>>>>>>>>> earlier.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and
>>>>>>>>>>>>>>>>>>>>>> what's the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list
>>>>>>>>>>>>>>>>>>>>>>> node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka
>>>>>>>>>>>>>>>>>>>>>>>> topic from the parser or from some other source?  It looks like there are
>>>>>>>>>>>>>>>>>>>>>>>> some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology
>>>>>>>>>>>>>>>>>>>>>>>>> error, here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer
>>>>>>>>>>>>>>>>>>>>>>>>>> seeing the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those
>>>>>>>>>>>>>>>>>>>>>>>>>> errors, what's the full stacktrace (that starts with the suggestion you
>>>>>>>>>>>>>>>>>>>>>>>>>> file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is bleeding
>>>>>>>>>>>>>>>>>>>>>>>>>> from the individual writer into the writer component (It should be handled
>>>>>>>>>>>>>>>>>>>>>>>>>> in the writer itself).  The fact that it's happening for both HDFS and ES
>>>>>>>>>>>>>>>>>>>>>>>>>> is telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka
>>>>>>>>>>>>>>>>>>>>>>>>>>> producer .... and now the error at storm parser topology is gone but I am
>>>>>>>>>>>>>>>>>>>>>>>>>>> now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>> test
>>>>>>>>>>>>>>>>>>>>>>>>>>>> alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com
>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> run cat snort.out | kafka-console-producer.sh ... to make sure there are no
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> modify the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
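[Editorial note: Otto's point above can be illustrated with Python's strptime equivalent of the Java pattern; a sketch only, showing how a dd/MM/yy log line misbehaves under the MM/dd/yy default.]

```python
from datetime import datetime

# Python strptime equivalent of the parser's default Java pattern
# "MM/dd/yy-HH:mm:ss.SSSSSS".
FMT = "%m/%d/%y-%H:%M:%S.%f"

def parses(ts):
    """Return the parsed datetime, or None if ts does not match FMT."""
    try:
        return datetime.strptime(ts.strip(), FMT)
    except ValueError:
        return None

# "01/11/17" reads as January 11 under MM/dd/yy; a log that meant
# November 1 (dd/MM/yy) silently parses to the wrong month whenever
# the day is <= 12, and fails outright when it is not.
ok = parses("01/11/17-20:49:18.107168 ")
bad = parses("13/11/17-20:49:18.107168")
```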
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ‘timestamp’ field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> stub canned data file? Maybe I could see its formatting and try following
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that's how I am pushing my logs to the kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> few lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> coming from snort through some setup ( getting pushed to kafka ), which I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> think of as live.  I also think you have manually pushed messages, where
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors for things that are automatically pushed to kafka as you do when you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push them into the kafka topic, then no, I don't see any error at that time. If
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 'live' means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> same errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> result in Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to see logs in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is an error I am seeing in the Storm UI
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parser bolt in the snort section:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> able to get the snort logs in kibana dashboard. Any help will be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that is related to snort. Could it be the logs I was pushing to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --

Jon

Re: Snort Logs

Posted by Otto Fowler <ot...@gmail.com>.
OK,
So the JSONFromPosition MessageGetStrategy is choking on the csv…

I don’t know if there is a position index or it is 0, but whatever it is,
it is getting a csv message, not a json one.
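[Editorial note: the failure Otto describes can be sketched in a few lines: a JSON-expecting consumer aborts on the first non-JSON record, so anything pushed to the indexing topic by hand needs to already be the JSON map the parser topology would have emitted. A minimal illustration, not Metron's actual MessageGetStrategy code:]

```python
import json

def looks_like_json_map(record):
    """True only if the record parses as a JSON object (map)."""
    try:
        return isinstance(json.loads(record), dict)
    except ValueError:  # JSONDecodeError is a subclass of ValueError
        return False

# A raw snort csv line versus the kind of map the parser emits.
csv_record = "01/11/17-20:49:18.107168 ,1,999158,0,TCP"
json_record = '{"protocol": "TCP", "is_alert": true}'
```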




On November 13, 2017 at 15:49:57, Otto Fowler (ottobackwards@gmail.com)
wrote:

I guess I am wrong.
But from looking at the output, it looks like this is error topic stuff
that is failing, doesn't it?



On November 13, 2017 at 15:06:20, Zeolla@GMail.com (zeolla@gmail.com) wrote:

Isn't sending indexing errors to the indexing topic intentional?  I may
need to refresh myself on the below conversation, but I recall it coming up
in conversation on the mailing lists in the past.

https://github.com/apache/metron/blob/master/metron-platform/metron-elasticsearch/src/main/config/elasticsearch.properties#L33
https://lists.apache.org/thread.html/01e4ed416bda8d1057f09f7717809d2802ae1de3035dc42f001d7bbe@%3Cdev.metron.apache.org%3E

Jon

On Mon, Nov 13, 2017 at 2:59 PM Otto Fowler <ot...@gmail.com> wrote:

> OK.
>
> I think you're sending errors to your indexing topic instead of the error
> topic.
> I think you posted your config before, but I don’t remember off the top of
> my head
> where the error topic is configured.
>
> If the error topic is the same as the indexing topic, and you ‘have
> errors’  I think you may see this.
>
>
>
> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (mscs16059@itu.edu.pk)
> wrote:
>
> Here we go. This is what I see when I do kafka client on indexing topic.
>
> [image: Inline image 1]
>
> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> ok, I will try it again and report results
>>
>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <ot...@gmail.com>
>> wrote:
>>
>>> You have to be seeing data in the indexing topic, you have errors in the
>>> indexing topology that reads from it.
>>>
>>>
>>>
>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>> mscs16059@itu.edu.pk) wrote:
>>>
>>> So you are saying:
>>>
>>> * when you do the kafka client on the enrichment topic things are in json
>>> * when you do the kafka client on the indexing topic they are csv
>>>
>>> 1- Yes, kafka client on enrichment shows json
>>>
>>> 2- No, I dont see anything in kafka client on indexing topic
>>>
>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <ot...@gmail.com>
>>> wrote:
>>>
>>>> So you are saying:
>>>>
>>>> * when you do the kafka client on the enrichment topic things are in
>>>> json
>>>> * when you do the kafka client on the indexing topic they are csv
>>>>
>>>> ???
>>>>
>>>>
>>>>
>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>> mscs16059@itu.edu.pk) wrote:
>>>>
>>>> From one of your earlier messages, This is what I have figured out so
>>>> far.
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> The issue is indicated by the red-marked portion of the flow.
>>>>
>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> Which .java file is causing the issue in this hdfsindexbolt? I mean
>>>>> which one should I look at because there are so many listed here.
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> org.apache.metron.parsers.snort.BasicSnortParser This one is parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.
>>>>>>
>>>>>>
>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it to JSON?
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> No, I am not seeing it under indexing topic as JSON. I can only see
>>>>>>>> JSON objects of stub sensor logs but not from those pushed by me via kafka
>>>>>>>> producer.
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <zeolla@gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Please use kafka-console-consumer.sh (same folder as the producer
>>>>>>>>> script) and pull from the indexing topic.  Are you seeing it in JSON there?
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> Kindly explain the mechanism implemented in Metron through which
>>>>>>>>>> a line such as this
>>>>>>>>>>
>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>
>>>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> Restarted snort, and it is still giving me errors for the indexing
>>>>>>>>>>> topologies even though I haven't pushed any data to the snort topic
>>>>>>>>>>> yet. I have not run the kafka-producer command, but it's still
>>>>>>>>>>> giving an error for something.
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> This is the script/command I used
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are calling
>>>>>>>>>>>>>>>>>>>> the script with?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> The metron installation I have (single-node VM based
>>>>>>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already been
>>>>>>>>>>>>>>>>>>>> done for those stub sensors to push the canned data. I am doing a similar
>>>>>>>>>>>>>>>>>>>> thing, directly pushing the preformatted canned data to the kafka topic. I can
>>>>>>>>>>>>>>>>>>>> see the logs in the kibana dashboard when I start the stub sensor from monit,
>>>>>>>>>>>>>>>>>>>> but when I push the same logs myself, those errors pop up that I have shown
>>>>>>>>>>>>>>>>>>>> earlier.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and what's
>>>>>>>>>>>>>>>>>>>>> the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list
>>>>>>>>>>>>>>>>>>>>>> node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka
>>>>>>>>>>>>>>>>>>>>>>> topic from the parser or from some other source?  It looks like there are
>>>>>>>>>>>>>>>>>>>>>>> some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> No, I am no longer seeing the parsing topology
>>>>>>>>>>>>>>>>>>>>>>>> error, here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer seeing
>>>>>>>>>>>>>>>>>>>>>>>>> the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those
>>>>>>>>>>>>>>>>>>>>>>>>> errors, what's the full stacktrace (that starts with the suggestion you
>>>>>>>>>>>>>>>>>>>>>>>>> file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is bleeding
>>>>>>>>>>>>>>>>>>>>>>>>> from the individual writer into the writer component (It should be handled
>>>>>>>>>>>>>>>>>>>>>>>>> in the writer itself).  The fact that it's happening for both HDFS and ES
>>>>>>>>>>>>>>>>>>>>>>>>> is telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka
>>>>>>>>>>>>>>>>>>>>>>>>>> producer .... and now the error at storm parser topology is gone but I am
>>>>>>>>>>>>>>>>>>>>>>>>>> now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort
>>>>>>>>>>>>>>>>>>>>>>>>>>> test
>>>>>>>>>>>>>>>>>>>>>>>>>>> alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com
>>>>>>>>>>>>>>>>>>>>>>>>>>> <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file and
>>>>>>>>>>>>>>>>>>>>>>>>>>>> run cat snort.out | kafka-console-producer.sh ... to make sure there are no
>>>>>>>>>>>>>>>>>>>>>>>>>>>> copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> modify the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> canned data file? Maybe I could see its formatting and try following the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> few lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> coming from snort through some setup ( getting pushed to kafka ), which I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> think of as live.  I also think you have manually pushed messages, where
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors for things that are automatically pushed to kafka as you do when you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push them into the kafka topic then no, I don't see any error at that time. If
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 'live' means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> result in Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see logs in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is an error I am seeing in the Storm UI
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parser bolt in snort section:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> able to get the snort logs in kibana dashboard. Any help will be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that is related to snort. Could it be the logs I was pushing to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hi, I am back at work. lets see if i
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can find something in logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> health of Red, so there's your problem.  I would go look in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --

Jon
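
Stepping back from the quoted thread for a moment: the conversion Syed asks about is done by Metron's BasicSnortParser, which splits each comma-separated snort alert line into fields, maps them to named keys, and converts the leading timestamp (default format MM/dd/yy-HH:mm:ss.SSSSSS) to epoch milliseconds before emitting a JSON map. A minimal sketch of that transformation, assuming an illustrative subset of the field names (the real parser's field list is longer), could look like this:

```python
import json
from datetime import datetime

# Illustrative subset of snort alert columns, in the order they appear
# in the sample line quoted above. The actual field list used by
# BasicSnortParser is longer; these names are assumptions.
FIELDS = ["timestamp", "sig_generator", "sig_id", "sig_rev", "msg",
          "protocol", "ip_src_addr", "ip_src_port",
          "ip_dst_addr", "ip_dst_port"]

def snort_csv_to_json(line):
    parts = [p.strip() for p in line.split(",")]
    record = dict(zip(FIELDS, parts))
    # Snort's default timestamp format: MM/dd/yy-HH:mm:ss.SSSSSS
    ts = datetime.strptime(record["timestamp"], "%m/%d/%y-%H:%M:%S.%f")
    record["timestamp"] = int(ts.timestamp() * 1000)  # epoch millis
    return json.dumps(record)

sample = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",'
          'TCP,192.168.66.1,49581,192.168.66.121,22')
print(snort_csv_to_json(sample))
```

A line whose date is actually in dd/MM/yy order would still parse here for days up to 12, just with month and day swapped, which is the kind of silent mismatch Otto suspected in the timestamp discussion above.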

Re: Snort Logs

Posted by Otto Fowler <ot...@gmail.com>.
I guess I am wrong.
But from looking at the output, it looks like it is the error-topic records
that are failing, doesn't it?



On November 13, 2017 at 15:06:20, Zeolla@GMail.com (zeolla@gmail.com) wrote:

Isn't sending indexing errors to the indexing topic intentional?  I may
need to refresh myself on the below conversation, but I recall it coming up
in conversation on the mailing lists in the past.

https://github.com/apache/metron/blob/master/metron-platform/metron-elasticsearch/src/main/config/elasticsearch.properties#L33
https://lists.apache.org/thread.html/01e4ed416bda8d1057f09f7717809d2802ae1de3035dc42f001d7bbe@%3Cdev.metron.apache.org%3E

Jon

On Mon, Nov 13, 2017 at 2:59 PM Otto Fowler <ot...@gmail.com> wrote:

> OK.
>
> I think you're sending errors to your indexing topic instead of the error
> topic.
> I think you posted your config before, but I don’t remember off the top of
> my head
> where the error topic is configured.
>
> If the error topic is the same as the indexing topic, and you ‘have
> errors’, I think you may see this.
>
>
>
> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (mscs16059@itu.edu.pk)
> wrote:
>
> Here we go. This is what I see when I do kafka client on indexing topic.
>
> [image: Inline image 1]
>
> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> ok, I will try it again and report results
>>
>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <ot...@gmail.com>
>> wrote:
>>
>>> You have to be seeing data in the indexing topic, you have errors in the
>>> indexing topology that reads from it.
>>>
>>>
>>>
>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>> mscs16059@itu.edu.pk) wrote:
>>>
>>> So you are saying:
>>>
>>> * when you do the kafka client on the enrichment topic things are in json
>>> * when you do the kafka client on the indexing topic they are csv
>>>
>>> 1- Yes, kafka client on enrichment shows json
>>>
>>> 2- No, I dont see anything in kafka client on indexing topic
>>>
>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <ot...@gmail.com>
>>> wrote:
>>>
>>>> So you are saying:
>>>>
>>>> * when you do the kafka client on the enrichment topic things are in
>>>> json
>>>> * when you do the kafka client on the indexing topic they are csv
>>>>
>>>> ???
>>>>
>>>>
>>>>
>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>> mscs16059@itu.edu.pk) wrote:
>>>>
>>>> From one of your earlier messages, This is what I have figured out so
>>>> far.
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> The issue is inducated by red marked portion of the flow.
>>>>
>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> Which .java file is causing the issue in this hdfsindexbolt. I mean
>>>>> which one should I look at because there are so many listed here.
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> org.apache.metron.parsers.snort.BasicSnortParser This one is parsing it correctly since I am getting error in the indexing bolt not in the parser one.
>>>>>>
>>>>>>
>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it in JSON?
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> No, I am not seeing it under indexing topic as JSON. I can only see
>>>>>>>> JSON objects of stub sensor logs but not from those pushed by me via kafka
>>>>>>>> producer.
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <zeolla@gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Please use kafka-console-consumer.sh (same folder as the producer
>>>>>>>>> script) and pull from the indexing topic.  Are you seeing it in JSON there?
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> Kindly give me the mechanism implemented in metron through which
>>>>>>>>>> a line such as this
>>>>>>>>>>
>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>
>>>>>>>>>> is converted into a json object. Maybe I am missing something here is the formatting.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> Restarted snort, still giving me error for indexing topologies
>>>>>>>>>>> even though I havent even pushed out any data to snort topic yet. I have
>>>>>>>>>>> not run the kafka-producer command but its still giving error for
>>>>>>>>>>> something.
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> hi, This problem still persists guys .
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are calling
>>>>>>>>>>>>>>>>>>>> the script with?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> The metron installation I have (single node based vm
>>>>>>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already been
>>>>>>>>>>>>>>>>>>>> done for those stub sensors to push the canned data. I am doing the similar
>>>>>>>>>>>>>>>>>>>> thing, directly pushing the preformatted canned data to kafka topic. I can
>>>>>>>>>>>>>>>>>>>> see the logs in kibana dashboard when I start stub sensor from monit but
>>>>>>>>>>>>>>>>>>>> then I push the same logs myself, those errors pop that I have shown
>>>>>>>>>>>>>>>>>>>> earlier.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and what's
>>>>>>>>>>>>>>>>>>>>> the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list
>>>>>>>>>>>>>>>>>>>>>> node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka
>>>>>>>>>>>>>>>>>>>>>>> topic from the parser or from some other source?  It looks like there are
>>>>>>>>>>>>>>>>>>>>>>> some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology
>>>>>>>>>>>>>>>>>>>>>>>> error, here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer seeing
>>>>>>>>>>>>>>>>>>>>>>>>> the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those
>>>>>>>>>>>>>>>>>>>>>>>>> errors, what's the full stacktrace (that starts with the suggestion you
>>>>>>>>>>>>>>>>>>>>>>>>> file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is bleeding
>>>>>>>>>>>>>>>>>>>>>>>>> from the individual writer into the writer component (It should be handled
>>>>>>>>>>>>>>>>>>>>>>>>> in the writer itself).  The fact that it's happening for both HDFS and ES
>>>>>>>>>>>>>>>>>>>>>>>>> is telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka
>>>>>>>>>>>>>>>>>>>>>>>>>> producer .... and now the error at storm parser topology is gone but I am
>>>>>>>>>>>>>>>>>>>>>>>>>> now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort
>>>>>>>>>>>>>>>>>>>>>>>>>>> test
>>>>>>>>>>>>>>>>>>>>>>>>>>> alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com
>>>>>>>>>>>>>>>>>>>>>>>>>>> <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file and
>>>>>>>>>>>>>>>>>>>>>>>>>>>> run cat snort.out | kafka-console-producer.sh ... to make sure there are no
>>>>>>>>>>>>>>>>>>>>>>>>>>>> copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> modify the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
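[Editor's note: Otto's point about the date pattern can be checked outside Metron. Below is a minimal `java.time` sketch; the class name and the dd/MM/yy sample line are illustrative, but the pattern string is the one from the parser snippet quoted above.]

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class SnortTimestampCheck {
    public static void main(String[] args) {
        // Same pattern as the parser's defaultDateFormat
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("MM/dd/yy-HH:mm:ss.SSSSSS");

        // A record in the expected MM/dd/yy order parses cleanly
        LocalDateTime ok = LocalDateTime.parse("01/11/17-20:49:18.107168", fmt);
        System.out.println(ok); // 2017-01-11T20:49:18.107168

        // A dd/MM/yy record fails, because 25 is not a valid month value
        try {
            LocalDateTime.parse("25/11/17-20:49:18.107168", fmt);
        } catch (DateTimeParseException e) {
            System.out.println("parse failed: " + e.getMessage());
        }
    }
}
```

If the second parse throws for your real records, the timestamp column order is the problem, not Kafka.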
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> canned data file? Maybe I could see its formatting and try following the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that's how I am pushing my logs to the kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> few lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> coming from snort through some setup ( getting pushed to kafka ), which I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> think of as live.  I also think you have manually pushed messages, where
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors for things that are automatically pushed to kafka as you do when you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push them into kafka topic then no, I dont see any error at that time. If
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 'live' means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> result in Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see logs in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is an error I am seeing in the Storm UI
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parser bolt in the snort section:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> able to get the snort logs in kibana dashboard. Any help will be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that is related to snort. Could it be the logs I was pushing to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hi, I am back at work. lets see if i
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can find something in logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> health of Red, so there's your problem.  I would go look in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --

Jon

Re: Snort Logs

Posted by "Zeolla@GMail.com" <ze...@gmail.com>.
Isn't sending indexing errors to the indexing topic intentional?  I may
need to refresh myself on the below conversation, but I recall it coming up
in conversation on the mailing lists in the past.

https://github.com/apache/metron/blob/master/metron-platform/metron-elasticsearch/src/main/config/elasticsearch.properties#L33
https://lists.apache.org/thread.html/01e4ed416bda8d1057f09f7717809d2802ae1de3035dc42f001d7bbe@%3Cdev.metron.apache.org%3E

Jon

On Mon, Nov 13, 2017 at 2:59 PM Otto Fowler <ot...@gmail.com> wrote:

> OK.
>
> I think you're sending errors to your indexing topic instead of the error
> topic.
> I think you posted your config before, but I don’t remember off the top of
> my head
> where the error topic is configured.
>
> If the error topic is the same as the indexing topic, and you ‘have
> errors’  I think you may see this.
>
>
>
> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (mscs16059@itu.edu.pk)
> wrote:
>
> Here we go. This is what I see when I do kafka client on indexing topic.
>
> [image: Inline image 1]
>
> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> ok, I will try it again and report results
>>
>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <ot...@gmail.com>
>> wrote:
>>
>>> You have to be seeing data in the indexing topic; you have errors in the
>>> indexing topology that reads from it.
>>>
>>>
>>>
>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>> mscs16059@itu.edu.pk) wrote:
>>>
>>> So you are saying:
>>>
>>> * when you do the kafka client on the enrichment topic things are in json
>>> * when you do the kafka client on the indexing topic they are csv
>>>
>>> 1- Yes, kafka client on enrichment shows json
>>>
>>> 2- No, I dont see anything in kafka client on indexing topic
>>>
>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <ot...@gmail.com>
>>> wrote:
>>>
>>>> So you are saying:
>>>>
>>>> * when you do the kafka client on the enrichment topic things are in
>>>> json
>>>> * when you do the kafka client on the indexing topic they are csv
>>>>
>>>> ???
>>>>
>>>>
>>>>
>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>> mscs16059@itu.edu.pk) wrote:
>>>>
>>>> From one of your earlier messages, this is what I have figured out so
>>>> far.
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> The issue is indicated by the red-marked portion of the flow.
>>>>
>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> Which .java file is causing the issue in this hdfsindexbolt. I mean
>>>>> which one should I look at because there are so many listed here.
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> org.apache.metron.parsers.snort.BasicSnortParser: this one is parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.
>>>>>>
>>>>>>
>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser: does this parse the basic message and then convert it to JSON?
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> No, I am not seeing it under indexing topic as JSON. I can only see
>>>>>>>> JSON objects of stub sensor logs but not from those pushed by me via kafka
>>>>>>>> producer.
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <zeolla@gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Please use kafka-console-consumer.sh (same folder as the producer
>>>>>>>>> script) and pull from the indexing topic.  Are you seeing it in JSON there?
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> Kindly give me the mechanism implemented in metron through which
>>>>>>>>>> a line such as this
>>>>>>>>>>
>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>
>>>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
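[Editor's note: the mechanism asked about above is essentially CSV splitting: the snort line is split on commas and the columns are zipped with an ordered list of field names to build the JSON map. Below is a simplified sketch; the field names are an illustrative subset of the parser's configured list, and the naive `split` ignores CSV quoting, so it only works while the msg column contains no commas.]

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SnortCsvSketch {
    // Illustrative subset of the ordered column names; the real parser
    // carries the full list (timestamp, sig_generator, sig_id, sig_rev,
    // msg, protocol, ip_src_addr, ip_src_port, ip_dst_addr, ip_dst_port, ...)
    static final String[] FIELDS = {
        "timestamp", "sig_generator", "sig_id", "sig_rev", "msg",
        "protocol", "ip_src_addr", "ip_src_port", "ip_dst_addr", "ip_dst_port"
    };

    public static Map<String, String> parse(String line) {
        String[] cols = line.split(",", -1); // -1 keeps trailing empty columns
        Map<String, String> json = new LinkedHashMap<>();
        for (int i = 0; i < FIELDS.length && i < cols.length; i++) {
            json.put(FIELDS[i], cols[i].trim());
        }
        return json;
    }

    public static void main(String[] args) {
        String line = "01/11/17-20:49:18.107168 ,1,999158,0,\"'snort test alert'\","
                + "TCP,192.168.66.1,49581,192.168.66.121,22";
        System.out.println(parse(line).get("ip_dst_addr")); // 192.168.66.121
    }
}
```

A line that does not split into the expected columns (or whose first column fails the date parse) is what ends up as an error record downstream.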
>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> Restarted snort; it's still giving me errors for the indexing
>>>>>>>>>>> topologies even though I haven't even pushed any data to the snort
>>>>>>>>>>> topic yet. I have not run the kafka-producer command but it's still
>>>>>>>>>>> giving an error for something.
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are calling
>>>>>>>>>>>>>>>>>>>> the script with?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> The metron installation I have (single node based vm
>>>>>>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already been
>>>>>>>>>>>>>>>>>>>> done for those stub sensors to push the canned data. I am doing the similar
>>>>>>>>>>>>>>>>>>>> thing, directly pushing the preformatted canned data to kafka topic. I can
>>>>>>>>>>>>>>>>>>>> see the logs in the kibana dashboard when I start the stub sensor from monit,
>>>>>>>>>>>>>>>>>>>> but when I push the same logs myself, those errors pop up that I have shown
>>>>>>>>>>>>>>>>>>>> earlier.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and what's
>>>>>>>>>>>>>>>>>>>>> the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list
>>>>>>>>>>>>>>>>>>>>>> node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka
>>>>>>>>>>>>>>>>>>>>>>> topic from the parser or from some other source?  It looks like there are
>>>>>>>>>>>>>>>>>>>>>>> some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology
>>>>>>>>>>>>>>>>>>>>>>>> error, here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer seeing
>>>>>>>>>>>>>>>>>>>>>>>>> the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those
>>>>>>>>>>>>>>>>>>>>>>>>> errors, what's the full stacktrace (that starts with the suggestion you
>>>>>>>>>>>>>>>>>>>>>>>>> file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is bleeding
>>>>>>>>>>>>>>>>>>>>>>>>> from the individual writer into the writer component (It should be handled
>>>>>>>>>>>>>>>>>>>>>>>>> in the writer itself).  The fact that it's happening for both HDFS and ES
>>>>>>>>>>>>>>>>>>>>>>>>> is telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka
>>>>>>>>>>>>>>>>>>>>>>>>>> producer .... and now the error at storm parser topology is gone but I am
>>>>>>>>>>>>>>>>>>>>>>>>>> now seeing this at the indexing toology
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort
>>>>>>>>>>>>>>>>>>>>>>>>>>> test
>>>>>>>>>>>>>>>>>>>>>>>>>>> alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com
>>>>>>>>>>>>>>>>>>>>>>>>>>> <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file and
>>>>>>>>>>>>>>>>>>>>>>>>>>>> run cat snort.out | kafka-console-producer.sh ... to make sure there are no
>>>>>>>>>>>>>>>>>>>>>>>>>>>> copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> modify the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> canned data file? Maybe I could see its formatting and try following the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> few lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> coming from snort through some setup ( getting pushed to kafka ), which I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> think of as live.  I also think you have manually pushed messages, where
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors for things that are automatically pushed to kafka as you do when you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push them into kafka topic then no, I dont see any error at that time. If
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 'live' means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> result in Kibana not seeing things.
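That chain is easiest to debug by tapping each Kafka topic in turn. A minimal sketch (not from the thread) assuming the default Metron topic names and the HDP script path used elsewhere in this discussion:

```shell
# Consume a few messages from each topic in the chain to find where data
# stops flowing. Topic names are the Metron defaults; the script path matches
# the HDP install used in this thread.
for topic in snort enrichments indexing; do
  echo "=== $topic ==="
  /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
    --bootstrap-server node1:6667 --topic "$topic" \
    --from-beginning --max-messages 3
done
```

If `enrichments` has data but `indexing` is empty, the failure is in the enrichment topology. Note that on the older Kafka shipped with some HDP releases the consumer takes `--zookeeper node1:2181` instead of `--bootstrap-server`.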
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see logs in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is an error I am seeing in the Storm UI
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parser bolt in the snort section:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> able to get the snort logs in kibana dashboard. Any help will be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that is related to snort. Could it be the logs I was pushing to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hi, I am back at work. lets see if i
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can find something in logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> health of Red, so there's your problem.  I would go look in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --

Jon

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Hi guys, after successfully pushing that canned data to Metron, I now need
to push this pcap data (captured by snort in pcap mode) to Metron.

These are just the packet headers.

[image: Inline image 1]

Any guidelines I should follow here?
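One way to replay captured alerts is to restamp a canned CSV line with the current time and push it, mirroring the sensor-stub start script quoted further down. This is only a sketch: the broker (`node1:6667`), topic (`snort`), and `snort.out` file are the names used in this thread, and it assumes `snort.out` holds full CSV alert lines, not bare packet headers.

```shell
# Sketch: rewrite the first (timestamp) field of a canned snort CSV alert to
# "now", matching the parser's MM/dd/yy-HH:mm:ss.SSSSSS format, then push it
# to the snort topic. Broker, topic, and file names follow the thread.
ts="$(date +'%m/%d/%y-%H:%M:%S').000000"
tail -n 1 snort.out \
  | sed -e "s|^[^,]* ,|$ts ,|" \
  | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
      --broker-list node1:6667 --topic snort
```

Bare pcap packet headers will not parse; snort has to be rerun with an output mode that emits this CSV alert format before its lines can be pushed this way.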

On Fri, Nov 17, 2017 at 8:14 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> sure. I will keep the community updated
>
> On Fri, Nov 17, 2017 at 6:16 PM, Zeolla@GMail.com <ze...@gmail.com>
> wrote:
>
>> Congratulations!  Let us know how the rest of your work goes.
>>
>> Jon
>>
>> On Fri, Nov 17, 2017, 08:04 Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> Finally !!!!!
>>>
>>> [image: Inline image 1]
>>>
>>> And that is preformatted data. I am yet to send actual snort logs.
>>> Hopefully that will be easier now
>>>
>>> On Fri, Nov 17, 2017 at 12:08 PM, Syed Hammad Tahir <
>>> mscs16059@itu.edu.pk> wrote:
>>>
>>>> Help guys.
>>>>
>>>> On Thu, Nov 16, 2017 at 10:07 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> Any solution to this enrichments error issue?
>>>>>
>>>>> On Thu, Nov 16, 2017 at 6:08 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> No , on this ubuntu PC I am not.
>>>>>>
>>>>>> On Thu, Nov 16, 2017 at 6:06 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Are you behind a proxy?
>>>>>>>
>>>>>>> On Thu, Nov 16, 2017, 08:04 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Ok, Now I have started everything again from scratch (redeployed
>>>>>>>> single node based ambari metron cluster with ansibleSkipTags = 'quick-dev')
>>>>>>>> and now when I execute this command:
>>>>>>>>
>>>>>>>> shuf -n 10 snort.out | sed -e "s/[^,]\+ ,/`date
>>>>>>>> +'%m\/%d\/%y-%H:%M:%S'`.000000 ,/g" | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>
>>>>>>>> (format of this command was taken from: https://github.com/apache/metron/blob/master/metron-deployment/roles/sensor-stubs/templates/start-snort-stub)
>>>>>>>>
>>>>>>>> I get this under enrichment storm topology :
>>>>>>>>
>>>>>>>>
>>>>>>>> [image: Inline image 1]
>>>>>>>>
>>>>>>>> [image: Inline image 2]
>>>>>>>>
>>>>>>>> I have come this far, please help me push these dummy preformatted
>>>>>>>> snort logs into kibana dashboard.
>>>>>>>>
>>>>>>>> Regards.
>>>>>>>>
>>>>>>>> On Tue, Nov 14, 2017 at 1:19 PM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> Now I can't even see the sensor stub logs if I start the snort service
>>>>>>>>> from monit. How can I flush kafka of everything that was sent earlier?
>>>>>>>>> What's going wrong with the original sensor stub data?
>>>>>>>>>
>>>>>>>>> On Tue, Nov 14, 2017 at 11:05 AM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> Here is what I am doing:
>>>>>>>>>>
>>>>>>>>>> Running this command: sudo tail -n 1 snort.out  |
>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>
>>>>>>>>>> sends this message to this topic: 01/11/17-21:32:37.925044 ,1,999158,0,"'snort test alert'",TCP,192.168.138.158,49207,95.163.121.204,80,00:00:00:00:00:00,00:00:00:00:00:00,0x3C,***A****,0xC0313398,0xD1FE0623,,0xFAF0,128,0,2561,40,40960,,,,
>>>>>>>>>>
>>>>>>>>>> And I can see it under kafka client on enrichments topic:
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>> Time stamp can be matched against the sent message.
>>>>>>>>>>
>>>>>>>>>> The issue is that I can't see the message under the kafka client on the
>>>>>>>>>> indexing topic, and hence in the kibana dashboard.
>>>>>>>>>>
>>>>>>>>>> On Tue, Nov 14, 2017 at 10:56 AM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> Ran this sudo head -n 1 snort.out  |
>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>
>>>>>>>>>>> and this as well, sudo tail -n 1 snort.out  |
>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>
>>>>>>>>>>> and same issue again. The Storm indexing topology keeps on giving
>>>>>>>>>>> errors on previously failed messages.
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Nov 14, 2017 at 10:40 AM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> if by that command you mean cat snort.out | kafka-producer ....
>>>>>>>>>>>> then I have been doing it but with snort.out full of all the material
>>>>>>>>>>>> copied from github raw snort.out link.
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Nov 14, 2017 at 6:31 AM, Otto Fowler <
>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> You should literally run the command I put in.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On November 13, 2017 at 20:23:09, Syed Hammad Tahir (
>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> nope, that doesnt work when I just copy/paste one line in
>>>>>>>>>>>>> kafka producer. Havent tried putting one line in snort.out and then pushing
>>>>>>>>>>>>> it.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Tue, Nov 14, 2017 at 6:09 AM, Otto Fowler <
>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> No,
>>>>>>>>>>>>>> When you push the data to kafka just push 1 line and see if
>>>>>>>>>>>>>> it works.
>>>>>>>>>>>>>> Nothing to do with configs.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On November 13, 2017 at 20:06:45, Syed Hammad Tahir (
>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ok, that could be a reason, but from where do I check this?
>>>>>>>>>>>>>> From here, or somewhere else:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Tue, Nov 14, 2017 at 12:59 AM, Otto Fowler <
>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> OK.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I think you're sending errors to your indexing topic instead
>>>>>>>>>>>>>>> of the error topic.
>>>>>>>>>>>>>>> I think you posted your config before, but I don’t remember
>>>>>>>>>>>>>>> off the top of my head
>>>>>>>>>>>>>>> where the error topic is configured.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> If the error topic is the same as the indexing topic, and
>>>>>>>>>>>>>>> you ‘have errors’  I think you may see this.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Here we go. This is what I see when I do kafka client on
>>>>>>>>>>>>>>> indexing topic.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ok, I will try it again and report results
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <
>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> You have to be seeing data in the indexing topic, you have
>>>>>>>>>>>>>>>>> errors in the indexing topology that reads from it.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> So you are saying:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> * when you do the kafka client on the enrichment topic
>>>>>>>>>>>>>>>>> things are in json
>>>>>>>>>>>>>>>>> * when you do the kafka client on the indexing topic they
>>>>>>>>>>>>>>>>> are csv
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 1- Yes, kafka client on enrichment shows json
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2- No, I dont see anything in kafka client on indexing
>>>>>>>>>>>>>>>>> topic
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <
>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> So you are saying:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> * when you do the kafka client on the enrichment topic
>>>>>>>>>>>>>>>>>> things are in json
>>>>>>>>>>>>>>>>>> * when you do the kafka client on the indexing topic they
>>>>>>>>>>>>>>>>>> are csv
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> ???
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> From one of your earlier messages, This is what I have
>>>>>>>>>>>>>>>>>> figured out so far.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> The issue is indicated by the red-marked portion of the flow.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Which .java file is causing the issue in this
>>>>>>>>>>>>>>>>>>> hdfsindexbolt. I mean which one should I look at because there are so many
>>>>>>>>>>>>>>>>>>> listed here.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser This one is parsing it correctly since I am getting error in the indexing bolt not in the parser one.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it in JSON?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> No, I am not seeing it under indexing topic as JSON.
>>>>>>>>>>>>>>>>>>>>>> I can only see JSON objects of stub sensor logs but not from those pushed
>>>>>>>>>>>>>>>>>>>>>> by me via kafka producer.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Please use kafka-console-consumer.sh (same folder as
>>>>>>>>>>>>>>>>>>>>>>> the producer script) and pull from the indexing topic.  Are you seeing it
>>>>>>>>>>>>>>>>>>>>>>> in JSON there?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Kindly give me the mechanism implemented in metron
>>>>>>>>>>>>>>>>>>>>>>>> through which a line such as this
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Restarted snort, still giving me error for
>>>>>>>>>>>>>>>>>>>>>>>>> indexing topologies even though I havent even pushed out any data to snort
>>>>>>>>>>>>>>>>>>>>>>>>> topic yet. I have not run the kafka-producer command but its still giving
>>>>>>>>>>>>>>>>>>>>>>>>> error for something.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com
>>>>>>>>>>>>>>>>>>>>>>>>>> <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> hi, This problem still persists guys .
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> are calling the script with?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The metron installation I have (single
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> node based vm install) comes with sensor stubs. I assume that everything
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> has already been done for those stub sensors to push the canned data. I am
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> doing the similar thing, directly pushing the preformatted canned data to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic. I can see the logs in kibana dashboard when I start stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sensor from monit but then I push the same logs myself, those errors pop
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that I have shown earlier.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Stella <ce...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topology and what's the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Stella <ce...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "indexing" kafka topic from the parser or from some other source?  It looks
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like there are some records in kafka that are not JSON.  By the time it
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> gets to the indexing kafka topic, it should be a JSON map.  The parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topology emits that JSON map and then the enrichments topology enrich that
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> map and emits the enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topology error, here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> stack trace.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> longer seeing the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Casey Stella (cestella@gmail.com)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> there in those errors, what's the full stacktrace (that starts with the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> suggestion you file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is bleeding from the individual writer into the writer component (It should
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be handled in the writer itself).  The fact that it's happening for both
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> HDFS and ES is telling as well and I'm very interested in the full
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> stacktrace there because it'll have the wrapped exception from the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> individual writer included.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort.out | kafka producer .... and now the error at storm parser topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is gone but I am now seeing this at the indexing toology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,192.168.66.121,22,0A:00:27:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 00:00:00,08:00:27:E8:B0:7A,0x5
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> A,***AP***,0x1E396BFC,0x56900B
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> B6,,0x1000,64,10,23403,76,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <zeolla@gmail.com
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> > wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort.out file and run cat snort.out | kafka-console-producer.sh ... to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> make sure there are no copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> support dates in this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>  format, then you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> will need to modify the default log timestamp format for snort in the short
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the ‘timestamp’ field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that th
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>
>> Jon
>>
>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
sure. I will keep the community updated

On Fri, Nov 17, 2017 at 6:16 PM, Zeolla@GMail.com <ze...@gmail.com> wrote:

> Congratulations!  Let us know how the rest of your work goes.
>
> Jon
>
> On Fri, Nov 17, 2017, 08:04 Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> Finally !!!!!
>>
>> [image: Inline image 1]
>>
>> And that is preformatted data. I am yet to send actual snort logs.
>> Hopefully that will be easier now
>>
>> On Fri, Nov 17, 2017 at 12:08 PM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>> > wrote:
>>
>>> Help guys.
>>>
>>> On Thu, Nov 16, 2017 at 10:07 PM, Syed Hammad Tahir <
>>> mscs16059@itu.edu.pk> wrote:
>>>
>>>> Any solution to this enrichments error issue?
>>>>
>>>> On Thu, Nov 16, 2017 at 6:08 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> No , on this ubuntu PC I am not.
>>>>>
>>>>> On Thu, Nov 16, 2017 at 6:06 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Are you behind a proxy?
>>>>>>
>>>>>> On Thu, Nov 16, 2017, 08:04 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>> wrote:
>>>>>>
>>>>>>> Ok, Now I have started everything again from scratch (redeployed
>>>>>>> single node based ambari metron cluster with ansibleSkipTags = 'quick-dev')
>>>>>>> and now when I execute this command:
>>>>>>>
>>>>>>> shuf -n 10 snort.out | sed -e "s/[^,]\+ ,/`date
>>>>>>> +'%m\/%d\/%y-%H:%M:%S'`.000000 ,/g" | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>
>>>>>>> (format of this command was taken from: https://github.com/apache/metron/blob/master/metron-deployment/roles/sensor-stubs/templates/start-snort-stub)
>>>>>>>
>>>>>>> I get this under enrichment storm topology :
>>>>>>>
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> [image: Inline image 2]
>>>>>>>
>>>>>>> I have come this far, please help me push these dummy preformatted
>>>>>>> snort logs into kibana dashboard.
>>>>>>>
>>>>>>> Regards.
>>>>>>>
>>>>>>> On Tue, Nov 14, 2017 at 1:19 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> Now I can't even see the sensor stub logs if I start the snort service
>>>>>>>> from monit. How can I flush kafka of everything that was sent earlier?
>>>>>>>> What's going wrong with the original sensor stub data?
>>>>>>>>
>>>>>>>> On Tue, Nov 14, 2017 at 11:05 AM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> Here is what I am doing:
>>>>>>>>>
>>>>>>>>> Running this command: sudo tail -n 1 snort.out  |
>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>
>>>>>>>>> sends this message to this topic: 01/11/17-21:32:37.925044
>>>>>>>>> ,1,999158,0,"'snort test alert'",TCP,192.168.138.158,
>>>>>>>>> 49207,95.163.121.204,80,00:00:00:00:00:00,00:00:00:00:00:00,
>>>>>>>>> 0x3C,***A****,0xC0313398,0xD1FE0623,,0xFAF0,128,0,2561,
>>>>>>>>> 40,40960,,,,
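[Editor's note] The CSV alert line above is what the parser topology turns into a JSON map before it reaches the enrichments topic. A minimal sketch of that conversion follows; the field names are assumptions for illustration only, and the authoritative list is in Metron's BasicSnortParser:

```python
import csv, io, json

# Hypothetical field names for illustration only; the authoritative list
# lives in org.apache.metron.parsers.snort.BasicSnortParser.
FIELDS = ["timestamp", "sig_generator", "sig_id", "sig_rev", "msg",
          "protocol", "ip_src_addr", "ip_src_port", "ip_dst_addr",
          "ip_dst_port", "ethsrc", "ethdst", "ethlen", "tcpflags",
          "tcpseq", "tcpack", "icmptype", "tcpwindow", "ttl", "tos",
          "id", "dgmlen", "iplen", "icmpcode", "icmpid", "icmpseq"]

def parse_alert(line):
    """Split one CSV-formatted snort alert into a JSON-style map,
    roughly what the parser topology emits to the enrichments topic."""
    values = next(csv.reader(io.StringIO(line.strip())))
    return {k: v.strip() for k, v in zip(FIELDS, values) if v.strip()}

alert = ('01/11/17-21:32:37.925044 ,1,999158,0,"\'snort test alert\'",TCP,'
         '192.168.138.158,49207,95.163.121.204,80,00:00:00:00:00:00,'
         '00:00:00:00:00:00,0x3C,***A****,0xC0313398,0xD1FE0623,,0xFAF0,'
         '128,0,2561,40,40960,,,,')
print(json.dumps(parse_alert(alert), indent=2))
```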
>>>>>>>>>
>>>>>>>>> And I can see it under kafka client on enrichments topic:
>>>>>>>>>
>>>>>>>>> [image: Inline image 1]
>>>>>>>>>
>>>>>>>>> Time stamp can be matched against the sent message.
>>>>>>>>>
>>>>>>>>> The issue is that I can't see the message under the kafka client on
>>>>>>>>> the indexing topic, and hence in the kibana dashboard.
>>>>>>>>>
>>>>>>>>> On Tue, Nov 14, 2017 at 10:56 AM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> Ran this sudo head -n 1 snort.out  |
>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>
>>>>>>>>>> and this as well, sudo tail -n 1 snort.out  |
>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>
>>>>>>>>>> and same issue again. Storm indexing topology keeps on giving
>>>>>>>>>> error on previously failed messages.
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>> On Tue, Nov 14, 2017 at 10:40 AM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> if by that command you mean cat snort.out | kafka-producer ....
>>>>>>>>>>> then I have been doing it but with snort.out full of all the material
>>>>>>>>>>> copied from github raw snort.out link.
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Nov 14, 2017 at 6:31 AM, Otto Fowler <
>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> You should literally run the command I put in.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On November 13, 2017 at 20:23:09, Syed Hammad Tahir (
>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> nope, that doesn't work when I just copy/paste one line in the kafka
>>>>>>>>>>>> producer. Haven't tried putting one line in snort.out and then pushing it.
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Nov 14, 2017 at 6:09 AM, Otto Fowler <
>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> No,
>>>>>>>>>>>>> When you push the data to kafka just push 1 line and see if it
>>>>>>>>>>>>> works.
>>>>>>>>>>>>> Nothing to do with configs.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On November 13, 2017 at 20:06:45, Syed Hammad Tahir (
>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ok, that could be a reason but from where do I check this? From
>>>>>>>>>>>>> here or somewhere else,
>>>>>>>>>>>>>
>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Tue, Nov 14, 2017 at 12:59 AM, Otto Fowler <
>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> OK.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I think you're sending errors to your indexing topic instead of
>>>>>>>>>>>>>> the error topic.
>>>>>>>>>>>>>> I think you posted your config before, but I don’t remember
>>>>>>>>>>>>>> off the top of my head
>>>>>>>>>>>>>> where the error topic is configured.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> If the error topic is the same as the indexing topic, and you
>>>>>>>>>>>>>> ‘have errors’  I think you may see this.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (
>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Here we go. This is what I see when I do kafka client on
>>>>>>>>>>>>>> indexing topic.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> ok, I will try it again and report results
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <
>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> You have to be seeing data in the indexing topic, you have
>>>>>>>>>>>>>>>> errors in the indexing topology that reads from it.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> So you are saying:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> * when you do the kafka client on the enrichment topic
>>>>>>>>>>>>>>>> things are in json
>>>>>>>>>>>>>>>> * when you do the kafka client on the indexing topic they
>>>>>>>>>>>>>>>> are csv
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> 1- Yes, kafka client on enrichment shows json
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> 2- No, I don't see anything in the kafka client on the indexing topic
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <
>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> So you are saying:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> * when you do the kafka client on the enrichment topic
>>>>>>>>>>>>>>>>> things are in json
>>>>>>>>>>>>>>>>> * when you do the kafka client on the indexing topic they
>>>>>>>>>>>>>>>>> are csv
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> ???
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> From one of your earlier messages, This is what I have
>>>>>>>>>>>>>>>>> figured out so far.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> The issue is indicated by the red-marked portion of the flow.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Which .java file is causing the issue in this
>>>>>>>>>>>>>>>>>> hdfsindexbolt. I mean which one should I look at because there are so many
>>>>>>>>>>>>>>>>>> listed here.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser This one is parsing it correctly since I am getting error in the indexing bolt not in the parser one.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it in JSON?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> No, I am not seeing it under the indexing topic as JSON. I
>>>>>>>>>>>>>>>>>>>>> can only see JSON objects of the stub sensor logs, but not
>>>>>>>>>>>>>>>>>>>>> those pushed by me via the kafka producer.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Please use kafka-console-consumer.sh (same folder as
>>>>>>>>>>>>>>>>>>>>>> the producer script) and pull from the indexing topic.  Are you seeing it
>>>>>>>>>>>>>>>>>>>>>> in JSON there?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Kindly give me the mechanism implemented in metron
>>>>>>>>>>>>>>>>>>>>>>> through which a line such as this
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Restarted snort; it is still giving me errors for the
>>>>>>>>>>>>>>>>>>>>>>>> indexing topologies even though I haven't pushed any data
>>>>>>>>>>>>>>>>>>>>>>>> to the snort topic yet. I have not run the kafka-producer
>>>>>>>>>>>>>>>>>>>>>>>> command, but it's still giving an error for something.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com
>>>>>>>>>>>>>>>>>>>>>>>>> <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> hi, This problem still persists guys .
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> are calling the script with?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The metron installation I have (single
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> node based vm install) comes with sensor stubs. I assume that everything
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> has already been done for those stub sensors to push the canned data. I am
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> doing the similar thing, directly pushing the preformatted canned data to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic. I can see the logs in kibana dashboard when I start stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sensor from monit but then I push the same logs myself, those errors pop
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that I have shown earlier.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Stella <ce...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topology and what's the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Stella <ce...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "indexing" kafka topic from the parser or from some other source?  It looks
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like there are some records in kafka that are not JSON.  By the time it
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> gets to the indexing kafka topic, it should be a JSON map.  The parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topology emits that JSON map and then the enrichments topology enrich that
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> map and emits the enriched map to the indexing topic.
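[Editor's note] The flow described here — the parser topology emits a JSON map, the enrichments topology enriches it, and the indexing topology expects JSON — can be sketched as follows. The function names and fields are purely illustrative, not Metron code:

```python
import json

def parse(raw_csv_line):
    # Parser topology: turn the raw CSV alert into a JSON map
    # (illustrative fields only, not the real BasicSnortParser output).
    ts, rest = raw_csv_line.split(" ,", 1)
    return {"timestamp": ts, "raw": rest, "source.type": "snort"}

def enrich(message):
    # Enrichment topology: add enrichment fields to the map and pass
    # it along otherwise unchanged.
    enriched = dict(message)
    enriched["enrichment.done"] = True
    return enriched

def index(message):
    # Indexing topology: expects a JSON map; a non-map record on the
    # indexing topic is exactly the kind of input that errors out here.
    if not isinstance(message, dict):
        raise ValueError("non-JSON record on the indexing topic")
    return json.dumps(message)

doc = index(enrich(parse("01/11/17-20:49:18.107168 ,1,999158,0,...")))
print(doc)
```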
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topology error, here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> stack trace.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> longer seeing the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Casey Stella (cestella@gmail.com)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in those errors, what's the full stacktrace (that starts with the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> suggestion you file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is bleeding from the individual writer into the writer component (It should
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be handled in the writer itself).  The fact that it's happening for both
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> HDFS and ES is telling as well and I'm very interested in the full
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> stacktrace there because it'll have the wrapped exception from the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> individual writer included.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort.out | kafka producer .... and now the error at storm parser topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is gone but I am now seeing this at the indexing toology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 49581,192.168.66.121,22,0A:00:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 27:00:00:00,08:00:27:E8:B0:7A,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 0x5A,***AP***,0x1E396BFC,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 0x56900BB6,,0x1000,64,10,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort.out file and run cat snort.out | kafka-console-producer.sh ... to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> make sure there are no copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> support dates in this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>  format, then you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> will need to modify the default log timestamp format for snort in the short
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> term.
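[Editor's note] Under that Java format string, a quick way to check whether a given timestamp would parse is to use the strptime equivalent of MM/dd/yy-HH:mm:ss.SSSSSS. This is a sketch of the format check only, not the parser's actual code path:

```python
from datetime import datetime

# strptime equivalent of Java's "MM/dd/yy-HH:mm:ss.SSSSSS"
SNORT_FMT = "%m/%d/%y-%H:%M:%S.%f"

def parses(ts):
    """True if the timestamp matches the format the snort parser expects."""
    try:
        datetime.strptime(ts, SNORT_FMT)
        return True
    except ValueError:
        return False

print(parses("01/11/17-20:49:18.107168"))  # month-first, as expected
print(parses("25/11/17-20:49:18.107168"))  # day-first: 25 is not a valid month
```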
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the ‘timestamp’ field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that th
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>
> Jon
>

Re: Snort Logs

Posted by "Zeolla@GMail.com" <ze...@gmail.com>.
Congratulations!  Let us know how the rest of your work goes.

Jon

On Fri, Nov 17, 2017, 08:04 Syed Hammad Tahir <ms...@itu.edu.pk> wrote:

> Finally !!!!!
>
> [image: Inline image 1]
>
> And that is preformatted data. I am yet to send actual snort logs.
> Hopefully that will be easier now
>
> On Fri, Nov 17, 2017 at 12:08 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> Help guys.
>>
>> On Thu, Nov 16, 2017 at 10:07 PM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>> > wrote:
>>
>>> Any solution to this enrichments error issue?
>>>
>>> On Thu, Nov 16, 2017 at 6:08 PM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>> > wrote:
>>>
>>>> No , on this ubuntu PC I am not.
>>>>
>>>> On Thu, Nov 16, 2017 at 6:06 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>> wrote:
>>>>
>>>>> Are you behind a proxy?
>>>>>
>>>>> On Thu, Nov 16, 2017, 08:04 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>> wrote:
>>>>>
>>>>>> Ok, Now I have started everything again from scratch (redeployed
>>>>>> single node based ambari metron cluster with ansibleSkipTags = 'quick-dev')
>>>>>> and now when I execute this command:
>>>>>>
>>>>>> shuf -n 10 snort.out | sed -e "s/[^,]\+ ,/`date
>>>>>> +'%m\/%d\/%y-%H:%M:%S'`.000000 ,/g" |
>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list
>>>>>> node1:6667 --topic snort
>>>>>>
>>>>>> (format of ths command was taken from:
>>>>>> https://github.com/apache/metron/blob/master/metron-deployment/roles/sensor-stubs/templates/start-snort-stub
>>>>>> )
>>>>>>
>>>>>> I get this under enrichment storm topology :
>>>>>>
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>> [image: Inline image 2]
>>>>>>
>>>>>> I have come this far, please help me push these dummy preformatted
>>>>>> snort logs into kibana dashboard.
>>>>>>
>>>>>> Regards.
>>>>>>
>>>>>> On Tue, Nov 14, 2017 at 1:19 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> Now I cant even see the sensor stub logs if I start snort service
>>>>>>> from monit. How can I flush kafka of everything that was sent earlier.
>>>>>>> Whats going wrong with original sensor stub data?
>>>>>>>
>>>>>>> On Tue, Nov 14, 2017 at 11:05 AM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> Here is what I am doing:
>>>>>>>>
>>>>>>>> Running this command: sudo tail -n 1 snort.out  |
>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list
>>>>>>>> node1:6667 --topic snort
>>>>>>>>
>>>>>>>> sends this message to this topic: 01/11/17-21:32:37.925044
>>>>>>>> ,1,999158,0,"'snort test
>>>>>>>> alert'",TCP,192.168.138.158,49207,95.163.121.204,80,00:00:00:00:00:00,00:00:00:00:00:00,0x3C,***A****,0xC0313398,0xD1FE0623,,0xFAF0,128,0,2561,40,40960,,,,
>>>>>>>>
>>>>>>>> And I can see it under kafka client on enrichments topic:
>>>>>>>>
>>>>>>>> [image: Inline image 1]
>>>>>>>>
>>>>>>>> Time stamp can be matched against the sent message.
>>>>>>>>
>>>>>>>> The issue is that I cant see the message under kafka client on
>>>>>>>> indexing topic and hence in kibana dashboard.
>>>>>>>>
>>>>>>>> On Tue, Nov 14, 2017 at 10:56 AM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> Ran this sudo head -n 1 snort.out  |
>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>
>>>>>>>>> and this as well, sudo tail -n 1 snort.out  |
>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>
>>>>>>>>> and same issue again. Storm indexing topology keeps on giving
>>>>>>>>> error on previusly failed messages.
>>>>>>>>>
>>>>>>>>> [image: Inline image 1]
>>>>>>>>>
>>>>>>>>> On Tue, Nov 14, 2017 at 10:40 AM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> if by that command you mean cat snort.out | kafka-producer ....
>>>>>>>>>> then I have been doing it but with snort.out full of all the material
>>>>>>>>>> copied from github raw snort.out link.
>>>>>>>>>>
>>>>>>>>>> On Tue, Nov 14, 2017 at 6:31 AM, Otto Fowler <
>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> You should literally run the command I put in.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On November 13, 2017 at 20:23:09, Syed Hammad Tahir (
>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>
>>>>>>>>>>> nope, that doesnt work when I just copy/paste one line in kafka
>>>>>>>>>>> producer. Havent tried putting one line in snort.out and then pushing it.
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Nov 14, 2017 at 6:09 AM, Otto Fowler <
>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> No,
>>>>>>>>>>>> When you push the data to kafka just push 1 line and see if it
>>>>>>>>>>>> works.
>>>>>>>>>>>> Nothing to do with configs.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On November 13, 2017 at 20:06:45, Syed Hammad Tahir (
>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Ok, that cold be a reason but from where do I check this? From
>>>>>>>>>>>> here or somewhere else,
>>>>>>>>>>>>
>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Nov 14, 2017 at 12:59 AM, Otto Fowler <
>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> OK.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I think your sending errors to your indexing topic instead of
>>>>>>>>>>>>> the error topic.
>>>>>>>>>>>>> I think you posted your config before, but I don’t remember
>>>>>>>>>>>>> off the top of my head
>>>>>>>>>>>>> where the error topic is configured.
>>>>>>>>>>>>>
>>>>>>>>>>>>> If the error topic is the same as the indexing topic, and you
>>>>>>>>>>>>> ‘have errors’  I think you may see this.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (
>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Here we go. This is what I see when I do kafka client on
>>>>>>>>>>>>> indexing topic.
>>>>>>>>>>>>>
>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> ok, I will try it again and report results
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <
>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> You have to be seeing data in the indexing topic, you have
>>>>>>>>>>>>>>> errors in the indexing topology that reads from it.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> So you are saying:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> * when you do the kafka client on the enrichment topic
>>>>>>>>>>>>>>> things are in json
>>>>>>>>>>>>>>> * when you do the kafka client on the indexing topic they
>>>>>>>>>>>>>>> are csv
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 1- Yes, kafka client on enrichment shows json
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 2- No, I dont see anything in kafka client on indexing topic
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <
>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> So you are saying:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> * when you do the kafka client on the enrichment topic
>>>>>>>>>>>>>>>> things are in json
>>>>>>>>>>>>>>>> * when you do the kafka client on the indexing topic they
>>>>>>>>>>>>>>>> are csv
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ???
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> From one of your earlier messages, This is what I have
>>>>>>>>>>>>>>>> figured out so far.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> The issue is inducated by red marked portion of the flow.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Which .java file is causing the issue in this
>>>>>>>>>>>>>>>>> hdfsindexbolt. I mean which one should I look at because there are so many
>>>>>>>>>>>>>>>>> listed here.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser This one is parsing it correctly since I am getting error in the indexing bolt not in the parser one.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it in JSON?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> No, I am not seeing it under indexing topic as JSON. I
>>>>>>>>>>>>>>>>>>>> can only see JSON objects of stub sensor logs but not from those pushed by
>>>>>>>>>>>>>>>>>>>> me via kafka producer.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Please use kafka-console-consumer.sh (same folder as
>>>>>>>>>>>>>>>>>>>>> the producer script) and pull from the indexing topic.  Are you seeing it
>>>>>>>>>>>>>>>>>>>>> in JSON there?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Kindly give me the mechanism implemented in metron
>>>>>>>>>>>>>>>>>>>>>> through which a line such as this
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
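[Editor's sketch] The mechanism asked about above amounts to splitting the CSV alert line and zipping it against the parser's column names. This is a hedged illustration, not Metron's actual BasicSnortParser code; the field names below are assumptions (the real parser defines its own column list and also normalizes the timestamp):

```python
import csv
import io
import json

# Illustrative subset of snort alert columns; these names are an
# assumption, not the exact keys BasicSnortParser emits.
FIELDS = ["timestamp", "sig_generator", "sig_id", "sig_rev", "msg",
          "protocol", "ip_src_addr", "ip_src_port", "ip_dst_addr", "ip_dst_port"]

def snort_csv_to_json(line):
    """Map one snort CSV alert line to a dict of field -> value."""
    values = next(csv.reader(io.StringIO(line)))
    return dict(zip(FIELDS, (v.strip() for v in values)))

line = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP,'
        '192.168.66.1,49581,192.168.66.121,22')
print(json.dumps(snort_csv_to_json(line)))
```

The real parser does more (timestamp normalization, adding source.type), but this is the shape of the CSV-to-JSON step.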
>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Restarted snort, still giving me error for indexing
>>>>>>>>>>>>>>>>>>>>>>> topologies even though I havent even pushed out any data to snort topic
>>>>>>>>>>>>>>>>>>>>>>> yet. I have not run the kafka-producer command but its still giving error
>>>>>>>>>>>>>>>>>>>>>>> for something.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> hi, This problem still persists guys .
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> are calling the script with?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The metron installation I have (single node
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> based vm install) comes with sensor stubs. I assume that everything has
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> already been done for those stub sensors to push the canned data. I am
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> doing the similar thing, directly pushing the preformatted canned data to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic. I can see the logs in kibana dashboard when I start stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sensor from monit but then I push the same logs myself, those errors pop
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that I have shown earlier.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Stella <ce...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topology and what's the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Stella <ce...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "indexing" kafka topic from the parser or from some other source?  It looks
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like there are some records in kafka that are not JSON.  By the time it
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> gets to the indexing kafka topic, it should be a JSON map.  The parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topology emits that JSON map and then the enrichments topology enrich that
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> map and emits the enriched map to the indexing topic.
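[Editor's sketch] Casey's point above — by the indexing topic every record must be a JSON map — can be checked mechanically. A minimal sketch (not Metron code) of the test a consumer could apply to records pulled from the topic:

```python
import json

def is_json_map(record):
    """True if the record parses as a JSON object (map), which is what
    the indexing topology expects; a raw CSV snort line fails this."""
    try:
        return isinstance(json.loads(record), dict)
    except ValueError:
        return False

print(is_json_map('{"protocol": "TCP", "source.type": "snort"}'))       # True
print(is_json_map('01/11/17-20:49:18.107168 ,1,999158,0,"alert",TCP'))  # False
```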
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topology error, here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> stack trace.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> longer seeing the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Stella (cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in those errors, what's the full stacktrace (that starts with the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> suggestion you file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is bleeding from the individual writer into the writer component (It should
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be handled in the writer itself).  The fact that it's happening for both
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> HDFS and ES is telling as well and I'm very interested in the full
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> stacktrace there because it'll have the wrapped exception from the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> individual writer included.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort.out | kafka producer .... and now the error at storm parser topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is gone but I am now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,1,999158,0,"'snort test
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort.out file and run cat snort.out | kafka-console-producer.sh ... to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> make sure there are no copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> support dates in this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>  format, then you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> need to modify the default log timestamp format for snort in the short term.
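[Editor's sketch] Translated from the Java pattern above, the same timestamp format in Python strptime terms is roughly %m/%d/%y-%H:%M:%S.%f, which makes it easy to sanity-check a record's timestamp before blaming the parser (a sketch under that assumption; the real parser uses a Java DateTimeFormatter):

```python
from datetime import datetime

# Mirrors the parser's default "MM/dd/yy-HH:mm:ss.SSSSSS" pattern.
SNORT_TS_FORMAT = "%m/%d/%y-%H:%M:%S.%f"

def timestamp_ok(ts):
    """True if ts matches the snort parser's expected timestamp format."""
    try:
        datetime.strptime(ts.strip(), SNORT_TS_FORMAT)
        return True
    except ValueError:
        return False

print(timestamp_ok("01/11/17-20:49:18.107168 "))  # True
print(timestamp_ok("2017-11-01 20:49:18"))        # False
```

Note that a date like 01/11 is ambiguous: it parses cleanly either way, so a dd/MM log would only fail on days greater than 12.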
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ‘timestamp’ field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that th
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --

Jon

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Finally !!!!!

[image: Inline image 1]

And that is preformatted data. I am yet to send actual snort logs.
Hopefully that will be easier now

On Fri, Nov 17, 2017 at 12:08 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> Help guys.
>
> On Thu, Nov 16, 2017 at 10:07 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> Any solution to this enrichments error issue?
>>
>> On Thu, Nov 16, 2017 at 6:08 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> No, on this ubuntu PC I am not.
>>>
>>> On Thu, Nov 16, 2017 at 6:06 PM, Zeolla@GMail.com <ze...@gmail.com>
>>> wrote:
>>>
>>>> Are you behind a proxy?
>>>>
>>>> On Thu, Nov 16, 2017, 08:04 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>> wrote:
>>>>
>>>>> Ok, Now I have started everything again from scratch (redeployed
>>>>> single node based ambari metron cluster with ansibleSkipTags = 'quick-dev')
>>>>> and now when I execute this command:
>>>>>
>>>>> shuf -n 10 snort.out | sed -e "s/[^,]\+ ,/`date
>>>>> +'%m\/%d\/%y-%H:%M:%S'`.000000 ,/g" | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>> --broker-list node1:6667 --topic snort
>>>>>
>>>>> (format of this command was taken from: https://github.com/apache/metron/blob/master/metron-deployment/roles/sensor-stubs/templates/start-snort-stub)
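[Editor's sketch] The sed step in that command rewrites the leading timestamp field of each sampled line to the current time, so the parser sees fresh timestamps. The same substitution can be sketched in Python (assuming, as in the stub script, that each snort.out line begins with a timestamp followed by " ,"):

```python
import re
from datetime import datetime

def refresh_timestamp(line, now=None):
    """Replace the leading snort timestamp with the current time,
    mirroring the sed s/[^,]\\+ ,/<now>.000000 ,/ substitution."""
    now = now or datetime.now()
    stamp = now.strftime("%m/%d/%y-%H:%M:%S") + ".000000"
    return re.sub(r"^[^,]+ ,", stamp + " ,", line, count=1)

line = "01/11/17-20:49:18.107168 ,1,999158,0,\"'snort test alert'\",TCP"
print(refresh_timestamp(line))
```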
>>>>>
>>>>> I get this under enrichment storm topology :
>>>>>
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> [image: Inline image 2]
>>>>>
>>>>> I have come this far, please help me push these dummy preformatted
>>>>> snort logs into kibana dashboard.
>>>>>
>>>>> Regards.
>>>>>
>>>>> On Tue, Nov 14, 2017 at 1:19 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> Now I can't even see the sensor stub logs if I start the snort service
>>>>>> from monit. How can I flush kafka of everything that was sent earlier?
>>>>>> What's going wrong with the original sensor stub data?
>>>>>>
>>>>>> On Tue, Nov 14, 2017 at 11:05 AM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> Here is what I am doing:
>>>>>>>
>>>>>>> Running this command: sudo tail -n 1 snort.out  |
>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>
>>>>>>> sends this message to this topic: 01/11/17-21:32:37.925044
>>>>>>> ,1,999158,0,"'snort test alert'",TCP,192.168.138.158,49
>>>>>>> 207,95.163.121.204,80,00:00:00:00:00:00,00:00:00:00:00:00,0x
>>>>>>> 3C,***A****,0xC0313398,0xD1FE0623,,0xFAF0,128,0,2561,40,40960,,,,
>>>>>>>
>>>>>>> And I can see it under kafka client on enrichments topic:
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> Time stamp can be matched against the sent message.
>>>>>>>
>>>>>>> The issue is that I can't see the message under the kafka client on the
>>>>>>> indexing topic, and hence in the kibana dashboard.
>>>>>>>
>>>>>>> On Tue, Nov 14, 2017 at 10:56 AM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> Ran this sudo head -n 1 snort.out  | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>
>>>>>>>> and this as well, sudo tail -n 1 snort.out  |
>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>
>>>>>>>> and same issue again. The storm indexing topology keeps giving errors
>>>>>>>> on previously failed messages.
>>>>>>>>
>>>>>>>> [image: Inline image 1]
>>>>>>>>
>>>>>>>> On Tue, Nov 14, 2017 at 10:40 AM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> if by that command you mean cat snort.out | kafka-producer ....
>>>>>>>>> then I have been doing it but with snort.out full of all the material
>>>>>>>>> copied from github raw snort.out link.
>>>>>>>>>
>>>>>>>>> On Tue, Nov 14, 2017 at 6:31 AM, Otto Fowler <
>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> You should literally run the command I put in.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On November 13, 2017 at 20:23:09, Syed Hammad Tahir (
>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>
>>>>>>>>>> Nope, that doesn't work when I just copy/paste one line in the kafka
>>>>>>>>>> producer. Haven't tried putting one line in snort.out and then pushing it.
>>>>>>>>>>
>>>>>>>>>> On Tue, Nov 14, 2017 at 6:09 AM, Otto Fowler <
>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> No,
>>>>>>>>>>> When you push the data to kafka just push 1 line and see if it
>>>>>>>>>>> works.
>>>>>>>>>>> Nothing to do with configs.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On November 13, 2017 at 20:06:45, Syed Hammad Tahir (
>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>
>>>>>>>>>>> Ok, that could be a reason, but from where do I check this? From
>>>>>>>>>>> here or somewhere else:
>>>>>>>>>>>
>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Nov 14, 2017 at 12:59 AM, Otto Fowler <
>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> OK.
>>>>>>>>>>>>
>>>>>>>>>>>> I think you're sending errors to your indexing topic instead of
>>>>>>>>>>>> the error topic.
>>>>>>>>>>>> I think you posted your config before, but I don’t remember off
>>>>>>>>>>>> the top of my head
>>>>>>>>>>>> where the error topic is configured.
>>>>>>>>>>>>
>>>>>>>>>>>> If the error topic is the same as the indexing topic, and you
>>>>>>>>>>>> ‘have errors’  I think you may see this.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (
>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Here we go. This is what I see when I do kafka client on
>>>>>>>>>>>> indexing topic.
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> ok, I will try it again and report results
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <
>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> You have to be seeing data in the indexing topic, you have
>>>>>>>>>>>>>> errors in the indexing topology that reads from it.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So you are saying:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> * when you do the kafka client on the enrichment topic things
>>>>>>>>>>>>>> are in json
>>>>>>>>>>>>>> * when you do the kafka client on the indexing topic they are
>>>>>>>>>>>>>> csv
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 1- Yes, kafka client on enrichment shows json
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 2- No, I dont see anything in kafka client on indexing topic
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> handled in the writer itself).  The fact that it's happening for both HDFS
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and ES is telling as well and I'm very interested in the full stacktrace
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> there because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort.out | kafka producer .... and now the error at storm parser topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is gone, but I am now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,192.168.66.121,22,0A:00:27:00
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ***AP***,0x1E396BFC,0x56900BB6
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort.out file and run cat snort.out | kafka-console-producer.sh ... to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> make sure there are no copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> support dates in this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>  format, then you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> need to modify the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ‘timestamp’ field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is the format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort stub canned data file? Maybe I could see its formatting and try
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> following the same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> paste a few lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/metro
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> n-deployment/roles/sensor-stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> s/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can also see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to push the logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> messages coming from snort through some setup ( getting pushed to kafka ),
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> which I think of as live.  I also think you have manually pushed messages,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> where you see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the same errors for things that are automatically pushed to kafka as you do
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> when you manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parsed then that would be a problem.  If you see this error with your
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ‘live’ messages as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> date format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>
>>>> Jon
>>>>
>>>
>>>
>>
>
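[Editor's note: the default snort timestamp pattern quoted in the thread above, MM/dd/yy-HH:mm:ss.SSSSSS, can be checked offline before pushing logs. This is a minimal standalone sketch, not Metron code; it only mirrors the Java pattern with Python's strptime, and the helper name is made up for illustration.]

```python
from datetime import datetime

# Python equivalent of the parser's default Java pattern
# "MM/dd/yy-HH:mm:ss.SSSSSS"; %f covers the six fractional digits.
SNORT_TS_FORMAT = "%m/%d/%y-%H:%M:%S.%f"

def timestamp_matches(ts: str) -> bool:
    """Return True if ts parses under the expected snort layout."""
    try:
        datetime.strptime(ts, SNORT_TS_FORMAT)
        return True
    except ValueError:
        return False

# The sample record's timestamp from the thread parses cleanly:
print(timestamp_matches("01/11/17-20:49:18.107168"))  # True
# A day-first record with day > 12 lands in the month slot and fails:
print(timestamp_matches("25/11/17-20:49:18.107168"))  # False
```

Note that a value like 01/11/17 is ambiguous between MM/dd and dd/MM, which is exactly why the thread asks to verify the timestamp field's format; only a day greater than 12 makes the two orders distinguishable.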

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Help guys.

On Thu, Nov 16, 2017 at 10:07 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> Any solution to this enrichments error issue?
>
> On Thu, Nov 16, 2017 at 6:08 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> No , on this ubuntu PC I am not.
>>
>> On Thu, Nov 16, 2017 at 6:06 PM, Zeolla@GMail.com <ze...@gmail.com>
>> wrote:
>>
>>> Are you behind a proxy?
>>>
>>> On Thu, Nov 16, 2017, 08:04 Syed Hammad Tahir <ms...@itu.edu.pk>
>>> wrote:
>>>
>>>> OK, now I have started everything again from scratch (redeployed a
>>>> single-node Ambari Metron cluster with ansibleSkipTags = 'quick-dev'),
>>>> and now when I execute this command:
>>>>
>>>> shuf -n 10 snort.out | sed -e "s/[^,]\+ ,/`date
>>>> +'%m\/%d\/%y-%H:%M:%S'`.000000 ,/g" | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>> --broker-list node1:6667 --topic snort
>>>>
>>>> (the format of this command was taken from: https://github.com/apach
>>>> e/metron/blob/master/metron-deployment/roles/sensor-stubs/te
>>>> mplates/start-snort-stub)
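[Editor's note: a rough Python equivalent of the shuf/sed pipeline above, for readers who prefer the transformation spelled out. It assumes the snort.out layout from the sample file (comma-separated, timestamp first, with a space before the first comma); the function name is made up for illustration.]

```python
import random
from datetime import datetime

def refresh_timestamps(lines, sample=10):
    """Like `shuf -n 10 | sed ...`: pick random lines and replace the
    leading timestamp field with the current time so the records look
    fresh to the parser. Only the first CSV field is rewritten."""
    now = datetime.now().strftime("%m/%d/%y-%H:%M:%S") + ".000000"
    picked = random.sample(lines, min(sample, len(lines)))
    return [now + " ," + line.split(",", 1)[1] for line in picked]

record = ("01/11/17-20:49:18.107168 ,1,999158,0,\"'snort test alert'\",TCP,"
          "192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,"
          "08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,"
          "0x1000,64,10,23403,76,77824,,,,")
print(refresh_timestamps([record])[0])
```

Either way, only the timestamp field changes; the rest of each record is passed through untouched.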
>>>>
>>>> I get this under the enrichment Storm topology:
>>>>
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> [image: Inline image 2]
>>>>
>>>> I have come this far; please help me push these dummy, preformatted
>>>> snort logs into the Kibana dashboard.
>>>>
>>>> Regards.
>>>>
>>>> On Tue, Nov 14, 2017 at 1:19 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> Now I can't even see the sensor stub logs if I start the snort service
>>>>> from monit. How can I flush Kafka of everything that was sent earlier?
>>>>> What is going wrong with the original sensor stub data?
>>>>>
>>>>> On Tue, Nov 14, 2017 at 11:05 AM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> Here is what I am doing:
>>>>>>
>>>>>> Running this command: sudo tail -n 1 snort.out  |
>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>> --broker-list node1:6667 --topic snort
>>>>>>
>>>>>> sends this message to this topic: 01/11/17-21:32:37.925044
>>>>>> ,1,999158,0,"'snort test alert'",TCP,192.168.138.158,49
>>>>>> 207,95.163.121.204,80,00:00:00:00:00:00,00:00:00:00:00:00,0x
>>>>>> 3C,***A****,0xC0313398,0xD1FE0623,,0xFAF0,128,0,2561,40,40960,,,,
>>>>>>
>>>>>> And I can see it under kafka client on enrichments topic:
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>> Time stamp can be matched against the sent message.
>>>>>>
>>>>>> The issue is that I can't see the message under the kafka client on the
>>>>>> indexing topic, and hence not in the Kibana dashboard.
>>>>>>
>>>>>> On Tue, Nov 14, 2017 at 10:56 AM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> Ran this sudo head -n 1 snort.out  | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>
>>>>>>> and this as well, sudo tail -n 1 snort.out  |
>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>
>>>>>>> and the same issue again. The Storm indexing topology keeps giving
>>>>>>> errors on previously failed messages.
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> On Tue, Nov 14, 2017 at 10:40 AM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> if by that command you mean cat snort.out | kafka-producer ....
>>>>>>>> then I have been doing it, but with snort.out full of all the material
>>>>>>>> copied from the GitHub raw snort.out link.
>>>>>>>>
>>>>>>>> On Tue, Nov 14, 2017 at 6:31 AM, Otto Fowler <
>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> You should literally run the command I put in.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On November 13, 2017 at 20:23:09, Syed Hammad Tahir (
>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>
>>>>>>>>> nope, that doesnt work when I just copy/paste one line in kafka
>>>>>>>>> producer. Havent tried putting one line in snort.out and then pushing it.
>>>>>>>>>
>>>>>>>>> On Tue, Nov 14, 2017 at 6:09 AM, Otto Fowler <
>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> No,
>>>>>>>>>> When you push the data to kafka just push 1 line and see if it
>>>>>>>>>> works.
>>>>>>>>>> Nothing to do with configs.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On November 13, 2017 at 20:06:45, Syed Hammad Tahir (
>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>
>>>>>>>>>> OK, that could be a reason, but where do I check this? From
>>>>>>>>>> here or somewhere else?
>>>>>>>>>>
>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Tue, Nov 14, 2017 at 12:59 AM, Otto Fowler <
>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> OK.
>>>>>>>>>>>
>>>>>>>>>>> I think you're sending errors to your indexing topic instead of
>>>>>>>>>>> the error topic.
>>>>>>>>>>> I think you posted your config before, but I don’t remember off
>>>>>>>>>>> the top of my head
>>>>>>>>>>> where the error topic is configured.
>>>>>>>>>>>
>>>>>>>>>>> If the error topic is the same as the indexing topic, and you
>>>>>>>>>>> ‘have errors’  I think you may see this.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (
>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>
>>>>>>>>>>> Here we go. This is what I see when I do kafka client on
>>>>>>>>>>> indexing topic.
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> ok, I will try it again and report results
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <
>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> You have to be seeing data in the indexing topic, you have
>>>>>>>>>>>>> errors in the indexing topology that reads from it.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> So you are saying:
>>>>>>>>>>>>>
>>>>>>>>>>>>> * when you do the kafka client on the enrichment topic things
>>>>>>>>>>>>> are in json
>>>>>>>>>>>>> * when you do the kafka client on the indexing topic they are
>>>>>>>>>>>>> csv
>>>>>>>>>>>>>
>>>>>>>>>>>>> 1- Yes, kafka client on enrichment shows json
>>>>>>>>>>>>>
>>>>>>>>>>>>> 2- No, I dont see anything in kafka client on indexing topic
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <
>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> So you are saying:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> * when you do the kafka client on the enrichment topic things
>>>>>>>>>>>>>> are in json
>>>>>>>>>>>>>> * when you do the kafka client on the indexing topic they are
>>>>>>>>>>>>>> csv
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> ???
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> From one of your earlier messages, This is what I have
>>>>>>>>>>>>>> figured out so far.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The issue is indicated by the red-marked portion of the flow.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Which .java file is causing the issue in this hdfsindexingbolt?
>>>>>>>>>>>>>>> I mean, which one should I look at? There are so many listed here.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser is parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Does org.apache.metron.parsers.snort.BasicSnortParser parse the basic message and then convert it to JSON?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> No, I am not seeing it under the indexing topic as JSON. I
>>>>>>>>>>>>>>>>>> can only see JSON objects of the stub sensor logs, but not those
>>>>>>>>>>>>>>>>>> pushed by me via the kafka producer.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Please use kafka-console-consumer.sh (same folder as the
>>>>>>>>>>>>>>>>>>> producer script) and pull from the indexing topic.  Are you seeing it in
>>>>>>>>>>>>>>>>>>> JSON there?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Kindly give me the mechanism implemented in metron
>>>>>>>>>>>>>>>>>>>> through which a line such as this
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
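[Editor's note: conceptually, the parser maps positional CSV values onto named fields and emits the result as a JSON map. The sketch below illustrates the idea only; the column names are assumptions based on snort's CSV alert output, not the exact keys BasicSnortParser emits, and the naive split does not handle commas inside quoted fields.]

```python
import json

# Assumed column order for snort CSV alerts; illustrative only,
# not the real BasicSnortParser field names.
FIELDS = [
    "timestamp", "sig_generator", "sig_id", "sig_rev", "msg", "protocol",
    "ip_src_addr", "ip_src_port", "ip_dst_addr", "ip_dst_port",
    "ethsrc", "ethdst", "ethlen", "tcpflags", "tcpseq", "tcpack",
]

def snort_line_to_json(line: str) -> str:
    """Zip CSV values against the assumed column names; extra trailing
    columns are simply dropped by zip()."""
    values = [v.strip() for v in line.split(",")]
    return json.dumps(dict(zip(FIELDS, values)))

line = ("01/11/17-20:49:18.107168 ,1,999158,0,\"'snort test alert'\",TCP,"
        "192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,"
        "08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,"
        "0x1000,64,10,23403,76,77824,,,,")
print(snort_line_to_json(line))
```

The real parser additionally converts the timestamp string into an epoch value, which is why a record whose timestamp does not match the expected pattern fails even though the rest of the line is fine.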
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Restarted snort; it is still giving me errors for the indexing
>>>>>>>>>>>>>>>>>>>>> topologies even though I haven't pushed any data to the snort topic
>>>>>>>>>>>>>>>>>>>>> yet. I have not run the kafka-producer command, but it is still
>>>>>>>>>>>>>>>>>>>>> giving an error for something.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com
>>>>>>>>>>>>>>>>>>>>>>>>>> <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> calling the script with?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The metron installation I have (single node
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> based vm install) comes with sensor stubs. I assume that everything has
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> already been done for those stub sensors to push the canned data. I am
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> doing the similar thing, directly pushing the preformatted canned data to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic. I can see the logs in kibana dashboard when I start stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sensor from monit but then I push the same logs myself, those errors pop
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that I have shown earlier.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ce...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and what's the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Stella <ce...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic from the parser or from some other source?  It looks like there
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> are some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topology error, here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> stack trace.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> longer seeing the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Stella (cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> those errors, what's the full stacktrace (that starts with the suggestion
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> bleeding from the individual writer into the writer component (It should be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> handled in the writer itself).  The fact that it's happening for both HDFS
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and ES is telling as well and I'm very interested in the full stacktrace
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> there because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort.out | kafka producer .... and now the error at storm parser topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is gone but I am now seeing this at the indexing toology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,192.168.66.121,22,0A:00:27:00
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ***AP***,0x1E396BFC,0x56900BB6
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> file and run cat snort.out | kafka-console-producer.sh ... to make sure
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> there are no copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> dates in this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format, then I believe you may see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> need to modify the default log timestamp format for snort in the short term.
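Otto's point about the date format can be tested directly. A minimal sketch (Python, using the sample timestamp from the log line in this thread; the format strings mirror the MM/dd/yy default quoted from the parser):

```python
from datetime import datetime

# Default format quoted above from BasicSnortParser: MM/dd/yy-HH:mm:ss.SSSSSS
METRON_DEFAULT = "%m/%d/%y-%H:%M:%S.%f"

sample = "01/11/17-20:49:18.107168"  # timestamp from the log line in this thread

# Parses cleanly under the default, giving January 11, 2017 ...
as_mm_dd = datetime.strptime(sample, METRON_DEFAULT)
print(as_mm_dd.month, as_mm_dd.day)  # 1 11

# ... yet the same string is also a valid dd/MM/yy date (November 1, 2017),
# so a record that "parses" may still carry the wrong timestamp.
as_dd_mm = datetime.strptime(sample, "%d/%m/%y-%H:%M:%S.%f")
print(as_dd_mm.month, as_dd_mm.day)  # 11 1
```

A day value above 12 (e.g. 16/11/17) would make the default parse fail outright, which matches the kind of error described here.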
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Otto Fowler (ottobackwards@gmail.com)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ‘timestamp’ field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is the format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort stub canned data file? Maybe I could see its formatting and try
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> following the same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> paste a few lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/metro
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> n-deployment/roles/sensor-stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> s/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can also see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to push the logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> messages coming from snort through some setup ( getting pushed to kafka ),
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> which I think of as live.  I also think you have manually pushed messages,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> where you see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the same errors for things that are automatically pushed to kafka as you do
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> when you manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parsed then that would be a problem.  If you see this error with your
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ‘live’ messages as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> date format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>
>>> Jon
>>>
>>
>>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Any solution to this enrichment error issue?

On Thu, Nov 16, 2017 at 6:08 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> No, on this Ubuntu PC I am not.
>
> On Thu, Nov 16, 2017 at 6:06 PM, Zeolla@GMail.com <ze...@gmail.com>
> wrote:
>
>> Are you behind a proxy?
>>
>> On Thu, Nov 16, 2017, 08:04 Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> Ok, Now I have started everything again from scratch (redeployed single
>>> node based ambari metron cluster with ansibleSkipTags = 'quick-dev') and
>>> now when I execute this command:
>>>
>>> shuf -n 10 snort.out | sed -e "s/[^,]\+ ,/`date
>>> +'%m\/%d\/%y-%H:%M:%S'`.000000 ,/g" | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>> --broker-list node1:6667 --topic snort
>>>
>>> (format of this command was taken from: https://github.com/apach
>>> e/metron/blob/master/metron-deployment/roles/sensor-stubs/
>>> templates/start-snort-stub)
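For reference, the sed portion of that stub command replaces the first (timestamp) field of each canned line with the current time, so replayed records look live. A rough Python equivalent of just that substitution (the sample line is abbreviated from this thread):

```python
import re
from datetime import datetime

line = ("01/11/17-21:32:37.925044 ,1,999158,0,\"'snort test alert'\","
        "TCP,192.168.138.158,49207,95.163.121.204,80")

# Mirrors: sed -e "s/[^,]\+ ,/`date +'%m\/%d\/%y-%H:%M:%S'`.000000 ,/g"
# Only the timestamp field ends in " ," in snort.out, so anchoring at the
# start of the line is a safe simplification here.
now = datetime.now().strftime("%m/%d/%y-%H:%M:%S") + ".000000"
fresh = re.sub(r"^[^,]+ ,", now + " ,", line)

# The rest of the record is untouched.
print(fresh.split(",")[1:4])  # ['1', '999158', '0']
```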
>>>
>>> I get this under enrichment storm topology :
>>>
>>>
>>> [image: Inline image 1]
>>>
>>> [image: Inline image 2]
>>>
>>> I have come this far; please help me push these dummy preformatted snort
>>> logs into the Kibana dashboard.
>>>
>>> Regards.
>>>
>>> On Tue, Nov 14, 2017 at 1:19 PM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>> > wrote:
>>>
>>>> Now I can't even see the sensor stub logs if I start the snort service from
>>>> monit. How can I flush Kafka of everything that was sent earlier? What's
>>>> going wrong with the original sensor stub data?
>>>>
>>>> On Tue, Nov 14, 2017 at 11:05 AM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> Here is what I am doing:
>>>>>
>>>>> Running this command: sudo tail -n 1 snort.out  |
>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>> --broker-list node1:6667 --topic snort
>>>>>
>>>>> sends this message to this topic: 01/11/17-21:32:37.925044
>>>>> ,1,999158,0,"'snort test alert'",TCP,192.168.138.158,49
>>>>> 207,95.163.121.204,80,00:00:00:00:00:00,00:00:00:00:00:00,0x
>>>>> 3C,***A****,0xC0313398,0xD1FE0623,,0xFAF0,128,0,2561,40,40960,,,,
>>>>>
>>>>> And I can see it under kafka client on enrichments topic:
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> Time stamp can be matched against the sent message.
>>>>>
>>>>> The issue is that I can't see the message under the kafka client on the
>>>>> indexing topic and hence in the Kibana dashboard.
>>>>>
>>>>> On Tue, Nov 14, 2017 at 10:56 AM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> Ran this sudo head -n 1 snort.out  | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>> --broker-list node1:6667 --topic snort
>>>>>>
>>>>>> and this as well, sudo tail -n 1 snort.out  |
>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>> --broker-list node1:6667 --topic snort
>>>>>>
>>>>>> and the same issue again. The Storm indexing topology keeps giving errors
>>>>>> on previously failed messages.
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>> On Tue, Nov 14, 2017 at 10:40 AM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> if by that command you mean cat snort.out | kafka-producer .... then
>>>>>>> I have been doing it but with snort.out full of all the material copied
>>>>>>> from github raw snort.out link.
>>>>>>>
>>>>>>> On Tue, Nov 14, 2017 at 6:31 AM, Otto Fowler <
>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>
>>>>>>>> You should literally run the command I put in.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On November 13, 2017 at 20:23:09, Syed Hammad Tahir (
>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>
>>>>>>>> Nope, that doesn't work when I just copy/paste one line into the kafka
>>>>>>>> producer. Haven't tried putting one line in snort.out and then pushing it.
>>>>>>>>
>>>>>>>> On Tue, Nov 14, 2017 at 6:09 AM, Otto Fowler <
>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> No,
>>>>>>>>> When you push the data to kafka just push 1 line and see if it
>>>>>>>>> works.
>>>>>>>>> Nothing to do with configs.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On November 13, 2017 at 20:06:45, Syed Hammad Tahir (
>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>
>>>>>>>>> Ok, that could be a reason, but from where do I check this? From
>>>>>>>>> here or somewhere else:
>>>>>>>>>
>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Nov 14, 2017 at 12:59 AM, Otto Fowler <
>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> OK.
>>>>>>>>>>
>>>>>>>>>> I think you're sending errors to your indexing topic instead of the
>>>>>>>>>> error topic.
>>>>>>>>>> I think you posted your config before, but I don’t remember off
>>>>>>>>>> the top of my head
>>>>>>>>>> where the error topic is configured.
>>>>>>>>>>
>>>>>>>>>> If the error topic is the same as the indexing topic, and you
>>>>>>>>>> ‘have errors’  I think you may see this.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (
>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>
>>>>>>>>>> Here we go. This is what I see when I do kafka client on indexing
>>>>>>>>>> topic.
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> ok, I will try it again and report results
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <
>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> You have to be seeing data in the indexing topic; you have
>>>>>>>>>>>> errors in the indexing topology that reads from it.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> So you are saying:
>>>>>>>>>>>>
>>>>>>>>>>>> * when you do the kafka client on the enrichment topic things
>>>>>>>>>>>> are in json
>>>>>>>>>>>> * when you do the kafka client on the indexing topic they are
>>>>>>>>>>>> csv
>>>>>>>>>>>>
>>>>>>>>>>>> 1- Yes, kafka client on enrichment shows json
>>>>>>>>>>>>
>>>>>>>>>>>> 2- No, I dont see anything in kafka client on indexing topic
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <
>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> So you are saying:
>>>>>>>>>>>>>
>>>>>>>>>>>>> * when you do the kafka client on the enrichment topic things
>>>>>>>>>>>>> are in json
>>>>>>>>>>>>> * when you do the kafka client on the indexing topic they are
>>>>>>>>>>>>> csv
>>>>>>>>>>>>>
>>>>>>>>>>>>> ???
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> From one of your earlier messages, This is what I have figured
>>>>>>>>>>>>> out so far.
>>>>>>>>>>>>>
>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>
>>>>>>>>>>>>> The issue is indicated by the red-marked portion of the flow.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Which .java file is causing the issue in this hdfsindexbolt.
>>>>>>>>>>>>>> I mean which one should I look at because there are so many listed here.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser This one is parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it in JSON?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> No, I am not seeing it under indexing topic as JSON. I can
>>>>>>>>>>>>>>>>> only see JSON objects of stub sensor logs but not from those pushed by me
>>>>>>>>>>>>>>>>> via kafka producer.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Please use kafka-console-consumer.sh (same folder as the
>>>>>>>>>>>>>>>>>> producer script) and pull from the indexing topic.  Are you seeing it in
>>>>>>>>>>>>>>>>>> JSON there?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Kindly give me the mechanism implemented in metron
>>>>>>>>>>>>>>>>>>> through which a line such as this
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
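To illustrate what that conversion involves, the CSV-to-JSON step amounts to splitting the line on commas and zipping the values with field names. The field names below are hypothetical stand-ins; the authoritative list lives in org.apache.metron.parsers.snort.BasicSnortParser:

```python
import json

# Hypothetical names for the first ten columns; the real parser defines its
# own list (and must handle quoted fields properly).
FIELDS = ["timestamp", "sig_generator", "sig_id", "sig_rev", "msg",
          "protocol", "ip_src_addr", "ip_src_port", "ip_dst_addr",
          "ip_dst_port"]

line = ("01/11/17-20:49:18.107168 ,1,999158,0,\"'snort test alert'\","
        "TCP,192.168.66.1,49581,192.168.66.121,22")

# Naive split: fine here only because the quoted msg field contains no comma.
values = [v.strip() for v in line.split(",")]
record = dict(zip(FIELDS, values))
print(json.dumps(record, indent=2))
```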
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Restarted snort; it's still giving me errors for the indexing
>>>>>>>>>>>>>>>>>>>> topologies even though I haven't even pushed out any data to the snort topic
>>>>>>>>>>>>>>>>>>>> yet. I have not run the kafka-producer command but it's still giving an error
>>>>>>>>>>>>>>>>>>>> for something.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com
>>>>>>>>>>>>>>>>>>>>>>>>> <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> calling the script with?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The metron installation I have (single node
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> based vm install) comes with sensor stubs. I assume that everything has
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> already been done for those stub sensors to push the canned data. I am
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> doing the similar thing, directly pushing the preformatted canned data to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic. I can see the logs in kibana dashboard when I start stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sensor from monit but then I push the same logs myself, those errors pop
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that I have shown earlier.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ce...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and what's the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Stella <ce...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic from the parser or from some other source?  It looks like there
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> are some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
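Following Casey's description, a quick sanity check for the indexing topic is to attempt json.loads on each consumed record; anything that fails is a non-JSON stray. A sketch with in-memory sample records (stand-ins for messages read with kafka-console-consumer.sh):

```python
import json

# One enriched JSON map (what indexing expects) and one raw CSV line
# pushed to the topic by mistake.
records = [
    '{"ip_src_addr": "192.168.66.1", "msg": "snort test alert"}',
    "01/11/17-20:49:18.107168 ,1,999158,0,...",
]

good, bad = [], []
for r in records:
    try:
        good.append(json.loads(r))
    except json.JSONDecodeError:
        bad.append(r)

print(len(good), len(bad))  # 1 1
```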
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topology error, here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> trace.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> longer seeing the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Stella (cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> those errors, what's the full stacktrace (that starts with the suggestion
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> bleeding from the individual writer into the writer component (It should be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> handled in the writer itself).  The fact that it's happening for both HDFS
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and ES is telling as well and I'm very interested in the full stacktrace
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> there because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> | kafka producer .... and now the error at storm parser topology is gone
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> but I am now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,192.168.66.121,22,0A:00:27:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 00:00:00,08:00:27:E8:B0:7A,0x5
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> A,***AP***,0x1E396BFC,0x56900B
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> B6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> file and run cat snort.out | kafka-console-producer.sh ... to make sure
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> there are no copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> dates in this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>  format, then you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> need to modify the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler (ottobackwards@gmail.com)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ‘timestamp’ field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is the format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort stub canned data file? Maybe I could see its formatting and try
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> following the same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> paste a few lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/metro
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> n-deployment/roles/sensor-
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can also see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push the logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> messages coming from snort through some setup ( getting pushed to kafka ),
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> which I think of as live.  I also think you have manually pushed messages,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> where you see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the same errors for things that are automatically pushed to kafka as you do
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> when you manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parsed then that would be a problem.  If you see this error with your
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ‘live’ messages as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> date format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>
>> Jon
>>
>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
No, on this Ubuntu PC I am not.

On Thu, Nov 16, 2017 at 6:06 PM, Zeolla@GMail.com <ze...@gmail.com> wrote:

> Are you behind a proxy?
>
> On Thu, Nov 16, 2017, 08:04 Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> Ok, Now I have started everything again from scratch (redeployed single
>> node based ambari metron cluster with ansibleSkipTags = 'quick-dev') and
>> now when I execute this command:
>>
>> shuf -n 10 snort.out | sed -e "s/[^,]\+ ,/`date
>> +'%m\/%d\/%y-%H:%M:%S'`.000000 ,/g" | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>> --broker-list node1:6667 --topic snort
>>
>> (format of this command was taken from: https://github.com/
>> apache/metron/blob/master/metron-deployment/roles/
>> sensor-stubs/templates/start-snort-stub)
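[Editor's note: the shuf | sed | kafka-console-producer.sh pipeline quoted above rewrites the first comma-separated field of each sampled snort.out line to the current time. A minimal Python sketch of the same rewrite, assuming only that the timestamp is the first field and is followed by a space before the first comma, as in the stub data:]

```python
from datetime import datetime

def refresh_timestamp(line: str) -> str:
    """Replace the leading snort timestamp with the current time,
    mirroring the sed expression used by the sensor stub."""
    fields = line.split(",")
    # The stub data uses MM/DD/YY-HH:MM:SS.ffffff with a trailing
    # space before the first comma; keep that shape intact.
    fields[0] = datetime.now().strftime("%m/%d/%y-%H:%M:%S") + ".000000 "
    return ",".join(fields)

sample = ('01/11/17-21:32:37.925044 ,1,999158,0,"\'snort test alert\'",'
          'TCP,192.168.138.158,49207,95.163.121.204,80')
print(refresh_timestamp(sample))
```

[Piping the rewritten lines to kafka-console-producer.sh then matches what the start-snort-stub template does.]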
>>
>> I get this under enrichment storm topology :
>>
>>
>> [image: Inline image 1]
>>
>> [image: Inline image 2]
>>
>> I have come this far, please help me push these dummy preformatted snort
>> logs into kibana dashboard.
>>
>> Regards.
>>
>> On Tue, Nov 14, 2017 at 1:19 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> Now I can't even see the sensor stub logs if I start the snort service from
>>> monit. How can I flush kafka of everything that was sent earlier? What's
>>> going wrong with the original sensor stub data?
>>>
>>> On Tue, Nov 14, 2017 at 11:05 AM, Syed Hammad Tahir <
>>> mscs16059@itu.edu.pk> wrote:
>>>
>>>> Here is what I am doing:
>>>>
>>>> Running this command: sudo tail -n 1 snort.out  |
>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>> --broker-list node1:6667 --topic snort
>>>>
>>>> sends this message to this topic: 01/11/17-21:32:37.925044
>>>> ,1,999158,0,"'snort test alert'",TCP,192.168.138.158,
>>>> 49207,95.163.121.204,80,00:00:00:00:00:00,00:00:00:00:00:00,
>>>> 0x3C,***A****,0xC0313398,0xD1FE0623,,0xFAF0,128,0,2561,40,40960,,,,
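[Editor's note: a line like the one quoted above is plain CSV. A hedged sketch of how such a record maps to a JSON object; the field names here are an illustrative assumption based on snort's default CSV alert layout, not Metron's exact schema, which BasicSnortParser defines with its own canonical names:]

```python
import csv
import io
import json

# First ten columns of snort's CSV alert output; names are an
# illustrative assumption, not Metron's exact schema.
FIELDS = ["timestamp", "sig_generator", "sig_id", "sig_rev", "msg",
          "proto", "src", "srcport", "dst", "dstport"]

line = ('01/11/17-21:32:37.925044 ,1,999158,0,"\'snort test alert\'",TCP,'
        '192.168.138.158,49207,95.163.121.204,80,00:00:00:00:00:00,'
        '00:00:00:00:00:00,0x3C,***A****,0xC0313398,0xD1FE0623,,0xFAF0,'
        '128,0,2561,40,40960,,,,')

row = next(csv.reader(io.StringIO(line)))
# zip() silently drops the trailing columns not named above.
record = dict(zip(FIELDS, row))
print(json.dumps(record, indent=2))
```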
>>>>
>>>> And I can see it under kafka client on enrichments topic:
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> Time stamp can be matched against the sent message.
>>>>
>>>> The issue is that I can't see the message under the kafka client on the
>>>> indexing topic, and hence in the kibana dashboard.
>>>>
>>>> On Tue, Nov 14, 2017 at 10:56 AM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> Ran this sudo head -n 1 snort.out  | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>> --broker-list node1:6667 --topic snort
>>>>>
>>>>> and this as well, sudo tail -n 1 snort.out  |
>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>> --broker-list node1:6667 --topic snort
>>>>>
>>>>> and same issue again. The Storm indexing topology keeps giving errors on
>>>>> previously failed messages.
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> On Tue, Nov 14, 2017 at 10:40 AM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> if by that command you mean cat snort.out | kafka-producer .... then
>>>>>> I have been doing it but with snort.out full of all the material copied
>>>>>> from github raw snort.out link.
>>>>>>
>>>>>> On Tue, Nov 14, 2017 at 6:31 AM, Otto Fowler <ottobackwards@gmail.com
>>>>>> > wrote:
>>>>>>
>>>>>>> You should literally run the command I put in.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On November 13, 2017 at 20:23:09, Syed Hammad Tahir (
>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>
>>>>>>> nope, that doesn't work when I just copy/paste one line in the kafka
>>>>>>> producer. Haven't tried putting one line in snort.out and then pushing it.
>>>>>>>
>>>>>>> On Tue, Nov 14, 2017 at 6:09 AM, Otto Fowler <
>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>
>>>>>>>> No,
>>>>>>>> When you push the data to kafka just push 1 line and see if it
>>>>>>>> works.
>>>>>>>> Nothing to do with configs.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On November 13, 2017 at 20:06:45, Syed Hammad Tahir (
>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>
>>>>>>>> Ok, that could be a reason, but from where do I check this? From here
>>>>>>>> or somewhere else,
>>>>>>>>
>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Nov 14, 2017 at 12:59 AM, Otto Fowler <
>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> OK.
>>>>>>>>>
>>>>>>>>> I think you're sending errors to your indexing topic instead of the
>>>>>>>>> error topic.
>>>>>>>>> I think you posted your config before, but I don’t remember off
>>>>>>>>> the top of my head
>>>>>>>>> where the error topic is configured.
>>>>>>>>>
>>>>>>>>> If the error topic is the same as the indexing topic, and you
>>>>>>>>> ‘have errors’  I think you may see this.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (
>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>
>>>>>>>>> Here we go. This is what I see when I do kafka client on indexing
>>>>>>>>> topic.
>>>>>>>>>
>>>>>>>>> [image: Inline image 1]
>>>>>>>>>
>>>>>>>>> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> ok, I will try it again and report results
>>>>>>>>>>
>>>>>>>>>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <
>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> You have to be seeing data in the indexing topic, you have
>>>>>>>>>>> errors in the indexing topology that reads from it.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>
>>>>>>>>>>> So you are saying:
>>>>>>>>>>>
>>>>>>>>>>> * when you do the kafka client on the enrichment topic things
>>>>>>>>>>> are in json
>>>>>>>>>>> * when you do the kafka client on the indexing topic they are csv
>>>>>>>>>>>
>>>>>>>>>>> 1- Yes, kafka client on enrichment shows json
>>>>>>>>>>>
>>>>>>>>>>> 2- No, I dont see anything in kafka client on indexing topic
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <
>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> So you are saying:
>>>>>>>>>>>>
>>>>>>>>>>>> * when you do the kafka client on the enrichment topic things
>>>>>>>>>>>> are in json
>>>>>>>>>>>> * when you do the kafka client on the indexing topic they are
>>>>>>>>>>>> csv
>>>>>>>>>>>>
>>>>>>>>>>>> ???
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> From one of your earlier messages, This is what I have figured
>>>>>>>>>>>> out so far.
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>
>>>>>>>>>>>> The issue is indicated by the red-marked portion of the flow.
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Which .java file is causing the issue in this hdfsindexbolt. I
>>>>>>>>>>>>> mean which one should I look at because there are so many listed here.
>>>>>>>>>>>>>
>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser This one is parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it to JSON?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> No, I am not seeing it under indexing topic as JSON. I can
>>>>>>>>>>>>>>>> only see JSON objects of stub sensor logs but not from those pushed by me
>>>>>>>>>>>>>>>> via kafka producer.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Please use kafka-console-consumer.sh (same folder as the
>>>>>>>>>>>>>>>>> producer script) and pull from the indexing topic.  Are you seeing it in
>>>>>>>>>>>>>>>>> JSON there?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Kindly give me the mechanism implemented in metron
>>>>>>>>>>>>>>>>>> through which a line such as this
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Restarted snort, still giving me errors for the indexing
>>>>>>>>>>>>>>>>>>> topologies even though I haven't even pushed out any data to the snort topic
>>>>>>>>>>>>>>>>>>> yet. I have not run the kafka-producer command but it's still giving an error
>>>>>>>>>>>>>>>>>>> for something.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are
>>>>>>>>>>>>>>>>>>>>>>>>>>>> calling the script with?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> The metron installation I have (single node
>>>>>>>>>>>>>>>>>>>>>>>>>>>> based vm install) comes with sensor stubs. I assume that everything has
>>>>>>>>>>>>>>>>>>>>>>>>>>>> already been done for those stub sensors to push the canned data. I am
>>>>>>>>>>>>>>>>>>>>>>>>>>>> doing the similar thing, directly pushing the preformatted canned data to
>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic. I can see the logs in kibana dashboard when I start stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>> sensor from monit but then I push the same logs myself, those errors pop
>>>>>>>>>>>>>>>>>>>>>>>>>>>> that I have shown earlier.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and what's the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ce...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic from the parser or from some other source?  It looks like there
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> are some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topology error, here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> trace.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> longer seeing the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Stella (cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> those errors, what's the full stacktrace (that starts with the suggestion
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> bleeding from the individual writer into the writer component (It should be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> handled in the writer itself).  The fact that it's happening for both HDFS
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and ES is telling as well and I'm very interested in the full stacktrace
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> there because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> | kafka producer .... and now the error at the storm parser topology is gone
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> but I am now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 49581,192.168.66.121,22,0A:00:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 27:00:00:00,08:00:27:E8:B0:7A,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 0x5A,***AP***,0x1E396BFC,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 0x56900BB6,,0x1000,64,10,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> file and run cat snort.out | kafka-console-producer.sh ... to make sure
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> there are no copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> dates in this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>  format, then you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> need to modify the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
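[Editor's note: Otto's point about the date format above can be checked directly. A small sketch using Python's rough equivalent of the parser's MM/dd/yy-HH:mm:ss.SSSSSS pattern: an ambiguous dd/MM/yy date like 01/11/17 parses silently (as January 11 rather than November 1), while a day value greater than 12 is rejected outright.]

```python
from datetime import datetime

# Rough Python equivalent of the parser's default pattern
# "MM/dd/yy-HH:mm:ss.SSSSSS".
SNORT_FORMAT = "%m/%d/%y-%H:%M:%S.%f"

def parse_ts(ts: str) -> datetime:
    return datetime.strptime(ts, SNORT_FORMAT)

# Accepted, but interpreted as January 11, 2017:
print(parse_ts("01/11/17-20:49:18.107168"))

# A dd/MM/yy timestamp with day > 12 fails outright:
try:
    parse_ts("13/11/17-20:49:18.107168")
except ValueError as err:
    print("rejected:", err)
```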
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler (ottobackwards@gmail.com)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ‘timestamp’ field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort stub canned data file? Maybe I could see its formatting and try
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> following the same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that's how I am pushing my logs to the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> paste a few lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> metron-deployment/roles/
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can also see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push the logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> messages coming from snort through some setup ( getting pushed to kafka ),
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> which I think of as live.  I also think you have manually pushed messages,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> where you see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the same errors for things that are automatically pushed to kafka as you do
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> when you manually push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parsed then that would be a problem.  If you see this error with your
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ‘live’ messages as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> date format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>
> Jon
>

Re: Snort Logs

Posted by "Zeolla@GMail.com" <ze...@gmail.com>.
Are you behind a proxy?

On Thu, Nov 16, 2017, 08:04 Syed Hammad Tahir <ms...@itu.edu.pk> wrote:

> Ok, Now I have started everything again from scratch (redeployed single
> node based ambari metron cluster with ansibleSkipTags = 'quick-dev') and
> now when I execute this command:
>
> shuf -n 10 snort.out | sed -e "s/[^,]\+ ,/`date
> +'%m\/%d\/%y-%H:%M:%S'`.000000 ,/g" |
> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list
> node1:6667 --topic snort
>
> (format of this command was taken from:
> https://github.com/apache/metron/blob/master/metron-deployment/roles/sensor-stubs/templates/start-snort-stub
> )
>
> I get this under enrichment storm topology :
>
>
> [image: Inline image 1]
>
> [image: Inline image 2]
>
> I have come this far, please help me push these dummy preformatted snort
> logs into kibana dashboard.
>
> Regards.
>
> On Tue, Nov 14, 2017 at 1:19 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> Now I can't even see the sensor stub logs if I start the snort service from
>> monit. How can I flush kafka of everything that was sent earlier? What's
>> going wrong with the original sensor stub data?
>>
>> On Tue, Nov 14, 2017 at 11:05 AM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>> > wrote:
>>
>>> Here is what I am doing:
>>>
>>> Running this command: sudo tail -n 1 snort.out  |
>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list
>>> node1:6667 --topic snort
>>>
>>> sends this message to this topic: 01/11/17-21:32:37.925044
>>> ,1,999158,0,"'snort test
>>> alert'",TCP,192.168.138.158,49207,95.163.121.204,80,00:00:00:00:00:00,00:00:00:00:00:00,0x3C,***A****,0xC0313398,0xD1FE0623,,0xFAF0,128,0,2561,40,40960,,,,
>>>
>>> And I can see it under kafka client on enrichments topic:
>>>
>>> [image: Inline image 1]
>>>
>>> Time stamp can be matched against the sent message.
>>>
>>> The issue is that I can't see the message under the kafka client on the
>>> indexing topic, and hence in the kibana dashboard.
>>>
>>> On Tue, Nov 14, 2017 at 10:56 AM, Syed Hammad Tahir <
>>> mscs16059@itu.edu.pk> wrote:
>>>
>>>> Ran this sudo head -n 1 snort.out  | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>> --broker-list node1:6667 --topic snort
>>>>
>>>> and this as well, sudo tail -n 1 snort.out  |
>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>> --broker-list node1:6667 --topic snort
>>>>
>>>> and same issue again. The Storm indexing topology keeps giving errors on
>>>> previously failed messages.
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> On Tue, Nov 14, 2017 at 10:40 AM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> if by that command you mean cat snort.out | kafka-producer .... then I
>>>>> have been doing it but with snort.out full of all the material copied from
>>>>> github raw snort.out link.
>>>>>
>>>>> On Tue, Nov 14, 2017 at 6:31 AM, Otto Fowler <ot...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> You should literally run the command I put in.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On November 13, 2017 at 20:23:09, Syed Hammad Tahir (
>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>
>>>>>> nope, that doesn't work when I just copy/paste one line in the kafka
>>>>>> producer. Haven't tried putting one line in snort.out and then pushing it.
>>>>>>
>>>>>> On Tue, Nov 14, 2017 at 6:09 AM, Otto Fowler <ottobackwards@gmail.com
>>>>>> > wrote:
>>>>>>
>>>>>>> No,
>>>>>>> When you push the data to kafka just push 1 line and see if it works.
>>>>>>> Nothing to do with configs.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On November 13, 2017 at 20:06:45, Syed Hammad Tahir (
>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>
>>>>>>> Ok, that could be a reason, but from where do I check this? From here
>>>>>>> or somewhere else,
>>>>>>>
>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Nov 14, 2017 at 12:59 AM, Otto Fowler <
>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>
>>>>>>>> OK.
>>>>>>>>
>>>>>>>> I think you're sending errors to your indexing topic instead of the
>>>>>>>> error topic.
>>>>>>>> I think you posted your config before, but I don’t remember off the
>>>>>>>> top of my head
>>>>>>>> where the error topic is configured.
>>>>>>>>
>>>>>>>> If the error topic is the same as the indexing topic, and you ‘have
>>>>>>>> errors’  I think you may see this.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (
>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>
>>>>>>>> Here we go. This is what I see when I do kafka client on indexing
>>>>>>>> topic.
>>>>>>>>
>>>>>>>> [image: Inline image 1]
>>>>>>>>
>>>>>>>> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> ok, I will try it again and report results
>>>>>>>>>
>>>>>>>>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <
>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> You have to be seeing data in the indexing topic, you have errors
>>>>>>>>>> in the indexing topology that reads from it.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>
>>>>>>>>>> So you are saying:
>>>>>>>>>>
>>>>>>>>>> * when you do the kafka client on the enrichment topic things are
>>>>>>>>>> in json
>>>>>>>>>> * when you do the kafka client on the indexing topic they are csv
>>>>>>>>>>
>>>>>>>>>> 1- Yes, kafka client on enrichment shows json
>>>>>>>>>>
>>>>>>>>>> 2- No, I dont see anything in kafka client on indexing topic
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <
>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> So you are saying:
>>>>>>>>>>>
>>>>>>>>>>> * when you do the kafka client on the enrichment topic things
>>>>>>>>>>> are in json
>>>>>>>>>>> * when you do the kafka client on the indexing topic they are csv
>>>>>>>>>>>
>>>>>>>>>>> ???
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>
>>>>>>>>>>> From one of your earlier messages, This is what I have figured
>>>>>>>>>>> out so far.
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>
>>>>>>>>>>> The issue is indicated by the red-marked portion of the flow.
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Which .java file is causing the issue in this hdfsindexbolt. I
>>>>>>>>>>>> mean which one should I look at because there are so many listed here.
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser This one is parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it in JSON?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> No, I am not seeing it under indexing topic as JSON. I can
>>>>>>>>>>>>>>> only see JSON objects of stub sensor logs but not from those pushed by me
>>>>>>>>>>>>>>> via kafka producer.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Please use kafka-console-consumer.sh (same folder as the
>>>>>>>>>>>>>>>> producer script) and pull from the indexing topic.  Are you seeing it in
>>>>>>>>>>>>>>>> JSON there?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Kindly give me the mechanism implemented in metron through
>>>>>>>>>>>>>>>>> which a line such as this
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Restarted snort, still giving me error for indexing
>>>>>>>>>>>>>>>>>> topologies even though I havent even pushed out any data to snort topic
>>>>>>>>>>>>>>>>>> yet. I have not run the kafka-producer command but its still giving error
>>>>>>>>>>>>>>>>>> for something.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> hi, This problem still persists guys .
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are
>>>>>>>>>>>>>>>>>>>>>>>>>>> calling the script with?
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> The metron installation I have (single node
>>>>>>>>>>>>>>>>>>>>>>>>>>> based vm install) comes with sensor stubs. I assume that everything has
>>>>>>>>>>>>>>>>>>>>>>>>>>> already been done for those stub sensors to push the canned data. I am
>>>>>>>>>>>>>>>>>>>>>>>>>>> doing the similar thing, directly pushing the preformatted canned data to
>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic. I can see the logs in kibana dashboard when I start stub
>>>>>>>>>>>>>>>>>>>>>>>>>>> sensor from monit but then I push the same logs myself, those errors pop
>>>>>>>>>>>>>>>>>>>>>>>>>>> that I have shown earlier.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and
>>>>>>>>>>>>>>>>>>>>>>>>>>>> what's the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ce...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic from the parser or from some other source?  It looks like there
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> are some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topology error, here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> trace.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> seeing the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Stella (cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> those errors, what's the full stacktrace (that starts with the suggestion
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> bleeding from the individual writer into the writer component (It should be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> handled in the writer itself).  The fact that it's happening for both HDFS
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and ES is telling as well and I'm very interested in the full stacktrace
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> there because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka producer .... and now the error at storm parser topology is gone but
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,1,999158,0,"'snort test
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> file and run cat snort.out | kafka-console-producer.sh ... to make sure
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> there are no copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> dates in this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>  format, then you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to modify the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler (ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ‘timestamp’ field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort stub canned data file? Maybe I could see its formatting and try
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> following the same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> paste a few lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> also see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push the logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> messages coming from snort through some setup ( getting pushed to kafka ),
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> which I think of as live.  I also think you have manually pushed messages,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> where you see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> same errors for things that are automatically pushed to kafka as you do
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> when you manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parsed then that would be a problem.  If you see this error with your
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ‘live’ messages as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> date format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --

Jon

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Ok, now I have started everything again from scratch (redeployed the
single-node Ambari-based Metron cluster with ansibleSkipTags = 'quick-dev')
and now when I execute this command:

shuf -n 10 snort.out | sed -e "s/[^,]\+ ,/`date
+'%m\/%d\/%y-%H:%M:%S'`.000000 ,/g" |
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list
node1:6667 --topic snort

(the format of this command was taken from:
https://github.com/apache/metron/blob/master/metron-deployment/roles/sensor-stubs/templates/start-snort-stub
)
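(The sed expression above rewrites the first CSV field to the current time in
the MM/dd/yy-HH:mm:ss.SSSSSS layout that BasicSnortParser expects, per the
defaultDateFormat quoted earlier in the thread. As a quick local sanity check
before producing lines to Kafka, something like this sketch can verify the
timestamp field; it is a standalone helper, not part of Metron:)

```python
from datetime import datetime

# Timestamp layout expected by Metron's BasicSnortParser
# (MM/dd/yy-HH:mm:ss.SSSSSS in Java terms); %f consumes the microseconds.
SNORT_TS_FORMAT = "%m/%d/%y-%H:%M:%S.%f"

def valid_snort_timestamp(line):
    """Return True if the first CSV field of a snort line parses as a timestamp."""
    ts = line.split(",", 1)[0].strip()
    try:
        datetime.strptime(ts, SNORT_TS_FORMAT)
        return True
    except ValueError:
        return False

sample = "01/11/17-20:49:18.107168 ,1,999158,0,\"'snort test alert'\",TCP"
print(valid_snort_timestamp(sample))
```

(Lines that fail this check are the ones most likely to trip the parser's date
handling mentioned earlier in the thread.)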

I get this under the enrichment storm topology:


[image: Inline image 1]

[image: Inline image 2]

I have come this far; please help me push these dummy preformatted snort
logs into the Kibana dashboard.

Regards.
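(For reference, the transformation this thread keeps circling — a snort CSV
line becoming a JSON map before it reaches the enrichment and indexing topics —
can be sketched as below. The real logic lives in Metron's BasicSnortParser;
the field names here are illustrative guesses for the first ten columns, not
Metron's authoritative column list:)

```python
import csv
import json

# Illustrative column names only -- the authoritative ordering lives in
# Metron's BasicSnortParser, so treat this list as an assumption.
FIELDS = ["timestamp", "sig_generator", "sig_id", "sig_rev", "msg",
          "protocol", "ip_src_addr", "ip_src_port",
          "ip_dst_addr", "ip_dst_port"]

def snort_csv_to_json(line):
    """Zip the leading CSV columns into a JSON map (csv handles quoted commas)."""
    values = [v.strip() for v in next(csv.reader([line]))]
    return json.dumps(dict(zip(FIELDS, values)))

line = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",'
        'TCP,192.168.66.1,49581,192.168.66.121,22')
print(snort_csv_to_json(line))
```

(If a record in the indexing topic is not a JSON map of this general shape,
the indexing topology errors shown above are what you would expect to see.)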

On Tue, Nov 14, 2017 at 1:19 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> Now I can't even see the sensor stub logs if I start the snort service from
> monit. How can I flush kafka of everything that was sent earlier? What's
> going wrong with the original sensor stub data?
>
> On Tue, Nov 14, 2017 at 11:05 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> Here is what I am doing:
>>
>> Running this command: sudo tail -n 1 snort.out  |
>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>> --broker-list node1:6667 --topic snort
>>
>> sends this message to this topic: 01/11/17-21:32:37.925044
>> ,1,999158,0,"'snort test alert'",TCP,192.168.138.158,49
>> 207,95.163.121.204,80,00:00:00:00:00:00,00:00:00:00:00:00,0x
>> 3C,***A****,0xC0313398,0xD1FE0623,,0xFAF0,128,0,2561,40,40960,,,,
>>
>> And I can see it under kafka client on enrichments topic:
>>
>> [image: Inline image 1]
>>
>> The timestamp can be matched against the sent message.
>>
>> The issue is that I can't see the message under the kafka client on the
>> indexing topic and hence in the Kibana dashboard.
>>
>> On Tue, Nov 14, 2017 at 10:56 AM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>> > wrote:
>>
>>> Ran this sudo head -n 1 snort.out  | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>> --broker-list node1:6667 --topic snort
>>>
>>> and this as well, sudo tail -n 1 snort.out  |
>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>> --broker-list node1:6667 --topic snort
>>>
>>> and same issue again. The Storm indexing topology keeps giving errors on
>>> previously failed messages.
>>>
>>> [image: Inline image 1]
>>>
>>> On Tue, Nov 14, 2017 at 10:40 AM, Syed Hammad Tahir <
>>> mscs16059@itu.edu.pk> wrote:
>>>
>>>> if by that command you mean cat snort.out | kafka-producer .... then I
>>>> have been doing it but with snort.out full of all the material copied from
>>>> github raw snort.out link.
>>>>
>>>> On Tue, Nov 14, 2017 at 6:31 AM, Otto Fowler <ot...@gmail.com>
>>>> wrote:
>>>>
>>>>> You should literally run the command I put in.
>>>>>
>>>>>
>>>>>
>>>>> On November 13, 2017 at 20:23:09, Syed Hammad Tahir (
>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>
>>>>> nope, that doesn't work when I just copy/paste one line into the kafka
>>>>> producer. Haven't tried putting one line in snort.out and then pushing it.
>>>>>
>>>>> On Tue, Nov 14, 2017 at 6:09 AM, Otto Fowler <ot...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> No,
>>>>>> When you push the data to kafka just push 1 line and see if it works.
>>>>>> Nothing to do with configs.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On November 13, 2017 at 20:06:45, Syed Hammad Tahir (
>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>
>>>>>> Ok, that could be a reason, but where do I check this? From here
>>>>>> or somewhere else,
>>>>>>
>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>
>>>>>>
>>>>>> On Tue, Nov 14, 2017 at 12:59 AM, Otto Fowler <
>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>
>>>>>>> OK.
>>>>>>>
>>>>>>> I think you're sending errors to your indexing topic instead of the
>>>>>>> error topic.
>>>>>>> I think you posted your config before, but I don’t remember off the
>>>>>>> top of my head
>>>>>>> where the error topic is configured.
>>>>>>>
>>>>>>> If the error topic is the same as the indexing topic, and you ‘have
>>>>>>> errors’  I think you may see this.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (
>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>
>>>>>>> Here we go. This is what I see when I do kafka client on indexing
>>>>>>> topic.
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> ok, I will try it again and report results
>>>>>>>>
>>>>>>>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <
>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> You have to be seeing data in the indexing topic, you have errors
>>>>>>>>> in the indexing topology that reads from it.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>
>>>>>>>>> So you are saying:
>>>>>>>>>
>>>>>>>>> * when you do the kafka client on the enrichment topic things are
>>>>>>>>> in json
>>>>>>>>> * when you do the kafka client on the indexing topic they are csv
>>>>>>>>>
>>>>>>>>> 1- Yes, kafka client on enrichment shows json
>>>>>>>>>
>>>>>>>>> 2- No, I dont see anything in kafka client on indexing topic
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <
>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> So you are saying:
>>>>>>>>>>
>>>>>>>>>> * when you do the kafka client on the enrichment topic things are
>>>>>>>>>> in json
>>>>>>>>>> * when you do the kafka client on the indexing topic they are csv
>>>>>>>>>>
>>>>>>>>>> ???
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>
>>>>>>>>>> From one of your earlier messages, This is what I have figured
>>>>>>>>>> out so far.
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>> The issue is indicated by the red-marked portion of the flow.
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> Which .java file is causing the issue in this hdfsindexbolt. I
>>>>>>>>>>> mean which one should I look at because there are so many listed here.
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser This one is parsing it correctly since I am getting error in the indexing bolt not in the parser one.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it in JSON?
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> No, I am not seeing it under indexing topic as JSON. I can
>>>>>>>>>>>>>> only see JSON objects of stub sensor logs but not from those pushed by me
>>>>>>>>>>>>>> via kafka producer.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Please use kafka-console-consumer.sh (same folder as the
>>>>>>>>>>>>>>> producer script) and pull from the indexing topic.  Are you seeing it in
>>>>>>>>>>>>>>> JSON there?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Kindly give me the mechanism implemented in metron through
>>>>>>>>>>>>>>>> which a line such as this
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Restarted snort, still giving me error for indexing
>>>>>>>>>>>>>>>>> topologies even though I havent even pushed out any data to snort topic
>>>>>>>>>>>>>>>>> yet. I have not run the kafka-producer command but its still giving error
>>>>>>>>>>>>>>>>> for something.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> hi, This problem still persists guys .
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are
>>>>>>>>>>>>>>>>>>>>>>>>>> calling the script with?
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> The metron installation I have (single node based
>>>>>>>>>>>>>>>>>>>>>>>>>> vm install) comes with sensor stubs. I assume that everything has already
>>>>>>>>>>>>>>>>>>>>>>>>>> been done for those stub sensors to push the canned data. I am doing the
>>>>>>>>>>>>>>>>>>>>>>>>>> similar thing, directly pushing the preformatted canned data to kafka
>>>>>>>>>>>>>>>>>>>>>>>>>> topic. I can see the logs in kibana dashboard when I start stub sensor from
>>>>>>>>>>>>>>>>>>>>>>>>>> monit but then I push the same logs myself, those errors pop that I have
>>>>>>>>>>>>>>>>>>>>>>>>>> shown earlier.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and
>>>>>>>>>>>>>>>>>>>>>>>>>>> what's the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic from the parser or from some other source?  It looks like there
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> are some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> error, here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> trace.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> seeing the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Stella (cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> those errors, what's the full stacktrace (that starts with the suggestion
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> bleeding from the individual writer into the writer component (It should be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> handled in the writer itself).  The fact that it's happening for both HDFS
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and ES is telling as well and I'm very interested in the full stacktrace
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> there because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka producer .... and now the error at storm parser topology is gone but
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,192.168.66.121,22,0A:00:27:00
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ***AP***,0x1E396BFC,0x56900BB6
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> file and run cat snort.out | kafka-console-producer.sh ... to make sure
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> there are no copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> dates in this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>  format, then you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to modify the default log timestamp format for snort in the short term.
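[Editorial note] The ambiguity can be checked offline: Java's MM/dd/yy-HH:mm:ss.SSSSSS pattern maps directly onto Python strptime directives, and only a day value greater than 12 would make a swapped day/month ordering fail loudly. A sketch using the timestamp from the sample logs in this thread:

```python
from datetime import datetime

# Java pattern "MM/dd/yy-HH:mm:ss.SSSSSS" expressed as strptime directives.
FMT = "%m/%d/%y-%H:%M:%S.%f"

ts = datetime.strptime("01/11/17-20:49:18.107168", FMT)

# Parsed as January 11th; a dd/MM source intending November 1st would
# parse silently to the wrong date, since both values are <= 12.
print(ts.isoformat())
```

This is why the format question matters: with both values at or below 12, a mismatched pattern produces a wrong timestamp rather than a parse error.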
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler (ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ‘timestamp’ field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort stub canned data file? Maybe I could see its formatting and try
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> following the same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> paste a few lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/metro
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> n-deployment/roles/sensor-stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> s/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> also see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push the logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> coming from snort through some setup ( getting pushed to kafka ), which I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> think of as live.  I also think you have manually pushed messages, where
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> same errors for things that are automatically pushed to kafka as you do
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> when you manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then that would be a problem.  If you see this error with your ‘live’
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> messages as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> time I push them into kafka topic then no, I dont see any error at that
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> time. If 'live' means something else here then please tell me what could it
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then that would be a problem.  If you see this error with your ‘live’
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> messages as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> date format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> these same errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ...
>>
>> [Message clipped]
>
>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Now I can't even see the sensor stub logs if I start the snort service from
monit. How can I flush kafka of everything that was sent earlier? What's
going wrong with the original sensor stub data?

On Tue, Nov 14, 2017 at 11:05 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> Here is what I am doing:
>
> Running this command: sudo tail -n 1 snort.out  |
> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list
> node1:6667 --topic snort
>
> sends this message to this topic: 01/11/17-21:32:37.925044
> ,1,999158,0,"'snort test alert'",TCP,192.168.138.158,
> 49207,95.163.121.204,80,00:00:00:00:00:00,00:00:00:00:00:00,
> 0x3C,***A****,0xC0313398,0xD1FE0623,,0xFAF0,128,0,2561,40,40960,,,,
>
> And I can see it under kafka client on enrichments topic:
>
> [image: Inline image 1]
>
> Time stamp can be matched against the sent message.
>
> The issue is that I cant see the message under kafka client on indexing
> topic and hence in kibana dashboard.
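[Editorial note] Since copy/paste line-wrapping was raised earlier in the thread as a failure mode, one cheap pre-flight check is to compare a candidate line's comma count against a known-good stub line before producing it. A sketch (the reference line is reassembled from the sample above; the expected field count is derived from it, not from any Metron specification):

```python
# Known-good record, reassembled from the sensor-stubs snort.out sample.
GOOD = ('01/11/17-21:32:37.925044 ,1,999158,0,"\'snort test alert\'",'
        'TCP,192.168.138.158,49207,95.163.121.204,80,'
        '00:00:00:00:00:00,00:00:00:00:00:00,0x3C,***A****,'
        '0xC0313398,0xD1FE0623,,0xFAF0,128,0,2561,40,40960,,,,')

EXPECTED_FIELDS = GOOD.count(",") + 1

def looks_complete(line):
    """Cheap check that a line was not truncated or wrapped in transit."""
    return line.count(",") + 1 == EXPECTED_FIELDS

print(looks_complete(GOOD), looks_complete(GOOD[:60]))
```

Naive comma counting only works here because the msg field in these samples contains no commas; a msg with embedded commas would need csv-aware splitting.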
>
> On Tue, Nov 14, 2017 at 10:56 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> Ran this sudo head -n 1 snort.out  | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>> --broker-list node1:6667 --topic snort
>>
>> and this as well, sudo tail -n 1 snort.out  |
>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>> --broker-list node1:6667 --topic snort
>>
>> and same issue again. The Storm indexing topology keeps giving errors on
>> previously failed messages.
>>
>> [image: Inline image 1]
>>
>> On Tue, Nov 14, 2017 at 10:40 AM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>> > wrote:
>>
>>> if by that command you mean cat snort.out | kafka-producer .... then I
>>> have been doing it, but with snort.out full of all the material copied from
>>> the GitHub raw snort.out link.
>>>
>>> On Tue, Nov 14, 2017 at 6:31 AM, Otto Fowler <ot...@gmail.com>
>>> wrote:
>>>
>>>> You should literally run the command I put in.
>>>>
>>>>
>>>>
>>>> On November 13, 2017 at 20:23:09, Syed Hammad Tahir (
>>>> mscs16059@itu.edu.pk) wrote:
>>>>
>>>> nope, that doesn't work when I just copy/paste one line in the kafka
>>>> producer. Haven't tried putting one line in snort.out and then pushing it.
>>>>
>>>> On Tue, Nov 14, 2017 at 6:09 AM, Otto Fowler <ot...@gmail.com>
>>>> wrote:
>>>>
>>>>> No,
>>>>> When you push the data to kafka just push 1 line and see if it works.
>>>>> Nothing to do with configs.
>>>>>
>>>>>
>>>>>
>>>>> On November 13, 2017 at 20:06:45, Syed Hammad Tahir (
>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>
>>>>> Ok, that could be a reason, but from where do I check this? From here or
>>>>> somewhere else:
>>>>>
>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>
>>>>>
>>>>> On Tue, Nov 14, 2017 at 12:59 AM, Otto Fowler <ottobackwards@gmail.com
>>>>> > wrote:
>>>>>
>>>>>> OK.
>>>>>>
>>>>>> I think you're sending errors to your indexing topic instead of the
>>>>>> error topic.
>>>>>> I think you posted your config before, but I don’t remember off the
>>>>>> top of my head
>>>>>> where the error topic is configured.
>>>>>>
>>>>>> If the error topic is the same as the indexing topic, and you ‘have
>>>>>> errors’  I think you may see this.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (
>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>
>>>>>> Here we go. This is what I see when I do kafka client on indexing
>>>>>> topic.
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> ok, I will try it again and report results
>>>>>>>
>>>>>>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <
>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>
>>>>>>>> You have to be seeing data in the indexing topic, you have errors
>>>>>>>> in the indexing topology that reads from it.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>
>>>>>>>> So you are saying:
>>>>>>>>
>>>>>>>> * when you do the kafka client on the enrichment topic things are
>>>>>>>> in json
>>>>>>>> * when you do the kafka client on the indexing topic they are csv
>>>>>>>>
>>>>>>>> 1- Yes, kafka client on enrichment shows json
>>>>>>>>
>>>>>>>> 2- No, I don't see anything in the kafka client on the indexing topic
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <
>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> So you are saying:
>>>>>>>>>
>>>>>>>>> * when you do the kafka client on the enrichment topic things are
>>>>>>>>> in json
>>>>>>>>> * when you do the kafka client on the indexing topic they are csv
>>>>>>>>>
>>>>>>>>> ???
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>
>>>>>>>>> From one of your earlier messages, This is what I have figured out
>>>>>>>>> so far.
>>>>>>>>>
>>>>>>>>> [image: Inline image 1]
>>>>>>>>>
>>>>>>>>> The issue is indicated by the red-marked portion of the flow.
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> Which .java file is causing the issue in this hdfsindexbolt. I
>>>>>>>>>> mean which one should I look at because there are so many listed here.
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser This one is parsing it correctly since I am getting error in the indexing bolt not in the parser one.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it in JSON?
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> No, I am not seeing it under indexing topic as JSON. I can
>>>>>>>>>>>>> only see JSON objects of stub sensor logs but not from those pushed by me
>>>>>>>>>>>>> via kafka producer.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <
>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Please use kafka-console-consumer.sh (same folder as the
>>>>>>>>>>>>>> producer script) and pull from the indexing topic.  Are you seeing it in
>>>>>>>>>>>>>> JSON there?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Kindly give me the mechanism implemented in metron through
>>>>>>>>>>>>>>> which a line such as this
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> is converted into a json object. Maybe what I am missing here is the formatting.
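[Editorial note] The conversion is done by org.apache.metron.parsers.snort.BasicSnortParser, which splits the CSV record and emits a JSON map. A rough Python sketch of the idea; the field names here are illustrative guesses, not the parser's actual output schema (the parser source is the authority for that):

```python
import csv
import io
import json

# Guessed names for the first ten columns, for illustration only;
# BasicSnortParser defines the real schema.
FIELDS = ["timestamp", "sig_generator", "sig_id", "sig_rev", "msg",
          "protocol", "ip_src_addr", "ip_src_port",
          "ip_dst_addr", "ip_dst_port"]

line = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",'
        'TCP,192.168.66.1,49581,192.168.66.121,22')

# csv handles the quoted msg column; zip pairs names with values.
values = next(csv.reader(io.StringIO(line)))
record = dict(zip(FIELDS, (v.strip() for v in values)))
print(json.dumps(record))
```

The key formatting requirement is positional: the parser assigns meaning by column order, so a line with missing or reordered fields produces a bad map even if it looks superficially similar.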
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Restarted snort; it is still giving errors for the indexing
>>>>>>>>>>>>>>>> topology even though I haven't pushed any data to the snort topic
>>>>>>>>>>>>>>>> yet. I have not run the kafka-producer command but it is still
>>>>>>>>>>>>>>>> giving an error for something.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> hi, This problem still persists guys .
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are
>>>>>>>>>>>>>>>>>>>>>>>>> calling the script with?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> The metron installation I have (single node based
>>>>>>>>>>>>>>>>>>>>>>>>> vm install) comes with sensor stubs. I assume that everything has already
>>>>>>>>>>>>>>>>>>>>>>>>> been done for those stub sensors to push the canned data. I am doing a
>>>>>>>>>>>>>>>>>>>>>>>>> similar thing, directly pushing the preformatted canned data to the kafka
>>>>>>>>>>>>>>>>>>>>>>>>> topic. I can see the logs in kibana dashboard when I start stub sensor from
>>>>>>>>>>>>>>>>>>>>>>>>> monit but then I push the same logs myself, those errors pop that I have
>>>>>>>>>>>>>>>>>>>>>>>>> shown earlier.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and
>>>>>>>>>>>>>>>>>>>>>>>>>> what's the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing"
>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic from the parser or from some other source?  It looks like there
>>>>>>>>>>>>>>>>>>>>>>>>>>>> are some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> error, here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> trace.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> seeing the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> those errors, what's the full stacktrace (that starts with the suggestion
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> bleeding from the individual writer into the writer component (It should be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> handled in the writer itself).  The fact that it's happening for both HDFS
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and ES is telling as well and I'm very interested in the full stacktrace
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> there because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka producer .... and now the error at storm parser topology is gone but
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,192.168.66.121,22,0A:00:27:00
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ***AP***,0x1E396BFC,0x56900BB6
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and run cat snort.out | kafka-console-producer.sh ... to make sure there
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> are no copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> dates in this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to modify the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler (ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ‘timestamp’ field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> stub canned data file? Maybe I could see its formatting and try following
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> a few lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/metro
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> n-deployment/roles/sensor-stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> s/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> also see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push the logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> coming from snort through some setup ( getting pushed to kafka ), which I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> think of as live.  I also think you have manually pushed messages, where
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> same errors for things that are automatically pushed to kafka as you do
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> when you manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then that would be a problem.  If you see this error with your ‘live’
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> messages as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I push them into kafka topic then no, I dont see any error at that time. If
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 'live' means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then that would be a problem.  If you see this error with your ‘live’
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> messages as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> same errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ...
>
> [Message clipped]

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Here is what I am doing:

Running this command: sudo tail -n 1 snort.out  |
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list
node1:6667 --topic snort

sends this message to this topic: 01/11/17-21:32:37.925044
,1,999158,0,"'snort test
alert'",TCP,192.168.138.158,49207,95.163.121.204,80,00:00:00:00:00:00,00:00:00:00:00:00,0x3C,***A****,0xC0313398,0xD1FE0623,,0xFAF0,128,0,2561,40,40960,,,,

And I can see it under the kafka client on the enrichments topic:

[image: Inline image 1]

The timestamp can be matched against the sent message.

The issue is that I can't see the message under the kafka client on the
indexing topic, and hence not in the Kibana dashboard.
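For reference, the CSV-to-JSON step being discussed here is done in Java by org.apache.metron.parsers.snort.BasicSnortParser; the following is only a minimal Python sketch of its shape. The field names and the epoch-millis conversion are illustrative assumptions, not the parser's exact schema, but the date pattern matches the parser's default quoted later in this thread:

```python
import json
from datetime import datetime

# Illustrative field names only -- the real schema lives in Metron's
# BasicSnortParser; this sketch just shows the CSV -> JSON map shape.
FIELDS = ["timestamp", "sig_generator", "sig_id", "sig_rev", "msg",
          "protocol", "ip_src_addr", "ip_src_port",
          "ip_dst_addr", "ip_dst_port"]

def snort_line_to_json(line):
    parts = [p.strip() for p in line.split(",")]
    record = dict(zip(FIELDS, parts))
    # The parser expects MM/dd/yy-HH:mm:ss.SSSSSS timestamps; a line in any
    # other date format fails at this step before reaching indexing.
    ts = datetime.strptime(record["timestamp"], "%m/%d/%y-%H:%M:%S.%f")
    record["timestamp"] = int(ts.timestamp() * 1000)  # epoch milliseconds
    return json.dumps(record)

print(snort_line_to_json(
    '01/11/17-21:32:37.925044 ,1,999158,0,"\'snort test alert\'",'
    'TCP,192.168.138.158,49207,95.163.121.204,80'))
```

If this step fails on a hand-pasted line but succeeds on the stub sensor's canned lines, the problem is the line itself (most likely the date format), not the topologies.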

On Tue, Nov 14, 2017 at 10:56 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> Ran this sudo head -n 1 snort.out  | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
> --broker-list node1:6667 --topic snort
>
> and this as well, sudo tail -n 1 snort.out  |
> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list
> node1:6667 --topic snort
>
> and same issue again. The Storm indexing topology keeps giving errors on
> previously failed messages.
>
> [image: Inline image 1]
>
> On Tue, Nov 14, 2017 at 10:40 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> if by that command you mean cat snort.out | kafka-producer .... then I
>> have been doing that, but with snort.out containing all of the material
>> copied from the GitHub raw snort.out link.
>>
>> On Tue, Nov 14, 2017 at 6:31 AM, Otto Fowler <ot...@gmail.com>
>> wrote:
>>
>>> You should literally run the command I put in.
>>>
>>>
>>>
>>> On November 13, 2017 at 20:23:09, Syed Hammad Tahir (
>>> mscs16059@itu.edu.pk) wrote:
>>>
>>> Nope, that doesn't work when I just copy/paste one line into the kafka
>>> producer. Haven't tried putting one line in snort.out and then pushing it.
>>>
>>> On Tue, Nov 14, 2017 at 6:09 AM, Otto Fowler <ot...@gmail.com>
>>> wrote:
>>>
>>>> No,
>>>> When you push the data to kafka just push 1 line and see if it works.
>>>> Nothing to do with configs.
>>>>
>>>>
>>>>
>>>> On November 13, 2017 at 20:06:45, Syed Hammad Tahir (
>>>> mscs16059@itu.edu.pk) wrote:
>>>>
>>>> OK, that could be a reason, but from where do I check this? From here or
>>>> somewhere else:
>>>>
>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>
>>>>
>>>> On Tue, Nov 14, 2017 at 12:59 AM, Otto Fowler <ot...@gmail.com>
>>>> wrote:
>>>>
>>>>> OK.
>>>>>
>>>>> I think you're sending errors to your indexing topic instead of the
>>>>> error topic.
>>>>> I think you posted your config before, but I don’t remember off the
>>>>> top of my head
>>>>> where the error topic is configured.
>>>>>
>>>>> If the error topic is the same as the indexing topic, and you ‘have
>>>>> errors’  I think you may see this.
>>>>>
>>>>>
>>>>>
>>>>> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (
>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>
>>>>> Here we go. This is what I see when I do kafka client on indexing
>>>>> topic.
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> ok, I will try it again and report results
>>>>>>
>>>>>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <
>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>
>>>>>>> You have to be seeing data in the indexing topic, you have errors in
>>>>>>> the indexing topology that reads from it.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>
>>>>>>> So you are saying:
>>>>>>>
>>>>>>> * when you do the kafka client on the enrichment topic things are in
>>>>>>> json
>>>>>>> * when you do the kafka client on the indexing topic they are csv
>>>>>>>
>>>>>>> 1- Yes, kafka client on enrichment shows json
>>>>>>>
>>>>>>> 2- No, I dont see anything in kafka client on indexing topic
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <
>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>
>>>>>>>> So you are saying:
>>>>>>>>
>>>>>>>> * when you do the kafka client on the enrichment topic things are
>>>>>>>> in json
>>>>>>>> * when you do the kafka client on the indexing topic they are csv
>>>>>>>>
>>>>>>>> ???
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>
>>>>>>>> From one of your earlier messages, This is what I have figured out
>>>>>>>> so far.
>>>>>>>>
>>>>>>>> [image: Inline image 1]
>>>>>>>>
>>>>>>>> The issue is indicated by the red-marked portion of the flow.
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> Which .java file is causing the issue in this hdfsindexbolt. I
>>>>>>>>> mean which one should I look at because there are so many listed here.
>>>>>>>>>
>>>>>>>>> [image: Inline image 1]
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser This one is parsing it correctly since I am getting error in the indexing bolt not in the parser one.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it in JSON?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> No, I am not seeing it under indexing topic as JSON. I can only
>>>>>>>>>>>> see JSON objects of stub sensor logs but not from those pushed by me via
>>>>>>>>>>>> kafka producer.
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <
>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Please use kafka-console-consumer.sh (same folder as the
>>>>>>>>>>>>> producer script) and pull from the indexing topic.  Are you seeing it in
>>>>>>>>>>>>> JSON there?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Kindly give me the mechanism implemented in metron through
>>>>>>>>>>>>>> which a line such as this
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> is converted into a json object. Maybe what I am missing here is the formatting.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Restarted snort, still giving me errors for the indexing
>>>>>>>>>>>>>>> topologies even though I haven't even pushed any data to the snort topic
>>>>>>>>>>>>>>> yet. I have not run the kafka-producer command but it's still giving an
>>>>>>>>>>>>>>> error for something.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181
>>>>>>>>>>>>>>>>>>>> -m DUMP
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z
>>>>>>>>>>>>>>>>>>>>> node1:2181 -m DUMP
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are
>>>>>>>>>>>>>>>>>>>>>>>> calling the script with?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> The metron installation I have (single node based
>>>>>>>>>>>>>>>>>>>>>>>> vm install) comes with sensor stubs. I assume that everything has already
>>>>>>>>>>>>>>>>>>>>>>>> been done for those stub sensors to push the canned data. I am doing the
>>>>>>>>>>>>>>>>>>>>>>>> similar thing, directly pushing the preformatted canned data to kafka
>>>>>>>>>>>>>>>>>>>>>>>> topic. I can see the logs in kibana dashboard when I start stub sensor from
>>>>>>>>>>>>>>>>>>>>>>>> monit but then I push the same logs myself, those errors pop that I have
>>>>>>>>>>>>>>>>>>>>>>>> shown earlier.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and
>>>>>>>>>>>>>>>>>>>>>>>>> what's the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka
>>>>>>>>>>>>>>>>>>>>>>>>>>> topic from the parser or from some other source?  It looks like there are
>>>>>>>>>>>>>>>>>>>>>>>>>>> some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>> error, here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> trace.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> seeing the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors, what's the full stacktrace (that starts with the suggestion you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> bleeding from the individual writer into the writer component (It should be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> handled in the writer itself).  The fact that it's happening for both HDFS
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and ES is telling as well and I'm very interested in the full stacktrace
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> there because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka producer .... and now the error at storm parser topology is gone but
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am now seeing this at the indexing toology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> test alert'",TCP,192.168.66.1,49581
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,192.168.66.121,22,0A:00:27:00
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ***AP***,0x1E396BFC,0x56900BB6
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and run cat snort.out | kafka-console-producer.sh ... to make sure there
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> are no copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> modify the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler (ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ‘timestamp’ field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> stub canned data file? Maybe I could see its formatting and try following
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> a few lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/metro
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> n-deployment/roles/sensor-stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> s/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> also see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> coming from snort through some setup ( getting pushed to kafka ), which I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> think of as live.  I also think you have manually pushed messages, where
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> same errors for things that are automatically pushed to kafka as you do
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> when you manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then that would be a problem.  If you see this error with your ‘live’
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> messages as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I push them into kafka topic then no, I dont see any error at that time. If
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 'live' means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then that would be a problem.  If you see this error with your ‘live’
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> messages as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> same errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> result in Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir ( <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ...
>>
>> [Message clipped]
>
>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Ran this sudo head -n 1 snort.out  |
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
--broker-list node1:6667 --topic snort

and this as well, sudo tail -n 1 snort.out  |
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
--broker-list node1:6667 --topic snort

and same issue again. The Storm indexing topology keeps giving errors on
previously failed messages.

[image: Inline image 1]
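One way to separate bad input lines from a genuinely stuck topology is a quick pre-flight check of snort.out before pushing it to the Kafka topic. This is a hedged sketch: the timestamp pattern mirrors the parser's default MM/dd/yy-HH:mm:ss.SSSSSS format quoted earlier in this thread, and the comma-count threshold is a rough heuristic, not the parser's actual validation:

```python
from datetime import datetime

def valid_snort_line(line):
    # Heuristic sanity check, not BasicSnortParser's real validation:
    # the leading field must parse as MM/dd/yy-HH:mm:ss.SSSSSS and the
    # line must look comma-delimited with a plausible number of fields.
    ts = line.split(",", 1)[0].strip()
    try:
        datetime.strptime(ts, "%m/%d/%y-%H:%M:%S.%f")
    except ValueError:
        return False
    return line.count(",") >= 10

# e.g. keep only lines that look parseable before piping them to the
# kafka-console-producer:
# good = [l for l in open("snort.out") if valid_snort_line(l)]
```

Lines that fail this check would also fail in the parser topology, so if every line passes and the indexing topology still errors, the problem is more likely old failed tuples being replayed than the fresh input.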

On Tue, Nov 14, 2017 at 10:40 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> if by that command you mean cat snort.out | kafka-producer .... then I
> have been doing that, but with snort.out containing all of the material
> copied from the GitHub raw snort.out link.
>
> On Tue, Nov 14, 2017 at 6:31 AM, Otto Fowler <ot...@gmail.com>
> wrote:
>
>> You should literally run the command I put in.
>>
>>
>>
>> On November 13, 2017 at 20:23:09, Syed Hammad Tahir (mscs16059@itu.edu.pk)
>> wrote:
>>
>> Nope, that doesn't work when I just copy/paste one line into the kafka producer.
>> Haven't tried putting one line in snort.out and then pushing it.
>>
>> On Tue, Nov 14, 2017 at 6:09 AM, Otto Fowler <ot...@gmail.com>
>> wrote:
>>
>>> No,
>>> When you push the data to kafka just push 1 line and see if it works.
>>> Nothing to do with configs.
>>>
>>>
>>>
>>> On November 13, 2017 at 20:06:45, Syed Hammad Tahir (
>>> mscs16059@itu.edu.pk) wrote:
>>>
>>> OK, that could be a reason, but from where do I check this? From here or
>>> somewhere else:
>>>
>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>
>>>
>>> On Tue, Nov 14, 2017 at 12:59 AM, Otto Fowler <ot...@gmail.com>
>>> wrote:
>>>
>>>> OK.
>>>>
>>>> I think you're sending errors to your indexing topic instead of the error
>>>> topic.
>>>> I think you posted your config before, but I don’t remember off the top
>>>> of my head
>>>> where the error topic is configured.
>>>>
>>>> If the error topic is the same as the indexing topic, and you ‘have
>>>> errors’  I think you may see this.
>>>>
>>>>
>>>>
>>>> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (
>>>> mscs16059@itu.edu.pk) wrote:
>>>>
>>>> Here we go. This is what I see when I do kafka client on indexing topic.
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> ok, I will try it again and report results
>>>>>
>>>>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <ottobackwards@gmail.com
>>>>> > wrote:
>>>>>
>>>>>> You have to be seeing data in the indexing topic, you have errors in
>>>>>> the indexing topology that reads from it.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>
>>>>>> So you are saying:
>>>>>>
>>>>>> * when you do the kafka client on the enrichment topic things are in
>>>>>> json
>>>>>> * when you do the kafka client on the indexing topic they are csv
>>>>>>
>>>>>> 1- Yes, kafka client on enrichment shows json
>>>>>>
>>>>>> 2- No, I dont see anything in kafka client on indexing topic
>>>>>>
>>>>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <
>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>
>>>>>>> So you are saying:
>>>>>>>
>>>>>>> * when you do the kafka client on the enrichment topic things are in
>>>>>>> json
>>>>>>> * when you do the kafka client on the indexing topic they are csv
>>>>>>>
>>>>>>> ???
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>
>>>>>>> From one of your earlier messages, This is what I have figured out
>>>>>>> so far.
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> The issue is indicated by the red-marked portion of the flow.
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> Which .java file is causing the issue in this hdfsindexbolt. I mean
>>>>>>>> which one should I look at because there are so many listed here.
>>>>>>>>
>>>>>>>> [image: Inline image 1]
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser This one is parsing it correctly since I am getting error in the indexing bolt not in the parser one.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it in JSON?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> No, I am not seeing it under indexing topic as JSON. I can only
>>>>>>>>>>> see JSON objects of stub sensor logs but not from those pushed by me via
>>>>>>>>>>> kafka producer.
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <
>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Please use kafka-console-consumer.sh (same folder as the
>>>>>>>>>>>> producer script) and pull from the indexing topic.  Are you seeing it in
>>>>>>>>>>>> JSON there?
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Kindly give me the mechanism implemented in metron through
>>>>>>>>>>>>> which a line such as this
>>>>>>>>>>>>>
>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>
>>>>>>>>>>>>> is converted into a json object. Maybe what I am missing here is the formatting.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Restarted snort, but it is still giving me an error for the indexing
>>>>>>>>>>>>>> topologies even though I haven't pushed any data to the snort topic
>>>>>>>>>>>>>> yet. I have not run the kafka-producer command, but it is still
>>>>>>>>>>>>>> giving an error for something.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181
>>>>>>>>>>>>>>>>>>> -m DUMP
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181
>>>>>>>>>>>>>>>>>>>> -m DUMP
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> This is the script/command I used
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are calling
>>>>>>>>>>>>>>>>>>>>>>> the script with?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> The metron installation I have (single node based vm
>>>>>>>>>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already been
>>>>>>>>>>>>>>>>>>>>>>> done for those stub sensors to push the canned data. I am doing a similar
>>>>>>>>>>>>>>>>>>>>>>> thing, directly pushing the preformatted canned data to the kafka topic. I
>>>>>>>>>>>>>>>>>>>>>>> can see the logs in the kibana dashboard when I start the stub sensor from
>>>>>>>>>>>>>>>>>>>>>>> monit, but when I push the same logs myself, the errors I have shown
>>>>>>>>>>>>>>>>>>>>>>> earlier pop up.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and
>>>>>>>>>>>>>>>>>>>>>>>> what's the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka
>>>>>>>>>>>>>>>>>>>>>>>>>> topic from the parser or from some other source?  It looks like there are
>>>>>>>>>>>>>>>>>>>>>>>>>> some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>> error, here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer
>>>>>>>>>>>>>>>>>>>>>>>>>>>> seeing the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those
>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors, what's the full stacktrace (that starts with the suggestion you
>>>>>>>>>>>>>>>>>>>>>>>>>>>> file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is
>>>>>>>>>>>>>>>>>>>>>>>>>>>> bleeding from the individual writer into the writer component (It should be
>>>>>>>>>>>>>>>>>>>>>>>>>>>> handled in the writer itself).  The fact that it's happening for both HDFS
>>>>>>>>>>>>>>>>>>>>>>>>>>>> and ES is telling as well and I'm very interested in the full stacktrace
>>>>>>>>>>>>>>>>>>>>>>>>>>>> there because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> OK, I did what Zeolla said, cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka producer .... and now the error at the storm parser topology is
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> gone, but I am now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> test alert'",TCP,192.168.66.1,49581
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,192.168.66.121,22,0A:00:27:00
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ***AP***,0x1E396BFC,0x56900BB6
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and run cat snort.out | kafka-console-producer.sh ... to make sure there
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> are no copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> modify the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler (ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ‘timestamp’ field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> stub canned data file? Maybe I could see its formatting and try following
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> That's how I am pushing my logs to the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> few lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/metro
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> n-deployment/roles/sensor-stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> s/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> also see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> coming from snort through some setup ( getting pushed to kafka ), which I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> think of as live.  I also think you have manually pushed messages, where
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> same errors for things that are automatically pushed to kafka as you do
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> when you manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then that would be a problem.  If you see this error with your ‘live’
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> messages as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> when I push them into the kafka topic, then no, I don't see any error at
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that time. If 'live' means something else here, then please tell me what it could be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then that would be a problem.  If you see this error with your ‘live’
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> messages as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> same errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> result in Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir ( <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ...
>
> [Message clipped]

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
If by that command you mean cat snort.out | kafka-producer ...., then I have
been doing that, but with snort.out containing all of the material copied
from the raw snort.out link on GitHub.
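One way to narrow this down before pushing lines to Kafka is to check the
timestamp format that Otto mentioned earlier in the thread
(MM/dd/yy-HH:mm:ss.SSSSSS). Below is a minimal sketch of such a check; it is
my own illustration, not Metron's BasicSnortParser, and it assumes the first
CSV field of each snort.out line is the alert timestamp:

```python
from datetime import datetime

# strptime equivalent of the parser's default MM/dd/yy-HH:mm:ss.SSSSSS
# format quoted in this thread.
SNORT_TS_FORMAT = "%m/%d/%y-%H:%M:%S.%f"

def timestamp_parses(line):
    """Return True if the first CSV field of a snort alert line
    matches the expected timestamp format."""
    ts = line.split(",", 1)[0].strip()
    try:
        datetime.strptime(ts, SNORT_TS_FORMAT)
        return True
    except ValueError:
        return False

# Sample line from the thread (truncated after the destination port).
line = ("01/11/17-20:49:18.107168 ,1,999158,0,\"'snort test alert'\","
        "TCP,192.168.66.1,49581,192.168.66.121,22")
print(timestamp_parses(line))  # prints: True
```

Lines that fail this check (for example a dd/MM/yy timestamp, or a stray
character from a bad copy/paste) would be a plausible cause of the parser or
indexing errors discussed in the rest of the thread.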

On Tue, Nov 14, 2017 at 6:31 AM, Otto Fowler <ot...@gmail.com>
wrote:

> You should literally run the command I put in.
>
>
>
> On November 13, 2017 at 20:23:09, Syed Hammad Tahir (mscs16059@itu.edu.pk)
> wrote:
>
> Nope, that doesn't work when I just copy/paste one line into the kafka
> producer. I haven't tried putting one line in snort.out and then pushing it.
>
> On Tue, Nov 14, 2017 at 6:09 AM, Otto Fowler <ot...@gmail.com>
> wrote:
>
>> No,
>> When you push the data to kafka just push 1 line and see if it works.
>> Nothing to do with configs.
>>
>>
>>
>> On November 13, 2017 at 20:06:45, Syed Hammad Tahir (mscs16059@itu.edu.pk)
>> wrote:
>>
>> Ok, that could be a reason, but where do I check this? From here or
>> somewhere else:
>>
>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>
>>
>> On Tue, Nov 14, 2017 at 12:59 AM, Otto Fowler <ot...@gmail.com>
>> wrote:
>>
>>> OK.
>>>
>>> I think you're sending errors to your indexing topic instead of the error
>>> topic.
>>> I think you posted your config before, but I don’t remember off the top
>>> of my head
>>> where the error topic is configured.
>>>
>>> If the error topic is the same as the indexing topic, and you ‘have
>>> errors’  I think you may see this.
>>>
>>>
>>>
>>> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (
>>> mscs16059@itu.edu.pk) wrote:
>>>
>>> Here we go. This is what I see when I do kafka client on indexing topic.
>>>
>>> [image: Inline image 1]
>>>
>>> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <
>>> mscs16059@itu.edu.pk> wrote:
>>>
>>>> ok, I will try it again and report results
>>>>
>>>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <ot...@gmail.com>
>>>> wrote:
>>>>
>>>>> You have to be seeing data in the indexing topic, you have errors in
>>>>> the indexing topology that reads from it.
>>>>>
>>>>>
>>>>>
>>>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>
>>>>> So you are saying:
>>>>>
>>>>> * when you do the kafka client on the enrichment topic things are in
>>>>> json
>>>>> * when you do the kafka client on the indexing topic they are csv
>>>>>
>>>>> 1- Yes, kafka client on enrichment shows json
>>>>>
>>>>> 2- No, I dont see anything in kafka client on indexing topic
>>>>>
>>>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <ottobackwards@gmail.com
>>>>> > wrote:
>>>>>
>>>>>> So you are saying:
>>>>>>
>>>>>> * when you do the kafka client on the enrichment topic things are in
>>>>>> json
>>>>>> * when you do the kafka client on the indexing topic they are csv
>>>>>>
>>>>>> ???
>>>>>>
>>>>>>
>>>>>>
>>>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>
>>>>>> From one of your earlier messages, This is what I have figured out so
>>>>>> far.
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>> The issue is indicated by the red-marked portion of the flow.
>>>>>>
>>>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> Which .java file is causing the issue in this hdfsindexbolt. I mean
>>>>>>> which one should I look at because there are so many listed here.
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser: this one is parsing it correctly, since I am getting the error in the indexing bolt, not in the parser.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser: does this parse the basic message and then convert it to JSON?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> No, I am not seeing it under indexing topic as JSON. I can only
>>>>>>>>>> see JSON objects of stub sensor logs but not from those pushed by me via
>>>>>>>>>> kafka producer.
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <
>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Please use kafka-console-consumer.sh (same folder as the
>>>>>>>>>>> producer script) and pull from the indexing topic.  Are you seeing it in
>>>>>>>>>>> JSON there?
>>>>>>>>>>>
>>>>>>>>>>> Jon
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Kindly give me the mechanism implemented in metron through
>>>>>>>>>>>> which a line such as this
>>>>>>>>>>>>
>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>
>>>>>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Restarted snort, but it is still giving me an error for the indexing
>>>>>>>>>>>>> topologies even though I haven't pushed any data to the snort topic yet. I
>>>>>>>>>>>>> have not run the kafka-producer command, but it is still giving an error
>>>>>>>>>>>>> for something.
>>>>>>>>>>>>>
>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>
>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181
>>>>>>>>>>>>>>>>>> -m DUMP
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181
>>>>>>>>>>>>>>>>>>> -m DUMP
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> This is the script/command I used
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are calling
>>>>>>>>>>>>>>>>>>>>>> the script with?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> The metron installation I have (single node based vm
>>>>>>>>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already been
>>>>>>>>>>>>>>>>>>>>>> done for those stub sensors to push the canned data. I am doing a similar
>>>>>>>>>>>>>>>>>>>>>> thing, directly pushing the preformatted canned data to the kafka topic. I
>>>>>>>>>>>>>>>>>>>>>> can see the logs in the kibana dashboard when I start the stub sensor from
>>>>>>>>>>>>>>>>>>>>>> monit, but when I push the same logs myself, the errors I have shown
>>>>>>>>>>>>>>>>>>>>>> earlier pop up.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and
>>>>>>>>>>>>>>>>>>>>>>> what's the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka
>>>>>>>>>>>>>>>>>>>>>>>>> topic from the parser or from some other source?  It looks like there are
>>>>>>>>>>>>>>>>>>>>>>>>> some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology
>>>>>>>>>>>>>>>>>>>>>>>>>> error, here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer
>>>>>>>>>>>>>>>>>>>>>>>>>>> seeing the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those
>>>>>>>>>>>>>>>>>>>>>>>>>>> errors, what's the full stacktrace (that starts with the suggestion you
>>>>>>>>>>>>>>>>>>>>>>>>>>> file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is bleeding
>>>>>>>>>>>>>>>>>>>>>>>>>>> from the individual writer into the writer component (It should be handled
>>>>>>>>>>>>>>>>>>>>>>>>>>> in the writer itself).  The fact that it's happening for both HDFS and ES
>>>>>>>>>>>>>>>>>>>>>>>>>>> is telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> OK, I did what Zeolla said, cat snort.out |
>>>>>>>>>>>>>>>>>>>>>>>>>>>> kafka producer .... and now the error at the storm parser topology is
>>>>>>>>>>>>>>>>>>>>>>>>>>>> gone, but I am now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> test alert'",TCP,192.168.66.1,49581
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,192.168.66.121,22,0A:00:27:00
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ***AP***,0x1E396BFC,0x56900BB6
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and run cat snort.out | kafka-console-producer.sh ... to make sure there
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> are no copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> modify the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
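To check that concretely, here is a minimal, self-contained sketch (the class name and the standalone use of java.time are mine, not Metron's code) of what the MM/dd/yy pattern quoted above does with the sample timestamp: a day-first record either parses to the wrong month silently, or fails outright when the day exceeds 12.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class SnortTimestampCheck {
    // Same pattern as the parser's defaultDateFormat above.
    static final DateTimeFormatter SNORT_FMT =
        DateTimeFormatter.ofPattern("MM/dd/yy-HH:mm:ss.SSSSSS");

    public static void main(String[] args) {
        // "01/11/17" is read as January 11, not November 1 -- no error is raised:
        LocalDateTime t = LocalDateTime.parse("01/11/17-20:49:18.107168", SNORT_FMT);
        System.out.println(t.getMonth()); // JANUARY

        // A dd/MM/yy record with a day greater than 12 cannot be parsed at all:
        try {
            LocalDateTime.parse("13/11/17-20:49:18.107168", SNORT_FMT);
        } catch (DateTimeParseException e) {
            System.out.println("day-first record rejected");
        }
    }
}
```

So a dd/MM source can produce both silent wrong-month timestamps and hard parse failures, depending on the day value.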
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ‘timestamp’ field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> stub canned data file? Maybe I could see its formatting and try following
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> That's how I am pushing my logs to the kafka
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> few lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> coming from snort through some setup ( getting pushed to kafka ), which I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> think of as live.  I also think you have manually pushed messages, where
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors for things that are automatically pushed to kafka as you do when you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> manually push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then that would be a problem.  If you see this error with your ‘live’
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> messages as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push them into the kafka topic then no, I don't see any error at that time. If
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 'live' means something else here then please tell me what it could be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then that would be a problem.  If you see this error with your ‘live’
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> messages as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> same errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> result in Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to see logs in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is an error I am seeing in the Storm UI
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parser bolt, in the snort section:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> able to get the snort logs in kibana dashboard. Any help will be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and that is related to snort. Could it be the logs I was pushing to kafka
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> topic earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I be looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi, I am back at work. Let's see if I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can find something in the logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> health of Red, so there's your problem.  I would go look in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ----------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> search head. Now where do I go in this to find out why I cant see the snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs in kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> chrome web browser from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> elasticsearch head on the vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>
>>>>>>>>>>> Jon
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Otto Fowler <ot...@gmail.com>.
You should literally run the command I put in.



On November 13, 2017 at 20:23:09, Syed Hammad Tahir (mscs16059@itu.edu.pk)
wrote:

Nope, that doesn't work when I just copy/paste one line into the kafka producer.
I haven't tried putting one line in snort.out and then pushing it.

On Tue, Nov 14, 2017 at 6:09 AM, Otto Fowler <ot...@gmail.com>
wrote:

> No,
> When you push the data to kafka just push 1 line and see if it works.
> Nothing to do with configs.
>
>
>
> On November 13, 2017 at 20:06:45, Syed Hammad Tahir (mscs16059@itu.edu.pk)
> wrote:
>
> Ok, that could be a reason, but from where do I check this? From here, or
> somewhere else?
>
> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>
>
> On Tue, Nov 14, 2017 at 12:59 AM, Otto Fowler <ot...@gmail.com>
> wrote:
>
>> OK.
>>
>> I think you're sending errors to your indexing topic instead of the error
>> topic.
>> I think you posted your config before, but I don’t remember off the top
>> of my head
>> where the error topic is configured.
>>
>> If the error topic is the same as the indexing topic, and you ‘have
>> errors’, I think you may see this.
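For reference, and as an assumption to verify against the docs for your Metron version: the parser error topic is a property in the global config stored in ZooKeeper, and its default is in fact the indexing topic. A sketch of the relevant fragment of global.json, with placeholder values for a full-dev-style install:

```json
{
  "es.ip": "node1",
  "es.port": "9300",
  "es.clusterName": "metron",
  "parser.error.topic": "indexing"
}
```

If error records and enriched records share the "indexing" topic, malformed error output would surface exactly where the indexing topology reads.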
>>
>>
>>
>> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (mscs16059@itu.edu.pk)
>> wrote:
>>
>> Here we go. This is what I see when I do the kafka client on the indexing topic.
>>
>> [image: Inline image 1]
>>
>> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>> > wrote:
>>
>>> ok, I will try it again and report results
>>>
>>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <ot...@gmail.com>
>>> wrote:
>>>
>>>> You have to be seeing data in the indexing topic, you have errors in
>>>> the indexing topology that reads from it.
>>>>
>>>>
>>>>
>>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>>> mscs16059@itu.edu.pk) wrote:
>>>>
>>>> So you are saying:
>>>>
>>>> * when you do the kafka client on the enrichment topic things are in
>>>> json
>>>> * when you do the kafka client on the indexing topic they are csv
>>>>
>>>> 1- Yes, kafka client on enrichment shows json
>>>>
>>>> 2- No, I don't see anything in the kafka client on the indexing topic
>>>>
>>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <ot...@gmail.com>
>>>> wrote:
>>>>
>>>>> So you are saying:
>>>>>
>>>>> * when you do the kafka client on the enrichment topic things are in
>>>>> json
>>>>> * when you do the kafka client on the indexing topic they are csv
>>>>>
>>>>> ???
>>>>>
>>>>>
>>>>>
>>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>
>>>>> From one of your earlier messages, This is what I have figured out so
>>>>> far.
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> The issue is indicated by the red-marked portion of the flow.
>>>>>
>>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> Which .java file is causing the issue in this hdfsindexbolt? I mean,
>>>>>> which one should I look at, because there are so many listed here?
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser is parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> Does org.apache.metron.parsers.snort.BasicSnortParser parse the basic message and then convert it into JSON?
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> No, I am not seeing it under the indexing topic as JSON. I can only
>>>>>>>>> see JSON objects from the stub sensor logs, but not from those I pushed via the
>>>>>>>>> kafka producer.
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <
>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Please use kafka-console-consumer.sh (same folder as the producer
>>>>>>>>>> script) and pull from the indexing topic.  Are you seeing it in JSON there?
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> Kindly explain the mechanism implemented in Metron through which
>>>>>>>>>>> a line such as this
>>>>>>>>>>>
>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>
>>>>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
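As a rough illustration of that mechanism (not Metron's actual code): the parser splits the CSV alert line and zips the columns with a fixed list of field names, then emits the resulting map as JSON. The field names below follow the column order I believe BasicSnortParser uses — treat them as an assumption; the real parser also handles commas inside the quoted msg field and converts the timestamp to epoch milliseconds.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SnortCsvToJson {
    // Assumed column order of Metron's BasicSnortParser -- verify against
    // the parser source for your version.
    private static final String[] FIELDS = {
        "timestamp", "sig_generator", "sig_id", "sig_rev", "msg", "protocol",
        "ip_src_addr", "ip_src_port", "ip_dst_addr", "ip_dst_port",
        "ethsrc", "ethdst", "ethlen", "tcpflags", "tcpseq", "tcpack",
        "tcplen", "tcpwindow", "ttl", "tos", "id", "dgmlen", "iplen",
        "icmptype", "icmpcode", "icmpid", "icmpseq"
    };

    // Naive comma split: adequate here because the sample msg contains no commas.
    public static Map<String, String> parse(String line) {
        String[] cols = line.split(",", -1);
        Map<String, String> record = new LinkedHashMap<>();
        for (int i = 0; i < FIELDS.length && i < cols.length; i++) {
            String v = cols[i].trim();
            if (!v.isEmpty()) {
                record.put(FIELDS[i], v); // empty columns (e.g. icmp*) are dropped
            }
        }
        return record;
    }

    public static void main(String[] args) {
        String line = "01/11/17-20:49:18.107168 ,1,999158,0,\"'snort test alert'\",TCP,"
            + "192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,"
            + "0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,";
        System.out.println(parse(line));
    }
}
```

If the column count or order of the pushed line does not match what the parser expects, the resulting JSON map has wrong or missing fields, which fails downstream rather than at the producer.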
>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Restarted snort; it is still giving me errors for the indexing topologies
>>>>>>>>>>>> even though I haven't pushed any data to the snort topic yet. I have
>>>>>>>>>>>> not run the kafka-producer command, but it's still giving an error for
>>>>>>>>>>>> something.
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181
>>>>>>>>>>>>>>>>>> -m DUMP
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are calling
>>>>>>>>>>>>>>>>>>>>> the script with?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> The metron installation I have (a single-node VM
>>>>>>>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already been
>>>>>>>>>>>>>>>>>>>>> done for those stub sensors to push the canned data. I am doing a similar
>>>>>>>>>>>>>>>>>>>>> thing, directly pushing the preformatted canned data to the kafka topic. I can
>>>>>>>>>>>>>>>>>>>>> see the logs in the kibana dashboard when I start the stub sensor from monit, but
>>>>>>>>>>>>>>>>>>>>> when I push the same logs myself, those errors that I have shown
>>>>>>>>>>>>>>>>>>>>> earlier pop up.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and
>>>>>>>>>>>>>>>>>>>>>> what's the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka
>>>>>>>>>>>>>>>>>>>>>>>> topic from the parser or from some other source?  It looks like there are
>>>>>>>>>>>>>>>>>>>>>>>> some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology
>>>>>>>>>>>>>>>>>>>>>>>>> error, here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer
>>>>>>>>>>>>>>>>>>>>>>>>>> seeing the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those
>>>>>>>>>>>>>>>>>>>>>>>>>> errors, what's the full stacktrace (that starts with the suggestion you
>>>>>>>>>>>>>>>>>>>>>>>>>> file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is bleeding
>>>>>>>>>>>>>>>>>>>>>>>>>> from the individual writer into the writer component (It should be handled
>>>>>>>>>>>>>>>>>>>>>>>>>> in the writer itself).  The fact that it's happening for both HDFS and ES
>>>>>>>>>>>>>>>>>>>>>>>>>> is telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> OK, I did what Zeolla said, cat snort.out | kafka
>>>>>>>>>>>>>>>>>>>>>>>>>>> producer .... and now the error at the storm parser topology is gone, but I am
>>>>>>>>>>>>>>>>>>>>>>>>>>> now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> result in Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to see logs in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/metro
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> n-deployment/roles/sensor-stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> s/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is an error I am seeing in the storm UI
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parser bolt in the snort section:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> able to get the snort logs in kibana dashboard. Any help will be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that is related to snort. Could it be the logs I was pushing to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hi, I am back at work. lets see if i
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can find something in logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> health of Red, so there's your problem.  I would go look in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ----------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> search head. Now where do I go in this to find out why I cant see the snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs in kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> web browser from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> head on the vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Nope, that doesn't work when I just copy/paste one line into the kafka producer.
I haven't tried putting one line in snort.out and then pushing it.

On Tue, Nov 14, 2017 at 6:09 AM, Otto Fowler <ot...@gmail.com>
wrote:

> No,
> When you push the data to kafka just push 1 line and see if it works.
> Nothing to do with configs.
>
>
>
> On November 13, 2017 at 20:06:45, Syed Hammad Tahir (mscs16059@itu.edu.pk)
> wrote:
>
> OK, that could be a reason, but where do I check this? From here or
> somewhere else:
>
> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>
>
> On Tue, Nov 14, 2017 at 12:59 AM, Otto Fowler <ot...@gmail.com>
> wrote:
>
>> OK.
>>
>> I think you're sending errors to your indexing topic instead of the error
>> topic.
>> I think you posted your config before, but I don’t remember off the top
>> of my head
>> where the error topic is configured.
>>
>> If the error topic is the same as the indexing topic, and you ‘have
>> errors’  I think you may see this.
>>
>>
>>
>> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (mscs16059@itu.edu.pk)
>> wrote:
>>
>> Here we go. This is what I see when I run the kafka client on the indexing topic.
>>
>> [image: Inline image 1]
>>
>> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>> > wrote:
>>
>>> ok, I will try it again and report results
>>>
>>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <ot...@gmail.com>
>>> wrote:
>>>
>>>> You have to be seeing data in the indexing topic, you have errors in
>>>> the indexing topology that reads from it.
>>>>
>>>>
>>>>
>>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>>> mscs16059@itu.edu.pk) wrote:
>>>>
>>>> So you are saying:
>>>>
>>>> * when you do the kafka client on the enrichment topic things are in
>>>> json
>>>> * when you do the kafka client on the indexing topic they are csv
>>>>
>>>> 1- Yes, the kafka client on the enrichment topic shows JSON
>>>>
>>>> 2- No, I don't see anything in the kafka client on the indexing topic
>>>>
>>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <ot...@gmail.com>
>>>> wrote:
>>>>
>>>>> So you are saying:
>>>>>
>>>>> * when you do the kafka client on the enrichment topic things are in
>>>>> json
>>>>> * when you do the kafka client on the indexing topic they are csv
>>>>>
>>>>> ???
>>>>>
>>>>>
>>>>>
>>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>
>>>>> From one of your earlier messages, this is what I have figured out so
>>>>> far.
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> The issue is indicated by the red-marked portion of the flow.
>>>>>
>>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> Which .java file is causing the issue in this hdfsindexbolt. I mean
>>>>>> which one should I look at because there are so many listed here.
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser This one is parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it to JSON?
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> No, I am not seeing it under the indexing topic as JSON. I can only
>>>>>>>>> see JSON objects from the stub sensor logs, but not from those I pushed
>>>>>>>>> via the kafka producer.
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <
>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Please use kafka-console-consumer.sh (same folder as the producer
>>>>>>>>>> script) and pull from the indexing topic.  Are you seeing it in JSON there?
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> Kindly explain the mechanism implemented in metron through which
>>>>>>>>>>> a line such as this
>>>>>>>>>>>
>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>
>>>>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
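
The transformation being asked about (a CSV alert line becoming a JSON map) can be sketched in a few lines of Python. This is not Metron's actual code path, just an illustration: the names in FIELDS are my assumption of BasicSnortParser's column order (only the first ten are listed), so verify them against the Metron source.

```python
import csv
import io
import json

# Assumed column order for the leading fields of a snort CSV alert --
# check these names against BasicSnortParser.java before relying on them.
FIELDS = ["timestamp", "sig_generator", "sig_id", "sig_rev", "msg",
          "protocol", "ip_src_addr", "ip_src_port",
          "ip_dst_addr", "ip_dst_port"]

def snort_line_to_json(line):
    # Snort CSV alerts are comma separated and the msg column is quoted,
    # so use a real CSV reader rather than a plain str.split(",").
    values = next(csv.reader(io.StringIO(line)))
    message = dict(zip(FIELDS, (v.strip() for v in values)))
    return json.dumps(message)

line = ("01/11/17-20:49:18.107168 ,1,999158,0,\"'snort test alert'\",TCP,"
        "192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,"
        "08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,"
        "0x1000,64,10,23403,76,77824,,,,")
print(snort_line_to_json(line))
```

I believe the real parser additionally converts the timestamp into epoch time rather than leaving it as a string, which this sketch leaves out.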
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Restarted snort; it's still giving me errors for the indexing
>>>>>>>>>>>> topologies even though I haven't pushed any data to the snort topic yet. I
>>>>>>>>>>>> have not run the kafka-producer command, but it's still giving an error for
>>>>>>>>>>>> something.
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181
>>>>>>>>>>>>>>>>>> -m DUMP
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are calling
>>>>>>>>>>>>>>>>>>>>> the script with?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> The metron installation I have (a single-node VM
>>>>>>>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already been
>>>>>>>>>>>>>>>>>>>>> done for those stub sensors to push the canned data. I am doing a similar
>>>>>>>>>>>>>>>>>>>>> thing, directly pushing the preformatted canned data to the kafka topic. I
>>>>>>>>>>>>>>>>>>>>> can see the logs in the kibana dashboard when I start the stub sensor from
>>>>>>>>>>>>>>>>>>>>> monit, but when I push the same logs myself, those errors pop up that I
>>>>>>>>>>>>>>>>>>>>> have shown earlier.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and
>>>>>>>>>>>>>>>>>>>>>> what's the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka
>>>>>>>>>>>>>>>>>>>>>>>> topic from the parser or from some other source?  It looks like there are
>>>>>>>>>>>>>>>>>>>>>>>> some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
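
As a quick sanity check of the point above: a record pulled off the indexing topic should deserialize as a JSON map, and a raw snort CSV line will not. A small sketch (plain Python, not Metron code):

```python
import json

def looks_like_indexing_record(raw):
    # By the indexing stage, each kafka record should be a JSON map
    # (the enriched message), not a raw CSV alert line.
    try:
        return isinstance(json.loads(raw), dict)
    except ValueError:
        return False

enriched = '{"ip_src_addr": "192.168.66.1", "protocol": "TCP"}'
raw_csv = '01/11/17-20:49:18.107168 ,1,999158,0'
print(looks_like_indexing_record(enriched))  # True
print(looks_like_indexing_record(raw_csv))   # False
```

Piping a handful of records from kafka-console-consumer.sh through a check like this makes it easy to spot non-JSON records mixed into the topic.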
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> No, I am no longer seeing the parsing topology
>>>>>>>>>>>>>>>>>>>>>>>>> error; here is the full stack trace:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer
>>>>>>>>>>>>>>>>>>>>>>>>>> seeing the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those
>>>>>>>>>>>>>>>>>>>>>>>>>> errors, what's the full stacktrace (that starts with the suggestion you
>>>>>>>>>>>>>>>>>>>>>>>>>> file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is bleeding
>>>>>>>>>>>>>>>>>>>>>>>>>> from the individual writer into the writer component (It should be handled
>>>>>>>>>>>>>>>>>>>>>>>>>> in the writer itself).  The fact that it's happening for both HDFS and ES
>>>>>>>>>>>>>>>>>>>>>>>>>> is telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> OK, I did what Zeolla said (cat snort.out | kafka
>>>>>>>>>>>>>>>>>>>>>>>>>>> producer ...) and now the error at the storm parser topology is gone, but I
>>>>>>>>>>>>>>>>>>>>>>>>>>> am now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>> test alert'",TCP,192.168.66.1,49581
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,192.168.66.121,22,0A:00:27:00
>>>>>>>>>>>>>>>>>>>>>>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ***AP***,0x1E396BFC,0x56900BB6
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com
>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> run cat snort.out | kafka-console-producer.sh ... to make sure there are no
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> copy-paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> modify the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
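
A sample timestamp can be exercised against that pattern outside the topology. The Python format below is my translation of the Java pattern (with %f standing in for SSSSSS), offered only as a sketch for checking records:

```python
from datetime import datetime

# Rough Python equivalent of the parser's "MM/dd/yy-HH:mm:ss.SSSSSS".
FMT = "%m/%d/%y-%H:%M:%S.%f"

ok = "01/11/17-20:49:18.107168"   # month/day/year -- parses
print(datetime.strptime(ok, FMT))

bad = "23/11/17-20:49:18.107168"  # day/month/year -- "23" is not a valid month
try:
    datetime.strptime(bad, FMT)
except ValueError as err:
    print("parse failed:", err)
```

Note that a dd/MM/yy record only fails loudly when the day is greater than 12; a date like 05/11/17 parses either way, just to the wrong day, so a clean parse alone does not prove the format is right.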
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ‘timestamp’ field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> stub canned data file? Maybe I could see its formatting and try following
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> few lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/metro
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> n-deployment/roles/sensor-stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> s/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> coming from snort through some setup ( getting pushed to kafka ), which I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> think of as live.  I also think you have manually pushed messages, where
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors for things that are automatically pushed to kafka as you do when you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push them into kafka topic then no, I dont see any error at that time. If
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 'live' means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> same errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> result in Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to see logs in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/metro
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> n-deployment/roles/sensor-stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> s/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is an error I am seeing in the storm UI
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parser bolt in the snort section:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> able to get the snort logs in kibana dashboard. Any help will be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that is related to snort. Could it be the logs I was pushing to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi, I am back at work. Let's see if I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can find something in the logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> health of Red, so there's your problem.  I would go look in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ----------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> search head. Now where do I go in this to find out why I cant see the snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs in kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> web browser from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> head on the vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Otto Fowler <ot...@gmail.com>.
No.
When you push the data to Kafka, just push one line and see if it works.
This has nothing to do with the configs.



On November 13, 2017 at 20:06:45, Syed Hammad Tahir (mscs16059@itu.edu.pk)
wrote:

OK, that could be a reason, but where do I check this? From here, or
somewhere else:

/usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP


On Tue, Nov 14, 2017 at 12:59 AM, Otto Fowler <ot...@gmail.com>
wrote:

> OK.
>
> I think you're sending errors to your indexing topic instead of the error
> topic.
> I think you posted your config before, but I don’t remember off the top of
> my head
> where the error topic is configured.
>
> If the error topic is the same as the indexing topic, and you ‘have
> errors’, I think you may see this.
>
>
>
> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (mscs16059@itu.edu.pk)
> wrote:
>
> Here we go. This is what I see when I do kafka client on indexing topic.
>
> [image: Inline image 1]
>
> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> OK, I will try it again and report the results.
>>
>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <ot...@gmail.com>
>> wrote:
>>
>>> You have to be seeing data in the indexing topic, you have errors in the
>>> indexing topology that reads from it.
>>>
>>>
>>>
>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>> mscs16059@itu.edu.pk) wrote:
>>>
>>> So you are saying:
>>>
>>> * when you do the kafka client on the enrichment topic things are in json
>>> * when you do the kafka client on the indexing topic they are csv
>>>
>>> 1- Yes, the kafka client on the enrichment topic shows JSON
>>>
>>> 2- No, I don't see anything in the kafka client on the indexing topic
>>>
>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <ot...@gmail.com>
>>> wrote:
>>>
>>>> So you are saying:
>>>>
>>>> * when you do the kafka client on the enrichment topic things are in
>>>> json
>>>> * when you do the kafka client on the indexing topic they are csv
>>>>
>>>> ???
>>>>
>>>>
>>>>
>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>> mscs16059@itu.edu.pk) wrote:
>>>>
>>>> From one of your earlier messages, This is what I have figured out so
>>>> far.
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> The issue is indicated by the red-marked portion of the flow.
>>>>
>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> Which .java file is causing the issue in this hdfsindexbolt? I mean,
>>>>> which one should I look at, because there are so many listed here.
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> org.apache.metron.parsers.snort.BasicSnortParser appears to be parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.
>>>>>>
>>>>>>
>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> Does org.apache.metron.parsers.snort.BasicSnortParser parse the basic message and then convert it to JSON?
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> No, I am not seeing it under the indexing topic as JSON. I can only
>>>>>>>> see JSON objects from the stub sensor logs, but not from those I
>>>>>>>> pushed via the kafka producer.
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <zeolla@gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Please use kafka-console-consumer.sh (same folder as the producer
>>>>>>>>> script) and pull from the indexing topic.  Are you seeing it in JSON there?
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> Kindly explain the mechanism implemented in Metron through which
>>>>>>>>>> a line such as this
>>>>>>>>>>
>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>
>>>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
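[Editor's note] As context for the question above: the snort parser reads each alert line as CSV and maps the columns onto named fields before emitting JSON. A minimal Python sketch of that idea follows; the field names here are illustrative only, since the authoritative schema lives in org.apache.metron.parsers.snort.BasicSnortParser, not in this sketch.

```python
import csv
import io
import json

# Illustrative field names only -- the real schema is defined by
# org.apache.metron.parsers.snort.BasicSnortParser, which has more columns.
FIELDS = ["timestamp", "sig_generator", "sig_id", "sig_rev", "msg",
          "protocol", "ip_src_addr", "ip_src_port",
          "ip_dst_addr", "ip_dst_port"]

# First ten columns of the sample alert line from the thread.
line = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",'
        'TCP,192.168.66.1,49581,192.168.66.121,22')

# The csv module handles the quoted alert message; zip pairs columns
# with names, and strip removes the stray space after the timestamp.
values = next(csv.reader(io.StringIO(line)))
record = {k: v.strip() for k, v in zip(FIELDS, values)}
print(json.dumps(record, indent=2))
```

If a pushed line does not split into the expected columns, the parser cannot build a well-formed map, which could surface downstream as the indexing errors discussed in this thread.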
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> I restarted snort, and it is still giving me errors for the
>>>>>>>>>>> indexing topologies even though I haven't pushed any data to the
>>>>>>>>>>> snort topic yet. I have not run the kafka-producer command, but it
>>>>>>>>>>> is still giving an error for something.
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> OK, doing it.
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> This is the script/command I used
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are calling
>>>>>>>>>>>>>>>>>>>> the script with?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> The Metron installation I have (a single-node VM
>>>>>>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already
>>>>>>>>>>>>>>>>>>>> been done for those stub sensors to push the canned data. I am doing a
>>>>>>>>>>>>>>>>>>>> similar thing, directly pushing the preformatted canned data to the
>>>>>>>>>>>>>>>>>>>> kafka topic. I can see the logs in the kibana dashboard when I start
>>>>>>>>>>>>>>>>>>>> the stub sensor from monit, but when I push the same logs myself, the
>>>>>>>>>>>>>>>>>>>> errors I have shown earlier pop up.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and what's
>>>>>>>>>>>>>>>>>>>>> the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka
>>>>>>>>>>>>>>>>>>>>>>> topic from the parser or from some other source?  It looks like there are
>>>>>>>>>>>>>>>>>>>>>>> some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
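[Editor's note] Casey's description above suggests a simple triage: by the time a record reaches the indexing topic, it should be a JSON map. A hedged Python sketch of such a check follows; the helper name is mine, not a Metron API.

```python
import json

def is_json_map(line: str) -> bool:
    """Return True if a record pulled from the indexing topic is a JSON map."""
    try:
        return isinstance(json.loads(line), dict)
    except ValueError:  # covers json.JSONDecodeError
        return False

# A parsed/enriched record should pass; a raw snort CSV line should not.
print(is_json_map('{"msg": "snort test alert", "ip_dst_port": "22"}'))  # True
print(is_json_map('01/11/17-20:49:18.107168 ,1,999158,0'))              # False
```

Piping the output of kafka-console-consumer.sh on the indexing topic through a filter like this would show whether non-JSON records are reaching it.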
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> No, I am no longer seeing the parsing topology
>>>>>>>>>>>>>>>>>>>>>>>> error; here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer seeing
>>>>>>>>>>>>>>>>>>>>>>>>> the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those
>>>>>>>>>>>>>>>>>>>>>>>>> errors, what's the full stacktrace (that starts with the suggestion you
>>>>>>>>>>>>>>>>>>>>>>>>> file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is bleeding
>>>>>>>>>>>>>>>>>>>>>>>>> from the individual writer into the writer component (It should be handled
>>>>>>>>>>>>>>>>>>>>>>>>> in the writer itself).  The fact that it's happening for both HDFS and ES
>>>>>>>>>>>>>>>>>>>>>>>>> is telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> OK, I did what Zeolla said (cat snort.out | kafka
>>>>>>>>>>>>>>>>>>>>>>>>>> producer ...), and now the error at the Storm parser topology is
>>>>>>>>>>>>>>>>>>>>>>>>>> gone, but I am now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> This is a single line I am trying to push:
>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com
>>>>>>>>>>>>>>>>>>>>>>>>>>> <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file and
>>>>>>>>>>>>>>>>>>>>>>>>>>>> run cat snort.out | kafka-console-producer.sh ... to make sure there are no
>>>>>>>>>>>>>>>>>>>>>>>>>>>> copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy- format, then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I believe you may see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> modify the default log timestamp format for snort in the short term.
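[Editor's note] Otto's point about the date format can be checked outside Metron. The sketch below uses plain Python (with `%f` standing in for Java's `SSSSSS` fraction) to show that the sample timestamp parses under both MM/dd and dd/MM patterns, just to different dates, so a mismatched pattern can silently yield wrong timestamps rather than an obvious failure.

```python
from datetime import datetime

# The timestamp from the sample snort alert line in this thread.
ts = "01/11/17-20:49:18.107168"

# Same string, two plausible patterns: January 11 vs November 1.
as_mm_dd = datetime.strptime(ts, "%m/%d/%y-%H:%M:%S.%f")
as_dd_mm = datetime.strptime(ts, "%d/%m/%y-%H:%M:%S.%f")

print(as_mm_dd.isoformat())  # 2017-01-11T20:49:18.107168
print(as_dd_mm.isoformat())  # 2017-11-01T20:49:18.107168
```

Note that when the day exceeds 12 (e.g. 13/11/17), the mismatched pattern fails outright instead of misparsing, which is one way such an error could surface only for some records.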
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> canned data file? Maybe I could see its formatting and try following the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> That's how I am pushing my logs to the kafka topic:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> few lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see these lines pushed out via the kafka consumer under the snort topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> coming from snort through some setup ( getting pushed to kafka ), which I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> think of as live.  I also think you have manually pushed messages, where
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors for things that are automatically pushed to kafka as you do when you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time when I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push them into the kafka topic, then no, I don't see any error at
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that time. If 'live' means something else here, then please tell me
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> what it could be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> result in Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see logs in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is an error I am seeing in the Storm UI
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parser bolt in the snort section:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> able to get the snort logs in kibana dashboard. Any help will be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that is related to snort. Could it be the logs I was pushing to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi, I am back at work. Let's see if I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can find something in the logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> health of Red, so there's your problem.  I would go look in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ----------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> search head. Now where do I go in this to find out why I cant see the snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs in kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> web browser from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> head on the vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
OK, that could be a reason, but where do I check this? From here or
somewhere else?

/usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
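One way to answer this from the DUMP output is to filter it for topic-related keys; a minimal sketch, assuming a hypothetical DUMP excerpt (the real key names vary by Metron version, so check the actual output):

```python
# Sketch: filter zk_load_configs.sh DUMP output for topic-related settings.
# The sample below is illustrative only; real key names depend on the
# Metron version, so inspect the actual DUMP output.
sample_dump = """\
PARSER Config: snort
{
  "parserClassName": "org.apache.metron.parsers.snort.BasicSnortParser",
  "sensorTopic": "snort"
}
"""

# Keep any line that mentions a topic, regardless of case.
matches = [line.strip() for line in sample_dump.splitlines()
           if "topic" in line.lower()]
print(matches)
```

Against a live cluster, piping the DUMP output through `grep -i topic` achieves the same thing.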


On Tue, Nov 14, 2017 at 12:59 AM, Otto Fowler <ot...@gmail.com>
wrote:

> OK.
>
> I think you're sending errors to your indexing topic instead of the error
> topic.
> I think you posted your config before, but I don’t remember off the top of
> my head
> where the error topic is configured.
>
> If the error topic is the same as the indexing topic, and you ‘have
> errors’  I think you may see this.
>
>
>
> On November 13, 2017 at 14:39:44, Syed Hammad Tahir (mscs16059@itu.edu.pk)
> wrote:
>
> Here we go. This is what I see when I do kafka client on indexing topic.
>
> [image: Inline image 1]
>
> On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> ok, I will try it again and report results
>>
>> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <ot...@gmail.com>
>> wrote:
>>
>>> You have to be seeing data in the indexing topic, you have errors in the
>>> indexing topology that reads from it.
>>>
>>>
>>>
>>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (
>>> mscs16059@itu.edu.pk) wrote:
>>>
>>> So you are saying:
>>>
>>> * when you do the kafka client on the enrichment topic things are in json
>>> * when you do the kafka client on the indexing topic they are csv
>>>
>>> 1- Yes, kafka client on enrichment shows json
>>>
>>> 2- No, I dont see anything in kafka client on indexing topic
>>>
>>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <ot...@gmail.com>
>>> wrote:
>>>
>>>> So you are saying:
>>>>
>>>> * when you do the kafka client on the enrichment topic things are in
>>>> json
>>>> * when you do the kafka client on the indexing topic they are csv
>>>>
>>>> ???
>>>>
>>>>
>>>>
>>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>>> mscs16059@itu.edu.pk) wrote:
>>>>
>>>> From one of your earlier messages, This is what I have figured out so
>>>> far.
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> The issue is indicated by the red-marked portion of the flow.
>>>>
>>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> Which .java file is causing the issue in this hdfsindexbolt. I mean
>>>>> which one should I look at because there are so many listed here.
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> org.apache.metron.parsers.snort.BasicSnortParser: this one is parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.
>>>>>>
>>>>>>
>>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it in JSON?
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> No, I am not seeing it under indexing topic as JSON. I can only see
>>>>>>>> JSON objects of stub sensor logs but not from those pushed by me via kafka
>>>>>>>> producer.
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <zeolla@gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Please use kafka-console-consumer.sh (same folder as the producer
>>>>>>>>> script) and pull from the indexing topic.  Are you seeing it in JSON there?
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> Kindly give me the mechanism implemented in metron through which
>>>>>>>>>> a line such as this
>>>>>>>>>>
>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>
>>>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> Restarted snort, still giving me error for indexing topologies
>>>>>>>>>>> even though I havent even pushed out any data to snort topic yet. I have
>>>>>>>>>>> not run the kafka-producer command but its still giving error for
>>>>>>>>>>> something.
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are calling
>>>>>>>>>>>>>>>>>>>> the script with?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> The metron installation I have (single node based vm
>>>>>>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already been
>>>>>>>>>>>>>>>>>>>> done for those stub sensors to push the canned data. I am doing the similar
>>>>>>>>>>>>>>>>>>>> thing, directly pushing the preformatted canned data to kafka topic. I can
>>>>>>>>>>>>>>>>>>>> see the logs in kibana dashboard when I start stub sensor from monit but
>>>>>>>>>>>>>>>>>>>> then I push the same logs myself, those errors pop that I have shown
>>>>>>>>>>>>>>>>>>>> earlier.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and what's
>>>>>>>>>>>>>>>>>>>>> the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka
>>>>>>>>>>>>>>>>>>>>>>> topic from the parser or from some other source?  It looks like there are
>>>>>>>>>>>>>>>>>>>>>>> some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology
>>>>>>>>>>>>>>>>>>>>>>>> error, here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer seeing
>>>>>>>>>>>>>>>>>>>>>>>>> the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those
>>>>>>>>>>>>>>>>>>>>>>>>> errors, what's the full stacktrace (that starts with the suggestion you
>>>>>>>>>>>>>>>>>>>>>>>>> file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is bleeding
>>>>>>>>>>>>>>>>>>>>>>>>> from the individual writer into the writer component (It should be handled
>>>>>>>>>>>>>>>>>>>>>>>>> in the writer itself).  The fact that it's happening for both HDFS and ES
>>>>>>>>>>>>>>>>>>>>>>>>> is telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka
>>>>>>>>>>>>>>>>>>>>>>>>>> producer .... and now the error at storm parser topology is gone but I am
>>>>>>>>>>>>>>>>>>>>>>>>>> now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort
>>>>>>>>>>>>>>>>>>>>>>>>>>> test alert'",TCP,192.168.66.1,49581
>>>>>>>>>>>>>>>>>>>>>>>>>>> ,192.168.66.121,22,0A:00:27:00
>>>>>>>>>>>>>>>>>>>>>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,
>>>>>>>>>>>>>>>>>>>>>>>>>>> ***AP***,0x1E396BFC,0x56900BB6
>>>>>>>>>>>>>>>>>>>>>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com
>>>>>>>>>>>>>>>>>>>>>>>>>>> <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file and
>>>>>>>>>>>>>>>>>>>>>>>>>>>> run cat snort.out | kafka-console-producer.sh ... to make sure there are no
>>>>>>>>>>>>>>>>>>>>>>>>>>>> copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> modify the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> canned data file? Maybe I could see its formatting and try following the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> few lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> coming from snort through some setup ( getting pushed to kafka ), which I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> think of as live.  I also think you have manually pushed messages, where
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> you see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors for things that are automatically pushed to kafka as you do when you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push them into kafka topic then no, I dont see any error at that time. If
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 'live' means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> result in Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see logs in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is some error I am seeing in the Storm UI
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parser bolt, in the snort section:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> able to get the snort logs in kibana dashboard. Any help will be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that is related to snort. Could it be the logs I was pushing to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hi, I am back at work. lets see if i
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can find something in logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> health of Red, so there's your problem.  I would go look in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ----------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> search head. Now where do I go in this to find out why I cant see the snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs in kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> web browser from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> head on the vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Otto Fowler <ot...@gmail.com>.
OK.

I think you're sending errors to your indexing topic instead of the error
topic.
I think you posted your config before, but I don’t remember off the top of
my head where the error topic is configured.

If the error topic is the same as the indexing topic, and you ‘have errors’,
I think you may see this.
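For reference, the records at issue are CSV snort alerts like the one quoted earlier in this thread. A minimal sketch of the CSV-to-JSON-map step, assuming illustrative field names (not BasicSnortParser's exact schema) and the MM/dd/yy-HH:mm:ss.SSSSSS timestamp pattern quoted earlier:

```python
import csv
import io
import json
from datetime import datetime

# One snort alert line as quoted in the thread (truncated to the first
# ten fields for brevity).
raw = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",'
       'TCP,192.168.66.1,49581,192.168.66.121,22')

# Illustrative field names; BasicSnortParser's real schema may differ.
FIELDS = ["timestamp", "sig_generator", "sig_id", "sig_rev", "msg",
          "protocol", "src_ip", "src_port", "dst_ip", "dst_port"]

# A CSV reader handles the quoted msg field; a naive split(",") would
# not, in general.
row = next(csv.reader(io.StringIO(raw)))
record = {k: v.strip() for k, v in zip(FIELDS, row)}

# Python equivalent of the parser's quoted MM/dd/yy-HH:mm:ss.SSSSSS
# pattern; a dd/MM/yy log line would fail or mis-parse at this step.
ts = datetime.strptime(record["timestamp"], "%m/%d/%y-%H:%M:%S.%f")
record["timestamp"] = ts.isoformat()

print(json.dumps(record, indent=2))
```

If manually pushed lines fail at a step like this while the stub sensor's lines pass, the timestamp or field layout of the pasted data is the likely culprit.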



On November 13, 2017 at 14:39:44, Syed Hammad Tahir (mscs16059@itu.edu.pk)
wrote:

Here we go. This is what I see when I do kafka client on indexing topic.

[image: Inline image 1]

On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> ok, I will try it again and report results
>
> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <ot...@gmail.com>
> wrote:
>
>> You have to be seeing data in the indexing topic, you have errors in the
>> indexing topology that reads from it.
>>
>>
>>
>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (mscs16059@itu.edu.pk)
>> wrote:
>>
>> So you are saying:
>>
>> * when you do the kafka client on the enrichment topic things are in json
>> * when you do the kafka client on the indexing topic they are csv
>>
>> 1- Yes, kafka client on enrichment shows json
>>
>> 2- No, I dont see anything in kafka client on indexing topic
>>
>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <ot...@gmail.com>
>> wrote:
>>
>>> So you are saying:
>>>
>>> * when you do the kafka client on the enrichment topic things are in json
>>> * when you do the kafka client on the indexing topic they are csv
>>>
>>> ???
>>>
>>>
>>>
>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>> mscs16059@itu.edu.pk) wrote:
>>>
>>> From one of your earlier messages, This is what I have figured out so
>>> far.
>>>
>>> [image: Inline image 1]
>>>
>>> The issue is indicated by the red-marked portion of the flow.
>>>
>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>> mscs16059@itu.edu.pk> wrote:
>>>
>>>> Which .java file is causing the issue in this hdfsindexbolt. I mean
>>>> which one should I look at because there are so many listed here.
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> org.apache.metron.parsers.snort.BasicSnortParser: this one is parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.
>>>>>
>>>>>
>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it in JSON?
>>>>>>
>>>>>>
>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> No, I am not seeing it under indexing topic as JSON. I can only see
>>>>>>> JSON objects of stub sensor logs but not from those pushed by me via kafka
>>>>>>> producer.
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Please use kafka-console-consumer.sh (same folder as the producer
>>>>>>>> script) and pull from the indexing topic.  Are you seeing it in JSON there?
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> Kindly give me the mechanism implemented in metron through which a
>>>>>>>>> line such as this
>>>>>>>>>
>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>
>>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> Restarted snort, still giving me errors for the indexing topologies
>>>>>>>>>> even though I haven't even pushed any data to the snort topic yet. I have
>>>>>>>>>> not run the kafka-producer command but it's still giving an error for
>>>>>>>>>> something.
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are calling the
>>>>>>>>>>>>>>>>>>> script with?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The metron installation I have (single node based vm
>>>>>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already been
>>>>>>>>>>>>>>>>>>> done for those stub sensors to push the canned data. I am doing a similar
>>>>>>>>>>>>>>>>>>> thing, directly pushing the preformatted canned data to the kafka topic. I can
>>>>>>>>>>>>>>>>>>> see the logs in the kibana dashboard when I start the stub sensor from monit, but
>>>>>>>>>>>>>>>>>>> when I push the same logs myself, those errors pop up, as I have shown
>>>>>>>>>>>>>>>>>>> earlier.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and what's
>>>>>>>>>>>>>>>>>>>> the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka
>>>>>>>>>>>>>>>>>>>>>> topic from the parser or from some other source?  It looks like there are
>>>>>>>>>>>>>>>>>>>>>> some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
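[Editor's sketch of the parser stage Casey describes above. This is a minimal illustration only; the field names and their order below are assumptions, not Metron's actual BasicSnortParser code, so check the parser source for the authoritative list.]

```python
import csv
import json

# Assumed snort CSV column names for illustration only; verify against
# Metron's BasicSnortParser source before relying on them.
FIELDS = ["timestamp", "sig_generator", "sig_id", "sig_rev", "msg", "proto",
          "src", "srcport", "dst", "dstport", "ethsrc", "ethdst", "ethlen",
          "tcpflags", "tcpseq", "tcpack", "tcplen", "tcpwindow", "ttl", "tos",
          "id", "dgmlen", "iplen", "icmptype", "icmpcode", "icmpid", "icmpseq"]

def snort_csv_to_json(line):
    """Turn one snort CSV alert line into the kind of JSON map the
    parser topology emits onto the enrichments topic."""
    values = next(csv.reader([line]))  # CSV-aware split (handles quoted msg)
    return json.dumps(dict(zip(FIELDS, values)))

line = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP,'
        '192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,'
        '08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,'
        '64,10,23403,76,77824,,,,')
print(snort_csv_to_json(line))
```

By the time a record reaches the indexing topic it should already look like this JSON map, which is why a raw CSV line there trips the indexing bolts.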
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology error,
>>>>>>>>>>>>>>>>>>>>>>> here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer seeing
>>>>>>>>>>>>>>>>>>>>>>>> the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those
>>>>>>>>>>>>>>>>>>>>>>>> errors, what's the full stacktrace (that starts with the suggestion you
>>>>>>>>>>>>>>>>>>>>>>>> file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is bleeding
>>>>>>>>>>>>>>>>>>>>>>>> from the individual writer into the writer component (It should be handled
>>>>>>>>>>>>>>>>>>>>>>>> in the writer itself).  The fact that it's happening for both HDFS and ES
>>>>>>>>>>>>>>>>>>>>>>>> is telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka
>>>>>>>>>>>>>>>>>>>>>>>>> producer .... and now the error at storm parser topology is gone but I am
>>>>>>>>>>>>>>>>>>>>>>>>> now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test
>>>>>>>>>>>>>>>>>>>>>>>>>> alert'",TCP,192.168.66.1,49581
>>>>>>>>>>>>>>>>>>>>>>>>>> ,192.168.66.121,22,0A:00:27:00
>>>>>>>>>>>>>>>>>>>>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,
>>>>>>>>>>>>>>>>>>>>>>>>>> ***AP***,0x1E396BFC,0x56900BB6
>>>>>>>>>>>>>>>>>>>>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com
>>>>>>>>>>>>>>>>>>>>>>>>>> <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file and
>>>>>>>>>>>>>>>>>>>>>>>>>>> run cat snort.out | kafka-console-producer.sh ... to make sure there are no
>>>>>>>>>>>>>>>>>>>>>>>>>>> copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in
>>>>>>>>>>>>>>>>>>>>>>>>>>>> this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then
>>>>>>>>>>>>>>>>>>>>>>>>>>>> you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to
>>>>>>>>>>>>>>>>>>>>>>>>>>>> modify the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
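[Editor's sketch of the ambiguity Otto raises above. The strptime patterns below are the rough Python equivalents of the Java date formats, which is an assumption on my part.]

```python
from datetime import datetime

ts = "01/11/17-20:49:18.107168"  # timestamp from the sample record in this thread

# Rough strptime equivalent of the parser's default MM/dd/yy-HH:mm:ss.SSSSSS:
month_first = datetime.strptime(ts, "%m/%d/%y-%H:%M:%S.%f")
# The day-first dd/MM/yy- reading parses just as happily:
day_first = datetime.strptime(ts, "%d/%m/%y-%H:%M:%S.%f")

print(month_first.date())  # 2017-01-11 under the parser's default
print(day_first.date())    # 2017-11-01 if the log was written day-first
```

Both patterns accept this timestamp because 01 and 11 are each valid as a month or a day, so the only way to resolve it is to check when the capture was actually taken.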
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’
>>>>>>>>>>>>>>>>>>>>>>>>>>>> field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the
>>>>>>>>>>>>>>>>>>>>>>>>>>>> format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>> canned data file? Maybe I could see its formatting and try following the
>>>>>>>>>>>>>>>>>>>>>>>>>>>> same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that's how I am pushing my logs to the kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a few
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/metro
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> n-deployment/roles/sensor-stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> s/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages coming
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from snort through some setup ( getting pushed to kafka ), which I think of
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as live.  I also think you have manually pushed messages, where you see
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors for things that are automatically pushed to kafka as you do when you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push them into kafka topic then no, I dont see any error at that time. If
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 'live' means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> result in Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see logs in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/metro
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> n-deployment/roles/sensor-stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> s/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is some error I am seeing in the Storm UI
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parser bolt in snort section:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> able to get the snort logs in kibana dashboard. Any help will be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that is related to snort. Could it be the logs I was pushing to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hi, I am back at work. lets see if i
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can find something in logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> health of Red, so there's your problem.  I would go look in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ----------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> search head. Now where do I go in this to find out why I can't see the snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs in kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> web browser from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> head on the vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>> --
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Otto Fowler <ot...@gmail.com>.
Maybe the problem is you are cat’ing the whole file, and the parser expects
1 line at a time.

Can you try a simple test by using this command to send to kafka instead of
what you are using?


sudo head -n 1 snort.out  |
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
--broker-list node1:6667 --topic snort

And see if you have the same problem?



On November 13, 2017 at 17:11:28, Otto Fowler (ottobackwards@gmail.com)
wrote:

So - looking at this:

The snort parser is failing here :

// validate the number of fields
if (records.size() != fieldNames.length) {
  throw new IllegalArgumentException("Unexpected number of fields, expected: "
      + fieldNames.length + " got: " + records.size());
}
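As a quick way to reproduce that check outside the topology, here is a Python sketch of the same field-count validation. The expected count of 27 is inferred from the sample record in this thread, not taken from Metron's source.

```python
import csv

EXPECTED_FIELDS = 27  # inferred from the 27-column sample record in this thread

def validate(line):
    """Mimic the parser's field-count check on a single snort CSV line."""
    records = next(csv.reader([line]))
    if len(records) != EXPECTED_FIELDS:
        raise ValueError("Unexpected number of fields, expected: "
                         f"{EXPECTED_FIELDS} got: {len(records)}")
    return records

good = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP,'
        '192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,'
        '08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,'
        '64,10,23403,76,77824,,,,')
print(len(validate(good)))  # 27

try:
    validate("01/11/17-20:49:18.107168 ,1,999158")  # truncated record
except ValueError as err:
    print(err)  # Unexpected number of fields, expected: 27 got: 3
```

Running each line of snort.out through a check like this before producing to Kafka shows the first offending record immediately, which is easier than digging it out of the Storm UI.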

If you look at the picture showing "Caused by: null pointer exception : 152",

that failure gets an Error Record created and sent.

Later:

For some reason the HDFS and indexing bolts are trying to access the
message as JSON, and the MessageGetStrategy's "get JSON from position" is
bombing.

I *think* there are multiple problems here.

1. Why is the snort data failing the parser?
2. Why are we trying to write the errors to HDFS and the index? I thought
they would just go to the error writer…




On November 13, 2017 at 14:39:44, Syed Hammad Tahir (mscs16059@itu.edu.pk)
wrote:

Here we go. This is what I see when I run the kafka client on the indexing topic.

[image: Inline image 1]

On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> ok, I will try it again and report results
>
> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <ot...@gmail.com>
> wrote:
>
>> You have to be seeing data in the indexing topic, you have errors in the
>> indexing topology that reads from it.
>>
>>
>>
>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (mscs16059@itu.edu.pk)
>> wrote:
>>
>> So you are saying:
>>
>> * when you do the kafka client on the enrichment topic things are in json
>> * when you do the kafka client on the indexing topic they are csv
>>
>> 1- Yes, kafka client on enrichment shows json
>>
>> 2- No, I don't see anything in the kafka client on the indexing topic
>>
>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <ot...@gmail.com>
>> wrote:
>>
>>> So you are saying:
>>>
>>> * when you do the kafka client on the enrichment topic things are in json
>>> * when you do the kafka client on the indexing topic they are csv
>>>
>>> ???
>>>
>>>
>>>
>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>> mscs16059@itu.edu.pk) wrote:
>>>
>>> From one of your earlier messages, This is what I have figured out so
>>> far.
>>>
>>> [image: Inline image 1]
>>>
>>> The issue is indicated by the red-marked portion of the flow.
>>>
>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>> mscs16059@itu.edu.pk> wrote:
>>>
>>>> Which .java file is causing the issue in this hdfsindexbolt. I mean
>>>> which one should I look at because there are so many listed here.
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> org.apache.metron.parsers.snort.BasicSnortParser: this one is parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.
>>>>>
>>>>>
>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> org.apache.metron.parsers.snort.BasicSnortParser: does this parse the basic message and then convert it to JSON?
>>>>>>
>>>>>>
>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> No, I am not seeing it under indexing topic as JSON. I can only see
>>>>>>> JSON objects of stub sensor logs but not from those pushed by me via kafka
>>>>>>> producer.
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Please use kafka-console-consumer.sh (same folder as the producer
>>>>>>>> script) and pull from the indexing topic.  Are you seeing it in JSON there?
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> Kindly give me the mechanism implemented in metron through which a
>>>>>>>>> line such as this
>>>>>>>>>
>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>
>>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> Restarted snort, still giving me errors for the indexing topologies
>>>>>>>>>> even though I haven't even pushed any data to the snort topic yet. I have
>>>>>>>>>> not run the kafka-producer command but it's still giving an error for
>>>>>>>>>> something.
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are calling the
>>>>>>>>>>>>>>>>>>> script with?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The metron installation I have (single node based vm
>>>>>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already been
>>>>>>>>>>>>>>>>>>> done for those stub sensors to push the canned data. I am doing a similar
>>>>>>>>>>>>>>>>>>> thing, directly pushing the preformatted canned data to the kafka topic. I can
>>>>>>>>>>>>>>>>>>> see the logs in the kibana dashboard when I start the stub sensor from monit, but
>>>>>>>>>>>>>>>>>>> when I push the same logs myself, those errors pop up, as I have shown
>>>>>>>>>>>>>>>>>>> earlier.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and what's
>>>>>>>>>>>>>>>>>>>> the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka
>>>>>>>>>>>>>>>>>>>>>> topic from the parser or from some other source?  It looks like there are
>>>>>>>>>>>>>>>>>>>>>> some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology error,
>>>>>>>>>>>>>>>>>>>>>>> here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer seeing
>>>>>>>>>>>>>>>>>>>>>>>> the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those
>>>>>>>>>>>>>>>>>>>>>>>> errors, what's the full stacktrace (that starts with the suggestion you
>>>>>>>>>>>>>>>>>>>>>>>> file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is bleeding
>>>>>>>>>>>>>>>>>>>>>>>> from the individual writer into the writer component (It should be handled
>>>>>>>>>>>>>>>>>>>>>>>> in the writer itself).  The fact that it's happening for both HDFS and ES
>>>>>>>>>>>>>>>>>>>>>>>> is telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka
>>>>>>>>>>>>>>>>>>>>>>>>> producer .... and now the error at the storm parser topology is gone, but I
>>>>>>>>>>>>>>>>>>>>>>>>> am now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com
>>>>>>>>>>>>>>>>>>>>>>>>>> <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file and
>>>>>>>>>>>>>>>>>>>>>>>>>>> run cat snort.out | kafka-console-producer.sh ... to make sure there are no
>>>>>>>>>>>>>>>>>>>>>>>>>>> copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in
>>>>>>>>>>>>>>>>>>>>>>>>>>>> this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then
>>>>>>>>>>>>>>>>>>>>>>>>>>>> you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to
>>>>>>>>>>>>>>>>>>>>>>>>>>>> modify the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’
>>>>>>>>>>>>>>>>>>>>>>>>>>>> field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the
>>>>>>>>>>>>>>>>>>>>>>>>>>>> format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>> canned data file? Maybe I could see its formatting and try following the
>>>>>>>>>>>>>>>>>>>>>>>>>>>> same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a few
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages coming
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from snort through some setup ( getting pushed to kafka ), which I think of
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as live.  I also think you have manually pushed messages, where you see
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors for things that are automatically pushed to kafka as you do when you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push them into kafka topic then no, I dont see any error at that time. If
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 'live' means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> result in Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see logs in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is an error I am seeing in the Storm UI
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parser bolt in the snort section:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> able to get the snort logs in kibana dashboard. Any help will be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that is related to snort. Could it be the logs I was pushing to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hi, I am back at work. Let's see if I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can find something in the logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> health of Red, so there's your problem.  I would go look in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ----------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> search head. Now where do I go in this to find out why I cant see the snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs in kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> web browser from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> head on the vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>> --
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Otto Fowler <ot...@gmail.com>.
So - looking at this:

The snort parser is failing here:

// validate the number of fields
if (records.size() != fieldNames.length) {
  throw new IllegalArgumentException("Unexpected number of fields, expected: "
      + fieldNames.length + " got: " + records.size());
}
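
That check is easy to try against the stub sample line outside the topology. Here is a minimal sketch (plain Python, not Metron code; the count of 27 is simply what this stub line yields under a naive comma split that keeps trailing empty fields):

```python
# Split a stub snort alert line the way a naive CSV split would,
# keeping trailing empty fields (like Java's split(",", -1)).
sample = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP,'
          '192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,'
          '08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,'
          '64,10,23403,76,77824,,,,')

fields = sample.split(',')  # the quoted alert field contains no commas here
print(len(fields))  # 27 for this stub line
```

If a hand-pasted line comes through with a different count (a wrapped line, a lost trailing comma), the validation above throws exactly that IllegalArgumentException.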

If you look at the picture, the "Caused by: null pointer exception : 152"
is where an error record gets created and sent.

Later:

For some reason the HDFS and indexing bolts are trying to access the
message as JSON, and the MessageGetStrategy's get-JSON-from-position call is
blowing up.
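
That failure mode is easy to reproduce: a raw snort CSV line is not valid JSON, so anything that tries to read the kafka record as a JSON map will throw. A generic sketch of the idea (not the actual MessageGetStrategy code):

```python
import json

# A raw snort record is CSV, not JSON, so a JSON read fails immediately.
csv_line = '01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP'
try:
    json.loads(csv_line)
    print('parsed as JSON')
except json.JSONDecodeError:
    print('not JSON')  # this is the situation the indexing bolts hit
```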

I *think* there are multiple problems here.

1. Why is snort failing the parser?
2. Why are we trying to write the errors to HDFS and the index? I thought
they would just go to the error writer…
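
On question 1, the earlier suspicion in this thread was the timestamp format. For what it's worth, the stub sample's timestamp does parse under the parser's default MM/dd/yy-HH:mm:ss.SSSSSS pattern (Python's strptime equivalent shown here as a sanity check, not Metron code):

```python
from datetime import datetime

# Python equivalent of the snort parser's default pattern
# "MM/dd/yy-HH:mm:ss.SSSSSS".
ts = datetime.strptime('01/11/17-20:49:18.107168', '%m/%d/%y-%H:%M:%S.%f')
print(ts.year, ts.month, ts.day, ts.microsecond)  # 2017 1 11 107168
```

Note that under this pattern 01/11/17 reads as January 11, so a record written locally as 11 January versus 1 November is indistinguishable here; only a day value above 12 would expose a dd/MM mix-up.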




On November 13, 2017 at 14:39:44, Syed Hammad Tahir (mscs16059@itu.edu.pk)
wrote:

Here we go. This is what I see when I run the kafka client on the indexing topic.

[image: Inline image 1]

On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> ok, I will try it again and report results
>
> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <ot...@gmail.com>
> wrote:
>
>> You have to be seeing data in the indexing topic; you have errors in the
>> indexing topology that reads from it.
>>
>>
>>
>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (mscs16059@itu.edu.pk)
>> wrote:
>>
>> So you are saying:
>>
>> * when you do the kafka client on the enrichment topic things are in json
>> * when you do the kafka client on the indexing topic they are csv
>>
>> 1- Yes, kafka client on enrichment shows json
>>
>> 2- No, I dont see anything in kafka client on indexing topic
>>
>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <ot...@gmail.com>
>> wrote:
>>
>>> So you are saying:
>>>
>>> * when you do the kafka client on the enrichment topic things are in json
>>> * when you do the kafka client on the indexing topic they are csv
>>>
>>> ???
>>>
>>>
>>>
>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>> mscs16059@itu.edu.pk) wrote:
>>>
>>> From one of your earlier messages, this is what I have figured out so
>>> far.
>>>
>>> [image: Inline image 1]
>>>
>>> The issue is indicated by the red-marked portion of the flow.
>>>
>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>> mscs16059@itu.edu.pk> wrote:
>>>
>>>> Which .java file is causing the issue in this hdfsIndexBolt? I mean,
>>>> which one should I look at, because there are so many listed here.
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> org.apache.metron.parsers.snort.BasicSnortParser This one is parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.
>>>>>
>>>>>
>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it to JSON?
>>>>>>
>>>>>>
>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> No, I am not seeing it under the indexing topic as JSON. I can only see
>>>>>>> JSON objects from the stub sensor logs, but not from the ones I pushed via
>>>>>>> the kafka producer.
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Please use kafka-console-consumer.sh (same folder as the producer
>>>>>>>> script) and pull from the indexing topic.  Are you seeing it in JSON there?
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> Kindly explain the mechanism implemented in Metron through which a
>>>>>>>>> line such as this
>>>>>>>>>
>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>
>>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> Restarted snort; it is still giving me errors for the indexing topologies
>>>>>>>>>> even though I haven't pushed any data to the snort topic yet. I have
>>>>>>>>>> not run the kafka-producer command, but it is still giving an error for
>>>>>>>>>> something.
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are calling the
>>>>>>>>>>>>>>>>>>> script with?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The metron installation I have (single node based vm
>>>>>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already been
>>>>>>>>>>>>>>>>>>> done for those stub sensors to push the canned data. I am doing the similar
>>>>>>>>>>>>>>>>>>> thing, directly pushing the preformatted canned data to kafka topic. I can
>>>>>>>>>>>>>>>>>>> see the logs in kibana dashboard when I start stub sensor from monit but
>>>>>>>>>>>>>>>>>>> then I push the same logs myself, those errors pop that I have shown
>>>>>>>>>>>>>>>>>>> earlier.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and what's
>>>>>>>>>>>>>>>>>>>> the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka
>>>>>>>>>>>>>>>>>>>>>> topic from the parser or from some other source?  It looks like there are
>>>>>>>>>>>>>>>>>>>>>> some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology error,
>>>>>>>>>>>>>>>>>>>>>>> here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer seeing
>>>>>>>>>>>>>>>>>>>>>>>> the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those
>>>>>>>>>>>>>>>>>>>>>>>> errors, what's the full stacktrace (that starts with the suggestion you
>>>>>>>>>>>>>>>>>>>>>>>> file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is bleeding
>>>>>>>>>>>>>>>>>>>>>>>> from the individual writer into the writer component (It should be handled
>>>>>>>>>>>>>>>>>>>>>>>> in the writer itself).  The fact that it's happening for both HDFS and ES
>>>>>>>>>>>>>>>>>>>>>>>> is telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka
>>>>>>>>>>>>>>>>>>>>>>>>> producer .... and now the error at storm parser topology is gone but I am
>>>>>>>>>>>>>>>>>>>>>>>>> now seeing this at the indexing toology
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test
>>>>>>>>>>>>>>>>>>>>>>>>>> alert'",TCP,192.168.66.1,49581
>>>>>>>>>>>>>>>>>>>>>>>>>> ,192.168.66.121,22,0A:00:27:00
>>>>>>>>>>>>>>>>>>>>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,
>>>>>>>>>>>>>>>>>>>>>>>>>> ***AP***,0x1E396BFC,0x56900BB6
>>>>>>>>>>>>>>>>>>>>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com
>>>>>>>>>>>>>>>>>>>>>>>>>> <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file and
>>>>>>>>>>>>>>>>>>>>>>>>>>> run cat snort.out | kafka-console-producer.sh ... to make sure there are no
>>>>>>>>>>>>>>>>>>>>>>>>>>> copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in
>>>>>>>>>>>>>>>>>>>>>>>>>>>> this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then
>>>>>>>>>>>>>>>>>>>>>>>>>>>> you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to
>>>>>>>>>>>>>>>>>>>>>>>>>>>> modify the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
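The date-format concern above is easy to check offline. A minimal sketch (Python used here purely for illustration; the Metron parser itself is Java) that parses the sample timestamp with the parser's default pattern:

```python
from datetime import datetime

# The parser's default pattern MM/dd/yy-HH:mm:ss.SSSSSS corresponds to
# "%m/%d/%y-%H:%M:%S.%f" in Python's strptime notation.
ts = datetime.strptime("01/11/17-20:49:18.107168", "%m/%d/%y-%H:%M:%S.%f")
print(ts)  # 2017-01-11 20:49:18.107168 (read as January 11 under MM/dd/yy)

# A day-first record such as 23/11/17-... fails under the same pattern,
# which is the failure mode described above.
try:
    datetime.strptime("23/11/17-20:49:18.107168", "%m/%d/%y-%H:%M:%S.%f")
except ValueError as err:
    print("unparseable:", err)
```

The same sample line parses either way under MM/dd or dd/MM when the day is 12 or less, so a wrong assumption can go unnoticed until a day-first date past the 12th arrives.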
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’
>>>>>>>>>>>>>>>>>>>>>>>>>>>> field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the
>>>>>>>>>>>>>>>>>>>>>>>>>>>> format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>> canned data file? Maybe I could see its formatting and try following the
>>>>>>>>>>>>>>>>>>>>>>>>>>>> same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a few
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages coming
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from snort through some setup ( getting pushed to kafka ), which I think of
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as live.  I also think you have manually pushed messages, where you see
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors for things that are automatically pushed to kafka as you do when you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> manually push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push them into kafka topic then no, I dont see any error at that time. If
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 'live' means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> result in Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see logs in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is some error I am seeing in the stormUI
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parser bolt in the snort section:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> able to get the snort logs in kibana dashboard. Any help will be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that is related to snort. Could it be the logs I was pushing to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hi, I am back at work. lets see if i
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can find something in logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> health of Red, so there's your problem.  I would go look in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ----------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> search head. Now where do I go in this to find out why I cant see the snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs in kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> web browser from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> head on the vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>> --
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Here we go. This is what I see when I run the kafka client on the indexing topic.

[image: Inline image 1]

On Tue, Nov 14, 2017 at 12:03 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> ok, I will try it again and report results
>
> On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <ot...@gmail.com>
> wrote:
>
>> You have to be seeing data in the indexing topic; you have errors in the
>> indexing topology that reads from it.
>>
>>
>>
>> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (mscs16059@itu.edu.pk)
>> wrote:
>>
>> So you are saying:
>>
>> * when you do the kafka client on the enrichment topic things are in json
>> * when you do the kafka client on the indexing topic they are csv
>>
>> 1- Yes, the kafka client on enrichment shows json
>>
>> 2- No, I don't see anything in the kafka client on the indexing topic
>>
>> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <ot...@gmail.com>
>> wrote:
>>
>>> So you are saying:
>>>
>>> * when you do the kafka client on the enrichment topic things are in json
>>> * when you do the kafka client on the indexing topic they are csv
>>>
>>> ???
>>>
>>>
>>>
>>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (
>>> mscs16059@itu.edu.pk) wrote:
>>>
>>> From one of your earlier messages, this is what I have figured out so
>>> far.
>>>
>>> [image: Inline image 1]
>>>
>>> The issue is indicated by the red-marked portion of the flow.
>>>
>>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <
>>> mscs16059@itu.edu.pk> wrote:
>>>
>>>> Which .java file is causing the issue in this hdfsindexbolt? I mean,
>>>> which one should I look at, because there are so many listed here.
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> org.apache.metron.parsers.snort.BasicSnortParser must be parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.
>>>>>
>>>>>
>>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it to JSON?
>>>>>>
>>>>>>
>>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> No, I am not seeing it under the indexing topic as JSON. I can only see
>>>>>>> JSON objects from the stub sensor logs, not from those I pushed via the
>>>>>>> kafka producer.
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Please use kafka-console-consumer.sh (same folder as the producer
>>>>>>>> script) and pull from the indexing topic.  Are you seeing it in JSON there?
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> Kindly give me the mechanism implemented in metron through which a
>>>>>>>>> line such as this
>>>>>>>>>
>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>
>>>>>>>>> is converted into a json object. Maybe what I am missing here is the formatting.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
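For illustration, here is a hedged sketch of how a CSV alert line like the one above can be turned into a JSON object. The column names below are assumptions for the first ten positions of snort's CSV alert output, not Metron's authoritative field list (BasicSnortParser defines the real mapping):

```python
import csv
import json
from datetime import datetime

line = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP,'
        '192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,'
        '08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,'
        ',0x1000,64,10,23403,76,77824,,,,')

# csv handles the double-quoted msg field; later positional columns are
# omitted here for brevity.
fields = next(csv.reader([line]))
names = ["timestamp", "sig_generator", "sig_id", "sig_rev", "msg",
         "protocol", "ip_src_addr", "ip_src_port", "ip_dst_addr",
         "ip_dst_port"]  # assumed names, for illustration only
record = dict(zip(names, (f.strip() for f in fields)))

# Convert the timestamp using the parser's default MM/dd/yy-HH:mm:ss.SSSSSS
# pattern; epoch milliseconds here use the local timezone.
ts = datetime.strptime(record["timestamp"], "%m/%d/%y-%H:%M:%S.%f")
record["timestamp"] = int(ts.timestamp() * 1000)

print(json.dumps(record, indent=2))
```

If a record on the topic fails this kind of parse (wrong delimiter count, wrong date format), the parser topology is the place that would reject it.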
>>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> Restarted snort; still getting errors for the indexing topologies
>>>>>>>>>> even though I haven't pushed any data to the snort topic yet. I have
>>>>>>>>>> not run the kafka-producer command, but it is still giving an error
>>>>>>>>>> for something.
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> ok, Doing it.
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> This is the script/command I used
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> What topic? What are the parameters you are calling the
>>>>>>>>>>>>>>>>>>> script with?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The metron installation I have (single node based vm
>>>>>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already
>>>>>>>>>>>>>>>>>>> been done for those stub sensors to push the canned data. I am doing a
>>>>>>>>>>>>>>>>>>> similar thing, directly pushing the preformatted canned data to the kafka
>>>>>>>>>>>>>>>>>>> topic. I can see the logs in the kibana dashboard when I start the stub
>>>>>>>>>>>>>>>>>>> sensor from monit, but when I push the same logs myself, the errors I
>>>>>>>>>>>>>>>>>>> have shown earlier pop up.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and what's
>>>>>>>>>>>>>>>>>>>> the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka
>>>>>>>>>>>>>>>>>>>>>> topic from the parser or from some other source?  It looks like there are
>>>>>>>>>>>>>>>>>>>>>> some records in kafka that are not JSON.  By the time it gets to the
>>>>>>>>>>>>>>>>>>>>>> indexing kafka topic, it should be a JSON map.  The parser topology emits
>>>>>>>>>>>>>>>>>>>>>> that JSON map and then the enrichments topology enrich that map and emits
>>>>>>>>>>>>>>>>>>>>>> the enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>>>
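The check described above (every record on the indexing topic should be a JSON map) can be automated by piping the console consumer through a small filter. A minimal sketch; this helper is hypothetical and not part of Metron:

```python
import json

def classify(lines):
    """Split kafka records into JSON and non-JSON; return (ok_count, bad_lines)."""
    ok, bad = 0, []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank records
        try:
            json.loads(line)
            ok += 1
        except ValueError:
            bad.append(line)
    return ok, bad

# Demo: one valid JSON map, one raw CSV line that would break indexing.
ok, bad = classify(['{"ip_src_addr": "192.168.66.1"}',
                    '01/11/17-20:49:18.107168 ,1,999158,0'])
print(ok, "JSON records;", len(bad), "non-JSON records")
```

In practice the lines would come from something like `kafka-console-consumer.sh ... --topic indexing` piped into this script; any line reported as non-JSON points at whatever wrote it to the topic.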
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology error,
>>>>>>>>>>>>>>>>>>>>>>> here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer seeing
>>>>>>>>>>>>>>>>>>>>>>>> the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those
>>>>>>>>>>>>>>>>>>>>>>>> errors, what's the full stacktrace (that starts with the suggestion you
>>>>>>>>>>>>>>>>>>>>>>>> file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is bleeding
>>>>>>>>>>>>>>>>>>>>>>>> from the individual writer into the writer component (It should be handled
>>>>>>>>>>>>>>>>>>>>>>>> in the writer itself).  The fact that it's happening for both HDFS and ES
>>>>>>>>>>>>>>>>>>>>>>>> is telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka
>>>>>>>>>>>>>>>>>>>>>>>>> producer .... and now the error at storm parser topology is gone but I am
>>>>>>>>>>>>>>>>>>>>>>>>> now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test
>>>>>>>>>>>>>>>>>>>>>>>>>> alert'",TCP,192.168.66.1,49581
>>>>>>>>>>>>>>>>>>>>>>>>>> ,192.168.66.121,22,0A:00:27:00
>>>>>>>>>>>>>>>>>>>>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,
>>>>>>>>>>>>>>>>>>>>>>>>>> ***AP***,0x1E396BFC,0x56900BB6
>>>>>>>>>>>>>>>>>>>>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com
>>>>>>>>>>>>>>>>>>>>>>>>>> <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file and
>>>>>>>>>>>>>>>>>>>>>>>>>>> run cat snort.out | kafka-console-producer.sh ... to make sure there are no
>>>>>>>>>>>>>>>>>>>>>>>>>>> copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in
>>>>>>>>>>>>>>>>>>>>>>>>>>>> this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy- format, then
>>>>>>>>>>>>>>>>>>>>>>>>>>>> I believe you may see this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to
>>>>>>>>>>>>>>>>>>>>>>>>>>>> modify the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’
>>>>>>>>>>>>>>>>>>>>>>>>>>>> field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the
>>>>>>>>>>>>>>>>>>>>>>>>>>>> format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>> canned data file? Maybe I could see its formatting and try following the
>>>>>>>>>>>>>>>>>>>>>>>>>>>> same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a few
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> lines from here:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages coming
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from snort through some setup ( getting pushed to kafka ), which I think of
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as live.  I also think you have manually pushed messages, where you see
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors for things that are automatically pushed to kafka as you do when you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> manually push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> push them into kafka topic then no, I dont see any error at that time. If
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 'live' means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> result in Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see logs in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/metro
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> n-deployment/roles/sensor-stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> s/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is an error I am seeing in the Storm UI
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parser bolt in the snort section:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> able to get the snort logs in kibana dashboard. Any help will be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that is related to snort. Could it be the logs I was pushing to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> be looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hi, I am back at work. lets see if i
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> can find something in logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> health of Red, so there's your problem.  I would go look in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ at some logs.
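The "health of Red" diagnosis above can be sketched as a small check against the Elasticsearch `_cluster/health` response (e.g. from `curl 'http://node1:9200/_cluster/health'` on the single-node VM; the host and port are assumptions). A red status means at least one primary shard is unassigned, so writes to the affected index fail:

```python
import json

# Interpret a _cluster/health JSON response. "red" = at least one primary
# shard unassigned; "yellow" = replicas unassigned but primaries ok.
def diagnose(health_json):
    h = json.loads(health_json)
    status = h.get("status")
    if status == "red":
        return "red: %d unassigned shard(s); check /var/log/elasticsearch/" % (
            h.get("unassigned_shards", 0))
    if status == "yellow":
        return "yellow: primaries assigned, some replicas are not"
    return "green: all shards assigned"

sample = '{"cluster_name": "metron", "status": "red", "unassigned_shards": 4}'
print(diagnose(sample))  # red: 4 unassigned shard(s); check /var/log/elasticsearch/
```
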
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ----------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> search head. Now where do I go in this to find out why I cant see the snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs in kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> web browser from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> head on the vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>> --
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
OK, I will try it again and report the results.

On Tue, Nov 14, 2017 at 12:00 AM, Otto Fowler <ot...@gmail.com>
wrote:

> You have to be seeing data in the indexing topic; you have errors in the
> indexing topology that reads from it.
>
>
>
> On November 13, 2017 at 13:42:14, Syed Hammad Tahir (mscs16059@itu.edu.pk)
> wrote:
>
> So you are saying:
>
> * when you do the kafka client on the enrichment topic things are in json
> * when you do the kafka client on the indexing topic they are csv
>
> 1- Yes, the kafka client on the enrichment topic shows JSON
>
> 2- No, I don't see anything in the kafka client on the indexing topic
>
> On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <ot...@gmail.com>
> wrote:
>
>> So you are saying:
>>
>> * when you do the kafka client on the enrichment topic things are in json
>> * when you do the kafka client on the indexing topic they are csv
>>
>> ???
>>
>>
>>
>> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (mscs16059@itu.edu.pk)
>> wrote:
>>
>> From one of your earlier messages, This is what I have figured out so
>> far.
>>
>> [image: Inline image 1]
>>
>> The issue is indicated by the red-marked portion of the flow.
>>
>> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>> > wrote:
>>
>>> Which .java file is causing the issue in this hdfsindexbolt. I mean
>>> which one should I look at because there are so many listed here.
>>>
>>> [image: Inline image 1]
>>>
>>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>> > wrote:
>>>
>>>> org.apache.metron.parsers.snort.BasicSnortParser: this one is parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.
>>>>
>>>>
>>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> org.apache.metron.parsers.snort.BasicSnortParser: does this parse the basic message and then convert it to JSON?
>>>>>
>>>>>
>>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> No, I am not seeing it under indexing topic as JSON. I can only see
>>>>>> JSON objects of stub sensor logs but not from those pushed by me via kafka
>>>>>> producer.
>>>>>>
>>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Please use kafka-console-consumer.sh (same folder as the producer
>>>>>>> script) and pull from the indexing topic.  Are you seeing it in JSON there?
>>>>>>>
>>>>>>> Jon
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> Kindly give me the mechanism implemented in metron through which a
>>>>>>>> line such as this
>>>>>>>>
>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>
>>>>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
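The CSV-to-JSON step asked about above can be sketched like this. The field names are assumptions for illustration only; they are not guaranteed to match the exact output of org.apache.metron.parsers.snort.BasicSnortParser:

```python
import json
from datetime import datetime

# Illustrative field order for a snort alert CSV line (assumed, not the
# authoritative Metron schema). Only the first ten columns are modeled.
FIELDS = ["timestamp", "sig_generator", "sig_id", "sig_rev", "msg",
          "protocol", "ip_src_addr", "ip_src_port",
          "ip_dst_addr", "ip_dst_port"]

def parse_snort_line(line):
    parts = [p.strip() for p in line.split(",")]
    record = dict(zip(FIELDS, parts))
    # The parser expects MM/dd/yy-HH:mm:ss.SSSSSS; convert to epoch millis.
    ts = datetime.strptime(record["timestamp"], "%m/%d/%y-%H:%M:%S.%f")
    record["timestamp"] = int(ts.timestamp() * 1000)
    return record

line = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP,'
        '192.168.66.1,49581,192.168.66.121,22')
print(json.dumps(parse_snort_line(line), indent=2))
```

If a line cannot be split and converted this way (wrong column count, unparseable timestamp), the parser bolt raises, which is the kind of error showing up in the Storm UI.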
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> Restarted snort, but it is still giving an error for the indexing
>>>>>>>>> topologies even though I haven't pushed any data to the snort topic
>>>>>>>>> yet. I have not run the kafka-producer command, but it is still
>>>>>>>>> giving an error for something.
>>>>>>>>>
>>>>>>>>> [image: Inline image 1]
>>>>>>>>>
>>>>>>>>> [image: Inline image 2]
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> ok, Doing it.
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>>
>>>>>>>>>>> Jon
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>>
>>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> ?
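A quick way to sanity-check the DUMP output asked for above is to validate the snort parser config against the expected class and topic. The exact JSON shape here is an assumption modeled on a typical Metron parser config, not a quote of the real dump:

```python
import json

# Hypothetical validation of a snort parser config as printed by
# zk_load_configs.sh -m DUMP. Compare against your own dump output.
def check_snort_config(raw):
    cfg = json.loads(raw)
    problems = []
    if cfg.get("parserClassName") != "org.apache.metron.parsers.snort.BasicSnortParser":
        problems.append("unexpected parserClassName")
    if cfg.get("sensorTopic") != "snort":
        problems.append("sensorTopic is not 'snort'")
    return problems

raw = json.dumps({
    "parserClassName": "org.apache.metron.parsers.snort.BasicSnortParser",
    "sensorTopic": "snort",
    "parserConfig": {},
})
print(check_snort_config(raw))  # [] means the basics look right
```
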
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are calling the
>>>>>>>>>>>>>>>>>> script with?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> The metron installation I have (single node based vm
>>>>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already been
>>>>>>>>>>>>>>>>>> done for those stub sensors to push the canned data. I am doing the similar
>>>>>>>>>>>>>>>>>> thing, directly pushing the preformatted canned data to kafka topic. I can
>>>>>>>>>>>>>>>>>> see the logs in kibana dashboard when I start stub sensor from monit but
>>>>>>>>>>>>>>>>>> then I push the same logs myself, those errors pop that I have shown
>>>>>>>>>>>>>>>>>> earlier.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and what's
>>>>>>>>>>>>>>>>>>> the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka topic
>>>>>>>>>>>>>>>>>>>>> from the parser or from some other source?  It looks like there are some
>>>>>>>>>>>>>>>>>>>>> records in kafka that are not JSON.  By the time it gets to the indexing
>>>>>>>>>>>>>>>>>>>>> kafka topic, it should be a JSON map.  The parser topology emits that JSON
>>>>>>>>>>>>>>>>>>>>> map and then the enrichments topology enrich that map and emits the
>>>>>>>>>>>>>>>>>>>>> enriched map to the indexing topic.
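The point above (by the indexing topic every record should be a JSON map) can be checked mechanically: consume a few records from the indexing topic and see whether they parse as JSON objects. The sample field names below are assumptions, not the full Metron schema:

```python
import json

# A record on the indexing topic should already be an enriched JSON map;
# a raw CSV snort line there means unparsed data was written upstream.
def looks_like_indexing_record(s):
    try:
        return isinstance(json.loads(s), dict)
    except ValueError:
        return False

print(looks_like_indexing_record('{"source.type": "snort", "ip_src_addr": "192.168.66.1"}'))  # True
print(looks_like_indexing_record('01/11/17-20:49:18.107168 ,1,999158,0'))  # False
```
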
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology error,
>>>>>>>>>>>>>>>>>>>>>> here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer seeing
>>>>>>>>>>>>>>>>>>>>>>> the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those
>>>>>>>>>>>>>>>>>>>>>>> errors, what's the full stacktrace (that starts with the suggestion you
>>>>>>>>>>>>>>>>>>>>>>> file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is bleeding
>>>>>>>>>>>>>>>>>>>>>>> from the individual writer into the writer component (It should be handled
>>>>>>>>>>>>>>>>>>>>>>> in the writer itself).  The fact that it's happening for both HDFS and ES
>>>>>>>>>>>>>>>>>>>>>>> is telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> OK, I did what Zeolla said (cat snort.out | kafka
>>>>>>>>>>>>>>>>>>>>>>>> producer ....) and now the error at the storm parser topology is gone,
>>>>>>>>>>>>>>>>>>>>>>>> but I am now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test
>>>>>>>>>>>>>>>>>>>>>>>>> alert'",TCP,192.168.66.1,49581
>>>>>>>>>>>>>>>>>>>>>>>>> ,192.168.66.121,22,0A:00:27:00
>>>>>>>>>>>>>>>>>>>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,
>>>>>>>>>>>>>>>>>>>>>>>>> ***AP***,0x1E396BFC,0x56900BB6
>>>>>>>>>>>>>>>>>>>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file and
>>>>>>>>>>>>>>>>>>>>>>>>>> run cat snort.out | kafka-console-producer.sh ... to make sure there are no
>>>>>>>>>>>>>>>>>>>>>>>>>> copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in
>>>>>>>>>>>>>>>>>>>>>>>>>>> this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then
>>>>>>>>>>>>>>>>>>>>>>>>>>> you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to
>>>>>>>>>>>>>>>>>>>>>>>>>>> modify the default log timestamp format for snort in the short term.
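The date-format hypothesis above is easy to test in isolation. The MM/dd/yy-HH:mm:ss.SSSSSS pattern is quoted from the parser source earlier in the thread; the helper below is illustrative:

```python
from datetime import datetime

# Does a timestamp string parse under a given strptime pattern?
def parses_as(fmt, ts):
    try:
        datetime.strptime(ts, fmt)
        return True
    except ValueError:
        return False

MM_DD = "%m/%d/%y-%H:%M:%S.%f"  # what the parser expects
DD_MM = "%d/%m/%y-%H:%M:%S.%f"  # what a dd/MM source would produce

print(parses_as(MM_DD, "01/11/17-20:49:18.107168"))  # True, but ambiguous
print(parses_as(MM_DD, "13/11/17-20:49:18.107168"))  # False: there is no month 13
```

A line like `01/11/17-...` parses under either pattern (silently giving the wrong date under the mismatched one), while a day greater than 12 fails outright, which would surface as a parser error.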
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’
>>>>>>>>>>>>>>>>>>>>>>>>>>> field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the
>>>>>>>>>>>>>>>>>>>>>>>>>>> format of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub
>>>>>>>>>>>>>>>>>>>>>>>>>>> canned data file? Maybe I could see its formatting and try following the
>>>>>>>>>>>>>>>>>>>>>>>>>>> same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a few
>>>>>>>>>>>>>>>>>>>>>>>>>>>> lines from here: https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/metro
>>>>>>>>>>>>>>>>>>>>>>>>>>>> n-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also see
>>>>>>>>>>>>>>>>>>>>>>>>>>>> these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the
>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages coming
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from snort through some setup ( getting pushed to kafka ), which I think of
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as live.  I also think you have manually pushed messages, where you see
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors for things that are automatically pushed to kafka as you do when you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I push
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> them into kafka topic then no, I dont see any error at that time. If 'live'
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Topology -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and result
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see logs in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/metro
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> n-deployment/roles/sensor-stub
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> s/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is an error I am seeing in the Storm UI
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parser bolt in the snort section:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not able
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to get the snort logs in kibana dashboard. Any help will be appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that is related to snort. Could it be the logs I was pushing to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hi, I am back at work. lets see if i can
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> find something in logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> health of Red, so there's your problem.  I would go look in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> search head. Now where do I go in this to find out why I cant see the snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs in kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> web browser from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> head on the vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
So you are saying:

* when you do the kafka client on the enrichment topic things are in json
* when you do the kafka client on the indexing topic they are csv

1- Yes, the kafka client on the enrichment topic shows JSON

2- No, I don't see anything in the kafka client on the indexing topic

On Mon, Nov 13, 2017 at 11:26 PM, Otto Fowler <ot...@gmail.com>
wrote:

> So you are saying:
>
> * when you do the kafka client on the enrichment topic things are in json
> * when you do the kafka client on the indexing topic they are csv
>
> ???
>
>
>
> On November 13, 2017 at 12:28:51, Syed Hammad Tahir (mscs16059@itu.edu.pk)
> wrote:
>
> From one of your earlier messages, this is what I have figured out so
> far.
>
> [image: Inline image 1]
>
> The issue is indicated by the red marked portion of the flow.
>
> On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> Which .java file is causing the issue in this hdfsindexbolt. I mean which
>> one should I look at because there are so many listed here.
>>
>> [image: Inline image 1]
>>
>> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> org.apache.metron.parsers.snort.BasicSnortParser This one is parsing it correctly since I am getting error in the indexing bolt not in the parser one.
>>>
>>>
>>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>> > wrote:
>>>
>>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it in JSON?
>>>>
>>>>
>>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> No, I am not seeing it under indexing topic as JSON. I can only see
>>>>> JSON objects of stub sensor logs but not from those pushed by me via kafka
>>>>> producer.
>>>>>
>>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Please use kafka-console-consumer.sh (same folder as the producer
>>>>>> script) and pull from the indexing topic.  Are you seeing it in JSON there?
>>>>>>
>>>>>> Jon
>>>>>>
>>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> Kindly give me the mechanism implemented in metron through which a
>>>>>>> line such as this
>>>>>>>
>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>
>>>>>>> is converted into a json object. Maybe what I am missing here is the formatting.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
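The conversion asked about above is a fixed, positional CSV-to-JSON mapping done by the parser topology. Here is a minimal sketch of the idea in Python — the column names are an assumption based on snort's CSV alert output (the order beyond the first ten columns is a guess), not Metron's exact schema; org.apache.metron.parsers.snort.BasicSnortParser is the authoritative implementation:

```python
import csv
import json
from io import StringIO

# Column names assumed from snort's CSV alert output; order beyond the
# first ten columns is a guess. The real mapping lives in
# org.apache.metron.parsers.snort.BasicSnortParser.
FIELDS = [
    "timestamp", "sig_generator", "sig_id", "sig_rev", "msg", "protocol",
    "ip_src_addr", "ip_src_port", "ip_dst_addr", "ip_dst_port",
    "ethsrc", "ethdst", "ethlen", "tcpflags", "tcpseq", "tcpack",
    "icmptype", "tcpwindow", "ttl", "tos", "id", "dgmlen", "iplen",
    "icmpcode", "icmpid", "icmpseq",
]

def snort_line_to_json(line):
    # csv.reader handles the quoted msg field ("'snort test alert'").
    values = next(csv.reader(StringIO(line)))
    # Empty CSV fields are simply dropped from the JSON document.
    doc = {name: value.strip()
           for name, value in zip(FIELDS, values) if value.strip()}
    return json.dumps(doc)

line = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP,'
        '192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,'
        '08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,'
        '64,10,23403,76,77824,,,,')
print(snort_line_to_json(line))
```

If a message on the snort topic does not split into fields this way, the parser throws, which matches the errors shown earlier in the thread.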
>>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> Restarted snort, and it is still giving me errors for the indexing topologies even
>>>>>>>> though I haven't pushed any data to the snort topic yet. I have not run
>>>>>>>> the kafka-producer command, but it is still giving an error for something.
>>>>>>>>
>>>>>>>> [image: Inline image 1]
>>>>>>>>
>>>>>>>> [image: Inline image 2]
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> ok, Doing it.
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <
>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>>
>>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>>
>>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>>
>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m
>>>>>>>>>>>>>> DUMP
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> ?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
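For reference, the snort entry in that DUMP output should contain a parser config along these lines (a sketch only — the exact fields depend on the Metron 0.4.1 install; only parserClassName and sensorTopic are taken from this thread):

```json
{
  "parserClassName": "org.apache.metron.parsers.snort.BasicSnortParser",
  "sensorTopic": "snort",
  "parserConfig": {}
}
```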
>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> What topic?  what are the parameters you are calling the
>>>>>>>>>>>>>>>>> script with?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> The metron installation I have (single node based vm
>>>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already been
>>>>>>>>>>>>>>>>> done for those stub sensors to push the canned data. I am doing a similar
>>>>>>>>>>>>>>>>> thing, directly pushing the preformatted canned data to kafka topic. I can
>>>>>>>>>>>>>>>>> see the logs in kibana dashboard when I start stub sensor from monit but
>>>>>>>>>>>>>>>>> then I push the same logs myself, those errors pop that I have shown
>>>>>>>>>>>>>>>>> earlier.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> How did you start the snort parser topology and what's
>>>>>>>>>>>>>>>>>> the parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka topic
>>>>>>>>>>>>>>>>>>>> from the parser or from some other source?  It looks like there are some
>>>>>>>>>>>>>>>>>>>> records in kafka that are not JSON.  By the time it gets to the indexing
>>>>>>>>>>>>>>>>>>>> kafka topic, it should be a JSON map.  The parser topology emits that JSON
>>>>>>>>>>>>>>>>>>>> map and then the enrichments topology enrich that map and emits the
>>>>>>>>>>>>>>>>>>>> enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology error,
>>>>>>>>>>>>>>>>>>>>> here is the full stack trace
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer seeing
>>>>>>>>>>>>>>>>>>>>>> the parser topology error?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those
>>>>>>>>>>>>>>>>>>>>>> errors, what's the full stacktrace (that starts with the suggestion you
>>>>>>>>>>>>>>>>>>>>>> file a JIRA)?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is bleeding from
>>>>>>>>>>>>>>>>>>>>>> the individual writer into the writer component (It should be handled in
>>>>>>>>>>>>>>>>>>>>>> the writer itself).  The fact that it's happening for both HDFS and ES is
>>>>>>>>>>>>>>>>>>>>>> telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka
>>>>>>>>>>>>>>>>>>>>>>> producer .... and now the error at storm parser topology is gone but I am
>>>>>>>>>>>>>>>>>>>>>>> now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file and run
>>>>>>>>>>>>>>>>>>>>>>>>> cat snort.out | kafka-console-producer.sh ... to make sure there are no
>>>>>>>>>>>>>>>>>>>>>>>>> copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in
>>>>>>>>>>>>>>>>>>>>>>>>>> this format:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then
>>>>>>>>>>>>>>>>>>>>>>>>>> you may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to modify
>>>>>>>>>>>>>>>>>>>>>>>>>> the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
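Otto's format point above is easy to check locally. The Java pattern MM/dd/yy-HH:mm:ss.SSSSSS translates to Python strptime codes as below — a quick sanity check on a raw line, not Metron code:

```python
from datetime import datetime

# Python equivalent of the parser's default Java/Joda pattern
# "MM/dd/yy-HH:mm:ss.SSSSSS".
SNORT_TS_FORMAT = "%m/%d/%y-%H:%M:%S.%f"

def timestamp_ok(line):
    """True if the first CSV field parses as MM/dd/yy-HH:mm:ss.SSSSSS."""
    first = line.split(",", 1)[0].strip()
    try:
        datetime.strptime(first, SNORT_TS_FORMAT)
        return True
    except ValueError:
        return False

print(timestamp_ok("01/11/17-20:49:18.107168 ,1,999158,0"))  # True
print(timestamp_ok("13/11/17-20:49:18.107168 ,1,999158,0"))  # False: no month 13
```

A line in dd/MM/yy order with a day above 12 fails exactly as Otto describes, since there is no month 13.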
>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’
>>>>>>>>>>>>>>>>>>>>>>>>>> field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the format
>>>>>>>>>>>>>>>>>>>>>>>>>> of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub
>>>>>>>>>>>>>>>>>>>>>>>>>> canned data file? Maybe I could see its formatting and try following the
>>>>>>>>>>>>>>>>>>>>>>>>>> same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a few
>>>>>>>>>>>>>>>>>>>>>>>>>>> lines from here: https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also see
>>>>>>>>>>>>>>>>>>>>>>>>>>> these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the
>>>>>>>>>>>>>>>>>>>>>>>>>>> logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages coming
>>>>>>>>>>>>>>>>>>>>>>>>>>>> from snort through some setup ( getting pushed to kafka ), which I think of
>>>>>>>>>>>>>>>>>>>>>>>>>>>> as live.  I also think you have manually pushed messages, where you see
>>>>>>>>>>>>>>>>>>>>>>>>>>>> this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same
>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors for things that are automatically pushed to kafka as you do when you
>>>>>>>>>>>>>>>>>>>>>>>>>>>> manual push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I push
>>>>>>>>>>>>>>>>>>>>>>>>>>>> them into kafka topic then no, I dont see any error at that time. If 'live'
>>>>>>>>>>>>>>>>>>>>>>>>>>>> means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that would be a problem.  If you see this error with your ‘live’ messages
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> as well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these same
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing Topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and result
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable to
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see logs in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> THis is some error I am seeing in stormUI
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> parser bolt in snort section:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not able
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to get the snort logs in kibana dashboard. Any help will be appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> that is related to snort. Could it be the logs I was pushing to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi, I am back at work. Let's see if I can
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> find something in the logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> health of Red, so there's your problem.  I would go look in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> search head. Now where do I go in this to find out why I cant see the snort
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs in kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome web
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> browser from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> head on the vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Re: Snort Logs

Posted by Otto Fowler <ot...@gmail.com>.
So you are saying:

* when you do the kafka client on the enrichment topic things are in json
* when you do the kafka client on the indexing topic they are csv

???



On November 13, 2017 at 12:28:51, Syed Hammad Tahir (mscs16059@itu.edu.pk)
wrote:

From one of your earlier messages, this is what I have figured out so far.

[image: Inline image 1]

The issue is indicated by the red marked portion of the flow.

On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> Which .java file is causing the issue in this hdfsindexbolt. I mean which
> one should I look at because there are so many listed here.
>
> [image: Inline image 1]
>
> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> org.apache.metron.parsers.snort.BasicSnortParser This one is parsing it correctly since I am getting error in the indexing bolt not in the parser one.
>>
>>
>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it in JSON?
>>>
>>>
>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>> > wrote:
>>>
>>>> No, I am not seeing it under indexing topic as JSON. I can only see
>>>> JSON objects of stub sensor logs but not from those pushed by me via kafka
>>>> producer.
>>>>
>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>> wrote:
>>>>
>>>>> Please use kafka-console-consumer.sh (same folder as the producer
>>>>> script) and pull from the indexing topic.  Are you seeing it in JSON there?
>>>>>
>>>>> Jon
>>>>>
>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> Kindly give me the mechanism implemented in metron through which a
>>>>>> line such as this
>>>>>>
>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>
>>>>>> is converted into a json object. Maybe what I am missing here is the formatting.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> Restarted snort, and it is still giving me errors for the indexing topologies even
>>>>>>> though I haven't pushed any data to the snort topic yet. I have not run
>>>>>>> the kafka-producer command, but it is still giving an error for something.
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> [image: Inline image 2]
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> ok, Doing it.
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <zeolla@gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>
>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>
>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>
>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>
>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>>>>>>>
>>>>>>>>>>>>> ?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> What topic?  what are the parameters you are calling the
>>>>>>>>>>>>>>>> script with?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> The metron installation I have (single node based vm
>>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already been
>>>>>>>>>>>>>>>> done for those stub sensors to push the canned data. I am doing a similar
>>>>>>>>>>>>>>>> thing, directly pushing the preformatted canned data to kafka topic. I can
>>>>>>>>>>>>>>>> see the logs in kibana dashboard when I start stub sensor from monit but
>>>>>>>>>>>>>>>> then I push the same logs myself, those errors pop that I have shown
>>>>>>>>>>>>>>>> earlier.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> How did you start the snort parser topology and what's the
>>>>>>>>>>>>>>>>> parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka topic
>>>>>>>>>>>>>>>>>>> from the parser or from some other source?  It looks like there are some
>>>>>>>>>>>>>>>>>>> records in kafka that are not JSON.  By the time it gets to the indexing
>>>>>>>>>>>>>>>>>>> kafka topic, it should be a JSON map.  The parser topology emits that JSON
>>>>>>>>>>>>>>>>>>> map and then the enrichments topology enrich that map and emits the
>>>>>>>>>>>>>>>>>>> enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology error,
>>>>>>>>>>>>>>>>>>>> here is the full stack trace
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer seeing the
>>>>>>>>>>>>>>>>>>>>> parser topology error?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those errors,
>>>>>>>>>>>>>>>>>>>>> what's the full stacktrace (that starts with the suggestion you file a
>>>>>>>>>>>>>>>>>>>>> JIRA)?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> What this means is that an exception is bleeding from
>>>>>>>>>>>>>>>>>>>>> the individual writer into the writer component (It should be handled in
>>>>>>>>>>>>>>>>>>>>> the writer itself).  The fact that it's happening for both HDFS and ES is
>>>>>>>>>>>>>>>>>>>>> telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka
>>>>>>>>>>>>>>>>>>>>>> producer .... and now the error at storm parser topology is gone but I am
>>>>>>>>>>>>>>>>>>>>>> now seeing this at the indexing topology
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test
>>>>>>>>>>>>>>>>>>>>>>> alert'",TCP,192.168.66.1,49581
>>>>>>>>>>>>>>>>>>>>>>> ,192.168.66.121,22,0A:00:27:00
>>>>>>>>>>>>>>>>>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,
>>>>>>>>>>>>>>>>>>>>>>> ***AP***,0x1E396BFC,0x56900BB6
>>>>>>>>>>>>>>>>>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file and run
>>>>>>>>>>>>>>>>>>>>>>>> cat snort.out | kafka-console-producer.sh ... to make sure there are no
>>>>>>>>>>>>>>>>>>>>>>>> copy paste problems
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in this
>>>>>>>>>>>>>>>>>>>>>>>>> format:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then you
>>>>>>>>>>>>>>>>>>>>>>>>> may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to modify
>>>>>>>>>>>>>>>>>>>>>>>>> the default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
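Otto's pattern can be sanity-checked outside Metron. A minimal sketch in Python, where the strptime pattern is my translation of the Java `MM/dd/yy-HH:mm:ss.SSSSSS` format and the sample timestamp comes from the line quoted earlier in this thread:

```python
from datetime import datetime

# Assumed Python equivalent of BasicSnortParser's default Java pattern
# "MM/dd/yy-HH:mm:ss.SSSSSS":
SNORT_FORMAT = "%m/%d/%y-%H:%M:%S.%f"

ts = "01/11/17-20:49:18.107168"
parsed = datetime.strptime(ts, SNORT_FORMAT)
print(parsed.isoformat())  # 2017-01-11T20:49:18.107168
```

Note the ambiguity Otto describes: under MM/dd the sample is January 11, while a dd/MM producer would have meant November 1. Both parse without error, so a swapped day/month is silent; only a timestamp that fits neither pattern raises `ValueError`.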
>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’
>>>>>>>>>>>>>>>>>>>>>>>>> field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the format
>>>>>>>>>>>>>>>>>>>>>>>>> of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub
>>>>>>>>>>>>>>>>>>>>>>>>> canned data file? Maybe I could see its formatting and try following the
>>>>>>>>>>>>>>>>>>>>>>>>> same thing.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy-paste a few
>>>>>>>>>>>>>>>>>>>>>>>>>> lines from here: https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also see
>>>>>>>>>>>>>>>>>>>>>>>>>> these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the
>>>>>>>>>>>>>>>>>>>>>>>>>> logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages coming
>>>>>>>>>>>>>>>>>>>>>>>>>>> from snort through some setup ( getting pushed to kafka ), which I think of
>>>>>>>>>>>>>>>>>>>>>>>>>>> as live.  I also think you have manually pushed messages, where you see
>>>>>>>>>>>>>>>>>>>>>>>>>>> this error.
>>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same
>>>>>>>>>>>>>>>>>>>>>>>>>>> errors for things that are automatically pushed to kafka as you do when you
>>>>>>>>>>>>>>>>>>>>>>>>>>> manually push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then that
>>>>>>>>>>>>>>>>>>>>>>>>>>> would be a problem.  If you see this error with your ‘live’ messages as
>>>>>>>>>>>>>>>>>>>>>>>>>>> well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I push
>>>>>>>>>>>>>>>>>>>>>>>>>>> them into the kafka topic, then no, I don't see any error at that time. If 'live'
>>>>>>>>>>>>>>>>>>>>>>>>>>> means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then that
>>>>>>>>>>>>>>>>>>>>>>>>>>>> would be a problem.  If you see this error with your ‘live’ messages as
>>>>>>>>>>>>>>>>>>>>>>>>>>>> well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these same
>>>>>>>>>>>>>>>>>>>>>>>>>>>> errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>> -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing Topology
>>>>>>>>>>>>>>>>>>>>>>>>>>>> -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and result
>>>>>>>>>>>>>>>>>>>>>>>>>>>> in Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable to see
>>>>>>>>>>>>>>>>>>>>>>>>>>>> logs in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is the error I am seeing in the Storm UI
>>>>>>>>>>>>>>>>>>>>>>>>>>>> parser bolt, in the snort section:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not able
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> to get the snort logs in kibana dashboard. Any help will be appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and that
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is related to snort. Could it be the logs I was pushing to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hi, I am back at work. lets see if i can
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> find something in logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> health of Red, so there's your problem.  I would go look in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <ottobackwards@gmail.com
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> head. Now where do I go in this to find out why I cant see the snort logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome web
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> browser from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch head
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> on the vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>> --
>>>>>
>>>>> Jon
>>>>>
>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
From one of your earlier messages, this is what I have figured out so far.

[image: Inline image 1]

The issue is indicated by the red-marked portion of the flow.

On Mon, Nov 13, 2017 at 10:14 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> Which .java file is causing the issue in this hdfsindexbolt? I mean, which
> one should I look at, because there are so many listed here.
>
> [image: Inline image 1]
>
> On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> org.apache.metron.parsers.snort.BasicSnortParser This one is parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.
>>
>>
>> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> org.apache.metron.parsers.snort.BasicSnortParser does this parse the basic message and then convert it to JSON?
>>>
>>>
>>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>> > wrote:
>>>
>>>> No, I am not seeing it under the indexing topic as JSON. I can only see
>>>> JSON objects from the stub sensor logs, but not from those pushed by me via the
>>>> kafka producer.
>>>>
>>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>> wrote:
>>>>
>>>>> Please use kafka-console-consumer.sh (same folder as the producer
>>>>> script) and pull from the indexing topic.  Are you seeing it in JSON there?
>>>>>
>>>>> Jon
>>>>>
>>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> Kindly give me the mechanism implemented in metron through which a
>>>>>> line such as this
>>>>>>
>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>
>>>>>> is converted into a json object. Maybe what I am missing here is the formatting.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
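Roughly, the snort parser splits the comma-delimited record and maps the fields into a JSON object. A sketch of that idea in Python, using the sample line above; the field names are hypothetical illustrations, not Metron's actual snort schema:

```python
import csv
import json

line = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP,'
        '192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,'
        '08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,'
        ',0x1000,64,10,23403,76,77824,,,,')

# csv.reader handles the double-quoted msg field that a naive split() would break
row = next(csv.reader([line]))

# Hypothetical names for the leading columns, for illustration only:
names = ["timestamp", "sig_generator", "sig_id", "sig_rev", "msg",
         "protocol", "ip_src_addr", "ip_src_port", "ip_dst_addr", "ip_dst_port"]
record = dict(zip(names, row))  # trailing columns beyond the named ones are dropped
print(json.dumps(record, indent=2))
```

If the JSON never shows up on the indexing topic, the split itself is a useful thing to check: the sample line yields 27 columns, and a mismatched column count is one way a parser can reject a record.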
>>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> Restarted snort; it is still giving me errors for the indexing topologies even
>>>>>>> though I haven't pushed any data to the snort topic yet. I have not run
>>>>>>> the kafka-producer command, but it is still giving an error for something.
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> [image: Inline image 2]
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> ok, Doing it.
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <zeolla@gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> Hi, this problem still persists, guys.
>>>>>>>>>>
>>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>>
>>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>>
>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>>
>>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>>>>>>>
>>>>>>>>>>>>> ?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> What topic?  what are the parameters you are calling the
>>>>>>>>>>>>>>>> script with?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> The metron installation I have (single-node VM
>>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already been
>>>>>>>>>>>>>>>> done for those stub sensors to push the canned data. I am doing a similar
>>>>>>>>>>>>>>>> thing, directly pushing the preformatted canned data to the kafka topic. I can
>>>>>>>>>>>>>>>> see the logs in the kibana dashboard when I start the stub sensor from monit, but
>>>>>>>>>>>>>>>> when I push the same logs myself, the errors I have shown earlier pop up.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> How did you start the snort parser topology and what's the
>>>>>>>>>>>>>>>>> parser config (in zookeeper)?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hi, I am back at work. lets see if i can
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> find something in logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> health of Red, so there's your problem.  I would go look in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <ottobackwards@gmail.com
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> head. Now where do I go in this to find out why I cant see the snort logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome web
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> browser from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch head
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> on the vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>> --
>>>>>
>>>>> Jon
>>>>>
>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Which .java file is causing the issue in this hdfsindexingbolt? I mean, which
one should I look at, because there are so many listed here.

[image: Inline image 1]

On Mon, Nov 13, 2017 at 9:25 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> org.apache.metron.parsers.snort.BasicSnortParser: this one is parsing it correctly, since I am getting the error in the indexing bolt, not in the parser one.
>
>
> On Mon, Nov 13, 2017 at 9:17 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> org.apache.metron.parsers.snort.BasicSnortParser: does this parse the basic message and then convert it to JSON?
>>
>>
>> On Mon, Nov 13, 2017 at 9:00 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> No, I am not seeing it under the indexing topic as JSON. I can only see JSON
>>> objects of the stub sensor logs, but not those I pushed via the kafka
>>> producer.
>>>
>>> On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <ze...@gmail.com>
>>> wrote:
>>>
>>>> Please use kafka-console-consumer.sh (same folder as the producer
>>>> script) and pull from the indexing topic.  Are you seeing it in JSON there?
>>>>
>>>> Jon
>>>>
>>>> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <ms...@itu.edu.pk>
>>>> wrote:
>>>>
>>>>> Kindly give me the mechanism implemented in metron through which a
>>>>> line such as this
>>>>>
>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>
>>>>> is converted into a JSON object. Maybe what I am missing here is the formatting.
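In Metron that conversion happens in the Java class org.apache.metron.parsers.snort.BasicSnortParser, which splits the CSV record into named fields and emits a JSON map. Here is a minimal Python sketch of the same idea, just as an illustration: the field names shown are assumptions (only a subset, and the authoritative names live in the Java parser), and the epoch-millisecond conversion mirrors the MM/dd/yy-HH:mm:ss.SSSSSS format discussed later in this thread.

```python
import json
from datetime import datetime, timezone

# Illustrative subset of column names; the authoritative list lives in
# Metron's BasicSnortParser (Java), so treat these names as assumptions.
FIELDS = ["timestamp", "sig_generator", "sig_id", "sig_rev", "msg",
          "protocol", "ip_src_addr", "ip_src_port", "ip_dst_addr", "ip_dst_port"]

def snort_line_to_json(line):
    cols = [c.strip() for c in line.split(",")]
    doc = dict(zip(FIELDS, cols))
    # The parser expects MM/dd/yy-HH:mm:ss.SSSSSS and emits epoch milliseconds.
    ts = datetime.strptime(doc["timestamp"], "%m/%d/%y-%H:%M:%S.%f")
    doc["timestamp"] = int(ts.replace(tzinfo=timezone.utc).timestamp() * 1000)
    return json.dumps(doc)

# First ten columns of the sample line from this thread
line = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP,'
        '192.168.66.1,49581,192.168.66.121,22')
print(snort_line_to_json(line))
```

If a record does not match the expected column layout or timestamp format, this is the step that fails, which is why a formatting mismatch shows up as a parser (or downstream) error.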
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> Restarted snort, still giving me an error for the indexing topologies even
>>>>>> though I haven't pushed any data to the snort topic yet. I have not run
>>>>>> the kafka-producer command but it's still giving an error for something.
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>> [image: Inline image 2]
>>>>>>
>>>>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> ok, Doing it.
>>>>>>>
>>>>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Can you restart storm and give it another shot?
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> hi, This problem still persists guys .
>>>>>>>>>
>>>>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> Any solution to these issues guys?
>>>>>>>>>>
>>>>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> I have attached the output of this dump
>>>>>>>>>>>
>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <
>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> What is the output of:
>>>>>>>>>>>>
>>>>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>>>>>>
>>>>>>>>>>>> ?
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> This is the script/command i used
>>>>>>>>>>>>>
>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> What topic?  what are the parameters you are calling the
>>>>>>>>>>>>>>> script with?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The metron installation I have (single node based vm
>>>>>>>>>>>>>>> install) comes with sensor stubs. I assume that everything has already been
>>>>>>>>>>>>>>> done for those stub sensors to push the canned data. I am doing the similar
>>>>>>>>>>>>>>> thing, directly pushing the preformatted canned data to kafka topic. I can
>>>>>>>>>>>>>>> see the logs in kibana dashboard when I start stub sensor from monit but
>>>>>>>>>>>>>>> then I push the same logs myself, those errors pop that I have shown
>>>>>>>>>>>>>>> earlier.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> How did you start the snort parser topology and what's the
>>>>>>>>>>>>>>>> parser config (in zookeeper)?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka topic
>>>>>>>>>>>>>>>>>> from the parser or from some other source?  It looks like there are some
>>>>>>>>>>>>>>>>>> records in kafka that are not JSON.  By the time it gets to the indexing
>>>>>>>>>>>>>>>>>> kafka topic, it should be a JSON map.  The parser topology emits that JSON
>>>>>>>>>>>>>>>>>> map and then the enrichments topology enrich that map and emits the
>>>>>>>>>>>>>>>>>> enriched map to the indexing topic.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology error,
>>>>>>>>>>>>>>>>>>> here is the full stack trace
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer seeing the
>>>>>>>>>>>>>>>>>>>> parser topology error?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those errors,
>>>>>>>>>>>>>>>>>>>> what's the full stacktrace (that starts with the suggestion you file a
>>>>>>>>>>>>>>>>>>>> JIRA)?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> What this means is that an exception is bleeding from
>>>>>>>>>>>>>>>>>>>> the individual writer into the writer component (It should be handled in
>>>>>>>>>>>>>>>>>>>> the writer itself).  The fact that it's happening for both HDFS and ES is
>>>>>>>>>>>>>>>>>>>> telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka
>>>>>>>>>>>>>>>>>>>>> producer .... and now the error at storm parser topology is gone but I am
>>>>>>>>>>>>>>>>>>>>> now seeing this at the indexing toology
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test
>>>>>>>>>>>>>>>>>>>>>> alert'",TCP,192.168.66.1,49581
>>>>>>>>>>>>>>>>>>>>>> ,192.168.66.121,22,0A:00:27:00
>>>>>>>>>>>>>>>>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,
>>>>>>>>>>>>>>>>>>>>>> ***AP***,0x1E396BFC,0x56900BB6
>>>>>>>>>>>>>>>>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file and run
>>>>>>>>>>>>>>>>>>>>>>> cat snort.out | kafka-console-producer.sh ... to make sure there are no
>>>>>>>>>>>>>>>>>>>>>>> copy paste problems
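A quick way to screen snort.out for that kind of copy/paste damage before piping it into kafka-console-producer.sh is a small sketch like this (the column floor of 10 is an assumption; the stub records actually carry many more columns):

```python
from datetime import datetime

# Sanity check for copy/paste damage: every record should split into a
# reasonable number of comma-separated columns and start with a
# MM/dd/yy-HH:mm:ss.SSSSSS timestamp, matching the stub snort.out data.
def check_line(line):
    cols = line.rstrip("\n").split(",")
    try:
        datetime.strptime(cols[0].strip(), "%m/%d/%y-%H:%M:%S.%f")
    except ValueError:
        return False
    return len(cols) >= 10  # assumed floor; stub records have more columns

good = ("01/11/17-20:49:18.107168 ,1,999158,0,\"'snort test alert'\",TCP,"
        "192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00")
bad = "01/11/17-20:49:18.1071"   # truncated by a bad copy/paste
print(check_line(good), check_line(bad))
```

Running the whole file through a filter like this before producing would confirm whether any lines were mangled on the way in.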
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in this
>>>>>>>>>>>>>>>>>>>>>>>> format:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then you
>>>>>>>>>>>>>>>>>>>>>>>> may see this error I believe.
>>>>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to modify
>>>>>>>>>>>>>>>>>>>>>>>> the default log timestamp format for snort in the short term.
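The subtlety with the date format above is that a value like 01/11/17 parses successfully under both MM/dd/yy and dd/MM/yy, just to different dates, so a mismatch can silently produce wrong timestamps instead of an obvious parse error. A minimal Python sketch of the check (the parser itself is Java; the sample value is taken from this thread):

```python
from datetime import datetime

ts = "01/11/17-20:49:18.107168"  # sample timestamp from the stub snort.out

as_mm_dd = datetime.strptime(ts, "%m/%d/%y-%H:%M:%S.%f")  # parser's default: MM/dd/yy
as_dd_mm = datetime.strptime(ts, "%d/%m/%y-%H:%M:%S.%f")  # the other interpretation

# Both patterns accept this value but disagree on the month, so a format
# mismatch yields a wrong date rather than a parse failure.
print(as_mm_dd.date(), as_dd_mm.date())
```

So when verifying the timestamp field's format, it is worth checking a date where day and month differ, since those are the only values that disambiguate the two layouts.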
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’
>>>>>>>>>>>>>>>>>>>>>>>> field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the format
>>>>>>>>>>>>>>>>>>>>>>>> of the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub
>>>>>>>>>>>>>>>>>>>>>>>> canned data file? Maybe I could see its formatting and try following the
>>>>>>>>>>>>>>>>>>>>>>>> same thing.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a few
>>>>>>>>>>>>>>>>>>>>>>>>> lines from here: https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/metro
>>>>>>>>>>>>>>>>>>>>>>>>> n-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also see
>>>>>>>>>>>>>>>>>>>>>>>>> these lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the logs.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages coming
>>>>>>>>>>>>>>>>>>>>>>>>>> from snort through some setup ( getting pushed to kafka ), which I think of
>>>>>>>>>>>>>>>>>>>>>>>>>> as live.  I also think you have manually pushed messages, where you see
>>>>>>>>>>>>>>>>>>>>>>>>>> this error.
>>>>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same errors
>>>>>>>>>>>>>>>>>>>>>>>>>> for things that are automatically pushed to kafka as you do when you manual
>>>>>>>>>>>>>>>>>>>>>>>>>> push them.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then that
>>>>>>>>>>>>>>>>>>>>>>>>>> would be a problem.  If you see this error with your ‘live’ messages as
>>>>>>>>>>>>>>>>>>>>>>>>>> well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?"
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I push
>>>>>>>>>>>>>>>>>>>>>>>>>> them into kafka topic then no, I dont see any error at that time. If 'live'
>>>>>>>>>>>>>>>>>>>>>>>>>> means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then that
>>>>>>>>>>>>>>>>>>>>>>>>>>> would be a problem.  If you see this error with your ‘live’ messages as
>>>>>>>>>>>>>>>>>>>>>>>>>>> well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these same
>>>>>>>>>>>>>>>>>>>>>>>>>>> errors with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology
>>>>>>>>>>>>>>>>>>>>>>>>>>> -> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing Topology
>>>>>>>>>>>>>>>>>>>>>>>>>>> -> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and result in
>>>>>>>>>>>>>>>>>>>>>>>>>>> Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable to see
>>>>>>>>>>>>>>>>>>>>>>>>>>> logs in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/metro
>>>>>>>>>>>>>>>>>>>>>>>>>>> n-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> THis is some error I am seeing in stormUI parser
>>>>>>>>>>>>>>>>>>>>>>>>>>> bolt in snort section:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not able to
>>>>>>>>>>>>>>>>>>>>>>>>>>>> get the snort logs in kibana dashboard. Any help will be appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and that
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> is related to snort. Could it be the logs I was pushing to kafka topic
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I be
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hi, I am back at work. lets see if i can
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> find something in logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a health
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> of Red, so there's your problem.  I would go look in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> head. Now where do I go in this to find out why I cant see the snort logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Fowler <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome web
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> browser from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch head
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> on the vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>> --
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>> --
>>>>
>>>> Jon
>>>>
>>>
>>>
>>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
No, I am not seeing it under the indexing topic as JSON. I can only see JSON
objects of the stub sensor logs, but not those I pushed via the kafka
producer.
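When pulling from the indexing topic with kafka-console-consumer.sh, each record should be a JSON map (as Casey explained earlier in the thread); a raw snort CSV line there means it never made it through the parser topology. A small sketch, assuming you have the consumer's output saved to check line by line:

```python
import json

# Each record on the indexing topic should be a JSON map; a raw snort CSV
# line there indicates it skipped (or failed) the parser topology.
def looks_like_metron_record(record):
    try:
        return isinstance(json.loads(record), dict)
    except ValueError:
        return False

parsed = '{"msg": "snort test alert", "ip_src_addr": "192.168.66.1"}'
raw = '01/11/17-20:49:18.107168 ,1,999158,0,...'
print(looks_like_metron_record(parsed), looks_like_metron_record(raw))
```

Piping the consumer output through a filter like this separates properly parsed records from anything that was written to the topic unparsed.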

On Mon, Nov 13, 2017 at 5:17 PM, Zeolla@GMail.com <ze...@gmail.com> wrote:

> Please use kafka-console-consumer.sh (same folder as the producer script)
> and pull from the indexing topic.  Are you seeing it in JSON there?
>
> Jon
>
> On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> Kindly give me the mechanism implemented in metron through which a line
>> such as this
>>
>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>
>> is converted into a JSON object. Maybe what I am missing here is the formatting.
>>
>>
>>
>>
>> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> Restarted snort, still giving me an error for the indexing topologies even
>>> though I haven't pushed any data to the snort topic yet. I have not run
>>> the kafka-producer command but it's still giving an error for something.
>>>
>>> [image: Inline image 1]
>>>
>>> [image: Inline image 2]
>>>
>>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>> > wrote:
>>>
>>>> ok, Doing it.
>>>>
>>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>> wrote:
>>>>
>>>>> Can you restart storm and give it another shot?
>>>>>
>>>>> Jon
>>>>>
>>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>> wrote:
>>>>>
>>>>>> hi, This problem still persists guys .
>>>>>>
>>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> Any solution to these issues guys?
>>>>>>>
>>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> I have attached the output of this dump
>>>>>>>>
>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <zeolla@gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> What is the output of:
>>>>>>>>>
>>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>>>
>>>>>>>>> ?
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> This is the script/command i used
>>>>>>>>>>
>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>
>>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> What topic?  what are the parameters you are calling the script
>>>>>>>>>>>> with?
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> The metron installation I have (single node based vm install)
>>>>>>>>>>>> comes with sensor stubs. I assume that everything has already been done for
>>>>>>>>>>>> those stub sensors to push the canned data. I am doing the similar thing,
>>>>>>>>>>>> directly pushing the preformatted canned data to kafka topic. I can see the
>>>>>>>>>>>> logs in kibana dashboard when I start stub sensor from monit but then I
>>>>>>>>>>>> push the same logs myself, those errors pop that I have shown earlier.
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> How did you start the snort parser topology and what's the
>>>>>>>>>>>>> parser config (in zookeeper)?
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka topic from
>>>>>>>>>>>>>>> the parser or from some other source?  It looks like there are some records
>>>>>>>>>>>>>>> in kafka that are not JSON.  By the time it gets to the indexing kafka
>>>>>>>>>>>>>>> topic, it should be a JSON map.  The parser topology emits that JSON map
>>>>>>>>>>>>>>> and then the enrichments topology enrich that map and emits the enriched
>>>>>>>>>>>>>>> map to the indexing topic.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology error, here
>>>>>>>>>>>>>>>> is the full stack trace
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>>> Also, are you saying that you are no longer seeing the
>>>>>>>>>>>>>>>>> parser topology error?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> If you click on the port (6704) there in those errors,
>>>>>>>>>>>>>>>>> what's the full stacktrace (that starts with the suggestion you file a
>>>>>>>>>>>>>>>>> JIRA)?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> What this means is that an exception is bleeding from the
>>>>>>>>>>>>>>>>> individual writer into the writer component (It should be handled in the
>>>>>>>>>>>>>>>>> writer itself).  The fact that it's happening for both HDFS and ES is
>>>>>>>>>>>>>>>>> telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka producer
>>>>>>>>>>>>>>>>>> .... and now the error at storm parser topology is gone but I am now seeing
>>>>>>>>>>>>>>>>>> this at the indexing topology
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file and run cat
>>>>>>>>>>>>>>>>>>>> snort.out | kafka-console-producer.sh ... to make sure there are no copy
>>>>>>>>>>>>>>>>>>>> paste problems
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in this
>>>>>>>>>>>>>>>>>>>>> format:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then you may
>>>>>>>>>>>>>>>>>>>>> see this error I believe.
>>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to modify the
>>>>>>>>>>>>>>>>>>>>> default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
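[The timestamp check suggested above can be sketched as a quick shell test. This is only a rough regex approximation of the Java `MM/dd/yy-HH:mm:ss.SSSSSS` pattern, not the parser's actual date parsing, and a regex cannot distinguish MM/dd from dd/MM when both values are 12 or less:]

```shell
# Hedged sketch: check the first field of a Snort CSV alert against the
# MM/dd/yy-HH:mm:ss.SSSSSS shape the parser expects (rough regex check only).
line='01/11/17-20:49:18.107168 ,1,999158,0,snort test alert,TCP,192.168.66.1,49581,192.168.66.121,22'
ts=$(printf '%s' "$line" | cut -d',' -f1 | tr -d ' ')
if printf '%s\n' "$ts" | grep -Eq '^(0[1-9]|1[0-2])/[0-3][0-9]/[0-9]{2}-[0-2][0-9]:[0-5][0-9]:[0-5][0-9]\.[0-9]{6}$'; then
  result="looks like MM/dd/yy-HH:mm:ss.SSSSSS"
else
  result="does NOT match the expected format"
fi
echo "$ts: $result"
```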
>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’
>>>>>>>>>>>>>>>>>>>>> field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the format of
>>>>>>>>>>>>>>>>>>>>> the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub canned
>>>>>>>>>>>>>>>>>>>>> data file? Maybe I could see its formatting and try following the same
>>>>>>>>>>>>>>>>>>>>> thing.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka topic
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a few lines
>>>>>>>>>>>>>>>>>>>>>> from here: https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also see these
>>>>>>>>>>>>>>>>>>>>>> lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the logs.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages coming from
>>>>>>>>>>>>>>>>>>>>>>> snort through some setup ( getting pushed to kafka ), which I think of as
>>>>>>>>>>>>>>>>>>>>>>> live.  I also think you have manually pushed messages, where you see this
>>>>>>>>>>>>>>>>>>>>>>> error.
>>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same errors
>>>>>>>>>>>>>>>>>>>>>>> for things that are automatically pushed to kafka as you do when you manual
>>>>>>>>>>>>>>>>>>>>>>> push them.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then that
>>>>>>>>>>>>>>>>>>>>>>> would be a problem.  If you see this error with your ‘live’ messages as
>>>>>>>>>>>>>>>>>>>>>>> well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?"
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I push them
>>>>>>>>>>>>>>>>>>>>>>> into kafka topic then no, I dont see any error at that time. If 'live'
>>>>>>>>>>>>>>>>>>>>>>> means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then that
>>>>>>>>>>>>>>>>>>>>>>>> would be a problem.  If you see this error with your ‘live’ messages as
>>>>>>>>>>>>>>>>>>>>>>>> well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these same errors
>>>>>>>>>>>>>>>>>>>>>>>> with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology ->
>>>>>>>>>>>>>>>>>>>>>>>> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing Topology ->
>>>>>>>>>>>>>>>>>>>>>>>> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and result in
>>>>>>>>>>>>>>>>>>>>>>>> Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
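[One way to follow the chain described above is to read a few messages off each topic in turn and see where the data stops looking right. A hedged sketch; the topic names (snort, enrichments, indexing) and paths are the defaults on this thread's single-node Metron VM, and the guard only keeps it runnable elsewhere:]

```shell
# Hedged sketch: probe each stage of snort -> parser -> enrichment -> indexing
# by consuming a few records from each Kafka topic.
CONSUMER=/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh
for topic in snort enrichments indexing; do
  echo "=== $topic ==="
  if [ -x "$CONSUMER" ]; then
    "$CONSUMER" --zookeeper node1:2181 --topic "$topic" --from-beginning --max-messages 3
  else
    echo "kafka-console-consumer.sh not found; run this on the Metron VM"
  fi
done
```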
>>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable to see
>>>>>>>>>>>>>>>>>>>>>>>> logs in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> This is an error I am seeing in the Storm UI parser
>>>>>>>>>>>>>>>>>>>>>>>> bolt in the snort section:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not able to
>>>>>>>>>>>>>>>>>>>>>>>>> get the snort logs in kibana dashboard. Any help will be appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and that is
>>>>>>>>>>>>>>>>>>>>>>>>>>> related to snort. Could it be the logs I was pushing to kafka topic earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I be
>>>>>>>>>>>>>>>>>>>>>>>>>>>> looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> hi, I am back at work. lets see if i can find
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> something in logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Zeolla@GMail.com <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a health of
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Red, so there's your problem.  I would go look in /var/log/elasticsearch/
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> head. Now where do I go in this to find out why I cant see the snort logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ot...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome web
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> browser from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hammad Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch head on
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>> --
>>>>>
>>>>> Jon
>>>>>
>>>>
>>>>
>>>
>> --
>
> Jon
>

Re: Snort Logs

Posted by "Zeolla@GMail.com" <ze...@gmail.com>.
Please use kafka-console-consumer.sh (same folder as the producer script)
and pull from the indexing topic.  Are you seeing it in JSON there?

Jon
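[Once a few records have been captured from the indexing topic, a quick way to spot the bad ones is to check that every record at least looks like a JSON map. A hedged sketch with inline sample data standing in for real consumer output; the second sample line mimics a raw CSV record that would break indexing:]

```shell
# Hedged sketch: count records that do not even start with '{' (i.e. are not
# JSON maps). sample.txt stands in for captured kafka-console-consumer output.
printf '%s\n' \
  '{"timestamp":1510000000000,"ip_src_addr":"192.168.66.1","source.type":"snort"}' \
  '01/11/17-20:49:18.107168 ,1,999158,0' \
  > sample.txt
bad=$(grep -cv '^{' sample.txt)
echo "non-JSON-looking records: $bad"
```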

On Mon, Nov 13, 2017 at 7:03 AM Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> Kindly explain the mechanism implemented in Metron through which a line
> such as this
>
> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>
> is converted into a JSON object. Maybe what I am missing here is the formatting.
>
>
>
>
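[For a rough picture of that conversion: Metron's snort parser (Java code running inside the parser topology) splits the CSV line and emits a JSON map. A hedged shell approximation follows; the split order follows Snort's CSV column order, `ip_src_addr`/`ip_dst_addr` are real Metron field names, but the other field names here are illustrative, not the parser's actual output:]

```shell
# Hedged sketch: approximate the CSV-to-JSON mapping the snort parser performs.
# This is NOT the real parser (which is Java); field names are illustrative.
line='01/11/17-20:49:18.107168 ,1,999158,0,snort test alert,TCP,192.168.66.1,49581,192.168.66.121,22'
json=$(printf '%s' "$line" | awk -F',' '{
  gsub(/^ +| +$/, "", $1)   # trim the stray space after the timestamp field
  printf "{\"timestamp\":\"%s\",\"sig_generator\":\"%s\",\"sig_id\":\"%s\",\"sig_rev\":\"%s\",\"msg\":\"%s\",\"protocol\":\"%s\",\"ip_src_addr\":\"%s\",\"ip_src_port\":\"%s\",\"ip_dst_addr\":\"%s\",\"ip_dst_port\":\"%s\"}",
    $1, $2, $3, $4, $5, $6, $7, $8, $9, $10
}')
echo "$json"
```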
> On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> Restarted snort, still giving me error for indexing topologies even
>> though I havent even pushed out any data to snort topic yet. I have not run
>> the kafka-producer command but its still giving error for something.
>>
>> [image: Inline image 1]
>>
>> [image: Inline image 2]
>>
>> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> ok, Doing it.
>>>
>>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <ze...@gmail.com>
>>> wrote:
>>>
>>>> Can you restart storm and give it another shot?
>>>>
>>>> Jon
>>>>
>>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>> wrote:
>>>>
>>>>> Hi, this problem still persists, guys.
>>>>>
>>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> Any solution to these issues guys?
>>>>>>
>>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> I have attached the output of this dump
>>>>>>>
>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> What is the output of:
>>>>>>>>
>>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>>
>>>>>>>> ?
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> This is the script/command i used
>>>>>>>>>
>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>
>>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>
>>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> What topic?  what are the parameters you are calling the script
>>>>>>>>>>> with?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>
>>>>>>>>>>> The metron installation I have (single node based vm install)
>>>>>>>>>>> comes with sensor stubs. I assume that everything has already been done for
>>>>>>>>>>> those stub sensors to push the canned data. I am doing a similar thing,
>>>>>>>>>>> directly pushing the preformatted canned data to the kafka topic. I can see the
>>>>>>>>>>> logs in the kibana dashboard when I start the stub sensor from monit, but when I
>>>>>>>>>>> push the same logs myself, the errors I have shown earlier appear.
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <
>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> How did you start the snort parser topology and what's the
>>>>>>>>>>>> parser config (in zookeeper)?
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>>
>>>>>>>>>>>>> sudo cat snort.out |
>>>>>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list
>>>>>>>>>>>>> node1:6667 --topic snort
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka topic from
>>>>>>>>>>>>>> the parser or from some other source?  It looks like there are some records
>>>>>>>>>>>>>> in kafka that are not JSON.  By the time it gets to the indexing kafka
>>>>>>>>>>>>>> topic, it should be a JSON map.  The parser topology emits that JSON map
>>>>>>>>>>>>>> and then the enrichments topology enrich that map and emits the enriched
>>>>>>>>>>>>>> map to the indexing topic.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> No I am no longer seeing the parsing topology error, here is
>>>>>>>>>>>>>>> the full stack trace
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>>> Also, are you saying that you are no longer seeing the
>>>>>>>>>>>>>>>> parser topology error?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> If you click on the port (6704) there in those errors,
>>>>>>>>>>>>>>>> what's the full stacktrace (that starts with the suggestion you file a
>>>>>>>>>>>>>>>> JIRA)?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> What this means is that an exception is bleeding from the
>>>>>>>>>>>>>>>> individual writer into the writer component (It should be handled in the
>>>>>>>>>>>>>>>> writer itself).  The fact that it's happening for both HDFS and ES is
>>>>>>>>>>>>>>>> telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka producer
>>>>>>>>>>>>>>>>> .... and now the error at storm parser topology is gone but I am now seeing
>>>>>>>>>>>>>>>>> this at the indexing topology
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I would download the entire snort.out file and run cat
>>>>>>>>>>>>>>>>>>> snort.out | kafka-console-producer.sh ... to make sure there are no copy
>>>>>>>>>>>>>>>>>>> paste problems
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in this
>>>>>>>>>>>>>>>>>>>> format:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then you may
>>>>>>>>>>>>>>>>>>>> see this error I believe.
>>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to modify the
>>>>>>>>>>>>>>>>>>>> default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’
>>>>>>>>>>>>>>>>>>>> field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the format of
>>>>>>>>>>>>>>>>>>>> the logs I am trying to push
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub canned
>>>>>>>>>>>>>>>>>>>> data file? Maybe I could see its formatting and try following the same
>>>>>>>>>>>>>>>>>>>> thing.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka topic
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a few lines
>>>>>>>>>>>>>>>>>>>>> from here:
>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also see these
>>>>>>>>>>>>>>>>>>>>> lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the logs.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages coming from
>>>>>>>>>>>>>>>>>>>>>> snort through some setup ( getting pushed to kafka ), which I think of as
>>>>>>>>>>>>>>>>>>>>>> live.  I also think you have manually pushed messages, where you see this
>>>>>>>>>>>>>>>>>>>>>> error.
>>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same errors for
>>>>>>>>>>>>>>>>>>>>>> things that are automatically pushed to kafka as you do when you manual
>>>>>>>>>>>>>>>>>>>>>> push them.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then that
>>>>>>>>>>>>>>>>>>>>>> would be a problem.  If you see this error with your ‘live’ messages as
>>>>>>>>>>>>>>>>>>>>>> well then that could be it.
>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?"
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I push them
>>>>>>>>>>>>>>>>>>>>>> into kafka topic then no, I dont see any error at that time. If 'live'
>>>>>>>>>>>>>>>>>>>>>> means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then that
>>>>>>>>>>>>>>>>>>>>>>> would be a problem.  If you see this error with your ‘live’ messages as
>>>>>>>>>>>>>>>>>>>>>>> well then that could be it.
>>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these same errors
>>>>>>>>>>>>>>>>>>>>>>> with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology ->
>>>>>>>>>>>>>>>>>>>>>>> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing Topology ->
>>>>>>>>>>>>>>>>>>>>>>> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and result in
>>>>>>>>>>>>>>>>>>>>>>> Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable to see logs
>>>>>>>>>>>>>>>>>>>>>>> in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> This is an error I am seeing in the Storm UI parser
>>>>>>>>>>>>>>>>>>>>>>> bolt in the snort section:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not able to get
>>>>>>>>>>>>>>>>>>>>>>>> the snort logs in kibana dashboard. Any help will be appreciated.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and that is
>>>>>>>>>>>>>>>>>>>>>>>>>> related to snort. Could it be the logs I was pushing to kafka topic earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I be
>>>>>>>>>>>>>>>>>>>>>>>>>>> looking at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> hi, I am back at work. lets see if i can find
>>>>>>>>>>>>>>>>>>>>>>>>>>>> something in logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com
>>>>>>>>>>>>>>>>>>>>>>>>>>>> <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a health of
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Red, so there's your problem.  I would go look in /var/log/elasticsearch/
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <mscs16059@itu.edu.pk
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic search
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> head. Now where do I go in this to find out why I can't see the snort logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> in kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome web
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> browser from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch head on
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>> --
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>> --
>>>>
>>>> Jon
>>>>
>>>
>>>
>>
> --

Jon

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Kindly explain the mechanism implemented in Metron through which a line
such as this

01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,

is converted into a JSON object. Maybe what I am missing here is
the formatting.
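For what it's worth, the conversion is essentially CSV-to-map parsing: the parser splits the comma-separated alert line, pairs the values with a fixed list of field names, and normalizes the leading timestamp into epoch millis before emitting JSON. A minimal Python sketch of that idea (the field names and helper are illustrative, not Metron's actual parser code):

```python
import csv
import io
import json
from datetime import datetime

# Illustrative field names only; Metron's snort parser defines its own
# list, but the first few columns of snort's CSV output look like this.
FIELDS = [
    "timestamp", "sig_generator", "sig_id", "sig_rev", "msg", "protocol",
    "ip_src_addr", "ip_src_port", "ip_dst_addr", "ip_dst_port",
    "ethsrc", "ethdst", "ethlen", "tcpflags", "tcpseq", "tcpack",
]

def snort_line_to_json(line):
    """Turn one snort CSV alert line into a JSON document."""
    # csv handles the double-quoted msg field correctly
    values = [v.strip() for v in next(csv.reader(io.StringIO(line)))]
    record = {name: value for name, value in zip(FIELDS, values) if value}
    # snort's default timestamp layout is MM/dd/yy-HH:mm:ss.SSSSSS
    ts = datetime.strptime(record["timestamp"], "%m/%d/%y-%H:%M:%S.%f")
    record["timestamp"] = int(ts.timestamp() * 1000)  # epoch millis
    return json.dumps(record)

sample = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP,'
          '192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,'
          '08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,'
          ',0x1000,64,10,23403,76,77824,,,,')
print(snort_line_to_json(sample))
```

If the exact mechanism matters, the authoritative place to look is the snort parser class in the Metron source tree; the sketch above only mirrors the general shape (split, name the fields, normalize the timestamp).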




On Mon, Nov 13, 2017 at 3:21 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> Restarted snort, but it is still giving me errors for the indexing
> topologies even though I haven't pushed any data to the snort topic yet. I
> have not run the kafka-producer command, but it's still reporting an error.
>
> [image: Inline image 1]
>
> [image: Inline image 2]
>
> On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> ok, Doing it.
>>
>> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <ze...@gmail.com>
>> wrote:
>>
>>> Can you restart storm and give it another shot?
>>>
>>> Jon
>>>
>>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <ms...@itu.edu.pk>
>>> wrote:
>>>
>>>> Hi, this problem still persists, guys.
>>>>
>>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> Any solution to these issues guys?
>>>>>
>>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> I have attached the output of this dump
>>>>>>
>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> What is the output of:
>>>>>>>
>>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>>
>>>>>>> ?
>>>>>>>
>>>>>>> Jon
>>>>>>>
>>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> This is the script/command i used
>>>>>>>>
>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>
>>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>
>>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> What topic?  what are the parameters you are calling the script
>>>>>>>>>> with?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>
>>>>>>>>>> The metron installation I have (single node based vm install)
>>>>>>>>>> comes with sensor stubs. I assume that everything has already been done for
>>>>>>>>>> those stub sensors to push the canned data. I am doing a similar thing,
>>>>>>>>>> directly pushing the preformatted canned data to kafka topic. I can see the
>>>>>>>>>> logs in kibana dashboard when I start stub sensor from monit but then I
>>>>>>>>>> push the same logs myself, those errors pop that I have shown earlier.
>>>>>>>>>>
>>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <cestella@gmail.com
>>>>>>>>>> > wrote:
>>>>>>>>>>
>>>>>>>>>>> How did you start the snort parser topology and what's the
>>>>>>>>>>> parser config (in zookeeper)?
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>>
>>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Are you directly writing to the "indexing" kafka topic from
>>>>>>>>>>>>> the parser or from some other source?  It looks like there are some records
>>>>>>>>>>>>> in kafka that are not JSON.  By the time it gets to the indexing kafka
>>>>>>>>>>>>> topic, it should be a JSON map.  The parser topology emits that JSON map
>>>>>>>>>>>>> and then the enrichments topology enriches that map and emits the enriched
>>>>>>>>>>>>> map to the indexing topic.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> No I am no longer seeing the parsing topology error, here is
>>>>>>>>>>>>>> the full stack trace
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>>> Also, are you saying that you are no longer seeing the
>>>>>>>>>>>>>>> parser topology error?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> If you click on the port (6704) there in those errors,
>>>>>>>>>>>>>>> what's the full stacktrace (that starts with the suggestion you file a
>>>>>>>>>>>>>>> JIRA)?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> What this means is that an exception is bleeding from the
>>>>>>>>>>>>>>> individual writer into the writer component (It should be handled in the
>>>>>>>>>>>>>>> writer itself).  The fact that it's happening for both HDFS and ES is
>>>>>>>>>>>>>>> telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka producer
>>>>>>>>>>>>>>>> .... and now the error at storm parser topology is gone but I am now seeing
>>>>>>>>>>>>>>>> this at the indexing topology
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I would download the entire snort.out file and run cat
>>>>>>>>>>>>>>>>>> snort.out | kafka-console-producer.sh ... to make sure there are no copy
>>>>>>>>>>>>>>>>>> paste problems
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in this
>>>>>>>>>>>>>>>>>>> format:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then you may
>>>>>>>>>>>>>>>>>>> see this error I believe.
>>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> If this is the case, then you will need to modify the
>>>>>>>>>>>>>>>>>>> default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’
>>>>>>>>>>>>>>>>>>> field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the format of the
>>>>>>>>>>>>>>>>>>> logs I am trying to push
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub canned
>>>>>>>>>>>>>>>>>>> data file? Maybe I could see its formatting and try following the same
>>>>>>>>>>>>>>>>>>> thing.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka topic
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a few lines
>>>>>>>>>>>>>>>>>>>> from here: https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also see these
>>>>>>>>>>>>>>>>>>>> lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the logs.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages coming from
>>>>>>>>>>>>>>>>>>>>> snort through some setup ( getting pushed to kafka ), which I think of as
>>>>>>>>>>>>>>>>>>>>> live.  I also think you have manually pushed messages, where you see this
>>>>>>>>>>>>>>>>>>>>> error.
>>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same errors for
>>>>>>>>>>>>>>>>>>>>> things that are automatically pushed to kafka as you do when you manually
>>>>>>>>>>>>>>>>>>>>> push them.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then that would
>>>>>>>>>>>>>>>>>>>>> be a problem.  If you see this error with your ‘live’ messages as well then
>>>>>>>>>>>>>>>>>>>>> that could be it.
>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?"
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I push them
>>>>>>>>>>>>>>>>>>>>> into kafka topic then no, I don't see any error at that time. If 'live'
>>>>>>>>>>>>>>>>>>>>> means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then that would
>>>>>>>>>>>>>>>>>>>>>> be a problem.  If you see this error with your ‘live’ messages as well then
>>>>>>>>>>>>>>>>>>>>>> that could be it.
>>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these same errors
>>>>>>>>>>>>>>>>>>>>>> with the live data or not.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology ->
>>>>>>>>>>>>>>>>>>>>>> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing Topology ->
>>>>>>>>>>>>>>>>>>>>>> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and result in
>>>>>>>>>>>>>>>>>>>>>> Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable to see logs
>>>>>>>>>>>>>>>>>>>>>> in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> This is some error I am seeing in stormUI parser bolt
>>>>>>>>>>>>>>>>>>>>>> in snort section:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not able to get
>>>>>>>>>>>>>>>>>>>>>>> the snort logs in kibana dashboard. Any help will be appreciated.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in
>>>>>>>>>>>>>>>>>>>>>>>> /var/log/elasticsearch/ is also relevant
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and that is
>>>>>>>>>>>>>>>>>>>>>>>>> related to snort. Could it be the logs I was pushing to kafka topic earlier?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I be looking
>>>>>>>>>>>>>>>>>>>>>>>>>> at here?
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi, I am back at work. Let's see if I can find
>>>>>>>>>>>>>>>>>>>>>>>>>>> something in logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com
>>>>>>>>>>>>>>>>>>>>>>>>>>> <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a health of
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Red, so there's your problem.  I would go look in /var/log/elasticsearch/
>>>>>>>>>>>>>>>>>>>>>>>>>>>> at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic search head.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now where do I go in this to find out why I can't see the snort logs in
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome web
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> browser from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch head on
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> the vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>> --
>>>>>>>
>>>>>>> Jon
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>> --
>>>
>>> Jon
>>>
>>
>>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Restarted snort, but it is still giving me errors for the indexing
topologies even though I haven't pushed any data to the snort topic yet. I
have not run the kafka-producer command, but it's still reporting an error.

[image: Inline image 1]

[image: Inline image 2]
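Since the date format came up earlier in the thread as a likely culprit, it may be worth pre-flighting snort.out before piping it into kafka-console-producer.sh again. A small sketch (the helper name and sample lines are made up for illustration) that flags lines whose leading timestamp does not match the parser's expected MM/dd/yy-HH:mm:ss.SSSSSS layout:

```python
from datetime import datetime

# Expected by the snort parser, per the snippet quoted earlier in the thread
SNORT_TS_FORMAT = "%m/%d/%y-%H:%M:%S.%f"  # MM/dd/yy-HH:mm:ss.SSSSSS

def bad_timestamp_lines(lines):
    """Return 1-based numbers of lines whose first CSV field does not
    parse as a snort timestamp; an empty result means the file should
    at least clear the date check in the parser topology."""
    bad = []
    for n, line in enumerate(lines, 1):
        ts = line.split(",", 1)[0].strip()
        try:
            datetime.strptime(ts, SNORT_TS_FORMAT)
        except ValueError:
            bad.append(n)
    return bad

ok_line = '01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP,192.168.66.1,...'
bad_line = '2017-11-01 20:49:18,1,999158,0,...'  # ISO-ish layout the parser would reject
print(bad_timestamp_lines([ok_line, bad_line]))  # -> [2]
```

Running it over the whole file is just `bad_timestamp_lines(open('snort.out'))`; if it comes back empty, the timestamp format can be ruled out and attention can shift to the topologies themselves.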

On Mon, Nov 13, 2017 at 3:13 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> ok, Doing it.
>
> On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <ze...@gmail.com>
> wrote:
>
>> Can you restart storm and give it another shot?
>>
>> Jon
>>
>> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> Hi, this problem still persists, guys.
>>>
>>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>> > wrote:
>>>
>>>> Any solution to these issues guys?
>>>>
>>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>>> > wrote:
>>>>
>>>>> I have attached the output of this dump
>>>>>
>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>
>>>>>
>>>>>
>>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <ze...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> What is the output of:
>>>>>>
>>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>>
>>>>>> ?
>>>>>>
>>>>>> Jon
>>>>>>
>>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> This is the script/command i used
>>>>>>>
>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>
>>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>
>>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> What topic?  what are the parameters you are calling the script
>>>>>>>>> with?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>
>>>>>>>>> The metron installation I have (single node based vm install)
>>>>>>>>> comes with sensor stubs. I assume that everything has already been done for
>>>>>>>>> those stub sensors to push the canned data. I am doing a similar thing,
>>>>>>>>> directly pushing the preformatted canned data to kafka topic. I can see the
>>>>>>>>> logs in kibana dashboard when I start stub sensor from monit but then I
>>>>>>>>> push the same logs myself, those errors pop that I have shown earlier.
>>>>>>>>>
>>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <ce...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> How did you start the snort parser topology and what's the parser
>>>>>>>>>> config (in zookeeper)?
>>>>>>>>>>
>>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> This is what I am doing
>>>>>>>>>>>
>>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <
>>>>>>>>>>> cestella@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Are you directly writing to the "indexing" kafka topic from the
>>>>>>>>>>>> parser or from some other source?  It looks like there are some records in
>>>>>>>>>>>> kafka that are not JSON.  By the time it gets to the indexing kafka topic,
>>>>>>>>>>>> it should be a JSON map.  The parser topology emits that JSON map and then
>>>>>>>>>>>> the enrichments topology enriches that map and emits the enriched map to the
>>>>>>>>>>>> indexing topic.
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> No I am no longer seeing the parsing topology error, here is
>>>>>>>>>>>>> the full stack trace
>>>>>>>>>>>>>
>>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>>
>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>
>>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>>
>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>>> Also, are you saying that you are no longer seeing the parser
>>>>>>>>>>>>>> topology error?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> If you click on the port (6704) there in those errors, what's
>>>>>>>>>>>>>> the full stacktrace (that starts with the suggestion you file a JIRA)?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> What this means is that an exception is bleeding from the
>>>>>>>>>>>>>> individual writer into the writer component (It should be handled in the
>>>>>>>>>>>>>> writer itself).  The fact that it's happening for both HDFS and ES is
>>>>>>>>>>>>>> telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>>> included.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka producer
>>>>>>>>>>>>>>> .... and now the error at storm parser topology is gone but I am now seeing
>>>>>>>>>>>>>>> this at the indexing topology
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I would download the entire snort.out file and run cat
>>>>>>>>>>>>>>>>> snort.out | kafka-console-producer.sh ... to make sure there are no copy
>>>>>>>>>>>>>>>>> paste problems
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in this format:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then you may
>>>>>>>>>>>>>>>>>> see this error I believe.
>>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> If this is the case, then you will need to modify the
>>>>>>>>>>>>>>>>>> default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’
>>>>>>>>>>>>>>>>>> field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the format of the
>>>>>>>>>>>>>>>>>> logs I am trying to push
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub canned
>>>>>>>>>>>>>>>>>> data file? Maybe I could see its formatting and try following the same
>>>>>>>>>>>>>>>>>> thing.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka topic
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> After running this command, I copy paste a few lines
>>>>>>>>>>>>>>>>>>> from here: https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also see these
>>>>>>>>>>>>>>>>>>> lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the logs.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I *think* you have tried both messages coming from
>>>>>>>>>>>>>>>>>>>> snort through some setup ( getting pushed to kafka ), which I think of as
>>>>>>>>>>>>>>>>>>>> live.  I also think you have manually pushed messages, where you see this
>>>>>>>>>>>>>>>>>>>> error.
>>>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same errors for
>>>>>>>>>>>>>>>>>>>> things that are automatically pushed to kafka as you do when you manually
>>>>>>>>>>>>>>>>>>>> push them.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then that would
>>>>>>>>>>>>>>>>>>>> be a problem.  If you see this error with your ‘live’ messages as well then
>>>>>>>>>>>>>>>>>>>> that could be it.
>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?"
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I push them
>>>>>>>>>>>>>>>>>>>> into kafka topic then no, I don't see any error at that time. If 'live'
>>>>>>>>>>>>>>>>>>>> means something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then that would
>>>>>>>>>>>>>>>>>>>>> be a problem.  If you see this error with your ‘live’ messages as well then
>>>>>>>>>>>>>>>>>>>>> that could be it.
>>>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> You need to confirm that you see these same errors
>>>>>>>>>>>>>>>>>>>>> with the live data or not.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology ->
>>>>>>>>>>>>>>>>>>>>> kafka -> Storm Enrichment Topology -> Kafka -> Storm Indexing Topology ->
>>>>>>>>>>>>>>>>>>>>> HDFS | ElasticSearch
>>>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and result in
>>>>>>>>>>>>>>>>>>>>> Kibana not seeing things.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> could this be related to why I am unable to see logs
>>>>>>>>>>>>>>>>>>>>> in kibana dashboard?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>>>>> com/apache/metron/master/metro
>>>>>>>>>>>>>>>>>>>>> n-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> This is some error I am seeing in the Storm UI parser
>>>>>>>>>>>>>>>>>>>>> bolt in the snort section:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not able to get
>>>>>>>>>>>>>>>>>>>>>> the snort logs in kibana dashboard. Any help will be appreciated.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in /var/log/elasticsearch/
>>>>>>>>>>>>>>>>>>>>>>> is also relevant
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and that is
>>>>>>>>>>>>>>>>>>>>>>>> related to snort. Could it be the logs I was pushing to kafka topic earlier?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I be looking
>>>>>>>>>>>>>>>>>>>>>>>>> at here?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> hi, I am back at work. lets see if i can find
>>>>>>>>>>>>>>>>>>>>>>>>>> something in logs
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com
>>>>>>>>>>>>>>>>>>>>>>>>>> <ze...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a health of
>>>>>>>>>>>>>>>>>>>>>>>>>>> Red, so there's your problem.  I would go look in /var/log/elasticsearch/
>>>>>>>>>>>>>>>>>>>>>>>>>>> at some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir <ms...@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic search head.
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Now where do I go in this to find out why I cant see the snort logs in
>>>>>>>>>>>>>>>>>>>>>>>>>>>> kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome web browser
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch head on the
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>> --
>>>>>>
>>>>>> Jon
>>>>>>
>>>>>
>>>>>
>>>>
>>> --
>>
>> Jon
>>
>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
OK, doing it.

On Mon, Nov 13, 2017 at 3:07 PM, Zeolla@GMail.com <ze...@gmail.com> wrote:

> Can you restart storm and give it another shot?
>
> Jon
>
> On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> Hi, this problem still persists, guys.
>>
>> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> Any solution to these issues guys?
>>>
>>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
>>> wrote:
>>>
>>>> I have attached the output of this dump
>>>>
>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>
>>>>
>>>>
>>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <ze...@gmail.com>
>>>> wrote:
>>>>
>>>>> What is the output of:
>>>>>
>>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>>
>>>>> ?
>>>>>
>>>>> Jon
>>>>>
>>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>> wrote:
>>>>>
>>>>>> This is the script/command I used
>>>>>>
>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>> --broker-list node1:6667 --topic snort
>>>>>>
>>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>
>>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <
>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>
>>>>>>>> What topic? What are the parameters you are calling the script
>>>>>>>> with?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>
>>>>>>> The metron installation I have (single-node VM install) comes
>>>>>>> with sensor stubs. I assume that everything has already been done for those
>>>>>>> stub sensors to push the canned data. I am doing a similar thing,
>>>>>>> directly pushing the preformatted canned data to the kafka topic. I can see
>>>>>>> the logs in the kibana dashboard when I start the stub sensor from monit,
>>>>>>> but when I push the same logs myself, the errors I have shown earlier pop up.
>>>>>>>>
>>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <ce...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> How did you start the snort parser topology and what's the parser
>>>>>>>>> config (in zookeeper)?
>>>>>>>>>
>>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> This is what I am doing
>>>>>>>>>>
>>>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <cestella@gmail.com
>>>>>>>>>> > wrote:
>>>>>>>>>>
>>>>>>>>>>> Are you directly writing to the "indexing" kafka topic from the
>>>>>>>>>>> parser or from some other source?  It looks like there are some records in
>>>>>>>>>>> kafka that are not JSON.  By the time it gets to the indexing kafka topic,
>>>>>>>>>>> it should be a JSON map.  The parser topology emits that JSON map and then
>>>>>>>>>>> the enrichments topology enrich that map and emits the enriched map to the
>>>>>>>>>>> indexing topic.
>>>>>>>>>>>
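Casey's constraint above — that by the time a record reaches the indexing topic it must already be a JSON map — can be checked mechanically. A minimal sketch (my own illustration, not Metron code) of the kind of guard that explains why raw snort CSV lines on that topic cause errors:

```python
import json

def is_json_map(record: str) -> bool:
    """True only if the record parses as a JSON object (a map), which is
    what the enrichment and indexing topologies expect to consume."""
    try:
        obj = json.loads(record)
    except ValueError:
        return False
    return isinstance(obj, dict)

# A parsed/enriched message is a JSON map, so it is accepted
print(is_json_map('{"source.type": "snort", "timestamp": 1510168158107}'))  # True

# A raw snort CSV line is not JSON at all, so the indexing bolt rejects it
print(is_json_map('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP'))  # False
```

This matches the flow described earlier: raw lines belong on the snort topic, where the parser topology turns them into JSON maps before they ever reach indexing.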
>>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> No, I am no longer seeing the parsing topology error; here is
>>>>>>>>>>>> the full stack trace
>>>>>>>>>>>>
>>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>
>>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>>> Also, are you saying that you are no longer seeing the parser
>>>>>>>>>>>>> topology error?
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> If you click on the port (6704) there in those errors, what's
>>>>>>>>>>>>> the full stacktrace (that starts with the suggestion you file a JIRA)?
>>>>>>>>>>>>>
>>>>>>>>>>>>> What this means is that an exception is bleeding from the
>>>>>>>>>>>>> individual writer into the writer component (It should be handled in the
>>>>>>>>>>>>> writer itself).  The fact that it's happening for both HDFS and ES is
>>>>>>>>>>>>> telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>>> included.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> OK, I did what Zeolla said (cat snort.out | kafka producer
>>>>>>>>>>>>>> ....) and now the error at the storm parser topology is gone, but I am now
>>>>>>>>>>>>>> seeing this at the indexing topology
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test
>>>>>>>>>>>>>>> alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:
>>>>>>>>>>>>>>> 27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,
>>>>>>>>>>>>>>> 0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
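For reference, that single line splits cleanly into comma-separated fields. A small sketch of how it decomposes — the field positions here are inferred by eye from the sample itself, not taken from the Metron snort parser source, so treat the indices as assumptions:

```python
import csv
import io

# The single alert line quoted above, reassembled
line = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP,'
        '192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,'
        '08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,'
        '64,10,23403,76,77824,,,,')

# csv handles the double-quoted alert message field correctly
fields = next(csv.reader(io.StringIO(line)))

# Inferred positions (assumptions): 0=timestamp, 4=alert msg, 5=protocol,
# 6/7=src ip/port, 8/9=dst ip/port, 10/11=src/dst MAC
print(fields[0].strip())  # 01/11/17-20:49:18.107168
print(fields[4])          # 'snort test alert'
print(fields[5], fields[6], fields[8])  # TCP 192.168.66.1 192.168.66.121
```

Note the timestamp field carries a trailing space in the raw line, which is worth stripping before any date parsing.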
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I would download the entire snort.out file and run cat
>>>>>>>>>>>>>>>> snort.out | kafka-console-producer.sh ... to make sure there are no copy
>>>>>>>>>>>>>>>> paste problems
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> The snort parser is coded to support dates in this format:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then you may see
>>>>>>>>>>>>>>>>> this error I believe.
>>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> If this is the case, then you will need to modify the
>>>>>>>>>>>>>>>>> default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>>
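Otto's date-format point can be reproduced offline. Below is a minimal sketch, with one assumption on my part: that Python's strptime pattern `%m/%d/%y-%H:%M:%S.%f` is the equivalent of the Java pattern `MM/dd/yy-HH:mm:ss.SSSSSS` quoted above.

```python
from datetime import datetime

# Assumed Python equivalent of the snort parser's MM/dd/yy-HH:mm:ss.SSSSSS
SNORT_FORMAT = "%m/%d/%y-%H:%M:%S.%f"

def parse_snort_timestamp(ts: str) -> datetime:
    """Parse a snort alert timestamp, raising ValueError on a mismatch."""
    return datetime.strptime(ts.strip(), SNORT_FORMAT)

# The timestamp from the sample line in this thread: month first, so Jan 11
good = parse_snort_timestamp("01/11/17-20:49:18.107168 ")
print(good.month, good.day)  # 1 11

# A day-first (dd/MM/yy) record with day > 12 fails outright, which is one
# way the format mismatch described above would surface as a parse error
try:
    parse_snort_timestamp("25/12/17-20:49:18.107168")
except ValueError:
    print("rejected: day-first timestamp does not match the expected pattern")
```

Records whose day and month are both 12 or less would parse either way but land on the wrong date, so a silent mismatch is also possible.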
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’
>>>>>>>>>>>>>>>>> field/column is for a piece of data that is failing
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the format of the
>>>>>>>>>>>>>>>>> logs I am trying to push
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub canned data
>>>>>>>>>>>>>>>>> file? Maybe I could see its formatting and try following the same thing.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> That's how I am pushing my logs to the kafka topic
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> After running this command, I copy paste a few lines from
>>>>>>>>>>>>>>>>>> here: https://raw.githubusercontent.
>>>>>>>>>>>>>>>>>> com/apache/metron/master/metron-deployment/roles/
>>>>>>>>>>>>>>>>>> sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I am not getting any error here. I can also see these
>>>>>>>>>>>>>>>>>> lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the logs.

Re: Snort Logs

Posted by "Zeolla@GMail.com" <ze...@gmail.com>.
Can you restart storm and give it another shot?

Jon

On Mon, Nov 13, 2017, 00:30 Syed Hammad Tahir <ms...@itu.edu.pk> wrote:

> hi, This problem still persists guys .
>
> On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> Any solution to these issues guys?
>>
>> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> I have attached the output of this dump
>>>
>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>
>>>
>>>
>>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <ze...@gmail.com>
>>> wrote:
>>>
>>>> What is the output of:
>>>>
>>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>>
>>>> ?
>>>>
>>>> Jon
>>>>
>>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <ms...@itu.edu.pk>
>>>> wrote:
>>>>
>>>>> This is the script/command i used
>>>>>
>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>> --broker-list node1:6667 --topic snort
>>>>>
>>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>> --broker-list node1:6667 --topic snort
>>>>>>
>>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <ottobackwards@gmail.com
>>>>>> > wrote:
>>>>>>
>>>>>>> What topic?  what are the parameters you are calling the script with?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>
>>>>>>> The metron installation I have (single node based vm install) comes
>>>>>>> with sensor stubs. I assume that everything has already been done for those
>>>>>>> stub sensors to push the canned data. I am doing the similar thing,
>>>>>>> directly pushing the preformatted canned data to kafka topic. I can see the
>>>>>>> logs in kibana dashboard when I start stub sensor from monit but then I
>>>>>>> push the same logs myself, those errors pop that I have shown earlier.
>>>>>>>
>>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <ce...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> How did you start the snort parser topology and what's the parser
>>>>>>>> config (in zookeeper)?
>>>>>>>>
>>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> This is what I am doing
>>>>>>>>>
>>>>>>>>> sudo cat snort.out |
>>>>>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list
>>>>>>>>> node1:6667 --topic snort
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <ce...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Are you directly writing to the "indexing" kafka topic from the
>>>>>>>>>> parser or from some other source?  It looks like there are some records in
>>>>>>>>>> kafka that are not JSON.  By the time it gets to the indexing kafka topic,
>>>>>>>>>> it should be a JSON map.  The parser topology emits that JSON map and then
>>>>>>>>>> the enrichments topology enrich that map and emits the enriched map to the
>>>>>>>>>> indexing topic.
>>>>>>>>>>
>>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> No I am no longer seeing the parsing topology error, here is the
>>>>>>>>>>> full stack trace
>>>>>>>>>>>
>>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>
>>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>>> Also, are you saying that you are no longer seeing the parser
>>>>>>>>>>>> topology error?
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> If you click on the port (6704) there in those errors, what's
>>>>>>>>>>>> the full stacktrace (that starts with the suggestion you file a JIRA)?
>>>>>>>>>>>>
>>>>>>>>>>>> What this means is that an exception is bleeding from the
>>>>>>>>>>>> individual writer into the writer component (It should be handled in the
>>>>>>>>>>>> writer itself).  The fact that it's happening for both HDFS and ES is
>>>>>>>>>>>> telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>>> included.
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka producer ....
>>>>>>>>>>>>> and now the error at storm parser topology is gone but I am now seeing this
>>>>>>>>>>>>> at the indexing toology
>>>>>>>>>>>>>
>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test
>>>>>>>>>>>>>> alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I would download the entire snort.out file and run cat
>>>>>>>>>>>>>>> snort.out | kafka-console-producer.sh ... to make sure there are no copy
>>>>>>>>>>>>>>> paste problems
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> The snort parser is coded to support dates in this format:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then you may see
>>>>>>>>>>>>>>>> this error I believe.
>>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> If this is the case, then you will need to modify the
>>>>>>>>>>>>>>>> default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’ field/column
>>>>>>>>>>>>>>>> is for a piece of data that is failing
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the format of the
>>>>>>>>>>>>>>>> logs I am trying to push
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Can someone tell me the location of snort stub canned data
>>>>>>>>>>>>>>>> file? Maybe I could see its formatting and try following the same thing.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> thats how I am pushing my logs to kafka topic
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> After running this command, I copy paste a few lines from
>>>>>>>>>>>>>>>>> here:
>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I am not getting any error here. I can also see these
>>>>>>>>>>>>>>>>> lines pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> This was the mechanism I am using to push the logs.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I *think* you have tried both messages coming from snort
>>>>>>>>>>>>>>>>>> through some setup ( getting pushed to kafka ), which I think of as live.
>>>>>>>>>>>>>>>>>> I also think you have manually pushed messages, where you see this error.
>>>>>>>>>>>>>>>>>> So what I am asking is if you see the same errors for
>>>>>>>>>>>>>>>>>> things that are automatically pushed to kafka as you do when you manually
>>>>>>>>>>>>>>>>>> push them.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then that would be
>>>>>>>>>>>>>>>>>> a problem.  If you see this error with your ‘live’ messages as well then
>>>>>>>>>>>>>>>>>> that could be it.
>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?"
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I push them into
>>>>>>>>>>>>>>>>>> kafka topic then no, I don't see any error at that time. If 'live' means
>>>>>>>>>>>>>>>>>> something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then that would be
>>>>>>>>>>>>>>>>>>> a problem.  If you see this error with your ‘live’ messages as well then
>>>>>>>>>>>>>>>>>>> that could be it.
>>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> You need to confirm that you see these same errors with
>>>>>>>>>>>>>>>>>>> the live data or not.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology -> kafka
>>>>>>>>>>>>>>>>>>> -> Storm Enrichment Topology -> Kafka -> Storm Indexing Topology -> HDFS |
>>>>>>>>>>>>>>>>>>> ElasticSearch
>>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Any point in this chain could fail and result in Kibana
>>>>>>>>>>>>>>>>>>> not seeing things.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Could this be related to why I am unable to see logs in
>>>>>>>>>>>>>>>>>>> kibana dashboard?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> This is an error I am seeing in the Storm UI parser bolt in the
>>>>>>>>>>>>>>>>>>> snort section:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not able to get the
>>>>>>>>>>>>>>>>>>>> snort logs in kibana dashboard. Any help will be appreciated.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in /var/log/elasticsearch/
>>>>>>>>>>>>>>>>>>>>> is also relevant
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and that is
>>>>>>>>>>>>>>>>>>>>>> related to snort. Could it be the logs I was pushing to kafka topic earlier?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I be looking at
>>>>>>>>>>>>>>>>>>>>>>> here?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Hi, I am back at work. Let's see if I can find
>>>>>>>>>>>>>>>>>>>>>>>> something in the logs.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a health of Red,
>>>>>>>>>>>>>>>>>>>>>>>>> so there's your problem.  I would go look in /var/log/elasticsearch/ at
>>>>>>>>>>>>>>>>>>>>>>>>> some logs.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic search head.
>>>>>>>>>>>>>>>>>>>>>>>>>> Now where do I go in this to find out why I cant see the snort logs in
>>>>>>>>>>>>>>>>>>>>>>>>>> kibana dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome web browser
>>>>>>>>>>>>>>>>>>>>>>>>>>> from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch head on the
>>>>>>>>>>>>>>>>>>>>>>>>>>> vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>> --
>>>>
>>>> Jon
>>>>
>>>
>>>
>>
> --

Jon

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Hi, this problem still persists, guys.

On Thu, Nov 9, 2017 at 11:13 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> Any solution to these issues, guys?
>
> On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> I have attached the output of this dump
>>
>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>
>>
>>
>> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <ze...@gmail.com>
>> wrote:
>>
>>> What is the output of:
>>>
>>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>>
>>> ?
>>>
>>> Jon
>>>
>>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <ms...@itu.edu.pk>
>>> wrote:
>>>
>>>> This is the script/command I used
>>>>
>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>> --broker-list node1:6667 --topic snort
>>>>
>>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>> --broker-list node1:6667 --topic snort
>>>>>
>>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <ot...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> What topic? What are the parameters you are calling the script with?
>>>>>>
>>>>>>
>>>>>>
>>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>
>>>>>> The metron installation I have (single node based vm install) comes
>>>>>> with sensor stubs. I assume that everything has already been done for those
>>>>>> stub sensors to push the canned data. I am doing a similar thing,
>>>>>> directly pushing the preformatted canned data to the kafka topic. I can see the
>>>>>> logs in kibana dashboard when I start the stub sensor from monit, but when I
>>>>>> push the same logs myself, the errors I have shown earlier pop up.
>>>>>>
>>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <ce...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> How did you start the snort parser topology and what's the parser
>>>>>>> config (in zookeeper)?
>>>>>>>
>>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> This is what I am doing
>>>>>>>>
>>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <ce...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Are you directly writing to the "indexing" kafka topic from the
>>>>>>>>> parser or from some other source?  It looks like there are some records in
>>>>>>>>> kafka that are not JSON.  By the time it gets to the indexing kafka topic,
>>>>>>>>> it should be a JSON map.  The parser topology emits that JSON map and then
>>>>>>>>> the enrichments topology enrich that map and emits the enriched map to the
>>>>>>>>> indexing topic.
>>>>>>>>>
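Casey's point above, that a record must already be a JSON map by the time it reaches the indexing topic, can be sanity-checked with a small sketch. The field names below are illustrative only, not Metron's exact schema:

```python
import json

# A parsed Metron message is a JSON map by the time it reaches the
# "indexing" kafka topic (field names here are made up for illustration).
parsed = '{"timestamp": 1510168158107, "protocol": "TCP", "ip_src_addr": "192.168.66.1"}'
record = json.loads(parsed)
assert record["protocol"] == "TCP"

# A raw snort CSV line is NOT valid JSON, so a writer expecting a JSON
# map would fail on it, consistent with the error described above.
raw = '01/11/17-20:49:18.107168 ,1,999158,0,...'
try:
    json.loads(raw)
    raw_is_json = True
except ValueError:
    raw_is_json = False
assert raw_is_json is False
```

This is why writing raw snort lines straight to the indexing topic fails, while the same lines written to the snort topic get parsed into JSON first.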
>>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> No I am no longer seeing the parsing topology error, here is the
>>>>>>>>>> full stack trace
>>>>>>>>>>
>>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>
>>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>>> Also, are you saying that you are no longer seeing the parser
>>>>>>>>>>> topology error?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (
>>>>>>>>>>> cestella@gmail.com) wrote:
>>>>>>>>>>>
>>>>>>>>>>> If you click on the port (6704) there in those errors, what's
>>>>>>>>>>> the full stacktrace (that starts with the suggestion you file a JIRA)?
>>>>>>>>>>>
>>>>>>>>>>> What this means is that an exception is bleeding from the
>>>>>>>>>>> individual writer into the writer component (It should be handled in the
>>>>>>>>>>> writer itself).  The fact that it's happening for both HDFS and ES is
>>>>>>>>>>> telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>>> included.
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka producer ....
>>>>>>>>>>>> and now the error at storm parser topology is gone but I am now seeing this
>>>>>>>>>>>> at the indexing topology
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test
>>>>>>>>>>>>> alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00
>>>>>>>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6
>>>>>>>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>>
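For reference, the single alert line above splits cleanly with a CSV reader. The field meanings below follow snort's CSV alert output order as a best-effort reading; they are not stated anywhere in this thread:

```python
import csv

# The sample alert from the thread, reassembled onto one line.
line = ('01/11/17-20:49:18.107168 ,1,999158,0,'
        '"\'snort test alert\'",TCP,192.168.66.1,49581,192.168.66.121,22,'
        '0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,'
        '0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,')

# csv handles the quoted msg field, which contains no inner commas here
# but could in general.
fields = next(csv.reader([line]))

timestamp = fields[0].strip()   # note the stray trailing space in the raw line
sig_id = fields[2]
msg = fields[4]
proto = fields[5]
src_ip, src_port, dst_ip, dst_port = fields[6], fields[7], fields[8], fields[9]

assert timestamp == '01/11/17-20:49:18.107168'
assert sig_id == '999158'
assert proto == 'TCP'
assert src_ip == '192.168.66.1' and dst_port == '22'
```

Note the leading timestamp field carries a trailing space before its comma; a strict parser may or may not tolerate that.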
>>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <
>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> I would download the entire snort.out file and run cat
>>>>>>>>>>>>>> snort.out | kafka-console-producer.sh ... to make sure there are no copy
>>>>>>>>>>>>>> paste problems
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The snort parser is coded to support dates in this format:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then you may see
>>>>>>>>>>>>>>> this error I believe.
>>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> If this is the case, then you will need to modify the
>>>>>>>>>>>>>>> default log timestamp format for snort in the short term.
>>>>>>>>>>>>>>>
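As a rough illustration of Otto's point (in Python rather than the parser's Java, with `%f` standing in for the `SSSSSS` microsecond field), the MM/dd/yy pattern accepts the sample timestamp from this thread and rejects a day-first variant:

```python
from datetime import datetime

# Java pattern "MM/dd/yy-HH:mm:ss.SSSSSS" translated to strptime tokens.
fmt = '%m/%d/%y-%H:%M:%S.%f'

# The sample record's timestamp parses fine month-first.
ts = datetime.strptime('01/11/17-20:49:18.107168', fmt)
assert (ts.year, ts.month, ts.day) == (2017, 1, 11)
assert ts.microsecond == 107168

# A dd/MM/yy record (day 25 read as a month) raises ValueError,
# which mirrors the kind of parse failure discussed above.
try:
    datetime.strptime('25/11/17-20:49:18.107168', fmt)
    rejected = False
except ValueError:
    rejected = True
assert rejected
```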
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’ field/column
>>>>>>>>>>>>>>> is for a piece of data that is failing
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Now I am pretty sure that the issue is the format of the
>>>>>>>>>>>>>>> logs I am trying to push
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Can someone tell me the location of snort stub canned data
>>>>>>>>>>>>>>> file? Maybe I could see its formatting and try following the same thing.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> That's how I am pushing my logs to the kafka topic
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> After running this command, I copy paste a few lines from
>>>>>>>>>>>>>>>> here: https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I am not getting any error here. I can also see these lines
>>>>>>>>>>>>>>>> pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> This was the mechanism I am using to push the logs.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I *think* you have tried both messages coming from snort
>>>>>>>>>>>>>>>>> through some setup ( getting pushed to kafka ), which I think of as live.
>>>>>>>>>>>>>>>>> I also think you have manually pushed messages, where you see this error.
>>>>>>>>>>>>>>>>> So what I am asking is if you see the same errors for
>>>>>>>>>>>>>>>>> things that are automatically pushed to kafka as you do when you manually
>>>>>>>>>>>>>>>>> push them.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then that would be
>>>>>>>>>>>>>>>>> a problem.  If you see this error with your ‘live’ messages as well then
>>>>>>>>>>>>>>>>> that could be it.
>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?"
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I push them into
>>>>>>>>>>>>>>>>> kafka topic then no, I don't see any error at that time. If 'live' means
>>>>>>>>>>>>>>>>> something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then that would be
>>>>>>>>>>>>>>>>>> a problem.  If you see this error with your ‘live’ messages as well then
>>>>>>>>>>>>>>>>>> that could be it.
>>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> You need to confirm that you see these same errors with
>>>>>>>>>>>>>>>>>> the live data or not.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology -> kafka
>>>>>>>>>>>>>>>>>> -> Storm Enrichment Topology -> Kafka -> Storm Indexing Topology -> HDFS |
>>>>>>>>>>>>>>>>>> ElasticSearch
>>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Any point in this chain could fail and result in Kibana
>>>>>>>>>>>>>>>>>> not seeing things.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Could this be related to why I am unable to see logs in
>>>>>>>>>>>>>>>>>> kibana dashboard?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> This is an error I am seeing in the Storm UI parser bolt in the
>>>>>>>>>>>>>>>>>> snort section:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not able to get the
>>>>>>>>>>>>>>>>>>> snort logs in kibana dashboard. Any help will be appreciated.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in /var/log/elasticsearch/ is
>>>>>>>>>>>>>>>>>>>> also relevant
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and that is
>>>>>>>>>>>>>>>>>>>>> related to snort. Could it be the logs I was pushing to kafka topic earlier?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I be looking at
>>>>>>>>>>>>>>>>>>>>>> here?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Hi, I am back at work. Let's see if I can find
>>>>>>>>>>>>>>>>>>>>>>> something in the logs.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a health of Red,
>>>>>>>>>>>>>>>>>>>>>>>> so there's your problem.  I would go look in /var/log/elasticsearch/ at
>>>>>>>>>>>>>>>>>>>>>>>> some logs.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic search head. Now
>>>>>>>>>>>>>>>>>>>>>>>>> where do I go in this to find out why I cant see the snort logs in kibana
>>>>>>>>>>>>>>>>>>>>>>>>> dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome web browser
>>>>>>>>>>>>>>>>>>>>>>>>>> from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad
>>>>>>>>>>>>>>>>>>>>>>>>>> Tahir (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch head on the
>>>>>>>>>>>>>>>>>>>>>>>>>> vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>> --
>>>
>>> Jon
>>>
>>
>>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Any solution to these issues, guys?

On Thu, Nov 9, 2017 at 6:01 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> I have attached the output of this dump
>
> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>
>
>
> On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <ze...@gmail.com>
> wrote:
>
>> What is the output of:
>>
>> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>>
>> ?
>>
>> Jon
>>
>> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> This is the script/command I used
>>>
>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>> --broker-list node1:6667 --topic snort
>>>
>>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>> > wrote:
>>>
>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>> --broker-list node1:6667 --topic snort
>>>>
>>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <ot...@gmail.com>
>>>> wrote:
>>>>
>>>>> What topic? What are the parameters you are calling the script with?
>>>>>
>>>>>
>>>>>
>>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>
>>>>> The metron installation I have (single node based vm install) comes
>>>>> with sensor stubs. I assume that everything has already been done for those
>>>>> stub sensors to push the canned data. I am doing a similar thing,
>>>>> directly pushing the preformatted canned data to the kafka topic. I can see the
>>>>> logs in kibana dashboard when I start the stub sensor from monit, but when I
>>>>> push the same logs myself, the errors I have shown earlier pop up.
>>>>>
>>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <ce...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> How did you start the snort parser topology and what's the parser
>>>>>> config (in zookeeper)?
>>>>>>
>>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> This is what I am doing
>>>>>>>
>>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>>> --broker-list node1:6667 --topic snort
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <ce...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Are you directly writing to the "indexing" kafka topic from the
>>>>>>>> parser or from some other source?  It looks like there are some records in
>>>>>>>> kafka that are not JSON.  By the time it gets to the indexing kafka topic,
>>>>>>>> it should be a JSON map.  The parser topology emits that JSON map and then
>>>>>>>> the enrichments topology enrich that map and emits the enriched map to the
>>>>>>>> indexing topic.
>>>>>>>>
>>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> No I am no longer seeing the parsing topology error, here is the
>>>>>>>>> full stack trace
>>>>>>>>>
>>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>>
>>>>>>>>> [image: Inline image 1]
>>>>>>>>>
>>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>>
>>>>>>>>> [image: Inline image 2]
>>>>>>>>>
>>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>>> Also, are you saying that you are no longer seeing the parser
>>>>>>>>>> topology error?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (cestella@gmail.com)
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> If you click on the port (6704) there in those errors, what's the
>>>>>>>>>> full stacktrace (that starts with the suggestion you file a JIRA)?
>>>>>>>>>>
>>>>>>>>>> What this means is that an exception is bleeding from the
>>>>>>>>>> individual writer into the writer component (It should be handled in the
>>>>>>>>>> writer itself).  The fact that it's happening for both HDFS and ES is
>>>>>>>>>> telling as well and I'm very interested in the full stacktrace there
>>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>>> included.
>>>>>>>>>>
>>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka producer ....
>>>>>>>>>>> and now the error at storm parser topology is gone but I am now seeing this
>>>>>>>>>>> at the indexing topology
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test
>>>>>>>>>>>> alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:
>>>>>>>>>>>> 00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900B
>>>>>>>>>>>> B6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <
>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> I would download the entire snort.out file and run cat
>>>>>>>>>>>>> snort.out | kafka-console-producer.sh ... to make sure there are no copy
>>>>>>>>>>>>> paste problems
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <
>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> The snort parser is coded to support dates in this format:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then you may see
>>>>>>>>>>>>>> this error I believe.
>>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> If this is the case, then you will need to modify the default
>>>>>>>>>>>>>> log timestamp format for snort in the short term.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’ field/column
>>>>>>>>>>>>>> is for a piece of data that is failing
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Now I am pretty sure that the issue is the format of the logs
>>>>>>>>>>>>>> I am trying to push
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Can someone tell me the location of snort stub canned data
>>>>>>>>>>>>>> file? Maybe I could see its formatting and try following the same thing.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> That's how I am pushing my logs to the kafka topic
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> After running this command, I copy paste a few lines from
>>>>>>>>>>>>>>> here: https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I am not getting any error here. I can also see these lines
>>>>>>>>>>>>>>> pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> This was the mechanism I am using to push the logs.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I *think* you have tried both messages coming from snort
>>>>>>>>>>>>>>>> through some setup ( getting pushed to kafka ), which I think of as live.
>>>>>>>>>>>>>>>> I also think you have manually pushed messages, where you see this error.
>>>>>>>>>>>>>>>> So what I am asking is if you see the same errors for
>>>>>>>>>>>>>>>> things that are automatically pushed to kafka as you do when you manually
>>>>>>>>>>>>>>>> push them.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then that would be a
>>>>>>>>>>>>>>>> problem.  If you see this error with your ‘live’ messages as well then that
>>>>>>>>>>>>>>>> could be it.
>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?"
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> If by 'live' messages you mean the time I push them into
>>>>>>>>>>>>>>>> kafka topic then no, I don't see any error at that time. If 'live' means
>>>>>>>>>>>>>>>> something else here then please tell me what could it be.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then that would be a
>>>>>>>>>>>>>>>>> problem.  If you see this error with your ‘live’ messages as well then that
>>>>>>>>>>>>>>>>> could be it.
>>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> You need to confirm that you see these same errors with
>>>>>>>>>>>>>>>>> the live data or not.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology -> kafka ->
>>>>>>>>>>>>>>>>> Storm Enrichment Topology -> Kafka -> Storm Indexing Topology -> HDFS |
>>>>>>>>>>>>>>>>> ElasticSearch
>>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Any point in this chain could fail and result in Kibana
>>>>>>>>>>>>>>>>> not seeing things.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> could this be related to why I am unable to see logs in
>>>>>>>>>>>>>>>>> kibana dashboard?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> This is some error I am seeing in the Storm UI parser bolt in
>>>>>>>>>>>>>>>>> the snort section:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not able to get the
>>>>>>>>>>>>>>>>>> snort logs in kibana dashboard. Any help will be appreciated.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I guess this (metron.log) in /var/log/elasticsearch/ is
>>>>>>>>>>>>>>>>>>> also relevant
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and that is related
>>>>>>>>>>>>>>>>>>>> to snort. Could it be the logs I was pushing to kafka topic earlier?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I be looking at
>>>>>>>>>>>>>>>>>>>>> here?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Hi, I am back at work. Let's see if I can find
>>>>>>>>>>>>>>>>>>>>>> something in the logs.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a health of Red,
>>>>>>>>>>>>>>>>>>>>>>> so there's your problem.  I would go look in /var/log/elasticsearch/ at
>>>>>>>>>>>>>>>>>>>>>>> some logs.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed elasticsearch-head. Now
>>>>>>>>>>>>>>>>>>>>>>>> where do I go in this to find out why I can't see the snort logs in the
>>>>>>>>>>>>>>>>>>>>>>>> kibana dashboard, pushed to the snort topic via the kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome web browser
>>>>>>>>>>>>>>>>>>>>>>>>> from the play store.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir
>>>>>>>>>>>>>>>>>>>>>>>>> (mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch head on the
>>>>>>>>>>>>>>>>>>>>>>>>> vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>> --
>>
>> Jon
>>
>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
I have attached the output of this dump

/usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP



On Thu, Nov 9, 2017 at 12:06 AM, Zeolla@GMail.com <ze...@gmail.com> wrote:

> What is the output of:
>
> /usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP
>
> ?
>
> Jon
>
> On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> This is the script/command I used
>>
>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>> --broker-list node1:6667 --topic snort
>>
>> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>> --broker-list node1:6667 --topic snort
>>>
>>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <ot...@gmail.com>
>>> wrote:
>>>
>>>> What topic? What are the parameters you are calling the script with?
>>>>
>>>>
>>>>
>>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (
>>>> mscs16059@itu.edu.pk) wrote:
>>>>
>>>> The metron installation I have (single-node VM install) comes
>>>> with sensor stubs. I assume that everything has already been done for those
>>>> stub sensors to push the canned data. I am doing a similar thing,
>>>> directly pushing the preformatted canned data to the kafka topic. I can see the
>>>> logs in the kibana dashboard when I start the stub sensor from monit, but when I
>>>> push the same logs myself, the errors I showed earlier pop up.
>>>>
>>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <ce...@gmail.com>
>>>> wrote:
>>>>
>>>>> How did you start the snort parser topology and what's the parser
>>>>> config (in zookeeper)?
>>>>>
>>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> This is what I am doing
>>>>>>
>>>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>>>> --broker-list node1:6667 --topic snort
>>>>>>
>>>>>>
>>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <ce...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Are you directly writing to the "indexing" kafka topic from the
>>>>>>> parser or from some other source?  It looks like there are some records in
>>>>>>> kafka that are not JSON.  By the time it gets to the indexing kafka topic,
>>>>>>> it should be a JSON map.  The parser topology emits that JSON map and then
>>>>>>> the enrichments topology enrich that map and emits the enriched map to the
>>>>>>> indexing topic.
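Casey's description above — the parser emits a JSON map, enrichment enriches that map, and everything on the indexing topic must already be a JSON map — suggests a quick way to spot bad records. A minimal sketch in ordinary Python (not Metron code; the sample records are illustrative):

```python
import json

def non_json_records(lines):
    """Return (line_number, text) pairs for records that are not JSON maps."""
    bad = []
    for n, line in enumerate(lines, start=1):
        line = line.strip()
        if not line:
            continue
        try:
            parsed = json.loads(line)
        except ValueError:
            bad.append((n, line))  # not JSON at all (e.g. a raw snort CSV line)
            continue
        if not isinstance(parsed, dict):
            bad.append((n, line))  # valid JSON, but not a map
    return bad

records = [
    '{"msg": "snort test alert", "protocol": "TCP"}',             # what indexing expects
    '01/11/17-20:49:18.107168 ,1,999158,0,"snort test alert",TCP',  # raw CSV
]
print(non_json_records(records))  # flags line 2 only
```

Running something like this over what a kafka console consumer reads from the indexing topic would show whether raw, unparsed lines are ending up there.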
>>>>>>>
>>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> No I am no longer seeing the parsing topology error, here is the
>>>>>>>> full stack trace
>>>>>>>>
>>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>>
>>>>>>>> [image: Inline image 1]
>>>>>>>>
>>>>>>>> from indexingbolt in indexing topology
>>>>>>>>
>>>>>>>> [image: Inline image 2]
>>>>>>>>
>>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>>> Also, are you saying that you are no longer seeing the parser
>>>>>>>>> topology error?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (cestella@gmail.com)
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> If you click on the port (6704) there in those errors, what's the
>>>>>>>>> full stacktrace (that starts with the suggestion you file a JIRA)?
>>>>>>>>>
>>>>>>>>> What this means is that an exception is bleeding from the
>>>>>>>>> individual writer into the writer component (It should be handled in the
>>>>>>>>> writer itself).  The fact that it's happening for both HDFS and ES is
>>>>>>>>> telling as well and I'm very interested in the full stacktrace there
>>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>>> included.
>>>>>>>>>
>>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> OK, I did what Zeolla said (cat snort.out | kafka producer ....)
>>>>>>>>>> and now the error at the storm parser topology is gone, but I am now seeing
>>>>>>>>>> this at the indexing topology
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <
>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> I would download the entire snort.out file and run cat
>>>>>>>>>>>> snort.out | kafka-console-producer.sh ... to make sure there are no copy
>>>>>>>>>>>> paste problems
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> The snort parser is coded to support dates in this format:
>>>>>>>>>>>>>
>>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then you may see
>>>>>>>>>>>>> this error I believe.
>>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>>
>>>>>>>>>>>>> If this is the case, then you will need to modify the default
>>>>>>>>>>>>> log timestamp format for snort in the short term.
>>>>>>>>>>>>>
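The date-format concern above can be checked outside of Storm. Java's MM/dd/yy-HH:mm:ss.SSSSSS corresponds roughly to %m/%d/%y-%H:%M:%S.%f in Python, so as an illustration (not Metron code):

```python
from datetime import datetime

# Rough Python equivalent of the parser's "MM/dd/yy-HH:mm:ss.SSSSSS"
SNORT_FORMAT = "%m/%d/%y-%H:%M:%S.%f"

ts = datetime.strptime("01/11/17-20:49:18.107168", SNORT_FORMAT)
print(ts.month, ts.day)  # read month-first: month=1, day=11

# A day-first record whose day exceeds 12 fails under the month-first pattern:
try:
    datetime.strptime("25/11/17-20:49:18.107168", SNORT_FORMAT)
except ValueError:
    print("month-first parse rejected 25/11/17")
```

Note that a value like 01/11/17 is ambiguous: it parses under either field order without error, just with month and day swapped, so a wrong format will not always show up as a parse failure.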
>>>>>>>>>>>>>
>>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Can you post what the value of the ‘timestamp’ field/column is
>>>>>>>>>>>>> for a piece of data that is failing
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Now I am pretty sure that the issue is the format of the logs
>>>>>>>>>>>>> I am trying to push
>>>>>>>>>>>>>
>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>
>>>>>>>>>>>>> Can someone tell me the location of snort stub canned data
>>>>>>>>>>>>> file? Maybe I could see its formatting and try following the same thing.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> That's how I am pushing my logs to the kafka topic
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> After running this command, I copy paste a few lines from
>>>>>>>>>>>>>> here: https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> like this
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I am not getting any error here. I can also see these lines
>>>>>>>>>>>>>> pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This was the mechanism I am using to push the logs.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I *think* you have tried both messages coming from snort
>>>>>>>>>>>>>>> through some setup ( getting pushed to kafka ), which I think of as live.
>>>>>>>>>>>>>>> I also think you have manually pushed messages, where you see this error.
>>>>>>>>>>>>>>> So what I am asking is if you see the same errors for things
>>>>>>>>>>>>>>> that are automatically pushed to kafka as you do when you manually push them.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then that would be a
>>>>>>>>>>>>>>> problem.  If you see this error with your ‘live’ messages as well then that
>>>>>>>>>>>>>>> could be it.
>>>>>>>>>>>>>>> I wonder if the issue is with the date format?"
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> If by 'live' messages you mean the time I push them into
>>>>>>>>>>>>>>> the kafka topic then no, I don't see any error at that time. If 'live'
>>>>>>>>>>>>>>> means something else here then please tell me what it could be.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then that would be a
>>>>>>>>>>>>>>>> problem.  If you see this error with your ‘live’ messages as well then that
>>>>>>>>>>>>>>>> could be it.
>>>>>>>>>>>>>>>> I wonder if the issue is with the date format?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> You need to confirm that you see these same errors with the
>>>>>>>>>>>>>>>> live data or not.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology -> kafka ->
>>>>>>>>>>>>>>>> Storm Enrichment Topology -> Kafka -> Storm Indexing Topology -> HDFS |
>>>>>>>>>>>>>>>> ElasticSearch
>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Any point in this chain could fail and result in Kibana not
>>>>>>>>>>>>>>>> seeing things.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> could this be related to why I am unable to see logs in
>>>>>>>>>>>>>>>> kibana dashboard?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> This is some error I am seeing in the Storm UI parser bolt in
>>>>>>>>>>>>>>>> the snort section:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not able to get the
>>>>>>>>>>>>>>>>> snort logs in kibana dashboard. Any help will be appreciated.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I guess this (metron.log) in /var/log/elasticsearch/ is
>>>>>>>>>>>>>>>>>> also relevant
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and that is related
>>>>>>>>>>>>>>>>>>> to snort. Could it be the logs I was pushing to kafka topic earlier?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> This is what I see here. What should I be looking at
>>>>>>>>>>>>>>>>>>>> here?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Hi, I am back at work. Let's see if I can find
>>>>>>>>>>>>>>>>>>>>> something in the logs.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a health of Red, so
>>>>>>>>>>>>>>>>>>>>>> there's your problem.  I would go look in /var/log/elasticsearch/ at some
>>>>>>>>>>>>>>>>>>>>>> logs.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed elasticsearch-head. Now
>>>>>>>>>>>>>>>>>>>>>>> where do I go in this to find out why I can't see the snort logs in the
>>>>>>>>>>>>>>>>>>>>>>> kibana dashboard, pushed to the snort topic via the kafka producer?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome web browser from
>>>>>>>>>>>>>>>>>>>>>>>> the play store.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch head on the
>>>>>>>>>>>>>>>>>>>>>>>> vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>> --
>
> Jon
>

Re: Snort Logs

Posted by "Zeolla@GMail.com" <ze...@gmail.com>.
What is the output of:

/usr/metron/0.4.1/bin/zk_load_configs.sh -z node1:2181 -m DUMP

?

Jon

On Wed, Nov 8, 2017 at 1:49 PM Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> This is the script/command I used
>
> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
> --broker-list node1:6667 --topic snort
>
> On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>> --broker-list node1:6667 --topic snort
>>
>> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <ot...@gmail.com>
>> wrote:
>>
>>> What topic? What are the parameters you are calling the script with?
>>>
>>>
>>>
>>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (mscs16059@itu.edu.pk)
>>> wrote:
>>>
>>> The metron installation I have (single-node VM install) comes with
>>> sensor stubs. I assume that everything has already been done for those stub
>>> sensors to push the canned data. I am doing a similar thing, directly
>>> pushing the preformatted canned data to the kafka topic. I can see the logs in
>>> the kibana dashboard when I start the stub sensor from monit, but when I push the
>>> same logs myself, the errors I showed earlier pop up.
>>>
>>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <ce...@gmail.com>
>>> wrote:
>>>
>>>> How did you start the snort parser topology and what's the parser
>>>> config (in zookeeper)?
>>>>
>>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>>> > wrote:
>>>>
>>>>> This is what I am doing
>>>>>
>>>>> sudo cat snort.out |
>>>>> /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list
>>>>> node1:6667 --topic snort
>>>>>
>>>>>
>>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <ce...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Are you directly writing to the "indexing" kafka topic from the
>>>>>> parser or from some other source?  It looks like there are some records in
>>>>>> kafka that are not JSON.  By the time it gets to the indexing kafka topic,
>>>>>> it should be a JSON map.  The parser topology emits that JSON map and then
>>>>>> the enrichments topology enrich that map and emits the enriched map to the
>>>>>> indexing topic.
>>>>>>
>>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> No I am no longer seeing the parsing topology error, here is the
>>>>>>> full stack trace
>>>>>>>
>>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> from indexingbolt in indexing topology
>>>>>>>
>>>>>>> [image: Inline image 2]
>>>>>>>
>>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <
>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>
>>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>>> Also, are you saying that you are no longer seeing the parser
>>>>>>>> topology error?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (cestella@gmail.com)
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> If you click on the port (6704) there in those errors, what's the
>>>>>>>> full stacktrace (that starts with the suggestion you file a JIRA)?
>>>>>>>>
>>>>>>>> What this means is that an exception is bleeding from the
>>>>>>>> individual writer into the writer component (It should be handled in the
>>>>>>>> writer itself).  The fact that it's happening for both HDFS and ES is
>>>>>>>> telling as well and I'm very interested in the full stacktrace there
>>>>>>>> because it'll have the wrapped exception from the individual writer
>>>>>>>> included.
>>>>>>>>
>>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> OK, I did what Zeolla said (cat snort.out | kafka producer ....) and
>>>>>>>>> now the error at the storm parser topology is gone, but I am now seeing this at
>>>>>>>>> the indexing topology
>>>>>>>>>
>>>>>>>>> [image: Inline image 1]
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> this is a single line I am trying to push
>>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>>
>>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <
>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> I would download the entire snort.out file and run cat snort.out
>>>>>>>>>>> | kafka-console-producer.sh ... to make sure there are no copy paste
>>>>>>>>>>> problems
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <ot...@gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> The snort parser is coded to support dates in this format:
>>>>>>>>>>>>
>>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> If your records are in dd/MM/yy-  format, then you may see this
>>>>>>>>>>>> error I believe.
>>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>>
>>>>>>>>>>>> If this is the case, then you will need to modify the default
>>>>>>>>>>>> log timestamp format for snort in the short term.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Can you post what the value of the ‘timestamp’ field/column is
>>>>>>>>>>>> for a piece of data that is failing
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Now I am pretty sure that the issue is the format of the logs I
>>>>>>>>>>>> am trying to push
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>
>>>>>>>>>>>> Can someone tell me the location of snort stub canned data
>>>>>>>>>>>> file? Maybe I could see its formatting and try following the same thing.
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> That's how I am pushing my logs to the kafka topic
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>
>>>>>>>>>>>>> After running this command, I copy paste a few lines from
>>>>>>>>>>>>> here:
>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>
>>>>>>>>>>>>> like this
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>>
>>>>>>>>>>>>> I am not getting any error here. I can also see these lines
>>>>>>>>>>>>> pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>>
>>>>>>>>>>>>> This was the mechanism I am using to push the logs.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I *think* you have tried both messages coming from snort
>>>>>>>>>>>>>> through some setup ( getting pushed to kafka ), which I think of as live.
>>>>>>>>>>>>>> I also think you have manually pushed messages, where you see this error.
>>>>>>>>>>>>>> So what I am asking is if you see the same errors for things
>>>>>>>>>>>>>> that are automatically pushed to kafka as you do when you manually push them.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then that would be a
>>>>>>>>>>>>>> problem.  If you see this error with your ‘live’ messages as well then that
>>>>>>>>>>>>>> could be it.
>>>>>>>>>>>>>> I wonder if the issue is with the date format?"
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> If by 'live' messages you mean the time I push them into
>>>>>>>>>>>>>> the kafka topic then no, I don't see any error at that time. If 'live'
>>>>>>>>>>>>>> means something else here then please tell me what it could be.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then that would be a
>>>>>>>>>>>>>>> problem.  If you see this error with your ‘live’ messages as well then that
>>>>>>>>>>>>>>> could be it.
>>>>>>>>>>>>>>> I wonder if the issue is with the date format?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> You need to confirm that you see these same errors with the
>>>>>>>>>>>>>>> live data or not.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology -> kafka ->
>>>>>>>>>>>>>>> Storm Enrichment Topology -> Kafka -> Storm Indexing Topology -> HDFS |
>>>>>>>>>>>>>>> ElasticSearch
>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Any point in this chain could fail and result in Kibana not
>>>>>>>>>>>>>>> seeing things.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> could this be related to why I am unable to see logs in
>>>>>>>>>>>>>>> kibana dashboard?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> This is some error I am seeing in the Storm UI parser bolt in
>>>>>>>>>>>>>>> the snort section:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not able to get the
>>>>>>>>>>>>>>>> snort logs in kibana dashboard. Any help will be appreciated.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I guess this (metron.log) in /var/log/elasticsearch/ is
>>>>>>>>>>>>>>>>> also relevant
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> looks like some shard is unassigned and that is related
>>>>>>>>>>>>>>>>>> to snort. Could it be the logs I was pushing to kafka topic earlier?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> This is what I see here. What should I be looking at
>>>>>>>>>>>>>>>>>>> here?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> hi, I am back at work. lets see if i can find something
>>>>>>>>>>>>>>>>>>>> in logs
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a health of Red, so
>>>>>>>>>>>>>>>>>>>>> there's your problem.  I would go look in /var/log/elasticsearch/ at some
>>>>>>>>>>>>>>>>>>>>> logs.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic search head. Now
>>>>>>>>>>>>>>>>>>>>>> where do I go in this to find out why I cant see the snort logs in kibana
>>>>>>>>>>>>>>>>>>>>>> dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome web browser from
>>>>>>>>>>>>>>>>>>>>>>> the play store.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch head on the
>>>>>>>>>>>>>>>>>>>>>>> vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>
>>>>>>>>>>> Jon
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
> --

Jon

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
This is the script/command I used:

sudo cat snort.out |
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
--broker-list node1:6667 --topic snort
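Since the discussion below traces the failures to the timestamp field, one quick check before producing is whether a record's first field matches the parser's default pattern. A minimal sketch (Python used purely for illustration; the `%`-style pattern is my translation of the Java `MM/dd/yy-HH:mm:ss.SSSSSS` format quoted later in the thread, not Metron code):

```python
from datetime import datetime

# Java "MM/dd/yy-HH:mm:ss.SSSSSS" translated to a Python strptime pattern
# (assumption: %f covers the six-digit fractional seconds).
SNORT_TS_FORMAT = "%m/%d/%y-%H:%M:%S.%f"

# Timestamp field from the sample record discussed in the thread.
sample = "01/11/17-20:49:18.107168"
parsed = datetime.strptime(sample, SNORT_TS_FORMAT)

# Note the ambiguity Otto raises: under MM/dd this is January 11, 2017,
# while a dd/MM reading would make it November 1, 2017.
print(parsed)  # 2017-01-11 20:49:18.107168
```

If `strptime` raises `ValueError` on your own records, the timestamp format (rather than Kafka) is the likely culprit.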

On Wed, Nov 8, 2017 at 11:18 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
> --broker-list node1:6667 --topic snort
>
> On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <ot...@gmail.com>
> wrote:
>
>> What topic?  what are the parameters you are calling the script with?
>>
>>
>>
>> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (mscs16059@itu.edu.pk)
>> wrote:
>>
>> The metron installation I have (single node based vm install) comes with
>> sensor stubs. I assume that everything has already been done for those stub
>> sensors to push the canned data. I am doing the similar thing, directly
>> pushing the preformatted canned data to kafka topic. I can see the logs in
>> kibana dashboard when I start stub sensor from monit but then I push the
>> same logs myself, those errors pop that I have shown earlier.
>>
>> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <ce...@gmail.com> wrote:
>>
>>> How did you start the snort parser topology and what's the parser config
>>> (in zookeeper)?
>>>
>>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
>>> wrote:
>>>
>>>> This is what I am doing
>>>>
>>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>>> --broker-list node1:6667 --topic snort
>>>>
>>>>
>>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <ce...@gmail.com>
>>>> wrote:
>>>>
>>>>> Are you directly writing to the "indexing" kafka topic from the parser
>>>>> or from some other source?  It looks like there are some records in kafka
>>>>> that are not JSON.  By the time it gets to the indexing kafka topic, it
>>>>> should be a JSON map.  The parser topology emits that JSON map and then the
>>>>> enrichments topology enrich that map and emits the enriched map to the
>>>>> indexing topic.
>>>>>
>>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> No I am no longer seeing the parsing topology error, here is the full
>>>>>> stack trace
>>>>>>
>>>>>> from hdfsindexingbolt in indexing topology
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>> from indexingbolt in indexing topology
>>>>>>
>>>>>> [image: Inline image 2]
>>>>>>
>>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <ottobackwards@gmail.com
>>>>>> > wrote:
>>>>>>
>>>>>>> What Casey said.  We need the whole stack trace.
>>>>>>> Also, are you saying that you are no longer seeing the parser
>>>>>>> topology error?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (cestella@gmail.com)
>>>>>>> wrote:
>>>>>>>
>>>>>>> If you click on the port (6704) there in those errors, what's the
>>>>>>> full stacktrace (that starts with the suggestion you file a JIRA)?
>>>>>>>
>>>>>>> What this means is that an exception is bleeding from the individual
>>>>>>> writer into the writer component (It should be handled in the writer
>>>>>>> itself).  The fact that it's happening for both HDFS and ES is telling as
>>>>>>> well and I'm very interested in the full stacktrace there because it'll
>>>>>>> have the wrapped exception from the individual writer included.
>>>>>>>
>>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> OK I did what Zeolla said, cat snort.out | kafka producer .... and
>>>>>>>> now the error at storm parser topology is gone but I am now seeing this at
>>>>>>>> the indexing toology
>>>>>>>>
>>>>>>>> [image: Inline image 1]
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> this is a single line I am trying to push
>>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test
>>>>>>>>> alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00
>>>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6
>>>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>>
>>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <zeolla@gmail.com
>>>>>>>>> > wrote:
>>>>>>>>>
>>>>>>>>>> I would download the entire snort.out file and run cat snort.out
>>>>>>>>>> | kafka-console-producer.sh ... to make sure there are no copy paste
>>>>>>>>>> problems
>>>>>>>>>>
>>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <ot...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> The snort parser is coded to support dates in this format:
>>>>>>>>>>>
>>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> If your records are in dd/MM/yy-  format, then you may see this
>>>>>>>>>>> error I believe.
>>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>>
>>>>>>>>>>> If this is the case, then you will need to modify the default
>>>>>>>>>>> log timestamp format for snort in the short term.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>>
>>>>>>>>>>> Can you post what the value of the ‘timestamp’ field/column is
>>>>>>>>>>> for a piece of data that is failing
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>
>>>>>>>>>>> Now I am pretty sure that the issue is the format of the logs I
>>>>>>>>>>> am trying to push
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>
>>>>>>>>>>> Can someone tell me the location of snort stub canned data file?
>>>>>>>>>>> Maybe I could see its formatting and try following the same thing.
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> thats how I am pushing my logs to kafka topic
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>
>>>>>>>>>>>> After running this command, I copy paste a few lines from here:
>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metro
>>>>>>>>>>>> n-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>
>>>>>>>>>>>> like this
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>>
>>>>>>>>>>>> I am not getting any error here. I can also see these lines
>>>>>>>>>>>> pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>>
>>>>>>>>>>>> This was the mechanism I am using to push the logs.
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>>
>>>>>>>>>>>>> I *think* you have tried both messages coming from snort
>>>>>>>>>>>>> through some setup ( getting pushed to kafka ), which I think of as live.
>>>>>>>>>>>>> I also think you have manually pushed messages, where you see this error.
>>>>>>>>>>>>> So what I am asking is if you see the same errors for things
>>>>>>>>>>>>> that are automatically pushed to kafka as you do when you manually push them.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> "Yes, If the messages cannot be parsed then that would be a
>>>>>>>>>>>>> problem.  If you see this error with your ‘live’ messages as well then that
>>>>>>>>>>>>> could be it.
>>>>>>>>>>>>> I wonder if the issue is with the date format?"
>>>>>>>>>>>>>
>>>>>>>>>>>>> If by 'live' messages you mean the time I push them into kafka
>>>>>>>>>>>>> topic then no, I dont see any error at that time. If 'live' means something
>>>>>>>>>>>>> else here then please tell me what could it be.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Yes, If the messages cannot be parsed then that would be a
>>>>>>>>>>>>>> problem.  If you see this error with your ‘live’ messages as well then that
>>>>>>>>>>>>>> could be it.
>>>>>>>>>>>>>> I wonder if the issue is with the date format?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> You need to confirm that you see these same errors with the
>>>>>>>>>>>>>> live data or not.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology -> kafka ->
>>>>>>>>>>>>>> Storm Enrichment Topology -> Kafka -> Storm Indexing Topology -> HDFS |
>>>>>>>>>>>>>> ElasticSearch
>>>>>>>>>>>>>> then
>>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Any point in this chain could fail and result in Kibana not
>>>>>>>>>>>>>> seeing things.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> could this be related to why I am unable to see logs in
>>>>>>>>>>>>>> kibana dashboard?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metro
>>>>>>>>>>>>>> n-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This is some error I am seeing in stormUI parser bolt in
>>>>>>>>>>>>>> snort section:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I guess I have hit a dead end. I am not able to get the
>>>>>>>>>>>>>>> snort logs in kibana dashboard. Any help will be appreciated.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I guess this (metron.log) in /var/log/elasticsearch/ is
>>>>>>>>>>>>>>>> also relevant
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> looks like some shard is unassigned and that is related to
>>>>>>>>>>>>>>>>> snort. Could it be the logs I was pushing to kafka topic earlier?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> This is what I see here. What should I be looking at
>>>>>>>>>>>>>>>>>> here?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> hi, I am back at work. lets see if i can find something
>>>>>>>>>>>>>>>>>>> in logs
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a health of Red, so
>>>>>>>>>>>>>>>>>>>> there's your problem.  I would go look in /var/log/elasticsearch/ at some
>>>>>>>>>>>>>>>>>>>> logs.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic search head. Now
>>>>>>>>>>>>>>>>>>>>> where do I go in this to find out why I cant see the snort logs in kibana
>>>>>>>>>>>>>>>>>>>>> dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome web browser from
>>>>>>>>>>>>>>>>>>>>>> the play store.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch head on the
>>>>>>>>>>>>>>>>>>>>>> vagrant VM?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
sudo cat snort.out |
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
--broker-list node1:6667 --topic snort
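Jon's suggestion further down (cat the whole snort.out rather than copy/paste) can be paired with a quick pre-flight pass over the file. This sketch (a hypothetical helper, assuming the canned snort.out layout of comma-separated fields with the timestamp first) flags lines that would fail the parser's default date format before they ever reach the topic:

```python
from datetime import datetime

def looks_like_snort_record(line: str) -> bool:
    """Rough pre-flight check for one line of snort.out before producing it.

    Assumptions: comma-separated alert output, timestamp as the first
    field, in the parser's default MM/dd/yy-HH:mm:ss.SSSSSS format.
    """
    line = line.strip()
    if not line:
        return False
    ts = line.split(",", 1)[0].strip()
    try:
        datetime.strptime(ts, "%m/%d/%y-%H:%M:%S.%f")
        return True
    except ValueError:
        return False

# First few fields of the sample record from the thread.
good = '01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP'
print(looks_like_snort_record(good))           # True
print(looks_like_snort_record("random text"))  # False
```

Filtering the file through a check like this before piping it to kafka-console-producer.sh separates format problems from delivery problems.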

On Wed, Nov 8, 2017 at 11:14 PM, Otto Fowler <ot...@gmail.com>
wrote:

> What topic?  what are the parameters you are calling the script with?
>
>
>
> On November 8, 2017 at 13:12:56, Syed Hammad Tahir (mscs16059@itu.edu.pk)
> wrote:
>
> The metron installation I have (single node based vm install) comes with
> sensor stubs. I assume that everything has already been done for those stub
> sensors to push the canned data. I am doing the similar thing, directly
> pushing the preformatted canned data to kafka topic. I can see the logs in
> kibana dashboard when I start stub sensor from monit but then I push the
> same logs myself, those errors pop that I have shown earlier.
>
> On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <ce...@gmail.com> wrote:
>
>> How did you start the snort parser topology and what's the parser config
>> (in zookeeper)?
>>
>> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> This is what I am doing
>>>
>>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>>> --broker-list node1:6667 --topic snort
>>>
>>>
>>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <ce...@gmail.com>
>>> wrote:
>>>
>>>> Are you directly writing to the "indexing" kafka topic from the parser
>>>> or from some other source?  It looks like there are some records in kafka
>>>> that are not JSON.  By the time it gets to the indexing kafka topic, it
>>>> should be a JSON map.  The parser topology emits that JSON map and then the
>>>> enrichments topology enrich that map and emits the enriched map to the
>>>> indexing topic.
>>>>
>>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> No I am no longer seeing the parsing topology error, here is the full
>>>>> stack trace
>>>>>
>>>>> from hdfsindexingbolt in indexing topology
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> from indexingbolt in indexing topology
>>>>>
>>>>> [image: Inline image 2]
>>>>>
>>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <ot...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> What Casey said.  We need the whole stack trace.
>>>>>> Also, are you saying that you are no longer seeing the parser
>>>>>> topology error?
>>>>>>
>>>>>>
>>>>>>
>>>>>> On November 8, 2017 at 11:39:06, Casey Stella (cestella@gmail.com)
>>>>>> wrote:
>>>>>>
>>>>>> If you click on the port (6704) there in those errors, what's the
>>>>>> full stacktrace (that starts with the suggestion you file a JIRA)?
>>>>>>
>>>>>> What this means is that an exception is bleeding from the individual
>>>>>> writer into the writer component (It should be handled in the writer
>>>>>> itself).  The fact that it's happening for both HDFS and ES is telling as
>>>>>> well and I'm very interested in the full stacktrace there because it'll
>>>>>> have the wrapped exception from the individual writer included.
>>>>>>
>>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> OK I did what Zeolla said, cat snort.out | kafka producer .... and
>>>>>>> now the error at storm parser topology is gone but I am now seeing this at
>>>>>>> the indexing topology
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> this is a single line I am trying to push
>>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test
>>>>>>>> alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00
>>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6
>>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>>
>>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> I would download the entire snort.out file and run cat snort.out |
>>>>>>>>> kafka-console-producer.sh ... to make sure there are no copy paste problems
>>>>>>>>>
>>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <ot...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> The snort parser is coded to support dates in this format:
>>>>>>>>>>
>>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> If your records are in dd/MM/yy-  format, then you may see this
>>>>>>>>>> error I believe.
>>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>>
>>>>>>>>>> If this is the case, then you will need to modify the default log
>>>>>>>>>> timestamp format for snort in the short term.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>>
>>>>>>>>>> Can you post what the value of the ‘timestamp’ field/column is
>>>>>>>>>> for a piece of data that is failing
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>
>>>>>>>>>> Now I am pretty sure that the issue is the format of the logs I
>>>>>>>>>> am trying to push
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>> Can someone tell me the location of snort stub canned data file?
>>>>>>>>>> Maybe I could see its formatting and try following the same thing.
>>>>>>>>>>
>>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> thats how I am pushing my logs to kafka topic
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>
>>>>>>>>>>> After running this command, I copy paste a few lines from here:
>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metro
>>>>>>>>>>> n-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>
>>>>>>>>>>> like this
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>>
>>>>>>>>>>> I am not getting any error here. I can also see these lines
>>>>>>>>>>> pushed out via kafka consumer under topic of snort.
>>>>>>>>>>>
>>>>>>>>>>> This was the mechanism I am using to push the logs.
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>>
>>>>>>>>>>>> I *think* you have tried both messages coming from snort
>>>>>>>>>>>> through some setup ( getting pushed to kafka ), which I think of as live.
>>>>>>>>>>>> I also think you have manually pushed messages, where you see this error.
>>>>>>>>>>>> So what I am asking is if you see the same errors for things
>>>>>>>>>>>> that are automatically pushed to kafka as you do when you manually push them.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> "Yes, If the messages cannot be parsed then that would be a
>>>>>>>>>>>> problem.  If you see this error with your ‘live’ messages as well then that
>>>>>>>>>>>> could be it.
>>>>>>>>>>>> I wonder if the issue is with the date format?"
>>>>>>>>>>>>
>>>>>>>>>>>> If by 'live' messages you mean the time I push them into kafka
>>>>>>>>>>>> topic then no, I dont see any error at that time. If 'live' means something
>>>>>>>>>>>> else here then please tell me what could it be.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Yes, If the messages cannot be parsed then that would be a
>>>>>>>>>>>>> problem.  If you see this error with your ‘live’ messages as well then that
>>>>>>>>>>>>> could be it.
>>>>>>>>>>>>> I wonder if the issue is with the date format?
>>>>>>>>>>>>>
>>>>>>>>>>>>> You need to confirm that you see these same errors with the
>>>>>>>>>>>>> live data or not.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>>
>>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology -> kafka ->
>>>>>>>>>>>>> Storm Enrichment Topology -> Kafka -> Storm Indexing Topology -> HDFS |
>>>>>>>>>>>>> ElasticSearch
>>>>>>>>>>>>> then
>>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>>
>>>>>>>>>>>>> Any point in this chain could fail and result in Kibana not
>>>>>>>>>>>>> seeing things.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> could this be related to why I am unable to see logs in kibana
>>>>>>>>>>>>> dashboard?
>>>>>>>>>>>>>
>>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metro
>>>>>>>>>>>>> n-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>>
>>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>>
>>>>>>>>>>>>> This is some error I am seeing in stormUI parser bolt in snort
>>>>>>>>>>>>> section:
>>>>>>>>>>>>>
>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> I guess I have hit a dead end. I am not able to get the snort
>>>>>>>>>>>>>> logs in kibana dashboard. Any help will be appreciated.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I guess this (metron.log) in /var/log/elasticsearch/ is also
>>>>>>>>>>>>>>> relevant
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> looks like some shard is unassigned and that is related to
>>>>>>>>>>>>>>>> snort. Could it be the logs I was pushing to kafka topic earlier?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> This is what I see here. What should I be looking at
>>>>>>>>>>>>>>>>> here?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> hi, I am back at work. lets see if i can find something
>>>>>>>>>>>>>>>>>> in logs
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a health of Red, so
>>>>>>>>>>>>>>>>>>> there's your problem.  I would go look in /var/log/elasticsearch/ at some
>>>>>>>>>>>>>>>>>>> logs.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> NVM, I have installed the elastic search head. Now
>>>>>>>>>>>>>>>>>>>> where do I go in this to find out why I cant see the snort logs in kibana
>>>>>>>>>>>>>>>>>>>> dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> You can install it into the chrome web browser from
>>>>>>>>>>>>>>>>>>>>> the play store.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch head on the vagrant
>>>>>>>>>>>>>>>>>>>>> VM?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Otto Fowler <ot...@gmail.com>.
What topic? What are the parameters you are calling the script with?



On November 8, 2017 at 13:12:56, Syed Hammad Tahir (mscs16059@itu.edu.pk)
wrote:

The Metron installation I have (single-node VM install) comes with
sensor stubs. I assume that everything has already been done for those stub
sensors to push the canned data. I am doing a similar thing, directly
pushing the preformatted canned data to the kafka topic. I can see the logs in
the kibana dashboard when I start the stub sensor from monit, but when I push the
same logs myself, the errors pop up that I have shown earlier.
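Casey's point further down the thread is the relevant contract here: by the time a record reaches the indexing topic it should already be a JSON map. A small sketch (not Metron code; the helper name is mine) for spotting non-JSON stragglers pulled off that topic:

```python
import json

def is_indexing_ready(raw: str) -> bool:
    """A record on the indexing topic should be a JSON map (object)."""
    try:
        return isinstance(json.loads(raw), dict)
    except ValueError:
        return False

# A parsed/enriched record looks like a JSON object...
print(is_indexing_ready('{"ip_src_addr": "192.168.66.1"}'))       # True
# ...while a raw snort CSV line that leaked through does not.
print(is_indexing_ready('01/11/17-20:49:18.107168 ,1,999158,0'))  # False
```

Running records from a kafka-console-consumer dump through a check like this shows quickly whether the indexing bolts are being fed raw CSV instead of parser output.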

On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <ce...@gmail.com> wrote:

> How did you start the snort parser topology and what's the parser config
> (in zookeeper)?
>
> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> This is what I am doing
>>
>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>> --broker-list node1:6667 --topic snort
>>
>>
>> On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <ce...@gmail.com> wrote:
>>
>>> Are you directly writing to the "indexing" kafka topic from the parser
>>> or from some other source?  It looks like there are some records in kafka
>>> that are not JSON.  By the time it gets to the indexing kafka topic, it
>>> should be a JSON map.  The parser topology emits that JSON map and then the
>>> enrichments topology enrich that map and emits the enriched map to the
>>> indexing topic.
>>>
>>> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>> > wrote:
>>>
>>>> No I am no longer seeing the parsing topology error, here is the full
>>>> stack trace
>>>>
>>>> from hdfsindexingbolt in indexing topology
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> from indexingbolt in indexing topology
>>>>
>>>> [image: Inline image 2]
>>>>
>>>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <ot...@gmail.com>
>>>> wrote:
>>>>
>>>>> What Casey said.  We need the whole stack trace.
>>>>> Also, are you saying that you are no longer seeing the parser topology
>>>>> error?
>>>>>
>>>>>
>>>>>
>>>>> On November 8, 2017 at 11:39:06, Casey Stella (cestella@gmail.com)
>>>>> wrote:
>>>>>
>>>>> If you click on the port (6704) there in those errors, what's the full
>>>>> stacktrace (that starts with the suggestion you file a JIRA)?
>>>>>
>>>>> What this means is that an exception is bleeding from the individual
>>>>> writer into the writer component (It should be handled in the writer
>>>>> itself).  The fact that it's happening for both HDFS and ES is telling as
>>>>> well and I'm very interested in the full stacktrace there because it'll
>>>>> have the wrapped exception from the individual writer included.
>>>>>
>>>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> OK I did what Zeolla said, cat snort.out | kafka producer .... and
>>>>>> now the error at storm parser topology is gone but I am now seeing this at
>>>>>> the indexing topology
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>>
>>>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> this is a single line I am trying to push
>>>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test
>>>>>>> alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00
>>>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6
>>>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>>>
>>>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> I would download the entire snort.out file and run cat snort.out |
>>>>>>>> kafka-console-producer.sh ... to make sure there are no copy paste problems
>>>>>>>>
>>>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <ot...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> The snort parser is coded to support dates in this format:
>>>>>>>>>
>>>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> If your records are in dd/MM/yy-  format, then you may see this
>>>>>>>>> error I believe.
>>>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>>>
>>>>>>>>> If this is the case, then you will need to modify the default log
>>>>>>>>> timestamp format for snort in the short term.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>>>
>>>>>>>>> Can you post what the value of the ‘timestamp’ field/column is for
>>>>>>>>> a piece of data that is failing
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>
>>>>>>>>> Now I am pretty sure that the issue is the format of the logs I am
>>>>>>>>> trying to push
>>>>>>>>>
>>>>>>>>> [image: Inline image 1]
>>>>>>>>>
>>>>>>>>> Can someone tell me the location of snort stub canned data file?
>>>>>>>>> Maybe I could see its formatting and try following the same thing.
>>>>>>>>>
>>>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> that's how I am pushing my logs to the kafka topic
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>> After running this command, I copy paste a few lines from here:
>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>
>>>>>>>>>> like this
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 2]
>>>>>>>>>>
>>>>>>>>>> I am not getting any error here. I can also see these lines
>>>>>>>>>> pushed out via a kafka consumer under the snort topic.
>>>>>>>>>>
>>>>>>>>>> This was the mechanism I am using to push the logs.
>>>>>>>>>>
>>>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> What I mean is this:
>>>>>>>>>>>
>>>>>>>>>>> I *think* you have tried both messages coming from snort through
>>>>>>>>>>> some setup ( getting pushed to kafka ), which I think of as live.  I also
>>>>>>>>>>> think you have manually pushed messages, where you see this error.
>>>>>>>>>>> So what I am asking is if you see the same errors for things
>>>>>>>>>>> that are automatically pushed to kafka as you do when you manually push them.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>
>>>>>>>>>>> "Yes, If the messages cannot be parsed then that would be a
>>>>>>>>>>> problem.  If you see this error with your ‘live’ messages as well then that
>>>>>>>>>>> could be it.
>>>>>>>>>>> I wonder if the issue is with the date format?"
>>>>>>>>>>>
>>>>>>>>>>> If by 'live' messages you mean the time I push them into kafka
>>>>>>>>>>> topic then no, I don't see any error at that time. If 'live' means something
>>>>>>>>>>> else here then please tell me what could it be.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Yes, If the messages cannot be parsed then that would be a
>>>>>>>>>>>> problem.  If you see this error with your ‘live’ messages as well then that
>>>>>>>>>>>> could be it.
>>>>>>>>>>>> I wonder if the issue is with the date format?
>>>>>>>>>>>>
>>>>>>>>>>>> You need to confirm that you see these same errors with the
>>>>>>>>>>>> live data or not.
>>>>>>>>>>>>
>>>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>>>
>>>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology -> kafka ->
>>>>>>>>>>>> Storm Enrichment Topology -> Kafka -> Storm Indexing Topology -> HDFS |
>>>>>>>>>>>> ElasticSearch
>>>>>>>>>>>> then
>>>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>>>
>>>>>>>>>>>> Any point in this chain could fail and result in Kibana not
>>>>>>>>>>>> seeing things.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> could this be related to why I am unable to see logs in kibana
>>>>>>>>>>>> dashboard?
>>>>>>>>>>>>
>>>>>>>>>>>> I am copying a few lines from here
>>>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>>>
>>>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>>>
>>>>>>>>>>>> This is an error I am seeing in the Storm UI parser bolt in the snort
>>>>>>>>>>>> section:
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> I guess I have hit a dead end. I am not able to get the snort
>>>>>>>>>>>>> logs in kibana dashboard. Any help will be appreciated.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> I guess this (metron.log) in /var/log/elasticsearch/ is also
>>>>>>>>>>>>>> relevant
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> looks like some shard is unassigned and that is related to
>>>>>>>>>>>>>>> snort. Could it be the logs I was pushing to kafka topic earlier?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> This is what I see here. What should I be looking at here?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Hi, I am back at work. Let's see if I can find something in
>>>>>>>>>>>>>>>>> the logs.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> It looks like your ES cluster has a health of Red, so
>>>>>>>>>>>>>>>>>> there's your problem.  I would go look in /var/log/elasticsearch/ at some
>>>>>>>>>>>>>>>>>> logs.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> NVM, I have installed the Elasticsearch head plugin. Now where
>>>>>>>>>>>>>>>>>>> do I go in this to find out why I can't see the snort logs in the kibana
>>>>>>>>>>>>>>>>>>> dashboard, pushed to the snort topic via the kafka producer?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> You can install it into the chrome web browser from the
>>>>>>>>>>>>>>>>>>>> play store.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> And how do I install elasticsearch head on the vagrant
>>>>>>>>>>>>>>>>>>>> VM?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>> --
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
The Metron installation I have (single-node VM install) comes with
sensor stubs. I assume that everything has already been done for those stub
sensors to push the canned data. I am doing the same thing: directly
pushing the preformatted canned data to the kafka topic. I can see the logs in
the kibana dashboard when I start the stub sensor from monit, but when I push
the same logs myself, the errors I showed earlier pop up.
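
Since the canned data works when the stub sensor pushes it, one way to narrow this down is to check that a manually pushed line still matches the canned format after copy/paste. The parser code quoted earlier expects a leading MM/dd/yy-HH:mm:ss.SSSSSS timestamp, so here is a quick sketch (the sample line is the one quoted earlier in this thread; the check is illustrative, not the parser's actual code):

```python
from datetime import datetime

# Sample line from the canned snort.out quoted earlier in this thread.
line = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP,'
        '192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,'
        '08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,'
        '64,10,23403,76,77824,,,,')

def timestamp_ok(record: str) -> bool:
    """True if the first CSV field parses in the format the snort parser expects."""
    ts = record.split(',', 1)[0].strip()
    try:
        # Python's %f (microseconds) mirrors Java's SSSSSS here.
        datetime.strptime(ts, '%m/%d/%y-%H:%M:%S.%f')
        return True
    except ValueError:
        return False

print(timestamp_ok(line))                   # well-formed canned line
print(timestamp_ok('08-Nov-17 20:49:18'))   # a reworded timestamp fails
```

If this prints False for a line you pasted, the timestamp (or stray wrapping introduced by the terminal) is the likely culprit rather than the topologies.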

On Wed, Nov 8, 2017 at 11:08 PM, Casey Stella <ce...@gmail.com> wrote:

> How did you start the snort parser topology and what's the parser config
> (in zookeeper)?
>
> On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> This is what I am doing
>>
>> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
>> --broker-list node1:6667 --topic snort

Re: Snort Logs

Posted by Casey Stella <ce...@gmail.com>.
How did you start the snort parser topology and what's the parser config
(in zookeeper)?
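
For reference, the parser config being asked about lives in zookeeper as a small JSON map. Below is a minimal sketch of roughly what the stock snort one looks like; the field names here are from memory and should be verified against your own install by dumping the config from zookeeper:

```python
import json

# Roughly the shape of the snort parser config kept in zookeeper
# (an assumption -- dump your own config to confirm the exact contents).
snort_parser_config = """
{
  "parserClassName": "org.apache.metron.parsers.snort.BasicSnortParser",
  "sensorTopic": "snort",
  "parserConfig": {}
}
"""

config = json.loads(snort_parser_config)
print(config["parserClassName"], config["sensorTopic"])
```

If `sensorTopic` does not match the topic being produced to, the parser topology never sees the messages at all.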

On Wed, Nov 8, 2017 at 1:06 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> This is what I am doing
>
> sudo cat snort.out | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh
> --broker-list node1:6667 --topic snort
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
This is what I am doing

sudo cat snort.out |
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list
node1:6667 --topic snort
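For anyone reproducing this, a quick way to sanity-check snort.out lines before piping them to the producer is to parse them locally. This is a rough sketch, not the actual Metron parser: the field layout is inferred from the sample alert line quoted in this thread and the sensor-stub snort.out, and your snort CSV output may differ.

```python
import csv
from datetime import datetime

# Timestamp format the Metron snort parser expects: MM/dd/yy-HH:mm:ss.SSSSSS
SNORT_TS_FORMAT = "%m/%d/%y-%H:%M:%S.%f"

def check_snort_line(line):
    """Parse one snort CSV alert line; raise ValueError if it is malformed."""
    row = next(csv.reader([line]))
    if len(row) < 10:
        raise ValueError("too few fields: %d" % len(row))
    # Field 0 is the timestamp (often with a trailing space), field 5 the protocol.
    ts = datetime.strptime(row[0].strip(), SNORT_TS_FORMAT)
    return ts, row[5]

# Sample line from later in this thread.
sample = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP,'
          '192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,'
          '08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,'
          ',0x1000,64,10,23403,76,77824,,,,')

ts, proto = check_snort_line(sample)
print(ts, proto)
```

Running a check like this over every line of snort.out before producing will surface truncated copy-pastes or timestamp-format problems without involving Kafka or Storm at all.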


On Wed, Nov 8, 2017 at 10:44 PM, Casey Stella <ce...@gmail.com> wrote:

> Are you directly writing to the "indexing" kafka topic from the parser or
> from some other source?  It looks like there are some records in kafka that
> are not JSON.  By the time it gets to the indexing kafka topic, it should
> be a JSON map.  The parser topology emits that JSON map and then the
> enrichments topology enrich that map and emits the enriched map to the
> indexing topic.
>
> On Wed, Nov 8, 2017 at 12:21 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> No I am no longer seeing the parsing topology error, here is the full
>> stack trace
>>
>> from hdfsindexingbolt in indexing topology
>>
>> [image: Inline image 1]
>>
>> from indexingbolt in indexing topology
>>
>> [image: Inline image 2]
>>
>> On Wed, Nov 8, 2017 at 10:08 PM, Otto Fowler <ot...@gmail.com>
>> wrote:
>>
>>> What Casey said.  We need the whole stack trace.
>>> Also, are you saying that you are no longer seeing the parser topology
>>> error?
>>>
>>>
>>>
>>> On November 8, 2017 at 11:39:06, Casey Stella (cestella@gmail.com)
>>> wrote:
>>>
>>> If you click on the port (6704) there in those errors, what's the full
>>> stacktrace (that starts with the suggestion you file a JIRA)?
>>>
>>> What this means is that an exception is bleeding from the individual
>>> writer into the writer component (It should be handled in the writer
>>> itself).  The fact that it's happening for both HDFS and ES is telling as
>>> well and I'm very interested in the full stacktrace there because it'll
>>> have the wrapped exception from the individual writer included.
>>>
>>> On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>> > wrote:
>>>
>>>> OK, I did what Zeolla said (cat snort.out | kafka producer ...) and now
>>>> the error at the storm parser topology is gone, but I am now seeing this
>>>> at the indexing topology
>>>>
>>>> [image: Inline image 1]
>>>>
>>>>
>>>> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>>> > wrote:
>>>>
>>>>> this is a single line I am trying to push
>>>>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test
>>>>> alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00
>>>>> :00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6
>>>>> ,,0x1000,64,10,23403,76,77824,,,,
>>>>>
>>>>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> I would download the entire snort.out file and run cat snort.out |
>>>>>> kafka-console-producer.sh ... to make sure there are no copy paste problems
>>>>>>
>>>>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <ot...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> The snort parser is coded to support dates in this format:
>>>>>>>
>>>>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>>>>
>>>>>>>
>>>>>>> If your records are in dd/MM/yy-  format, then you may see this
>>>>>>> error I believe.
>>>>>>> Can you verify the timestamp field’s format?
>>>>>>>
>>>>>>> If this is the case, then you will need to modify the default log
>>>>>>> timestamp format for snort in the short term.
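To make the ambiguity Otto describes concrete, here is a small illustration in Python, using strptime as a stand-in for the Java DateTimeFormatter pattern above. The sample timestamp from this thread, 01/11/17-20:49:18.107168, is accepted by the MM/dd/yy pattern but read as January 11 rather than November 1, so a day-first source produces silently wrong dates, and any day above 12 fails outright.

```python
from datetime import datetime

# Python equivalent of the parser's "MM/dd/yy-HH:mm:ss.SSSSSS" pattern.
FMT = "%m/%d/%y-%H:%M:%S.%f"

# Accepted, but interpreted month-first: January 11, 2017.
ts = datetime.strptime("01/11/17-20:49:18.107168", FMT)
print(ts)

# A day-first timestamp with day > 12 cannot be parsed at all:
try:
    datetime.strptime("13/11/17-20:49:18.107168", FMT)
except ValueError as e:
    print("rejected:", e)
```

So a record that parses without error can still carry the wrong date; checking a timestamp with a day above 12 is the quickest way to tell which convention your snort output actually uses.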
>>>>>>>
>>>>>>>
>>>>>>> On November 8, 2017 at 06:09:11, Otto Fowler (
>>>>>>> ottobackwards@gmail.com) wrote:
>>>>>>>
>>>>>>> Can you post what the value of the ‘timestamp’ field/column is for a
>>>>>>> piece of data that is failing
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>
>>>>>>> Now I am pretty sure that the issue is the format of the logs I am
>>>>>>> trying to push
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> Can someone tell me the location of the snort stub canned data file?
>>>>>>> Maybe I could see its formatting and try following the same thing.
>>>>>>>
>>>>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> That's how I am pushing my logs to the kafka topic
>>>>>>>>
>>>>>>>>
>>>>>>>> [image: Inline image 1]
>>>>>>>>
>>>>>>>> After running this command, I copy paste a few lines from here:
>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metro
>>>>>>>> n-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>
>>>>>>>> like this
>>>>>>>>
>>>>>>>>
>>>>>>>> [image: Inline image 2]
>>>>>>>>
>>>>>>>> I am not getting any error here. I can also see these lines pushed
>>>>>>>> out via kafka consumer under topic of snort.
>>>>>>>>
>>>>>>>> This was the mechanism I am using to push the logs.
>>>>>>>>
>>>>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <
>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> What I mean is this:
>>>>>>>>>
>>>>>>>>> I *think* you have tried both messages coming from snort through
>>>>>>>>> some setup ( getting pushed to kafka ), which I think of as live.  I also
>>>>>>>>> think you have manually pushed messages, where you see this error.
>>>>>>>>> So what I am asking is if you see the same errors for things that
>>>>>>>>> are automatically pushed to kafka as you do when you manually push them.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>
>>>>>>>>> "Yes, If the messages cannot be parsed then that would be a
>>>>>>>>> problem.  If you see this error with your ‘live’ messages as well then that
>>>>>>>>> could be it.
>>>>>>>>> I wonder if the issue is with the date format?"
>>>>>>>>>
>>>>>>>>> If by 'live' messages you mean the time I push them into the kafka
>>>>>>>>> topic, then no, I don't see any error at that time. If 'live' means
>>>>>>>>> something else here, then please tell me what it could be.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <
>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Yes, If the messages cannot be parsed then that would be a
>>>>>>>>>> problem.  If you see this error with your ‘live’ messages as well then that
>>>>>>>>>> could be it.
>>>>>>>>>> I wonder if the issue is with the date format?
>>>>>>>>>>
>>>>>>>>>> You need to confirm that you see these same errors with the live
>>>>>>>>>> data or not.
>>>>>>>>>>
>>>>>>>>>> Remember, the flow is like this
>>>>>>>>>>
>>>>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology -> kafka -> Storm
>>>>>>>>>> Enrichment Topology -> Kafka -> Storm Indexing Topology -> HDFS |
>>>>>>>>>> ElasticSearch
>>>>>>>>>> then
>>>>>>>>>> Kibana <-> Elastic Search
>>>>>>>>>>
>>>>>>>>>> Any point in this chain could fail and result in Kibana not
>>>>>>>>>> seeing things.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>
>>>>>>>>>> could this be related to why I am unable to see logs in kibana
>>>>>>>>>> dashboard?
>>>>>>>>>>
>>>>>>>>>> I am copying a few lines from here https://raw.githubusercontent.
>>>>>>>>>> com/apache/metron/master/metron-deployment/roles/sensor-stub
>>>>>>>>>> s/files/snort.out
>>>>>>>>>>
>>>>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>>>>
>>>>>>>>>> This is some error I am seeing in the Storm UI parser bolt in the snort
>>>>>>>>>> section:
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> I guess I have hit a dead end. I am not able to get the snort
>>>>>>>>>>> logs in kibana dashboard. Any help will be appreciated.
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> I guess this (metron.log) in /var/log/elasticsearch/ is also
>>>>>>>>>>>> relevant
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>>>>
>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>
>>>>>>>>>>>>> looks like some shard is unassigned and that is related to
>>>>>>>>>>>>> snort. Could it be the logs I was pushing to kafka topic earlier?
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> This is what I see here. What should I be looking at here?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi, I am back at work. Let's see if I can find something in the
>>>>>>>>>>>>>>> logs
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <
>>>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> It looks like your ES cluster has a health of Red, so
>>>>>>>>>>>>>>>> there's your problem.  I would go look in /var/log/elasticsearch/ at some
>>>>>>>>>>>>>>>> logs.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> NVM, I have installed the elastic search head. Now where
>>>>>>>>>>>>>>>>> do I go in this to find out why I cant see the snort logs in kibana
>>>>>>>>>>>>>>>>> dashboard, pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> You can install it into the chrome web browser from the
>>>>>>>>>>>>>>>>>> play store.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> And how do I install elasticsearch head on the vagrant VM?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>> --
>>>>>>
>>>>>> Jon
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Casey Stella <ce...@gmail.com>.
Are you directly writing to the "indexing" kafka topic from the parser or
from some other source?  It looks like there are some records in kafka that
are not JSON.  By the time a record gets to the indexing kafka topic, it
should be a JSON map.  The parser topology emits that JSON map, the
enrichments topology enriches that map, and the enriched map is emitted to
the indexing topic.
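As a rough illustration of the difference (the field names below are hypothetical, not the exact Metron schema): the raw CSV line that goes into the snort topic is not valid JSON, while the message the parser topology emits onto the downstream topics is a JSON map, so attempting a JSON parse is a quick way to tell which kind of record is sitting in a topic.

```python
import json

# A raw snort CSV alert line, as pushed to the parser's input topic.
raw_csv = '01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP,...'

# Roughly what a parsed record looks like (illustrative field names only).
parsed_map = '{"timestamp": 1484167758107, "protocol": "TCP", "ip_src_addr": "192.168.66.1"}'

def is_json_map(record):
    """Return True if the record is a JSON object, as the indexing topic expects."""
    try:
        return isinstance(json.loads(record), dict)
    except ValueError:
        return False

print(is_json_map(raw_csv))     # False: raw snort CSV
print(is_json_map(parsed_map))  # True: parser-topology-style output
```

Consuming a few records from the indexing topic and running them through a check like this would confirm whether non-JSON records are what the indexing bolts are choking on.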


Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
No, I am no longer seeing the parser topology error. Here is the full stack
trace

from hdfsindexingbolt in indexing topology

[image: Inline image 1]

from indexingbolt in indexing topology

[image: Inline image 2]


Re: Snort Logs

Posted by Otto Fowler <ot...@gmail.com>.
What Casey said.  We need the whole stack trace.
Also, are you saying that you are no longer seeing the parser topology
error?



On November 8, 2017 at 11:39:06, Casey Stella (cestella@gmail.com) wrote:

If you click on the port (6704) there in those errors, what's the full
stacktrace (that starts with the suggestion you file a JIRA)?

What this means is that an exception is bleeding from the individual writer
into the writer component (It should be handled in the writer itself).  The
fact that it's happening for both HDFS and ES is telling as well and I'm
very interested in the full stacktrace there because it'll have the wrapped
exception from the individual writer included.

On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> OK I did what Zeolla said, cat snort.out | kafka producer .... and now the
> error at storm parser topology is gone but I am now seeing this at the
> indexing topology
>
> [image: Inline image 1]
>
>
> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> this is a single line I am trying to push
>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test
>> alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:
>> 00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900B
>> B6,,0x1000,64,10,23403,76,77824,,,,
>>
>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <ze...@gmail.com>
>> wrote:
>>
>>> I would download the entire snort.out file and run cat snort.out |
>>> kafka-console-producer.sh ... to make sure there are no copy paste problems
>>>
>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <ot...@gmail.com> wrote:
>>>
>>>> The snort parser is coded to support dates in this format:
>>>>
>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>
>>>>
>>>> If your records are in dd/MM/yy-  format, then you may see this error I
>>>> believe.
>>>> Can you verify the timestamp field’s format?
>>>>
>>>> If this is the case, then you will need to modify the default log
>>>> timestamp format for snort in the short term.
>>>>
>>>>
>>>> On November 8, 2017 at 06:09:11, Otto Fowler (ottobackwards@gmail.com)
>>>> wrote:
>>>>
>>>> Can you post what the value of the ‘timestamp’ field/column is for a
>>>> piece of data that is failing
>>>>
>>>>
>>>>
>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>> mscs16059@itu.edu.pk) wrote:
>>>>
>>>> Now I am pretty sure that the issue is the format of the logs I am
>>>> trying to push
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> Can someone tell me the location of snort stub canned data file? Maybe
>>>> I could see its formatting and try following the same thing.
>>>>
>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> thats how I am pushing my logs to kafka topic
>>>>>
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> After running this command, I copy paste a few lines from here:
>>>>> https://raw.githubusercontent.com/apache/metron/master/metro
>>>>> n-deployment/roles/sensor-stubs/files/snort.out
>>>>>
>>>>> like this
>>>>>
>>>>>
>>>>> [image: Inline image 2]
>>>>>
>>>>> I am not getting any error here. I can also see these lines pushed out
>>>>> via kafka consumer under topic of snort.
>>>>>
>>>>> This was the mechanism I am using to push the logs.
>>>>>
>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <ot...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> What I mean is this:
>>>>>>
>>>>>> I *think* you have tried both messages coming from snort through some
>>>>>> setup ( getting pushed to kafka ), which I think of as live.  I also think
>>>>>> you have manually pushed messages, where you see this error.
>>>>>> So what I am asking is if you see the same errors for things that are
>>>>>> automatically pushed to kafka as you do when you manually push them.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>
>>>>>> "Yes, If the messages cannot be parsed then that would be a problem.
>>>>>> If you see this error with your ‘live’ messages as well then that could be
>>>>>> it.
>>>>>> I wonder if the issue is with the date format?"
>>>>>>
>>>>>> If by 'live' messages you mean the time I push them into kafka topic
>>>>>> then no, I dont see any error at that time. If 'live' means something else
>>>>>> here then please tell me what could it be.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <ot...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Yes, If the messages cannot be parsed then that would be a problem.
>>>>>>> If you see this error with your ‘live’ messages as well then that could be
>>>>>>> it.
>>>>>>> I wonder if the issue is with the date format?
>>>>>>>
>>>>>>> You need to confirm that you see these same errors with the live
>>>>>>> data or not.
>>>>>>>
>>>>>>> Remember, the flow is like this
>>>>>>>
>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology -> kafka -> Storm
>>>>>>> Enrichment Topology -> Kafka -> Storm Indexing Topology -> HDFS |
>>>>>>> ElasticSearch
>>>>>>> then
>>>>>>> Kibana <-> Elastic Search
>>>>>>>
>>>>>>> Any point in this chain could fail and result in Kibana not seeing
>>>>>>> things.
>>>>>>>
>>>>>>>
>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>
>>>>>>> could this be related to why I am unable to see logs in kibana
>>>>>>> dashboard?
>>>>>>>
>>>>>>> I am copying a few lines from here https://raw.githubusercontent.
>>>>>>> com/apache/metron/master/metron-deployment/roles/sensor-stub
>>>>>>> s/files/snort.out
>>>>>>>
>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>
>>>>>>> This is some error I am seeing in stormUI parser bolt in snort
>>>>>>> section:
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> I guess I have hit a dead end. I am not able to get the snort logs
>>>>>>>> in kibana dashboard. Any help will be appreciated.
>>>>>>>>
>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> I guess this (metron.log) in /var/log/elasticsearch/ is also
>>>>>>>>> relevant
>>>>>>>>>
>>>>>>>>> [image: Inline image 1]
>>>>>>>>>
>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>> looks like some shard is unassigned and that is related to snort.
>>>>>>>>>> Could it be the logs I was pushing to kafka topic earlier?
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> This is what I see here. What should I be looking at here?
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> hi, I am back at work. lets see if i can find something in logs
>>>>>>>>>>>>
>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <
>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> It looks like your ES cluster has a health of Red, so there's
>>>>>>>>>>>>> your problem.  I would go look in /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> NVM, I have installed the elastic search head. Now where do I
>>>>>>>>>>>>>> go in this to find out why I cant see the snort logs in kibana dashboard,
>>>>>>>>>>>>>> pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> You can install it into the chrome web browser from the play
>>>>>>>>>>>>>>> store.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> And how do I install elasticsearch head on the vagrant VM?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>> --
>>>
>>> Jon
>>>
>>
>>
>

Re: Snort Logs

Posted by Casey Stella <ce...@gmail.com>.
If you click on the port (6704) there in those errors, what's the full
stacktrace (that starts with the suggestion you file a JIRA)?

What this means is that an exception is bleeding from the individual writer
into the writer component (It should be handled in the writer itself).  The
fact that it's happening for both HDFS and ES is telling as well and I'm
very interested in the full stacktrace there because it'll have the wrapped
exception from the individual writer included.

On Wed, Nov 8, 2017 at 11:24 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> OK I did what Zeolla said, cat snort.out | kafka producer .... and now the
> error at storm parser topology is gone but I am now seeing this at the
> indexing topology
>
> [image: Inline image 1]
>
>
> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> this is a single line I am trying to push
>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test
>> alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:
>> 00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900B
>> B6,,0x1000,64,10,23403,76,77824,,,,
>>
>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <ze...@gmail.com>
>> wrote:
>>
>>> I would download the entire snort.out file and run cat snort.out |
>>> kafka-console-producer.sh ... to make sure there are no copy paste problems
>>>
>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <ot...@gmail.com> wrote:
>>>
>>>> The snort parser is coded to support dates in this format:
>>>>
>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>
>>>>
>>>> If your records are in dd/MM/yy-  format, then you may see this error I
>>>> believe.
>>>> Can you verify the timestamp field’s format?
>>>>
>>>> If this is the case, then you will need to modify the default log
>>>> timestamp format for snort in the short term.
>>>>
>>>>
>>>> On November 8, 2017 at 06:09:11, Otto Fowler (ottobackwards@gmail.com)
>>>> wrote:
>>>>
>>>> Can you post what the value of the ‘timestamp’ field/column is for a
>>>> piece of data that is failing
>>>>
>>>>
>>>>
>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>> mscs16059@itu.edu.pk) wrote:
>>>>
>>>> Now I am pretty sure that the issue is the format of the logs I am
>>>> trying to push
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> Can someone tell me the location of snort stub canned data file? Maybe
>>>> I could see its formatting and try following the same thing.
>>>>
>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> thats how I am pushing my logs to kafka topic
>>>>>
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> After running this command, I copy paste a few lines from here:
>>>>> https://raw.githubusercontent.com/apache/metron/master/metro
>>>>> n-deployment/roles/sensor-stubs/files/snort.out
>>>>>
>>>>> like this
>>>>>
>>>>>
>>>>> [image: Inline image 2]
>>>>>
>>>>> I am not getting any error here. I can also see these lines pushed out
>>>>> via kafka consumer under topic of snort.
>>>>>
>>>>> This was the mechanism I am using to push the logs.
>>>>>
>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <ot...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> What I mean is this:
>>>>>>
>>>>>> I *think* you have tried both messages coming from snort through some
>>>>>> setup ( getting pushed to kafka ), which I think of as live.  I also think
>>>>>> you have manually pushed messages, where you see this error.
>>>>>> So what I am asking is if you see the same errors for things that are
>>>>>> automatically pushed to kafka as you do when you manually push them.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>
>>>>>> "Yes, If the messages cannot be parsed then that would be a problem.
>>>>>> If you see this error with your ‘live’ messages as well then that could be
>>>>>> it.
>>>>>> I wonder if the issue is with the date format?"
>>>>>>
>>>>>> If by 'live' messages you mean the time I push them into kafka topic
>>>>>> then no, I dont see any error at that time. If 'live' means something else
>>>>>> here then please tell me what could it be.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <ot...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Yes, If the messages cannot be parsed then that would be a problem.
>>>>>>> If you see this error with your ‘live’ messages as well then that could be
>>>>>>> it.
>>>>>>> I wonder if the issue is with the date format?
>>>>>>>
>>>>>>> You need to confirm that you see these same errors with the live
>>>>>>> data or not.
>>>>>>>
>>>>>>> Remember, the flow is like this
>>>>>>>
>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology -> kafka -> Storm
>>>>>>> Enrichment Topology -> Kafka -> Storm Indexing Topology -> HDFS |
>>>>>>> ElasticSearch
>>>>>>> then
>>>>>>> Kibana <-> Elastic Search
>>>>>>>
>>>>>>> Any point in this chain could fail and result in Kibana not seeing
>>>>>>> things.
>>>>>>>
>>>>>>>
>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>
>>>>>>> could this be related to why I am unable to see logs in kibana
>>>>>>> dashboard?
>>>>>>>
>>>>>>> I am copying a few lines from here https://raw.githubusercontent.
>>>>>>> com/apache/metron/master/metron-deployment/roles/sensor-stub
>>>>>>> s/files/snort.out
>>>>>>>
>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>
>>>>>>> This is some error I am seeing in stormUI parser bolt in snort
>>>>>>> section:
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> I guess I have hit a dead end. I am not able to get the snort logs
>>>>>>>> in kibana dashboard. Any help will be appreciated.
>>>>>>>>
>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> I guess this (metron.log) in /var/log/elasticsearch/ is also
>>>>>>>>> relevant
>>>>>>>>>
>>>>>>>>> [image: Inline image 1]
>>>>>>>>>
>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>> looks like some shard is unassigned and that is related to snort.
>>>>>>>>>> Could it be the logs I was pushing to kafka topic earlier?
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> This is what I see here. What should I be looking at here?
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> hi, I am back at work. lets see if i can find something in logs
>>>>>>>>>>>>
>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <
>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> It looks like your ES cluster has a health of Red, so there's
>>>>>>>>>>>>> your problem.  I would go look in /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> NVM, I have installed the elastic search head. Now where do I
>>>>>>>>>>>>>> go in this to find out why I cant see the snort logs in kibana dashboard,
>>>>>>>>>>>>>> pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> You can install it into the chrome web browser from the play
>>>>>>>>>>>>>>> store.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> And how do I install elasticsearch head on the vagrant VM?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>> --
>>>
>>> Jon
>>>
>>
>>
>

Re: Snort Logs

Posted by "Zeolla@GMail.com" <ze...@gmail.com>.
Now that's an interesting error message.  I would defer to someone else on
immediate next steps.

Jon

On Wed, Nov 8, 2017 at 11:24 AM Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> OK I did what Zeolla said, cat snort.out | kafka producer .... and now the
> error at storm parser topology is gone but I am now seeing this at the
> indexing topology
>
> [image: Inline image 1]
>
>
> On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> this is a single line I am trying to push
>> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test
>> alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>>
>> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <ze...@gmail.com>
>> wrote:
>>
>>> I would download the entire snort.out file and run cat snort.out |
>>> kafka-console-producer.sh ... to make sure there are no copy paste problems
>>>
>>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <ot...@gmail.com> wrote:
>>>
>>>> The snort parser is coded to support dates in this format:
>>>>
>>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>>> private transient DateTimeFormatter dateTimeFormatter;
>>>>
>>>>
>>>> If your records are in dd/MM/yy-  format, then you may see this error I
>>>> believe.
>>>> Can you verify the timestamp field’s format?
>>>>
>>>> If this is the case, then you will need to modify the default log
>>>> timestamp format for snort in the short term.
>>>>
>>>>
>>>> On November 8, 2017 at 06:09:11, Otto Fowler (ottobackwards@gmail.com)
>>>> wrote:
>>>>
>>>> Can you post what the value of the ‘timestamp’ field/column is for a
>>>> piece of data that is failing
>>>>
>>>>
>>>>
>>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (
>>>> mscs16059@itu.edu.pk) wrote:
>>>>
>>>> Now I am pretty sure that the issue is the format of the logs I am
>>>> trying to push
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> Can someone tell me the location of snort stub canned data file? Maybe
>>>> I could see its formatting and try following the same thing.
>>>>
>>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> thats how I am pushing my logs to kafka topic
>>>>>
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> After running this command, I copy paste a few lines from here:
>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>
>>>>> like this
>>>>>
>>>>>
>>>>> [image: Inline image 2]
>>>>>
>>>>> I am not getting any error here. I can also see these lines pushed out
>>>>> via kafka consumer under topic of snort.
>>>>>
>>>>> This was the mechanism I am using to push the logs.
>>>>>
>>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <ot...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> What I mean is this:
>>>>>>
>>>>>> I *think* you have tried both messages coming from snort through some
>>>>>> setup ( getting pushed to kafka ), which I think of as live.  I also think
>>>>>> you have manually pushed messages, where you see this error.
>>>>>> So what I am asking is if you see the same errors for things that are
>>>>>> automatically pushed to kafka as you do when you manually push them.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>
>>>>>> "Yes, If the messages cannot be parsed then that would be a problem.
>>>>>> If you see this error with your ‘live’ messages as well then that could be
>>>>>> it.
>>>>>> I wonder if the issue is with the date format?"
>>>>>>
>>>>>> If by 'live' messages you mean the time I push them into kafka topic
>>>>>> then no, I dont see any error at that time. If 'live' means something else
>>>>>> here then please tell me what could it be.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <ot...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Yes, If the messages cannot be parsed then that would be a problem.
>>>>>>> If you see this error with your ‘live’ messages as well then that could be
>>>>>>> it.
>>>>>>> I wonder if the issue is with the date format?
>>>>>>>
>>>>>>> You need to confirm that you see these same errors with the live
>>>>>>> data or not.
>>>>>>>
>>>>>>> Remember, the flow is like this
>>>>>>>
>>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology -> kafka -> Storm
>>>>>>> Enrichment Topology -> Kafka -> Storm Indexing Topology -> HDFS |
>>>>>>> ElasticSearch
>>>>>>> then
>>>>>>> Kibana <-> Elastic Search
>>>>>>>
>>>>>>> Any point in this chain could fail and result in Kibana not seeing
>>>>>>> things.
>>>>>>>
>>>>>>>
>>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>
>>>>>>> could this be related to why I am unable to see logs in kibana
>>>>>>> dashboard?
>>>>>>>
>>>>>>> I am copying a few lines from here
>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>
>>>>>>> and then pushing them to snort kafka topic.
>>>>>>>
>>>>>>> This is some error I am seeing in stormUI parser bolt in snort
>>>>>>> section:
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> I guess I have hit a dead end. I am not able to get the snort logs
>>>>>>>> in kibana dashboard. Any help will be appreciated.
>>>>>>>>
>>>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> I guess this (metron.log) in /var/log/elasticsearch/ is also
>>>>>>>>> relevant
>>>>>>>>>
>>>>>>>>> [image: Inline image 1]
>>>>>>>>>
>>>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> Cluster health by index shows this:
>>>>>>>>>>
>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>> looks like some shard is unassigned and that is related to snort.
>>>>>>>>>> Could it be the logs I was pushing to kafka topic earlier?
>>>>>>>>>>
>>>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> This is what I see here. What should I be looking at here?
>>>>>>>>>>>
>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> hi, I am back at work. lets see if i can find something in logs
>>>>>>>>>>>>
>>>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <
>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> It looks like your ES cluster has a health of Red, so there's
>>>>>>>>>>>>> your problem.  I would go look in /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> NVM, I have installed the elastic search head. Now where do I
>>>>>>>>>>>>>> go in this to find out why I cant see the snort logs in kibana dashboard,
>>>>>>>>>>>>>> pushed to snort topic via kafka producer?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> You can install it into the chrome web browser from the play
>>>>>>>>>>>>>>> store.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> And how do I install elasticsearch head on the vagrant VM?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>> --
>>>
>>> Jon
>>>
>>
>>
> --

Jon

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
OK, I did what Zeolla said (cat snort.out | kafka producer ...) and now the
error at the storm parser topology is gone, but I am now seeing this at the
indexing topology

[image: Inline image 1]
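Since copy/paste mangling was the suspected culprit here, a quick pre-flight check of snort.out before piping it to the console producer can confirm that every line still starts with the timestamp the parser expects. A minimal Python sketch (the regex and the sample fragment below are illustrative, not taken from the actual file):

```python
import re

# Sketch: pre-flight check before piping snort.out into kafka-console-producer.
# Assumes the canned snort.out format shown in this thread (comma-separated,
# leading MM/dd/yy-HH:mm:ss.SSSSSS timestamp). Flags lines mangled by
# copy/paste (wrapped or truncated) so only well-formed records reach Kafka.
TS = re.compile(r'^\d{2}/\d{2}/\d{2}-\d{2}:\d{2}:\d{2}\.\d{6} ?,')

def bad_lines(lines):
    """Return (line_number, line) pairs that do not start with a snort timestamp."""
    return [(n, l) for n, l in enumerate(lines, 1)
            if l.strip() and not TS.match(l)]

# Example: a wrapped fragment (no timestamp prefix) is flagged, a full record is not.
sample = [
    '01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP,...',
    '49581,192.168.66.121,22,...',  # fragment from a wrapped paste
]
print(bad_lines(sample))  # [(2, '49581,192.168.66.121,22,...')]
```

Running this over the downloaded file before producing it would distinguish a formatting problem from a topology problem.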


On Wed, Nov 8, 2017 at 8:25 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> this is a single line I am trying to push
> 01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,
> 49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,
> 0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
>
> On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <ze...@gmail.com> wrote:
>
>> I would download the entire snort.out file and run cat snort.out |
>> kafka-console-producer.sh ... to make sure there are no copy paste problems
>>
>> On Wed, Nov 8, 2017, 06:59 Otto Fowler <ot...@gmail.com> wrote:
>>
>>> The snort parser is coded to support dates in this format:
>>>
>>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>>> private transient DateTimeFormatter dateTimeFormatter;
>>>
>>>
>>> If your records are in dd/MM/yy-  format, then you may see this error I
>>> believe.
>>> Can you verify the timestamp field’s format?
>>>
>>> If this is the case, then you will need to modify the default log
>>> timestamp format for snort in the short term.
>>>
>>>
>>> On November 8, 2017 at 06:09:11, Otto Fowler (ottobackwards@gmail.com)
>>> wrote:
>>>
>>> Can you post what the value of the ‘timestamp’ field/column is for a
>>> piece of data that is failing
>>>
>>>
>>>
>>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (mscs16059@itu.edu.pk)
>>> wrote:
>>>
>>> Now I am pretty sure that the issue is the format of the logs I am
>>> trying to push
>>>
>>> [image: Inline image 1]
>>>
>>> Can someone tell me the location of snort stub canned data file? Maybe I
>>> could see its formatting and try following the same thing.
>>>
>>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>> > wrote:
>>>
>>>> thats how I am pushing my logs to kafka topic
>>>>
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> After running this command, I copy paste a few lines from here:
>>>> https://raw.githubusercontent.com/apache/metron/master/metro
>>>> n-deployment/roles/sensor-stubs/files/snort.out
>>>>
>>>> like this
>>>>
>>>>
>>>> [image: Inline image 2]
>>>>
>>>> I am not getting any error here. I can also see these lines pushed out
>>>> via kafka consumer under topic of snort.
>>>>
>>>> This was the mechanism I am using to push the logs.
>>>>
>>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <ot...@gmail.com>
>>>> wrote:
>>>>
>>>>> What I mean is this:
>>>>>
>>>>> I *think* you have tried both messages coming from snort through some
>>>>> setup ( getting pushed to kafka ), which I think of as live.  I also think
>>>>> you have manually pushed messages, where you see this error.
>>>>> So what I am asking is if you see the same errors for things that are
>>>>> automatically pushed to kafka as you do when you manual push them.
>>>>>
>>>>>
>>>>>
>>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>
>>>>> "Yes, If the messages cannot be parsed then that would be a problem.
>>>>> If you see this error with your ‘live’ messages as well then that could be
>>>>> it.
>>>>> I wonder if the issue is with the date format?"
>>>>>
>>>>> If by 'live' messages you mean the time I push them into kafka topic
>>>>> then no, I dont see any error at that time. If 'live' means something else
>>>>> here then please tell me what could it be.
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <ot...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Yes, If the messages cannot be parsed then that would be a problem.
>>>>>> If you see this error with your ‘live’ messages as well then that could be
>>>>>> it.
>>>>>> I wonder if the issue is with the date format?
>>>>>>
>>>>>> You need to confirm that you see these same errors with the live data
>>>>>> or not.
>>>>>>
>>>>>> Remember, the flow is like this
>>>>>>
>>>>>> snort -> ??? -> Kafka -> Storm Parser Topology -> kafka -> Storm
>>>>>> Enrichment Topology -> Kafka -> Storm Indexing Topology -> HDFS |
>>>>>> ElasticSearch
>>>>>> then
>>>>>> Kibana <-> Elastic Search
>>>>>>
>>>>>> Any point in this chain could fail and result in Kibana not seeing
>>>>>> things.
>>>>>>
>>>>>>
>>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>
>>>>>> could this be related to why I am unable to see logs in kibana
>>>>>> dashboard?
>>>>>>
>>>>>> I am copying a few lines from here https://raw.githubusercontent.
>>>>>> com/apache/metron/master/metron-deployment/roles/sensor-
>>>>>> stubs/files/snort.out
>>>>>>
>>>>>> and then pushing them to snort kafka topic.
>>>>>>
>>>>>> This is some error I am seeing in stormUI parser bolt in snort

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
This is the single line I am trying to push:

01/11/17-20:49:18.107168 ,1,999158,0,"'snort test alert'",TCP,192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,64,10,23403,76,77824,,,,
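As a quick sanity check before pushing a line like that, you can verify that its leading timestamp parses and count its fields. This is a minimal Python sketch, not part of Metron; it assumes the CSV alert format of the sample snort.out linked earlier, with a leading MM/dd/yy-HH:mm:ss.SSSSSS timestamp:

```python
from datetime import datetime

def check_snort_line(line):
    """Parse the leading timestamp and return the comma-separated
    field count; raises ValueError if the timestamp is malformed."""
    fields = line.split(",")
    ts = fields[0].strip()
    # Same pattern the Metron snort parser expects: MM/dd/yy-HH:mm:ss.SSSSSS
    datetime.strptime(ts, "%m/%d/%y-%H:%M:%S.%f")
    return len(fields)

line = ('01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP,'
        '192.168.66.1,49581,192.168.66.121,22,0A:00:27:00:00:00,'
        '08:00:27:E8:B0:7A,0x5A,***AP***,0x1E396BFC,0x56900BB6,,0x1000,'
        '64,10,23403,76,77824,,,,')
print(check_snort_line(line))  # field count of the sample line
```

If the timestamp does not match the pattern, strptime raises immediately, which is a cheaper way to catch a bad line than waiting for the Storm parser bolt to reject it.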

On Wed, Nov 8, 2017 at 5:30 PM, Zeolla@GMail.com <ze...@gmail.com> wrote:

> I would download the entire snort.out file and run cat snort.out |
> kafka-console-producer.sh ... to make sure there are no copy paste problems
>
> On Wed, Nov 8, 2017, 06:59 Otto Fowler <ot...@gmail.com> wrote:
>
>> The snort parser is coded to support dates in this format:
>>
>> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
>> private transient DateTimeFormatter dateTimeFormatter;
>>
>>
>> If your records are in dd/MM/yy-  format, then you may see this error I
>> believe.
>> Can you verify the timestamp field’s format?
>>
>> If this is the case, then you will need to modify the default log
>> timestamp format for snort in the short term.
>>
>>
>> On November 8, 2017 at 06:09:11, Otto Fowler (ottobackwards@gmail.com)
>> wrote:
>>
>> Can you post what the value of the ‘timestamp’ field/column is for a
>> piece of data that is failing
>>
>>
>>
>> On November 8, 2017 at 03:55:47, Syed Hammad Tahir (mscs16059@itu.edu.pk)
>> wrote:
>>
>> Now I am pretty sure that the issue is the format of the logs I am trying
>> to push
>>
>> [image: Inline image 1]
>>
>> Can someone tell me the location of snort stub canned data file? Maybe I
>> could see its formatting and try following the same thing.
>>
>> On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> thats how I am pushing my logs to kafka topic
>>>
>>>
>>> [image: Inline image 1]
>>>
>>> After running this command, I copy paste a few lines from here:
>>> https://raw.githubusercontent.com/apache/metron/master/
>>> metron-deployment/roles/sensor-stubs/files/snort.out
>>>
>>> like this
>>>
>>>
>>> [image: Inline image 2]
>>>
>>> I am not getting any error here. I can also see these lines pushed out
>>> via kafka consumer under topic of snort.
>>>
>>> This was the mechanism I am using to push the logs.
>>>
>>> On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <ot...@gmail.com>
>>> wrote:
>>>
>>>> What I mean is this:
>>>>
>>>> I *think* you have tried both messages coming from snort through some
>>>> setup ( getting pushed to kafka ), which I think of as live.  I also think
>>>> you have manually pushed messages, where you see this error.
>>>> So what I am asking is if you see the same errors for things that are
>>>> automatically pushed to kafka as you do when you manual push them.
>>>>
>>>>
>>>>
>>>> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (
>>>> mscs16059@itu.edu.pk) wrote:
>>>>
>>>> "Yes, If the messages cannot be parsed then that would be a problem.
>>>> If you see this error with your ‘live’ messages as well then that could be
>>>> it.
>>>> I wonder if the issue is with the date format?"
>>>>
>>>> If by 'live' messages you mean the time I push them into kafka topic
>>>> then no, I dont see any error at that time. If 'live' means something else
>>>> here then please tell me what could it be.
>>>>
>>>>
>>>>
>>>> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <ot...@gmail.com>
>>>> wrote:
>>>>
>>>>> Yes, If the messages cannot be parsed then that would be a problem.
>>>>> If you see this error with your ‘live’ messages as well then that could be
>>>>> it.
>>>>> I wonder if the issue is with the date format?
>>>>>
>>>>> You need to confirm that you see these same errors with the live data
>>>>> or not.
>>>>>
>>>>> Remember, the flow is like this
>>>>>
>>>>> snort -> ??? -> Kafka -> Storm Parser Topology -> kafka -> Storm
>>>>> Enrichment Topology -> Kafka -> Storm Indexing Topology -> HDFS |
>>>>> ElasticSearch
>>>>> then
>>>>> Kibana <-> Elastic Search
>>>>>
>>>>> Any point in this chain could fail and result in Kibana not seeing
>>>>> things.
>>>>>
>>>>>
>>>>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (
>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>
>>>>> could this be related to why I am unable to see logs in kibana
>>>>> dashboard?
>>>>>
>>>>> I am copying a few lines from here https://raw.githubusercontent.
>>>>> com/apache/metron/master/metron-deployment/roles/
>>>>> sensor-stubs/files/snort.out
>>>>>
>>>>> and then pushing them to snort kafka topic.
>>>>>
>>>>> THis is some error I am seeing in stormUI parser bolt in snort section:
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> I guess I have hit a dead end. I am not able to get the snort logs in
>>>>>> kibana dashboard. Any help will be appreciated.
>>>>>>
>>>>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> I guess this (metron.log) in /var/log/elasticsearch/ is also relevant
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>> Cluster health by index shows this:
>>>>>>>>
>>>>>>>> [image: Inline image 1]
>>>>>>>>
>>>>>>>> looks like some shard is unassigned and that is related to snort.
>>>>>>>> Could it be the logs I was pushing to kafka topic earlier?
>>>>>>>>
>>>>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>> This is what I see here. What should I be looking at here?
>>>>>>>>>
>>>>>>>>> [image: Inline image 1]
>>>>>>>>>
>>>>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> hi, I am back at work. lets see if i can find something in logs
>>>>>>>>>>
>>>>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <
>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> It looks like your ES cluster has a health of Red, so there's
>>>>>>>>>>> your problem.  I would go look in /var/log/elasticsearch/ at some logs.
>>>>>>>>>>>
>>>>>>>>>>> Jon
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> NVM, I have installed the elastic search head. Now where do I
>>>>>>>>>>>> go in this to find out why I cant see the snort logs in kibana dashboard,
>>>>>>>>>>>> pushed to snort topic via kafka producer?
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> You can install it into the chrome web browser from the play
>>>>>>>>>>>>> store.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> And how do I install elasticsearch head on the vagrant VM?
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>
>>>>>>>>>>> Jon
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>> --
>
> Jon
>

Re: Snort Logs

Posted by "Zeolla@GMail.com" <ze...@gmail.com>.
I would download the entire snort.out file and run cat snort.out |
kafka-console-producer.sh ... to make sure there are no copy-paste problems.
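If some pasted lines did get mangled, a small pre-filter could drop them before they ever reach the producer. This is a Python sketch, not part of Metron or Kafka, and it assumes the CSV alert format with a leading MM/dd/yy-HH:mm:ss.SSSSSS timestamp:

```python
from datetime import datetime

def clean_snort_lines(text):
    """Keep only lines whose first comma-separated field parses as a
    MM/dd/yy-HH:mm:ss.SSSSSS timestamp; blank or mangled lines are dropped."""
    good = []
    for line in text.splitlines():
        ts = line.split(",", 1)[0].strip()
        try:
            datetime.strptime(ts, "%m/%d/%y-%H:%M:%S.%f")
        except ValueError:
            continue  # skip copy-paste debris
        good.append(line)
    return good

sample = (
    '01/11/17-20:49:18.107168 ,1,999158,0,"\'snort test alert\'",TCP,1.2.3.4,1,5.6.7.8,2\n'
    'this line was mangled by copy-paste\n'
)
print(len(clean_snort_lines(sample)))  # 1
```

You could write the surviving lines back to a file and pipe that into kafka-console-producer.sh, which gets the same effect as Jon's suggestion while discarding anything a paste corrupted.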

On Wed, Nov 8, 2017, 06:59 Otto Fowler <ot...@gmail.com> wrote:

> The snort parser is coded to support dates in this format:
>
> private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
> private transient DateTimeFormatter dateTimeFormatter;
>
>
> If your records are in dd/MM/yy-  format, then you may see this error I
> believe.
> Can you verify the timestamp field’s format?
>
> If this is the case, then you will need to modify the default log
> timestamp format for snort in the short term.
>

--

Jon

Re: Snort Logs

Posted by Otto Fowler <ot...@gmail.com>.
The snort parser is coded to support dates in this format:

private static String defaultDateFormat = "MM/dd/yy-HH:mm:ss.SSSSSS";
private transient DateTimeFormatter dateTimeFormatter;


If your records are in dd/MM/yy- format, then I believe you may see this
error.
Can you verify the timestamp field’s format?

If this is the case, then you will need to modify the default log timestamp
format for snort in the short term.
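Note that both orderings accept a value like 01/11, just with different meanings, so a day-first record can silently shift the date rather than error out; it only fails outright when the day exceeds 12. A quick Python illustration (strptime's %f plays the role of the six-digit SSSSSS fraction in the Java pattern):

```python
from datetime import datetime

ts = "01/11/17-20:49:18.107168"

# Pattern used by the Metron snort parser: MM/dd/yy-HH:mm:ss.SSSSSS
as_mm_dd = datetime.strptime(ts, "%m/%d/%y-%H:%M:%S.%f")
# The same string read day-first, dd/MM/yy-HH:mm:ss.SSSSSS
as_dd_mm = datetime.strptime(ts, "%d/%m/%y-%H:%M:%S.%f")

print(as_mm_dd.date())  # 2017-01-11 -> January 11
print(as_dd_mm.date())  # 2017-11-01 -> November 1
```

A day value above 12 (e.g. 25/11/17) would instead raise a parse error under the MM/dd pattern, which is the kind of failure the parser bolt would surface in Storm UI.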


On November 8, 2017 at 06:09:11, Otto Fowler (ottobackwards@gmail.com)
wrote:

Can you post what the value of the ‘timestamp’ field/column is for a piece
of data that is failing

Re: Snort Logs

Posted by Otto Fowler <ot...@gmail.com>.
Can you post the value of the ‘timestamp’ field/column for a piece of data
that is failing?



On November 8, 2017 at 03:55:47, Syed Hammad Tahir (mscs16059@itu.edu.pk)
wrote:

Now I am pretty sure that the issue is the format of the logs I am trying
to push

[image: Inline image 1]

Can someone tell me the location of snort stub canned data file? Maybe I
could see its formatting and try following the same thing.


Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Now I am pretty sure that the issue is the format of the logs I am trying
to push.

[image: Inline image 1]

Can someone tell me the location of the snort stub canned data file? Maybe I
could look at its formatting and follow the same thing.

On Tue, Nov 7, 2017 at 10:13 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> thats how I am pushing my logs to kafka topic
>
>
> [image: Inline image 1]
>
> After running this command, I copy paste a few lines from here:
> https://raw.githubusercontent.com/apache/metron/master/
> metron-deployment/roles/sensor-stubs/files/snort.out
>
> like this
>
>
> [image: Inline image 2]
>
> I am not getting any error here. I can also see these lines pushed out via
> kafka consumer under topic of snort.
>
> This was the mechanism I am using to push the logs.

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
That's how I am pushing my logs to the kafka topic.


[image: Inline image 1]

After running this command, I copy-paste a few lines from here:
https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out

like this


[image: Inline image 2]

I am not getting any errors here, and I can also see these lines coming back
via the kafka consumer under the snort topic.

This is the mechanism I am using to push the logs.
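Since a truncated paste is easy to miss, a quick sanity check on each line before producing it to kafka can save a debugging round-trip. This is only a sketch: the sample line below is abbreviated (a real snort.out record carries more fields), and the CSV layout is assumed from the stub file linked above.

```python
import csv
import io

def snort_field_count(line):
    # Parse one snort.out-style line with a real CSV reader so that
    # quoted fields (like the alert message) are handled correctly.
    # A parse failure or a surprising field count usually means the
    # paste was truncated or mangled by the terminal.
    fields = next(csv.reader(io.StringIO(line)))
    return len(fields)

# Abbreviated sample in the stub format (a real record has more fields):
sample = ('01/11/17-20:49:19.190011 ,1,999158,0,"snort test alert",'
          'TCP,192.168.66.1,49581,192.168.66.121,22')
print(snort_field_count(sample))  # -> 10
```

Counting fields on every pasted line before producing it makes it obvious when a line got cut in half.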

On Tue, Nov 7, 2017 at 7:18 PM, Otto Fowler <ot...@gmail.com> wrote:

> What I mean is this:
>
> I *think* you have tried both messages coming from snort through some
> setup ( getting pushed to kafka ), which I think of as live.  I also think
> you have manually pushed messages, where you see this error.
> So what I am asking is if you see the same errors for things that are
> automatically pushed to kafka as you do when you manual push them.
>
>
>
> On November 7, 2017 at 08:51:41, Syed Hammad Tahir (mscs16059@itu.edu.pk)
> wrote:
>
> "Yes, If the messages cannot be parsed then that would be a problem.  If
> you see this error with your ‘live’ messages as well then that could be it.
> I wonder if the issue is with the date format?"
>
> If by 'live' messages you mean the time I push them into kafka topic then
> no, I dont see any error at that time. If 'live' means something else here
> then please tell me what could it be.
>
>
>
> On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <ot...@gmail.com>
> wrote:
>
>> Yes, If the messages cannot be parsed then that would be a problem.  If
>> you see this error with your ‘live’ messages as well then that could be it.
>> I wonder if the issue is with the date format?
>>
>> You need to confirm that you see these same errors with the live data or
>> not.
>>
>> Remember, the flow is like this
>>
>> snort -> ??? -> Kafka -> Storm Parser Topology -> kafka -> Storm
>> Enrichment Topology -> Kafka -> Storm Indexing Topology -> HDFS |
>> ElasticSearch
>> then
>> Kibana <-> Elastic Search
>>
>> Any point in this chain could fail and result in Kibana not seeing things.
>>
>>
>> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (mscs16059@itu.edu.pk)
>> wrote:
>>
>> could this be related to why I am unable to see logs in kibana dashboard?
>>
>> I am copying a few lines from here
>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>
>> and then pushing them to snort kafka topic.
>>
>> This is an error I am seeing in the Storm UI parser bolt in the snort section:
>>
>> [image: Inline image 1]
>>
>> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> I guess I have hit a dead end. I am not able to get the snort logs in
>>> kibana dashboard. Any help will be appreciated.
>>>
>>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
>>> wrote:
>>>
>>>> I guess this (metron.log) in /var/log/elasticsearch/ is also relevant
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> Cluster health by index shows this:
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> looks like some shard is unassigned and that is related to snort.
>>>>> Could it be the logs I was pushing to kafka topic earlier?
>>>>>
>>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> This is what I see here. What should I be looking at here?
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> hi, I am back at work. lets see if i can find something in logs
>>>>>>>
>>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> It looks like your ES cluster has a health of Red, so there's your
>>>>>>>> problem.  I would go look in /var/log/elasticsearch/ at some logs.
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>>> Subject: Re: Snort Logs
>>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> NVM, I have installed the elastic search head. Now where do I go
>>>>>>>>> in this to find out why I cant see the snort logs in kibana dashboard,
>>>>>>>>> pushed to snort topic via kafka producer?
>>>>>>>>>
>>>>>>>>> [image: Inline image 1]
>>>>>>>>>
>>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> You can install it into the chrome web browser from the play
>>>>>>>>>> store.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>>
>>>>>>>>>> And how do I install elasticsearch head on the vagrant VM?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>> --
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Otto Fowler <ot...@gmail.com>.
What I mean is this:

I *think* you have tried both: messages coming from snort through some setup
( getting pushed to kafka ), which I think of as live, and messages you have
manually pushed, where you see this error.
So what I am asking is whether you see the same errors for things that are
automatically pushed to kafka as you do when you manually push them.
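One way to make that comparison systematic is to walk the kafka hops in the chain (snort, enrichments, indexing) and note the first one where an event stops appearing. A rough sketch of the bookkeeping; the has-messages flags would come from manual kafka-console-consumer checks, and the topic names are the ones used in this thread:

```python
# Order of the kafka topics a snort event passes through in metron.
CHAIN = ["snort", "enrichments", "indexing"]

def first_broken_hop(has_messages):
    # has_messages: dict of topic -> bool (did a console consumer show
    # the event there?). Returns the first topic with no messages, or
    # None when every hop had data (then look at Elasticsearch/Kibana).
    for topic in CHAIN:
        if not has_messages.get(topic, False):
            return topic
    return None

print(first_broken_hop({"snort": True, "enrichments": False}))
# -> enrichments
```

Running this for both the live and the manually pushed events shows whether they die at the same hop.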



On November 7, 2017 at 08:51:41, Syed Hammad Tahir (mscs16059@itu.edu.pk)
wrote:

"Yes, If the messages cannot be parsed then that would be a problem.  If
you see this error with your ‘live’ messages as well then that could be it.
I wonder if the issue is with the date format?"

If by 'live' messages you mean the time I push them into kafka topic then
no, I dont see any error at that time. If 'live' means something else here
then please tell me what could it be.



On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <ot...@gmail.com> wrote:

> Yes, If the messages cannot be parsed then that would be a problem.  If
> you see this error with your ‘live’ messages as well then that could be it.
> I wonder if the issue is with the date format?
>
> You need to confirm that you see these same errors with the live data or
> not.
>
> Remember, the flow is like this
>
> snort -> ??? -> Kafka -> Storm Parser Topology -> kafka -> Storm
> Enrichment Topology -> Kafka -> Storm Indexing Topology -> HDFS |
> ElasticSearch
> then
> Kibana <-> Elastic Search
>
> Any point in this chain could fail and result in Kibana not seeing things.
>
>
> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (mscs16059@itu.edu.pk)
> wrote:
>
> could this be related to why I am unable to see logs in kibana dashboard?
>
> I am copying a few lines from here
> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>
> and then pushing them to snort kafka topic.
>
> This is an error I am seeing in the Storm UI parser bolt in the snort section:
>
> [image: Inline image 1]
>
> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> I guess I have hit a dead end. I am not able to get the snort logs in
>> kibana dashboard. Any help will be appreciated.
>>
>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> I guess this (metron.log) in /var/log/elasticsearch/ is also relevant
>>>
>>> [image: Inline image 1]
>>>
>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>> > wrote:
>>>
>>>> Cluster health by index shows this:
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> looks like some shard is unassigned and that is related to snort. Could
>>>> it be the logs I was pushing to kafka topic earlier?
>>>>
>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> This is what I see here. What should I be looking at here?
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> hi, I am back at work. lets see if i can find something in logs
>>>>>>
>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> It looks like your ES cluster has a health of Red, so there's your
>>>>>>> problem.  I would go look in /var/log/elasticsearch/ at some logs.
>>>>>>>
>>>>>>> Jon
>>>>>>>
>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>> ---------- Forwarded message ----------
>>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>> Subject: Re: Snort Logs
>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>
>>>>>>>>
>>>>>>>> NVM, I have installed the elastic search head. Now where do I go in
>>>>>>>> this to find out why I cant see the snort logs in kibana dashboard, pushed
>>>>>>>> to snort topic via kafka producer?
>>>>>>>>
>>>>>>>> [image: Inline image 1]
>>>>>>>>
>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> You can install it into the chrome web browser from the play store.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>
>>>>>>>>> And how do I install elasticsearch head on the vagrant VM?
>>>>>>>>>
>>>>>>>>>
>>>>>>>> --
>>>>>>>
>>>>>>> Jon
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
"Yes, If the messages cannot be parsed then that would be a problem.  If
you see this error with your ‘live’ messages as well then that could be it.
I wonder if the issue is with the date format?"

If by 'live' messages you mean the ones I push into the kafka topic, then no,
I don't see any errors at that time. If 'live' means something else here,
please tell me what it could be.



On Tue, Nov 7, 2017 at 5:07 PM, Otto Fowler <ot...@gmail.com> wrote:

> Yes, If the messages cannot be parsed then that would be a problem.  If
> you see this error with your ‘live’ messages as well then that could be it.
> I wonder if the issue is with the date format?
>
> You need to confirm that you see these same errors with the live data or
> not.
>
> Remember, the flow is like this
>
> snort -> ??? -> Kafka -> Storm Parser Topology -> kafka -> Storm
> Enrichment Topology -> Kafka -> Storm Indexing Topology -> HDFS |
> ElasticSearch
> then
> Kibana <-> Elastic Search
>
> Any point in this chain could fail and result in Kibana not seeing things.
>
>
> On November 7, 2017 at 01:57:19, Syed Hammad Tahir (mscs16059@itu.edu.pk)
> wrote:
>
> could this be related to why I am unable to see logs in kibana dashboard?
>
> I am copying a few lines from here
> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>
> and then pushing them to snort kafka topic.
>
> This is an error I am seeing in the Storm UI parser bolt in the snort section:
>
> [image: Inline image 1]
>
> On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> I guess I have hit a dead end. I am not able to get the snort logs in
>> kibana dashboard. Any help will be appreciated.
>>
>> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> I guess this (metron.log) in /var/log/elasticsearch/ is also relevant
>>>
>>> [image: Inline image 1]
>>>
>>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>> > wrote:
>>>
>>>> Cluster health by index shows this:
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> looks like some shard is unassigned and that is related to snort. Could
>>>> it be the logs I was pushing to kafka topic earlier?
>>>>
>>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> This is what I see here. What should I be looking at here?
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> hi, I am back at work. lets see if i can find something in logs
>>>>>>
>>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> It looks like your ES cluster has a health of Red, so there's your
>>>>>>> problem.  I would go look in /var/log/elasticsearch/ at some logs.
>>>>>>>
>>>>>>> Jon
>>>>>>>
>>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>> ---------- Forwarded message ----------
>>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>>> Subject: Re: Snort Logs
>>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>>
>>>>>>>>
>>>>>>>> NVM, I have installed the elastic search head. Now where do I go in
>>>>>>>> this to find out why I cant see the snort logs in kibana dashboard, pushed
>>>>>>>> to snort topic via kafka producer?
>>>>>>>>
>>>>>>>> [image: Inline image 1]
>>>>>>>>
>>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <
>>>>>>>> ottobackwards@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> You can install it into the chrome web browser from the play store.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>>
>>>>>>>>> And how do I install elasticsearch head on the vagrant VM?
>>>>>>>>>
>>>>>>>>>
>>>>>>>> --
>>>>>>>
>>>>>>> Jon
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Otto Fowler <ot...@gmail.com>.
Yes, if the messages cannot be parsed then that would be a problem.  If you
see this error with your ‘live’ messages as well, then that could be it.
I wonder if the issue is with the date format?

You need to confirm that you see these same errors with the live data or
not.

Remember, the flow is like this

snort -> ??? -> Kafka -> Storm Parser Topology -> Kafka -> Storm Enrichment
Topology -> Kafka -> Storm Indexing Topology -> HDFS | Elasticsearch
then
Kibana <-> Elasticsearch

Any point in this chain could fail and result in Kibana not seeing things.
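The date-format suspicion is easy to check outside of Storm. The stub lines start with a timestamp like 01/11/17-20:49:19.190011; a minimal sketch, assuming that MM/dd/yy-HH:mm:ss.ffffff layout (this mirrors the stub data, not the parser's actual Java code):

```python
from datetime import datetime

def parse_snort_timestamp(ts):
    # Leading field of a snort.out line; a ValueError here is roughly
    # what surfaces as a parser-bolt error in the Storm UI.
    return datetime.strptime(ts.strip(), "%m/%d/%y-%H:%M:%S.%f")

print(parse_snort_timestamp("01/11/17-20:49:19.190011 "))
# -> 2017-01-11 20:49:19.190011
```

If a manually pasted line fails this check but a live one passes, the date format is the difference to chase.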


On November 7, 2017 at 01:57:19, Syed Hammad Tahir (mscs16059@itu.edu.pk)
wrote:

could this be related to why I am unable to see logs in kibana dashboard?

I am copying a few lines from here
https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out

and then pushing them to snort kafka topic.

This is an error I am seeing in the Storm UI parser bolt in the snort section:

[image: Inline image 1]

On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> I guess I have hit a dead end. I am not able to get the snort logs in
> kibana dashboard. Any help will be appreciated.
>
> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> I guess this (metron.log) in /var/log/elasticsearch/ is also relevant
>>
>> [image: Inline image 1]
>>
>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> Cluster health by index shows this:
>>>
>>> [image: Inline image 1]
>>>
>>> looks like some shard is unassigned and that is related to snort. Could
>>> it be the logs I was pushing to kafka topic earlier?
>>>
>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>> > wrote:
>>>
>>>> This is what I see here. What should I be looking at here?
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> hi, I am back at work. lets see if i can find something in logs
>>>>>
>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> It looks like your ES cluster has a health of Red, so there's your
>>>>>> problem.  I would go look in /var/log/elasticsearch/ at some logs.
>>>>>>
>>>>>> Jon
>>>>>>
>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>>
>>>>>>> ---------- Forwarded message ----------
>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>> Subject: Re: Snort Logs
>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>
>>>>>>>
>>>>>>> NVM, I have installed the elastic search head. Now where do I go in
>>>>>>> this to find out why I cant see the snort logs in kibana dashboard, pushed
>>>>>>> to snort topic via kafka producer?
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <ottobackwards@gmail.com
>>>>>>> > wrote:
>>>>>>>
>>>>>>>> You can install it into the chrome web browser from the play store.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>
>>>>>>>> And how do I install elasticsearch head on the vagrant VM?
>>>>>>>>
>>>>>>>>
>>>>>>> --
>>>>>>
>>>>>> Jon
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Could this be related to why I am unable to see logs in the kibana dashboard?

I am copying a few lines from here
https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out

and then pushing them to snort kafka topic.

This is an error I am seeing in the Storm UI parser bolt in the snort section:

[image: Inline image 1]

On Tue, Nov 7, 2017 at 11:49 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> I guess I have hit a dead end. I am not able to get the snort logs in
> kibana dashboard. Any help will be appreciated.
>
> On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> I guess this (metron.log) in /var/log/elasticsearch/ is also relevant
>>
>> [image: Inline image 1]
>>
>> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> Cluster health by index shows this:
>>>
>>> [image: Inline image 1]
>>>
>>> looks like some shard is unassigned and that is related to snort. Could
>>> it be the logs I was pushing to kafka topic earlier?
>>>
>>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>> > wrote:
>>>
>>>> This is what I see here. What should I be looking at here?
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> hi, I am back at work. lets see if i can find something in logs
>>>>>
>>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> It looks like your ES cluster has a health of Red, so there's your
>>>>>> problem.  I would go look in /var/log/elasticsearch/ at some logs.
>>>>>>
>>>>>> Jon
>>>>>>
>>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>>
>>>>>>> ---------- Forwarded message ----------
>>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>>> Subject: Re: Snort Logs
>>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>>
>>>>>>>
>>>>>>> NVM, I have installed the elastic search head. Now where do I go in
>>>>>>> this to find out why I cant see the snort logs in kibana dashboard, pushed
>>>>>>> to snort topic via kafka producer?
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <ottobackwards@gmail.com
>>>>>>> > wrote:
>>>>>>>
>>>>>>>> You can install it into the chrome web browser from the play store.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>>
>>>>>>>> And how do I install elasticsearch head on the vagrant VM?
>>>>>>>>
>>>>>>>>
>>>>>>> --
>>>>>>
>>>>>> Jon
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
I guess I have hit a dead end. I am not able to get the snort logs in
kibana dashboard. Any help will be appreciated.

On Mon, Nov 6, 2017 at 1:24 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> I guess this (metron.log) in /var/log/elasticsearch/ is also relevant
>
> [image: Inline image 1]
>
> On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> Cluster health by index shows this:
>>
>> [image: Inline image 1]
>>
>> looks like some shard is unassigned and that is related to snort. Could
>> it be the logs I was pushing to kafka topic earlier?
>>
>> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> This is what I see here. What should I be looking at here?
>>>
>>> [image: Inline image 1]
>>>
>>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>>> > wrote:
>>>
>>>> hi, I am back at work. lets see if i can find something in logs
>>>>
>>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>> wrote:
>>>>
>>>>> It looks like your ES cluster has a health of Red, so there's your
>>>>> problem.  I would go look in /var/log/elasticsearch/ at some logs.
>>>>>
>>>>> Jon
>>>>>
>>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>>
>>>>>> ---------- Forwarded message ----------
>>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>>> Subject: Re: Snort Logs
>>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>>
>>>>>>
>>>>>> NVM, I have installed the elastic search head. Now where do I go in
>>>>>> this to find out why I cant see the snort logs in kibana dashboard, pushed
>>>>>> to snort topic via kafka producer?
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <ot...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> You can install it into the chrome web browser from the play store.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>>
>>>>>>> And how do I install elasticsearch head on the vagrant VM?
>>>>>>>
>>>>>>>
>>>>>> --
>>>>>
>>>>> Jon
>>>>>
>>>>
>>>>
>>>
>>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
I guess this (metron.log) in /var/log/elasticsearch/ is also relevant

[image: Inline image 1]

On Mon, Nov 6, 2017 at 11:46 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> Cluster health by index shows this:
>
> [image: Inline image 1]
>
> looks like some shard is unassigned and that is related to snort. Could it
> be the logs I was pushing to kafka topic earlier?
>
> On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> This is what I see here. What should I be looking at here?
>>
>> [image: Inline image 1]
>>
>> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> hi, I am back at work. lets see if i can find something in logs
>>>
>>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <ze...@gmail.com>
>>> wrote:
>>>
>>>> It looks like your ES cluster has a health of Red, so there's your
>>>> problem.  I would go look in /var/log/elasticsearch/ at some logs.
>>>>
>>>> Jon
>>>>
>>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <ms...@itu.edu.pk>
>>>> wrote:
>>>>
>>>>>
>>>>> ---------- Forwarded message ----------
>>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>>> Subject: Re: Snort Logs
>>>>> To: Otto Fowler <ot...@gmail.com>
>>>>>
>>>>>
>>>>> NVM, I have installed the elastic search head. Now where do I go in
>>>>> this to find out why I cant see the snort logs in kibana dashboard, pushed
>>>>> to snort topic via kafka producer?
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <ot...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> You can install it into the chrome web browser from the play store.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>>
>>>>>> And how do I install elasticsearch head on the vagrant VM?
>>>>>>
>>>>>>
>>>>> --
>>>>
>>>> Jon
>>>>
>>>
>>>
>>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Cluster health by index shows this:

[image: Inline image 1]

Looks like a shard is unassigned, and it is related to snort. Could it
be the logs I was pushing to the kafka topic earlier?

On Mon, Nov 6, 2017 at 10:47 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> This is what I see here. What should I be looking at here?
>
> [image: Inline image 1]
>
> On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> hi, I am back at work. lets see if i can find something in logs
>>
>> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <ze...@gmail.com>
>> wrote:
>>
>>> It looks like your ES cluster has a health of Red, so there's your
>>> problem.  I would go look in /var/log/elasticsearch/ at some logs.
>>>
>>> Jon
>>>
>>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <ms...@itu.edu.pk>
>>> wrote:
>>>
>>>>
>>>> ---------- Forwarded message ----------
>>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>>> Subject: Re: Snort Logs
>>>> To: Otto Fowler <ot...@gmail.com>
>>>>
>>>>
>>>> NVM, I have installed the elastic search head. Now where do I go in
>>>> this to find out why I cant see the snort logs in kibana dashboard, pushed
>>>> to snort topic via kafka producer?
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <ot...@gmail.com>
>>>> wrote:
>>>>
>>>>> You can install it into the chrome web browser from the play store.
>>>>>
>>>>>
>>>>>
>>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>>> mscs16059@itu.edu.pk) wrote:
>>>>>
>>>>> And how do I install elasticsearch head on the vagrant VM?
>>>>>
>>>>>
>>>> --
>>>
>>> Jon
>>>
>>
>>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
This is what I see here. What should I be looking at here?

[image: Inline image 1]

On Mon, Nov 6, 2017 at 10:33 AM, Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> hi, I am back at work. lets see if i can find something in logs
>
> On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <ze...@gmail.com> wrote:
>
>> It looks like your ES cluster has a health of Red, so there's your
>> problem.  I would go look in /var/log/elasticsearch/ at some logs.
>>
>> Jon
>>
>> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>>
>>> ---------- Forwarded message ----------
>>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>>> Date: Fri, Nov 3, 2017 at 5:07 PM
>>> Subject: Re: Snort Logs
>>> To: Otto Fowler <ot...@gmail.com>
>>>
>>>
>>> NVM, I have installed the elastic search head. Now where do I go in this
>>> to find out why I cant see the snort logs in kibana dashboard, pushed to
>>> snort topic via kafka producer?
>>>
>>> [image: Inline image 1]
>>>
>>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <ot...@gmail.com>
>>> wrote:
>>>
>>>> You can install it into the chrome web browser from the play store.
>>>>
>>>>
>>>>
>>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (
>>>> mscs16059@itu.edu.pk) wrote:
>>>>
>>>> And how do I install elasticsearch head on the vagrant VM?
>>>>
>>>>
>>> --
>>
>> Jon
>>
>
>

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
Hi, I am back at work. Let's see if I can find something in the logs.

On Sat, Nov 4, 2017 at 6:38 PM, Zeolla@GMail.com <ze...@gmail.com> wrote:

> It looks like your ES cluster has a health of Red, so there's your
> problem.  I would go look in /var/log/elasticsearch/ at some logs.
>
> Jon
>
> On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>>
>> ---------- Forwarded message ----------
>> From: Syed Hammad Tahir <ms...@itu.edu.pk>
>> Date: Fri, Nov 3, 2017 at 5:07 PM
>> Subject: Re: Snort Logs
>> To: Otto Fowler <ot...@gmail.com>
>>
>>
>> NVM, I have installed the elastic search head. Now where do I go in this
>> to find out why I cant see the snort logs in kibana dashboard, pushed to
>> snort topic via kafka producer?
>>
>> [image: Inline image 1]
>>
>> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <ot...@gmail.com>
>> wrote:
>>
>>> You can install it into the chrome web browser from the play store.
>>>
>>>
>>>
>>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (mscs16059@itu.edu.pk)
>>> wrote:
>>>
>>> And how do I install elasticsearch head on the vagrant VM?
>>>
>>>
>> --
>
> Jon
>

Re: Snort Logs

Posted by "Zeolla@GMail.com" <ze...@gmail.com>.
It looks like your ES cluster has a health of Red, so there's your
problem.  I would go look in /var/log/elasticsearch/ at some logs.
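A red cluster usually comes down to unassigned shards, and the _cluster/health response says so directly. A small sketch of reading that response (the field names match the stock Elasticsearch cluster health API; the values below are illustrative, not from your cluster):

```python
import json

def summarize_health(health):
    # health: parsed JSON from GET http://<es-host>:9200/_cluster/health
    return "status={status} unassigned_shards={unassigned_shards}".format(**health)

# Illustrative response trimmed to the two relevant fields:
sample = json.loads('{"status": "red", "unassigned_shards": 4}')
print(summarize_health(sample))  # -> status=red unassigned_shards=4
```

On a single-node install, unassigned replica shards only turn the cluster yellow; red means primary shards are unassigned, so the index data itself is the thing to investigate in the logs.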

Jon

On Fri, Nov 3, 2017 at 12:19 PM Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

>
> ---------- Forwarded message ----------
> From: Syed Hammad Tahir <ms...@itu.edu.pk>
> Date: Fri, Nov 3, 2017 at 5:07 PM
> Subject: Re: Snort Logs
> To: Otto Fowler <ot...@gmail.com>
>
>
> NVM, I have installed the elastic search head. Now where do I go in this
> to find out why I cant see the snort logs in kibana dashboard, pushed to
> snort topic via kafka producer?
>
> [image: Inline image 1]
>
> On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <ot...@gmail.com>
> wrote:
>
>> You can install it into the chrome web browser from the play store.
>>
>>
>>
>> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (mscs16059@itu.edu.pk)
>> wrote:
>>
>> And how do I install elasticsearch head on the vagrant VM?
>>
>>
> --

Jon

Fwd: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
---------- Forwarded message ----------
From: Syed Hammad Tahir <ms...@itu.edu.pk>
Date: Fri, Nov 3, 2017 at 5:07 PM
Subject: Re: Snort Logs
To: Otto Fowler <ot...@gmail.com>


NVM, I have installed elasticsearch-head. Now where do I go in this to find
out why I can't see the snort logs (pushed to the snort topic via the kafka
producer) in the kibana dashboard?

[image: Inline image 1]

On Fri, Nov 3, 2017 at 5:03 PM, Otto Fowler <ot...@gmail.com> wrote:

> You can install it into the chrome web browser from the play store.
>
>
>
> On November 3, 2017 at 07:47:47, Syed Hammad Tahir (mscs16059@itu.edu.pk)
> wrote:
>
> And how do I install elasticsearch head on the vagrant VM?
>
>

Re: Snort Logs

Posted by Otto Fowler <ot...@gmail.com>.
You can install it into the chrome web browser from the play store.



On November 3, 2017 at 07:47:47, Syed Hammad Tahir (mscs16059@itu.edu.pk)
wrote:

And how do I install elasticsearch head on the vagrant VM?

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
And how do I install elasticsearch head on the vagrant VM?

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
"Did you look at kibana / elasticsearch head, and the ES and kibana logs?
I'm willing to bet that's your issue."

Hi, where do I find all these logs? I mean how do I do all that you
suggested? :)

On Thu, Nov 2, 2017 at 3:13 PM, Zeolla@GMail.com <ze...@gmail.com> wrote:

> If logs are in the indexing topic, you may have an issue indexing to
> Elasticsearch, or the indexing storm topology.  Did you look at kibana /
> elasticsearch head, and the ES and kibana logs?  I'm willing to bet that's
> your issue.
>
> Jon
>
> On Thu, Nov 2, 2017, 04:54 Syed Hammad Tahir <ms...@itu.edu.pk> wrote:
>
>> I can see the logs under all the given topics when used with kafka
>> consumer. If everything worked well, I should look into kibana dashboards
>> where I am unable to see the logs. What should I do in order to see them?
>>
>> On Wed, Nov 1, 2017 at 5:19 PM, Zeolla@GMail.com <ze...@gmail.com>
>> wrote:
>>
>>> If you're sending logs to the snort topic successfully and they aren't
>>> showing up in the UI, odds are you have an issue in the pipeline.  The easy
>>> thing to do would be to look in the storm UI briefly and look for errors.
>>> If you don't have any errors in storm, here's a brief set of steps I would
>>> do:
>>>
>>> 1.  Connect to the snort topic as a consumer and make sure the logs you
>>> are sending are properly getting published
>>> 2.  If logs are getting to the snort topic, look at the enrichments
>>> topic for the same thing (the logs will look slightly different, as they
>>> have now been parsed)
>>> 3.  If logs are in enrichments, look at the indexing topic.
>>> 4.  If logs are in the indexing topic, you may have an issue indexing to
>>> Elasticsearch.
>>>
>>> If you find an issue during step 1 - your issue is with kafka or sending
>>> to kafka.  If the issue is in step 2, look at the snort storm topology in
>>> more detail and the storm logs.  If the issue is in step 3, look at the
>>> enrichment topology/storm logs, if the issue is in step 4 look at
>>> kibana/elasticsearch UIs and logs.  If everything looks good after this
>>> exercise, I would point the finger at your kibana dashboards, at least
>>> initially.
>>>
>>> Jon
>>>
>>> On Wed, Nov 1, 2017 at 4:17 AM Syed Hammad Tahir <ms...@itu.edu.pk>
>>> wrote:
>>>
>>>> How do I make these messages sent to kafka producer show in kibana
>>>> dashboard  or any metron related UI
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> On Tue, Oct 31, 2017 at 12:50 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> OK, so finally the message is going (the formatted one) but I still
>>>>> can't see it in the Kibana dashboard under snort's label. I had stopped all the
>>>>> stub sensors from monit before doing it. What am I doing wrong here?
>>>>>
>>>>> On Tue, Oct 31, 2017 at 11:28 AM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> same thing even when I send a formatted message.
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>> On Tue, Oct 31, 2017 at 10:57 AM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> I sent a random message to that kafka topic and got this
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> I guess this is because I am not following the format of message I
>>>>>>> should send? Like those snort logs you showed.
>>>>>>>
>>>>>>> On Mon, Oct 30, 2017 at 5:24 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> They need to meet the format of the logs I sent earlier.  Look into
>>>>>>>> the snort output options - may require you rerun snort, depending on your
>>>>>>>> situation
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>> On Mon, Oct 30, 2017, 06:53 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Yes, I have converted them to text, but those logs are simply
>>>>>>>>> captured packet headers from the local network. Now I just push them via
>>>>>>>>> that kafka producer command under the topic name snort and they will be
>>>>>>>>> visible in Metron?
>>>>>>>>>
>>>>>>>>> On Mon, Oct 30, 2017 at 2:41 PM, Zeolla@GMail.com <
>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> You need text logs. Here's an example of some properly formatted
>>>>>>>>>> logs -
>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>> On Mon, Oct 30, 2017, 01:34 Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> I have found kafka-console-producer.sh, but I need to know
>>>>>>>>>>> how to make it read the snort.log (tcpdump format) file. Maybe I am missing
>>>>>>>>>>> something in plain sight, but it would be awesome if you could tell me.
>>>>>>>>>>>
>>>>>>>>>>> Regards.
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Oct 27, 2017 at 5:09 PM, Zeolla@GMail.com <
>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> On the 25th I said:
>>>>>>>>>>>>
>>>>>>>>>>>>      It should be in /usr/hdp/current/kafka-broker/bin/ or
>>>>>>>>>>>> similar (from memory) on node1, assuming you are running full dev.
>>>>>>>>>>>>
>>>>>>>>>>>>      Jon
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Oct 27, 2017 at 6:25 AM Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> snort logs are in tcp dump format. I may have to convert them.
>>>>>>>>>>>>>
>>>>>>>>>>>>> bin/kafka-console-producer.sh --broker-list localhost:9092
>>>>>>>>>>>>> --topic test
>>>>>>>>>>>>>
>>>>>>>>>>>>> How do I give a file name or path in this command?
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Fri, Oct 27, 2017 at 2:53 PM, Zeolla@GMail.com <
>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> If you have text snort logs you can use Apache nifi or the
>>>>>>>>>>>>>> Kafka producer script as described in step 4 here[1] to push them to
>>>>>>>>>>>>>> Metron's snort topic.  You may also want to look at this [2].
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 1: https://kafka.apache.org/quickstart
>>>>>>>>>>>>>> 2: https://stackoverflow.com/questions/38701179/kafka-console-producer-and-bash-script
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Fri, Oct 27, 2017, 02:15 Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hello everyone,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I have run snort independently over vagrant ssh and dumped the
>>>>>>>>>>>>>>> logs in tcpdump format. Now I want to bring them into Metron to play with
>>>>>>>>>>>>>>> them a bit. Some of you already replied with solutions, but that is
>>>>>>>>>>>>>>> lost in the inbox somewhere, engulfed by the elasticsearch issue that I
>>>>>>>>>>>>>>> had. Please give me an easy-to-understand solution to this problem.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Regards.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>> --
>>>
>>> Jon
>>>
>>
>> --
>
> Jon
>
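
The topic-by-topic check Jon describes in the quoted steps can be scripted: peek at snort, then enrichments, then indexing, and note where messages stop flowing. A sketch, assuming full-dev defaults (the consumer script path and broker address are assumptions):

```shell
# Sketch: sample each Metron topic in pipeline order to find where
# messages stop. Script path and broker are full-dev assumptions.
CONSUMER="/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh"
BROKER="node1:6667"

if [ -x "$CONSUMER" ]; then
  for topic in snort enrichments indexing; do
    echo "== $topic =="
    # A few messages from the start of each topic confirms flow;
    # --timeout-ms keeps the consumer from blocking forever on an empty topic
    "$CONSUMER" --bootstrap-server "$BROKER" --topic "$topic" \
      --from-beginning --max-messages 5 --timeout-ms 10000
  done
  status="checked all three topics"
else
  status="consumer script not found at $CONSUMER"
fi
echo "$status"
```

The first topic that comes back empty points at the component feeding it: the producer for snort, the parser topology for enrichments, the enrichment topology for indexing.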

Re: Snort Logs

Posted by Otto Fowler <ot...@gmail.com>.
Note that this command has ansible variables mixed in with it.

If you DuckDuckGo (or Google) "ambari REST API service" you will find
examples and docs on using REST with Ambari.



On November 2, 2017 at 08:08:42, Otto Fowler (ottobackwards@gmail.com)
wrote:

- name: Load Kibana Dashboard
  command: >
    curl -s -w "%{http_code}" -u admin:admin -H "X-Requested-By:
ambari" -X POST -d '{ "RequestInfo": { "context": "Install Kibana
Dashboard from REST", "command":
"LOAD_TEMPLATE"},"Requests/resource_filters": [{"service_name":
"KIBANA","component_name": "KIBANA_MASTER","hosts" : "{{
kibana_hosts[0] }}"}]}' http://{{ groups.ambari_master[0] }}:{{
ambari_port }}/api/v1/clusters/{{ cluster_name }}/requests
  args:
    warn: off
  register: result
  failed_when: "result.rc != 0 or '202' not in result.stdout"



This is an example of a curl command calling the LOAD_TEMPLATE function on
the KIBANA service.
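
With the ansible template variables replaced, the same call can be made as a plain curl command. A sketch with example values (host, port, cluster name, and the admin:admin credentials are placeholders for illustration; substitute your own):

```shell
# Sketch: the quoted ansible task as a plain curl call. AMBARI and
# CLUSTER are illustrative placeholders, not real defaults.
AMBARI="http://node1:8080"
CLUSTER="metron_cluster"

code=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 \
  -u admin:admin -H "X-Requested-By: ambari" -X POST \
  -d '{"RequestInfo": {"context": "Install Kibana Dashboard from REST",
       "command": "LOAD_TEMPLATE"},
       "Requests/resource_filters": [{"service_name": "KIBANA",
       "component_name": "KIBANA_MASTER", "hosts": "node1"}]}' \
  "$AMBARI/api/v1/clusters/$CLUSTER/requests" || true)

# Ambari answers 202 Accepted when it queues the request
result="${code:-unreachable}"
echo "HTTP status: $result"
```

This mirrors the failed_when check in the ansible task, which treats anything other than a 202 in the response as a failure.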



On November 2, 2017 at 08:00:28, Otto Fowler (ottobackwards@gmail.com)
wrote:

You can send REST commands.  For example, we use REST to tell the indexing
service to install the ES templates from ansible.



On November 2, 2017 at 07:33:23, varsha mordi (varsha.prodevans@gmail.com)
wrote:

Is there any alternative to the Ambari UI, other than the Ambari Shell, for
working with service commands?


On Thu, Nov 2, 2017 at 3:43 PM, Zeolla@GMail.com <ze...@gmail.com> wrote:

> If logs are in the indexing topic, you may have an issue indexing to
> Elasticsearch, or the indexing storm topology.  Did you look at kibana /
> elasticsearch head, and the ES and kibana logs?  I'm willing to bet that's
> your issue.
>
> Jon
>
> On Thu, Nov 2, 2017, 04:54 Syed Hammad Tahir <ms...@itu.edu.pk> wrote:
>
>> I can see the logs under all the given topics when used with kafka
>> consumer. If everything worked well, I should look into kibana dashboards
>> where I am unable to see the logs. What should I do in order to see them?
>>
>> On Wed, Nov 1, 2017 at 5:19 PM, Zeolla@GMail.com <ze...@gmail.com>
>> wrote:
>>
>>> If you're sending logs to the snort topic successfully and they aren't
>>> showing up in the UI, odds are you have an issue in the pipeline.  The easy
>>> thing to do would be to look in the storm UI briefly and look for errors.
>>> If you don't have any errors in storm, here's a brief set of steps I would
>>> do:
>>>
>>> 1.  Connect to the snort topic as a consumer and make sure the logs you
>>> are sending are properly getting published
>>> 2.  If logs are getting to the snort topic, look at the enrichments
>>> topic for the same thing (the logs will look slightly different, as they
>>> have now been parsed)
>>> 3.  If logs are in enrichments, look at the indexing topic.
>>> 4.  If logs are in the indexing topic, you may have an issue indexing to
>>> Elasticsearch.
>>>
>>> If you find an issue during step 1 - your issue is with kafka or sending
>>> to kafka.  If the issue is in step 2, look at the snort storm topology in
>>> more detail and the storm logs.  If the issue is in step 3, look at the
>>> enrichment topology/storm logs, if the issue is in step 4 look at
>>> kibana/elasticsearch UIs and logs.  If everything looks good after this
>>> exercise, I would point the finger at your kibana dashboards, at least
>>> initially.
>>>
>>> Jon
>>>
>>> On Wed, Nov 1, 2017 at 4:17 AM Syed Hammad Tahir <ms...@itu.edu.pk>
>>> wrote:
>>>
>>>> How do I make these messages sent to kafka producer show in kibana
>>>> dashboard  or any metron related UI
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> On Tue, Oct 31, 2017 at 12:50 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> OK, so finally the message is going (the formatted one) but I still
>>>>> can't see it in the Kibana dashboard under snort's label. I had stopped all the
>>>>> stub sensors from monit before doing it. What am I doing wrong here?
>>>>>
>>>>> On Tue, Oct 31, 2017 at 11:28 AM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> same thing even when I send a formatted message.
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>> On Tue, Oct 31, 2017 at 10:57 AM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> I sent a random message to that kafka topic and got this
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> I guess this is because I am not following the format of message I
>>>>>>> should send? Like those snort logs you showed.
>>>>>>>
>>>>>>> On Mon, Oct 30, 2017 at 5:24 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> They need to meet the format of the logs I sent earlier.  Look into
>>>>>>>> the snort output options - may require you rerun snort, depending on your
>>>>>>>> situation
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>> On Mon, Oct 30, 2017, 06:53 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Yes, I have converted them to text but those logs are simply
>>>>>>>>> captured packet headers over the local network. Now I just push them via
>>>>>>>>> that kafka producer command under topic name of snort and they will be
>>>>>>>>> visible in metron?
>>>>>>>>>
>>>>>>>>> On Mon, Oct 30, 2017 at 2:41 PM, Zeolla@GMail.com <
>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> You need text logs. Here's an example of some properly formatted
>>>>>>>>>> logs -
>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>> On Mon, Oct 30, 2017, 01:34 Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> I have found kafka-console-producer.sh, but I need to know
>>>>>>>>>>> how to make it read the snort.log (tcpdump format) file. Maybe I am missing
>>>>>>>>>>> something in plain sight, but it would be awesome if you could tell me.
>>>>>>>>>>>
>>>>>>>>>>> Regards.
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Oct 27, 2017 at 5:09 PM, Zeolla@GMail.com <
>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> On the 25th I said:
>>>>>>>>>>>>
>>>>>>>>>>>>      It should be in /usr/hdp/current/kafka-broker/bin/ or
>>>>>>>>>>>> similar (from memory) on node1, assuming you are running full dev.
>>>>>>>>>>>>
>>>>>>>>>>>>      Jon
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Oct 27, 2017 at 6:25 AM Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> snort logs are in tcp dump format. I may have to convert them.
>>>>>>>>>>>>>
>>>>>>>>>>>>> bin/kafka-console-producer.sh --broker-list localhost:9092
>>>>>>>>>>>>> --topic test
>>>>>>>>>>>>>
>>>>>>>>>>>>> How do I give a file name or path in this command?
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Fri, Oct 27, 2017 at 2:53 PM, Zeolla@GMail.com <
>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> If you have text snort logs you can use Apache nifi or the
>>>>>>>>>>>>>> Kafka producer script as described in step 4 here[1] to push them to
>>>>>>>>>>>>>> Metron's snort topic.  You may also want to look at this [2].
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 1: https://kafka.apache.org/quickstart
>>>>>>>>>>>>>> 2: https://stackoverflow.com/questions/38701179/kafka-console-producer-and-bash-script
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Fri, Oct 27, 2017, 02:15 Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hello everyone,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I have run snort independently over vagrant ssh and dumped the
>>>>>>>>>>>>>>> logs in tcpdump format. Now I want to bring them into Metron to play with
>>>>>>>>>>>>>>> them a bit. Some of you already replied with solutions, but that is
>>>>>>>>>>>>>>> lost in the inbox somewhere, engulfed by the elasticsearch issue that I
>>>>>>>>>>>>>>> had. Please give me an easy-to-understand solution to this problem.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Regards.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>> --
>>>
>>> Jon
>>>
>>
>> --
>
> Jon
>



--
Thanks & Regards,

Varsha Mordi

Prodevans Technologies LLP.

M: +91 9637109734  *| *L: +91 80 64533365 *|* www.prodevans.com

Re: Snort Logs

Posted by Otto Fowler <ot...@gmail.com>.
- name: Load Kibana Dashboard
  command: >
    curl -s -w "%{http_code}" -u admin:admin -H "X-Requested-By:
ambari" -X POST -d '{ "RequestInfo": { "context": "Install Kibana
Dashboard from REST", "command":
"LOAD_TEMPLATE"},"Requests/resource_filters": [{"service_name":
"KIBANA","component_name": "KIBANA_MASTER","hosts" : "{{
kibana_hosts[0] }}"}]}' http://{{ groups.ambari_master[0] }}:{{
ambari_port }}/api/v1/clusters/{{ cluster_name }}/requests
  args:
    warn: off
  register: result
  failed_when: "result.rc != 0 or '202' not in result.stdout"



This is an example of a curl command calling the LOAD_TEMPLATE function on
the KIBANA service.



On November 2, 2017 at 08:00:28, Otto Fowler (ottobackwards@gmail.com)
wrote:

You can send REST commands.  For example, we use REST to tell the indexing
service to install the ES templates from ansible.



On November 2, 2017 at 07:33:23, varsha mordi (varsha.prodevans@gmail.com)
wrote:

Is there any alternative to the Ambari UI, other than the Ambari Shell, for
working with service commands?


On Thu, Nov 2, 2017 at 3:43 PM, Zeolla@GMail.com <ze...@gmail.com> wrote:

> If logs are in the indexing topic, you may have an issue indexing to
> Elasticsearch, or the indexing storm topology.  Did you look at kibana /
> elasticsearch head, and the ES and kibana logs?  I'm willing to bet that's
> your issue.
>
> Jon
>
> On Thu, Nov 2, 2017, 04:54 Syed Hammad Tahir <ms...@itu.edu.pk> wrote:
>
>> I can see the logs under all the given topics when used with kafka
>> consumer. If everything worked well, I should look into kibana dashboards
>> where I am unable to see the logs. What should I do in order to see them?
>>
>> On Wed, Nov 1, 2017 at 5:19 PM, Zeolla@GMail.com <ze...@gmail.com>
>> wrote:
>>
>>> If you're sending logs to the snort topic successfully and they aren't
>>> showing up in the UI, odds are you have an issue in the pipeline.  The easy
>>> thing to do would be to look in the storm UI briefly and look for errors.
>>> If you don't have any errors in storm, here's a brief set of steps I would
>>> do:
>>>
>>> 1.  Connect to the snort topic as a consumer and make sure the logs you
>>> are sending are properly getting published
>>> 2.  If logs are getting to the snort topic, look at the enrichments
>>> topic for the same thing (the logs will look slightly different, as they
>>> have now been parsed)
>>> 3.  If logs are in enrichments, look at the indexing topic.
>>> 4.  If logs are in the indexing topic, you may have an issue indexing to
>>> Elasticsearch.
>>>
>>> If you find an issue during step 1 - your issue is with kafka or sending
>>> to kafka.  If the issue is in step 2, look at the snort storm topology in
>>> more detail and the storm logs.  If the issue is in step 3, look at the
>>> enrichment topology/storm logs, if the issue is in step 4 look at
>>> kibana/elasticsearch UIs and logs.  If everything looks good after this
>>> exercise, I would point the finger at your kibana dashboards, at least
>>> initially.
>>>
>>> Jon
>>>
>>> On Wed, Nov 1, 2017 at 4:17 AM Syed Hammad Tahir <ms...@itu.edu.pk>
>>> wrote:
>>>
>>>> How do I make these messages sent to kafka producer show in kibana
>>>> dashboard  or any metron related UI
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> On Tue, Oct 31, 2017 at 12:50 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> OK, so finally the message is going (the formatted one) but I still
>>>>> can't see it in the Kibana dashboard under snort's label. I had stopped all the
>>>>> stub sensors from monit before doing it. What am I doing wrong here?
>>>>>
>>>>> On Tue, Oct 31, 2017 at 11:28 AM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> same thing even when I send a formatted message.
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>> On Tue, Oct 31, 2017 at 10:57 AM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> I sent a random message to that kafka topic and got this
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> I guess this is because I am not following the format of message I
>>>>>>> should send? Like those snort logs you showed.
>>>>>>>
>>>>>>> On Mon, Oct 30, 2017 at 5:24 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> They need to meet the format of the logs I sent earlier.  Look into
>>>>>>>> the snort output options - may require you rerun snort, depending on your
>>>>>>>> situation
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>> On Mon, Oct 30, 2017, 06:53 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Yes, I have converted them to text but those logs are simply
>>>>>>>>> captured packet headers over the local network. Now I just push them via
>>>>>>>>> that kafka producer command under topic name of snort and they will be
>>>>>>>>> visible in metron?
>>>>>>>>>
>>>>>>>>> On Mon, Oct 30, 2017 at 2:41 PM, Zeolla@GMail.com <
>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> You need text logs. Here's an example of some properly formatted
>>>>>>>>>> logs -
>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>> On Mon, Oct 30, 2017, 01:34 Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> I have found kafka-console-producer.sh, but I need to know
>>>>>>>>>>> how to make it read the snort.log (tcpdump format) file. Maybe I am missing
>>>>>>>>>>> something in plain sight, but it would be awesome if you could tell me.
>>>>>>>>>>>
>>>>>>>>>>> Regards.
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Oct 27, 2017 at 5:09 PM, Zeolla@GMail.com <
>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> On the 25th I said:
>>>>>>>>>>>>
>>>>>>>>>>>>      It should be in /usr/hdp/current/kafka-broker/bin/ or
>>>>>>>>>>>> similar (from memory) on node1, assuming you are running full dev.
>>>>>>>>>>>>
>>>>>>>>>>>>      Jon
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Oct 27, 2017 at 6:25 AM Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> snort logs are in tcp dump format. I may have to convert them.
>>>>>>>>>>>>>
>>>>>>>>>>>>> bin/kafka-console-producer.sh --broker-list localhost:9092
>>>>>>>>>>>>> --topic test
>>>>>>>>>>>>>
>>>>>>>>>>>>> How do I give a file name or path in this command?
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Fri, Oct 27, 2017 at 2:53 PM, Zeolla@GMail.com <
>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> If you have text snort logs you can use Apache nifi or the
>>>>>>>>>>>>>> Kafka producer script as described in step 4 here[1] to push them to
>>>>>>>>>>>>>> Metron's snort topic.  You may also want to look at this [2].
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 1: https://kafka.apache.org/quickstart
>>>>>>>>>>>>>> 2: https://stackoverflow.com/questions/38701179/kafka-console-producer-and-bash-script
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Fri, Oct 27, 2017, 02:15 Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hello everyone,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I have run snort independently over vagrant ssh and dumped the
>>>>>>>>>>>>>>> logs in tcpdump format. Now I want to bring them into Metron to play with
>>>>>>>>>>>>>>> them a bit. Some of you already replied with solutions, but that is
>>>>>>>>>>>>>>> lost in the inbox somewhere, engulfed by the elasticsearch issue that I
>>>>>>>>>>>>>>> had. Please give me an easy-to-understand solution to this problem.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Regards.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>> --
>>>
>>> Jon
>>>
>>
>> --
>
> Jon
>



--
Thanks & Regards,

Varsha Mordi

Prodevans Technologies LLP.

M: +91 9637109734  *| *L: +91 80 64533365 *|* www.prodevans.com

Re: Snort Logs

Posted by Otto Fowler <ot...@gmail.com>.
You can send REST commands.  For example, we use REST to tell the indexing
service to install the ES templates from ansible.



On November 2, 2017 at 07:33:23, varsha mordi (varsha.prodevans@gmail.com)
wrote:

Is there any alternative to the Ambari UI, other than the Ambari Shell, for
working with service commands?


On Thu, Nov 2, 2017 at 3:43 PM, Zeolla@GMail.com <ze...@gmail.com> wrote:

> If logs are in the indexing topic, you may have an issue indexing to
> Elasticsearch, or the indexing storm topology.  Did you look at kibana /
> elasticsearch head, and the ES and kibana logs?  I'm willing to bet that's
> your issue.
>
> Jon
>
> On Thu, Nov 2, 2017, 04:54 Syed Hammad Tahir <ms...@itu.edu.pk> wrote:
>
>> I can see the logs under all the given topics when used with kafka
>> consumer. If everything worked well, I should look into kibana dashboards
>> where I am unable to see the logs. What should I do in order to see them?
>>
>> On Wed, Nov 1, 2017 at 5:19 PM, Zeolla@GMail.com <ze...@gmail.com>
>> wrote:
>>
>>> If you're sending logs to the snort topic successfully and they aren't
>>> showing up in the UI, odds are you have an issue in the pipeline.  The easy
>>> thing to do would be to look in the storm UI briefly and look for errors.
>>> If you don't have any errors in storm, here's a brief set of steps I would
>>> do:
>>>
>>> 1.  Connect to the snort topic as a consumer and make sure the logs you
>>> are sending are properly getting published
>>> 2.  If logs are getting to the snort topic, look at the enrichments
>>> topic for the same thing (the logs will look slightly different, as they
>>> have now been parsed)
>>> 3.  If logs are in enrichments, look at the indexing topic.
>>> 4.  If logs are in the indexing topic, you may have an issue indexing to
>>> Elasticsearch.
>>>
>>> If you find an issue during step 1 - your issue is with kafka or sending
>>> to kafka.  If the issue is in step 2, look at the snort storm topology in
>>> more detail and the storm logs.  If the issue is in step 3, look at the
>>> enrichment topology/storm logs, if the issue is in step 4 look at
>>> kibana/elasticsearch UIs and logs.  If everything looks good after this
>>> exercise, I would point the finger at your kibana dashboards, at least
>>> initially.
>>>
>>> Jon
>>>
>>> On Wed, Nov 1, 2017 at 4:17 AM Syed Hammad Tahir <ms...@itu.edu.pk>
>>> wrote:
>>>
>>>> How do I make these messages sent to kafka producer show in kibana
>>>> dashboard  or any metron related UI
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> On Tue, Oct 31, 2017 at 12:50 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> OK, so finally the message is going (the formatted one) but I still
>>>>> can't see it in the Kibana dashboard under snort's label. I had stopped all the
>>>>> stub sensors from monit before doing it. What am I doing wrong here?
>>>>>
>>>>> On Tue, Oct 31, 2017 at 11:28 AM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> same thing even when I send a formatted message.
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>> On Tue, Oct 31, 2017 at 10:57 AM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> I sent a random message to that kafka topic and got this
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> I guess this is because I am not following the format of message I
>>>>>>> should send? Like those snort logs you showed.
>>>>>>>
>>>>>>> On Mon, Oct 30, 2017 at 5:24 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> They need to meet the format of the logs I sent earlier.  Look into
>>>>>>>> the snort output options - may require you rerun snort, depending on your
>>>>>>>> situation
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>> On Mon, Oct 30, 2017, 06:53 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Yes, I have converted them to text but those logs are simply
>>>>>>>>> captured packet headers over the local network. Now I just push them via
>>>>>>>>> that kafka producer command under topic name of snort and they will be
>>>>>>>>> visible in metron?
>>>>>>>>>
>>>>>>>>> On Mon, Oct 30, 2017 at 2:41 PM, Zeolla@GMail.com <
>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> You need text logs. Here's an example of some properly formatted
>>>>>>>>>> logs -
>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>> On Mon, Oct 30, 2017, 01:34 Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> I have found kafka-console-producer.sh, but I need to know
>>>>>>>>>>> how to make it read the snort.log (tcpdump format) file. Maybe I am missing
>>>>>>>>>>> something in plain sight, but it would be awesome if you could tell me.
>>>>>>>>>>>
>>>>>>>>>>> Regards.
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Oct 27, 2017 at 5:09 PM, Zeolla@GMail.com <
>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> On the 25th I said:
>>>>>>>>>>>>
>>>>>>>>>>>>      It should be in /usr/hdp/current/kafka-broker/bin/ or
>>>>>>>>>>>> similar (from memory) on node1, assuming you are running full dev.
>>>>>>>>>>>>
>>>>>>>>>>>>      Jon
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Oct 27, 2017 at 6:25 AM Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> snort logs are in tcp dump format. I may have to convert them.
>>>>>>>>>>>>>
>>>>>>>>>>>>> bin/kafka-console-producer.sh --broker-list localhost:9092
>>>>>>>>>>>>> --topic test
>>>>>>>>>>>>>
>>>>>>>>>>>>> How do I give a file name or path in this command?
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Fri, Oct 27, 2017 at 2:53 PM, Zeolla@GMail.com <
>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> If you have text snort logs you can use Apache nifi or the
>>>>>>>>>>>>>> Kafka producer script as described in step 4 here[1] to push them to
>>>>>>>>>>>>>> Metron's snort topic.  You may also want to look at this [2].
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 1: https://kafka.apache.org/quickstart
>>>>>>>>>>>>>> 2: https://stackoverflow.com/questions/38701179/kafka-console-producer-and-bash-script
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Fri, Oct 27, 2017, 02:15 Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hello everyone,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I have run snort independently over vagrant ssh and dumped the
>>>>>>>>>>>>>>> logs in tcpdump format. Now I want to bring them into Metron to play with
>>>>>>>>>>>>>>> them a bit. Some of you already replied with solutions, but that is
>>>>>>>>>>>>>>> lost in the inbox somewhere, engulfed by the elasticsearch issue that I
>>>>>>>>>>>>>>> had. Please give me an easy-to-understand solution to this problem.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Regards.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>> --
>>>
>>> Jon
>>>
>>
>> --
>
> Jon
>



--
Thanks & Regards,

Varsha Mordi

Prodevans Technologies LLP.

M: +91 9637109734  *| *L: +91 80 64533365 *|* www.prodevans.com

Re: Snort Logs

Posted by varsha mordi <va...@gmail.com>.
Is there any alternative to the Ambari UI, other than the Ambari Shell, for
working with service commands?


On Thu, Nov 2, 2017 at 3:43 PM, Zeolla@GMail.com <ze...@gmail.com> wrote:

> If logs are in the indexing topic, you may have an issue indexing to
> Elasticsearch, or the indexing storm topology.  Did you look at kibana /
> elasticsearch head, and the ES and kibana logs?  I'm willing to bet that's
> your issue.
>
> Jon
>
> On Thu, Nov 2, 2017, 04:54 Syed Hammad Tahir <ms...@itu.edu.pk> wrote:
>
>> I can see the logs under all the given topics when used with kafka
>> consumer. If everything worked well, I should look into kibana dashboards
>> where I am unable to see the logs. What should I do in order to see them?
>>
>> On Wed, Nov 1, 2017 at 5:19 PM, Zeolla@GMail.com <ze...@gmail.com>
>> wrote:
>>
>>> If you're sending logs to the snort topic successfully and they aren't
>>> showing up in the UI, odds are you have an issue in the pipeline.  The easy
>>> thing to do would be to look in the storm UI briefly and look for errors.
>>> If you don't have any errors in storm, here's a brief set of steps I would
>>> do:
>>>
>>> 1.  Connect to the snort topic as a consumer and make sure the logs you
>>> are sending are properly getting published
>>> 2.  If logs are getting to the snort topic, look at the enrichments
>>> topic for the same thing (the logs will look slightly different, as they
>>> have now been parsed)
>>> 3.  If logs are in enrichments, look at the indexing topic.
>>> 4.  If logs are in the indexing topic, you may have an issue indexing to
>>> Elasticsearch.
>>>
>>> If you find an issue during step 1 - your issue is with kafka or sending
>>> to kafka.  If the issue is in step 2, look at the snort storm topology in
>>> more detail and the storm logs.  If the issue is in step 3, look at the
>>> enrichment topology/storm logs, if the issue is in step 4 look at
>>> kibana/elasticsearch UIs and logs.  If everything looks good after this
>>> exercise, I would point the finger at your kibana dashboards, at least
>>> initially.
>>>
>>> Jon
>>>
>>> On Wed, Nov 1, 2017 at 4:17 AM Syed Hammad Tahir <ms...@itu.edu.pk>
>>> wrote:
>>>
>>>> How do I make these messages sent to the kafka producer show up in the
>>>> kibana dashboard or any other metron-related UI?
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> On Tue, Oct 31, 2017 at 12:50 PM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> OK, so finally the message is going (the formatted one), but I still
>>>>> can't see it in the kibana dashboard under snort's label. I had stopped all
>>>>> the stub sensors from monit before doing it. What am I doing wrong here?
>>>>>
>>>>> On Tue, Oct 31, 2017 at 11:28 AM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> Same thing, even when I send a formatted message.
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>> On Tue, Oct 31, 2017 at 10:57 AM, Syed Hammad Tahir <
>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>
>>>>>>> I sent a random message to that kafka topic and got this
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> I guess this is because I am not following the format of the message
>>>>>>> I should send? Like those snort logs you showed.
>>>>>>>
>>>>>>> On Mon, Oct 30, 2017 at 5:24 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> They need to meet the format of the logs I sent earlier.  Look into
>>>>>>>> the snort output options - may require you rerun snort, depending on your
>>>>>>>> situation
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>> On Mon, Oct 30, 2017, 06:53 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Yes, I have converted them to text, but those logs are simply
>>>>>>>>> captured packet headers over the local network. Now I just push them via
>>>>>>>>> that kafka producer command under the topic name of snort and they will
>>>>>>>>> be visible in metron?
>>>>>>>>>
>>>>>>>>> On Mon, Oct 30, 2017 at 2:41 PM, Zeolla@GMail.com <
>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> You need text logs. Here's an example of some properly formatted
>>>>>>>>>> logs -
>>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
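A hedged sketch of how text alerts like the linked snort.out might be produced from the tcpdump-format log discussed above (snort 2.x flags; the config path and log file name are assumptions, adjust to your setup):

```shell
# Sketch, not authoritative: build the snort command that re-reads a binary
# (tcpdump-format) log and prints text alerts to stdout, which can then be
# redirected to a file and pushed to kafka. Paths are assumptions.
SNORT_CONF=${SNORT_CONF:-/etc/snort/snort.conf}
replay_cmd="snort -c $SNORT_CONF -r snort.log -A console"
echo "$replay_cmd"   # run this manually, e.g. appending: > snort.out
```

Whether `-A console` yields exactly the stub format depends on your snort output configuration; compare the result against the linked snort.out before publishing it to the topic.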
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>> On Mon, Oct 30, 2017, 01:34 Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> I have found kafka-console-producer.sh, but I need to know how
>>>>>>>>>>> to make it read the snort.log (tcpdump format) file. Maybe I am missing
>>>>>>>>>>> something in plain sight, but it would be awesome if you could tell me.
>>>>>>>>>>>
>>>>>>>>>>> Regards.
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Oct 27, 2017 at 5:09 PM, Zeolla@GMail.com <
>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> On the 25th I said:
>>>>>>>>>>>>
>>>>>>>>>>>>      It should be in /usr/hdp/current/kafka-broker/bin/ or
>>>>>>>>>>>> similar (from memory) on node1, assuming you are running full dev.
>>>>>>>>>>>>
>>>>>>>>>>>>      Jon
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Oct 27, 2017 at 6:25 AM Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> snort logs are in tcp dump format. I may have to convert them.
>>>>>>>>>>>>>
>>>>>>>>>>>>> bin/kafka-console-producer.sh --broker-list localhost:9092
>>>>>>>>>>>>> --topic test
>>>>>>>>>>>>>
>>>>>>>>>>>>> How to give file name or path in this command?
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Fri, Oct 27, 2017 at 2:53 PM, Zeolla@GMail.com <
>>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> If you have text snort logs you can use Apache NiFi or the
>>>>>>>>>>>>>> Kafka producer script as described in step 4 here[1] to push them to
>>>>>>>>>>>>>> Metron's snort topic.  You may also want to look at this [2].
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 1: https://kafka.apache.org/quickstart
>>>>>>>>>>>>>> 2: https://stackoverflow.com/questions/38701179/kafka-console-producer-and-bash-script
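Putting those pointers together, one hedged way to feed a text log file to Metron's snort topic on the full-dev box (the broker address, bin path, and file name are assumptions):

```shell
# Sketch under assumptions: HDP kafka bin dir, broker at node1:6667, and a
# text log file named snort.out. Each line of the file becomes one kafka
# message on the "snort" topic via the console producer's stdin.
KAFKA_BIN=${KAFKA_BIN:-/usr/hdp/current/kafka-broker/bin}
BROKER=${BROKER:-node1:6667}
produce_cmd() {
  # Print the command to run manually on node1.
  echo "cat $1 | $KAFKA_BIN/kafka-console-producer.sh --broker-list $BROKER --topic snort"
}
produce_cmd snort.out
```

The same stdin-redirection trick is what the StackOverflow link above describes; nifi would replace the `cat` with a TailFile/PublishKafka flow.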
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Fri, Oct 27, 2017, 02:15 Syed Hammad Tahir <
>>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hello everyone,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I have run snort independently on vagrant ssh and dumped the
>>>>>>>>>>>>>>> logs in tcpdump format. Now I want to bring them to metron to play with
>>>>>>>>>>>>>>> them a bit. Some of you already replied to me with solutions, but those
>>>>>>>>>>>>>>> are lost in the inbox somewhere, engulfed by the elasticsearch issue
>>>>>>>>>>>>>>> that I had. Please give me an easy-to-understand solution to this problem.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Regards.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>> --
>>>
>>> Jon
>>>
>>
>> --
>
> Jon
>



-- 
Thanks & Regards,

Varsha Mordi

Prodevans Technologies LLP.

M: +91 9637109734 | L: +91 80 64533365 | www.prodevans.com

Re: Snort Logs

Posted by "Zeolla@GMail.com" <ze...@gmail.com>.
If logs are in the indexing topic, you may have an issue indexing to
Elasticsearch, or the indexing storm topology.  Did you look at kibana /
elasticsearch head, and the ES and kibana logs?  I'm willing to bet that's
your issue.

Jon

On Thu, Nov 2, 2017, 04:54 Syed Hammad Tahir <ms...@itu.edu.pk> wrote:

> I can see the logs under all the given topics when used with the kafka
> consumer. If everything worked well, I should look into the kibana dashboards,
> where I am unable to see the logs. What should I do in order to see them?
>
> On Wed, Nov 1, 2017 at 5:19 PM, Zeolla@GMail.com <ze...@gmail.com> wrote:
>
>> If you're sending logs to the snort topic successfully and they aren't
>> showing up in the UI, odds are you have an issue in the pipeline.  The easy
>> thing to do would be to look in the storm UI briefly and look for errors.
>> If you don't have any errors in storm, here's a brief set of steps I would
>> do:
>>
>> 1.  Connect to the snort topic as a consumer and make sure the logs you
>> are sending are properly getting published
>> 2.  If logs are getting to the snort topic, look at the enrichments topic
>> for the same thing (the logs will look slightly different, as they have now
>> been parsed)
>> 3.  If logs are in enrichments, look at the indexing topic.
>> 4.  If logs are in the indexing topic, you may have an issue indexing to
>> Elasticsearch.
>>
>> If you find an issue during step 1 - your issue is with kafka or sending
>> to kafka.  If the issue is in step 2, look at the snort storm topology in
>> more detail and the storm logs.  If the issue is in step 3, look at the
>> enrichment topology/storm logs, if the issue is in step 4 look at
>> kibana/elasticsearch UIs and logs.  If everything looks good after this
>> exercise, I would point the finger at your kibana dashboards, at least
>> initially.
>>
>> Jon
>>
>> On Wed, Nov 1, 2017 at 4:17 AM Syed Hammad Tahir <ms...@itu.edu.pk>
>> wrote:
>>
>>> How do I make these messages sent to the kafka producer show up in the
>>> kibana dashboard or any other metron-related UI?
>>>
>>> [image: Inline image 1]
>>>
>>> On Tue, Oct 31, 2017 at 12:50 PM, Syed Hammad Tahir <
>>> mscs16059@itu.edu.pk> wrote:
>>>
>>>> OK, so finally the message is going (the formatted one), but I still
>>>> can't see it in the kibana dashboard under snort's label. I had stopped all
>>>> the stub sensors from monit before doing it. What am I doing wrong here?
>>>>
>>>> On Tue, Oct 31, 2017 at 11:28 AM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> Same thing, even when I send a formatted message.
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> On Tue, Oct 31, 2017 at 10:57 AM, Syed Hammad Tahir <
>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>
>>>>>> I sent a random message to that kafka topic and got this
>>>>>>
>>>>>> [image: Inline image 1]
>>>>>>
>>>>>> I guess this is because I am not following the format of the message
>>>>>> I should send? Like those snort logs you showed.
>>>>>>
>>>>>> On Mon, Oct 30, 2017 at 5:24 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> They need to meet the format of the logs I sent earlier.  Look into
>>>>>>> the snort output options - may require you rerun snort, depending on your
>>>>>>> situation
>>>>>>>
>>>>>>> Jon
>>>>>>>
>>>>>>> On Mon, Oct 30, 2017, 06:53 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Yes, I have converted them to text, but those logs are simply
>>>>>>>> captured packet headers over the local network. Now I just push them via
>>>>>>>> that kafka producer command under the topic name of snort and they will
>>>>>>>> be visible in metron?
>>>>>>>>
>>>>>>>> On Mon, Oct 30, 2017 at 2:41 PM, Zeolla@GMail.com <zeolla@gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> You need text logs. Here's an example of some properly formatted
>>>>>>>>> logs -
>>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>> On Mon, Oct 30, 2017, 01:34 Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> I have found kafka-console-producer.sh, but I need to know how
>>>>>>>>>> to make it read the snort.log (tcpdump format) file. Maybe I am missing
>>>>>>>>>> something in plain sight, but it would be awesome if you could tell me.
>>>>>>>>>>
>>>>>>>>>> Regards.
>>>>>>>>>>
>>>>>>>>>> On Fri, Oct 27, 2017 at 5:09 PM, Zeolla@GMail.com <
>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> On the 25th I said:
>>>>>>>>>>>
>>>>>>>>>>>      It should be in /usr/hdp/current/kafka-broker/bin/ or
>>>>>>>>>>> similar (from memory) on node1, assuming you are running full dev.
>>>>>>>>>>>
>>>>>>>>>>>      Jon
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Jon
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Oct 27, 2017 at 6:25 AM Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> snort logs are in tcp dump format. I may have to convert them.
>>>>>>>>>>>>
>>>>>>>>>>>> bin/kafka-console-producer.sh --broker-list localhost:9092
>>>>>>>>>>>> --topic test
>>>>>>>>>>>>
>>>>>>>>>>>> How to give file name or path in this command?
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Oct 27, 2017 at 2:53 PM, Zeolla@GMail.com <
>>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> If you have text snort logs you can use Apache NiFi or the
>>>>>>>>>>>>> Kafka producer script as described in step 4 here[1] to push them to
>>>>>>>>>>>>> Metron's snort topic.  You may also want to look at this [2].
>>>>>>>>>>>>>
>>>>>>>>>>>>> 1: https://kafka.apache.org/quickstart
>>>>>>>>>>>>> 2:
>>>>>>>>>>>>> https://stackoverflow.com/questions/38701179/kafka-console-producer-and-bash-script
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Fri, Oct 27, 2017, 02:15 Syed Hammad Tahir <
>>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hello everyone,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I have run snort independently on vagrant ssh and dumped the
>>>>>>>>>>>>>> logs in tcpdump format. Now I want to bring them to metron to play with
>>>>>>>>>>>>>> them a bit. Some of you already replied to me with solutions, but those
>>>>>>>>>>>>>> are lost in the inbox somewhere, engulfed by the elasticsearch issue
>>>>>>>>>>>>>> that I had. Please give me an easy-to-understand solution to this problem.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Regards.
>>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jon
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>
>>>>>>>>>>> Jon
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>
>>>>>>> Jon
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>> --
>>
>> Jon
>>
>
> --

Jon

Re: Snort Logs

Posted by Syed Hammad Tahir <ms...@itu.edu.pk>.
I can see the logs under all the given topics when used with the kafka
consumer. If everything worked well, I should look into the kibana dashboards,
where I am unable to see the logs. What should I do in order to see them?

On Wed, Nov 1, 2017 at 5:19 PM, Zeolla@GMail.com <ze...@gmail.com> wrote:

> If you're sending logs to the snort topic successfully and they aren't
> showing up in the UI, odds are you have an issue in the pipeline.  The easy
> thing to do would be to look in the storm UI briefly and look for errors.
> If you don't have any errors in storm, here's a brief set of steps I would
> do:
>
> 1.  Connect to the snort topic as a consumer and make sure the logs you
> are sending are properly getting published
> 2.  If logs are getting to the snort topic, look at the enrichments topic
> for the same thing (the logs will look slightly different, as they have now
> been parsed)
> 3.  If logs are in enrichments, look at the indexing topic.
> 4.  If logs are in the indexing topic, you may have an issue indexing to
> Elasticsearch.
>
> If you find an issue during step 1 - your issue is with kafka or sending
> to kafka.  If the issue is in step 2, look at the snort storm topology in
> more detail and the storm logs.  If the issue is in step 3, look at the
> enrichment topology/storm logs, if the issue is in step 4 look at
> kibana/elasticsearch UIs and logs.  If everything looks good after this
> exercise, I would point the finger at your kibana dashboards, at least
> initially.
>
> Jon
>
> On Wed, Nov 1, 2017 at 4:17 AM Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> How do I make these messages sent to the kafka producer show up in the
>> kibana dashboard or any other metron-related UI?
>>
>> [image: Inline image 1]
>>
>> On Tue, Oct 31, 2017 at 12:50 PM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>> > wrote:
>>
>>> OK, so finally the message is going (the formatted one), but I still
>>> can't see it in the kibana dashboard under snort's label. I had stopped all
>>> the stub sensors from monit before doing it. What am I doing wrong here?
>>>
>>> On Tue, Oct 31, 2017 at 11:28 AM, Syed Hammad Tahir <
>>> mscs16059@itu.edu.pk> wrote:
>>>
>>>> Same thing, even when I send a formatted message.
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> On Tue, Oct 31, 2017 at 10:57 AM, Syed Hammad Tahir <
>>>> mscs16059@itu.edu.pk> wrote:
>>>>
>>>>> I sent a random message to that kafka topic and got this
>>>>>
>>>>> [image: Inline image 1]
>>>>>
>>>>> I guess this is because I am not following the format of the message
>>>>> I should send? Like those snort logs you showed.
>>>>>
>>>>> On Mon, Oct 30, 2017 at 5:24 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> They need to meet the format of the logs I sent earlier.  Look into
>>>>>> the snort output options - may require you rerun snort, depending on your
>>>>>> situation
>>>>>>
>>>>>> Jon
>>>>>>
>>>>>> On Mon, Oct 30, 2017, 06:53 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>> wrote:
>>>>>>
>>>>>>> Yes, I have converted them to text, but those logs are simply
>>>>>>> captured packet headers over the local network. Now I just push them via
>>>>>>> that kafka producer command under the topic name of snort and they will
>>>>>>> be visible in metron?
>>>>>>>
>>>>>>> On Mon, Oct 30, 2017 at 2:41 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> You need text logs. Here's an example of some properly formatted
>>>>>>>> logs -
>>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>> On Mon, Oct 30, 2017, 01:34 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> I have found kafka-console-producer.sh, but I need to know how
>>>>>>>>> to make it read the snort.log (tcpdump format) file. Maybe I am missing
>>>>>>>>> something in plain sight, but it would be awesome if you could tell me.
>>>>>>>>>
>>>>>>>>> Regards.
>>>>>>>>>
>>>>>>>>> On Fri, Oct 27, 2017 at 5:09 PM, Zeolla@GMail.com <
>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> On the 25th I said:
>>>>>>>>>>
>>>>>>>>>>      It should be in /usr/hdp/current/kafka-broker/bin/ or
>>>>>>>>>> similar (from memory) on node1, assuming you are running full dev.
>>>>>>>>>>
>>>>>>>>>>      Jon
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>> On Fri, Oct 27, 2017 at 6:25 AM Syed Hammad Tahir <
>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>
>>>>>>>>>>> snort logs are in tcp dump format. I may have to convert them.
>>>>>>>>>>>
>>>>>>>>>>> bin/kafka-console-producer.sh --broker-list localhost:9092
>>>>>>>>>>> --topic test
>>>>>>>>>>>
>>>>>>>>>>> How to give file name or path in this command?
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Oct 27, 2017 at 2:53 PM, Zeolla@GMail.com <
>>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> If you have text snort logs you can use Apache NiFi or the
>>>>>>>>>>>> Kafka producer script as described in step 4 here[1] to push them to
>>>>>>>>>>>> Metron's snort topic.  You may also want to look at this [2].
>>>>>>>>>>>>
>>>>>>>>>>>> 1: https://kafka.apache.org/quickstart
>>>>>>>>>>>> 2: https://stackoverflow.com/questions/38701179/kafka-console-producer-and-bash-script
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Oct 27, 2017, 02:15 Syed Hammad Tahir <
>>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hello everyone,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I have run snort independently on vagrant ssh and dumped the
>>>>>>>>>>>>> logs in tcpdump format. Now I want to bring them to metron to play with
>>>>>>>>>>>>> them a bit. Some of you already replied to me with solutions, but those
>>>>>>>>>>>>> are lost in the inbox somewhere, engulfed by the elasticsearch issue
>>>>>>>>>>>>> that I had. Please give me an easy-to-understand solution to this problem.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Regards.
>>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>>
>>>>>>>>>>>> Jon
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> Jon
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>
>>>>>>>> Jon
>>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>
>>>>>> Jon
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>> --
>
> Jon
>

Re: Snort Logs

Posted by "Zeolla@GMail.com" <ze...@gmail.com>.
If you're sending logs to the snort topic successfully and they aren't
showing up in the UI, odds are you have an issue in the pipeline.  The easy
thing to do would be to look in the storm UI briefly and look for errors.
If you don't have any errors in storm, here's a brief set of steps I would
do:

1.  Connect to the snort topic as a consumer and make sure the logs you are
sending are properly getting published
2.  If logs are getting to the snort topic, look at the enrichments topic
for the same thing (the logs will look slightly different, as they have now
been parsed)
3.  If logs are in enrichments, look at the indexing topic.
4.  If logs are in the indexing topic, you may have an issue indexing to
Elasticsearch.

If you find an issue during step 1 - your issue is with kafka or sending to
kafka.  If the issue is in step 2, look at the snort storm topology in more
detail and the storm logs.  If the issue is in step 3, look at the
enrichment topology/storm logs, if the issue is in step 4 look at
kibana/elasticsearch UIs and logs.  If everything looks good after this
exercise, I would point the finger at your kibana dashboards, at least
initially.
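The four checks above can be scripted. A minimal sketch, assuming the full-dev environment (the kafka bin path and ZooKeeper address are assumptions; this only prints the consumer commands to run on node1):

```shell
# Sketch, assuming HDP full-dev: print a console-consumer command for each
# pipeline stage so a few messages per topic can be inspected manually.
KAFKA_BIN=${KAFKA_BIN:-/usr/hdp/current/kafka-broker/bin}
ZK=${ZK:-node1:2181}   # assumption: ZooKeeper address for full-dev
consume_cmd() {
  echo "$KAFKA_BIN/kafka-console-consumer.sh --zookeeper $ZK --topic $1 --max-messages 5"
}
# steps 1-3 of the checklist: snort, then enrichments, then indexing
for topic in snort enrichments indexing; do
  consume_cmd "$topic"
done
```

The stage at which messages stop appearing tells you which storm topology (parser, enrichment, or indexing) to inspect next, per the steps above.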

Jon

On Wed, Nov 1, 2017 at 4:17 AM Syed Hammad Tahir <ms...@itu.edu.pk>
wrote:

> How do I make these messages sent to the kafka producer show up in the
> kibana dashboard or any other metron-related UI?
>
> [image: Inline image 1]
>
> On Tue, Oct 31, 2017 at 12:50 PM, Syed Hammad Tahir <ms...@itu.edu.pk>
> wrote:
>
>> OK, so finally the message is going (the formatted one), but I still
>> can't see it in the kibana dashboard under snort's label. I had stopped all
>> the stub sensors from monit before doing it. What am I doing wrong here?
>>
>> On Tue, Oct 31, 2017 at 11:28 AM, Syed Hammad Tahir <mscs16059@itu.edu.pk
>> > wrote:
>>
>>> Same thing, even when I send a formatted message.
>>>
>>> [image: Inline image 1]
>>>
>>> On Tue, Oct 31, 2017 at 10:57 AM, Syed Hammad Tahir <
>>> mscs16059@itu.edu.pk> wrote:
>>>
>>>> I sent a random message to that kafka topic and got this
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> I guess this is because I am not following the format of the message
>>>> I should send? Like those snort logs you showed.
>>>>
>>>> On Mon, Oct 30, 2017 at 5:24 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>> wrote:
>>>>
>>>>> They need to meet the format of the logs I sent earlier.  Look into
>>>>> the snort output options - may require you rerun snort, depending on your
>>>>> situation
>>>>>
>>>>> Jon
>>>>>
>>>>> On Mon, Oct 30, 2017, 06:53 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>> wrote:
>>>>>
>>>>>> Yes, I have converted them to text, but those logs are simply
>>>>>> captured packet headers over the local network. Now I just push them via
>>>>>> that kafka producer command under the topic name of snort and they will
>>>>>> be visible in metron?
>>>>>>
>>>>>> On Mon, Oct 30, 2017 at 2:41 PM, Zeolla@GMail.com <ze...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> You need text logs. Here's an example of some properly formatted
>>>>>>> logs -
>>>>>>> https://raw.githubusercontent.com/apache/metron/master/metron-deployment/roles/sensor-stubs/files/snort.out
>>>>>>>
>>>>>>> Jon
>>>>>>>
>>>>>>> On Mon, Oct 30, 2017, 01:34 Syed Hammad Tahir <ms...@itu.edu.pk>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> I have found kafka-console-producer.sh, but I need to know how
>>>>>>>> to make it read the snort.log (tcpdump format) file. Maybe I am missing
>>>>>>>> something in plain sight, but it would be awesome if you could tell me.
>>>>>>>>
>>>>>>>> Regards.
>>>>>>>>
>>>>>>>> On Fri, Oct 27, 2017 at 5:09 PM, Zeolla@GMail.com <zeolla@gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> On the 25th I said:
>>>>>>>>>
>>>>>>>>>      It should be in /usr/hdp/current/kafka-broker/bin/ or similar
>>>>>>>>> (from memory) on node1, assuming you are running full dev.
>>>>>>>>>
>>>>>>>>>      Jon
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>> On Fri, Oct 27, 2017 at 6:25 AM Syed Hammad Tahir <
>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>
>>>>>>>>>> snort logs are in tcp dump format. I may have to convert them.
>>>>>>>>>>
>>>>>>>>>> bin/kafka-console-producer.sh --broker-list localhost:9092
>>>>>>>>>> --topic test
>>>>>>>>>>
>>>>>>>>>> How to give file name or path in this command?
>>>>>>>>>>
>>>>>>>>>> On Fri, Oct 27, 2017 at 2:53 PM, Zeolla@GMail.com <
>>>>>>>>>> zeolla@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> If you have text snort logs you can use Apache NiFi or the Kafka
>>>>>>>>>>> producer script as described in step 4 here[1] to push them to Metron's
>>>>>>>>>>> snort topic.  You may also want to look at this [2].
>>>>>>>>>>>
>>>>>>>>>>> 1: https://kafka.apache.org/quickstart
>>>>>>>>>>> 2:
>>>>>>>>>>> https://stackoverflow.com/questions/38701179/kafka-console-producer-and-bash-script
>>>>>>>>>>>
>>>>>>>>>>> Jon
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Oct 27, 2017, 02:15 Syed Hammad Tahir <
>>>>>>>>>>> mscs16059@itu.edu.pk> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hello everyone,
>>>>>>>>>>>>
>>>>>>>>>>>> I have run snort independently on vagrant ssh and dumped the
>>>>>>>>>>>> logs in tcpdump format. Now I want to bring them to metron to play with
>>>>>>>>>>>> them a bit. Some of you already replied to me with solutions, but those
>>>>>>>>>>>> are lost in the inbox somewhere, engulfed by the elasticsearch issue
>>>>>>>>>>>> that I had. Please give me an easy-to-understand solution to this problem.
>>>>>>>>>>>>
>>>>>>>>>>>> Regards.
>>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>>
>>>>>>>>>>> Jon
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>
>>>>>>>>> Jon
>>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>
>>>>>>> Jon
>>>>>>>
>>>>>>
>>>>>> --
>>>>>
>>>>> Jon
>>>>>
>>>>
>>>>
>>>
>>
> --

Jon