Posted to dev@nifi.apache.org by "Uwe@Moosheimer.com" <Uw...@Moosheimer.com> on 2016/04/23 15:37:22 UTC

Questions on logging and metrics

Hi,

I don't know if this is the right mailing list to ask, so please correct me if I'm wrong.

While developing my first couple of processors, I wondered why there is no central log processor.

I'm thinking about a "central log processor" which receives all logging events from the ProcessorLog class.
If any processor uses one of the log methods (getLogger().info(), getLogger().error(), ...), the message is transferred to the log processor if one is available - otherwise to the standard log.
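
For illustration, this is roughly how processor logging looks today - a minimal sketch using the standard AbstractProcessor API; the processor, relationship and messages are just placeholders:

    import java.util.Collections;
    import java.util.Set;

    import org.apache.nifi.flowfile.FlowFile;
    import org.apache.nifi.logging.ProcessorLog;
    import org.apache.nifi.processor.AbstractProcessor;
    import org.apache.nifi.processor.ProcessContext;
    import org.apache.nifi.processor.ProcessSession;
    import org.apache.nifi.processor.Relationship;

    public class ExampleProcessor extends AbstractProcessor {

        static final Relationship REL_SUCCESS = new Relationship.Builder()
                .name("success")
                .description("Successfully processed flow files")
                .build();

        @Override
        public Set<Relationship> getRelationships() {
            return Collections.singleton(REL_SUCCESS);
        }

        @Override
        public void onTrigger(final ProcessContext context, final ProcessSession session) {
            final ProcessorLog logger = getLogger();

            final FlowFile flowFile = session.get();
            if (flowFile == null) {
                return;
            }

            // Today these messages go to the NiFi application log via logback;
            // the idea above is to also route them to a central log processor if one exists.
            logger.info("Processing {}", new Object[]{flowFile});

            session.transfer(flowFile, REL_SUCCESS);
        }
    }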

Of course, the log processor should only be placed on the canvas once.

The advantage of the log processor would be an easy-to-use way to handle all log messages in one place. The "central log processor" could be used like any other processor. By providing a success relationship you would be able to send the log messages to Kafka, Cassandra, Elasticsearch, InfluxDB etc., or do anything else with them.

Every other processor in use could have a standard logging property with these options:
- global setting
- Trace
- Debug
- Info
- Warn
- Error
The default would be "global setting", which means that the log level is defined by the central log processor.
If the central log processor is set to Error, only getLogger().error() messages are processed.
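
Purely to illustrate the proposal, such a per-processor property might be declared along these lines. Only PropertyDescriptor and AllowableValue are existing NiFi classes; the property itself, the "global setting" option and the central log processor are hypothetical:

    import org.apache.nifi.components.AllowableValue;
    import org.apache.nifi.components.PropertyDescriptor;

    public final class LogLevelProperty {

        // Hypothetical "defer to the central log processor" option.
        public static final AllowableValue GLOBAL_SETTING =
                new AllowableValue("global", "Global setting",
                        "Use the log level configured on the central log processor");

        // Hypothetical standard property every processor could expose.
        public static final PropertyDescriptor LOG_LEVEL = new PropertyDescriptor.Builder()
                .name("Log Level")
                .description("Minimum level of messages forwarded to the central log processor")
                .allowableValues(GLOBAL_SETTING,
                        new AllowableValue("TRACE", "Trace"),
                        new AllowableValue("DEBUG", "Debug"),
                        new AllowableValue("INFO", "Info"),
                        new AllowableValue("WARN", "Warn"),
                        new AllowableValue("ERROR", "Error"))
                .defaultValue(GLOBAL_SETTING.getValue())
                .required(true)
                .build();

        private LogLevelProperty() {
        }
    }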

It would also be possible to change the log level either globally or only for selected processors. So you could have a global setting of "error" and individual settings of "info" on some processors.

Another central processor could be a "central metrics processor", used for sending metrics in the same way. There could be a Metrics class used via a call like Metrics.send(k, v) to send metrics to the central processor.
The Metrics class could automatically add more information such as template name, processor name, task, timestamp etc. Again, a success relationship could be used to send the metrics to Elasticsearch, Graphite, InfluxDB etc., or do anything else with them.
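
To make the idea concrete, the hypothetical Metrics helper might look roughly like this; none of these classes or method names exist in NiFi today, this is only a sketch of the proposed API:

    import java.time.Instant;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Sketch of the proposed helper for the "central metrics processor".
    public final class Metrics {

        private Metrics() {
        }

        // Proposed entry point: Metrics.send(k, v).
        public static void send(final String key, final Object value) {
            final Map<String, Object> metric = new LinkedHashMap<>();
            metric.put("key", key);
            metric.put("value", value);

            // The helper would enrich the metric automatically, e.g. with
            // template name, processor name, task and timestamp.
            metric.put("timestamp", Instant.now().toString());

            deliver(metric);
        }

        private static void deliver(final Map<String, Object> metric) {
            // In the proposal this would hand the metric to the central metrics
            // processor on the canvas if one exists, otherwise fall back to the log.
            System.out.println(metric);
        }
    }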

So my questions are:
- am I wrong with my ideas?
- are there any similar plans?
- are there existing solutions I'm not aware of?

As mentioned above, I don't know whether this is the right mailing list to post my ideas to, and I apologize if I'm wrong and wasting your time.

Best Regards,
Uwe

Re: Questions on logging and metrics

Posted by "Uwe@Moosheimer.com" <Uw...@Moosheimer.com>.
Joe,

thanks for the positive response.
Please let me know if I can help somehow.  

Thanks
Uwe

Best regards
Kay-Uwe Moosheimer
> On 23.04.2016 at 16:44, Joe Witt <jo...@gmail.com> wrote:
> 
> Uwe
> 
> This is a perfectly fine mailing list - thanks for joining the
> discussion and you bring up some interesting points.  We could have a
> logging appender and relationship with that processor that allow this
> to work cleanly.  Need to think on that more.
> 
> Thanks
> Joe


Re: Questions on logging and metrics

Posted by Joe Witt <jo...@gmail.com>.
Uwe

This is a perfectly fine mailing list - thanks for joining the
discussion and you bring up some interesting points.  We could have a
logging appender and relationship with that processor that allow this
to work cleanly.  Need to think on that more.

Thanks
Joe
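
One way to read "logging appender" here: NiFi's framework logging goes through logback, so a custom appender could capture log events and hand them to whatever central log processor ends up existing. A minimal sketch under that assumption - the hand-off queue and class name are made up for illustration, only the logback API is real:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    import ch.qos.logback.classic.spi.ILoggingEvent;
    import ch.qos.logback.core.AppenderBase;

    // Sketch of a logback appender that could feed a central log processor.
    public class CentralLogAppender extends AppenderBase<ILoggingEvent> {

        // Hypothetical hand-off point; a real implementation would need a way for
        // the central log processor on the canvas to drain this queue.
        private static final BlockingQueue<ILoggingEvent> QUEUE = new LinkedBlockingQueue<>(10000);

        @Override
        protected void append(final ILoggingEvent event) {
            // Drop the event rather than block logging if the consumer cannot keep up.
            QUEUE.offer(event);
        }

        public static BlockingQueue<ILoggingEvent> getQueue() {
            return QUEUE;
        }
    }

Such an appender would be registered in conf/logback.xml like any other appender; how log levels and per-processor settings would interact with it still needs to be worked out.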
