Posted to log4j-dev@logging.apache.org by Gary Gregory <ga...@gmail.com> on 2016/09/24 22:13:53 UTC

In memory appender

Hi All,

I can't believe it, but through a convoluted use-case, I actually need an
in-memory list appender, very much like our test-only ListAppender.

The requirement is as follows.

We have a JDBC driver and matching proprietary database that specializes in
data virtualization of mainframe resources like DB2, VSAM, IMS, and all
sorts of non-SQL data sources (
http://www.rocketsoftware.com/products/rocket-data/rocket-data-virtualization
)

The high-level requirement is to merge the driver log into the server's log
for full end-to-end traceability and debugging.

When the driver is running on the z/OS mainframe, it can be configured with
a z/OS specific Appender that can talk to the server log module directly.

When the driver is running elsewhere, it can talk to the database via a
Syslog socket Appender. This requires more setup on the server side and
for the server to do special magic to know how the incoming log events
match up with server operations. Tricky.

The customer should also be able to configure the driver such that anytime
the driver communicates to the database, it sends along whatever log events
have accumulated since the last client-server roundtrip. This allows the
server to match exactly the connection and operations the client performed
with the server's own logging.

In order to do that I need to buffer all log events in an Appender and when
it's time, I need to get the list of events and reset the appender to a new
empty list so events can keep accumulating.

My proposal is to turn our ListAppender into such an appender. For
sanity, the appender could be configured with various sizing policies,
sketched after this list:

- open: the list grows unbounded
- closed: the list grows to a given size and _new_ events are dropped on
the floor beyond that
- latest: the list grows to a given size and _old_ events are dropped on
the floor beyond that

Thoughts?

Gary

-- 
E-Mail: garydgregory@gmail.com | ggregory@apache.org
Java Persistence with Hibernate, Second Edition
<http://www.manning.com/bauer3/>
JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
Spring Batch in Action <http://www.manning.com/templier/>
Blog: http://garygregory.wordpress.com
Home: http://garygregory.com/
Tweet! http://twitter.com/GaryGregory

Re: In memory appender

Posted by Remko Popma <re...@gmail.com>.
You could use that too, but you will need to build more infrastructure.

What Chronicle offers is that you can have independent reader and writer threads that essentially talk to each other via a persisted chunk of memory. Chronicle takes care that these threads don't step on each other's toes: the reader cannot read records that haven't been completely written etc. 
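
Roughly, the two sides look like this (a sketch against the newer Chronicle
Queue API; the 2016-era Java-Chronicle library linked below spells some of
these calls differently):

import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.ExcerptTailer;

public class ChronicleSketch {
    public static void main(String[] args) {
        try (ChronicleQueue queue = ChronicleQueue.singleBuilder("driver-log").build()) {
            // Writer thread: append serialized events to the memory mapped file.
            ExcerptAppender appender = queue.acquireAppender();
            appender.writeText("serialized log event");

            // Reader thread (or a separate process): drain at its own pace.
            ExcerptTailer tailer = queue.createTailer();
            String event;
            while ((event = tailer.readText()) != null) {
                // send the event to the server
            }
        }
    }
}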

It can also support recovery scenarios when the reader crashes and is resumed later. That should be of interest to your company. Finally, the Chronicle guys have started a company so in addition to it being open source you can get paid support for it, which is always nice to have in your back pocket...

Sent from my iPhone

> On 2016/09/27, at 9:04, Gary Gregory <ga...@gmail.com> wrote:
> 
> oh... what about our own http://logging.apache.org/log4j/2.x/manual/appenders.html#MemoryMappedFileAppender
> 
> ?
> 
> Gary
> 
>> On Mon, Sep 26, 2016 at 4:59 PM, Remko Popma <re...@gmail.com> wrote:
>> In addition to the Flume based solution, here is another alternative idea: use Peter Lawrey's Chronicle[1] library to store log events in a memory mapped file. 
>> 
>> The appender can just keep adding events without worrying about overflowing the memory. 
>> 
>> The client that reads from this file can be in a separate thread (even a separate process by the way) and can read as much as it wants, and send it to the server. 
>> 
>> Serialization: You can either serialize log events to the target format before storing them in Chronicle (so you have binary blobs in each Chronicle excerpt), and the client reads these blobs and sends them to the server as is. Or you can use the Chronicle Log4j2 appender[2] to store the events in Chronicle format. The tests[3] show how to read LogEvent objects from the memory mapped file, and the client would be responsible for serializing these log events to the target format before sending data to the server.
>> 
>> [1]: https://github.com/peter-lawrey/Java-Chronicle
>> [2]: https://github.com/OpenHFT/Chronicle-Logger
>> [3]: https://github.com/OpenHFT/Chronicle-Logger/blob/master/logger-log4j-2/src/test/java/net/openhft/chronicle/logger/log4j2/Log4j2IndexedChronicleTest.java
>> 
>> Remko
>> 
>> Sent from my iPhone
>> 
>>> On 2016/09/27, at 5:57, Gary Gregory <ga...@gmail.com> wrote:
>>> 
>>> Please allow me to restate the use case I have for the CollectionAppender, which is separate from any Flume-based or Syslog-based solution (use cases I also have). Well, I have a Syslog use case, and whether or not Flume is in the picture will really be a larger discussion in my organization due to the requirement to run a Flume Agent.
>>> 
>>> A program (like a JDBC driver already using Log4j) communicates with another (like a DBMS, not written in Java). The client and server communicate over a proprietary socket protocol. The client sends a list of buffers (in one go) to the server to perform one or more operations. One kind of buffer this protocol defines is a log buffer (where each log event is serialized in a non-Java format.) This allows each communication from the client to the server to say "This is what's happened up to now". What the server does with the log buffers is not important for this discussion.
>>> 
>>> What is important to note is that the log buffer and other buffers go to the server in one BLOB, which is why I cannot (in this use case) send log events by themselves anywhere.
>>> 
>>> I see that something (a CollectionAppender) must collect log events until the client is ready to serialize them and send them to the server. Once the events are drained out of the Appender (in one go by just getting the collection), events can collect in a new collection. A synchronous drain operation would create a new collection and return the old one.
>>> 
>>> The question becomes: What kind of temporary location can the client use to buffer log events until drain time? A Log4j Appender is a natural place to collect log events since the driver uses Log4j. The driver will make it its business to drain the appender and work with the events at the right time. I am thinking that the Log4j Appender part is generic enough for inclusion in Log4j.
>>> 
>>> Further thoughts?
>>> 
>>> Thank you all for reading this far!
>>> Gary
>>> 
>>>> On Sun, Sep 25, 2016 at 1:20 PM, Ralph Goers <ra...@dslextreme.com> wrote:
>>>> I guess I am not understanding your use case quite correctly. I am thinking you have a driver that is logging and you want those logs delivered to some other location to actually be written.  If that is your use case then the driver needs a log4j2.xml that configures the FlumeAppender with either the memory or file channel (depending on your needs) and points to the server(s) that is/are to receive the events. The FlumeAppender handles sending them in batches with whatever size you want (but will send them in smaller amounts if they are in the channel too long). Of course you would need the log4j-flume and flume jars. So on the driver side you wouldn’t need to write anything, just configure the appender and make sure the jars are there.
>>>> 
>>>> For the server that receives them you would also need Flume. Normally this would be a standalone component, but it really wouldn’t be hard to incorporate it into some other application. The only thing you would have to write would be the sink that writes the events to the database or whatever. To incorporate it into an application you would have to look at the main() method of Flume and convert that to be a thread that you kick off.
>>>> 
>>>> Ralph
>>>> 
>>>> 
>>>> 
>>>>> On Sep 25, 2016, at 12:01 PM, Gary Gregory <ga...@gmail.com> wrote:
>>>>> 
>>>>> Hi Ralph,
>>>>> 
>>>>> Thanks for your feedback. Flume is great in the scenarios that do not involve sending a log buffer from the driver itself.
>>>>> 
>>>>> I can't require a Flume Agent to be running 'on the side' for the use case where the driver chains a log buffer at the end of the train of database IO buffers. For completeness talking about this Flume scenario, if I read you right, I would also need to write a custom Flume sink, which would also be in memory, until the driver is ready to drain it. Or, I could query some other 'safe' and 'reliable' Flume sink that the driver could then drain of events when it needs to.
>>>>> 
>>>>> Narrowing down on the use case where the driver chains a log buffer at the end of the train of database IO buffers, I think I'll have to see about converting the Log4j ListAppender into a more robust and flexible version. I think I'll call it a CollectionAppender and allow various Collection implementations to be plugged in.
>>>>> 
>>>>> Gary
>>>>> 
>>>>> Gary
>>>>> 
>>>>>> On Sat, Sep 24, 2016 at 3:44 PM, Ralph Goers <ra...@dslextreme.com> wrote:
>>>>>> If you are buffering events in memory you run the risk of losing events if something should fail. 
>>>>>> 
>>>>>> That said, if I had your requirements I would use the FlumeAppender. It has either an in-memory option to buffer as you are suggesting or it can write to a local file to prevent data loss if that is a requirement. It already has the configuration options you are looking for and has been well tested. The only downside is that you need to have either a Flume instance receiving the messages or something that can receive Flume events over Avro, but it is easier just to use Flume and write a custom sink to do what you want with the data.
>>>>>> 
>>>>>> Ralph
>>>>>> 
>>>>>>> On Sep 24, 2016, at 3:13 PM, Gary Gregory <ga...@gmail.com> wrote:
>>>>>>> 
>>>>>>> Hi All,
>>>>>>> 
>>>>>>> I can't believe it, but through a convoluted use-case, I actually need an in-memory list appender, very much like our test-only ListAppender.
>>>>>>> 
>>>>>>> The requirement is as follows.
>>>>>>> 
>>>>>>> We have a JDBC driver and matching proprietary database that specializes in data virtualization of mainframe resources like DB2, VSAM, IMS, and all sorts of non-SQL data sources (http://www.rocketsoftware.com/products/rocket-data/rocket-data-virtualization) 
>>>>>>> 
>>>>>>> The high-level requirement is to merge the driver log into the server's log for full end-to-end traceability and debugging.
>>>>>>> 
>>>>>>> When the driver is running on the z/OS mainframe, it can be configured with a z/OS specific Appender that can talk to the server log module directly.
>>>>>>> 
>>>>>>> When the driver is running elsewhere, it can talk to the database via a Syslog socket Appender. This requires more setup on the server side and for the server to do special magic to know how the incoming log events match up with server operations. Tricky.
>>>>>>> 
>>>>>>> The customer should also be able to configure the driver such that anytime the driver communicates to the database, it sends along whatever log events have accumulated since the last client-server roundtrip. This allows the server to match exactly the connection and operations the client performed with the server's own logging.
>>>>>>> 
>>>>>>> In order to do that I need to buffer all log events in an Appender and when it's time, I need to get the list of events and reset the appender to a new empty list so events can keep accumulating.
>>>>>>> 
>>>>>>> My proposal is to turn our ListAppender into such an appender. For sanity, the appender could be configured with various sizing policies:
>>>>>>> 
>>>>>>> - open: the list grows unbounded
>>>>>>> - closed: the list grows to a given size and _new_ events are dropped on the floor beyond that
>>>>>>> - latest: the list grows to a given size and _old_ events are dropped on the floor beyond that
>>>>>>> 
>>>>>>> Thoughts?
>>>>>>> 
>>>>>>> Gary
>>>>>>> 
>>>>>>> -- 
>>>>>>> E-Mail: garydgregory@gmail.com | ggregory@apache.org 
>>>>>>> Java Persistence with Hibernate, Second Edition
>>>>>>> JUnit in Action, Second Edition
>>>>>>> Spring Batch in Action
>>>>>>> Blog: http://garygregory.wordpress.com 
>>>>>>> Home: http://garygregory.com/
>>>>>>> Tweet! http://twitter.com/GaryGregory
>>>>> 
>>>>> 
>>>>> 
>>>>> -- 
>>>>> E-Mail: garydgregory@gmail.com | ggregory@apache.org 
>>>>> Java Persistence with Hibernate, Second Edition
>>>>> JUnit in Action, Second Edition
>>>>> Spring Batch in Action
>>>>> Blog: http://garygregory.wordpress.com 
>>>>> Home: http://garygregory.com/
>>>>> Tweet! http://twitter.com/GaryGregory
>>> 
>>> 
>>> 
>>> -- 
>>> E-Mail: garydgregory@gmail.com | ggregory@apache.org 
>>> Java Persistence with Hibernate, Second Edition
>>> JUnit in Action, Second Edition
>>> Spring Batch in Action
>>> Blog: http://garygregory.wordpress.com 
>>> Home: http://garygregory.com/
>>> Tweet! http://twitter.com/GaryGregory
> 
> 
> 
> -- 
> E-Mail: garydgregory@gmail.com | ggregory@apache.org 
> Java Persistence with Hibernate, Second Edition
> JUnit in Action, Second Edition
> Spring Batch in Action
> Blog: http://garygregory.wordpress.com 
> Home: http://garygregory.com/
> Tweet! http://twitter.com/GaryGregory

Re: In memory appender

Posted by Ralph Goers <ra...@dslextreme.com>.
Chronicle seems similar to Apache Ignite in that it is a distributed cache, except that Ignite looks like it does a lot more. It does implement a distributed queue: http://apacheignite.gridgain.org/v1.1/docs/queue-and-set

Ralph

> On Sep 26, 2016, at 5:21 PM, Ralph Goers <ra...@dslextreme.com> wrote:
> 
> I thought you didn’t want to write to a file?
> 
> The Chronicle stuff Remko is linking to is also worth exploring. 
> 
> Ralph
> 
> 
> 
>> On Sep 26, 2016, at 5:04 PM, Gary Gregory <garydgregory@gmail.com> wrote:
>> 
>> oh... what about our own http://logging.apache.org/log4j/2.x/manual/appenders.html#MemoryMappedFileAppender
>> 
>> ?
>> 
>> Gary
>> 
>> On Mon, Sep 26, 2016 at 4:59 PM, Remko Popma <remko.popma@gmail.com> wrote:
>> In addition to the Flume based solution, here is another alternative idea: use Peter Lawrey's Chronicle[1] library to store log events in a memory mapped file. 
>> 
>> The appender can just keep adding events without worrying about overflowing the memory. 
>> 
>> The client that reads from this file can be in a separate thread (even a separate process by the way) and can read as much as it wants, and send it to the server. 
>> 
>> Serialization: You can either serialize log events to the target format before storing them in Chronicle (so you have binary blobs in each Chronicle excerpt), and the client reads these blobs and sends them to the server as is. Or you can use the Chronicle Log4j2 appender[2] to store the events in Chronicle format. The tests[3] show how to read LogEvent objects from the memory mapped file, and the client would be responsible for serializing these log events to the target format before sending data to the server.
>> 
>> [1]: https://github.com/peter-lawrey/Java-Chronicle
>> [2]: https://github.com/OpenHFT/Chronicle-Logger
>> [3]: https://github.com/OpenHFT/Chronicle-Logger/blob/master/logger-log4j-2/src/test/java/net/openhft/chronicle/logger/log4j2/Log4j2IndexedChronicleTest.java
>> 
>> Remko
>> 
>> Sent from my iPhone
>> 
>> On 2016/09/27, at 5:57, Gary Gregory <garydgregory@gmail.com> wrote:
>> 
>>> Please allow me to restate the use case I have for the CollectionAppender, which is separate from any Flume-based or Syslog-based solution (use cases I also have). Well, I have a Syslog use case, and whether or not Flume is in the picture will really be a larger discussion in my organization due to the requirement to run a Flume Agent.
>>> 
>>> A program (like a JDBC driver already using Log4j) communicates with another (like a DBMS, not written in Java). The client and server communicate over a proprietary socket protocol. The client sends a list of buffers (in one go) to the server to perform one or more operations. One kind of buffer this protocol defines is a log buffer (where each log event is serialized in a non-Java format.) This allows each communication from the client to the server to say "This is what's happened up to now". What the server does with the log buffers is not important for this discussion.
>>> 
>>> What is important to note is that the log buffer and other buffers go to the server in one BLOB, which is why I cannot (in this use case) send log events by themselves anywhere.
>>> 
>>> I see that something (a CollectionAppender) must collect log events until the client is ready to serialize them and send them to the server. Once the events are drained out of the Appender (in one go by just getting the collection), events can collect in a new collection. A synchronous drain operation would create a new collection and return the old one.
>>> 
>>> The question becomes: What kind of temporary location can the client use to buffer log events until drain time? A Log4j Appender is a natural place to collect log events since the driver uses Log4j. The driver will make it its business to drain the appender and work with the events at the right time. I am thinking that the Log4j Appender part is generic enough for inclusion in Log4j.
>>> 
>>> Further thoughts?
>>> 
>>> Thank you all for reading this far!
>>> Gary
>>> 
>>> On Sun, Sep 25, 2016 at 1:20 PM, Ralph Goers <ralph.goers@dslextreme.com> wrote:
>>> I guess I am not understanding your use case quite correctly. I am thinking you have a driver that is logging and you want those logs delivered to some other location to actually be written.  If that is your use case then the driver needs a log4j2.xml that configures the FlumeAppender with either the memory or file channel (depending on your needs) and points to the server(s) that is/are to receive the events. The FlumeAppender handles sending them in batches with whatever size you want (but will send them in smaller amounts if they are in the channel too long). Of course you would need the log4j-flume and flume jars. So on the driver side you wouldn’t need to write anything, just configure the appender and make sure the jars are there.
>>> 
>>> For the server that receives them you would also need Flume. Normally this would be a standalone component, but it really wouldn’t be hard to incorporate it into some other application. The only thing you would have to write would be the sink that writes the events to the database or whatever. To incorporate it into an application you would have to look at the main() method of Flume and convert that to be a thread that you kick off.
>>> 
>>> Ralph
>>> 
>>> 
>>> 
>>>> On Sep 25, 2016, at 12:01 PM, Gary Gregory <garydgregory@gmail.com> wrote:
>>>> 
>>>> Hi Ralph,
>>>> 
>>>> Thanks for your feedback. Flume is great in the scenarios that do not involve sending a log buffer from the driver itself.
>>>> 
>>>> I can't require a Flume Agent to be running 'on the side' for the use case where the driver chains a log buffer at the end of the train of database IO buffers. For completeness talking about this Flume scenario, if I read you right, I would also need to write a custom Flume sink, which would also be in memory, until the driver is ready to drain it. Or, I could query some other 'safe' and 'reliable' Flume sink that the driver could then drain of events when it needs to.
>>>> 
>>>> Narrowing down on the use case where the driver chains a log buffer at the end of the train of database IO buffers, I think I'll have to see about converting the Log4j ListAppender into a more robust and flexible version. I think I'll call it a CollectionAppender and allow various Collection implementations to be plugged in.
>>>> 
>>>> Gary
>>>> 
>>>> Gary
>>>> 
>>>> On Sat, Sep 24, 2016 at 3:44 PM, Ralph Goers <ralph.goers@dslextreme.com> wrote:
>>>> If you are buffering events in memory you run the risk of losing events if something should fail. 
>>>> 
>>>> That said, if I had your requirements I would use the FlumeAppender. It has either an in-memory option to buffer as you are suggesting or it can write to a local file to prevent data loss if that is a requirement. It already has the configuration options you are looking for and has been well tested. The only downside is that you need to have either a Flume instance receiving the messages or something that can receive Flume events over Avro, but it is easier just to use Flume and write a custom sink to do what you want with the data.
>>>> 
>>>> Ralph
>>>> 
>>>>> On Sep 24, 2016, at 3:13 PM, Gary Gregory <garydgregory@gmail.com> wrote:
>>>>> 
>>>>> Hi All,
>>>>> 
>>>>> I can't believe it, but through a convoluted use-case, I actually need an in-memory list appender, very much like our test-only ListAppender.
>>>>> 
>>>>> The requirement is as follows.
>>>>> 
>>>>> We have a JDBC driver and matching proprietary database that specializes in data virtualization of mainframe resources like DB2, VSAM, IMS, and all sorts of non-SQL data sources (http://www.rocketsoftware.com/products/rocket-data/rocket-data-virtualization)
>>>>> 
>>>>> The high-level requirement is to merge the driver log into the server's log for full end-to-end traceability and debugging.
>>>>> 
>>>>> When the driver is running on the z/OS mainframe, it can be configured with a z/OS specific Appender that can talk to the server log module directly.
>>>>> 
>>>>> When the driver is running elsewhere, it can talk to the database via a Syslog socket Appender. This requires more setup on the server side and for the server to do special magic to know how the incoming log events match up with server operations. Tricky.
>>>>> 
>>>>> The customer should also be able to configure the driver such that anytime the driver communicates to the database, it sends along whatever log events have accumulated since the last client-server roundtrip. This allows the server to match exactly the connection and operations the client performed with the server's own logging.
>>>>> 
>>>>> In order to do that I need to buffer all log events in an Appender and when it's time, I need to get the list of events and reset the appender to a new empty list so events can keep accumulating.
>>>>> 
>>>>> My proposal is to turn our ListAppender into such an appender. For sanity, the appender could be configured with various sizing policies:
>>>>> 
>>>>> - open: the list grows unbounded
>>>>> - closed: the list grows to a given size and _new_ events are dropped on the floor beyond that
>>>>> - latest: the list grows to a given size and _old_ events are dropped on the floor beyond that
>>>>> 
>>>>> Thoughts?
>>>>> 
>>>>> Gary
>>>>> 
>>>>> -- 
>>>>> E-Mail: garydgregory@gmail.com | ggregory@apache.org
>>>>> Java Persistence with Hibernate, Second Edition <http://www.manning.com/bauer3/>
>>>>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>>>>> Spring Batch in Action <http://www.manning.com/templier/>
>>>>> Blog: http://garygregory.wordpress.com
>>>>> Home: http://garygregory.com/
>>>>> Tweet! http://twitter.com/GaryGregory
>>>> 
>>>> 
>>>> 
>>>> -- 
>>>> E-Mail: garydgregory@gmail.com | ggregory@apache.org
>>>> Java Persistence with Hibernate, Second Edition <http://www.manning.com/bauer3/>
>>>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>>>> Spring Batch in Action <http://www.manning.com/templier/>
>>>> Blog: http://garygregory.wordpress.com
>>>> Home: http://garygregory.com/
>>>> Tweet! http://twitter.com/GaryGregory
>>> 
>>> 
>>> 
>>> -- 
>>> E-Mail: garydgregory@gmail.com | ggregory@apache.org
>>> Java Persistence with Hibernate, Second Edition <http://www.manning.com/bauer3/>
>>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>>> Spring Batch in Action <http://www.manning.com/templier/>
>>> Blog: http://garygregory.wordpress.com
>>> Home: http://garygregory.com/
>>> Tweet! http://twitter.com/GaryGregory
>> 
>> 
>> -- 
>> E-Mail: garydgregory@gmail.com | ggregory@apache.org
>> Java Persistence with Hibernate, Second Edition <http://www.manning.com/bauer3/>
>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>> Spring Batch in Action <http://www.manning.com/templier/>
>> Blog: http://garygregory.wordpress.com
>> Home: http://garygregory.com/
>> Tweet! http://twitter.com/GaryGregory


Re: In memory appender

Posted by Matt Sicker <bo...@gmail.com>.
Hazelcast has a Queue implementation.
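
For example (a rough sketch against the Hazelcast 3.x API; IQueue is a
distributed BlockingQueue, so drainTo() gives you the swap-and-drain
semantics Gary described):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;
import java.util.ArrayList;
import java.util.List;

public class HazelcastQueueSketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IQueue<String> queue = hz.getQueue("driver-log-events");

        queue.offer("serialized log event"); // appender side

        // Drain side: move everything accumulated so far into a local batch.
        List<String> batch = new ArrayList<>();
        queue.drainTo(batch);
    }
}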

On 26 September 2016 at 22:21, Gary Gregory <ga...@gmail.com> wrote:

> The IgniteCache looks richer than both the stock Cache and EhCache for sure:
> https://ignite.apache.org/releases/1.7.0/javadoc/org/apache/ignite/IgniteCache.html
>
> I am not sure I like having to basically use a map with an AtomicLong
> sequence key I need to manage AND THEN sort the map keys when what I really
> want is a List or a Queue. It feels like I have to work extra hard for a
> simpler use case. What I want is a cache that behaves like a queue and not
> like a map. Using JMS is too heavy.
>
> So I am still considering a Collection Appender.
>
> Gary
>
> On Mon, Sep 26, 2016 at 7:55 PM, Ralph Goers <ra...@dslextreme.com>
> wrote:
>
>> Ignite is a JSR 107 cache and has some benefits over ehcache.  Ehcache
>> requires you set preferIPv4Stack to true for it to work.  That might be a
>> problem for your client.
>>
>> Sent from my iPad
>>
>> On Sep 26, 2016, at 7:18 PM, Gary Gregory <ga...@gmail.com> wrote:
>>
>>
>> On Mon, Sep 26, 2016 at 6:10 PM, Gary Gregory <ga...@gmail.com>
>> wrote:
>>
>>> On Mon, Sep 26, 2016 at 6:09 PM, Gary Gregory <ga...@gmail.com>
>>> wrote:
>>>
>>>> On Mon, Sep 26, 2016 at 5:21 PM, Ralph Goers <
>>>> ralph.goers@dslextreme.com> wrote:
>>>>
>>>>> I thought you didn’t want to write to a file?
>>>>>
>>>>
>>>> I do not, but if the buffer is large enough, log events should stay in
>>>> RAM. It is not quite right anyway, because I'd have to interpret the
>>>> contents of the file to turn them back into log events.
>>>>
>>>> I started reading up on the Chronicle appender; thank you Remko for
>>>> pointing it out.
>>>>
>>>> An appender to a cache of objects is really what I want, since I also
>>>> want to be able to evict the cache. TBC...
>>>>
>>>
>>> Like a JSR-107 Appender...
>>>
>>
>> Looking at EHCache and
>> https://ignite.apache.org/jcache/1.0.0/javadoc/javax/cache/Cache.html
>> I can see that a cache is always a kind of map, which leads to the
>> question of what the key should be.
>>
>> A sequence number like we have in the pattern layout seems like a natural
>> choice. I could see a Jsr107Appender that tracks a sequence number as the
>> key. The issue is that the JSR 107 Cache interface leaves the iteration
>> order undefined, which would force a client trying to drain a
>> Jsr107Appender to sort all entries before being able to serialize them.
>> Unless I can find a list-based Cache implementation within EhCache, for
>> example.
>>
>> Gary
>>
>>
>>
>>>
>>> Gary
>>>
>>>>
>>>> Gary
>>>>
>>>>
>>>>> The Chronicle stuff Remko is linking to is also worth exploring.
>>>>>
>>>>> Ralph
>>>>>
>>>>>
>>>>>
>>>>> On Sep 26, 2016, at 5:04 PM, Gary Gregory <ga...@gmail.com>
>>>>> wrote:
>>>>>
>>>>> oh... what about our own
>>>>> http://logging.apache.org/log4j/2.x/manual/appenders.html#MemoryMappedFileAppender
>>>>>
>>>>> ?
>>>>>
>>>>> Gary
>>>>>
>>>>> On Mon, Sep 26, 2016 at 4:59 PM, Remko Popma <re...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> In addition to the Flume based solution, here is another alternative
>>>>>> idea: use Peter Lawrey's Chronicle[1] library to store log events in a
>>>>>> memory mapped file.
>>>>>>
>>>>>> The appender can just keep adding events without worrying about
>>>>>> overflowing the memory.
>>>>>>
>>>>>> The client that reads from this file can be in a separate thread
>>>>>> (even a separate process by the way) and can read as much as it wants, and
>>>>>> send it to the server.
>>>>>>
>>>>>> Serialization: You can either serialize log events to the target
>>>>>> format before storing them in Chronicle (so you have binary blobs in each
>>>>>> Chronicle excerpt), and the client reads these blobs and sends them to the
>>>>>> server as is. Or you can use the Chronicle Log4j2 appender[2] to store the
>>>>>> events in Chronicle format. The tests[3] show how to read LogEvent objects
>>>>>> from the memory mapped file, and the client would be responsible for
>>>>>> serializing these log events to the target format before sending data to
>>>>>> the server.
>>>>>>
>>>>>> [1]: https://github.com/peter-lawrey/Java-Chronicle
>>>>>> [2]: https://github.com/OpenHFT/Chronicle-Logger
>>>>>> [3]: https://github.com/OpenHFT/Chronicle-Logger/blob/master/logger-log4j-2/src/test/java/net/openhft/chronicle/logger/log4j2/Log4j2IndexedChronicleTest.java
>>>>>>
>>>>>> Remko
>>>>>>
>>>>>> Sent from my iPhone
>>>>>>
>>>>>> On 2016/09/27, at 5:57, Gary Gregory <ga...@gmail.com> wrote:
>>>>>>
>>>>>> Please allow me to restate the use case I have for the
>>>>>> CollectionAppender, which is separate from any Flume-based or Syslog-based
>>>>>> solution (use cases I also have). Well, I have a Syslog use case, and
>>>>>> whether or not Flume is in the picture will really be a larger discussion
>>>>>> in my organization due to the requirement to run a Flume Agent.
>>>>>>
>>>>>> A program (like a JDBC driver already using Log4j) communicates with
>>>>>> another (like a DBMS, not written in Java). The client and server
>>>>>> communicate over a proprietary socket protocol. The client sends a list of
>>>>>> buffers (in one go) to the server to perform one or more operations. One
>>>>>> kind of buffer this protocol defines is a log buffer (where each log event
>>>>>> is serialized in a non-Java format.) This allows each communication from
>>>>>> the client to the server to say "This is what's happened up to now". What
>>>>>> the server does with the log buffers is not important for this discussion.
>>>>>>
>>>>>> What is important to note is that the log buffer and other buffers go
>>>>>> to the server in one BLOB, which is why I cannot (in this use case) send
>>>>>> log events by themselves anywhere.
>>>>>>
>>>>>> I see that something (a CollectionAppender) must collect log events
>>>>>> until the client is ready to serialize them and send them to the server.
>>>>>> Once the events are drained out of the Appender (in one go by just getting
>>>>>> the collection), events can collect in a new collection. A synchronous
>>>>>> drain operation would create a new collection and return the old one.
>>>>>>
>>>>>> The question becomes: What kind of temporary location can the client
>>>>>> use to buffer log events until drain time? A Log4j Appender is a natural
>>>>>> place to collect log events since the driver uses Log4j. The driver will
>>>>>> make it its business to drain the appender and work with the events at the
>>>>>> right time. I am thinking that the Log4j Appender part is generic enough
>>>>>> for inclusion in Log4j.
>>>>>>
>>>>>> Further thoughts?
>>>>>>
>>>>>> Thank you all for reading this far!
>>>>>> Gary
>>>>>>
>>>>>> On Sun, Sep 25, 2016 at 1:20 PM, Ralph Goers <
>>>>>> ralph.goers@dslextreme.com> wrote:
>>>>>>
>>>>>>> I guess I am not understanding your use case quite correctly. I am
>>>>>>> thinking you have a driver that is logging and you want those logs
>>>>>>> delivered to some other location to actually be written.  If that is your
>>>>>>> use case then the driver needs a log4j2.xml that configures the
>>>>>>> FlumeAppender with either the memory or file channel (depending on your
>>>>>>> needs) and points to the server(s) that is/are to receive the events. The
>>>>>>> FlumeAppender handles sending them in batches with whatever size you want
>>>>>>> (but will send them in smaller amounts if they are in the channel too
>>>>>>> long). Of course you would need the log4j-flume and flume jars. So on the
>>>>>>> driver side you wouldn’t need to write anything, just configure the
>>>>>>> appender and make sure the jars are there.
>>>>>>>
>>>>>>> For the server that receives them you would also need Flume.
>>>>>>> Normally this would be a standalone component, but it really wouldn’t be
>>>>>>> hard to incorporate it into some other application. The only thing you
>>>>>>> would have to write would be the sink that writes the events to the
>>>>>>> database or whatever. To incorporate it into an application you would have
>>>>>>> to look at the main() method of Flume and convert that to be a thread that
>>>>>>> you kick off.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Sep 25, 2016, at 12:01 PM, Gary Gregory <ga...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> Hi Ralph,
>>>>>>>
>>>>>>> Thanks for your feedback. Flume is great in the scenarios that do
>>>>>>> not involve sending a log buffer from the driver itself.
>>>>>>>
>>>>>>> I can't require a Flume Agent to be running 'on the side' for the
>>>>>>> use case where the driver chains a log buffer at the end of the train of
>>>>>>> database IO buffers. For completeness talking about this Flume scenario, if
>>>>>>> I read you right, I would also need to write a custom Flume sink, which
>>>>>>> would also be in memory, until the driver is ready to drain it. Or, I could
>>>>>>> query some other 'safe' and 'reliable' Flume sink that the driver could
>>>>>>> then drain of events when it needs to.
>>>>>>>
>>>>>>> Narrowing down on the use case where the driver chains a log buffer
>>>>>>> at the end of the train of database IO buffers, I think I'll have to see
>>>>>>> about converting the Log4j ListAppender into a more robust and flexible
>>>>>>> version. I think I'll call it a CollectionAppender and allow various
>>>>>>> Collection implementations to be plugged in.
>>>>>>>
>>>>>>> Gary
>>>>>>>
>>>>>>> Gary
>>>>>>>
>>>>>>> On Sat, Sep 24, 2016 at 3:44 PM, Ralph Goers <
>>>>>>> ralph.goers@dslextreme.com> wrote:
>>>>>>>
>>>>>>>> If you are buffering events in memory you run the risk of losing
>>>>>>>> events if something should fail.
>>>>>>>>
>>>>>>>> That said, if I had your requirements I would use the
>>>>>>>> FlumeAppender. It has either an in-memory option to buffer as you are
>>>>>>>> suggesting or it can write to a local file to prevent data loss if that is
>>>>>>>> a requirement. It already has the configuration options you are looking for
>>>>>>>> and has been well tested. The only downside is that you need to have either
>>>>>>>> a Flume instance receiving the messages or something that can receive
>>>>>>>> Flume events over Avro, but it is easier just to use Flume and write a
>>>>>>>> custom sink to do what you want with the data.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>> On Sep 24, 2016, at 3:13 PM, Gary Gregory <ga...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Hi All,
>>>>>>>>
>>>>>>>> I can't believe it, but through a convoluted use-case, I actually
>>>>>>>> need an in-memory list appender, very much like our test-only ListAppender.
>>>>>>>>
>>>>>>>> The requirement is as follows.
>>>>>>>>
>>>>>>>> We have a JDBC driver and matching proprietary database that
>>>>>>>> specializes in data virtualization of mainframe resources like DB2, VSAM,
>>>>>>>> IMS, and all sorts of non-SQL data sources (
>>>>>>>> http://www.rocketsoftware.com/products/rocket-data/rocket-data-virtualization)
>>>>>>>>
>>>>>>>> The high-level requirement is to merge the driver log into the
>>>>>>>> server's log for full end-to-end traceability and debugging.
>>>>>>>>
>>>>>>>> When the driver is running on the z/OS mainframe, it can be
>>>>>>>> configured with a z/OS specific Appender that can talk to the server log
>>>>>>>> module directly.
>>>>>>>>
>>>>>>>> When the driver is running elsewhere, it can talk to the database
>>>>>>>> via a Syslog socket Appender. This requires more setup on the server side
>>>>>>>> and for the server to do special magic to know how the incoming log events
>>>>>>>> match up with server operations. Tricky.
>>>>>>>>
>>>>>>>> The customer should also be able to configure the driver such that
>>>>>>>> anytime the driver communicates to the database, it sends along whatever
>>>>>>>> log events have accumulated since the last client-server roundtrip. This
>>>>>>>> allows the server to match exactly the connection and operations the client
>>>>>>>> performed with the server's own logging.
>>>>>>>>
>>>>>>>> In order to do that I need to buffer all log events in an Appender
>>>>>>>> and when it's time, I need to get the list of events and reset the appender
>>>>>>>> to a new empty list so events can keep accumulating.
>>>>>>>>
>>>>>>>> My proposal is to turn our ListAppender into such an
>>>>>>>> appender. For sanity, the appender could be configured with various sizing
>>>>>>>> policies:
>>>>>>>>
>>>>>>>> - open: the list grows unbounded
>>>>>>>> - closed: the list grows to a given size and _new_ events are
>>>>>>>> dropped on the floor beyond that
>>>>>>>> - latest: the list grows to a given size and _old_ events are
>>>>>>>> dropped on the floor beyond that
>>>>>>>>
>>>>>>>> Thoughts?
>>>>>>>>
>>>>>>>> Gary
>>>>>>>>
>>>>>>>> --
>>>>>>>> E-Mail: garydgregory@gmail.com | ggregory@apache.org
>>>>>>>> Java Persistence with Hibernate, Second Edition
>>>>>>>> <http://www.manning.com/bauer3/>
>>>>>>>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>>>>>>>> Spring Batch in Action <http://www.manning.com/templier/>
>>>>>>>> Blog: http://garygregory.wordpress.com
>>>>>>>> Home: http://garygregory.com/
>>>>>>>> Tweet! http://twitter.com/GaryGregory
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> E-Mail: garydgregory@gmail.com | ggregory@apache.org
>>>>>>> Java Persistence with Hibernate, Second Edition
>>>>>>> <http://www.manning.com/bauer3/>
>>>>>>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>>>>>>> Spring Batch in Action <http://www.manning.com/templier/>
>>>>>>> Blog: http://garygregory.wordpress.com
>>>>>>> Home: http://garygregory.com/
>>>>>>> Tweet! http://twitter.com/GaryGregory
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> E-Mail: garydgregory@gmail.com | ggregory@apache.org
>>>>>> Java Persistence with Hibernate, Second Edition
>>>>>> <http://www.manning.com/bauer3/>
>>>>>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>>>>>> Spring Batch in Action <http://www.manning.com/templier/>
>>>>>> Blog: http://garygregory.wordpress.com
>>>>>> Home: http://garygregory.com/
>>>>>> Tweet! http://twitter.com/GaryGregory
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> E-Mail: garydgregory@gmail.com | ggregory@apache.org
>>>>> Java Persistence with Hibernate, Second Edition
>>>>> <http://www.manning.com/bauer3/>
>>>>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>>>>> Spring Batch in Action <http://www.manning.com/templier/>
>>>>> Blog: http://garygregory.wordpress.com
>>>>> Home: http://garygregory.com/
>>>>> Tweet! http://twitter.com/GaryGregory
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> E-Mail: garydgregory@gmail.com | ggregory@apache.org
>>>> Java Persistence with Hibernate, Second Edition
>>>> <http://www.manning.com/bauer3/>
>>>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>>>> Spring Batch in Action <http://www.manning.com/templier/>
>>>> Blog: http://garygregory.wordpress.com
>>>> Home: http://garygregory.com/
>>>> Tweet! http://twitter.com/GaryGregory
>>>>
>>>
>>>
>>>
>>> --
>>> E-Mail: garydgregory@gmail.com | ggregory@apache.org
>>> Java Persistence with Hibernate, Second Edition
>>> <http://www.manning.com/bauer3/>
>>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>>> Spring Batch in Action <http://www.manning.com/templier/>
>>> Blog: http://garygregory.wordpress.com
>>> Home: http://garygregory.com/
>>> Tweet! http://twitter.com/GaryGregory
>>>
>>
>>
>>
>> --
>> E-Mail: garydgregory@gmail.com | ggregory@apache.org
>> Java Persistence with Hibernate, Second Edition
>> <http://www.manning.com/bauer3/>
>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>> Spring Batch in Action <http://www.manning.com/templier/>
>> Blog: http://garygregory.wordpress.com
>> Home: http://garygregory.com/
>> Tweet! http://twitter.com/GaryGregory
>>
>>
>
>
> --
> E-Mail: garydgregory@gmail.com | ggregory@apache.org
> Java Persistence with Hibernate, Second Edition
> <http://www.manning.com/bauer3/>
> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
> Spring Batch in Action <http://www.manning.com/templier/>
> Blog: http://garygregory.wordpress.com
> Home: http://garygregory.com/
> Tweet! http://twitter.com/GaryGregory
>



-- 
Matt Sicker <bo...@gmail.com>

Re: In memory appender

Posted by Gary Gregory <ga...@gmail.com>.
On Mon, Sep 26, 2016 at 9:20 PM, Ralph Goers <ra...@dslextreme.com>
wrote:

> Did you look at the Ignite queue API link I sent?
>

Yes, and the neat thing about the IgniteQueue is that it is a Collection,
so you can use it as the backing implementation of a CollectionAppender.

My CollectionAppender experiment uses an optional Script to create its
Collection if you do not want the default implementation.
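
For example (hypothetical wiring; cap = 0 makes the queue unbounded, and
IgniteQueue implements java.util.Collection, so it could be handed to the
CollectionAppender as its backing store):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteQueue;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CollectionConfiguration;

public class IgniteQueueSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteQueue<String> queue =
                ignite.queue("driver-log-events", 0, new CollectionConfiguration());
        queue.add("serialized log event"); // appender side
    }
}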

Gary


> Sent from my iPad
>
> On Sep 26, 2016, at 8:21 PM, Gary Gregory <ga...@gmail.com> wrote:
>
> The IgniteCache looks richer than both the stock Cache and EhCache for sure:
> https://ignite.apache.org/releases/1.7.0/javadoc/org/apache/ignite/IgniteCache.html
>
> I am not sure I like having to basically use a map with an AtomicLong
> sequence key I need to manage AND THEN sort the map keys when what I really
> want is a List or a Queue. It feels like I have to work extra hard for a
> simpler use case. What I want is a cache that behaves like a queue and not
> like a map. Using JMS is too heavy.
>
> So I am still considering a Collection Appender.
>
> Gary
>
> On Mon, Sep 26, 2016 at 7:55 PM, Ralph Goers <ra...@dslextreme.com>
> wrote:
>
>> Ignite is a JSR 107 cache and has some benefits over ehcache.  Ehcache
>> requires you set preferIPv4Stack to true for it to work.  That might be a
>> problem for your client.
>>
>> Sent from my iPad
>>
>> On Sep 26, 2016, at 7:18 PM, Gary Gregory <ga...@gmail.com> wrote:
>>
>>
>> On Mon, Sep 26, 2016 at 6:10 PM, Gary Gregory <ga...@gmail.com>
>> wrote:
>>
>>> On Mon, Sep 26, 2016 at 6:09 PM, Gary Gregory <ga...@gmail.com>
>>> wrote:
>>>
>>>> On Mon, Sep 26, 2016 at 5:21 PM, Ralph Goers <
>>>> ralph.goers@dslextreme.com> wrote:
>>>>
>>>>> I thought you didn’t want to write to a file?
>>>>>
>>>>
>>>> I do not, but if the buffer is large enough, log events should stay in
>>>> RAM. It is not quite right anyway, because I'd have to interpret the
>>>> contents of the file to turn them back into log events.
>>>>
>>>> I started reading up on the Chronicle appender; thank you Remko for
>>>> pointing it out.
>>>>
>>>> An appender to a cache of objects is really what I want, since I also
>>>> want to be able to evict the cache. TBC...
>>>>
>>>
>>> Like a JSR-107 Appender...
>>>
>>
>> Looking at EHCache and
>> https://ignite.apache.org/jcache/1.0.0/javadoc/javax/cache/Cache.html
>> I can see that a cache is always a kind of map, which leads to the
>> question of what the key should be.
>>
>> A sequence number like we have in the pattern layout seems like a natural
>> choice. I could see a Jsr107Appender that tracks a sequence number as the
>> key. The issue is that the JSR 107 Cache interface leaves the iteration
>> order undefined, which would force a client trying to drain a
>> Jsr107Appender to sort all entries before being able to serialize them.
>> Unless I can find a list-based Cache implementation within EhCache, for
>> example.
>>
>> Gary
>>
>>
>>
>>>
>>> Gary
>>>
>>>>
>>>> Gary
>>>>
>>>>
>>>>> The Chronicle stuff Remko is linking to is also worth exploring.
>>>>>
>>>>> Ralph
>>>>>
>>>>>
>>>>>
>>>>> On Sep 26, 2016, at 5:04 PM, Gary Gregory <ga...@gmail.com>
>>>>> wrote:
>>>>>
>>>>> oh... what about our own
>>>>> http://logging.apache.org/log4j/2.x/manual/appenders.html#MemoryMappedFileAppender
>>>>>
>>>>> ?
>>>>>
>>>>> Gary
>>>>>
>>>>> On Mon, Sep 26, 2016 at 4:59 PM, Remko Popma <re...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> In addition to the Flume based solution, here is another alternative
>>>>>> idea: use Peter Lawrey's Chronicle[1] library to store log events in a
>>>>>> memory mapped file.
>>>>>>
>>>>>> The appender can just keep adding events without worrying about
>>>>>> overflowing the memory.
>>>>>>
>>>>>> The client that reads from this file can be in a separate thread
>>>>>> (even a separate process by the way) and can read as much as it wants, and
>>>>>> send it to the server.
>>>>>>
>>>>>> Serialization: You can either serialize log events to the target
>>>>>> format before storing them in Chronicle (so you have binary blobs in each
>>>>>> Chronicle excerpt), and the client reads these blobs and sends them to the
>>>>>> server as is. Or you can use the Chronicle Log4j2 appender[2] to store the
>>>>>> events in Chronicle format. The tests[3] show how to read LogEvent objects
>>>>>> from the memory mapped file, and the client would be responsible for
>>>>>> serializing these log events to the target format before sending data to
>>>>>> the server.
>>>>>>
>>>>>> [1]: https://github.com/peter-lawrey/Java-Chronicle
>>>>>> [2]: https://github.com/OpenHFT/Chronicle-Logger
>>>>>> [3]: https://github.com/OpenHFT/Chronicle-Logger/blob/master/logger-log4j-2/src/test/java/net/openhft/chronicle/logger/log4j2/Log4j2IndexedChronicleTest.java
>>>>>>
>>>>>> Remko
>>>>>>
>>>>>> Sent from my iPhone
>>>>>>
>>>>>> On 2016/09/27, at 5:57, Gary Gregory <ga...@gmail.com> wrote:
>>>>>>
>>>>>> Please allow me to restate the use case I have for the
>>>>>> CollectionAppender, which is separate from any Flume-based or Syslog-based
>>>>>> solution (use cases I also have). Well, I have a Syslog use case, and
>>>>>> whether or not Flume is in the picture will really be a larger discussion
>>>>>> in my organization due to the requirement to run a Flume Agent.
>>>>>>
>>>>>> A program (like a JDBC driver already using Log4j) communicates with
>>>>>> another (like a DBMS, not written in Java). The client and server
>>>>>> communicate over a proprietary socket protocol. The client sends a list of
>>>>>> buffers (in one go) to the server to perform one or more operations. One
>>>>>> kind of buffer this protocol defines is a log buffer (where each log event
>>>>>> is serialized in a non-Java format.) This allows each communication from
>>>>>> the client to the server to say "This is what's happened up to now". What
>>>>>> the server does with the log buffers is not important for this discussion.
>>>>>>
>>>>>> What is important to note is that the log buffer and other buffers go
>>>>>> to the server in one BLOB, which is why I cannot (in this use case) send
>>>>>> log events by themselves anywhere.
>>>>>>
>>>>>> I see that something (a CollectionAppender) must collect log events
>>>>>> until the client is ready to serialize them and send them to the server.
>>>>>> Once the events are drained out of the Appender (in one go by just getting
>>>>>> the collection), events can collect in a new collection. A synchronous
>>>>>> drain operation would create a new collection and return the old one.
>>>>>>
>>>>>> The question becomes: What kind of temporary location can the client
>>>>>> use to buffer log events until drain time? A Log4j Appender is a natural
>>>>>> place to collect log events since the driver uses Log4j. The driver will
>>>>>> make it its business to drain the appender and work with the events at the
>>>>>> right time. I am thinking that the Log4j Appender part is generic enough
>>>>>> for inclusion in Log4j.
>>>>>>
>>>>>> Further thoughts?
>>>>>>
>>>>>> Thank you all for reading this far!
>>>>>> Gary
>>>>>>
>>>>>> On Sun, Sep 25, 2016 at 1:20 PM, Ralph Goers <
>>>>>> ralph.goers@dslextreme.com> wrote:
>>>>>>
>>>>>>> I guess I am not understanding your use case quite correctly. I am
>>>>>>> thinking you have a driver that is logging and you want those logs
>>>>>>> delivered to some other location to actually be written.  If that is your
>>>>>>> use case then the driver needs a log4j2.xml that configures the
>>>>>>> FlumeAppender with either the memory or file channel (depending on your
>>>>>>> needs) and points to the server(s) that is/are to receive the events. The
>>>>>>> FlumeAppender handles sending them in batches with whatever size you want
>>>>>>> (but will send them in smaller amounts if they are in the channel too
>>>>>>> long). Of course you would need the log4j-flume and flume jars. So on the
>>>>>>> driver side you wouldn’t need to write anything, just configure the
>>>>>>> appender and make sure the jars are there.
>>>>>>>
>>>>>>> For the server that receives them you would also need Flume.
>>>>>>> Normally this would be a standalone component, but it really wouldn’t be
>>>>>>> hard to incorporate it into some other application. The only thing you
>>>>>>> would have to write would be the sink that writes the events to the
>>>>>>> database or whatever. To incorporate it into an application you would have
>>>>>>> to look at the main() method of Flume and convert that to be a thread that
>>>>>>> you kick off.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Sep 25, 2016, at 12:01 PM, Gary Gregory <ga...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> Hi Ralph,
>>>>>>>
>>>>>>> Thanks for your feedback. Flume is great in the scenarios that do
>>>>>>> not involve sending a log buffer from the driver itself.
>>>>>>>
>>>>>>> I can't require a Flume Agent to be running 'on the side' for the
>>>>>>> use case where the driver chains a log buffer at the end of the train of
>>>>>>> database IO buffers. For completeness talking about this Flume scenario, if
>>>>>>> I read you right, I would also need to write a custom Flume sink, which
>>>>>>> would also be in memory, until the driver is ready to drain it. Or, I could
>>>>>>> query some other 'safe' and 'reliable' Flume sink that the driver could
>>>>>>> then drain of events when it needs to.
>>>>>>>
>>>>>>> Narrowing down on the use case where the driver chains a log buffer
>>>>>>> at the end of the train of database IO buffers, I think I'll have to see
>>>>>>> about converting the Log4j ListAppender into a more robust and flexible
>>>>>>> version. I think I'll call it a CollectionAppender and allow various
>>>>>>> Collection implementations to be plugged in.
>>>>>>>
>>>>>>> Gary
>>>>>>>
>>>>>>> Gary
>>>>>>>
>>>>>>> On Sat, Sep 24, 2016 at 3:44 PM, Ralph Goers <
>>>>>>> ralph.goers@dslextreme.com> wrote:
>>>>>>>
>>>>>>>> If you are buffering events in memory you run the risk of losing
>>>>>>>> events if something should fail.
>>>>>>>>
>>>>>>>> That said, if I had your requirements I would use the
>>>>>>>> FlumeAppender. It has either an in-memory option to buffer as you are
>>>>>>>> suggesting or it can write to a local file to prevent data loss if that is
>>>>>>>> a requirement. It already has the configuration options you are looking for
>>>>>>>> and has been well tested. The only downside is that you need to have either
>>>>>>>> a Flume instance receiving the messages or something that can receive
>>>>>>>> Flume events over Avro, but it is easier just to use Flume and write a
>>>>>>>> custom sink to do what you want with the data.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>> On Sep 24, 2016, at 3:13 PM, Gary Gregory <ga...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Hi All,
>>>>>>>>
>>>>>>>> I can't believe it, but through a convoluted use-case, I actually
>>>>>>>> need an in-memory list appender, very much like our test-only ListAppender.
>>>>>>>>
>>>>>>>> The requirement is as follows.
>>>>>>>>
>>>>>>>> We have a JDBC driver and matching proprietary database that
>>>>>>>> specializes in data virtualization of mainframe resources like DB2, VSAM,
>>>>>>>> IMS, and all sorts of non-SQL data sources (
>>>>>>>> http://www.rocketsoftware.com/products/rocket-data/rocket-data-virtualization)
>>>>>>>>
>>>>>>>> The high-level requirement is to merge the driver log into the
>>>>>>>> server's log for full end-to-end traceability and debugging.
>>>>>>>>
>>>>>>>> When the driver is running on the z/OS mainframe, it can be
>>>>>>>> configured with a z/OS specific Appender that can talk to the server log
>>>>>>>> module directly.
>>>>>>>>
>>>>>>>> When the driver is running elsewhere, it can talk to the database
>>>>>>>> via a Syslog socket Appender. This requires more setup on the server side
>>>>>>>> and for the server to do special magic to know how the incoming log events
>>>>>>>> match up with server operations. Tricky.
>>>>>>>>
>>>>>>>> The customer should also be able to configure the driver such that
>>>>>>>> anytime the driver communicates to the database, it sends along whatever
>>>>>>>> log events have accumulated since the last client-server roundtrip. This
>>>>>>>> allows the server to match exactly the connection and operations the client
>>>>>>>> performed with the server's own logging.
>>>>>>>>
>>>>>>>> In order to do that I need to buffer all log events in an Appender
>>>>>>>> and when it's time, I need to get the list of events and reset the appender
>>>>>>>> to a new empty list so events can keep accumulating.
>>>>>>>>
>>>>>>>> My proposal is to turn our ListAppender into such an
>>>>>>>> appender. For sanity, the appender could be configured with various sizing
>>>>>>>> policies:
>>>>>>>>
>>>>>>>> - open: the list grows unbounded
>>>>>>>> - closed: the list grows to a given size and _new_ events are
>>>>>>>> dropped on the floor beyond that
>>>>>>>> - latest: the list grows to a given size and _old_ events are
>>>>>>>> dropped on the floor beyond that
>>>>>>>>
>>>>>>>> Thoughts?
>>>>>>>>
>>>>>>>> Gary
>>>>>>>>


-- 
E-Mail: garydgregory@gmail.com | ggregory@apache.org
Java Persistence with Hibernate, Second Edition
<http://www.manning.com/bauer3/>
JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
Spring Batch in Action <http://www.manning.com/templier/>
Blog: http://garygregory.wordpress.com
Home: http://garygregory.com/
Tweet! http://twitter.com/GaryGregory

Re: In memory appender

Posted by Ralph Goers <ra...@dslextreme.com>.
Did you look at the Ignite queue API link I sent?
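
For reference, that queue API gives you a distributed BlockingQueue rather
than a map. A rough sketch (queue name and capacity are illustrative;
capacity 0 means unbounded):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteQueue;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CollectionConfiguration;

public class IgniteQueueSketch {
    public static void main(final String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // IgniteQueue implements java.util.concurrent.BlockingQueue.
            IgniteQueue<String> logBuffer =
                ignite.queue("driverLogBuffer", 0, new CollectionConfiguration());
            logBuffer.add("a serialized log event");
        }
    }
}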

Sent from my iPad

> On Sep 26, 2016, at 8:21 PM, Gary Gregory <ga...@gmail.com> wrote:
> 
> The IgniteCache looks richer than both the stock Cache and EhCache for sure: https://ignite.apache.org/releases/1.7.0/javadoc/org/apache/ignite/IgniteCache.html
> 
> I am not sure I like having to basically use a map with an AtomicLong sequence key I need to manage AND THEN sort the map keys when what I really want is a List or a Queue. It feels like I have to work extra hard for a simpler use case. What I want is a cache that behaves like a queue and not like a map. Using JMS is too heavy. 
> 
> So I am still considering a Collection Appender.
> 
> Gary 
> 
>> On Mon, Sep 26, 2016 at 7:55 PM, Ralph Goers <ra...@dslextreme.com> wrote:
>> Ignite is a JSR 107 cache and has some benefits over ehcache.  Ehcache requires you set preferIPv4Stack to true for it to work.  That might be a problem for your client.
>> 
>> Sent from my iPad
>> 
>>> On Sep 26, 2016, at 7:18 PM, Gary Gregory <ga...@gmail.com> wrote:
>>> 
>>> 
>>>> On Mon, Sep 26, 2016 at 6:10 PM, Gary Gregory <ga...@gmail.com> wrote:
>>>>> On Mon, Sep 26, 2016 at 6:09 PM, Gary Gregory <ga...@gmail.com> wrote:
>>>>>> On Mon, Sep 26, 2016 at 5:21 PM, Ralph Goers <ra...@dslextreme.com> wrote:
>>>>>> I thought you didn’t want to write to a file?
>>>>> 
>>>>> I do not, but if the buffer is large enough, log events should stay in RAM. But it is not quite right anyway because I'd have to interpret the contents of the file to turn them back into log events.
>>>>> 
>>>>> I started reading up on the Chronicle appender; thank you Remko for pointing it out.
>>>>> 
>>>>> An appender to a cache of objects is really what I want since I also want to be able to evict the cache. TBC...
>>>> 
>>>> Like a JSR-107 Appender...
>>> 
>>> Looking at EHCache and https://ignite.apache.org/jcache/1.0.0/javadoc/javax/cache/Cache.html I can see that a cache is always a kind of map, which leads to the question of what the key should be.
>>> 
>>> A sequence number like we have in the pattern layout seems like a natural choice. I could see a Jsr107Appender that tracks a sequence number as the key. The issue is that the JSR107 Cache interface defines the iterator order as undefined, which would force a client trying to drain a Jsr107Appender to sort all entries before being able to serialize them. Unless I can find a list-based Cache implementation within EhCache, for example.
>>> 
>>> Gary
>>> 
>>>  
>>>> 
>>>> Gary
>>>>> 
>>>>> Gary
>>>>> 
>>>>>> 
>>>>>> The Chronicle stuff Remko is linking to is also worth exploring. 
>>>>>> 
>>>>>> Ralph
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> On Sep 26, 2016, at 5:04 PM, Gary Gregory <ga...@gmail.com> wrote:
>>>>>>> 
>>>>>>> oh... what about our own http://logging.apache.org/log4j/2.x/manual/appenders.html#MemoryMappedFileAppender
>>>>>>> 
>>>>>>> ?
>>>>>>> 
>>>>>>> Gary
>>>>>>> 
>>>>>>>> On Mon, Sep 26, 2016 at 4:59 PM, Remko Popma <re...@gmail.com> wrote:
>>>>>>>> In addition to the Flume based solution, here is another alternative idea: use Peter Lawrey's Chronicle[1] library to store log events in a memory mapped file. 
>>>>>>>> 
>>>>>>>> The appender can just keep adding events without worrying about overflowing the memory. 
>>>>>>>> 
>>>>>>>> The client that reads from this file can be in a separate thread (even a separate process by the way) and can read as much as it wants, and send it to the server. 
>>>>>>>> 
>>>>>>>> Serialization: You can either serialize log events to the target format before storing them in Chronicle (so you have binary blobs in each Chronicle excerpt), client reads these blobs and sends them to the server as is. Or you can use the Chronicle Log4j2 appender[2] to store the events in Chronicle format. The tests[3] show how to read LogEvent objects from the memory mapped file, and the client would be responsible for serializing these log events to the target format before sending data to the server. 
>>>>>>>> 
>>>>>>>> [1]: https://github.com/peter-lawrey/Java-Chronicle
>>>>>>>> [2]: https://github.com/OpenHFT/Chronicle-Logger
>>>>>>>> [3]: https://github.com/OpenHFT/Chronicle-Logger/blob/master/logger-log4j-2/src/test/java/net/openhft/chronicle/logger/log4j2/Log4j2IndexedChronicleTest.java
>>>>>>>> 
>>>>>>>> Remko
>>>>>>>> 
>>>>>>>> Sent from my iPhone
>>>>>>>> 
>>>>>>>>> On 2016/09/27, at 5:57, Gary Gregory <ga...@gmail.com> wrote:
>>>>>>>>> 
>>>>>>>>> Please allow me to restate the use case I have for the CollectionAppender, which is separate from any Flume-based or Syslog-based solution; those are use cases I also have. Well, I have a Syslog use case, and whether or not Flume is in the picture will really be a larger discussion in my organization due to the requirement to run a Flume Agent.
>>>>>>>>> 
>>>>>>>>> A program (like a JDBC driver already using Log4j) communicates with another (like a DBMS, not written in Java). The client and server communicate over a proprietary socket protocol. The client sends a list of buffers (in one go) to the server to perform one or more operations. One kind of buffer this protocol defines is a log buffer (where each log event is serialized in a non-Java format.) This allows each communication from the client to the server to say "This is what's happened up to now". What the server does with the log buffers is not important for this discussion.
>>>>>>>>> 
>>>>>>>>> What is important to note is that the log buffer and other buffers go to the server in one BLOB, which is why I cannot (in this use case) send log events by themselves anywhere.
>>>>>>>>> 
>>>>>>>>> I see that something (a CollectionAppender) must collect log events until the client is ready to serialize them and send them to the server. Once the events are drained out of the Appender (in one go by just getting the collection), events can collect in a new collection. A synchronous drain operation would create a new collection and return the old one.
>>>>>>>>> 
>>>>>>>>> The question becomes: What kind of temporary location can the client use to buffer log events until drain time? A Log4j Appender is a natural place to collect log events since the driver uses Log4j. The driver will make it its business to drain the appender and work with the events at the right time. I am thinking that the Log4j Appender part is generic enough for inclusion in Log4j. 
>>>>>>>>> 
>>>>>>>>> Further thoughts?
>>>>>>>>> 
>>>>>>>>> Thank you all for reading this far!
>>>>>>>>> Gary
>>>>>>>>> 
>>>>>>>>>> On Sun, Sep 25, 2016 at 1:20 PM, Ralph Goers <ra...@dslextreme.com> wrote:
>>>>>>>>>> I guess I am not understanding your use case quite correctly. I am thinking you have a driver that is logging and you want those logs delivered to some other location to actually be written.  If that is your use case then the driver needs a log4j2.xml that configures the FlumeAppender with either the memory or file channel (depending on your needs) and points to the server(s) that is/are to receive the events. The FlumeAppender handles sending them in batches with whatever size you want (but will send them in smaller amounts if they are in the channel too long). Of course you would need the log4j-flume and flume jars. So on the driver side you wouldn’t need to write anything, just configure the appender and make sure the jars are there.
>>>>>>>>>> 
>>>>>>>>>> For the server that receives them you would also need Flume. Normally this would be a standalone component, but it really wouldn’t be hard to incorporate it into some other application. The only thing you would have to write would be the sink that writes the events to the database or whatever. To incorporate it into an application you would have to look at the main() method of Flume and convert that to be a thread that you kick off.
>>>>>>>>>> 
>>>>>>>>>> Ralph
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>>> On Sep 25, 2016, at 12:01 PM, Gary Gregory <ga...@gmail.com> wrote:
>>>>>>>>>>> 
>>>>>>>>>>> Hi Ralph,
>>>>>>>>>>> 
>>>>>>>>>>> Thanks for your feedback. Flume is great in the scenarios that do not involve sending a log buffer from the driver itself.
>>>>>>>>>>> 
>>>>>>>>>>> I can't require a Flume Agent to be running 'on the side' for the use case where the driver chains a log buffer at the end of the train of database IO buffers. For completeness, talking about this Flume scenario: if I read you right, I also would need to write a custom Flume sink, which would also be in memory, until the driver is ready to drain it. Or, I could query some other 'safe' and 'reliable' Flume sink that the driver could then drain of events when it needs to.
>>>>>>>>>>> 
>>>>>>>>>>> Narrowing down on the use case where the driver chains a log buffer at the end of the train of database IO buffers, I think I'll have to see about converting the Log4j ListAppender into a more robust and flexible version. I think I'll call it a CollectionAppender and allow various Collection implementations to be plugged in.
>>>>>>>>>>> 
>>>>>>>>>>> Gary
>>>>>>>>>>> 
>>>>>>>>>>> Gary
>>>>>>>>>>> 
>>>>>>>>>>>> On Sat, Sep 24, 2016 at 3:44 PM, Ralph Goers <ra...@dslextreme.com> wrote:
>>>>>>>>>>>> If you are buffering events in memory you run the risk of losing events if something should fail. 
>>>>>>>>>>>> 
>>>>>>>>>>>> That said, if I had your requirements I would use the FlumeAppender. It has either an in-memory option to buffer as you are suggesting or it can write to a local file to prevent data loss if that is a requirement. It already has the configuration options you are looking for and has been well tested. The only downside is that you need to have either a Flume instance receiving the messages or something that can receive Flume events over Avro, but it is easier just to use Flume and write a custom sink to do what you want with the data.
>>>>>>>>>>>> 
>>>>>>>>>>>> Ralph
>>>>>>>>>>>> 
>>>>>>>>>>>>> On Sep 24, 2016, at 3:13 PM, Gary Gregory <ga...@gmail.com> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Hi All,
>>>>>>>>>>>>> 
>>>>>>>>>>>>> I can't believe it, but through a convoluted use-case, I actually need an in-memory list appender, very much like our test-only ListAppender.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> The requirement is as follows.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> We have a JDBC driver and matching proprietary database that specializes in data virtualization of mainframe resources like DB2, VSAM, IMS, and all sorts of non-SQL data sources (http://www.rocketsoftware.com/products/rocket-data/rocket-data-virtualization) 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> The high level requirement is to merge the driver log into the server's log for full end-to-end traceability and debugging.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> When the driver is running on the z/OS mainframe, it can be configured with a z/OS specific Appender that can talk to the server log module directly.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> When the driver is running elsewhere, it can talk to the database via a Syslog socket Appender. This requires more setup on the server side and for the server to do special magic to know how the incoming log events match up with server operations. Tricky.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> The customer should also be able to configure the driver such that anytime the driver communicates to the database, it sends along whatever log events have accumulated since the last client-server roundtrip. This allows the server to match exactly the connection and operations the client performed with the server's own logging.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> In order to do that I need to buffer all log events in an Appender and when it's time, I need to get the list of events and reset the appender to a new empty list so events can keep accumulating.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> My proposal is to turn our ListAppender into such an appender. For sanity, the appender could be configured with various sizing policies:
>>>>>>>>>>>>> 
>>>>>>>>>>>>> - open: the list grows unbounded
>>>>>>>>>>>>> - closed: the list grows to a given size and _new_ events are dropped on the floor beyond that
>>>>>>>>>>>>> - latest: the list grows to a given size and _old_ events are dropped on the floor beyond that
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Thoughts?
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Gary
>>>>>>>>>>>>> 

Re: In memory appender

Posted by Gary Gregory <ga...@gmail.com>.
Here is my issue with Chronicle: in
net.openhft.chronicle.logger.log4j2.AbstractBinaryChronicleAppender.doAppend(LogEvent,
ChronicleLogWriter) you have:

    @Override
    public void doAppend(@NotNull final LogEvent event, @NotNull final
ChronicleLogWriter writer) {
        writer.write(
            toChronicleLogLevel(event.getLevel()),
            event.getTimeMillis(),
            event.getThreadName(),
            event.getLoggerName(),
            event.getMessage().getFormat(),
            event.getThrown(),
            event.getMessage().getParameters()
        );
    }

So for each event, the Log4j log event is serialized into Chronicle's own
representation. Then, from the driver, I'd need a reader to deserialize the
event so I can write it the way I want to. That's three LogEvent IOs per
event instead of one.

Note that there is no marker, no context info and so on, so that's no good.

If I do not want that, then I am back to implementing some low-level
Chronicle guts, and I am back to it not being as simple as a single class
with no dependencies: a CollectionAppender :-(
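
For the record, the drain-and-swap core of that single class is tiny; a
minimal sketch (a hypothetical class, not an existing Log4j appender, with
all the plugin plumbing omitted):

import java.util.ArrayList;
import java.util.List;

import org.apache.logging.log4j.core.LogEvent;

final class EventDrain {
    private List<LogEvent> events = new ArrayList<>();

    synchronized void append(final LogEvent event) {
        // toImmutable() detaches the event from any mutable internals.
        events.add(event.toImmutable());
    }

    /** Swap in a fresh list and return everything collected so far. */
    synchronized List<LogEvent> drain() {
        final List<LogEvent> drained = events;
        events = new ArrayList<>();
        return drained;
    }
}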

Gary

On Mon, Sep 26, 2016 at 8:53 PM, Remko Popma <re...@gmail.com> wrote:

> Chronicle is not a cache, which is good because you don't want a cache if
> I understand your use case.
>
> Chronicle is like an in-memory appender that never runs out of memory.
>
> The existing Chronicle Logger appender (https://github.com/OpenHFT/
> Chronicle-Logger) actually looks like a good fit for your use case.
>
> Sent from my iPhone


-- 
E-Mail: garydgregory@gmail.com | ggregory@apache.org
Java Persistence with Hibernate, Second Edition
<http://www.manning.com/bauer3/>
JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
Spring Batch in Action <http://www.manning.com/templier/>
Blog: http://garygregory.wordpress.com
Home: http://garygregory.com/
Tweet! http://twitter.com/GaryGregory

Re: In memory appender

Posted by Remko Popma <re...@gmail.com>.
Chronicle is not a cache, which is good because you don't want a cache if I understand your use case. 

Chronicle is like an in-memory appender that never runs out of memory. 

The existing Chronicle Logger appender (https://github.com/OpenHFT/Chronicle-Logger) actually looks like a good fit for your use case. 
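
The mechanism underneath, stripped of Chronicle's API, is just a file that
both sides map into memory. A JDK-only illustration of the idea (not
Chronicle's actual interface; the file name and sizes are arbitrary):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;

public class MappedBufferSketch {
    public static void main(final String[] args) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile("events.dat", "rw");
             FileChannel channel = file.getChannel()) {
            // Writer side: append one length-prefixed record.
            MappedByteBuffer writer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 1 << 20);
            byte[] record = "one serialized log event".getBytes(StandardCharsets.UTF_8);
            writer.putInt(record.length).put(record);

            // Reader side: an independent view over the same bytes; this
            // could just as well be another thread or another process.
            MappedByteBuffer reader = channel.map(FileChannel.MapMode.READ_ONLY, 0, 1 << 20);
            byte[] read = new byte[reader.getInt()];
            reader.get(read);
            System.out.println(new String(read, StandardCharsets.UTF_8));
        }
    }
}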

Sent from my iPhone

> On 2016/09/27, at 12:21, Gary Gregory <ga...@gmail.com> wrote:
> 
> The IgniteCache looks richer than both the stock Cache and EhCache for sure: https://ignite.apache.org/releases/1.7.0/javadoc/org/apache/ignite/IgniteCache.html
> 
> I am not sure I like having to basically use a map with an AtomicLong sequence key I need to manage AND THEN sort the map keys when what I really want is a List or a Queue. It feels like I have to work extra hard for a simpler use case. What I want is a cache that behaves like a queue and not like a map. Using JMS is too heavy. 
> 
> So I am still considering a Collection Appender.
> 
> Gary 

Re: In memory appender

Posted by Gary Gregory <ga...@gmail.com>.
The IgniteCache looks richer than both the stock Cache and EhCache for
sure:
https://ignite.apache.org/releases/1.7.0/javadoc/org/apache/ignite/IgniteCache.html

I am not sure I like having to basically use a map with an AtomicLong
sequence key I need to manage AND THEN sort the map keys when what I really
want is a List or a Queue. It feels like I have to work extra hard for a
simpler use case. What I want is a cache that behaves like a queue and not
like a map. Using JMS is too heavy.
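
Concretely, the pattern I am pushing back on looks something like this
(JSR-107 API; String stands in for LogEvent, and the sort-on-drain step is
the extra work I'd rather not do):

import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.atomic.AtomicLong;

import javax.cache.Cache;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;

public class JCacheAsQueueSketch {
    private static final AtomicLong SEQ = new AtomicLong();

    public static void main(final String[] args) {
        Cache<Long, String> cache = Caching.getCachingProvider()
            .getCacheManager()
            .createCache("events", new MutableConfiguration<Long, String>()
                .setTypes(Long.class, String.class));

        cache.put(SEQ.getAndIncrement(), "event 1");
        cache.put(SEQ.getAndIncrement(), "event 2");

        // Drain: iteration order is undefined, so every drain has to sort by key.
        Map<Long, String> ordered = new TreeMap<>();
        for (Cache.Entry<Long, String> entry : cache) {
            ordered.put(entry.getKey(), entry.getValue());
        }
        ordered.keySet().forEach(cache::remove);
        ordered.values().forEach(System.out::println);
    }
}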

So I am still considering a Collection Appender.

Gary

On Mon, Sep 26, 2016 at 7:55 PM, Ralph Goers <ra...@dslextreme.com>
wrote:

> Ignite is a JSR 107 cache and has some benefits over ehcache.  Ehcache
> requires you set preferIPv4Stack to true for it to work.  That might be a
> problem for your client.
>
> Sent from my iPad
>
> On Sep 26, 2016, at 7:18 PM, Gary Gregory <ga...@gmail.com> wrote:
>
>
> On Mon, Sep 26, 2016 at 6:10 PM, Gary Gregory <ga...@gmail.com>
> wrote:
>
>> On Mon, Sep 26, 2016 at 6:09 PM, Gary Gregory <ga...@gmail.com>
>> wrote:
>>
>>> On Mon, Sep 26, 2016 at 5:21 PM, Ralph Goers <ralph.goers@dslextreme.com
>>> > wrote:
>>>
>>>> I thought you didn’t want to write to a file?
>>>>
>>>
>>> I do not but if the buffer is large enough, log events should stay in
>>> RAM. But it is not quite right anyway because I'd have to interpret the
>>> contents of the file to turn back into log events.
>>>
>>> I started reading up on the Chronicle appender; thank you Remko for
>>> pointing it out.
>>>
>>> An appender to a cache of objects is really want I want since I also
>>> want to be able to evict the cache. TBC...
>>>
>>
>> Like a JSR-107 Appender...
>>
>
> Looking at EHCache and https://ignite.apache.org/
> jcache/1.0.0/javadoc/javax/cache/Cache.html I can see that a cache is
> always a kind of map, which leads to what the key should be.
>
> A sequence number like we have in the pattern layout seems like a natural
> choice. I could see a Jsr107Appender that tracks a sequence number as the
> key. The issue is that the JSR107 Cache interface defines the iterator
> order as undefined which would force a client trying to drain a
> Jsr107Appender to sort all entries before being able to serialize them.
> Unless I can find a list-based Cache implementation within EhCache for
> example.
>
> Gary
>
>
>
>>
>> Gary
>>
>>>
>>> Gary
>>>
>>>
>>>> The Chronicle stuff Remko is linking to is also worth exploring.
>>>>
>>>> Ralph
>>>>
>>>>
>>>>
>>>> On Sep 26, 2016, at 5:04 PM, Gary Gregory <ga...@gmail.com>
>>>> wrote:
>>>>
>>>> oh... what about our own http://logging.apache.org/
>>>> log4j/2.x/manual/appenders.html#MemoryMappedFileAppender
>>>>
>>>> ?
>>>>
>>>> Gary
>>>>
>>>> On Mon, Sep 26, 2016 at 4:59 PM, Remko Popma <re...@gmail.com>
>>>> wrote:
>>>>
>>>>> In addition to the Flume based solution, here is another alternative
>>>>> idea: use Peter Lawrey's Chronicle[1] library to store log events in a
>>>>> memory mapped file.
>>>>>
>>>>> The appender can just keep adding events without worrying about
>>>>> overflowing the memory.
>>>>>
>>>>> The client that reads from this file can be in a separate thread (even
>>>>> a separate process by the way) and can read as much as it wants, and send
>>>>> it to the server.
>>>>>
>>>>> Serialization: You can either serialize log events to the target
>>>>> format before storing them in Chronicle (so you have binary blobs in each
>>>>> Chronicle excerpt), client reads these blobs and sends them to the server
>>>>> as is. Or you can use the Chronicle Log4j2 appender[2] to store the events
>>>>> in Chronicle format. The tests[3] show how to read LogEvent objects from
>>>>> the memory mapped file, and the client would be responsible for serializing
>>>>> these log events to the target format before sending data to the server.
>>>>>
>>>>> [1]: https://github.com/peter-lawrey/Java-Chronicle
>>>>> [2]: https://github.com/OpenHFT/Chronicle-Logger
>>>>> [3]: https://github.com/OpenHFT/Chronicle-Logger/blob/master
>>>>> /logger-log4j-2/src/test/java/net/openhft/chronicle/logger/l
>>>>> og4j2/Log4j2IndexedChronicleTest.java
>>>>>
>>>>> Remko
>>>>>
>>>>> Sent from my iPhone
>>>>>
>>>>> On 2016/09/27, at 5:57, Gary Gregory <ga...@gmail.com> wrote:
>>>>>
>>>>> Please allow me to restate the use case I have for the
>>>>> CollectionAppender, which is separate from any Flume-based or Syslog-based
>>>>> solution, use cases I also have. Well, I have a Syslog use case, and
>>>>> whether or not Flume is in the picture will really be a larger discussion
>>>>> in my organization due to the requirement to run a Flume Agent.)
>>>>>
>>>>> A program (like a JDBC driver already using Log4j) communicates with
>>>>> another (like a DBMS, not written in Java). The client and server
>>>>> communicate over a proprietary socket protocol. The client sends a list of
>>>>> buffers (in one go) to the server to perform one or more operations. One
>>>>> kind of buffer this protocol defines is a log buffer (where each log event
>>>>> is serialized in a non-Java format.) This allows each communication from
>>>>> the client to the server to say "This is what's happened up to now". What
>>>>> the server does with the log buffers is not important for this discussion.
>>>>>
>>>>> What is important to note is that the log buffer and other buffers go
>>>>> to the server in one BLOB; which is why I cannot (in this use case) send
>>>>> log events by themselves anywhere.
>>>>>
>>>>> I see that something (a CollectionAppender) must collect log events
>>>>> until the client is ready to serialize them and send them to the server.
>>>>> Once the events are drained out of the Appender (in one go by just getting
>>>>> the collection), events can collect in a new collection. A synchronous
>>>>> drain operation would create a new collection and return the old one.
>>>>>
>>>>> The question becomes: What kind of temporary location can the client
>>>>> use to buffer log event until drain time? A Log4j Appender is a natural
>>>>> place to collect log events since the driver uses Log4j. The driver will
>>>>> make its business to drain the appender and work with the events at the
>>>>> right time. I am thinking that the Log4j Appender part is generic enough
>>>>> for inclusion in Log4j.
>>>>>
>>>>> Further thoughts?
>>>>>
>>>>> Thank you all for reading this far!
>>>>> Gary
>>>>>
>>>>> On Sun, Sep 25, 2016 at 1:20 PM, Ralph Goers <
>>>>> ralph.goers@dslextreme.com> wrote:
>>>>>
>>>>>> I guess I am not understanding your use case quite correctly. I am
>>>>>> thinking you have a driver that is logging and you want those logs
>>>>>> delivered to some other location to actually be written.  If that is your
>>>>>> use case then the driver needs a log4j2.xml that configures the
>>>>>> FlumeAppender with either the memory or file channel (depending on your
>>>>>> needs) and points to the server(s) that is/are to receive the events. The
>>>>>> FlumeAppender handles sending them in batches with whatever size you want
>>>>>> (but will send them in smaller amounts if they are in the channel too
>>>>>> long). Of course you would need the log4j-flume and flume jars. So on the
>>>>>> driver side you wouldn’t need to write anything, just configure the
>>>>>> appender and make sure the jars are there.
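
For illustration, a minimal log4j2.xml along the lines Ralph describes might
look like this (the host and port are placeholders; type="Persistent" would
buffer events in a local file instead of sending them directly):

    <Configuration>
      <Appenders>
        <!-- "Avro" sends events directly to the remote Flume agent(s). -->
        <Flume name="flume" type="Avro" compress="true">
          <Agent host="flume-host.example.com" port="8800"/>
        </Flume>
      </Appenders>
      <Loggers>
        <Root level="debug">
          <AppenderRef ref="flume"/>
        </Root>
      </Loggers>
    </Configuration>
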
>>>>>>
>>>>>> For the server that receives them you would also need Flume. Normally
>>>>>> this would be a standalone component, but it really wouldn’t be hard to
>>>>>> incorporate it into some other application. The only thing you would have
>>>>>> to write would be the sink that writes the events to the database or
>>>>>> whatever. To incorporate it into an application you would have to look at
>>>>>> the main() method of Flume and convert that to be a thread that you kick off.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sep 25, 2016, at 12:01 PM, Gary Gregory <ga...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>> Hi Ralph,
>>>>>>
>>>>>> Thanks for your feedback. Flume is great in the scenarios that do not
>>>>>> involve sending a log buffer from the driver itself.
>>>>>>
>>>>>> I can't require a Flume Agent to be running 'on the side' for the use
>>>>>> case where the driver chains a log buffer at the end of the train of
>>>>>> database IO buffers. For completeness, talking about this Flume scenario:
>>>>>> if I read you right, I would also need to write a custom Flume sink, which
>>>>>> would also be in memory, until the driver is ready to drain it. Or, I could
>>>>>> query some other 'safe' and 'reliable' Flume sink that the driver could
>>>>>> then drain of events when it needs to.
>>>>>>
>>>>>> Narrowing down on the use case where the driver chains a log buffer
>>>>>> at the end of the train of database IO buffers, I think I'll have to see
>>>>>> about converting the Log4j ListAppender into a more robust and flexible
>>>>>> version. I think I'll call it a CollectionAppender and allow various
>>>>>> Collection implementations to be plugged in.
>>>>>>
>>>>>> Gary
>>>>>>
>>>>>> Gary
>>>>>>
>>>>>> On Sat, Sep 24, 2016 at 3:44 PM, Ralph Goers <
>>>>>> ralph.goers@dslextreme.com> wrote:
>>>>>>
>>>>>>> If you are buffering events in memory you run the risk of losing
>>>>>>> events if something should fail.
>>>>>>>
>>>>>>> That said, if I had your requirements I would use the FlumeAppender.
>>>>>>> It has either an in-memory option to buffer as you are suggesting or it can
>>>>>>> write to a local file to prevent data loss if that is a requirement. It
>>>>>>> already has the configuration options you are looking for and has been well
>>>>>>> tested. The only downside is that you need to have either a Flume instance
>>>>>>> receiving the messages or something that can receive Flume events over
>>>>>>> Avro, but it is easier just to use Flume and write a custom sink to do what
>>>>>>> you want with the data.
>>>>>>>
>>>>>>> Ralph
>>>>>>>

Re: In memory appender

Posted by Ralph Goers <ra...@dslextreme.com>.
Ignite is a JSR 107 cache and has some benefits over Ehcache. Ehcache requires you to set preferIPv4Stack to true for it to work. That might be a problem for your client.

Sent from my iPad
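
For reference, preferIPv4Stack is the standard java.net JVM property and is
normally passed on the launch command line (the jar name here is only a
placeholder):

    java -Djava.net.preferIPv4Stack=true -jar driver-app.jar
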


Re: In memory appender

Posted by Gary Gregory <ga...@gmail.com>.
On Mon, Sep 26, 2016 at 6:10 PM, Gary Gregory <ga...@gmail.com>
wrote:

> On Mon, Sep 26, 2016 at 6:09 PM, Gary Gregory <ga...@gmail.com>
> wrote:
>
>> On Mon, Sep 26, 2016 at 5:21 PM, Ralph Goers <ra...@dslextreme.com>
>> wrote:
>>
>>> I thought you didn’t want to write to a file?
>>>
>>
>> I do not, but if the buffer is large enough, log events should stay in
>> RAM. But it is not quite right anyway because I'd have to interpret the
>> contents of the file to turn them back into log events.
>>
>> I started reading up on the Chronicle appender; thank you Remko for
>> pointing it out.
>>
>> An appender to a cache of objects is really what I want since I also want
>> to be able to evict the cache. TBC...
>>
>
> Like a JSR-107 Appender...
>

Looking at Ehcache and
https://ignite.apache.org/jcache/1.0.0/javadoc/javax/cache/Cache.html I can
see that a cache is always a kind of map, which leads to the question of what
the key should be.

A sequence number like we have in the pattern layout seems like a natural
choice. I could see a Jsr107Appender that tracks a sequence number as the
key. The issue is that the JSR 107 Cache interface leaves iteration order
undefined, which would force a client trying to drain a Jsr107Appender to
sort all entries before being able to serialize them, unless I can find a
list-based Cache implementation within Ehcache, for example.
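
A rough sketch of that idea against the standard JSR 107 (javax.cache) API;
the Jsr107Appender name and its drain() method are hypothetical, and the sort
compensates for the undefined iteration order:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicLong;

    import javax.cache.Cache;

    import org.apache.logging.log4j.core.LogEvent;
    import org.apache.logging.log4j.core.appender.AbstractAppender;

    // Hypothetical JSR 107 backed appender keyed by a sequence number.
    public class Jsr107Appender extends AbstractAppender {

        private final Cache<Long, LogEvent> cache;
        private final AtomicLong sequence = new AtomicLong();

        protected Jsr107Appender(String name, Cache<Long, LogEvent> cache) {
            super(name, null, null, true);
            this.cache = cache;
        }

        @Override
        public void append(LogEvent event) {
            cache.put(sequence.incrementAndGet(), event.toImmutable());
        }

        // Drain: JSR 107 iteration order is undefined, so sort by key.
        public List<LogEvent> drain() {
            List<Cache.Entry<Long, LogEvent>> entries = new ArrayList<>();
            for (Cache.Entry<Long, LogEvent> e : cache) {
                entries.add(e);
                cache.remove(e.getKey());
            }
            entries.sort((a, b) -> Long.compare(a.getKey(), b.getKey()));
            List<LogEvent> events = new ArrayList<>(entries.size());
            for (Cache.Entry<Long, LogEvent> e : entries) {
                events.add(e.getValue());
            }
            return events;
        }
    }

Note the sketch ignores appends that race with a drain; a real implementation
would have to decide how to fence the sequence counter around the swap.
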

Gary




Re: In memory appender

Posted by Gary Gregory <ga...@gmail.com>.
On Mon, Sep 26, 2016 at 6:09 PM, Gary Gregory <ga...@gmail.com>
wrote:

> On Mon, Sep 26, 2016 at 5:21 PM, Ralph Goers <ra...@dslextreme.com>
> wrote:
>
>> I thought you didn’t want to write to a file?
>>
>
> I do not, but if the buffer is large enough, log events should stay in RAM.
> But it is not quite right anyway because I'd have to interpret the contents
> of the file to turn them back into log events.
>
> I started reading up on the Chronicle appender; thank you Remko for
> pointing it out.
>
> An appender to a cache of objects is really what I want since I also want
> to be able to evict the cache. TBC...
>

Like a JSR-107 Appender...

Gary


Re: In memory appender

Posted by Gary Gregory <ga...@gmail.com>.
On Mon, Sep 26, 2016 at 5:21 PM, Ralph Goers <ra...@dslextreme.com>
wrote:

> I thought you didn’t want to write to a file?
>

I do not, but if the buffer is large enough, log events should stay in RAM.
But it is not quite right anyway because I'd have to interpret the contents
of the file to turn them back into log events.

I started reading up on the Chronicle appender; thank you Remko for
pointing it out.

An appender to a cache of objects is really what I want, since I also want
to be able to evict the cache. TBC...
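
A minimal sketch of creating such an evictable cache with the standard JSR 107
API (the cache name and the one-minute expiry are arbitrary illustration
choices):

    import javax.cache.Cache;
    import javax.cache.CacheManager;
    import javax.cache.Caching;
    import javax.cache.configuration.MutableConfiguration;
    import javax.cache.expiry.CreatedExpiryPolicy;
    import javax.cache.expiry.Duration;

    import org.apache.logging.log4j.core.LogEvent;

    public class LogEventCacheFactory {

        // Creates a JSR 107 cache whose entries expire a minute after creation.
        public static Cache<Long, LogEvent> createLogEventCache() {
            CacheManager manager = Caching.getCachingProvider().getCacheManager();
            MutableConfiguration<Long, LogEvent> config =
                    new MutableConfiguration<Long, LogEvent>()
                            .setTypes(Long.class, LogEvent.class)
                            .setExpiryPolicyFactory(
                                    CreatedExpiryPolicy.factoryOf(Duration.ONE_MINUTE));
            return manager.createCache("logEvents", config);
        }
    }
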

Gary


> The Chronicle stuff Remko is linking to is also worth exploring.
>
> Ralph
>
>
>
> On Sep 26, 2016, at 5:04 PM, Gary Gregory <ga...@gmail.com> wrote:
>
> oh... what about our own http://logging.apache.org/
> log4j/2.x/manual/appenders.html#MemoryMappedFileAppender
>
> ?
>
> Gary
>
> On Mon, Sep 26, 2016 at 4:59 PM, Remko Popma <re...@gmail.com>
> wrote:
>
>> In addition to the Flume based solution, here is another alternative
>> idea: use Peter Lawrey's Chronicle[1] library to store log events in a
>> memory mapped file.
>>
>> The appender can just keep adding events without worrying about
>> overflowing the memory.
>>
>> The client that reads from this file can be in a separate thread (even a
>> separate process by the way) and can read as much as it wants, and send it
>> to the server.
>>
>> Serialization: You can either serialize log events to the target format
>> before storing them in Chronicle (so you have binary blobs in each
>> Chronicle excerpt), client reads these blobs and sends them to the server
>> as is. Or you can use the Chronicle Log4j2 appender[2] to store the events
>> in Chronicle format. The tests[3] show how to read LogEvent objects from
>> the memory mapped file, and the client would be responsible for serializing
>> these log events to the target format before sending data to the server.
>>
>> [1]: https://github.com/peter-lawrey/Java-Chronicle
>> [2]: https://github.com/OpenHFT/Chronicle-Logger
>> [3]: https://github.com/OpenHFT/Chronicle-Logger/blob/master
>> /logger-log4j-2/src/test/java/net/openhft/chronicle/logger/
>> log4j2/Log4j2IndexedChronicleTest.java
>>
>> Remko
>>
>> Sent from my iPhone
>>
>> On 2016/09/27, at 5:57, Gary Gregory <ga...@gmail.com> wrote:
>>
>> Please allow me to restate the use case I have for the
>> CollectionAppender, which is separate from any Flume-based or Syslog-based
>> solution, use cases I also have. Well, I have a Syslog use case, and
>> whether or not Flume is in the picture will really be a larger discussion
>> in my organization due to the requirement to run a Flume Agent.)
>>
>> A program (like a JDBC driver already using Log4j) communicates with
>> another (like a DBMS, not written in Java). The client and server
>> communicate over a proprietary socket protocol. The client sends a list of
>> buffers (in one go) to the server to perform one or more operations. One
>> kind of buffer this protocol defines is a log buffer (where each log event
>> is serialized in a non-Java format.) This allows each communication from
>> the client to the server to say "This is what's happened up to now". What
>> the server does with the log buffers is not important for this discussion.
>>
>> What is important to note is that the log buffer and other buffers go to
>> the server in one BLOB; which is why I cannot (in this use case) send log
>> events by themselves anywhere.
>>
>> I see that something (a CollectionAppender) must collect log events until
>> the client is ready to serialize them and send them to the server. Once the
>> events are drained out of the Appender (in one go by just getting the
>> collection), events can collect in a new collection. A synchronous drain
>> operation would create a new collection and return the old one.
>>
>> The question becomes: What kind of temporary location can the client use
>> to buffer log event until drain time? A Log4j Appender is a natural place
>> to collect log events since the driver uses Log4j. The driver will make its
>> business to drain the appender and work with the events at the right time.
>> I am thinking that the Log4j Appender part is generic enough for inclusion
>> in Log4j.
>>
>> Further thoughts?
>>
>> Thank you all for reading this far!
>> Gary
>>
>> On Sun, Sep 25, 2016 at 1:20 PM, Ralph Goers <ra...@dslextreme.com>
>> wrote:
>>
>>> I guess I am not understanding your use case quite correctly. I am
>>> thinking you have a driver that is logging and you want those logs
>>> delivered to some other location to actually be written.  If that is your
>>> use case then the driver needs a log4j2.xml that configures the
>>> FlumeAppender with either the memory or file channel (depending on your
>>> needs) and points to the server(s) that is/are to receive the events. The
>>> FlumeAppender handles sending them in batches with whatever size you want
>>> (but will send them in smaller amounts if they are in the channel too
>>> long). Of course you would need the log4j-flume and flume jars. So on the
>>> driver side you wouldn’t need to write anything, just configure the
>>> appender and make sure the jars are there.
>>>
>>> For the server that receives them you would also need Flume. Normally
>>> this would be a standalone component, but it really wouldn’t be hard to
>>> incorporate it into some other application. The only thing you would have
>>> to write would be the sink that writes the events to the database or
>>> whatever. To incorporate it into an application you would have to look at
>>> the main() method of flume and covert that to be a thread that you kick off.
>>>
>>> Ralph
>>>
>>>
>>>
>>> On Sep 25, 2016, at 12:01 PM, Gary Gregory <ga...@gmail.com>
>>> wrote:
>>>
>>> Hi Ralph,
>>>
>>> Thanks for your feedback. Flume is great in the scenarios that do not
>>> involve sending a log buffer from the driver itself.
>>>
>>> I can't require a Flume Agent to be running 'on the side' for the use
>>> case where the driver chains a log buffer at the end of the train of
>>> database IO buffer. For completeness talking about this Flume scenario, if
>>> I read you right, I also would need to write a custom Flume sink, which
>>> would also be in memory, until the driver is ready to drain it. Or, I could
>>> query some other 'safe' and 'reliable' Flume sink that the driver could
>>> then drain of events when it needs to.
>>>
>>> Narrowing down on the use case where the driver chains a log buffer at
>>> the end of the train of database IO buffer, I'll think I have to see about
>>> converting the Log4j ListAppender into a more robust and flexible version.
>>> I think I'll call it a CollectionAppender and allow various Collection
>>> implementations to be plugged in.
>>>
>>> Gary
>>>
>>> Gary
>>>
>>> On Sat, Sep 24, 2016 at 3:44 PM, Ralph Goers <ralph.goers@dslextreme.com
>>> > wrote:
>>>
>>>> If you are buffering events in memory you run the risk of losing events
>>>> if something should fail.
>>>>
>>>> That said, if I had your requirements I would use the FlumeAppender. It
>>>> has either an in-memory option to buffer as you are suggesting or it can
>>>> write to a local file to prevent data loss if that is a requirement. It
>>>> already has the configuration options you are looking for and has been well
>>>> tested. The only downside is that you need to have either a Flume instance
>>>> receiving the messages or something that can receive Flume events over
>>>> Avro, but it is easier just to use Flume and write a custom sink to do what
>>>> you want with the data.
>>>>
>>>> Ralph
>>>>
>>>> On Sep 24, 2016, at 3:13 PM, Gary Gregory <ga...@gmail.com>
>>>> wrote:
>>>>
>>>> Hi All,
>>>>
>>>> I can't believe it, but through a convoluted use-case, I actually need
>>>> an in-memory list appender, very much like our test-only ListAppender.
>>>>
>>>> The requirement is as follows.
>>>>
>>>> We have a JDBC driver and matching proprietary database that
>>>> specializes in data virtualization of mainframe resources like DB2, VSAM,
>>>> IMS, and all sorts of non-SQL data sources (
>>>> http://www.rocketsoftware.com/products/rocket-data/rocket-d
>>>> ata-virtualization)
>>>>
>>>> The high level requirement is to merge the driver log into the server's
>>>> log for full end-to-end traceability and debugging.
>>>>
>>>> When the driver is running on the z/OS mainframe, it can be configured
>>>> with a z/OS specific Appender that can talk to the server log module
>>>> directly.
>>>>
>>>> When the driver is running elsewhere, it can talk to the database via a
>>>> Syslog socket Appender. This requires more set up on the server side and
>>>> for the server to do special magic to know how the incoming log events
>>>> match up with server operations. Tricky.
>>>>
>>>> The customer should also be able to configure the driver such that
>>>> anytime the driver communicates to the database, it sends along whatever
>>>> log events have accumulated since the last client-server roundtrip. This
>>>> allows the server to match exactly the connection and operations the client
>>>> performed with the server's own logging.
>>>>
>>>> In order to do that I need to buffer all log events in an Appender and
>>>> when it's time, I need to get the list of events and reset the appender to
>>>> a new empty list so events can keep accumulating.
>>>>
>>>> My proposal is to turn our ListAppender into such an appender.
>>>> For sanity, the appender could be configured with various sizing policies:
>>>>
>>>> - open: the list grows unbounded
>>>> - closed: the list grows to a given size and _new_ events are dropped
>>>> on the floor beyond that
>>>> - latest: the list grows to a given size and _old_ events are dropped
>>>> on the floor beyond that
>>>>
>>>> Thoughts?
>>>>
>>>> Gary
>>>>
>>>> --
>>>> E-Mail: garydgregory@gmail.com | ggregory@apache.org
>>>> Java Persistence with Hibernate, Second Edition
>>>> <http://www.manning.com/bauer3/>
>>>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>>>> Spring Batch in Action <http://www.manning.com/templier/>
>>>> Blog: http://garygregory.wordpress.com
>>>> Home: http://garygregory.com/
>>>> Tweet! http://twitter.com/GaryGregory
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> E-Mail: garydgregory@gmail.com | ggregory@apache.org
>>> Java Persistence with Hibernate, Second Edition
>>> <http://www.manning.com/bauer3/>
>>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>>> Spring Batch in Action <http://www.manning.com/templier/>
>>> Blog: http://garygregory.wordpress.com
>>> Home: http://garygregory.com/
>>> Tweet! http://twitter.com/GaryGregory
>>>
>>>
>>>
>>
>>
>> --
>> E-Mail: garydgregory@gmail.com | ggregory@apache.org
>> Java Persistence with Hibernate, Second Edition
>> <http://www.manning.com/bauer3/>
>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>> Spring Batch in Action <http://www.manning.com/templier/>
>> Blog: http://garygregory.wordpress.com
>> Home: http://garygregory.com/
>> Tweet! http://twitter.com/GaryGregory
>>
>>
>
>
> --
> E-Mail: garydgregory@gmail.com | ggregory@apache.org
> Java Persistence with Hibernate, Second Edition
> <http://www.manning.com/bauer3/>
> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
> Spring Batch in Action <http://www.manning.com/templier/>
> Blog: http://garygregory.wordpress.com
> Home: http://garygregory.com/
> Tweet! http://twitter.com/GaryGregory
>
>
>


-- 
E-Mail: garydgregory@gmail.com | ggregory@apache.org
Java Persistence with Hibernate, Second Edition
<http://www.manning.com/bauer3/>
JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
Spring Batch in Action <http://www.manning.com/templier/>
Blog: http://garygregory.wordpress.com
Home: http://garygregory.com/
Tweet! http://twitter.com/GaryGregory

Re: In memory appender

Posted by Ralph Goers <ra...@dslextreme.com>.
I thought you didn’t want to write to a file?

The Chronicle stuff Remko is linking to is also worth exploring. 

Ralph



> On Sep 26, 2016, at 5:04 PM, Gary Gregory <ga...@gmail.com> wrote:
> 
> oh... what about our own http://logging.apache.org/log4j/2.x/manual/appenders.html#MemoryMappedFileAppender <http://logging.apache.org/log4j/2.x/manual/appenders.html#MemoryMappedFileAppender>
> 
> ?
> 
> Gary
> 
> On Mon, Sep 26, 2016 at 4:59 PM, Remko Popma <remko.popma@gmail.com <ma...@gmail.com>> wrote:
> In addition to the Flume based solution, here is another alternative idea: use Peter Lawrey's Chronicle[1] library to store log events in a memory mapped file. 
> 
> The appender can just keep adding events without worrying about overflowing the memory. 
> 
> The client that reads from this file can be in a separate thread (even a separate process by the way) and can read as much as it wants, and send it to the server. 
> 
> Serialization: You can either serialize log events to the target format before storing them in Chronicle (so you have binary blobs in each Chronicle excerpt), the client reads these blobs and sends them to the server as is. Or you can use the Chronicle Log4j2 appender[2] to store the events in Chronicle format. The tests[3] show how to read LogEvent objects from the memory mapped file, and the client would be responsible for serializing these log events to the target format before sending data to the server. 
> 
> [1]: https://github.com/peter-lawrey/Java-Chronicle <https://github.com/peter-lawrey/Java-Chronicle>
> [2]: https://github.com/OpenHFT/Chronicle-Logger <https://github.com/OpenHFT/Chronicle-Logger>
> [3]: https://github.com/OpenHFT/Chronicle-Logger/blob/master/logger-log4j-2/src/test/java/net/openhft/chronicle/logger/log4j2/Log4j2IndexedChronicleTest.java <https://github.com/OpenHFT/Chronicle-Logger/blob/master/logger-log4j-2/src/test/java/net/openhft/chronicle/logger/log4j2/Log4j2IndexedChronicleTest.java>
> 
> Remko
> 
> Sent from my iPhone
> 
> On 2016/09/27, at 5:57, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
> 
>> Please allow me to restate the use case I have for the CollectionAppender, which is separate from any Flume-based or Syslog-based solution, use cases I also have. (Well, I have a Syslog use case, and whether or not Flume is in the picture will really be a larger discussion in my organization due to the requirement to run a Flume Agent.)
>> 
>> A program (like a JDBC driver already using Log4j) communicates with another (like a DBMS, not written in Java). The client and server communicate over a proprietary socket protocol. The client sends a list of buffers (in one go) to the server to perform one or more operations. One kind of buffer this protocol defines is a log buffer (where each log event is serialized in a non-Java format.) This allows each communication from the client to the server to say "This is what's happened up to now". What the server does with the log buffers is not important for this discussion.
>> 
>> What is important to note is that the log buffer and other buffers go to the server in one BLOB, which is why I cannot (in this use case) send log events by themselves anywhere.
>> 
>> I see that something (a CollectionAppender) must collect log events until the client is ready to serialize them and send them to the server. Once the events are drained out of the Appender (in one go by just getting the collection), events can collect in a new collection. A synchronous drain operation would create a new collection and return the old one.
>> 
>> The question becomes: What kind of temporary location can the client use to buffer log events until drain time? A Log4j Appender is a natural place to collect log events since the driver uses Log4j. The driver will make it its business to drain the appender and work with the events at the right time. I am thinking that the Log4j Appender part is generic enough for inclusion in Log4j. 
>> 
>> Further thoughts?
>> 
>> Thank you all for reading this far!
>> Gary
>> 
>> On Sun, Sep 25, 2016 at 1:20 PM, Ralph Goers <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>> I guess I am not understanding your use case quite correctly. I am thinking you have a driver that is logging and you want those logs delivered to some other location to actually be written.  If that is your use case then the driver needs a log4j2.xml that configures the FlumeAppender with either the memory or file channel (depending on your needs) and points to the server(s) that is/are to receive the events. The FlumeAppender handles sending them in batches with whatever size you want (but will send them in smaller amounts if they are in the channel too long). Of course you would need the log4j-flume and flume jars. So on the driver side you wouldn’t need to write anything, just configure the appender and make sure the jars are there.
>> 
>> For the server that receives them you would also need Flume. Normally this would be a standalone component, but it really wouldn’t be hard to incorporate it into some other application. The only thing you would have to write would be the sink that writes the events to the database or whatever. To incorporate it into an application you would have to look at the main() method of flume and convert that to be a thread that you kick off.
>> 
>> Ralph
>> 
>> 
>> 
>>> On Sep 25, 2016, at 12:01 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>>> 
>>> Hi Ralph,
>>> 
>>> Thanks for your feedback. Flume is great in the scenarios that do not involve sending a log buffer from the driver itself.
>>> 
>>> I can't require a Flume Agent to be running 'on the side' for the use case where the driver chains a log buffer at the end of the train of database IO buffers. For completeness, talking about this Flume scenario, if I read you right, I also would need to write a custom Flume sink, which would also be in memory, until the driver is ready to drain it. Or, I could query some other 'safe' and 'reliable' Flume sink that the driver could then drain of events when it needs to.
>>> 
>>> Narrowing down on the use case where the driver chains a log buffer at the end of the train of database IO buffers, I think I'll have to see about converting the Log4j ListAppender into a more robust and flexible version. I think I'll call it a CollectionAppender and allow various Collection implementations to be plugged in.
>>> 
>>> Gary
>>> 
>>> Gary
>>> 
>>> On Sat, Sep 24, 2016 at 3:44 PM, Ralph Goers <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>> If you are buffering events in memory you run the risk of losing events if something should fail. 
>>> 
>>> That said, if I had your requirements I would use the FlumeAppender. It has either an in-memory option to buffer as you are suggesting or it can write to a local file to prevent data loss if that is a requirement. It already has the configuration options you are looking for and has been well tested. The only downside is that you need to have either a Flume instance receiving the messages or something that can receive Flume events over Avro, but it is easier just to use Flume and write a custom sink to do what you want with the data.
>>> 
>>> Ralph
>>> 
>>>> On Sep 24, 2016, at 3:13 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>>>> 
>>>> Hi All,
>>>> 
>>>> I can't believe it, but through a convoluted use-case, I actually need an in-memory list appender, very much like our test-only ListAppender.
>>>> 
>>>> The requirement is as follows.
>>>> 
>>>> We have a JDBC driver and matching proprietary database that specializes in data virtualization of mainframe resources like DB2, VSAM, IMS, and all sorts of non-SQL data sources (http://www.rocketsoftware.com/products/rocket-data/rocket-data-virtualization <http://www.rocketsoftware.com/products/rocket-data/rocket-data-virtualization>) 
>>>> 
>>>> The high level requirement is to merge the driver log into the server's log for full end-to-end traceability and debugging.
>>>> 
>>>> When the driver is running on the z/OS mainframe, it can be configured with a z/OS specific Appender that can talk to the server log module directly.
>>>> 
>>>> When the driver is running elsewhere, it can talk to the database via a Syslog socket Appender. This requires more set up on the server side and for the server to do special magic to know how the incoming log events match up with server operations. Tricky.
>>>> 
>>>> The customer should also be able to configure the driver such that anytime the driver communicates to the database, it sends along whatever log events have accumulated since the last client-server roundtrip. This allows the server to match exactly the connection and operations the client performed with the server's own logging.
>>>> 
>>>> In order to do that I need to buffer all log events in an Appender and when it's time, I need to get the list of events and reset the appender to a new empty list so events can keep accumulating.
>>>> 
>>>> My proposal is to turn our ListAppender into such an appender. For sanity, the appender could be configured with various sizing policies:
>>>> 
>>>> - open: the list grows unbounded
>>>> - closed: the list grows to a given size and _new_ events are dropped on the floor beyond that
>>>> - latest: the list grows to a given size and _old_ events are dropped on the floor beyond that
>>>> 
>>>> Thoughts?
>>>> 
>>>> Gary
>>>> 
>>>> -- 
>>>> E-Mail: garydgregory@gmail.com <ma...@gmail.com> | ggregory@apache.org  <ma...@apache.org>
>>>> Java Persistence with Hibernate, Second Edition <http://www.manning.com/bauer3/>
>>>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>>>> Spring Batch in Action <http://www.manning.com/templier/>
>>>> Blog: http://garygregory.wordpress.com <http://garygregory.wordpress.com/> 
>>>> Home: http://garygregory.com/ <http://garygregory.com/>
>>>> Tweet! http://twitter.com/GaryGregory <http://twitter.com/GaryGregory>
>>> 
>>> 
>>> 
>>> -- 
>>> E-Mail: garydgregory@gmail.com <ma...@gmail.com> | ggregory@apache.org  <ma...@apache.org>
>>> Java Persistence with Hibernate, Second Edition <http://www.manning.com/bauer3/>
>>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>>> Spring Batch in Action <http://www.manning.com/templier/>
>>> Blog: http://garygregory.wordpress.com <http://garygregory.wordpress.com/> 
>>> Home: http://garygregory.com/ <http://garygregory.com/>
>>> Tweet! http://twitter.com/GaryGregory <http://twitter.com/GaryGregory>
>> 
>> 
>> 
>> -- 
>> E-Mail: garydgregory@gmail.com <ma...@gmail.com> | ggregory@apache.org  <ma...@apache.org>
>> Java Persistence with Hibernate, Second Edition <http://www.manning.com/bauer3/>
>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>> Spring Batch in Action <http://www.manning.com/templier/>
>> Blog: http://garygregory.wordpress.com <http://garygregory.wordpress.com/> 
>> Home: http://garygregory.com/ <http://garygregory.com/>
>> Tweet! http://twitter.com/GaryGregory <http://twitter.com/GaryGregory>
> 
> 
> -- 
> E-Mail: garydgregory@gmail.com <ma...@gmail.com> | ggregory@apache.org  <ma...@apache.org>
> Java Persistence with Hibernate, Second Edition <http://www.manning.com/bauer3/>
> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
> Spring Batch in Action <http://www.manning.com/templier/>
> Blog: http://garygregory.wordpress.com <http://garygregory.wordpress.com/> 
> Home: http://garygregory.com/ <http://garygregory.com/>
> Tweet! http://twitter.com/GaryGregory <http://twitter.com/GaryGregory>

Re: In memory appender

Posted by Gary Gregory <ga...@gmail.com>.
oh... what about our own
http://logging.apache.org/log4j/2.x/manual/appenders.html#MemoryMappedFileAppender

?
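
For reference, a minimal sketch of what configuring that appender would
look like (element and attribute names as on the manual page above; the
file name and region length are made-up values):

    <Appenders>
      <MemoryMappedFile name="mmap" fileName="target/driver-events.bin"
                        regionLength="1048576" immediateFlush="false">
        <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
      </MemoryMappedFile>
    </Appenders>

The catch for this use case: the appender writes laid-out bytes into the
mapped region, so the driver would still need its own reader to locate and
drain the serialized events at round-trip time.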

Gary

On Mon, Sep 26, 2016 at 4:59 PM, Remko Popma <re...@gmail.com> wrote:

> In addition to the Flume based solution, here is another alternative idea:
> use Peter Lawrey's Chronicle[1] library to store log events in a memory
> mapped file.
>
> The appender can just keep adding events without worrying about
> overflowing the memory.
>
> The client that reads from this file can be in a separate thread (even a
> separate process by the way) and can read as much as it wants, and send it
> to the server.
>
> Serialization: You can either serialize log events to the target format
> before storing them in Chronicle (so you have binary blobs in each
> Chronicle excerpt), the client reads these blobs and sends them to the server
> as is. Or you can use the Chronicle Log4j2 appender[2] to store the events
> in Chronicle format. The tests[3] show how to read LogEvent objects from
> the memory mapped file, and the client would be responsible for serializing
> these log events to the target format before sending data to the server.
>
> [1]: https://github.com/peter-lawrey/Java-Chronicle
> [2]: https://github.com/OpenHFT/Chronicle-Logger
> [3]: https://github.com/OpenHFT/Chronicle-Logger/blob/
> master/logger-log4j-2/src/test/java/net/openhft/chronicle/logger/log4j2/
> Log4j2IndexedChronicleTest.java
>
> Remko
>
> Sent from my iPhone
>
> On 2016/09/27, at 5:57, Gary Gregory <ga...@gmail.com> wrote:
>
> Please allow me to restate the use case I have for the CollectionAppender,
> which is separate from any Flume-based or Syslog-based solution, use cases
> I also have. (Well, I have a Syslog use case, and whether or not Flume is in
> the picture will really be a larger discussion in my organization due to
> the requirement to run a Flume Agent.)
>
> A program (like a JDBC driver already using Log4j) communicates with
> another (like a DBMS, not written in Java). The client and server
> communicate over a proprietary socket protocol. The client sends a list of
> buffers (in one go) to the server to perform one or more operations. One
> kind of buffer this protocol defines is a log buffer (where each log event
> is serialized in a non-Java format.) This allows each communication from
> the client to the server to say "This is what's happened up to now". What
> the server does with the log buffers is not important for this discussion.
>
> What is important to note is that the log buffer and other buffers go to
> the server in one BLOB, which is why I cannot (in this use case) send log
> events by themselves anywhere.
>
> I see that something (a CollectionAppender) must collect log events until
> the client is ready to serialize them and send them to the server. Once the
> events are drained out of the Appender (in one go by just getting the
> collection), events can collect in a new collection. A synchronous drain
> operation would create a new collection and return the old one.
>
> The question becomes: What kind of temporary location can the client use
> to buffer log events until drain time? A Log4j Appender is a natural place
> to collect log events since the driver uses Log4j. The driver will make it
> its business to drain the appender and work with the events at the right time.
> I am thinking that the Log4j Appender part is generic enough for inclusion
> in Log4j.
>
> Further thoughts?
>
> Thank you all for reading this far!
> Gary
>
> On Sun, Sep 25, 2016 at 1:20 PM, Ralph Goers <ra...@dslextreme.com>
> wrote:
>
>> I guess I am not understanding your use case quite correctly. I am
>> thinking you have a driver that is logging and you want those logs
>> delivered to some other location to actually be written.  If that is your
>> use case then the driver needs a log4j2.xml that configures the
>> FlumeAppender with either the memory or file channel (depending on your
>> needs) and points to the server(s) that is/are to receive the events. The
>> FlumeAppender handles sending them in batches with whatever size you want
>> (but will send them in smaller amounts if they are in the channel too
>> long). Of course you would need the log4j-flume and flume jars. So on the
>> driver side you wouldn’t need to write anything, just configure the
>> appender and make sure the jars are there.
>>
>> For the server that receives them you would also need Flume. Normally
>> this would be a standalone component, but it really wouldn’t be hard to
>> incorporate it into some other application. The only thing you would have
>> to write would be the sink that writes the events to the database or
>> whatever. To incorporate it into an application you would have to look at
>> the main() method of flume and convert that to be a thread that you kick off.
>>
>> Ralph
>>
>>
>>
>> On Sep 25, 2016, at 12:01 PM, Gary Gregory <ga...@gmail.com>
>> wrote:
>>
>> Hi Ralph,
>>
>> Thanks for your feedback. Flume is great in the scenarios that do not
>> involve sending a log buffer from the driver itself.
>>
>> I can't require a Flume Agent to be running 'on the side' for the use
>> case where the driver chains a log buffer at the end of the train of
>> database IO buffers. For completeness, talking about this Flume scenario, if
>> I read you right, I also would need to write a custom Flume sink, which
>> would also be in memory, until the driver is ready to drain it. Or, I could
>> query some other 'safe' and 'reliable' Flume sink that the driver could
>> then drain of events when it needs to.
>>
>> Narrowing down on the use case where the driver chains a log buffer at
>> the end of the train of database IO buffers, I think I'll have to see about
>> converting the Log4j ListAppender into a more robust and flexible version.
>> I think I'll call it a CollectionAppender and allow various Collection
>> implementations to be plugged in.
>>
>> Gary
>>
>> Gary
>>
>> On Sat, Sep 24, 2016 at 3:44 PM, Ralph Goers <ra...@dslextreme.com>
>> wrote:
>>
>>> If you are buffering events in memory you run the risk of losing events
>>> if something should fail.
>>>
>>> That said, if I had your requirements I would use the FlumeAppender. It
>>> has either an in-memory option to buffer as you are suggesting or it can
>>> write to a local file to prevent data loss if that is a requirement. It
>>> already has the configuration options you are looking for and has been well
>>> tested. The only downside is that you need to have either a Flume instance
>>> receiving the messages or something that can receive Flume events over
>>> Avro, but it is easier just to use Flume and write a custom sink to do what
>>> you want with the data.
>>>
>>> Ralph
>>>
>>> On Sep 24, 2016, at 3:13 PM, Gary Gregory <ga...@gmail.com>
>>> wrote:
>>>
>>> Hi All,
>>>
>>> I can't believe it, but through a convoluted use-case, I actually need
>>> an in-memory list appender, very much like our test-only ListAppender.
>>>
>>> The requirement is as follows.
>>>
>>> We have a JDBC driver and matching proprietary database that specializes
>>> in data virtualization of mainframe resources like DB2, VSAM, IMS, and all
>>> sorts of non-SQL data sources (http://www.rocketsoftware.com
>>> /products/rocket-data/rocket-data-virtualization)
>>>
>>> The high level requirement is to merge the driver log into the server's
>>> log for full end-to-end traceability and debugging.
>>>
>>> When the driver is running on the z/OS mainframe, it can be configured
>>> with a z/OS specific Appender that can talk to the server log module
>>> directly.
>>>
>>> When the driver is running elsewhere, it can talk to the database via a
>>> Syslog socket Appender. This requires more set up on the server side and
>>> for the server to do special magic to know how the incoming log events
>>> match up with server operations. Tricky.
>>>
>>> The customer should also be able to configure the driver such that
>>> anytime the driver communicates to the database, it sends along whatever
>>> log events have accumulated since the last client-server roundtrip. This
>>> allows the server to match exactly the connection and operations the client
>>> performed with the server's own logging.
>>>
>>> In order to do that I need to buffer all log events in an Appender and
>>> when it's time, I need to get the list of events and reset the appender to
>>> a new empty list so events can keep accumulating.
>>>
>>> My proposal is to turn our ListAppender into such an appender.
>>> For sanity, the appender could be configured with various sizing policies:
>>>
>>> - open: the list grows unbounded
>>> - closed: the list grows to a given size and _new_ events are dropped on
>>> the floor beyond that
>>> - latest: the list grows to a given size and _old_ events are dropped on
>>> the floor beyond that
>>>
>>> Thoughts?
>>>
>>> Gary
>>>
>>> --
>>> E-Mail: garydgregory@gmail.com | ggregory@apache.org
>>> Java Persistence with Hibernate, Second Edition
>>> <http://www.manning.com/bauer3/>
>>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>>> Spring Batch in Action <http://www.manning.com/templier/>
>>> Blog: http://garygregory.wordpress.com
>>> Home: http://garygregory.com/
>>> Tweet! http://twitter.com/GaryGregory
>>>
>>>
>>>
>>
>>
>> --
>> E-Mail: garydgregory@gmail.com | ggregory@apache.org
>> Java Persistence with Hibernate, Second Edition
>> <http://www.manning.com/bauer3/>
>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>> Spring Batch in Action <http://www.manning.com/templier/>
>> Blog: http://garygregory.wordpress.com
>> Home: http://garygregory.com/
>> Tweet! http://twitter.com/GaryGregory
>>
>>
>>
>
>
> --
> E-Mail: garydgregory@gmail.com | ggregory@apache.org
> Java Persistence with Hibernate, Second Edition
> <http://www.manning.com/bauer3/>
> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
> Spring Batch in Action <http://www.manning.com/templier/>
> Blog: http://garygregory.wordpress.com
> Home: http://garygregory.com/
> Tweet! http://twitter.com/GaryGregory
>
>


-- 
E-Mail: garydgregory@gmail.com | ggregory@apache.org
Java Persistence with Hibernate, Second Edition
<http://www.manning.com/bauer3/>
JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
Spring Batch in Action <http://www.manning.com/templier/>
Blog: http://garygregory.wordpress.com
Home: http://garygregory.com/
Tweet! http://twitter.com/GaryGregory

Re: In memory appender

Posted by Remko Popma <re...@gmail.com>.
In addition to the Flume based solution, here is another alternative idea: use Peter Lawrey's Chronicle[1] library to store log events in a memory mapped file. 

The appender can just keep adding events without worrying about overflowing the memory. 

The client that reads from this file can be in a separate thread (even a separate process by the way) and can read as much as it wants, and send it to the server. 

Serialization: You can either serialize log events to the target format before storing them in Chronicle (so you have binary blobs in each Chronicle excerpt), the client reads these blobs and sends them to the server as is. Or you can use the Chronicle Log4j2 appender[2] to store the events in Chronicle format. The tests[3] show how to read LogEvent objects from the memory mapped file, and the client would be responsible for serializing these log events to the target format before sending data to the server. 
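
To make the mechanics concrete, here is a rough illustration of the idea
using only java.nio (a simplified stand-in for what Chronicle manages
properly; it has none of Chronicle's reader/writer coordination, indexing,
or recovery, and the file name and size are made up):

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MappedLogBuffer {
        private final MappedByteBuffer writeBuf;

        public MappedLogBuffer(final String path, final int size) throws Exception {
            try (RandomAccessFile raf = new RandomAccessFile(path, "rw")) {
                // The mapping stays valid after the channel is closed.
                writeBuf = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, size);
            }
        }

        // Writer side: append one length-prefixed serialized event.
        public synchronized void append(final byte[] serializedEvent) {
            writeBuf.putInt(serializedEvent.length);
            writeBuf.put(serializedEvent);
        }
    }

A reader in another thread (or process, since the file is shared) maps the
same file and walks the length-prefixed records up to the writer's position.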

[1]: https://github.com/peter-lawrey/Java-Chronicle
[2]: https://github.com/OpenHFT/Chronicle-Logger
[3]: https://github.com/OpenHFT/Chronicle-Logger/blob/master/logger-log4j-2/src/test/java/net/openhft/chronicle/logger/log4j2/Log4j2IndexedChronicleTest.java

Remko

Sent from my iPhone

> On 2016/09/27, at 5:57, Gary Gregory <ga...@gmail.com> wrote:
> 
> Please allow me to restate the use case I have for the CollectionAppender, which is separate from any Flume-based or Syslog-based solution, use cases I also have. (Well, I have a Syslog use case, and whether or not Flume is in the picture will really be a larger discussion in my organization due to the requirement to run a Flume Agent.)
> 
> A program (like a JDBC driver already using Log4j) communicates with another (like a DBMS, not written in Java). The client and server communicate over a proprietary socket protocol. The client sends a list of buffers (in one go) to the server to perform one or more operations. One kind of buffer this protocol defines is a log buffer (where each log event is serialized in a non-Java format.) This allows each communication from the client to the server to say "This is what's happened up to now". What the server does with the log buffers is not important for this discussion.
> 
> What is important to note is that the log buffer and other buffers go to the server in one BLOB, which is why I cannot (in this use case) send log events by themselves anywhere.
> 
> I see that something (a CollectionAppender) must collect log events until the client is ready to serialize them and send them to the server. Once the events are drained out of the Appender (in one go by just getting the collection), events can collect in a new collection. A synchronous drain operation would create a new collection and return the old one.
> 
> The question becomes: What kind of temporary location can the client use to buffer log events until drain time? A Log4j Appender is a natural place to collect log events since the driver uses Log4j. The driver will make it its business to drain the appender and work with the events at the right time. I am thinking that the Log4j Appender part is generic enough for inclusion in Log4j. 
> 
> Further thoughts?
> 
> Thank you all for reading this far!
> Gary
> 
>> On Sun, Sep 25, 2016 at 1:20 PM, Ralph Goers <ra...@dslextreme.com> wrote:
>> I guess I am not understanding your use case quite correctly. I am thinking you have a driver that is logging and you want those logs delivered to some other location to actually be written.  If that is your use case then the driver needs a log4j2.xml that configures the FlumeAppender with either the memory or file channel (depending on your needs) and points to the server(s) that is/are to receive the events. The FlumeAppender handles sending them in batches with whatever size you want (but will send them in smaller amounts if they are in the channel too long). Of course you would need the log4j-flume and flume jars. So on the driver side you wouldn’t need to write anything, just configure the appender and make sure the jars are there.
>> 
>> For the server that receives them you would also need Flume. Normally this would be a standalone component, but it really wouldn’t be hard to incorporate it into some other application. The only thing you would have to write would be the sink that writes the events to the database or whatever. To incorporate it into an application you would have to look at the main() method of flume and convert that to be a thread that you kick off.
>> 
>> Ralph
>> 
>> 
>> 
>>> On Sep 25, 2016, at 12:01 PM, Gary Gregory <ga...@gmail.com> wrote:
>>> 
>>> Hi Ralph,
>>> 
>>> Thanks for your feedback. Flume is great in the scenarios that do not involve sending a log buffer from the driver itself.
>>> 
>>> I can't require a Flume Agent to be running 'on the side' for the use case where the driver chains a log buffer at the end of the train of database IO buffers. For completeness, talking about this Flume scenario, if I read you right, I also would need to write a custom Flume sink, which would also be in memory, until the driver is ready to drain it. Or, I could query some other 'safe' and 'reliable' Flume sink that the driver could then drain of events when it needs to.
>>> 
>>> Narrowing down on the use case where the driver chains a log buffer at the end of the train of database IO buffers, I think I'll have to see about converting the Log4j ListAppender into a more robust and flexible version. I think I'll call it a CollectionAppender and allow various Collection implementations to be plugged in.
>>> 
>>> Gary
>>> 
>>> Gary
>>> 
>>>> On Sat, Sep 24, 2016 at 3:44 PM, Ralph Goers <ra...@dslextreme.com> wrote:
>>>> If you are buffering events in memory you run the risk of losing events if something should fail. 
>>>> 
>>>> That said, if I had your requirements I would use the FlumeAppender. It has either an in-memory option to buffer as you are suggesting or it can write to a local file to prevent data loss if that is a requirement. It already has the configuration options you are looking for and has been well tested. The only downside is that you need to have either a Flume instance receiving the messages or something that can receive Flume events over Avro, but it is easier just to use Flume and write a custom sink to do what you want with the data.
>>>> 
>>>> Ralph
>>>> 
>>>>> On Sep 24, 2016, at 3:13 PM, Gary Gregory <ga...@gmail.com> wrote:
>>>>> 
>>>>> Hi All,
>>>>> 
>>>>> I can't believe it, but through a convoluted use-case, I actually need an in-memory list appender, very much like our test-only ListAppender.
>>>>> 
>>>>> The requirement is as follows.
>>>>> 
>>>>> We have a JDBC driver and matching proprietary database that specializes in data virtualization of mainframe resources like DB2, VSAM, IMS, and all sorts of non-SQL data sources (http://www.rocketsoftware.com/products/rocket-data/rocket-data-virtualization) 
>>>>> 
>>>>> The high level requirement is to merge the driver log into the server's log for full end-to-end traceability and debugging.
>>>>> 
>>>>> When the driver is running on the z/OS mainframe, it can be configured with a z/OS specific Appender that can talk to the server log module directly.
>>>>> 
>>>>> When the driver is running elsewhere, it can talk to the database via a Syslog socket Appender. This requires more set up on the server side and for the server to do special magic to know how the incoming log events match up with server operations. Tricky.
>>>>> 
>>>>> The customer should also be able to configure the driver such that anytime the driver communicates to the database, it sends along whatever log events have accumulated since the last client-server roundtrip. This allows the server to match exactly the connection and operations the client performed with the server's own logging.
>>>>> 
>>>>> In order to do that I need to buffer all log events in an Appender and when it's time, I need to get the list of events and reset the appender to a new empty list so events can keep accumulating.
>>>>> 
>>>>> My proposal is to turn our ListAppender into such an appender. For sanity, the appender could be configured with various sizing policies:
>>>>> 
>>>>> - open: the list grows unbounded
>>>>> - closed: the list grows to a given size and _new_ events are dropped on the floor beyond that
>>>>> - latest: the list grows to a given size and _old_ events are dropped on the floor beyond that
>>>>> 
>>>>> Thoughts?
>>>>> 
>>>>> Gary
>>>>> 
>>>>> -- 
>>>>> E-Mail: garydgregory@gmail.com | ggregory@apache.org 
>>>>> Java Persistence with Hibernate, Second Edition
>>>>> JUnit in Action, Second Edition
>>>>> Spring Batch in Action
>>>>> Blog: http://garygregory.wordpress.com 
>>>>> Home: http://garygregory.com/
>>>>> Tweet! http://twitter.com/GaryGregory
>>> 
>>> 
>>> 
>>> -- 
>>> E-Mail: garydgregory@gmail.com | ggregory@apache.org 
>>> Java Persistence with Hibernate, Second Edition
>>> JUnit in Action, Second Edition
>>> Spring Batch in Action
>>> Blog: http://garygregory.wordpress.com 
>>> Home: http://garygregory.com/
>>> Tweet! http://twitter.com/GaryGregory
> 
> 
> 
> -- 
> E-Mail: garydgregory@gmail.com | ggregory@apache.org 
> Java Persistence with Hibernate, Second Edition
> JUnit in Action, Second Edition
> Spring Batch in Action
> Blog: http://garygregory.wordpress.com 
> Home: http://garygregory.com/
> Tweet! http://twitter.com/GaryGregory

Re: In memory appender

Posted by Gary Gregory <ga...@gmail.com>.
So: I write a custom FlumeManager (and add a new
FlumeAppender.ManagerType enum called CUSTOM where I can plug in my own
class name through a new attribute to be named later). I extract the
batch processing logic out of the FlumeAvroManager.

My FlumeManager does not send Flume events to a Flume Agent, but instead
only caches the log events and waits for the driver to drain them. The
FlumeManager cannot do the database IO since the client app does not know
about Log4j. The manager has no idea it is collecting data to be included
as a payload in a larger buffer mixed with data gathered from JDBC API
calls. Even if it did, it would not know what to do with the result buffers
coming back from a DBMS server.

The driver needs the log4j-flume-ng module and whatever minimal set of
flume jars just to be able to see the FlumeEvent type, for example.

Then, the FlumeAppender has:

    @Override
    public void append(final LogEvent event) {
        final String name = event.getLoggerName();
        if (name != null) {
            for (final String pkg : EXCLUDED_PACKAGES) {
                if (name.startsWith(pkg)) {
                    return;
                }
            }
        }
        final FlumeEvent flumeEvent = factory.createEvent(event,
mdcIncludes, mdcExcludes, mdcRequired, mdcPrefix,
            eventPrefix, compressBody);
        flumeEvent.setBody(getLayout().toByteArray(flumeEvent));
        manager.send(flumeEvent);
    }

Since I need to use the FlumeAppender to use my custom FlumeManager, I also
have to create a custom Layout that serializes events as I need. But then
I am getting an extra FlumeEvent object where I only need the original
LogEvent or the byte[]. So that's a small penalty.

I thought that I could implement a custom
org.apache.logging.log4j.flume.appender.FlumeEventFactory to just pass
through the LogEvent, but that is not possible since FlumeEvent is a class
and not an interface. So the best my factory can do is pass null to the
FlumeEvent constructor for everything except the log event.
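
For illustration, the pass-through factory I have in mind would look
roughly like this. The createEvent signature mirrors the call in the
append() method above; whether FlumeEvent's constructor actually lines up
with these arguments is exactly the open question, so treat this as a
sketch rather than working code:

    import org.apache.logging.log4j.core.LogEvent;
    import org.apache.logging.log4j.flume.appender.FlumeEvent;
    import org.apache.logging.log4j.flume.appender.FlumeEventFactory;

    public class PassThroughFlumeEventFactory implements FlumeEventFactory {
        @Override
        public FlumeEvent createEvent(final LogEvent event, final String includes,
                final String excludes, final String required, final String mdcPrefix,
                final String eventPrefix, final boolean compress) {
            // Only the wrapped LogEvent matters to my custom manager; the MDC
            // arguments are passed as null/false because FlumeEvent is a class
            // and cannot simply be replaced by the original event.
            return new FlumeEvent(event, null, null, null, null, null, false);
        }
    }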

I am not sure I am connecting all the dots here.

To recap, I need:
- A small new feature in FlumeAppender to plug in a custom FlumeManager
class (not sure about the ctor signature requirements but that's where
builders come in)
- A custom FlumeManager with code extracted from the FlumeAvroManager to do
event batch caching.
- A custom Layout to serialize a LogEvent to whatever I need

So far I do not see how that is easier than writing a CollectionAppender.

Thank you for reading this far (again),
Gary

On Mon, Sep 26, 2016 at 2:51 PM, Ralph Goers <ra...@dslextreme.com>
wrote:

> Well, the key to this is “proprietary socket protocol”.  Today, the Flume
> appender does everything you want except that it is hardwired to use the
> Avro RpcClient to send a batch of Flume events. If you need some other
> protocol you would need to write a new variation of the FlumeManager that
> sends the data however you want.  In that case your server wouldn’t need to
> know anything about Flume as all you would be doing is using Flume to
> handle the event buffering.
>
> I really think writing your own CollectionAppender is a very bad idea.
> Flume has already implemented it, it works, and isn’t trivial to build from
> scratch.
>
> Ralph
>
>
> On Sep 26, 2016, at 1:57 PM, Gary Gregory <ga...@gmail.com> wrote:
>
> Please allow me to restate the use case I have for the CollectionAppender,
> which is separate from any Flume-based or Syslog-based solution, use cases
> I also have. (Well, I have a Syslog use case, and whether or not Flume is in
> the picture will really be a larger discussion in my organization due to
> the requirement to run a Flume Agent.)
>
> A program (like a JDBC driver already using Log4j) communicates with
> another (like a DBMS, not written in Java). The client and server
> communicate over a proprietary socket protocol. The client sends a list of
> buffers (in one go) to the server to perform one or more operations. One
> kind of buffer this protocol defines is a log buffer (where each log event
> is serialized in a non-Java format.) This allows each communication from
> the client to the server to say "This is what's happened up to now". What
> the server does with the log buffers is not important for this discussion.
>
> What is important to note is that the log buffer and other buffers go to
> the server in one BLOB, which is why I cannot (in this use case) send log
> events by themselves anywhere.
>
> I see that something (a CollectionAppender) must collect log events until
> the client is ready to serialize them and send them to the server. Once the
> events are drained out of the Appender (in one go by just getting the
> collection), events can collect in a new collection. A synchronous drain
> operation would create a new collection and return the old one.
>
> The question becomes: What kind of temporary location can the client use
> to buffer log events until drain time? A Log4j Appender is a natural place
> to collect log events since the driver uses Log4j. The driver will make it
> its business to drain the appender and work with the events at the right time.
> I am thinking that the Log4j Appender part is generic enough for inclusion
> in Log4j.
>
> Further thoughts?
>
> Thank you all for reading this far!
> Gary
>
> On Sun, Sep 25, 2016 at 1:20 PM, Ralph Goers <ra...@dslextreme.com>
> wrote:
>
>> I guess I am not understanding your use case quite correctly. I am
>> thinking you have a driver that is logging and you want those logs
>> delivered to some other location to actually be written.  If that is your
>> use case then the driver needs a log4j2.xml that configures the
>> FlumeAppender with either the memory or file channel (depending on your
>> needs) and points to the server(s) that is/are to receive the events. The
>> FlumeAppender handles sending them in batches with whatever size you want
>> (but will send them in smaller amounts if they are in the channel too
>> long). Of course you would need the log4j-flume and flume jars. So on the
>> driver side you wouldn’t need to write anything, just configure the
>> appender and make sure the jars are there.
>>
>> For the server that receives them you would also need Flume. Normally
>> this would be a standalone component, but it really wouldn’t be hard to
>> incorporate it into some other application. The only thing you would have
>> to write would be the sink that writes the events to the database or
>> whatever. To incorporate it into an application you would have to look at
>> the main() method of flume and convert that to be a thread that you kick off.
>>
>> Ralph
>>
>>
>>
>> On Sep 25, 2016, at 12:01 PM, Gary Gregory <ga...@gmail.com>
>> wrote:
>>
>> Hi Ralph,
>>
>> Thanks for your feedback. Flume is great in the scenarios that do not
>> involve sending a log buffer from the driver itself.
>>
>> I can't require a Flume Agent to be running 'on the side' for the use
>> case where the driver chains a log buffer at the end of the train of
>> database IO buffers. For completeness, talking about this Flume scenario, if
>> I read you right, I also would need to write a custom Flume sink, which
>> would also be in memory, until the driver is ready to drain it. Or, I could
>> query some other 'safe' and 'reliable' Flume sink that the driver could
>> then drain of events when it needs to.
>>
>> Narrowing down on the use case where the driver chains a log buffer at
>> the end of the train of database IO buffers, I think I'll have to see about
>> converting the Log4j ListAppender into a more robust and flexible version.
>> I think I'll call it a CollectionAppender and allow various Collection
>> implementations to be plugged in.
>>
>> Gary
>>
>> Gary
>>
>> On Sat, Sep 24, 2016 at 3:44 PM, Ralph Goers <ra...@dslextreme.com>
>> wrote:
>>
>>> If you are buffering events in memory you run the risk of losing events
>>> if something should fail.
>>>
>>> That said, if I had your requirements I would use the FlumeAppender. It
>>> has either an in-memory option to buffer as you are suggesting or it can
>>> write to a local file to prevent data loss if that is a requirement. It
>>> already has the configuration options you are looking for and has been well
>>> tested. The only downside is that you need to have either a Flume instance
>>> receiving the messages or something that can receive Flume events over
>>> Avro, but it is easier just to use Flume and write a custom sink to do what
>>> you want with the data.
>>>
>>> Ralph
>>>
>>> On Sep 24, 2016, at 3:13 PM, Gary Gregory <ga...@gmail.com>
>>> wrote:
>>>
>>> Hi All,
>>>
>>> I can't believe it, but through a convoluted use-case, I actually need
>>> an in-memory list appender, very much like our test-only ListAppender.
>>>
>>> The requirement is as follows.
>>>
>>> We have a JDBC driver and matching proprietary database that specializes
>>> in data virtualization of mainframe resources like DB2, VSAM, IMS, and all
>>> sorts of non-SQL data sources (http://www.rocketsoftware.com
>>> /products/rocket-data/rocket-data-virtualization)
>>>
>>> The high level requirement is to merge the driver log into the server's
>>> log for full end-to-end traceability and debugging.
>>>
>>> When the driver is running on the z/OS mainframe, it can be configured
>>> with a z/OS specific Appender that can talk to the server log module
>>> directly.
>>>
>>> When the driver is running elsewhere, it can talk to the database via a
>>> Syslog socket Appender. This requires more set up on the server side and
>>> for the server to do special magic to know how the incoming log events
>>> match up with server operations. Tricky.
>>>
>>> The customer should also be able to configure the driver such that
>>> anytime the driver communicates to the database, it sends along whatever
>>> log events have accumulated since the last client-server roundtrip. This
>>> allows the server to match exactly the connection and operations the client
>>> performed with the server's own logging.
>>>
>>> In order to do that I need to buffer all log events in an Appender and
>>> when it's time, I need to get the list of events and reset the appender to
>>> a new empty list so events can keep accumulating.
>>>
>>> My proposal is to turn our ListAppender into such an appender.
>>> For sanity, the appender could be configured with various sizing policies:
>>>
>>> - open: the list grows unbounded
>>> - closed: the list grows to a given size and _new_ events are dropped on
>>> the floor beyond that
>>> - latest: the list grows to a given size and _old_ events are dropped on
>>> the floor beyond that
>>>
>>> Thoughts?
>>>
>>> Gary
>>>
>>> --
>>> E-Mail: garydgregory@gmail.com | ggregory@apache.org
>>> Java Persistence with Hibernate, Second Edition
>>> <http://www.manning.com/bauer3/>
>>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>>> Spring Batch in Action <http://www.manning.com/templier/>
>>> Blog: http://garygregory.wordpress.com
>>> Home: http://garygregory.com/
>>> Tweet! http://twitter.com/GaryGregory
>>>
>>>
>>>
>>
>>
>> --
>> E-Mail: garydgregory@gmail.com | ggregory@apache.org
>> Java Persistence with Hibernate, Second Edition
>> <http://www.manning.com/bauer3/>
>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>> Spring Batch in Action <http://www.manning.com/templier/>
>> Blog: http://garygregory.wordpress.com
>> Home: http://garygregory.com/
>> Tweet! http://twitter.com/GaryGregory
>>
>>
>>
>
>
> --
> E-Mail: garydgregory@gmail.com | ggregory@apache.org
> Java Persistence with Hibernate, Second Edition
> <http://www.manning.com/bauer3/>
> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
> Spring Batch in Action <http://www.manning.com/templier/>
> Blog: http://garygregory.wordpress.com
> Home: http://garygregory.com/
> Tweet! http://twitter.com/GaryGregory
>
>
>


-- 
E-Mail: garydgregory@gmail.com | ggregory@apache.org
Java Persistence with Hibernate, Second Edition
<http://www.manning.com/bauer3/>
JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
Spring Batch in Action <http://www.manning.com/templier/>
Blog: http://garygregory.wordpress.com
Home: http://garygregory.com/
Tweet! http://twitter.com/GaryGregory

Re: In memory appender

Posted by Ralph Goers <ra...@dslextreme.com>.
Well, the key to this is “proprietary socket protocol”.  Today, the Flume appender does everything you want except that it is hardwired to use the Avro RpcClient to send a batch of Flume events. If you need some other protocol you would need to write a new variation of the FlumeManager that sends the data however you want.  In that case your server wouldn’t need to know anything about Flume as all you would be doing is using Flume to handle the event buffering.
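
For comparison, the stock setup is just configuration (element and
attribute names as on the Log4j 2 manual's Flume Appender page; the host,
port, and dataDir values here are made up):

    <Flume name="eventLogger" type="persistent" dataDir="./flumeData">
      <Agent host="dbms.example.com" port="8800"/>
      <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="driver"/>
    </Flume>

type="avro" buffers in memory and ships batches over Avro RPC, while
type="persistent" first writes events to a local store so they survive a
crash; that is the memory-versus-file trade-off discussed earlier in the
thread.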

I really think writing your own CollectionAppender is a very bad idea. Flume has already implemented it, it works, and isn’t trivial to build from scratch.

Ralph


> On Sep 26, 2016, at 1:57 PM, Gary Gregory <ga...@gmail.com> wrote:
> 
> Please allow me to restate the use case I have for the CollectionAppender, which is separate from any Flume-based or Syslog-based solution, use cases I also have. (Well, I have a Syslog use case, and whether or not Flume is in the picture will really be a larger discussion in my organization due to the requirement to run a Flume Agent.)
> 
> A program (like a JDBC driver already using Log4j) communicates with another (like a DBMS, not written in Java). The client and server communicate over a proprietary socket protocol. The client sends a list of buffers (in one go) to the server to perform one or more operations. One kind of buffer this protocol defines is a log buffer (where each log event is serialized in a non-Java format.) This allows each communication from the client to the server to say "This is what's happened up to now". What the server does with the log buffers is not important for this discussion.
> 
> What is important to note is that the log buffer and other buffers go to the server in one BLOB, which is why I cannot (in this use case) send log events by themselves anywhere.
> 
> I see that something (a CollectionAppender) must collect log events until the client is ready to serialize them and send them to the server. Once the events are drained out of the Appender (in one go by just getting the collection), events can collect in a new collection. A synchronous drain operation would create a new collection and return the old one.
> 
> The question becomes: What kind of temporary location can the client use to buffer log events until drain time? A Log4j Appender is a natural place to collect log events since the driver uses Log4j. The driver will make it its business to drain the appender and work with the events at the right time. I am thinking that the Log4j Appender part is generic enough for inclusion in Log4j. 
> 
> Further thoughts?
> 
> Thank you all for reading this far!
> Gary
> 
> On Sun, Sep 25, 2016 at 1:20 PM, Ralph Goers <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
> I guess I am not understanding your use case quite correctly. I am thinking you have a driver that is logging and you want those logs delivered to some other location to actually be written.  If that is your use case then the driver needs a log4j2.xml that configures the FlumeAppender with either the memory or file channel (depending on your needs) and points to the server(s) that is/are to receive the events. The FlumeAppender handles sending them in batches with whatever size you want (but will send them in smaller amounts if they are in the channel too long). Of course you would need the log4j-flume and flume jars. So on the driver side you wouldn’t need to write anything, just configure the appender and make sure the jars are there.
> 
> For the server that receives them you would also need Flume. Normally this would be a standalone component, but it really wouldn’t be hard to incorporate it into some other application. The only thing you would have to write would be the sink that writes the events to the database or whatever. To incorporate it into an application you would have to look at the main() method of flume and convert that to be a thread that you kick off.
> 
> Ralph
> 
> 
> 
>> On Sep 25, 2016, at 12:01 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>> 
>> Hi Ralph,
>> 
>> Thanks for your feedback. Flume is great in the scenarios that do not involve sending a log buffer from the driver itself.
>> 
>> I can't require a Flume Agent to be running 'on the side' for the use case where the driver chains a log buffer at the end of the train of database IO buffers. For completeness, talking about this Flume scenario, if I read you right, I also would need to write a custom Flume sink, which would also be in memory, until the driver is ready to drain it. Or, I could query some other 'safe' and 'reliable' Flume sink that the driver could then drain of events when it needs to.
>> 
>> Narrowing down on the use case where the driver chains a log buffer at the end of the train of database IO buffers, I think I'll have to see about converting the Log4j ListAppender into a more robust and flexible version. I think I'll call it a CollectionAppender and allow various Collection implementations to be plugged in.
>> 
>> Gary
>> 
>> Gary
>> 
>> On Sat, Sep 24, 2016 at 3:44 PM, Ralph Goers <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>> If you are buffering events in memory you run the risk of losing events if something should fail. 
>> 
>> That said, if I had your requirements I would use the FlumeAppender. It has either an in-memory option to buffer as you are suggesting or it can write to a local file to prevent data loss if that is a requirement. It already has the configuration options you are looking for and has been well tested. The only downside is that you need to have either a Flume instance receiving the messages or something that can receive Flume events over Avro, but it is easier just to use Flume and write a custom sink to do what you want with the data.
>> 
>> Ralph
>> 
>>> On Sep 24, 2016, at 3:13 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>>> 
>>> Hi All,
>>> 
>>> I can't believe it, but through a convoluted use-case, I actually need an in-memory list appender, very much like our test-only ListAppender.
>>> 
>>> The requirement is as follows.
>>> 
>>> We have a JDBC driver and matching proprietary database that specializes in data virtualization of mainframe resources like DB2, VSAM, IMS, and all sorts of non-SQL data sources (http://www.rocketsoftware.com/products/rocket-data/rocket-data-virtualization <http://www.rocketsoftware.com/products/rocket-data/rocket-data-virtualization>) 
>>> 
>>> The high level requirement is to merge the driver log into the server's log for full end-to-end traceability and debugging.
>>> 
>>> When the driver is running on the z/OS mainframe, it can be configured with a z/OS specific Appender that can talk to the server log module directly.
>>> 
>>> When the driver is running elsewhere, it can talk to the database via a Syslog socket Appender. This requires more set up on the server side and for the server to do special magic to know how the incoming log events match up with server operations. Tricky.
>>> 
>>> The customer should also be able to configure the driver such that anytime the driver communicates to the database, it sends along whatever log events have accumulated since the last client-server roundtrip. This allows the server to match exactly the connection and operations the client performed with the server's own logging.
>>> 
>>> In order to do that I need to buffer all log events in an Appender and when it's time, I need to get the list of events and reset the appender to a new empty list so events can keep accumulating.
>>> 
>>> My proposal is to turn our ListAppender into such an appender. For sanity, the appender could be configured with various sizing policies:
>>> 
>>> - open: the list grows unbounded
>>> - closed: the list grows to a given size and _new_ events are dropped on the floor beyond that
>>> - latest: the list grows to a given size and _old_ events are dropped on the floor beyond that
>>> 
>>> Thoughts?
>>> 
>>> Gary
>>> 
>>> -- 
>>> E-Mail: garydgregory@gmail.com <ma...@gmail.com> | ggregory@apache.org  <ma...@apache.org>
>>> Java Persistence with Hibernate, Second Edition <http://www.manning.com/bauer3/>
>>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>>> Spring Batch in Action <http://www.manning.com/templier/>
>>> Blog: http://garygregory.wordpress.com <http://garygregory.wordpress.com/> 
>>> Home: http://garygregory.com/ <http://garygregory.com/>
>>> Tweet! http://twitter.com/GaryGregory <http://twitter.com/GaryGregory>
>> 
>> 
>> 
>> -- 
>> E-Mail: garydgregory@gmail.com <ma...@gmail.com> | ggregory@apache.org  <ma...@apache.org>
>> Java Persistence with Hibernate, Second Edition <http://www.manning.com/bauer3/>
>> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
>> Spring Batch in Action <http://www.manning.com/templier/>
>> Blog: http://garygregory.wordpress.com <http://garygregory.wordpress.com/> 
>> Home: http://garygregory.com/ <http://garygregory.com/>
>> Tweet! http://twitter.com/GaryGregory <http://twitter.com/GaryGregory>
> 
> 
> 
> -- 
> E-Mail: garydgregory@gmail.com <ma...@gmail.com> | ggregory@apache.org  <ma...@apache.org>
> Java Persistence with Hibernate, Second Edition <http://www.manning.com/bauer3/>
> JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
> Spring Batch in Action <http://www.manning.com/templier/>
> Blog: http://garygregory.wordpress.com <http://garygregory.wordpress.com/> 
> Home: http://garygregory.com/ <http://garygregory.com/>
> Tweet! http://twitter.com/GaryGregory <http://twitter.com/GaryGregory>

Re: In memory appender

Posted by Gary Gregory <ga...@gmail.com>.
Please allow me to restate the use case I have for the CollectionAppender,
which is separate from any Flume-based or Syslog-based solution. (I also
have those use cases; well, I have a Syslog use case, and whether or not
Flume is in the picture will be a larger discussion in my organization due
to the requirement to run a Flume Agent.)

A program (like a JDBC driver already using Log4j) communicates with
another (like a DBMS, not written in Java). The client and server
communicate over a proprietary socket protocol. The client sends a list of
buffers (in one go) to the server to perform one or more operations. One
kind of buffer this protocol defines is a log buffer (where each log event
is serialized in a non-Java format). This allows each communication from
the client to the server to say "This is what's happened up to now". What
the server does with the log buffers is not important for this discussion.

What is important to note is that the log buffer and other buffers go to
the server in one BLOB, which is why I cannot (in this use case) send log
events by themselves anywhere.

I see that something (a CollectionAppender) must collect log events until
the client is ready to serialize them and send them to the server. Once the
events are drained out of the Appender (in one go by just getting the
collection), events can collect in a new collection. A synchronous drain
operation would create a new collection and return the old one.

The question becomes: What kind of temporary location can the client use to
buffer log events until drain time? A Log4j Appender is a natural place to
collect log events since the driver uses Log4j. The driver will make it its
business to drain the appender and work with the events at the right time.
I am thinking that the Log4j Appender part is generic enough for inclusion
in Log4j.
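
To make this concrete, here is a minimal sketch of the collect-then-drain
shape I have in mind. Everything below is hypothetical (the class name, the
unbounded queue, the missing plugin factory and sizing policies), and a
real version would also need to copy mutable/reusable log events on append:

import java.util.Collection;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.appender.AbstractAppender;

// Hypothetical sketch, not an existing Log4j class.
public final class CollectionAppender extends AbstractAppender {

    // Swapped wholesale on each drain; volatile so appending threads see
    // the fresh collection promptly. An event appended during the swap may
    // still land in the drained collection, which is fine for this use case.
    private volatile Collection<LogEvent> events = new ConcurrentLinkedQueue<>();

    public CollectionAppender(final String name) {
        super(name, null, null, true);
    }

    @Override
    public void append(final LogEvent event) {
        events.add(event);
    }

    // Synchronous drain: install a fresh collection and return the old one
    // to the caller in one go.
    public synchronized Collection<LogEvent> drain() {
        final Collection<LogEvent> drained = events;
        events = new ConcurrentLinkedQueue<>();
        return drained;
    }
}

At send time, the driver would call drain() and serialize the returned
events into the log buffer it chains onto the train of database IO buffers.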

Further thoughts?

Thank you all for reading this far!
Gary

Re: In memory appender

Posted by Ralph Goers <ra...@dslextreme.com>.
I guess I am not understanding your use case quite correctly. I am thinking you have a driver that is logging and you want those logs delivered to some other location to actually be written.  If that is your use case then the driver needs a log4j2.xml that configures the FlumeAppender with either the memory or file channel (depending on your needs) and points to the server(s) that is/are to receive the events. The FlumeAppender handles sending them in batches with whatever size you want (but will send them in smaller amounts if they are in the channel too long). Of course you would need the log4j-flume and flume jars. So on the driver side you wouldn’t need to write anything, just configure the appender and make sure the jars are there.
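
For example, a minimal log4j2.xml might look something like this (the host,
port, and layout settings are placeholders to adapt; use type="persistent"
plus a dataDir attribute if you want the file channel instead of the
in-memory one):

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn">
  <Appenders>
    <!-- type="Avro" sends from an in-memory buffer; type="persistent"
         writes events to a local file channel first. -->
    <Flume name="FlumeOut" type="Avro" compress="true" batchSize="100">
      <Agent host="flume-host.example.com" port="8800"/>
      <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="driver"/>
    </Flume>
  </Appenders>
  <Loggers>
    <Root level="debug">
      <AppenderRef ref="FlumeOut"/>
    </Root>
  </Loggers>
</Configuration>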

For the server that receives them you would also need Flume. Normally this would be a standalone component, but it really wouldn’t be hard to incorporate it into some other application. The only thing you would have to write would be the sink that writes the events to the database or whatever. To incorporate it into an application you would have to look at the main() method of Flume and convert that to be a thread that you kick off.

Ralph

Re: In memory appender

Posted by Gary Gregory <ga...@gmail.com>.
Hi Ralph,

Thanks for your feedback. Flume is great in the scenarios that do not
involve sending a log buffer from the driver itself.

I can't require a Flume Agent to be running 'on the side' for the use case
where the driver chains a log buffer at the end of the train of database IO
buffers. For completeness, talking about this Flume scenario: if I read you
right, I would also need to write a custom Flume sink, which would also
buffer in memory until the driver is ready to drain it. Or, I could query
some other 'safe' and 'reliable' Flume sink that the driver could then
drain of events when it needs to.

Narrowing down on the use case where the driver chains a log buffer at the
end of the train of database IO buffers, I think I'll have to see about
converting the Log4j ListAppender into a more robust and flexible version.
I think I'll call it a CollectionAppender and allow various Collection
implementations to be plugged in.

Gary

Re: In memory appender

Posted by Ralph Goers <ra...@dslextreme.com>.
If you are buffering events in memory you run the risk of losing events if something should fail. 

That said, if I had your requirements I would use the FlumeAppender. It has either an in-memory option to buffer as you are suggesting or it can write to a local file to prevent data loss if that is a requirement. It already has the configuration options you are looking for and has been well tested. The only downside is that you need to have either a Flume instance receiving the messages or something that can receive Flume events over Avro, but it is easier just to use Flume and write a custom sink to do what you want with the data.

Ralph


Re: In memory appender

Posted by Matt Sicker <bo...@gmail.com>.
AsyncAppender is just a wrapper around any other Appender. So I don't see
why it couldn't be used here.
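
For instance, a configuration fragment along these lines (the appender
names are just for illustration) would queue events in memory in front of
whatever appender does the real work:

<Appenders>
  <!-- Stand-in target; this could be the collecting appender discussed
       in this thread. -->
  <Console name="Target"/>
  <!-- AsyncAppender queues events and hands them to the referenced
       appender on a background thread. -->
  <Async name="AsyncWrapper" bufferSize="1024">
    <AppenderRef ref="Target"/>
  </Async>
</Appenders>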

On 26 September 2016 at 04:50, Mikael Ståldal <mi...@magine.com>
wrote:

> Could AsyncAppender be adapted for this use case?


-- 
Matt Sicker <bo...@gmail.com>

Re: In memory appender

Posted by Mikael Ståldal <mi...@magine.com>.
Could AsyncAppender be adapted for this use case?


Re: In memory appender

Posted by Gary Gregory <ga...@gmail.com>.
On Sun, Sep 25, 2016 at 3:44 PM, Remko Popma <re...@gmail.com> wrote:

> That sounds like quite a unique use case!
>
> Would it make sense to go through a few iterations at your company until
> you have something that you're really happy with (to use and to support)
> before publishing it in Log4j?
>

Right now I am experimenting and am discussing it here since our existing
ListAppender is like a slightly simpler version of what I need.
ListAppender does contain more test-specific methods, so it could be
refactored to use a new CollectionAppender. I'll elaborate in a reply to a
different message on this thread.

Gary



Re: In memory appender

Posted by Remko Popma <re...@gmail.com>.
That sounds like quite a unique use case! 

Would it make sense to go through a few iterations at your company until you have something that you're really happy with (to use and to support) before publishing it in Log4j?

Remko

Sent from my iPhone
