Posted to log4j-dev@logging.apache.org by Apache <ra...@dslextreme.com> on 2017/02/05 16:40:12 UTC
Logback performance improvements
Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a few optimizations we can make.
Ralph
Re: Logback performance improvements
Posted by Remko Popma <re...@gmail.com>.
I will try taking a look when I get home from work tonight.
Ceki reported some strange numbers about Log4j 1 outperforming Log4j2 before, and when we showed results from 3 or 4 environments that disproved his claims, he never really explained how that happened.
Not sure if there's some issue with his test setup... I would definitely double check his numbers.
Sent from my iPhone
> On Feb 9, 2017, at 14:12, Apache <ra...@dslextreme.com> wrote:
>
> Logback 1.2 has been released. I just ran our performance benchmarks against it and I am mystified. Logback 1.2 is coming out almost 4 times slower than 1.1.10. Can someone please verify my results?
>
> I am running them with
>
> java -jar target/benchmarks.jar ".*FileAppenderBenchmark.*" -f 1 -wi 10 -i 10 -t 4 -tu ms
>
> This is on my MacBook Pro (mid-2015) with a 2.8 GHz Intel Core i7 and a fast SSD.
>
> Benchmark Mode Samples Score Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 94.052 ± 13.645 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 838.130 ± 11.205 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1844.284 ± 72.803 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 2041.727 ± 1260.868 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1879.691 ± 93.162 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 287.678 ± 8.516 ops/ms
>
>
> This is on a Mac Pro (mid-2013) with a 3.5 GHz Intel 6 Core Xeon E5 with an external USB SSD.
>
> Benchmark Mode Samples Score Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 67.616 ± 8.256 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 607.507 ± 87.307 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 668.316 ± 124.457 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3174.031 ± 149.277 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 839.811 ± 154.662 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 175.482 ± 0.869 ops/ms
>
> This is also on the same Mac Pro (mid-2013) but using the faster internal SSD.
>
> Benchmark Mode Samples Score Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 75.435 ± 0.287 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 735.162 ± 5.182 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1346.217 ± 13.955 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 2439.903 ± 355.553 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1507.775 ± 9.600 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 184.796 ± 2.052 ops/ms
>
> Finally, this is on my old MacBook Pro (Late-2011) 2.5GHz Intel Core i5 with an internal hard drive.
>
> Benchmark Mode Samples Score Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 39.181 ± 3.315 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 485.160 ± 98.892 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 551.578 ± 120.902 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1720.083 ± 1113.734 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 561.398 ± 226.339 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 161.530 ± 4.585 ops/ms
>
> The news page at Logback says that it has major performance improvements but these numbers are 3-4 times slower than what I got with 1.1.10.
>
> Ralph
>
>
>
>
>
>
>
>
>
>> On Feb 8, 2017, at 10:31 AM, Apache <ra...@dslextreme.com> wrote:
>>
>> I’m pretty sure the problem we have is that a) we are synchronizing many methods and b) we are synchronizing more than just the write. Unfortunately, I can’t figure out how to reduce that based on how dispersed the code has gotten.
>>
>> Ralph
>>
>>> On Feb 8, 2017, at 10:14 AM, Apache <ra...@dslextreme.com> wrote:
>>>
>>> I tried to modify FileManager to just use a BufferedOutputStream but discovered I couldn’t as the layouts now require the ByteBuffer.
>>>
>>> Ralph
>>>
>>>> On Feb 8, 2017, at 12:14 AM, Apache <ra...@dslextreme.com> wrote:
>>>>
>>>> The append method isn’t synchronized but the writeBytes method acquires a lock. His code is actually a lot simpler than ours in that it just uses a BufferedOutputStream and he only obtains the lock when he is writing to it.
>>>>
>>>> Ralph
>>>>
>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>
>>>>> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>>>>>
>>>>> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
>>>>>> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
>>>>>>
>>>>>>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com> wrote:
>>>>>>> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>>>>
>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>
>>>>>>>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison against the logging frameworks.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>>>>>>>
>>>>>>>>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>> Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>>>>>>>>>
>>>>>>>>>>> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf to compare your results to Ceki’s. You also should capture the CPU benchmark speed of your processor and the speed of your hard drive. I used the Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>>>>>>
>>>>>>>>>>>> Ralph
>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>>>
>>>>>>>>>>>>> Gary
>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a few optimizations we can make.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Matt Sicker <bo...@gmail.com>
>>>>
>>>
>>
>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
Logback 1.2 has been released. I just ran our performance benchmarks against it and I am mystified. Logback 1.2 is coming out almost 4 times slower than 1.1.10. Can someone please verify my results?
I am running them with
java -jar target/benchmarks.jar ".*FileAppenderBenchmark.*" -f 1 -wi 10 -i 10 -t 4 -tu ms
This is on my MacBook Pro (mid-2015) with a 2.8 GHz Intel Core i7 and a fast SSD.
Benchmark Mode Samples Score Error Units
o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 94.052 ± 13.645 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 838.130 ± 11.205 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1844.284 ± 72.803 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 2041.727 ± 1260.868 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1879.691 ± 93.162 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 287.678 ± 8.516 ops/ms
This is on a Mac Pro (mid-2013) with a 3.5 GHz Intel 6 Core Xeon E5 with an external USB SSD.
Benchmark Mode Samples Score Error Units
o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 67.616 ± 8.256 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 607.507 ± 87.307 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 668.316 ± 124.457 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3174.031 ± 149.277 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 839.811 ± 154.662 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 175.482 ± 0.869 ops/ms
This is also on the same Mac Pro (mid-2013) but using the faster internal SSD.
Benchmark Mode Samples Score Error Units
o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 75.435 ± 0.287 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 735.162 ± 5.182 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1346.217 ± 13.955 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 2439.903 ± 355.553 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1507.775 ± 9.600 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 184.796 ± 2.052 ops/ms
Finally, this is on my old MacBook Pro (Late-2011) 2.5GHz Intel Core i5 with an internal hard drive.
Benchmark Mode Samples Score Error Units
o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 39.181 ± 3.315 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 485.160 ± 98.892 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 551.578 ± 120.902 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1720.083 ± 1113.734 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 561.398 ± 226.339 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 161.530 ± 4.585 ops/ms
The news page at Logback says that it has major performance improvements but these numbers are 3-4 times slower than what I got with 1.1.10.
Ralph
> On Feb 8, 2017, at 10:31 AM, Apache <ra...@dslextreme.com> wrote:
>
> I’m pretty sure the problem we have is that a) we are synchronizing many methods and b) we are synchronizing more than just the write. Unfortunately, I can’t figure out how to reduce that based on how dispersed the code has gotten.
>
> Ralph
>
>> On Feb 8, 2017, at 10:14 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>
>> I tried to modify FileManager to just use a BufferedOutputStream but discovered I couldn’t as the layouts now require the ByteBuffer.
>>
>> Ralph
>>
>>> On Feb 8, 2017, at 12:14 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>
>>> The append method isn’t synchronized but the writeBytes method acquires a lock. His code is actually a lot simpler than ours in that it just uses a BufferedOutputStream and he only obtains the lock when he is writing to it.
>>>
>>> Ralph
>>>
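[Editor's note: the approach described above — a plain BufferedOutputStream with the lock held only around the actual write — can be sketched roughly as follows. The class and method names are hypothetical and come from neither framework; this is only an illustration of the locking shape, writing to an in-memory sink for demonstration.]

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: encode the event outside any lock, and synchronize
// only the write into the shared BufferedOutputStream.
public class NarrowLockAppender {
    private final ByteArrayOutputStream sink = new ByteArrayOutputStream();
    private final BufferedOutputStream out = new BufferedOutputStream(sink, 256 * 1024);
    private final Object writeLock = new Object();

    public void append(String event) {
        // Layout/encoding happens per-thread, outside the lock.
        byte[] bytes = (event + "\n").getBytes(StandardCharsets.UTF_8);
        // Only the actual write into the shared buffer is guarded.
        synchronized (writeLock) {
            try {
                out.write(bytes);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }

    public int flushedSize() {
        synchronized (writeLock) {
            try {
                out.flush();
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            return sink.size();
        }
    }

    public static void main(String[] args) {
        NarrowLockAppender appender = new NarrowLockAppender();
        appender.append("hello");
        appender.append("world");
        System.out.println(appender.flushedSize()); // 12 bytes ("hello\n" + "world\n")
    }
}
```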
>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>
>>>> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>>>>
>>>> On 6 February 2017 at 18:18, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
>>>>
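[Editor's note: for reference, a minimal FileChannel-based variant might look like the following. This is a sketch of the idea only, not a drop-in appender; the class name is made up. FileChannel is itself thread-safe, and a channel opened with APPEND positions each write at the end of the file.]

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical sketch of a FileChannel-backed appender.
public class ChannelAppender implements AutoCloseable {
    private final FileChannel channel;

    public ChannelAppender(Path path) throws IOException {
        this.channel = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND);
    }

    public void append(String event) throws IOException {
        ByteBuffer buf = ByteBuffer.wrap((event + "\n").getBytes(StandardCharsets.UTF_8));
        while (buf.hasRemaining()) {
            channel.write(buf); // write() may consume fewer bytes than requested, hence the loop
        }
    }

    @Override
    public void close() throws IOException {
        channel.close();
    }

    public static void main(String[] args) throws IOException {
        Path path = Files.createTempFile("channel-appender", ".log");
        try (ChannelAppender appender = new ChannelAppender(path)) {
            appender.append("ab");
            appender.append("cd");
        }
        System.out.println(Files.size(path)); // 6 bytes: "ab\ncd\n"
    }
}
```

Note that unlike a BufferedOutputStream, this variant does no user-space buffering, so each append is a system call; whether that wins or loses against buffered writes is exactly the kind of thing the benchmarks in this thread would need to measure.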
>>>> On 6 February 2017 at 17:50, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>
>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
>>>>
>>>> Ralph
>>>>
>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>
>>>>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison against the logging frameworks.
>>>>>
>>>>> Ralph
>>>>>
>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>
>>>>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>>>>
>>>>>> On 6 February 2017 at 07:54, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>> Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>
>>>>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>>>>>
>>>>>>> On 5 February 2017 at 16:06, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>> You should run the code at https://github.com/ceki/logback-perf to compare your results to Ceki’s. You also should capture the CPU benchmark speed of your processor and the speed of your hard drive. I used the Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>
>>>>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>>>>
>>>>>>>> Let me know what args/command line...
>>>>>>>>
>>>>>>>> Gary
>>>>>>>>
>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>
>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>>
>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>
>>>>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a few optimizations we can make.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>
>>>>
>>>>
>>>> --
>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>
>>
>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
Feel free to post them on the Logback user list. There is a performance thread there where Ceki asked for that feedback.
Ralph
> On Feb 9, 2017, at 12:45 PM, Matt Sicker <bo...@gmail.com> wrote:
>
> I ran the logback-perf repo on the same AWS instance. Here's the CSV data. It appears as soon as more than one thread comes into play, log4j2 has better scores.
>
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",1,10,964.600470,279.139021,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",1,10,1274.682156,6.179197,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",1,10,1257.026405,32.898682,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",1,10,1363.683525,41.884725,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",2,10,687.304803,13.266922,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",2,10,1386.596198,207.305249,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",2,10,1579.884762,24.098318,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",2,10,953.138212,99.156775,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",4,10,670.442970,15.049614,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",4,10,1218.543593,18.234077,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",4,10,1309.092881,31.547936,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",4,10,845.168355,24.547390,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",8,10,689.805339,7.415023,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",8,10,1196.396592,19.360776,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",8,10,1319.477318,10.601260,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",8,10,816.608726,25.603234,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",16,10,687.623660,16.114008,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",16,10,1203.649145,8.835115,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",16,10,1266.241778,7.564414,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",16,10,789.507183,9.866592,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",32,10,690.252411,11.587858,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",32,10,1514185.478492,126804.168771,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",32,10,1264.049209,28.309088,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",32,10,754.828687,14.865909,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",64,10,670.498518,11.147198,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",64,10,1293.301940,22.687086,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",64,10,1380.527892,14.907542,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",64,10,750.528159,11.356281,"ops/ms"
>
> On 9 February 2017 at 13:02, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
> You might try running Ceki’s benchmark project on AWS and publish those numbers here. He also asked people to publish numbers on the Logback user’s list so he can add them to his spreadsheet.
>
> From your numbers and the numbers I’ve been getting, it is clear to me that the SSDs in Apple’s MacBooks are pretty awesome. From the hardware Remko is listing I’d say his machine is about as old as my other MacBook except that he has an SSD that is slightly faster than my hard drive.
>
> Ralph
>
>> On Feb 9, 2017, at 11:12 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>
>> Ran on an AWS instance (CentOS 7.2), cpuinfo says 8-core Intel(R) Xeon(R) CPU E5-2666 v3 @ 2.90GHz, not super sure about all the params involved in making the instance, but here's some data (same strangeness with MMF):
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 86.867 ± 4.502 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 671.156 ± 7.099 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1221.814 ± 22.130 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1178.407 ± 960.141 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1220.746 ± 34.421 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 898.122 ± 8.128 ops/ms
>>
>> On 9 February 2017 at 12:02, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>> Run on a MacBook Pro (Retina, 15-inch, Mid 2015) 2.5 GHz Intel Core i7. Can find out more hardware specs if needed. I also noticed that the memory mapped file starts out fast and slows down over time (somewhat seen by the error value here).
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 96.540 ± 7.875 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 766.286 ± 11.461 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1787.620 ± 36.695 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1506.588 ± 956.354 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1934.966 ± 50.089 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1285.066 ± 12.674 ops/ms
>>
>> On 9 February 2017 at 11:44, Remko Popma <remko.popma@gmail.com <ma...@gmail.com>> wrote:
>> My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit), i5-3317u CPU @ 1.70GHz (dual core with hyperthreading for 4 virtual cores), SSD hard disk:
>>
>> java -jar log4j-perf/target/benchmarks.jar ".*FileAppenderBenchmark.*" -f 1 -wi 10 -i 20 -t 4 -tu ms
>>
>> # Run complete. Total time: 00:03:58
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.646 ± 0.876 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 405.305 ± 6.596 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 751.949 ± 16.055 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1250.666 ± 168.757 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 728.743 ± 23.909 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 676.926 ± 19.518 ops/ms
>>
>> --------------------
>> Logback config without immediateFlush=false:
>>
>> # Run complete. Total time: 00:03:44
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.949 ± 1.220 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 404.042 ± 8.450 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 690.393 ± 115.537 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1221.681 ± 82.205 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 823.059 ± 41.512 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 83.352 ± 11.911 ops/ms
>>
>>
>> On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <mikael.staldal@magine.com <ma...@magine.com>> wrote:
>> I guess that with a memory mapped file, you leave it to the OS to do the best it can, and you lose any direct control over how it is actually done.
>>
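[Editor's note: to illustrate the point above — with a memory-mapped file the "write" is just a memory store into the mapping, and when (or whether) the dirty pages reach disk is up to the OS unless force() is called explicitly. A rough standalone sketch, not taken from either framework:]

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Minimal memory-mapped write: put() is an ordinary memory store; the OS
// decides when the page hits disk unless force() is invoked.
public class MappedWrite {
    public static int writeMapped(Path path, String message) throws IOException {
        byte[] bytes = message.getBytes(StandardCharsets.UTF_8);
        try (FileChannel channel = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Mapping beyond the current size grows the file to the mapped length.
            MappedByteBuffer map = channel.map(FileChannel.MapMode.READ_WRITE, 0, bytes.length);
            map.put(bytes);  // no write() system call happens here
            map.force();     // explicitly flush the mapped pages to the device
        }
        return bytes.length;
    }

    public static void main(String[] args) throws IOException {
        Path path = Files.createTempFile("mmf-demo", ".log");
        System.out.println(writeMapped(path, "hello mmf")); // 9 bytes written
    }
}
```

The machine-to-machine variance seen in the log4j2MMF numbers in this thread is consistent with that: page-cache writeback policy differs by OS, disk, and memory pressure.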
>> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>> On my Mac Pro with the slower external SSD I now got:
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 73.739 ± 0.740 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 683.129 ± 9.407 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 991.293 ± 193.049 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3072.250 ± 63.475 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1056.256 ± 137.673 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 784.723 ± 153.226 ops/ms
>>
>> and on the same machine with the faster internal SSD I got:
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 74.661 ± 0.232 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 647.041 ± 2.994 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1333.887 ± 13.921 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3025.726 ± 210.414 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1433.620 ± 11.194 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1026.319 ± 13.347 ops/ms
>>
>> I will continue to run this on a few other configurations. I think I would also like to add the async appenders/loggers to this test so that one can see all the various options.
>>
>> It is really quite interesting to me to see how the memory mapped appender behaves so differently from one machine to another. I don’t know under what circumstances I would recommend using it though.
>>
>> Ralph
>>
>>
>>> On Feb 9, 2017, at 7:03 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>
>>> After modifying the configuration the new results on my laptop are:
>>>
>>> Benchmark Mode Samples Score Error Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 92.580 ± 3.698 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 828.707 ± 55.006 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1647.230 ± 125.682 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1465.002 ± 1284.943 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1765.340 ± 149.707 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1192.594 ± 57.777 ops/ms
>>>
>>> I will try the other machines later and post those results.
>>>
>>> Ralph
>>>
>>>
>>>> On Feb 9, 2017, at 5:22 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>
>>>> Ceki replied on Twitter that the immediateFlush option is now a parameter of the appender, not the encoder, so it looks like the config needs to be changed and the test rerun.
>>>>
>>>> Ralph
>>>>
>>>> On Feb 9, 2017, at 3:08 AM, Remko Popma <remko.popma@gmail.com <ma...@gmail.com>> wrote:
>>>>
>>>>> FYI, The write and flush methods in BufferedOutputStream are also synchronized, so we won't be able to do away with synchronization completely.
>>>>>
>>>>> In OutputStreamManager we synchronize multiple methods but these are nested calls. I thought reentrant synchronization had negligible overhead but I haven't measured this myself.
>>>>>
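[Editor's note: the reentrancy point can be shown in isolation — a synchronized method calling another synchronized method on the same object simply re-enters the monitor it already holds, so nested calls like those in OutputStreamManager cannot deadlock on themselves. A toy example with hypothetical names; the cost of such re-entry is generally small, but as noted above it is worth measuring rather than assuming:]

```java
// Demonstrates that Java intrinsic locks are reentrant: a synchronized
// method may call another synchronized method on the same object without
// blocking, because the thread already owns the monitor.
public class ReentrantDemo {
    private int writes;

    public synchronized void writeEvent() {
        flushIfNeeded(); // nested call re-acquires the same monitor: no deadlock
        writes++;
    }

    private synchronized void flushIfNeeded() {
        // already holding this object's monitor; re-entry is permitted
    }

    public synchronized int writeCount() {
        return writes;
    }

    public static void main(String[] args) {
        ReentrantDemo demo = new ReentrantDemo();
        demo.writeEvent();
        demo.writeEvent();
        System.out.println(demo.writeCount()); // prints 2, having never blocked
    }
}
```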
>>>>>
>>>>> Sent from my iPhone
>>>>>
>>>>> On Feb 9, 2017, at 2:31, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>
>>>>>> I’m pretty sure the problem we have is that a) we are synchronizing many methods and b) we are synchronizing more than just the write. Unfortunately, I can’t figure out how to reduce that based on how dispersed the code has gotten.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>> On Feb 8, 2017, at 10:14 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>
>>>>>>> I tried to modify FileManager to just use a BufferedOutputStream but discovered I couldn’t as the layouts now require the ByteBuffer.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>> On Feb 8, 2017, at 12:14 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>
>>>>>>>> The append method isn’t synchronized but the writeBytes method acquires a lock. His code is actually a lot simpler than ours in that it just uses a BufferedOutputStream and he only obtains the lock when he is writing to it.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>
>>>>>>>>> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>>>>>>>>>
>>>>>>>>> On 6 February 2017 at 18:18, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
>>>>>>>>>
>>>>>>>>> On 6 February 2017 at 17:50, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>>>>>>
>>>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>
>>>>>>>>>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison of the logging frameworks.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>>>>>>>>>
>>>>>>>>>>> On 6 February 2017 at 07:54, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>> Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>>>>>>>>>>
>>>>>>>>>>>> On 5 February 2017 at 16:06, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf to compare your results to Ceki’s. You should also capture the cpubenchmark speed of your processor and get the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>>>>>>
>>>>>>>>>>>> Ralph
>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>>>
>>>>>>>>>>>>> Gary
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>>>
>>>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>>>>>>>
>>>>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html <https://logback.qos.ch/news.html> and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>. I suspect we have a few optimizations we can make.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>
>>
>>
>>
>>
>> --
>>
>>
>> Mikael Ståldal
>> Senior software developer
>>
>> Magine TV
>> mikael.staldal@magine.com <ma...@magine.com>
>> Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com <http://www.magine.com/>
>>
>> Privileged and/or Confidential Information may be contained in this message. If you are not the addressee indicated in this message
>> (or responsible for delivery of the message to such a person), you may not copy or deliver this message to anyone. In such case,
>> you should destroy this message and kindly notify the sender by reply email.
>>
>>
>>
>>
>> --
>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>
>>
>>
>> --
>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>
>
>
>
> --
> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
Re: Logback performance improvements
Posted by Mikael Ståldal <mi...@magine.com>.
Let's think about what the relevant metric is for users. I guess that quite
a few users value minimal slowdown of the application over the maximal rate
of log events written out.
On Sun, Feb 12, 2017 at 3:41 PM, Remko Popma <re...@gmail.com> wrote:
> But I agree it is interesting to see that a simple (RAF)FileAppender will
> outperform async logging if the sustained logging rate is faster than the
> appender can handle for long enough to fill up the queue. I've heard people
> in the high performance computing world mention this but I hadn't seen
> actual numbers until now.
>
> It is good to gain an intuition on these things. Async Loggers (with LMAX
> Disruptor) performance is worse than a plain File but not terrible either.
> Interesting how LogbackAsyncFile seems to suffer more than
> log4j2AsyncAppender.
>
> And yes, JUL is so bad it is not funny any more. We did document that
> <https://logging.apache.org/log4j/2.x/performance.html#fileLoggingComparison>
> but perhaps we need to evangelize it more...
>
>
> On Sun, Feb 12, 2017 at 4:59 PM, Remko Popma <re...@gmail.com>
> wrote:
>
>> The queue is bigger but still fills up quickly because all the benchmarks
>> do is add events as quickly as possible and the FileAppender can't keep up
>> with that rate.
>>
>> Sent from my iPhone
>>
>> On Feb 12, 2017, at 16:19, Apache <ra...@dslextreme.com> wrote:
>>
>> I added the tests so you guys could run them and take a look. I have no
>> problem with the changes being reverted.
>>
>> As I think I said, I expected most of the async appenders to back up. I
>> expected them to be a bit slower, but I didn’t expect them to be as slow as
>> they are when the queue is full. I also don’t understand why AsyncLogger is
>> slower as it should have a large ring buffer if I understand correctly.
>>
>> Ralph
>>
>> On Feb 11, 2017, at 4:36 PM, Remko Popma <re...@gmail.com> wrote:
>>
>> I don't think it is a good idea to mix async tests with sync tests.
>>
>> We already have separate benchmarks for the various ways to log
>> asynchronously, if you want to compare them just run those also.
>>
>> JMH makes it super easy to build benchmarks but we have to be very
>> careful that the test really measures what we want to measure.
>>
>> Asynchronous logging has two "states", queue full and space available. Do
>> we want to measure the queue full state or the space available state (or
>> the transition between the two)?
>>
>> With a normal FileAppender JMH will iterate so fast that the queue
>> immediately fills up. This will likely happen during the warmup iterations
>> (no guarantee of course).
>>
>> What actually happens in that state? We used to block until space becomes
>> available in the queue. What we do now depends on the configured
>> AsyncQueueFullPolicy. Because blocking caused deadlocks in some scenarios,
>> our current default is to bypass the queue and log to the FileAppender
>> directly. I expect that to be slower than the simple FileAppender because
>> of the additional lock contention on the tail of the queue during the
>> attempted thread handover.
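The bypass behavior described above can be sketched roughly like this (names are hypothetical; the real logic lives in Log4j 2's AsyncQueueFullPolicy handling):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Consumer;

// Sketch of "bypass on queue full": offer() does not block, and when the
// queue is full we log synchronously instead of waiting (avoids deadlocks).
class BypassingAsyncAppender {
    private final BlockingQueue<String> queue;
    private final Consumer<String> directWriter; // the underlying synchronous appender

    BypassingAsyncAppender(int capacity, Consumer<String> directWriter) {
        this.queue = new ArrayBlockingQueue<>(capacity);
        this.directWriter = directWriter;
    }

    void append(String event) {
        if (!queue.offer(event)) {
            // queue full: fall back to the synchronous path
            directWriter.accept(event);
        }
    }

    String poll() {
        return queue.poll(); // drained by the background thread in a real appender
    }
}
```

In the saturated state every append takes the fallback branch, so the benchmark effectively measures the synchronous appender plus the failed-offer overhead, which matches the "slower than the simple FileAppender" expectation above.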
>>
>> The "queue full" state is not the normal logging state for an
>> application. If we want to measure this we should move these tests to a
>> separate benchmark that is clearly marked "QueueFullAsyncBenchmark" or
>> something like that.
>> Otherwise people reading these benchmark results will misinterpret them
>> and get confused.
>>
>> The existing Async benchmarks ensure they measure the "queue space
>> available" state.
>>
>> Remko
>>
>> Sent from my iPhone
>>
>> On Feb 12, 2017, at 4:37, Apache <ra...@dslextreme.com> wrote:
>>
>> Just for fun I decided to add some more tests to the file appender
>> benchmark. I’ve checked them in. Please review them to see if everything is
>> configured so the tests make sense.
>>
>> Note that I would expect the async appenders to reduce to the speed of
>> the file appender, since once the queue fills up that is what they are
>> waiting on. But I didn’t set a buffer size for the disruptor or async
>> logger tests so I would have expected those to be quite a bit faster than
>> the regular file test.
>>
>> The one thing that is definitely worth noting is how truly terrible the
>> JUL file appender is. I have to assume that it must be doing an immediate
>> flush on every write.
>>
>> This is on my MacBook Pro - what Ceki would call Fast CPU/Fast SSD
>>
>> Benchmark                                               Mode  Samples     Score      Error  Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile               thrpt      10    69.546 ±    2.635  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File            thrpt      10   783.006 ±   28.138  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncAppender   thrpt      10   939.605 ±   38.655  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncDisruptor  thrpt      10  1446.522 ±   45.485  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncLogger     thrpt      10  1269.014 ±   27.236  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File            thrpt      10  1475.605 ±   74.675  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF             thrpt      10  2131.240 ±  114.184  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF             thrpt      10  1499.667 ±   39.668  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackAsyncFile      thrpt      10   326.969 ±    2.690  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile           thrpt      10   940.527 ±   34.090  ops/ms
>>
>> And this is on my old MacBook Pro - it uses a hard drive so isn’t very
>> fast.
>>
>> Benchmark                                               Mode  Samples     Score      Error  Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile               thrpt      10    15.722 ±   15.557  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File            thrpt      10   530.668 ±   54.193  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncAppender   thrpt      10   498.620 ±  178.693  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncDisruptor  thrpt      10   454.541 ±  145.505  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncLogger     thrpt      10   527.784 ±  150.269  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File            thrpt      10   587.605 ±   97.769  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF             thrpt      10  1966.092 ±  431.196  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF             thrpt      10   364.694 ±   34.602  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackAsyncFile      thrpt      10   258.220 ±    1.936  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile           thrpt      10   560.958 ±   36.982  ops/ms
>>
>> Ralph
>>
>> On Feb 9, 2017, at 1:39 PM, Matt Sicker <bo...@gmail.com> wrote:
>>
>> Also, there are some potential issues with logback-perf:
>>
>> * JMH is way out of date (1.11.3 versus 1.17.4)
>> * Fewer warmup iterations than we use
>>
>> Anyways, results for 32 threads (8 core environment):
>>
>> Benchmark Mode Cnt Score Error Units
>> FileAppenderBenchmark.log4j1File thrpt 10 695.774 ± 9.567 ops/ms
>> FileAppenderBenchmark.log4j2File thrpt 10 1300.091 ± 17.579 ops/ms
>> FileAppenderBenchmark.log4j2RAF thrpt 10 1365.118 ± 17.656 ops/ms
>> FileAppenderBenchmark.logbackFile thrpt 10 766.294 ± 10.121 ops/ms
>>
>> On 9 February 2017 at 14:37, Matt Sicker <bo...@gmail.com> wrote:
>>
>>> That value does look messed up. I'll re-run the 32 thread tests. Also,
>>> I'm not on the logback lists yet, so I'll sign up this afternoon.
>>>
>>> On 9 February 2017 at 14:35, Apache <ra...@dslextreme.com> wrote:
>>>
>>>> What is up with that score for 32 threads? That can’t possibly be
>>>> correct.
>>>>
>>>> Ralph
>>>>
>>>> On Feb 9, 2017, at 12:45 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>
>>>> I ran the logback-perf repo on the same AWS instance. Here's the CSV
>>>> data. It appears as soon as more than one thread comes into play, log4j2
>>>> has better scores.
>>>>
>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",1,10,964.600470,279.139021,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",1,10,1274.682156,6.179197,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",1,10,1257.026405,32.898682,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",1,10,1363.683525,41.884725,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",2,10,687.304803,13.266922,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",2,10,1386.596198,207.305249,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",2,10,1579.884762,24.098318,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",2,10,953.138212,99.156775,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",4,10,670.442970,15.049614,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",4,10,1218.543593,18.234077,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",4,10,1309.092881,31.547936,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",4,10,845.168355,24.547390,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",8,10,689.805339,7.415023,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",8,10,1196.396592,19.360776,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",8,10,1319.477318,10.601260,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",8,10,816.608726,25.603234,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",16,10,687.623660,16.114008,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",16,10,1203.649145,8.835115,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",16,10,1266.241778,7.564414,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",16,10,789.507183,9.866592,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",32,10,690.252411,11.587858,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",32,10,1514185.478492,126804.168771,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",32,10,1264.049209,28.309088,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",32,10,754.828687,14.865909,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",64,10,670.498518,11.147198,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",64,10,1293.301940,22.687086,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",64,10,1380.527892,14.907542,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",64,10,750.528159,11.356281,"ops/ms"
>>>>
>>>> On 9 February 2017 at 13:02, Apache <ra...@dslextreme.com> wrote:
>>>>
>>>>> You might try running Ceki’s benchmark project on AWS and publish
>>>>> those numbers here. He also asked people to publish numbers on the Logback
>>>>> user’s list so he can add them to his spreadsheet.
>>>>>
>>>>> From your numbers and the numbers I’ve been getting, it is clear to me
>>>>> that the SSDs in Apple’s MacBooks are pretty awesome. From the hardware
>>>>> Remko is listing I’d say his machine is about as old as my other MacBook
>>>>> except that he has an SSD that is slightly faster than my hard drive.
>>>>>
>>>>> Ralph
>>>>>
>>>>> On Feb 9, 2017, at 11:12 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>
>>>>> Ran on an AWS instance (CentOS 7.2), cpuinfo says 8-core Intel(R)
>>>>> Xeon(R) CPU E5-2666 v3 @ 2.90GHz, not super sure about all the params
>>>>> involved in making the instance, but here's some data (same strangeness
>>>>> with MMF):
>>>>>
>>>>> Benchmark                                      Mode  Samples     Score     Error  Units
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt      10    86.867 ±   4.502  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt      10   671.156 ±   7.099  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt      10  1221.814 ±  22.130  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt      10  1178.407 ± 960.141  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt      10  1220.746 ±  34.421  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt      10   898.122 ±   8.128  ops/ms
>>>>>
>>>>> On 9 February 2017 at 12:02, Matt Sicker <bo...@gmail.com> wrote:
>>>>>
>>>>>> Run on a MacBook Pro (Retina, 15-inch, Mid 2015) 2.5 GHz Intel Core
>>>>>> i7. Can find out more hardware specs if needed. I also noticed that the
>>>>>> memory mapped file starts out fast and slows down over time (somewhat seen
>>>>>> by the error value here).
>>>>>>
>>>>>> Benchmark                                      Mode  Samples     Score     Error  Units
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt      10    96.540 ±   7.875  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt      10   766.286 ±  11.461  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt      10  1787.620 ±  36.695  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt      10  1506.588 ± 956.354  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt      10  1934.966 ±  50.089  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt      10  1285.066 ±  12.674  ops/ms
>>>>>>
>>>>>> On 9 February 2017 at 11:44, Remko Popma <re...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit),
>>>>>>> i5-3317u CPU @ 1.70GHz (dual core with hyperthreading for 4 virtual cores),
>>>>>>> SSD hard disk:
>>>>>>>
>>>>>>> java -jar log4j-perf/target/benchmarks.jar
>>>>>>> ".*FileAppenderBenchmark.*" -f 1 -wi 10 -i 20 -t 4 -tu ms
>>>>>>>
>>>>>>> # Run complete. Total time: 00:03:58
>>>>>>>
>>>>>>> Benchmark                                      Mode  Samples     Score     Error  Units
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt      20    37.646 ±   0.876  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt      20   405.305 ±   6.596  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt      20   751.949 ±  16.055  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt      20  1250.666 ± 168.757  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt      20   728.743 ±  23.909  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt      20   676.926 ±  19.518  ops/ms
>>>>>>>
>>>>>>> --------------------
>>>>>>> Logback config without immediateFlush=false:
>>>>>>>
>>>>>>> # Run complete. Total time: 00:03:44
>>>>>>>
>>>>>>> Benchmark                                      Mode  Samples     Score     Error  Units
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt      20    37.949 ±   1.220  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt      20   404.042 ±   8.450  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt      20   690.393 ± 115.537  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt      20  1221.681 ±  82.205  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt      20   823.059 ±  41.512  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt      20    83.352 ±  11.911  ops/ms
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <
>>>>>>> mikael.staldal@magine.com> wrote:
>>>>>>>
>>>>>>>> I guess that with a memory mapped file, you leave it to the OS to
>>>>>>>> do the best it can, and you lose any direct control over how it is actually
>>>>>>>> done.
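As a rough illustration of that point, a memory-mapped write is just a copy into the page cache; the OS schedules the physical write unless force() is called explicitly. Sketch with hypothetical names:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;

// Sketch of a memory-mapped write: bytes go into the mapped region (page
// cache) with no syscall per write; the OS decides when they reach disk.
class MmapDemo {
    static void writeMapped(String path, String text) throws IOException {
        byte[] bytes = text.getBytes(StandardCharsets.UTF_8);
        try (RandomAccessFile raf = new RandomAccessFile(path, "rw");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, bytes.length);
            map.put(bytes);  // memory copy only, no write syscall
            map.force();     // explicit flush; without it the OS picks the time
        }
    }
}
```

That deferred, OS-controlled flushing is one plausible reason the MMF appender's numbers vary so much from machine to machine in this thread: throughput depends heavily on how each OS and disk handles dirty-page writeback.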
>>>>>>>>
>>>>>>>> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ra...@dslextreme.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> On my Mac Pro with the slower external SSD I now got:
>>>>>>>>>
>>>>>>>>> Benchmark                                      Mode  Samples     Score     Error  Units
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt      10    73.739 ±   0.740  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt      10   683.129 ±   9.407  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt      10   991.293 ± 193.049  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt      10  3072.250 ±  63.475  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt      10  1056.256 ± 137.673  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt      10   784.723 ± 153.226  ops/ms
>>>>>>>>>
>>>>>>>>> and on the same machine with the faster internal SSD I got:
>>>>>>>>>
>>>>>>>>> Benchmark                                      Mode  Samples     Score     Error  Units
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt      10    74.661 ±   0.232  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt      10   647.041 ±   2.994  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt      10  1333.887 ±  13.921  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt      10  3025.726 ± 210.414  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt      10  1433.620 ±  11.194  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt      10  1026.319 ±  13.347  ops/ms
>>>>>>>>>
>>>>>>>>> I will continue to run this on a few other configurations. I think
>>>>>>>>> I would also like to add the async appenders/loggers to this test so that
>>>>>>>>> one can see all the various options.
>>>>>>>>>
>>>>>>>>> It is really quite interesting to me to see how the memory mapped
>>>>>>>>> appender behaves so differently from one machine to another. I don’t know
>>>>>>>>> under what circumstances I would recommend using it though.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Feb 9, 2017, at 7:03 AM, Apache <ra...@dslextreme.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> After modifying the configuration the new results on my laptop are:
>>>>>>>>>
>>>>>>>>> Benchmark                                      Mode  Samples     Score      Error  Units
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt      10    92.580 ±    3.698  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt      10   828.707 ±   55.006  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt      10  1647.230 ±  125.682  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt      10  1465.002 ± 1284.943  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt      10  1765.340 ±  149.707  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt      10  1192.594 ±   57.777  ops/ms
>>>>>>>>>
>>>>>>>>> I will try the other machines later and post those results.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Feb 9, 2017, at 5:22 AM, Apache <ra...@dslextreme.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Ceki replied on twitter that the immediateFlush option is now a
>>>>>>>>> parameter of the appender, not the encoder, so it looks like the config
>>>>>>>>> needs to be changed and the test rerun.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>> On Feb 9, 2017, at 3:08 AM, Remko Popma <re...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> FYI, The write and flush methods in BufferedOutputStream are also
>>>>>>>>> synchronized, so we won't be able to do away with synchronization
>>>>>>>>> completely.
>>>>>>>>>
>>>>>>>>> In OutputStreamManager we synchronize multiple methods but these
>>>>>>>>> are nested calls. I thought reentrant synchronization had negligible
>>>>>>>>> overhead but I haven't measured this myself.
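A tiny example of the nested-call case: once a thread holds an object's monitor, calls into other synchronized methods on the same object re-enter the lock rather than block, and the cost of that re-entry is the open question above. Names here are illustrative, not Log4j's OutputStreamManager:

```java
// Minimal illustration of reentrant (nested) synchronization: writeBytes()
// holds the monitor, and the nested call to flushBuffer() re-enters the
// same lock without blocking or deadlocking.
class NestedSyncDemo {
    private int writes;

    synchronized void writeBytes(byte[] data) {
        writes += data.length;
        flushBuffer(); // nested synchronized call: reentrant acquisition
    }

    synchronized void flushBuffer() {
        // already holding this object's monitor; re-entry is legal and cheap
    }

    synchronized int getWrites() {
        return writes;
    }
}
```

The contended case is different: two threads calling writeBytes() on the same instance do serialize, which is the contention being discussed in this thread.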
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Sent from my iPhone
>>>>>>>>>
>>>>>>>>> On Feb 9, 2017, at 2:31, Apache <ra...@dslextreme.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> I’m pretty sure the problem we have is that a) we are
>>>>>>>>> synchronizing many methods and b) we are synchronizing more than just the
>>>>>>>>> write. Unfortunately, I can’t figure out how to reduce that based on how
>>>>>>>>> dispersed the code has gotten.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>> On Feb 8, 2017, at 10:14 AM, Apache <ra...@dslextreme.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> I tried to modify FileManager to just use a BufferedOutputStream
>>>>>>>>> but discovered I couldn’t as the layouts now require the ByteBuffer.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>> On Feb 8, 2017, at 12:14 AM, Apache <ra...@dslextreme.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> The append method isn’t synchronized but the writeBytes method
>>>>>>>>> acquires a lock. His code is actually a lot simpler than ours in that it
>>>>>>>>> just uses a BufferedOutputStream and he only obtains the lock when he is
>>>>>>>>> writing to it.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>> I'm not sure if I'm looking in the right place, but a major
>>>>>>>>> difference now between Logback's appenders and Log4j's is that Logback
>>>>>>>>> isn't synchronized on the append method.
>>>>>>>>>
>>>>>>>>> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Is this something we can improve performance on by implementing a
>>>>>>>>>> file appender based on FileChannel or AsynchronousFileChannel instead of
>>>>>>>>>> OutputStream?
>>>>>>>>>>
>>>>>>>>>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> Ceki has updated his numbers to include those reported on the
>>>>>>>>>>> mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>>>>>>>>
>>>>>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers
>>>>>>>>>>> for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Yes, that is still the standard approach most people use and is
>>>>>>>>>>> the only way to provide a head-to-head comparison of the logging
>>>>>>>>>>> frameworks.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> This is all in a synchronous appender, right? Either way, that's
>>>>>>>>>>> rather interesting.
>>>>>>>>>>>
>>>>>>>>>>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Someone posted numbers on the Logback user’s list that match
>>>>>>>>>>>> mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat
>>>>>>>>>>>> better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>>>>>
>>>>>>>>>>>> Ralph
>>>>>>>>>>>>
>>>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> I think we need some comparisons on the log4j side: file
>>>>>>>>>>>> appender with 256k buffer size, random access file appender with 256k
>>>>>>>>>>>> buffer size (which appears to be the default), and memory mapped file
>>>>>>>>>>>> appender. It'd be cool to see how these compose with async logging enabled
>>>>>>>>>>>> in both log4j and logback.
>>>>>>>>>>>>
>>>>>>>>>>>> On 5 February 2017 at 16:06, Apache <ralph.goers@dslextreme.com
>>>>>>>>>>>> > wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf
>>>>>>>>>>>>> to compare your results to Ceki’s. You should also capture the
>>>>>>>>>>>>> cpubenchmark speed of your processor and get the speed of your
>>>>>>>>>>>>> hard drive. I used Blackmagic speed test on my Mac. I am capturing my
>>>>>>>>>>>>> results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <
>>>>>>>>>>>>> garydgregory@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> If you want, I can run tests on Windows once the build works
>>>>>>>>>>>>> on Windows again.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>>>
>>>>>>>>>>>>> Gary
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com>
>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard
>>>>>>>>>>>>>> drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Benchmark                                      Mode  Samples        Score       Error  Units
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt      20    98187.673 ±  4935.712  ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt      20   842374.496 ±  6762.712  ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt      20  1853062.583 ± 67032.225  ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt      20  2036011.226 ± 53208.281  ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt      20   999667.438 ± 12074.003  ops/s
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run
>>>>>>>>>>>>>> anything directly on bare metal any more.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <
>>>>>>>>>>>>>> ralph.goers@dslextreme.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the
>>>>>>>>>>>>>> FileAppender. See https://logback.qos.ch/news.html and
>>>>>>>>>>>>>> https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we
>>>>>>>>>>>>>> have a few optimizations we can make.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>>
>>>>>>>>
>>>>>>>> *Mikael Ståldal*
>>>>>>>> Senior software developer
>>>>>>>>
>>>>>>>> *Magine TV*
>>>>>>>> mikael.staldal@magine.com
>>>>>>>> Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Matt Sicker <bo...@gmail.com>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Matt Sicker <bo...@gmail.com>
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Matt Sicker <bo...@gmail.com>
>>>
>>
>>
>>
>> --
>> Matt Sicker <bo...@gmail.com>
>>
>>
>>
>>
>
--
*Mikael Ståldal*
Senior software developer
*Magine TV*
mikael.staldal@magine.com
Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com
Privileged and/or Confidential Information may be contained in this
message. If you are not the addressee indicated in this message
(or responsible for delivery of the message to such a person), you may not
copy or deliver this message to anyone. In such case,
you should destroy this message and kindly notify the sender by reply
email.
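The appender variants compared in the quoted benchmark results (buffered stream "File", RandomAccessFile "RAF", memory-mapped "MMF") come down to three different write paths. Below is a minimal, stdlib-only sketch of those three paths; this is illustrative code, not Log4j's implementation (class, method, and file names are made up, and the real RandomAccessFileAppender additionally buffers, which this sketch deliberately omits for contrast):

```java
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.StandardOpenOption;

/** Sketch of the three write paths compared in the benchmarks. Not Log4j code. */
public class WritePaths {

    // "File" path: user-space buffering; bytes hit the OS only when the buffer fills.
    static void bufferedWrite(File f, byte[] msg, int count) throws IOException {
        try (OutputStream out = new BufferedOutputStream(new FileOutputStream(f), 256 * 1024)) {
            for (int i = 0; i < count; i++) {
                out.write(msg); // no flush per event; flushed when the buffer fills or on close
            }
        }
    }

    // "RAF" path, unbuffered here for contrast: one write syscall per event.
    static void rafWrite(File f, byte[] msg, int count) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            for (int i = 0; i < count; i++) {
                raf.write(msg);
            }
        }
    }

    // "MMF" path: copy into a mapped region; the OS writes dirty pages back when it chooses.
    static void mmfWrite(File f, byte[] msg, int count) throws IOException {
        try (FileChannel ch = FileChannel.open(f.toPath(),
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, (long) msg.length * count);
            for (int i = 0; i < count; i++) {
                buf.put(msg); // no per-event syscall; flushing is up to the OS (hence machine-dependent results)
            }
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] msg = "benchmark log line\n".getBytes(StandardCharsets.UTF_8);
        File a = File.createTempFile("buf", ".log");
        File b = File.createTempFile("raf", ".log");
        File c = File.createTempFile("mmf", ".log");
        bufferedWrite(a, msg, 1000);
        rafWrite(b, msg, 1000);
        mmfWrite(c, msg, 1000);
        System.out.println(a.length() + " " + b.length() + " " + c.length());
    }
}
```

The machine-to-machine variance seen for the MMF numbers is consistent with the last point: once the bytes are in the mapped region, writeback timing is entirely up to the OS and storage hardware.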
Re: Logback performance improvements
Posted by Matt Sicker <bo...@gmail.com>.
I found a comment somewhere (probably an old comment) that asked why Log4j
2 even exists when you could just use JUL. I feel like maybe we should make
a JSR/JEP/whatever to deprecate JUL entirely.
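For context on why JUL fares so badly in the thread's numbers: in OpenJDK, `java.util.logging.FileHandler.publish()` ends with a `flush()`, so every single record pays for a flush to the stream. That is consistent with the guess in the thread that it flushes on every write. A small stdlib-only demonstration (file name, logger setup, and record count are arbitrary choices for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class JulFlushDemo {

    /** Logs {@code count} records through a plain JUL FileHandler and returns the log file. */
    static Path julLog(int count) throws IOException {
        Path file = Files.createTempFile("jul", ".log");
        FileHandler handler = new FileHandler(file.toString());
        handler.setFormatter(new SimpleFormatter());
        Logger logger = Logger.getAnonymousLogger();
        logger.setUseParentHandlers(false); // keep output out of the console
        logger.addHandler(handler);
        for (int i = 0; i < count; i++) {
            // FileHandler.publish() flushes after each record, so each call
            // here costs a flush -- the likely cause of the poor julFile scores.
            logger.info("message " + i);
        }
        handler.close();
        return file;
    }

    public static void main(String[] args) throws IOException {
        Path p = julLog(100);
        System.out.println(Files.size(p) > 0);
    }
}
```

The per-record flush cannot be switched off on `FileHandler` itself, which is why the buffered appenders in the other frameworks are an order of magnitude faster in these runs.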
On 12 February 2017 at 08:54, Remko Popma <re...@gmail.com> wrote:
> #FriendsDontLetFriendsUseJavaUtilLogging
>
> On Sun, Feb 12, 2017 at 11:41 PM, Remko Popma <re...@gmail.com>
> wrote:
>
>> But I agree it is interesting to see that a simple (RAF)FileAppender will
>> outperform async logging if the sustained logging rate is faster than the
>> appender can handle for long enough to fill up the queue. I've heard people
>> in the high performance computing world mention this but I hadn't seen
>> actual numbers until now.
>>
>> It is good to gain an intuition on these things. Async Loggers (with LMAX
>> Disruptor) performance is worse than a plain File but not terrible either.
>> Interesting how LogbackAsyncFile seems to suffer more than
>> log4j2AsyncAppender.
>>
>> And yes, JUL is so bad it is not funny any more. We did document that
>> <https://logging.apache.org/log4j/2.x/performance.html#fileLoggingComparison>
>> but perhaps we need to evangelize it more...
>>
>>
>> On Sun, Feb 12, 2017 at 4:59 PM, Remko Popma <re...@gmail.com>
>> wrote:
>>
>>> The queue is bigger but still fills up quickly because all the
>>> benchmarks do is add events as quickly as possible and the FileAppender
>>> can't keep up with that rate.
>>>
>>> Sent from my iPhone
>>>
>>> On Feb 12, 2017, at 16:19, Apache <ra...@dslextreme.com> wrote:
>>>
>>> I added the tests so you guys could run them and take a look. I have no
>>> problem with the changes being reverted.
>>>
>>> As I think I said, I expected most of the async appenders to back up. I
>>> expected them to be a bit slower, but I didn’t expect them to be as slow as
>>> they are when the queue is full. I also don’t understand why AsyncLogger is
>>> slower as it should have a large ring buffer if I understand correctly.
>>>
>>> Ralph
>>>
>>> On Feb 11, 2017, at 4:36 PM, Remko Popma <re...@gmail.com> wrote:
>>>
>>> I don't think it is a good idea to mix async tests with sync tests.
>>>
>>> We already have separate benchmarks for the various ways to log
>>> asynchronously, if you want to compare them just run those also.
>>>
>>> JMH makes it super easy to build benchmarks but we have to be very
>>> careful that the test really measures what we want to measure.
>>>
>>> Asynchronous logging has two "states", queue full and space available.
>>> Do we want to measure the queue full state or the space available state (or
>>> the transition between the two)?
>>>
>>> With a normal FileAppender JMH will iterate so fast that the queue
>>> immediately fills up. This will likely happen during the warmup iterations
>>> (no guarantee of course).
>>>
>>> What actually happens in that state? We used to block until space
>>> becomes available in the queue. What we do now depends on the configured
>>> AsyncQueueFullPolicy. Because blocking caused deadlocks in some scenarios,
>>> our current default is to bypass the queue and log to the FileAppender
>>> directly. I expect that to be slower than the simple FileAppender because
>>> of the additional lock contention on the tail of the queue during the
>>> attempted thread handover.
>>>
>>> The "queue full" state is not the normal logging state for an
>>> application. If we want to measure this we should move these tests to a
>>> separate benchmark that is clearly marked "QueueFullAsyncBenchmark" or
>>> something like that.
>>> Otherwise people reading these benchmark results will misinterpret them
>>> and get confused.
>>>
>>> The existing Async benchmarks ensure they measure the "queue space
>>> available" state.
>>>
>>> Remko
>>>
>>> Sent from my iPhone
>>>
>>> On Feb 12, 2017, at 4:37, Apache <ra...@dslextreme.com> wrote:
>>>
>>> Just for fun I decided to add some more tests to the file appender
>>> benchmark. I’ve checked them in. Please review them to see if everything is
>>> configured so the tests make sense.
>>>
>>> Note that I would expect the async appenders to reduce to the speed of
>>> the file appender, since once the queue fills up that is what they are
>>> waiting on. But I didn’t set a buffer size for the disruptor or async
>>> logger tests so I would have expected those to be quite a bit faster than
>>> the regular file test.
>>>
>>> The one thing that is definitely worth noting is how truly terrible the
>>> JUL file appender is. I have to assume that it must be doing an immediate
>>> flush on every write.
>>>
>>> This is on my MacBook Pro - what Ceki would call Fast CPU/Fast SSD
>>>
>>> Benchmark                                               Mode  Samples     Score      Error  Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile              thrpt       10    69.546 ±    2.635  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File           thrpt       10   783.006 ±   28.138  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncAppender  thrpt       10   939.605 ±   38.655  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncDisruptor thrpt       10  1446.522 ±   45.485  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncLogger    thrpt       10  1269.014 ±   27.236  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File           thrpt       10  1475.605 ±   74.675  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF            thrpt       10  2131.240 ±  114.184  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF            thrpt       10  1499.667 ±   39.668  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackAsyncFile     thrpt       10   326.969 ±    2.690  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile          thrpt       10   940.527 ±   34.090  ops/ms
>>>
>>> And this is on my old MacBook Pro - it uses a hard drive so isn’t very
>>> fast.
>>>
>>> Benchmark                                               Mode  Samples     Score      Error  Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile              thrpt       10    15.722 ±   15.557  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File           thrpt       10   530.668 ±   54.193  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncAppender  thrpt       10   498.620 ±  178.693  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncDisruptor thrpt       10   454.541 ±  145.505  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncLogger    thrpt       10   527.784 ±  150.269  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File           thrpt       10   587.605 ±   97.769  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF            thrpt       10  1966.092 ±  431.196  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF            thrpt       10   364.694 ±   34.602  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackAsyncFile     thrpt       10   258.220 ±    1.936  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile          thrpt       10   560.958 ±   36.982  ops/ms
>>>
>>> Ralph
>>>
>>> On Feb 9, 2017, at 1:39 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>
>>> Also, there are some potential issues with logback-perf:
>>>
>>> * JMH is way out of date (1.11.3 versus 1.17.4)
>>> * Less warmup iterations than we do
>>>
>>> Anyways, results for 32 threads (8 core environment):
>>>
>>> Benchmark Mode Cnt Score Error Units
>>> FileAppenderBenchmark.log4j1File thrpt 10 695.774 ± 9.567 ops/ms
>>> FileAppenderBenchmark.log4j2File thrpt 10 1300.091 ± 17.579 ops/ms
>>> FileAppenderBenchmark.log4j2RAF thrpt 10 1365.118 ± 17.656 ops/ms
>>> FileAppenderBenchmark.logbackFile thrpt 10 766.294 ± 10.121 ops/ms
>>>
>>> On 9 February 2017 at 14:37, Matt Sicker <bo...@gmail.com> wrote:
>>>
>>>> That value does look messed up. I'll re-run the 32 thread tests. Also,
>>>> I'm not on the logback lists yet, so I'll sign up this afternoon.
>>>>
>>>> On 9 February 2017 at 14:35, Apache <ra...@dslextreme.com> wrote:
>>>>
>>>>> What is up with that score for 32 threads? That can’t possibly be
>>>>> correct.
>>>>>
>>>>> Ralph
>>>>>
>>>>> On Feb 9, 2017, at 12:45 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>
>>>>> I ran the logback-perf repo on the same AWS instance. Here's the CSV
>>>>> data. It appears as soon as more than one thread comes into play, log4j2
>>>>> has better scores.
>>>>>
>>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",1,10,964.600470,279.139021,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",1,10,1274.682156,6.179197,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",1,10,1257.026405,32.898682,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",1,10,1363.683525,41.884725,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",2,10,687.304803,13.266922,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",2,10,1386.596198,207.305249,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",2,10,1579.884762,24.098318,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",2,10,953.138212,99.156775,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",4,10,670.442970,15.049614,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",4,10,1218.543593,18.234077,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",4,10,1309.092881,31.547936,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",4,10,845.168355,24.547390,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",8,10,689.805339,7.415023,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",8,10,1196.396592,19.360776,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",8,10,1319.477318,10.601260,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",8,10,816.608726,25.603234,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",16,10,687.623660,16.114008,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",16,10,1203.649145,8.835115,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",16,10,1266.241778,7.564414,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",16,10,789.507183,9.866592,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",32,10,690.252411,11.587858,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",32,10,1514185.478492,126804.168771,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",32,10,1264.049209,28.309088,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",32,10,754.828687,14.865909,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",64,10,670.498518,11.147198,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",64,10,1293.301940,22.687086,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",64,10,1380.527892,14.907542,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",64,10,750.528159,11.356281,"ops/ms"
>>>>>
>>>>> On 9 February 2017 at 13:02, Apache <ra...@dslextreme.com>
>>>>> wrote:
>>>>>
>>>>>> You might try running Ceki’s benchmark project on AWS and publish
>>>>>> those numbers here. He also asked people to publish numbers on the Logback
>>>>>> user’s list so he can add them to his spreadsheet.
>>>>>>
>>>>>> From your numbers and the numbers I’ve been getting, it is clear to
>>>>>> me that the SSDs in Apple’s MacBook’s are pretty awesome. From the hardware
>>>>>> Remko is listing I’d say his machine is about as old as my other MacBook
>>>>>> except that he has an SSD that is slightly faster than my hard drive.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>> On Feb 9, 2017, at 11:12 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>
>>>>>> Ran on an AWS instance (CentOS 7.2), cpuinfo says 8-core Intel(R)
>>>>>> Xeon(R) CPU E5-2666 v3 @ 2.90GHz, not super sure about all the params
>>>>>> involved in making the instance, but here's some data (same strangeness
>>>>>> with MMF):
>>>>>>
>>>>>> Benchmark                                      Mode  Samples     Score     Error  Units
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile     thrpt       10    86.867 ±   4.502  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File  thrpt       10   671.156 ±   7.099  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File  thrpt       10  1221.814 ±  22.130  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF   thrpt       10  1178.407 ± 960.141  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF   thrpt       10  1220.746 ±  34.421  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt       10   898.122 ±   8.128  ops/ms
>>>>>>
>>>>>> On 9 February 2017 at 12:02, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>
>>>>>>> Run on a MacBook Pro (Retina, 15-inch, Mid 2015) 2.5 GHz Intel Core
>>>>>>> i7. Can find out more hardware specs if needed. I also noticed that the
>>>>>>> memory mapped file starts out fast and slows down over time (somewhat seen
>>>>>>> by the error value here).
>>>>>>>
>>>>>>> Benchmark                                      Mode  Samples     Score     Error  Units
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile     thrpt       10    96.540 ±   7.875  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File  thrpt       10   766.286 ±  11.461  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File  thrpt       10  1787.620 ±  36.695  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF   thrpt       10  1506.588 ± 956.354  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF   thrpt       10  1934.966 ±  50.089  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt       10  1285.066 ±  12.674  ops/ms
>>>>>>>
>>>>>>> On 9 February 2017 at 11:44, Remko Popma <re...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit),
>>>>>>>> i5-3317u CPU @ 1.70GHz (dual core with hyperthreading for 4 virtual cores),
>>>>>>>> SSD hard disk:
>>>>>>>>
>>>>>>>> java -jar log4j-perf/target/benchmarks.jar
>>>>>>>> ".*FileAppenderBenchmark.*" -f 1 -wi 10 -i 20 -t 4 -tu ms
>>>>>>>>
>>>>>>>> # Run complete. Total time: 00:03:58
>>>>>>>>
>>>>>>>> Benchmark                                      Mode  Samples     Score     Error  Units
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile     thrpt       20    37.646 ±   0.876  ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File  thrpt       20   405.305 ±   6.596  ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File  thrpt       20   751.949 ±  16.055  ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF   thrpt       20  1250.666 ± 168.757  ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF   thrpt       20   728.743 ±  23.909  ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt       20   676.926 ±  19.518  ops/ms
>>>>>>>>
>>>>>>>> --------------------
>>>>>>>> Logback config without immediateFlush=false:
>>>>>>>>
>>>>>>>> # Run complete. Total time: 00:03:44
>>>>>>>>
>>>>>>>> Benchmark                                      Mode  Samples     Score     Error  Units
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile     thrpt       20    37.949 ±   1.220  ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File  thrpt       20   404.042 ±   8.450  ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File  thrpt       20   690.393 ± 115.537  ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF   thrpt       20  1221.681 ±  82.205  ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF   thrpt       20   823.059 ±  41.512  ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt       20    83.352 ±  11.911  ops/ms
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <
>>>>>>>> mikael.staldal@magine.com> wrote:
>>>>>>>>
>>>>>>>>> I guess that with a memory mapped file, you leave it to the OS to
>>>>>>>>> do the best it can, and you lose any direct control over how it is actually
>>>>>>>>> done.
>>>>>>>>>
>>>>>>>>> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ralph.goers@dslextreme.com
>>>>>>>>> > wrote:
>>>>>>>>>
>>>>>>>>>> On my Mac Pro with the slower external SSD I now got:
>>>>>>>>>>
>>>>>>>>>> Benchmark                                      Mode  Samples     Score     Error  Units
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile     thrpt       10    73.739 ±   0.740  ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File  thrpt       10   683.129 ±   9.407  ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File  thrpt       10   991.293 ± 193.049  ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF   thrpt       10  3072.250 ±  63.475  ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF   thrpt       10  1056.256 ± 137.673  ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt       10   784.723 ± 153.226  ops/ms
>>>>>>>>>>
>>>>>>>>>> and on the same machine with the faster internal SSD I got:
>>>>>>>>>>
>>>>>>>>>> Benchmark                                      Mode  Samples     Score     Error  Units
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile     thrpt       10    74.661 ±   0.232  ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File  thrpt       10   647.041 ±   2.994  ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File  thrpt       10  1333.887 ±  13.921  ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF   thrpt       10  3025.726 ± 210.414  ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF   thrpt       10  1433.620 ±  11.194  ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt       10  1026.319 ±  13.347  ops/ms
>>>>>>>>>>
>>>>>>>>>> I will continue to run this on a few other configurations. I
>>>>>>>>>> think I would also like to add the async appenders/loggers to this test so
>>>>>>>>>> that one can see all the various options.
>>>>>>>>>>
>>>>>>>>>> It is really quite interesting to me to see how the memory mapped
>>>>>>>>>> appender behaves so differently from one machine to another. I don’t know
>>>>>>>>>> under what circumstances I would recommend using it though.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Feb 9, 2017, at 7:03 AM, Apache <ra...@dslextreme.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> After modifying the configuration the new results on my laptop
>>>>>>>>>> are:
>>>>>>>>>>
>>>>>>>>>> Benchmark                                      Mode  Samples     Score      Error  Units
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile     thrpt       10    92.580 ±    3.698  ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File  thrpt       10   828.707 ±   55.006  ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File  thrpt       10  1647.230 ±  125.682  ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF   thrpt       10  1465.002 ± 1284.943  ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF   thrpt       10  1765.340 ±  149.707  ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt       10  1192.594 ±   57.777  ops/ms
>>>>>>>>>>
>>>>>>>>>> I will try the other machines later and post those results.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Feb 9, 2017, at 5:22 AM, Apache <ra...@dslextreme.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> Ceki replied on Twitter that the immediateFlush option is now a
>>>>>>>>>> parameter of the appender, not the encoder, so it looks like the config
>>>>>>>>>> needs to be changed and the test rerun.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>> On Feb 9, 2017, at 3:08 AM, Remko Popma <re...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> FYI, The write and flush methods in BufferedOutputStream are also
>>>>>>>>>> synchronized, so we won't be able to do away with synchronization
>>>>>>>>>> completely.
>>>>>>>>>>
>>>>>>>>>> In OutputStreamManager we synchronize multiple methods but these
>>>>>>>>>> are nested calls. I thought reentrant synchronization had negligible
>>>>>>>>>> overhead but I haven't measured this myself.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Sent from my iPhone
>>>>>>>>>>
>>>>>>>>>> On Feb 9, 2017, at 2:31, Apache <ra...@dslextreme.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> I’m pretty sure the problem we have is that a) we are
>>>>>>>>>> synchronizing many methods and b) we are synchronizing more than just the
>>>>>>>>>> write. Unfortunately, I can’t figure out how to reduce that based on how
>>>>>>>>>> dispersed the code has gotten.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>> On Feb 8, 2017, at 10:14 AM, Apache <ra...@dslextreme.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> I tried to modify FileManager to just use a BufferedOutputStream
>>>>>>>>>> but discovered I couldn’t as the layouts now require the ByteBuffer.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>> On Feb 8, 2017, at 12:14 AM, Apache <ra...@dslextreme.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> The append method isn’t synchronized but the writeBytes method
>>>>>>>>>> acquires a lock. His code is actually a lot simpler than ours in that it
>>>>>>>>>> just uses a BufferedOutputStream and he only obtains the lock when he is
>>>>>>>>>> writing to it.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>> I'm not sure if I'm looking in the right place, but a major
>>>>>>>>>> difference now between Logback's appenders and Log4j's is that Logback
>>>>>>>>>> isn't synchronized on the append method.
>>>>>>>>>>
>>>>>>>>>> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> Is this something we can improve performance on by implementing
>>>>>>>>>>> a file appender based on FileChannel or AsynchronousFileChannel instead of
>>>>>>>>>>> OutputStream?
>>>>>>>>>>>
>>>>>>>>>>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Ceki has updated his numbers to include those reported on the
>>>>>>>>>>>> mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>>>>>>>>>
>>>>>>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my
>>>>>>>>>>>> numbers for my two MacBooks are at
>>>>>>>>>>>> https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
>>>>>>>>>>>>
>>>>>>>>>>>> Ralph
>>>>>>>>>>>>
>>>>>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Yes, that is still the standard approach most people use and is
>>>>>>>>>>>> the only way to provide a head-to-head comparison against the logging
>>>>>>>>>>>> frameworks.
>>>>>>>>>>>>
>>>>>>>>>>>> Ralph
>>>>>>>>>>>>
>>>>>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> This is all in a synchronous appender, right? Either way,
>>>>>>>>>>>> that's rather interesting.
>>>>>>>>>>>>
>>>>>>>>>>>> On 6 February 2017 at 07:54, Apache <ralph.goers@dslextreme.com
>>>>>>>>>>>> > wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Someone posted numbers on the Logback user’s list that match
>>>>>>>>>>>>> mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat
>>>>>>>>>>>>> better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com>
>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> I think we need some comparisons on the log4j side: file
>>>>>>>>>>>>> appender with 256k buffer size, random access file appender with 256k
>>>>>>>>>>>>> buffer size (which appears to be the default), and memory mapped file
>>>>>>>>>>>>> appender. It'd be cool to see how these compose with async logging enabled
>>>>>>>>>>>>> in both log4j and logback.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 5 February 2017 at 16:06, Apache <
>>>>>>>>>>>>> ralph.goers@dslextreme.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf
>>>>>>>>>>>>>> to compare your results to Ceki’s. You should also capture the
>>>>>>>>>>>>>> CPU benchmark speed of your processor and the speed of your
>>>>>>>>>>>>>> hard drive. I used Blackmagic speed test on my Mac. I am capturing my
>>>>>>>>>>>>>> results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <
>>>>>>>>>>>>>> garydgregory@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> If you want, I can run tests on Windows once the build works
>>>>>>>>>>>>>> on Windows again.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Gary
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com>
>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard
>>>>>>>>>>>>>>> drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Benchmark                                      Mode  Samples        Score      Error  Units
>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile     thrpt       20    98187.673 ±  4935.712  ops/s
>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File  thrpt       20   842374.496 ±  6762.712  ops/s
>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File  thrpt       20  1853062.583 ± 67032.225  ops/s
>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF   thrpt       20  2036011.226 ± 53208.281  ops/s
>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt       20   999667.438 ± 12074.003  ops/s
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run
>>>>>>>>>>>>>>> anything directly on bare metal any more.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <
>>>>>>>>>>>>>>> ralph.goers@dslextreme.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the
>>>>>>>>>>>>>>> FileAppender. See https://logback.qos.ch/news.html and
>>>>>>>>>>>>>>> https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0.
>>>>>>>>>>>>>>> I suspect we have a few optimizations we can make.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
--
Matt Sicker <bo...@gmail.com>
Re: Logback performance improvements
Posted by Remko Popma <re...@gmail.com>.
#FriendsDontLetFriendsUseJavaUtilLogging
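The queue-full behavior discussed in the quoted thread below — offer each event to a bounded queue, and when the queue is full fall back to writing synchronously under the same lock the background writer uses — can be sketched with stdlib types. This is an illustration of the described default policy, not the actual Log4j AsyncQueueFullPolicy API; all names here are made up:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/**
 * Sketch of async logging with a "bypass when full" fallback. The sink list
 * stands in for the FileAppender; the capacity is tiny so the full state is
 * easy to reach. Single-threaded here so behavior is deterministic.
 */
public class QueueFullSketch {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(4);
    private final List<String> sink = new ArrayList<>();
    private final Object writeLock = new Object(); // shared with the drain path: the contention point

    public void log(String event) {
        if (!queue.offer(event)) {
            // Queue full: bypass the queue and write synchronously. This is
            // why the "queue full" state can be slower than a plain appender:
            // producers now contend on writeLock with the draining writer.
            synchronized (writeLock) {
                sink.add(event);
            }
        }
    }

    /** The background writer's work, run inline for the sketch. */
    public void drain() {
        String event;
        while ((event = queue.poll()) != null) {
            synchronized (writeLock) {
                sink.add(event);
            }
        }
    }

    public int written() { return sink.size(); }
    public int queued()  { return queue.size(); }

    public static void main(String[] args) {
        QueueFullSketch s = new QueueFullSketch();
        for (int i = 0; i < 10; i++) s.log("event " + i); // 4 queued, 6 bypass the queue
        s.drain();
        System.out.println(s.written()); // 10
    }
}
```

A JMH run that outpaces the drain thread spends nearly all its time on the bypass branch, which is why those benchmark numbers measure the "queue full" state rather than steady-state async logging.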
On Sun, Feb 12, 2017 at 11:41 PM, Remko Popma <re...@gmail.com> wrote:
> But I agree it is interesting to see that a simple (RAF)FileAppender will
> outperform async logging if the sustained logging rate is faster than the
> appender can handle for long enough to fill up the queue. I've heard people
> in the high performance computing world mention this but I hadn't seen
> actual numbers until now.
>
> It is good to gain an intuition on these things. Async Loggers (with LMAX
> Disruptor) performance is worse than a plain File but not terrible either.
> Interesting how LogbackAsyncFile seems to suffer more than
> log4j2AsyncAppender.
>
> And yes, JUL is so bad it is not funny any more. We did document that
> <https://logging.apache.org/log4j/2.x/performance.html#fileLoggingComparison>
> but perhaps we need to evangelize it more...
>
>
> On Sun, Feb 12, 2017 at 4:59 PM, Remko Popma <re...@gmail.com>
> wrote:
>
>> The queue is bigger but still fills up quickly because all the benchmarks
>> do is add events as quickly as possible and the FileAppender can't keep up
>> with that rate.
>>
>> Sent from my iPhone
>>
>> On Feb 12, 2017, at 16:19, Apache <ra...@dslextreme.com> wrote:
>>
>> I added the tests so you guys could run them and take a look. I have no
>> problem with the changes being reverted.
>>
>> As I think I said, I expected most of the async appenders to back up. I
>> expected them to be a bit slower, but I didn’t expect them to be as slow as
>> they are when the queue is full. I also don’t understand why AsyncLogger is
>> slower as it should have a large ring buffer if I understand correctly.
>>
>> Ralph
>>
>> On Feb 11, 2017, at 4:36 PM, Remko Popma <re...@gmail.com> wrote:
>>
>> I don't think it is a good idea to mix async tests with sync tests.
>>
>> We already have separate benchmarks for the various ways to log
>> asynchronously, if you want to compare them just run those also.
>>
>> JMH makes it super easy to build benchmarks but we have to be very
>> careful that the test really measures what we want to measure.
>>
>> Asynchronous logging has two "states", queue full and space available. Do
>> we want to measure the queue full state or the space available state (or
>> the transition between the two)?
>>
>> With a normal FileAppender JMH will iterate so fast that the queue
>> immediately fills up. This will likely happen during the warmup iterations
>> (no guarantee of course).
>>
>> What actually happens in that state? We used to block until space becomes
>> available in the queue. What we do now depends on the configured
>> AsyncQueueFullPolicy. Because blocking caused deadlocks in some scenarios,
>> our current default is to bypass the queue and log to the FileAppender
>> directly. I expect that to be slower than the simple FileAppender because
>> of the additional lock contention on the tail of the queue during the
>> attempted thread handover.
>>
>> The "queue full" state is not the normal logging state for an
>> application. If we want to measure this we should move these tests to a
>> separate benchmark that is clearly marked "QueueFullAsyncBenchmark" or
>> something like that.
>> Otherwise people reading these benchmark results will misinterpret them
>> and get confused.
>>
>> The existing Async benchmarks ensure they measure the "queue space
>> available" state.
>>
>> Remko
>>
>> Sent from my iPhone
>>
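[Editorial note: the queue-full bypass Remko describes can be sketched with a bounded queue. All names here (`QueueFullSketch`, `route`) are illustrative, not Log4j's actual `AsyncQueueFullPolicy` API; this is only a sketch of the handover-vs-bypass idea.]

```java
import java.util.concurrent.ArrayBlockingQueue;

// Sketch: producers offer events to a bounded queue; when it is full,
// instead of blocking they fall through and append synchronously,
// contending on the appender's own lock (the "bypass" default).
public class QueueFullSketch {
    static final ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(4);
    static final StringBuilder directWrites = new StringBuilder();

    static String route(String event) {
        if (queue.offer(event)) {
            return "enqueued";          // space available: hand off to the background thread
        }
        synchronized (directWrites) {   // queue full: log to the appender directly
            directWrites.append(event);
        }
        return "bypassed";
    }

    public static void main(String[] args) {
        for (int i = 0; i < 4; i++) {
            if (!"enqueued".equals(route("e" + i))) throw new AssertionError();
        }
        // The 5th event finds the queue full and is written synchronously.
        if (!"bypassed".equals(route("e4"))) throw new AssertionError();
        System.out.println("ok");
    }
}
```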
>> On Feb 12, 2017, at 4:37, Apache <ra...@dslextreme.com> wrote:
>>
>> Just for fun I decided to add some more tests to the file appender
>> benchmark. I’ve checked them in. Please review them to see if everything is
>> configured so the tests make sense.
>>
>> Note that I would expect the async appenders to reduce to the speed of
>> the file appender, since once the queue fills up that is what they are
>> waiting on. But I didn’t set a buffer size for the disruptor or async
>> logger tests so I would have expected those to be quite a bit faster than
>> the regular file test.
>>
>> The one thing that is definitely worth noting is how truly terrible the
>> JUL file appender is. I have to assume that it must be doing an immediate
>> flush on every write.
>>
>> This is on my MacBook Pro - what Ceki would call Fast CPU/Fast SSD
>>
>> Benchmark                                                Mode  Samples      Score      Error  Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile               thrpt       10     69.546 ±    2.635  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File            thrpt       10    783.006 ±   28.138  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncAppender   thrpt       10    939.605 ±   38.655  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncDisruptor  thrpt       10   1446.522 ±   45.485  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncLogger     thrpt       10   1269.014 ±   27.236  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File            thrpt       10   1475.605 ±   74.675  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF             thrpt       10   2131.240 ±  114.184  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF             thrpt       10   1499.667 ±   39.668  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackAsyncFile      thrpt       10    326.969 ±    2.690  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile           thrpt       10    940.527 ±   34.090  ops/ms
>>
>> And this is on my old MacBook Pro - it uses a hard drive so isn’t very
>> fast.
>>
>> Benchmark                                                Mode  Samples      Score      Error  Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile               thrpt       10     15.722 ±   15.557  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File            thrpt       10    530.668 ±   54.193  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncAppender   thrpt       10    498.620 ±  178.693  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncDisruptor  thrpt       10    454.541 ±  145.505  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncLogger     thrpt       10    527.784 ±  150.269  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File            thrpt       10    587.605 ±   97.769  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF             thrpt       10   1966.092 ±  431.196  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF             thrpt       10    364.694 ±   34.602  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackAsyncFile      thrpt       10    258.220 ±    1.936  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile           thrpt       10    560.958 ±   36.982  ops/ms
>>
>> Ralph
>>
>> On Feb 9, 2017, at 1:39 PM, Matt Sicker <bo...@gmail.com> wrote:
>>
>> Also, there are some potential issues with logback-perf:
>>
>> * JMH is way out of date (1.11.3 versus 1.17.4)
>> * Fewer warmup iterations than we use
>>
>> Anyways, results for 32 threads (8 core environment):
>>
>> Benchmark Mode Cnt Score Error Units
>> FileAppenderBenchmark.log4j1File thrpt 10 695.774 ± 9.567 ops/ms
>> FileAppenderBenchmark.log4j2File thrpt 10 1300.091 ± 17.579 ops/ms
>> FileAppenderBenchmark.log4j2RAF thrpt 10 1365.118 ± 17.656 ops/ms
>> FileAppenderBenchmark.logbackFile thrpt 10 766.294 ± 10.121 ops/ms
>>
>> On 9 February 2017 at 14:37, Matt Sicker <bo...@gmail.com> wrote:
>>
>>> That value does look messed up. I'll re-run the 32 thread tests. Also,
>>> I'm not on the logback lists yet, so I'll sign up this afternoon.
>>>
>>> On 9 February 2017 at 14:35, Apache <ra...@dslextreme.com> wrote:
>>>
>>>> What is up with that score for 32 threads? That can’t possibly be
>>>> correct.
>>>>
>>>> Ralph
>>>>
>>>> On Feb 9, 2017, at 12:45 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>
>>>> I ran the logback-perf repo on the same AWS instance. Here's the CSV
>>>> data. It appears as soon as more than one thread comes into play, log4j2
>>>> has better scores.
>>>>
>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",1,10,964.600470,279.139021,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",1,10,1274.682156,6.179197,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",1,10,1257.026405,32.898682,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",1,10,1363.683525,41.884725,"ops/ms"
>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",2,10,687.304803,13.266922,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",2,10,1386.596198,207.305249,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",2,10,1579.884762,24.098318,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",2,10,953.138212,99.156775,"ops/ms"
>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",4,10,670.442970,15.049614,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",4,10,1218.543593,18.234077,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",4,10,1309.092881,31.547936,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",4,10,845.168355,24.547390,"ops/ms"
>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",8,10,689.805339,7.415023,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",8,10,1196.396592,19.360776,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",8,10,1319.477318,10.601260,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",8,10,816.608726,25.603234,"ops/ms"
>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",16,10,687.623660,16.114008,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",16,10,1203.649145,8.835115,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",16,10,1266.241778,7.564414,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",16,10,789.507183,9.866592,"ops/ms"
>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",32,10,690.252411,11.587858,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",32,10,1514185.478492,126804.168771,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",32,10,1264.049209,28.309088,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",32,10,754.828687,14.865909,"ops/ms"
>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",64,10,670.498518,11.147198,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",64,10,1293.301940,22.687086,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",64,10,1380.527892,14.907542,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",64,10,750.528159,11.356281,"ops/ms"
>>>>
>>>> On 9 February 2017 at 13:02, Apache <ra...@dslextreme.com> wrote:
>>>>
>>>>> You might try running Ceki’s benchmark project on AWS and publish
>>>>> those numbers here. He also asked people to publish numbers on the Logback
>>>>> user’s list so he can add them to his spreadsheet.
>>>>>
>>>>> From your numbers and the numbers I’ve been getting, it is clear to me
>>>>> that the SSDs in Apple’s MacBook’s are pretty awesome. From the hardware
>>>>> Remko is listing I’d say his machine is about as old as my other MacBook
>>>>> except that he has an SSD that is slightly faster than my hard drive.
>>>>>
>>>>> Ralph
>>>>>
>>>>> On Feb 9, 2017, at 11:12 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>
>>>>> Ran on an AWS instance (CentOS 7.2), cpuinfo says 8-core Intel(R)
>>>>> Xeon(R) CPU E5-2666 v3 @ 2.90GHz, not super sure about all the params
>>>>> involved in making the instance, but here's some data (same strangeness
>>>>> with MMF):
>>>>>
>>>>> Benchmark                                       Mode  Samples      Score      Error  Units
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       10     86.867 ±    4.502  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       10    671.156 ±    7.099  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       10   1221.814 ±   22.130  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       10   1178.407 ±  960.141  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       10   1220.746 ±   34.421  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       10    898.122 ±    8.128  ops/ms
>>>>>
>>>>> On 9 February 2017 at 12:02, Matt Sicker <bo...@gmail.com> wrote:
>>>>>
>>>>>> Run on a MacBook Pro (Retina, 15-inch, Mid 2015) 2.5 GHz Intel Core
>>>>>> i7. Can find out more hardware specs if needed. I also noticed that the
>>>>>> memory mapped file starts out fast and slows down over time (somewhat seen
>>>>>> by the error value here).
>>>>>>
>>>>>> Benchmark                                       Mode  Samples      Score      Error  Units
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       10     96.540 ±    7.875  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       10    766.286 ±   11.461  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       10   1787.620 ±   36.695  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       10   1506.588 ±  956.354  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       10   1934.966 ±   50.089  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       10   1285.066 ±   12.674  ops/ms
>>>>>>
>>>>>> On 9 February 2017 at 11:44, Remko Popma <re...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit),
>>>>>>> i5-3317u CPU @ 1.70GHz (dual core with hyperthreading for 4 virtual cores),
>>>>>>> SSD hard disk:
>>>>>>>
>>>>>>> java -jar log4j-perf/target/benchmarks.jar
>>>>>>> ".*FileAppenderBenchmark.*" -f 1 -wi 10 -i 20 -t 4 -tu ms
>>>>>>>
>>>>>>> # Run complete. Total time: 00:03:58
>>>>>>>
>>>>>>> Benchmark                                       Mode  Samples      Score      Error  Units
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20     37.646 ±    0.876  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20    405.305 ±    6.596  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20    751.949 ±   16.055  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       20   1250.666 ±  168.757  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20    728.743 ±   23.909  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20    676.926 ±   19.518  ops/ms
>>>>>>>
>>>>>>> --------------------
>>>>>>> Logback config without immediateFlush=false:
>>>>>>>
>>>>>>> # Run complete. Total time: 00:03:44
>>>>>>>
>>>>>>> Benchmark                                       Mode  Samples      Score      Error  Units
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20     37.949 ±    1.220  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20    404.042 ±    8.450  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20    690.393 ±  115.537  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       20   1221.681 ±   82.205  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20    823.059 ±   41.512  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20     83.352 ±   11.911  ops/ms
>>>>>>>
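[Editorial note: the gap between the two logbackFile scores above is consistent with what per-event flushing costs a buffered stream. The following is a hedged illustration, not Logback's code; the class name, file name, buffer size, and event count are all arbitrary.]

```java
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Compares writing events through a BufferedOutputStream with and without
// a flush after every write. Flushing per event drains the buffer to the
// OS each time, defeating the purpose of buffering.
public class FlushSketch {
    static long writeEvents(boolean flushEachWrite, Path file, int events) throws IOException {
        byte[] line = "2017-02-09 INFO Hello, world\n".getBytes(StandardCharsets.UTF_8);
        long start = System.nanoTime();
        try (BufferedOutputStream out =
                 new BufferedOutputStream(new FileOutputStream(file.toFile()), 256 * 1024)) {
            for (int i = 0; i < events; i++) {
                out.write(line);
                if (flushEachWrite) {
                    out.flush(); // drains the 256k buffer to the OS on every event
                }
            }
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("flush", ".log");
        long perWrite = writeEvents(true, f, 100_000);
        long buffered = writeEvents(false, f, 100_000);
        System.out.printf("flush-per-write: %d ms, buffered: %d ms%n",
                perWrite / 1_000_000, buffered / 1_000_000);
        Files.deleteIfExists(f);
    }
}
```

The absolute timings depend entirely on the machine and disk, which is the point of the hardware comparisons in this thread.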
>>>>>>>
>>>>>>> On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <
>>>>>>> mikael.staldal@magine.com> wrote:
>>>>>>>
>>>>>>>> I guess that with a memory mapped file, you leave it to the OS to
>>>>>>>> do the best it can, and you lose any direct control over how it is actually
>>>>>>>> done.
>>>>>>>>
>>>>>>>> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ra...@dslextreme.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> On my Mac Pro with the slower external SSD I now got:
>>>>>>>>>
>>>>>>>>> Benchmark                                       Mode  Samples      Score      Error  Units
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       10     73.739 ±    0.740  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       10    683.129 ±    9.407  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       10    991.293 ±  193.049  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       10   3072.250 ±   63.475  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       10   1056.256 ±  137.673  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       10    784.723 ±  153.226  ops/ms
>>>>>>>>>
>>>>>>>>> and on the same machine with the faster internal SSD I got:
>>>>>>>>>
>>>>>>>>> Benchmark                                       Mode  Samples      Score      Error  Units
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       10     74.661 ±    0.232  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       10    647.041 ±    2.994  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       10   1333.887 ±   13.921  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       10   3025.726 ±  210.414  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       10   1433.620 ±   11.194  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       10   1026.319 ±   13.347  ops/ms
>>>>>>>>>
>>>>>>>>> I will continue to run this on a few other configurations. I think
>>>>>>>>> I would also like to add the async appenders/loggers to this test so that
>>>>>>>>> one can see all the various options.
>>>>>>>>>
>>>>>>>>> It is really quite interesting to me to see how the memory mapped
>>>>>>>>> appender behaves so differently from one machine to another. I don’t know
>>>>>>>>> under what circumstances I would recommend using it though.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Feb 9, 2017, at 7:03 AM, Apache <ra...@dslextreme.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> After modifying the configuration the new results on my laptop are:
>>>>>>>>>
>>>>>>>>> Benchmark                                       Mode  Samples      Score       Error  Units
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       10     92.580 ±     3.698  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       10    828.707 ±    55.006  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       10   1647.230 ±   125.682  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       10   1465.002 ±  1284.943  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       10   1765.340 ±   149.707  ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       10   1192.594 ±    57.777  ops/ms
>>>>>>>>>
>>>>>>>>> I will try the other machines later and post those results.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Feb 9, 2017, at 5:22 AM, Apache <ra...@dslextreme.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Ceki replied on twitter that the immediateFlush option is now a
>>>>>>>>> parameter of the appender, not the encoder, so it looks like the config
>>>>>>>>> needs to be changed and the tests rerun.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>> On Feb 9, 2017, at 3:08 AM, Remko Popma <re...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> FYI, The write and flush methods in BufferedOutputStream are also
>>>>>>>>> synchronized, so we won't be able to do away with synchronization
>>>>>>>>> completely.
>>>>>>>>>
>>>>>>>>> In OutputStreamManager we synchronize multiple methods but these
>>>>>>>>> are nested calls. I thought reentrant synchronization had negligible
>>>>>>>>> overhead but I haven't measured this myself.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Sent from my iPhone
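[Editorial note: the nested-synchronization pattern Remko mentions can be sketched as below. The class and method names are illustrative only, not the actual OutputStreamManager code; the point is that the inner synchronized calls reacquire a lock the thread already holds.]

```java
// Sketch of nested synchronized methods: writeEvent() takes the monitor,
// then write() and flushIfNeeded() reacquire it reentrantly. Reentrant
// acquisition is usually cheap, but every synchronized entry is still a
// potential contention point under many threads.
public class ReentrantSketch {
    private final StringBuilder buffer = new StringBuilder();

    public synchronized void writeEvent(String s) {
        write(s);          // nested call: monitor already held, reentered
        flushIfNeeded();   // another nested synchronized call
    }

    private synchronized void write(String s) {
        buffer.append(s);
    }

    private synchronized void flushIfNeeded() {
        // flush policy elided in this sketch
    }

    public synchronized String contents() {
        return buffer.toString();
    }

    public static void main(String[] args) {
        ReentrantSketch m = new ReentrantSketch();
        m.writeEvent("a");
        m.writeEvent("b");
        System.out.println(m.contents()); // ab
    }
}
```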
>>>>>>>>>
>>>>>>>>> On Feb 9, 2017, at 2:31, Apache <ra...@dslextreme.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> I’m pretty sure the problem we have is that a) we are
>>>>>>>>> synchronizing many methods and b) we are synchronizing more than just the
>>>>>>>>> write. Unfortunately, I can’t figure out how to reduce that based on how
>>>>>>>>> dispersed the code has gotten.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>> On Feb 8, 2017, at 10:14 AM, Apache <ra...@dslextreme.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> I tried to modify FileManager to just use a BufferedOutputStream
>>>>>>>>> but discovered I couldn’t as the layouts now require the ByteBuffer.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>> On Feb 8, 2017, at 12:14 AM, Apache <ra...@dslextreme.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> The append method isn’t synchronized but the writeBytes method
>>>>>>>>> acquires a lock. His code is actually a lot simpler than ours in that it
>>>>>>>>> just uses a BufferedOutputStream and he only obtains the lock when he is
>>>>>>>>> writing to it.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>> I'm not sure if I'm looking in the right place, but a major
>>>>>>>>> difference now between Logback's appenders and Log4j's is that Logback
>>>>>>>>> isn't synchronized on the append method.
>>>>>>>>>
>>>>>>>>> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Is this something we can improve performance on by implementing a
>>>>>>>>>> file appender based on FileChannel or AsynchronousFileChannel instead of
>>>>>>>>>> OutputStream?
>>>>>>>>>>
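[Editorial note: a minimal sketch of the FileChannel idea Matt raises above. This is illustrative only; a real appender would keep the channel open for its lifetime and reuse a direct ByteBuffer rather than opening per write.]

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Appends one event through a ByteBuffer and FileChannel instead of an
// OutputStream. FileChannel.write() is not guaranteed to drain the whole
// buffer in one call, hence the loop.
public class ChannelSketch {
    public static void append(Path file, String event) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND)) {
            ByteBuffer buf = ByteBuffer.wrap((event + "\n").getBytes("UTF-8"));
            while (buf.hasRemaining()) {
                ch.write(buf);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("channel", ".log");
        append(f, "hello");
        System.out.println(Files.size(f)); // 6
        Files.deleteIfExists(f);
    }
}
```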
>>>>>>>>>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> Ceki has updated his numbers to include those reported on the
>>>>>>>>>>> mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>>>>>>>>
>>>>>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers
>>>>>>>>>>> for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Yes, that is still the standard approach most people use and is
>>>>>>>>>>> the only way to provide a head-to-head comparison against the logging
>>>>>>>>>>> frameworks.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> This is all in a synchronous appender, right? Either way, that's
>>>>>>>>>>> rather interesting.
>>>>>>>>>>>
>>>>>>>>>>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Someone posted numbers on the Logback user’s list that match
>>>>>>>>>>>> mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat
>>>>>>>>>>>> better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>>>>>
>>>>>>>>>>>> Ralph
>>>>>>>>>>>>
>>>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> I think we need some comparisons on the log4j side: file
>>>>>>>>>>>> appender with 256k buffer size, random access file appender with 256k
>>>>>>>>>>>> buffer size (which appears to be the default), and memory mapped file
>>>>>>>>>>>> appender. It'd be cool to see how these compose with async logging enabled
>>>>>>>>>>>> in both log4j and logback.
>>>>>>>>>>>>
>>>>>>>>>>>> On 5 February 2017 at 16:06, Apache <ralph.goers@dslextreme.com
>>>>>>>>>>>> > wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf to
>>>>>>>>>>>>> compare your results to Ceki’s. You also should
>>>>>>>>>>>>> capture the cpubenchmark speed of your processor and get the speed of your
>>>>>>>>>>>>> hard drive. I used Blackmagic speed test on my Mac. I am capturing my
>>>>>>>>>>>>> results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <
>>>>>>>>>>>>> garydgregory@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> If you want, I can run tests on Windows once the build works
>>>>>>>>>>>>> on Windows again.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>>>
>>>>>>>>>>>>> Gary
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com>
>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard
>>>>>>>>>>>>>> drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Benchmark                                       Mode  Samples         Score       Error  Units
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20     98187.673 ±  4935.712  ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20    842374.496 ±  6762.712  ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20   1853062.583 ± 67032.225  ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20   2036011.226 ± 53208.281  ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20    999667.438 ± 12074.003  ops/s
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I’ll have to try this on one my VMs at work. We don’t run
>>>>>>>>>>>>>> anything directly on bare metal any more.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <
>>>>>>>>>>>>>> ralph.goers@dslextreme.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the
>>>>>>>>>>>>>> FileAppender. See https://logback.qos.ch/news.html and
>>>>>>>>>>>>>> https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we
>>>>>>>>>>>>>> have a few optimizations we can make.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>>
>>>>>>>>
>>>>>>>> *Mikael Ståldal*
>>>>>>>> Senior software developer
>>>>>>>>
>>>>>>>> *Magine TV*
>>>>>>>> mikael.staldal@magine.com
>>>>>>>> Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com
>>>>>>>>
>>>>>>>> Privileged and/or Confidential Information may be contained in this
>>>>>>>> message. If you are not the addressee indicated in this message
>>>>>>>> (or responsible for delivery of the message to such a person), you
>>>>>>>> may not copy or deliver this message to anyone. In such case,
>>>>>>>> you should destroy this message and kindly notify the sender by
>>>>>>>> reply email.
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Matt Sicker <bo...@gmail.com>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Matt Sicker <bo...@gmail.com>
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Matt Sicker <bo...@gmail.com>
>>>
>>
>>
>>
>> --
>> Matt Sicker <bo...@gmail.com>
>>
>>
>>
>>
>
Re: Logback performance improvements
Posted by Remko Popma <re...@gmail.com>.
But I agree it is interesting to see that a simple (RAF)FileAppender will
outperform async logging if the sustained logging rate is faster than the
appender can handle for long enough to fill up the queue. I've heard people
in the high performance computing world mention this but I hadn't seen
actual numbers until now.
It is good to gain an intuition on these things. Async Loggers (with LMAX
Disruptor) performance is worse than a plain File but not terrible either.
Interesting how LogbackAsyncFile seems to suffer more than
log4j2AsyncAppender.
And yes, JUL is so bad it is not funny any more. We did document that
<https://logging.apache.org/log4j/2.x/performance.html#fileLoggingComparison>
but perhaps we need to evangelize it more...
On Sun, Feb 12, 2017 at 4:59 PM, Remko Popma <re...@gmail.com> wrote:
> The queue is bigger but still fills up quickly because all the benchmarks
> do is add events as quickly as possible and the FileAppender can't keep up
> with that rate.
>
> Sent from my iPhone
>
> On Feb 12, 2017, at 16:19, Apache <ra...@dslextreme.com> wrote:
>
> I added the tests so you guys could run them and take a look. I have no
> problem with the changes being reverted.
>
> As I think I said, I expected most of the async appenders to back up. I
> expected them to be a bit slower, but I didn’t expect them to be as slow as
> they are when the queue is full. I also don’t understand why AsyncLogger is
> slower as it should have a large ring buffer if I understand correctly.
>
> Ralph
>
> On Feb 11, 2017, at 4:36 PM, Remko Popma <re...@gmail.com> wrote:
>
> I don't think it is a good idea to mix async tests with sync tests.
>
> We already have separate benchmarks for the various ways to log
> asynchronously, if you want to compare them just run those also.
>
> JMH makes it super easy to build benchmarks but we have to be very careful
> that the test really measures what we want to measure.
>
> Asynchronous logging has two "states", queue full and space available. Do
> we want to measure the queue full state or the space available state (or
> the transition between the two)?
>
> With a normal FileAppender JMH will iterate so fast that the queue
> immediately fills up. This will likely happen during the warmup iterations
> (no guarantee of course).
>
> What actually happens in that state? We used to block until space becomes
> available in the queue. What we do now depends on the configured
> AsyncQueueFullPolicy. Because blocking caused deadlocks in some scenarios,
> our current default is to bypass the queue and log to the FileAppender
> directly. I expect that to be slower than the simple FileAppender because
> of the additional lock contention on the tail of the queue during the
> attempted thread handover.
>
> The "queue full" state is not the normal logging state for an application.
> If we want to measure this we should move these tests to a separate
> benchmark that is clearly marked "QueueFullAsyncBenchmark" or something
> like that.
> Otherwise people reading these benchmark results will misinterpret them
> and get confused.
>
> The existing Async benchmarks ensure they measure the "queue space
> available" state.
>
> Remko
>
> Sent from my iPhone
>
> On Feb 12, 2017, at 4:37, Apache <ra...@dslextreme.com> wrote:
>
> Just for fun I decided to add some more tests to the file appender
> benchmark. I’ve checked them in. Please review them to see if everything is
> configured so the tests make sense.
>
> Note that I would expect the async appenders to reduce to the speed of the
> file appender, since once the queue fills up that is what they are waiting
> on. But I didn’t set a buffer size for the disruptor or async logger tests
> so I would have expected those to be quite a bit faster than the regular
> file test.
>
> The one thing that is definitely worth noting is how truly terrible the
> JUL file appender is. I have to assume that it must be doing an immediate
> flush on every write.
>
> This is on my MacBook Pro - what Ceki would call Fast CPU/Fast SSD
>
> Benchmark                                                Mode  Samples      Score      Error  Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile               thrpt       10     69.546 ±    2.635  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File            thrpt       10    783.006 ±   28.138  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncAppender   thrpt       10    939.605 ±   38.655  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncDisruptor  thrpt       10   1446.522 ±   45.485  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncLogger     thrpt       10   1269.014 ±   27.236  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File            thrpt       10   1475.605 ±   74.675  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF             thrpt       10   2131.240 ±  114.184  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF             thrpt       10   1499.667 ±   39.668  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackAsyncFile      thrpt       10    326.969 ±    2.690  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile           thrpt       10    940.527 ±   34.090  ops/ms
>
> And this is on my old MacBook Pro - it uses a hard drive so isn’t very
> fast.
>
> Benchmark                                                Mode  Samples      Score      Error  Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile               thrpt       10     15.722 ±   15.557  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File            thrpt       10    530.668 ±   54.193  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncAppender   thrpt       10    498.620 ±  178.693  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncDisruptor  thrpt       10    454.541 ±  145.505  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncLogger     thrpt       10    527.784 ±  150.269  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File            thrpt       10    587.605 ±   97.769  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF             thrpt       10   1966.092 ±  431.196  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF             thrpt       10    364.694 ±   34.602  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackAsyncFile      thrpt       10    258.220 ±    1.936  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile           thrpt       10    560.958 ±   36.982  ops/ms
>
> Ralph
>
> On Feb 9, 2017, at 1:39 PM, Matt Sicker <bo...@gmail.com> wrote:
>
> Also, there are some potential issues with logback-perf:
>
> * JMH is way out of date (1.11.3 versus 1.17.4)
> * Fewer warmup iterations than we use
>
> Anyways, results for 32 threads (8 core environment):
>
> Benchmark Mode Cnt Score Error Units
> FileAppenderBenchmark.log4j1File thrpt 10 695.774 ± 9.567 ops/ms
> FileAppenderBenchmark.log4j2File thrpt 10 1300.091 ± 17.579 ops/ms
> FileAppenderBenchmark.log4j2RAF thrpt 10 1365.118 ± 17.656 ops/ms
> FileAppenderBenchmark.logbackFile thrpt 10 766.294 ± 10.121 ops/ms
>
> On 9 February 2017 at 14:37, Matt Sicker <bo...@gmail.com> wrote:
>
>> That value does look messed up. I'll re-run the 32 thread tests. Also,
>> I'm not on the logback lists yet, so I'll sign up this afternoon.
>>
>> On 9 February 2017 at 14:35, Apache <ra...@dslextreme.com> wrote:
>>
>>> What is up with that score for 32 threads? That can’t possibly be
>>> correct.
>>>
>>> Ralph
>>>
>>> On Feb 9, 2017, at 12:45 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>
>>> I ran the logback-perf repo on the same AWS instance. Here's the CSV
>>> data. It appears as soon as more than one thread comes into play, log4j2
>>> has better scores.
>>>
>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",1,10,964.600470,279.139021,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",1,10,1274.682156,6.179197,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",1,10,1257.026405,32.898682,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",1,10,1363.683525,41.884725,"ops/ms"
>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",2,10,687.304803,13.266922,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",2,10,1386.596198,207.305249,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",2,10,1579.884762,24.098318,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",2,10,953.138212,99.156775,"ops/ms"
>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",4,10,670.442970,15.049614,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",4,10,1218.543593,18.234077,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",4,10,1309.092881,31.547936,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",4,10,845.168355,24.547390,"ops/ms"
>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",8,10,689.805339,7.415023,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",8,10,1196.396592,19.360776,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",8,10,1319.477318,10.601260,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",8,10,816.608726,25.603234,"ops/ms"
>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",16,10,687.623660,16.114008,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",16,10,1203.649145,8.835115,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",16,10,1266.241778,7.564414,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",16,10,789.507183,9.866592,"ops/ms"
>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",32,10,690.252411,11.587858,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",32,10,1514185.478492,126804.168771,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",32,10,1264.049209,28.309088,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",32,10,754.828687,14.865909,"ops/ms"
>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",64,10,670.498518,11.147198,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",64,10,1293.301940,22.687086,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",64,10,1380.527892,14.907542,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",64,10,750.528159,11.356281,"ops/ms"
>>>
>>> On 9 February 2017 at 13:02, Apache <ra...@dslextreme.com> wrote:
>>>
>>>> You might try running Ceki’s benchmark project on AWS and publish those
>>>> numbers here. He also asked people to publish numbers on the Logback user’s
>>>> list so he can add them to his spreadsheet.
>>>>
>>>> From your numbers and the numbers I’ve been getting, it is clear to me
>>>> that the SSDs in Apple’s MacBooks are pretty awesome. From the hardware
>>>> Remko is listing I’d say his machine is about as old as my other MacBook
>>>> except that he has an SSD that is slightly faster than my hard drive.
>>>>
>>>> Ralph
>>>>
>>>> On Feb 9, 2017, at 11:12 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>
>>>> Ran on an AWS instance (CentOS 7.2), cpuinfo says 8-core Intel(R)
>>>> Xeon(R) CPU E5-2666 v3 @ 2.90GHz, not super sure about all the params
>>>> involved in making the instance, but here's some data (same strangeness
>>>> with MMF):
>>>>
>>>> Benchmark Mode Samples Score Error Units
>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 86.867 ± 4.502 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 671.156 ± 7.099 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1221.814 ± 22.130 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1178.407 ± 960.141 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1220.746 ± 34.421 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 898.122 ± 8.128 ops/ms
>>>>
>>>> On 9 February 2017 at 12:02, Matt Sicker <bo...@gmail.com> wrote:
>>>>
>>>>> Run on a MacBook Pro (Retina, 15-inch, Mid 2015) 2.5 GHz Intel Core
>>>>> i7. Can find out more hardware specs if needed. I also noticed that the
>>>>> memory mapped file starts out fast and slows down over time (somewhat seen
>>>>> by the error value here).
>>>>>
>>>>> Benchmark Mode Samples Score Error Units
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 96.540 ± 7.875 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 766.286 ± 11.461 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1787.620 ± 36.695 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1506.588 ± 956.354 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1934.966 ± 50.089 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1285.066 ± 12.674 ops/ms
>>>>>
>>>>> On 9 February 2017 at 11:44, Remko Popma <re...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit),
>>>>>> i5-3317u CPU @ 1.70GHz (dual core with hyperthreading for 4 virtual cores),
>>>>>> SSD hard disk:
>>>>>>
>>>>>> java -jar log4j-perf/target/benchmarks.jar
>>>>>> ".*FileAppenderBenchmark.*" -f 1 -wi 10 -i 20 -t 4 -tu ms
>>>>>>
>>>>>> # Run complete. Total time: 00:03:58
>>>>>>
>>>>>> Benchmark Mode Samples Score Error Units
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.646 ± 0.876 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 405.305 ± 6.596 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 751.949 ± 16.055 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1250.666 ± 168.757 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 728.743 ± 23.909 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 676.926 ± 19.518 ops/ms
>>>>>>
>>>>>> --------------------
>>>>>> Logback config without immediateFlush=false:
>>>>>>
>>>>>> # Run complete. Total time: 00:03:44
>>>>>>
>>>>>> Benchmark Mode Samples Score Error Units
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.949 ± 1.220 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 404.042 ± 8.450 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 690.393 ± 115.537 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1221.681 ± 82.205 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 823.059 ± 41.512 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 83.352 ± 11.911 ops/ms
>>>>>>
>>>>>>
>>>>>> On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <
>>>>>> mikael.staldal@magine.com> wrote:
>>>>>>
>>>>>>> I guess that with a memory mapped file, you leave it to the OS to do
>>>>>>> the best it can, and you lose any direct control over how it is actually
>>>>>>> done.
>>>>>>>
>>>>>>> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ra...@dslextreme.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> On my Mac Pro with the slower external SSD I now got:
>>>>>>>>
>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 73.739 ± 0.740 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 683.129 ± 9.407 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 991.293 ± 193.049 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3072.250 ± 63.475 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1056.256 ± 137.673 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 784.723 ± 153.226 ops/ms
>>>>>>>>
>>>>>>>> and on the same machine with the faster internal SSD I got:
>>>>>>>>
>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 74.661 ± 0.232 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 647.041 ± 2.994 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1333.887 ± 13.921 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3025.726 ± 210.414 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1433.620 ± 11.194 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1026.319 ± 13.347 ops/ms
>>>>>>>>
>>>>>>>> I will continue to run this on a few other configurations. I think
>>>>>>>> I would also like to add the async appenders/loggers to this test so that
>>>>>>>> one can see all the various options.
>>>>>>>>
>>>>>>>> It is really quite interesting to me to see how the memory mapped
>>>>>>>> appender behaves so differently from one machine to another. I don’t know
>>>>>>>> under what circumstances I would recommend using it though.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>>
>>>>>>>> On Feb 9, 2017, at 7:03 AM, Apache <ra...@dslextreme.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> After modifying the configuration the new results on my laptop are:
>>>>>>>>
>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 92.580 ± 3.698 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 828.707 ± 55.006 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1647.230 ± 125.682 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1465.002 ± 1284.943 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1765.340 ± 149.707 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1192.594 ± 57.777 ops/ms
>>>>>>>>
>>>>>>>> I will try the other machines later and post those results.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>>
>>>>>>>> On Feb 9, 2017, at 5:22 AM, Apache <ra...@dslextreme.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Ceki replied on Twitter that the immediateFlush option is now a
>>>>>>>> parameter of the appender, not the encoder, so it looks like the config
>>>>>>>> needs to be changed and the test rerun.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>> On Feb 9, 2017, at 3:08 AM, Remko Popma <re...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> FYI, The write and flush methods in BufferedOutputStream are also
>>>>>>>> synchronized, so we won't be able to do away with synchronization
>>>>>>>> completely.
>>>>>>>>
>>>>>>>> In OutputStreamManager we synchronize multiple methods but these
>>>>>>>> are nested calls. I thought reentrant synchronization had negligible
>>>>>>>> overhead but I haven't measured this myself.
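[Editor's sketch of the nested-synchronization pattern described above; the class and method names are illustrative, not the actual OutputStreamManager code. Both methods lock the same monitor, so the inner acquisition is reentrant.]

```java
import java.io.ByteArrayOutputStream;

// Illustrative sketch: an outer synchronized method that makes nested calls
// to other synchronized methods on the same object. The inner lock
// acquisitions are reentrant (the thread already owns the monitor).
class NestedSyncWriter {
    private final ByteArrayOutputStream out = new ByteArrayOutputStream();

    public synchronized void writeEvent(byte[] data) {
        write(data);   // nested call: re-acquires the monitor we already hold
        flush();       // another reentrant acquisition
    }

    private synchronized void write(byte[] data) {
        out.write(data, 0, data.length);
    }

    private synchronized void flush() {
        // ByteArrayOutputStream needs no flush; stands in for the real stream
    }

    public synchronized int size() {
        return out.size();
    }
}
```

Whether the reentrant acquisitions cost anything measurable is exactly the kind of question a JMH benchmark of this shape could answer.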
>>>>>>>>
>>>>>>>>
>>>>>>>> Sent from my iPhone
>>>>>>>>
>>>>>>>> On Feb 9, 2017, at 2:31, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>
>>>>>>>> I’m pretty sure the problem we have is that a) we are synchronizing
>>>>>>>> many methods and b) we are synchronizing more than just the write.
>>>>>>>> Unfortunately, I can’t figure out how to reduce that based on how dispersed
>>>>>>>> the code has gotten.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>> On Feb 8, 2017, at 10:14 AM, Apache <ra...@dslextreme.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> I tried to modify FileManager to just use a BufferedOutputStream
>>>>>>>> but discovered I couldn’t as the layouts now require the ByteBuffer.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>> On Feb 8, 2017, at 12:14 AM, Apache <ra...@dslextreme.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> The append method isn’t synchronized but the writeBytes method
>>>>>>>> acquires a lock. His code is actually a lot simpler than ours in that it
>>>>>>>> just uses a BufferedOutputStream and he only obtains the lock when he is
>>>>>>>> writing to it.
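[Editor's sketch of the approach described above, with invented names: format the event outside the lock, then hold the lock only for the actual write to a BufferedOutputStream. This is an illustration of the pattern, not logback's actual code.]

```java
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical minimal appender: the only critical section is the write itself.
class SimpleFileWriter {
    private final OutputStream out;
    private final ReentrantLock lock = new ReentrantLock();

    SimpleFileWriter(String path) throws IOException {
        // 256k buffer, mirroring the buffer sizes discussed in this thread
        this.out = new BufferedOutputStream(new FileOutputStream(path), 256 * 1024);
    }

    void append(String event) throws IOException {
        byte[] bytes = (event + System.lineSeparator()).getBytes(); // outside the lock
        lock.lock();
        try {
            out.write(bytes); // lock held only for the write
        } finally {
            lock.unlock();
        }
    }

    void close() throws IOException {
        lock.lock();
        try {
            out.close();
        } finally {
            lock.unlock();
        }
    }
}
```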
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>
>>>>>>>> I'm not sure if I'm looking in the right place, but a major
>>>>>>>> difference now between Logback's appenders and Log4j's is that Logback
>>>>>>>> isn't synchronized on the append method.
>>>>>>>>
>>>>>>>> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Is this something we can improve performance on by implementing a
>>>>>>>>> file appender based on FileChannel or AsynchronousFileChannel instead of
>>>>>>>>> OutputStream?
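[Editor's sketch of the FileChannel idea above; class and method names are invented for illustration, not a proposed Log4j API.]

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical FileChannel-based writer. Opening in APPEND mode means each
// write goes to the end of the file without explicit position bookkeeping.
class ChannelWriter implements AutoCloseable {
    private final FileChannel channel;

    ChannelWriter(Path path) throws IOException {
        this.channel = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.APPEND);
    }

    void write(byte[] event) throws IOException {
        ByteBuffer buf = ByteBuffer.wrap(event);
        while (buf.hasRemaining()) {
            channel.write(buf); // may write fewer bytes than requested, hence the loop
        }
    }

    @Override
    public void close() throws IOException {
        channel.close();
    }
}
```

Whether this actually beats a buffered OutputStream would need to be measured; FileChannel skips the stream-level synchronization but also skips the user-space buffering that makes small writes cheap.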
>>>>>>>>>
>>>>>>>>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Ceki has updated his numbers to include those reported on the
>>>>>>>>>> mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>>>>>>>
>>>>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers
>>>>>>>>>> for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> Yes, that is still the standard approach most people use and is
>>>>>>>>>> the only way to provide a head-to-head comparison against the logging
>>>>>>>>>> frameworks.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>> This is all in a synchronous appender, right? Either way, that's
>>>>>>>>>> rather interesting.
>>>>>>>>>>
>>>>>>>>>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> Someone posted numbers on the Logback user’s list that match
>>>>>>>>>>> mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat
>>>>>>>>>>> better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> I think we need some comparisons on the log4j side: file
>>>>>>>>>>> appender with 256k buffer size, random access file appender with 256k
>>>>>>>>>>> buffer size (which appears to be the default), and memory mapped file
>>>>>>>>>>> appender. It'd be cool to see how these compose with async logging enabled
>>>>>>>>>>> in both log4j and logback.
>>>>>>>>>>>
>>>>>>>>>>> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf to
>>>>>>>>>>>> compare your results to Ceki’s. You should also capture your CPU
>>>>>>>>>>>> benchmark speed and the speed of your hard drive. I used
>>>>>>>>>>>> Blackmagic speed test on my Mac. I am capturing my results in a Google
>>>>>>>>>>>> spreadsheet. I will post the link once I have it.
>>>>>>>>>>>>
>>>>>>>>>>>> Ralph
>>>>>>>>>>>>
>>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <
>>>>>>>>>>>> garydgregory@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> If you want, I can run tests on Windows once the build works on
>>>>>>>>>>>> Windows again.
>>>>>>>>>>>>
>>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>>
>>>>>>>>>>>> Gary
>>>>>>>>>>>>
>>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard
>>>>>>>>>>>>> drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>>>
>>>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>>>>>>>
>>>>>>>>>>>>> I’ll have to try this on one my VMs at work. We don’t run
>>>>>>>>>>>>> anything directly on bare metal any more.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com>
>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the
>>>>>>>>>>>>> FileAppender. See https://logback.qos.ch/news.html and
>>>>>>>>>>>>> https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we
>>>>>>>>>>>>> have a few optimizations we can make.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>>
>>>>>>>
>>>>>>> *Mikael Ståldal*
>>>>>>> Senior software developer
>>>>>>>
>>>>>>> *Magine TV*
>>>>>>> mikael.staldal@magine.com
>>>>>>> Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com
>>>>>>>
>>>>>>> Privileged and/or Confidential Information may be contained in this
>>>>>>> message. If you are not the addressee indicated in this message
>>>>>>> (or responsible for delivery of the message to such a person), you
>>>>>>> may not copy or deliver this message to anyone. In such case,
>>>>>>> you should destroy this message and kindly notify the sender by
>>>>>>> reply email.
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Matt Sicker <bo...@gmail.com>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Matt Sicker <bo...@gmail.com>
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Matt Sicker <bo...@gmail.com>
>>>
>>>
>>>
>>
>>
>> --
>> Matt Sicker <bo...@gmail.com>
>>
>
>
>
> --
> Matt Sicker <bo...@gmail.com>
>
>
>
>
Re: Logback performance improvements
Posted by Remko Popma <re...@gmail.com>.
The queue is bigger but still fills up quickly because all the benchmarks do is add events as quickly as possible and the FileAppender can't keep up with that rate.
Sent from my iPhone
> On Feb 12, 2017, at 16:19, Apache <ra...@dslextreme.com> wrote:
>
> I added the tests so you guys could run them and take a look. I have no problem with the changes being reverted.
>
> As I think I said, I expected most of the async appenders to back up. I expected them to be a bit slower, but I didn’t expect them to be as slow as they are when the queue is full. I also don’t understand why AsyncLogger is slower as it should have a large ring buffer if I understand correctly.
>
> Ralph
>
>> On Feb 11, 2017, at 4:36 PM, Remko Popma <re...@gmail.com> wrote:
>>
>> I don't think it is a good idea to mix async tests with sync tests.
>>
>> We already have separate benchmarks for the various ways to log asynchronously, if you want to compare them just run those also.
>>
>> JMH makes it super easy to build benchmarks but we have to be very careful that the test really measures what we want to measure.
>>
>> Asynchronous logging has two "states", queue full and space available. Do we want to measure the queue full state or the space available state (or the transition between the two)?
>>
>> With a normal FileAppender JMH will iterate so fast that the queue immediately fills up. This will likely happen during the warmup iterations (no guarantee of course).
>>
>> What actually happens in that state? We used to block until space becomes available in the queue. What we do now depends on the configured AsyncQueueFullPolicy. Because blocking caused deadlocks in some scenarios, our current default is to bypass the queue and log to the FileAppender directly. I expect that to be slower than the simple FileAppender because of the additional lock contention on the tail of the queue during the attempted thread handover.
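[Editor's sketch of the queue-full fallback behavior described above; the class and its API are invented for illustration and are not the actual AsyncQueueFullPolicy interface.]

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Consumer;

// Illustrative queue-full policy: try to hand the event to the background
// thread; if the queue is full, bypass it and write synchronously on the
// caller's thread (contending on the appender lock, as described above).
class BypassOnFullQueue<T> {
    private final BlockingQueue<T> queue;
    private final Consumer<T> syncWriter;

    BypassOnFullQueue(int capacity, Consumer<T> syncWriter) {
        this.queue = new ArrayBlockingQueue<>(capacity);
        this.syncWriter = syncWriter;
    }

    /** Returns true if the event was enqueued, false if it was written directly. */
    boolean publish(T event) {
        if (queue.offer(event)) {
            return true;          // handed off to the background thread
        }
        syncWriter.accept(event); // queue full: log directly, bypassing the queue
        return false;
    }
}
```

Under a saturating benchmark this policy degenerates to the synchronous path for almost every event, which is consistent with the "queue full" state measuring something different from steady-state async logging.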
>>
>> The "queue full" state is not the normal logging state for an application. If we want to measure this we should move these tests to a separate benchmark that is clearly marked "QueueFullAsyncBenchmark" or something like that.
>> Otherwise people reading these benchmark results will misinterpret them and get confused.
>>
>> The existing Async benchmarks ensure they measure the "queue space available" state.
>>
>> Remko
>>
>> Sent from my iPhone
>>
>>> On Feb 12, 2017, at 4:37, Apache <ra...@dslextreme.com> wrote:
>>>
>>> Just for fun I decided to add some more tests to the file appender benchmark. I’ve checked them in. Please review them to see if everything is configured so the tests make sense.
>>>
>>> Note that I would expect the async appenders to reduce to the speed of the file appender, since once the queue fills up that is what they are waiting on. But I didn’t set a buffer size for the disruptor or async logger tests so I would have expected those to be quite a bit faster than the regular file test.
>>>
>>> The one thing that is definitely worth noting is how truly terrible the JUL file appender is. I have to assume that it must be doing an immediate flush on every write.
>>>
>>> This is on my MacBook Pro - what Ceki would call Fast CPU/Fast SSD
>>>
>>> Benchmark Mode Samples Score Error Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 69.546 ± 2.635 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 783.006 ± 28.138 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncAppender thrpt 10 939.605 ± 38.655 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncDisruptor thrpt 10 1446.522 ± 45.485 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncLogger thrpt 10 1269.014 ± 27.236 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1475.605 ± 74.675 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 2131.240 ± 114.184 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1499.667 ± 39.668 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackAsyncFile thrpt 10 326.969 ± 2.690 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 940.527 ± 34.090 ops/ms
>>>
>>> And this is on my old MacBook Pro - it uses a hard drive so isn’t very fast.
>>>
>>> Benchmark Mode Samples Score Error Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 15.722 ± 15.557 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 530.668 ± 54.193 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncAppender thrpt 10 498.620 ± 178.693 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncDisruptor thrpt 10 454.541 ± 145.505 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncLogger thrpt 10 527.784 ± 150.269 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 587.605 ± 97.769 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1966.092 ± 431.196 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 364.694 ± 34.602 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackAsyncFile thrpt 10 258.220 ± 1.936 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 560.958 ± 36.982 ops/ms
>>>
>>> Ralph
>>>
>>>> On Feb 9, 2017, at 1:39 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>
>>>> Also, there are some potential issues with logback-perf:
>>>>
>>>> * JMH is way out of date (1.11.3 versus 1.17.4)
>>>> * Fewer warmup iterations than we use
>>>>
>>>> Anyways, results for 32 threads (8 core environment):
>>>>
>>>> Benchmark Mode Cnt Score Error Units
>>>> FileAppenderBenchmark.log4j1File thrpt 10 695.774 ± 9.567 ops/ms
>>>> FileAppenderBenchmark.log4j2File thrpt 10 1300.091 ± 17.579 ops/ms
>>>> FileAppenderBenchmark.log4j2RAF thrpt 10 1365.118 ± 17.656 ops/ms
>>>> FileAppenderBenchmark.logbackFile thrpt 10 766.294 ± 10.121 ops/ms
>>>>
>>>>> On 9 February 2017 at 14:37, Matt Sicker <bo...@gmail.com> wrote:
>>>>> That value does look messed up. I'll re-run the 32 thread tests. Also, I'm not on the logback lists yet, so I'll sign up this afternoon.
>>>>>
>>>>>> On 9 February 2017 at 14:35, Apache <ra...@dslextreme.com> wrote:
>>>>>> What is up with that score for 32 threads? That can’t possibly be correct.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>> On Feb 9, 2017, at 12:45 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>
>>>>>>> I ran the logback-perf repo on the same AWS instance. Here's the CSV data. It appears as soon as more than one thread comes into play, log4j2 has better scores.
>>>>>>>
>>>>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",1,10,964.600470,279.139021,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",1,10,1274.682156,6.179197,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",1,10,1257.026405,32.898682,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",1,10,1363.683525,41.884725,"ops/ms"
>>>>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",2,10,687.304803,13.266922,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",2,10,1386.596198,207.305249,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",2,10,1579.884762,24.098318,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",2,10,953.138212,99.156775,"ops/ms"
>>>>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",4,10,670.442970,15.049614,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",4,10,1218.543593,18.234077,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",4,10,1309.092881,31.547936,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",4,10,845.168355,24.547390,"ops/ms"
>>>>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",8,10,689.805339,7.415023,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",8,10,1196.396592,19.360776,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",8,10,1319.477318,10.601260,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",8,10,816.608726,25.603234,"ops/ms"
>>>>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",16,10,687.623660,16.114008,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",16,10,1203.649145,8.835115,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",16,10,1266.241778,7.564414,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",16,10,789.507183,9.866592,"ops/ms"
>>>>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",32,10,690.252411,11.587858,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",32,10,1514185.478492,126804.168771,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",32,10,1264.049209,28.309088,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",32,10,754.828687,14.865909,"ops/ms"
>>>>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",64,10,670.498518,11.147198,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",64,10,1293.301940,22.687086,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",64,10,1380.527892,14.907542,"ops/ms"
>>>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",64,10,750.528159,11.356281,"ops/ms"
>>>>>>>
>>>>>>>> On 9 February 2017 at 13:02, Apache <ra...@dslextreme.com> wrote:
>>>>>>>> You might try running Ceki’s benchmark project on AWS and publish those numbers here. He also asked people to publish numbers on the Logback user’s list so he can add them to his spreadsheet.
>>>>>>>>
>>>>>>>> From your numbers and the numbers I’ve been getting, it is clear to me that the SSDs in Apple’s MacBooks are pretty awesome. From the hardware Remko is listing I’d say his machine is about as old as my other MacBook except that he has an SSD that is slightly faster than my hard drive.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>>> On Feb 9, 2017, at 11:12 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>> Ran on an AWS instance (CentOS 7.2), cpuinfo says 8-core Intel(R) Xeon(R) CPU E5-2666 v3 @ 2.90GHz, not super sure about all the params involved in making the instance, but here's some data (same strangeness with MMF):
>>>>>>>>>
>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 86.867 ± 4.502 ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 671.156 ± 7.099 ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1221.814 ± 22.130 ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1178.407 ± 960.141 ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1220.746 ± 34.421 ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 898.122 ± 8.128 ops/ms
>>>>>>>>>
>>>>>>>>>> On 9 February 2017 at 12:02, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>> Run on a MacBook Pro (Retina, 15-inch, Mid 2015) 2.5 GHz Intel Core i7. Can find out more hardware specs if needed. I also noticed that the memory mapped file starts out fast and slows down over time (somewhat seen by the error value here).
>>>>>>>>>>
>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 96.540 ± 7.875 ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 766.286 ± 11.461 ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1787.620 ± 36.695 ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1506.588 ± 956.354 ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1934.966 ± 50.089 ops/ms
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1285.066 ± 12.674 ops/ms
>>>>>>>>>>
>>>>>>>>>>> On 9 February 2017 at 11:44, Remko Popma <re...@gmail.com> wrote:
>>>>>>>>>>> My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit), i5-3317u CPU @ 1.70GHz (dual core with hyperthreading for 4 virtual cores), SSD hard disk:
>>>>>>>>>>>
>>>>>>>>>>> java -jar log4j-perf/target/benchmarks.jar ".*FileAppenderBenchmark.*" -f 1 -wi 10 -i 20 -t 4 -tu ms
>>>>>>>>>>>
>>>>>>>>>>> # Run complete. Total time: 00:03:58
>>>>>>>>>>>
>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.646 ± 0.876 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 405.305 ± 6.596 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 751.949 ± 16.055 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1250.666 ± 168.757 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 728.743 ± 23.909 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 676.926 ± 19.518 ops/ms
>>>>>>>>>>>
>>>>>>>>>>> --------------------
>>>>>>>>>>> Logback config without immediateFlush=false:
>>>>>>>>>>>
>>>>>>>>>>> # Run complete. Total time: 00:03:44
>>>>>>>>>>>
>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.949 ± 1.220 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 404.042 ± 8.450 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 690.393 ± 115.537 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1221.681 ± 82.205 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 823.059 ± 41.512 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 83.352 ± 11.911 ops/ms
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <mi...@magine.com> wrote:
>>>>>>>>>>>> I guess that with a memory mapped file, you leave it to the OS to do the best it can, and you lose any direct control over how it is actually done.
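[Editor's note: the mechanism Mikael describes can be sketched roughly as follows. This is an illustration of memory-mapped writing in general, not Log4j's MemoryMappedFileAppender code: writes land in the page cache and the OS decides when dirty pages reach disk, with force() as the only explicit control the application keeps.]

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: write through a MappedByteBuffer. map() extends the file to
// cover the mapped region; put() is a memory write, not a write(2) call.
public class MmapSketch {
    static String roundTrip(String text) throws Exception {
        Path file = Files.createTempFile("mmap-sketch", ".log");
        byte[] bytes = text.getBytes(StandardCharsets.UTF_8);
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, bytes.length);
            map.put(bytes);   // goes into the page cache, no syscall per event
            map.force();      // explicitly ask the OS to flush the dirty pages
        }
        return new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("mapped"));
    }
}
```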
>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>> On my Mac Pro with the slower external SSD I now got:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 73.739 ± 0.740 ops/ms
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 683.129 ± 9.407 ops/ms
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 991.293 ± 193.049 ops/ms
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3072.250 ± 63.475 ops/ms
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1056.256 ± 137.673 ops/ms
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 784.723 ± 153.226 ops/ms
>>>>>>>>>>>>>
>>>>>>>>>>>>> and on the same machine with the faster internal SSD I got:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 74.661 ± 0.232 ops/ms
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 647.041 ± 2.994 ops/ms
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1333.887 ± 13.921 ops/ms
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3025.726 ± 210.414 ops/ms
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1433.620 ± 11.194 ops/ms
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1026.319 ± 13.347 ops/ms
>>>>>>>>>>>>>
>>>>>>>>>>>>> I will continue to run this on a few other configurations. I think I would also like to add the async appenders/loggers to this test so that one can see all the various options.
>>>>>>>>>>>>>
>>>>>>>>>>>>> It is really quite interesting to me to see how the memory mapped appender behaves so differently from one machine to another. I don’t know under what circumstances I would recommend using it though.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Feb 9, 2017, at 7:03 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> After modifying the configuration the new results on my laptop are:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 92.580 ± 3.698 ops/ms
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 828.707 ± 55.006 ops/ms
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1647.230 ± 125.682 ops/ms
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1465.002 ± 1284.943 ops/ms
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1765.340 ± 149.707 ops/ms
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1192.594 ± 57.777 ops/ms
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I will try the other machines later and post those results.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Feb 9, 2017, at 5:22 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Ceki replied on Twitter that the immediateFlush option is now a parameter of the appender, not the encoder, so it looks like the config needs to be changed and the test rerun.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Feb 9, 2017, at 3:08 AM, Remko Popma <re...@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> FYI, The write and flush methods in BufferedOutputStream are also synchronized, so we won't be able to do away with synchronization completely.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> In OutputStreamManager we synchronize multiple methods but these are nested calls. I thought reentrant synchronization had negligible overhead but I haven't measured this myself.
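[Editor's note: the nested-call pattern Remko mentions can be illustrated with a minimal sketch; class and method names are made up, not OutputStreamManager's actual code. The inner lock acquisition is reentrant because the thread already holds the monitor, so it never blocks, though it is not entirely free.]

```java
// Sketch of nested synchronization: write() calls flush(), both
// synchronized on the same monitor, so the inner acquisition is
// a reentrant re-entry into a lock the thread already holds.
public class NestedSyncSketch {
    private int flushCount;

    public synchronized void write(byte[] data) {
        // ... buffer the bytes ...
        flush(); // re-enters the monitor this thread already owns
    }

    public synchronized void flush() {
        flushCount++;
    }

    public synchronized int flushCount() {
        return flushCount;
    }

    public static void main(String[] args) {
        NestedSyncSketch m = new NestedSyncSketch();
        m.write(new byte[0]); // no deadlock: intrinsic locks are reentrant
        System.out.println(m.flushCount());
    }
}
```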
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Sent from my iPhone
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Feb 9, 2017, at 2:31, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I’m pretty sure the problem we have is that a) we are synchronizing many methods and b) we are synchronizing more than just the write. Unfortunately, I can’t figure out how to reduce that based on how dispersed the code has gotten.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Feb 8, 2017, at 10:14 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I tried to modify FileManager to just use a BufferedOutputStream but discovered I couldn’t as the layouts now require the ByteBuffer.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Feb 8, 2017, at 12:14 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The append method isn’t synchronized but the writeBytes method acquires a lock. His code is actually a lot simpler than ours in that it just uses a BufferedOutputStream and he only obtains the lock when he is writing to it.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
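[Editor's note: the FileChannel idea Matt raises would look roughly like this sketch. It is not a proposed appender implementation, just the core write loop; note that a single write() call is not guaranteed to drain the buffer.]

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: append bytes through a FileChannel instead of an OutputStream.
public class ChannelAppenderSketch {
    // Append one string to a temp file via a FileChannel, then read it back.
    static String writeAndRead(String line) throws Exception {
        Path log = Files.createTempFile("channel-sketch", ".log");
        try (FileChannel channel = FileChannel.open(log,
                StandardOpenOption.WRITE, StandardOpenOption.APPEND)) {
            ByteBuffer buf = ByteBuffer.wrap(line.getBytes(StandardCharsets.UTF_8));
            while (buf.hasRemaining()) {
                channel.write(buf); // one write() may not drain the buffer
            }
        }
        return new String(Files.readAllBytes(log), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(writeAndRead("hello from FileChannel"));
    }
}
```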
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>>>>>> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison against the logging frameworks.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>> Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
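[Editor's note: a comparison along the lines Matt suggests would need configs roughly like the following sketch. Attribute names are taken from the Log4j 2 manual; file names and the layout pattern are placeholders.]

```xml
<!-- Sketch only: File appender with an enlarged 256 KB buffer and
     immediateFlush disabled, next to a RandomAccessFile appender,
     which uses a 256 KB buffer by default. -->
<Appenders>
  <File name="File" fileName="target/file.log"
        bufferedIO="true" bufferSize="262144" immediateFlush="false">
    <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
  </File>
  <RandomAccessFile name="RAF" fileName="target/raf.log" immediateFlush="false">
    <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
  </RandomAccessFile>
</Appenders>
```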
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf to compare your results to Ceki’s. You should also capture the cpubenchmark score of your processor and the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Gary
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.10 and -t 4 now get
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I’ll have to try this on one my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a few optimizations we can make.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Mikael Ståldal
>>>>>>>>>>>> Senior software developer
>>>>>>>>>>>>
>>>>>>>>>>>> Magine TV
>>>>>>>>>>>> mikael.staldal@magine.com
>>>>>>>>>>>> Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com
>>>>>>>>>>>>
>>>>>>>>>>>> Privileged and/or Confidential Information may be contained in this message. If you are not the addressee indicated in this message
>>>>>>>>>>>> (or responsible for delivery of the message to such a person), you may not copy or deliver this message to anyone. In such case,
>>>>>>>>>>>> you should destroy this message and kindly notify the sender by reply email.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Matt Sicker <bo...@gmail.com>
>>>>
>>>>
>>>>
>>>> --
>>>> Matt Sicker <bo...@gmail.com>
>>>
>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
I added the tests so you guys could run them and take a look. I have no problem with the changes being reverted.
As I think I said, I expected most of the async appenders to back up. I expected them to be a bit slower, but I didn’t expect them to be as slow as they are when the queue is full. I also don’t understand why AsyncLogger is slower, since it should have a large ring buffer if I understand correctly.
Ralph
> On Feb 11, 2017, at 4:36 PM, Remko Popma <re...@gmail.com> wrote:
>
> I don't think it is a good idea to mix async tests with sync tests.
>
> We already have separate benchmarks for the various ways to log asynchronously, if you want to compare them just run those also.
>
> JMH makes it super easy to build benchmarks but we have to be very careful that the test really measures what we want to measure.
>
> Asynchronous logging has two "states", queue full and space available. Do we want to measure the queue full state or the space available state (or the transition between the two)?
>
> With a normal FileAppender JMH will iterate so fast that the queue immediately fills up. This will likely happen during the warmup iterations (no guarantee of course).
>
> What actually happens in that state? We used to block until space becomes available in the queue. What we do now depends on the configured AsyncQueueFullPolicy. Because blocking caused deadlocks in some scenarios, our current default is to bypass the queue and log to the FileAppender directly. I expect that to be slower than the simple FileAppender because of the additional lock contention on the tail of the queue during the attempted thread handover.
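[Editor's note: the fallback Remko describes can be sketched like this. The class and counters are illustrative, not the actual Log4j 2 API; the real behavior is governed by the configured AsyncQueueFullPolicy.]

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of an async appender's "queue full" fallback: try to hand
// the event to the background thread; if the queue is full, bypass
// it and write synchronously on the caller's thread.
public class QueueFullSketch {
    private final BlockingQueue<String> queue;
    private int asyncWrites; // events handed to the queue
    private int syncWrites;  // events written directly by the caller

    public QueueFullSketch(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    public void append(String event) {
        if (queue.offer(event)) {
            asyncWrites++;        // space available: normal async path
        } else {
            writeDirectly(event); // queue full: degrade to sync logging
            syncWrites++;
        }
    }

    private void writeDirectly(String event) {
        // In a real appender this would call the underlying FileAppender,
        // contending on its lock with every other blocked logging thread.
    }

    public int asyncWrites() { return asyncWrites; }
    public int syncWrites() { return syncWrites; }

    public static void main(String[] args) {
        QueueFullSketch s = new QueueFullSketch(2);
        for (int i = 0; i < 5; i++) {
            s.append("event " + i);
        }
        // 2 events fit in the queue; the remaining 3 fall back to sync.
        System.out.println(s.asyncWrites() + " async, " + s.syncWrites() + " sync");
    }
}
```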
>
> The "queue full" state is not the normal logging state for an application. If we want to measure this we should move these tests to a separate benchmark that is clearly marked "QueueFullAsyncBenchmark" or something like that.
> Otherwise people reading these benchmark results will misinterpret them and get confused.
>
> The existing Async benchmarks ensure they measure the "queue space available" state.
>
> Remko
>
> Sent from my iPhone
>
> On Feb 12, 2017, at 4:37, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>
>> Just for fun I decided to add some more tests to the file appender benchmark. I’ve checked them in. Please review them to see if everything is configured so the tests make sense.
>>
>> Note that I would expect the async appenders to reduce to the speed of the file appender, since once the queue fills up that is what they are waiting on. But I didn’t set a buffer size for the disruptor or async logger tests so I would have expected those to be quite a bit faster than the regular file test.
>>
>> The one thing that is definitely worth noting is how truly terrible the JUL file appender is. I have to assume that it must be doing an immediate flush on every write.
>>
>> This is on my MacBook Pro - what Ceki would call Fast CPU/Fast SSD
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 69.546 ± 2.635 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 783.006 ± 28.138 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncAppender thrpt 10 939.605 ± 38.655 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncDisruptor thrpt 10 1446.522 ± 45.485 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncLogger thrpt 10 1269.014 ± 27.236 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1475.605 ± 74.675 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 2131.240 ± 114.184 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1499.667 ± 39.668 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackAsyncFile thrpt 10 326.969 ± 2.690 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 940.527 ± 34.090 ops/ms
>>
>> And this is on my old MacBook Pro - it uses a hard drive so isn’t very fast.
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 15.722 ± 15.557 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 530.668 ± 54.193 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncAppender thrpt 10 498.620 ± 178.693 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncDisruptor thrpt 10 454.541 ± 145.505 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncLogger thrpt 10 527.784 ± 150.269 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 587.605 ± 97.769 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1966.092 ± 431.196 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 364.694 ± 34.602 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackAsyncFile thrpt 10 258.220 ± 1.936 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 560.958 ± 36.982 ops/ms
>>
>> Ralph
>>
>>> On Feb 9, 2017, at 1:39 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>
>>> Also, there are some potential issues with logback-perf:
>>>
>>> * JMH is way out of date (1.11.3 versus 1.17.4)
>>> * Fewer warmup iterations than we run
>>>
>>> Anyways, results for 32 threads (8 core environment):
>>>
>>> Benchmark Mode Cnt Score Error Units
>>> FileAppenderBenchmark.log4j1File thrpt 10 695.774 ± 9.567 ops/ms
>>> FileAppenderBenchmark.log4j2File thrpt 10 1300.091 ± 17.579 ops/ms
>>> FileAppenderBenchmark.log4j2RAF thrpt 10 1365.118 ± 17.656 ops/ms
>>> FileAppenderBenchmark.logbackFile thrpt 10 766.294 ± 10.121 ops/ms
>>>
>>> On 9 February 2017 at 14:37, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>> That value does look messed up. I'll re-run the 32 thread tests. Also, I'm not on the logback lists yet, so I'll sign up this afternoon.
>>>
>>> On 9 February 2017 at 14:35, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>> What is up with that score for 32 threads? That can’t possibly be correct.
>>>
>>> Ralph
>>>
>>>> On Feb 9, 2017, at 12:45 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>
>>>> I ran the logback-perf repo on the same AWS instance. Here's the CSV data. It appears as soon as more than one thread comes into play, log4j2 has better scores.
>>>>
>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",1,10,964.600470,279.139021,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",1,10,1274.682156,6.179197,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",1,10,1257.026405,32.898682,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",1,10,1363.683525,41.884725,"ops/ms"
>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",2,10,687.304803,13.266922,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",2,10,1386.596198,207.305249,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",2,10,1579.884762,24.098318,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",2,10,953.138212,99.156775,"ops/ms"
>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",4,10,670.442970,15.049614,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",4,10,1218.543593,18.234077,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",4,10,1309.092881,31.547936,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",4,10,845.168355,24.547390,"ops/ms"
>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",8,10,689.805339,7.415023,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",8,10,1196.396592,19.360776,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",8,10,1319.477318,10.601260,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",8,10,816.608726,25.603234,"ops/ms"
>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",16,10,687.623660,16.114008,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",16,10,1203.649145,8.835115,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",16,10,1266.241778,7.564414,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",16,10,789.507183,9.866592,"ops/ms"
>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",32,10,690.252411,11.587858,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",32,10,1514185.478492,126804.168771,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",32,10,1264.049209,28.309088,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",32,10,754.828687,14.865909,"ops/ms"
>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",64,10,670.498518,11.147198,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",64,10,1293.301940,22.687086,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",64,10,1380.527892,14.907542,"ops/ms"
>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",64,10,750.528159,11.356281,"ops/ms"
>>>>
>>>> On 9 February 2017 at 13:02, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>> You might try running Ceki’s benchmark project on AWS and publish those numbers here. He also asked people to publish numbers on the Logback user’s list so he can add them to his spreadsheet.
>>>>
>>>> From your numbers and the numbers I’ve been getting, it is clear to me that the SSDs in Apple’s MacBooks are pretty awesome. From the hardware Remko listed, I’d say his machine is about as old as my other MacBook, except that his SSD is slightly faster than my hard drive.
>>>>
>>>> Ralph
>>>>
>>>>> On Feb 9, 2017, at 11:12 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>
>>>>> Ran on an AWS instance (CentOS 7.2), cpuinfo says 8-core Intel(R) Xeon(R) CPU E5-2666 v3 @ 2.90GHz, not super sure about all the params involved in making the instance, but here's some data (same strangeness with MMF):
>>>>>
>>>>> Benchmark Mode Samples Score Error Units
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 86.867 ± 4.502 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 671.156 ± 7.099 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1221.814 ± 22.130 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1178.407 ± 960.141 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1220.746 ± 34.421 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 898.122 ± 8.128 ops/ms
>>>>>
>>>>> On 9 February 2017 at 12:02, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>> Run on a MacBook Pro (Retina, 15-inch, Mid 2015) 2.5 GHz Intel Core i7. Can find out more hardware specs if needed. I also noticed that the memory mapped file starts out fast and slows down over time (somewhat seen by the error value here).
>>>>>
>>>>> Benchmark Mode Samples Score Error Units
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 96.540 ± 7.875 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 766.286 ± 11.461 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1787.620 ± 36.695 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1506.588 ± 956.354 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1934.966 ± 50.089 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1285.066 ± 12.674 ops/ms
>>>>>
>>>>> On 9 February 2017 at 11:44, Remko Popma <remko.popma@gmail.com <ma...@gmail.com>> wrote:
>>>>> My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit), i5-3317u CPU @ 1.70GHz (dual core with hyperthreading for 4 virtual cores), SSD hard disk:
>>>>>
>>>>> java -jar log4j-perf/target/benchmarks.jar ".*FileAppenderBenchmark.*" -f 1 -wi 10 -i 20 -t 4 -tu ms
>>>>>
>>>>> # Run complete. Total time: 00:03:58
>>>>>
>>>>> Benchmark Mode Samples Score Error Units
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.646 ± 0.876 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 405.305 ± 6.596 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 751.949 ± 16.055 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1250.666 ± 168.757 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 728.743 ± 23.909 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 676.926 ± 19.518 ops/ms
>>>>>
>>>>> --------------------
>>>>> Logback config without immediateFlush=false:
>>>>>
>>>>> # Run complete. Total time: 00:03:44
>>>>>
>>>>> Benchmark Mode Samples Score Error Units
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.949 ± 1.220 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 404.042 ± 8.450 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 690.393 ± 115.537 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1221.681 ± 82.205 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 823.059 ± 41.512 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 83.352 ± 11.911 ops/ms
>>>>>
>>>>>
>>>>> On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <mikael.staldal@magine.com <ma...@magine.com>> wrote:
>>>>> I guess that with a memory mapped file, you leave it to the OS to do the best it can, and you lose any direct control over how it is actually done.
>>>>>
>>>>> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>> On my Mac Pro with the slower external SSD I now got:
>>>>>
>>>>> Benchmark Mode Samples Score Error Units
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 73.739 ± 0.740 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 683.129 ± 9.407 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 991.293 ± 193.049 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3072.250 ± 63.475 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1056.256 ± 137.673 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 784.723 ± 153.226 ops/ms
>>>>>
>>>>> and on the same machine with the faster internal SSD I got:
>>>>>
>>>>> Benchmark Mode Samples Score Error Units
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 74.661 ± 0.232 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 647.041 ± 2.994 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1333.887 ± 13.921 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3025.726 ± 210.414 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1433.620 ± 11.194 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1026.319 ± 13.347 ops/ms
>>>>>
>>>>> I will continue to run this on a few other configurations. I think I would also like to add the async appenders/loggers to this test so that one can see all the various options.
>>>>>
>>>>> It is really quite interesting to me to see how the memory mapped appender behaves so differently from one machine to another. I don’t know under what circumstances I would recommend using it though.
>>>>>
>>>>> Ralph
>>>>>
>>>>>
>>>>>> On Feb 9, 2017, at 7:03 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>
>>>>>> After modifying the configuration the new results on my laptop are:
>>>>>>
>>>>>> Benchmark Mode Samples Score Error Units
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 92.580 ± 3.698 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 828.707 ± 55.006 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1647.230 ± 125.682 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1465.002 ± 1284.943 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1765.340 ± 149.707 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1192.594 ± 57.777 ops/ms
>>>>>>
>>>>>> I will try the other machines later and post those results.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>
>>>>>>> On Feb 9, 2017, at 5:22 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>
>>>>>>> Ceki replied on Twitter that the immediateFlush option is now a parameter of the appender, not the encoder, so it looks like the config needs to be changed and the test rerun.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>> On Feb 9, 2017, at 3:08 AM, Remko Popma <remko.popma@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>
>>>>>>>> FYI, The write and flush methods in BufferedOutputStream are also synchronized, so we won't be able to do away with synchronization completely.
>>>>>>>>
>>>>>>>> In OutputStreamManager we synchronize multiple methods but these are nested calls. I thought reentrant synchronization had negligible overhead but I haven't measured this myself.
>>>>>>>>
>>>>>>>>
>>>>>>>> Sent from my iPhone
>>>>>>>>
>>>>>>>> On Feb 9, 2017, at 2:31, Apache <ralph.goers@dslextreme.com> wrote:
>>>>>>>>
>>>>>>>>> I’m pretty sure the problem we have is that a) we are synchronizing many methods and b) we are synchronizing more than just the write. Unfortunately, I can’t figure out how to reduce that based on how dispersed the code has gotten.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>>> On Feb 8, 2017, at 10:14 AM, Apache <ralph.goers@dslextreme.com> wrote:
>>>>>>>>>>
>>>>>>>>>> I tried to modify FileManager to just use a BufferedOutputStream but discovered I couldn’t as the layouts now require the ByteBuffer.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>>> On Feb 8, 2017, at 12:14 AM, Apache <ralph.goers@dslextreme.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>> The append method isn’t synchronized but the writeBytes method acquires a lock. His code is actually a lot simpler than ours in that it just uses a BufferedOutputStream and he only obtains the lock when he is writing to it.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <boards@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>>>>>>>>>>>>
>>>>>>>>>>>> On 6 February 2017 at 18:18, Matt Sicker <boards@gmail.com> wrote:
>>>>>>>>>>>> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
>>>>>>>>>>>>
>>>>>>>>>>>> On 6 February 2017 at 17:50, Apache <ralph.goers@dslextreme.com> wrote:
>>>>>>>>>>>> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>>>>>>>>>
>>>>>>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
>>>>>>>>>>>>
>>>>>>>>>>>> Ralph
>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ralph.goers@dslextreme.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison across the logging frameworks.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <boards@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 6 February 2017 at 07:54, Apache <ralph.goers@dslextreme.com> wrote:
>>>>>>>>>>>>>> Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <boards@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On 5 February 2017 at 16:06, Apache <ralph.goers@dslextreme.com> wrote:
>>>>>>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf to compare your results to Ceki’s. You should also capture the CPU benchmark speed of your processor and the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <garydgregory@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Gary
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ralph.goers@dslextreme.com> wrote:
>>>>>>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ralph.goers@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a few optimizations we can make.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>> Matt Sicker <boards@gmail.com>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>> Matt Sicker <boards@gmail.com>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Matt Sicker <boards@gmail.com>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Matt Sicker <boards@gmail.com>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>>
>>>>> Mikael Ståldal
>>>>> Senior software developer
>>>>>
>>>>> Magine TV
>>>>> mikael.staldal@magine.com
>>>>> Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com
>>>>>
>>>>> Privileged and/or Confidential Information may be contained in this message. If you are not the addressee indicated in this message
>>>>> (or responsible for delivery of the message to such a person), you may not copy or deliver this message to anyone. In such case,
>>>>> you should destroy this message and kindly notify the sender by reply email.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Matt Sicker <boards@gmail.com>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Matt Sicker <boards@gmail.com>
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Matt Sicker <boards@gmail.com>
>>>
>>>
>>>
>>>
>>> --
>>> Matt Sicker <boards@gmail.com>
>>>
>>>
>>>
>>> --
>>> Matt Sicker <boards@gmail.com>
>>
Re: Logback performance improvements
Posted by Remko Popma <re...@gmail.com>.
I don't think it is a good idea to mix async tests with sync tests.
We already have separate benchmarks for the various ways to log asynchronously, if you want to compare them just run those also.
JMH makes it super easy to build benchmarks but we have to be very careful that the test really measures what we want to measure.
Asynchronous logging has two "states", queue full and space available. Do we want to measure the queue full state or the space available state (or the transition between the two)?
With a normal FileAppender JMH will iterate so fast that the queue immediately fills up. This will likely happen during the warmup iterations (no guarantee of course).
What actually happens in that state? We used to block until space becomes available in the queue. What we do now depends on the configured AsyncQueueFullPolicy. Because blocking caused deadlocks in some scenarios, our current default is to bypass the queue and log to the FileAppender directly. I expect that to be slower than the simple FileAppender because of the additional lock contention on the tail of the queue during the attempted thread handover.
The "queue full" state is not the normal logging state for an application. If we want to measure this we should move these tests to a separate benchmark that is clearly marked "QueueFullAsyncBenchmark" or something like that.
Otherwise people reading these benchmark results will misinterpret them and get confused.
The existing Async benchmarks ensure they measure the "queue space available" state.
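The fall-back behavior described above can be sketched in a few lines. This is a hypothetical simplification using a plain BlockingQueue, not the actual Log4j 2 AsyncQueueFullPolicy classes; it only illustrates why the "queue full" state degenerates into contended synchronous logging:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/**
 * Hypothetical sketch (not the actual Log4j 2 classes) of the default
 * "bypass the queue when it is full" policy described above.
 */
public class QueueFullPolicySketch {
    // A tiny bounded queue stands in for the async logger's ring buffer.
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
    // Events written synchronously because the queue had no space.
    private final List<String> directWrites = new ArrayList<>();

    public void log(String event) {
        // Try the asynchronous hand-off first; offer() never blocks.
        if (!queue.offer(event)) {
            // Queue full: fall back to a direct synchronous write.
            // This path contends on the appender lock, which is why the
            // "queue full" state benchmarks slower than plain file logging.
            synchronized (directWrites) {
                directWrites.add(event);
            }
        }
    }

    public int queued() { return queue.size(); }
    public int direct() { return directWrites.size(); }

    public static void main(String[] args) {
        QueueFullPolicySketch sketch = new QueueFullPolicySketch();
        for (int i = 0; i < 5; i++) {
            sketch.log("event-" + i);
        }
        // The first 2 events were queued; the remaining 3 bypassed the queue.
        System.out.println(sketch.queued() + " queued, " + sketch.direct() + " direct");
    }
}
```

With no consumer draining the queue, every event after the first two takes the slow synchronous path, which is the state a JMH throughput loop reaches almost immediately.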
Remko
Sent from my iPhone
> On Feb 12, 2017, at 4:37, Apache <ra...@dslextreme.com> wrote:
>
> Just for fun I decided to add some more tests to the file appender benchmark. I’ve checked them in. Please review them to see if everything is configured so the tests make sense.
>
> Note that I would expect the async appenders to reduce to the speed of the file appender, since once the queue fills up that is what they are waiting on. But I didn’t set a buffer size for the disruptor or async logger tests so I would have expected those to be quite a bit faster than the regular file test.
>
> The one thing that is definitely worth noting is how truly terrible the JUL file appender is. I have to assume that it must be doing an immediate flush on every write.
>
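The immediate-flush hypothesis above is easy to check with a quick sketch. The class name, file name, and record count below are made up for illustration, and absolute timings vary by machine, but flushing after every record reliably erases the benefit of a large buffer:

```java
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

/**
 * Sketch of why flushing after every record is so costly: each flush()
 * pushes the buffered bytes down to the OS, defeating the 256k buffer.
 */
public class FlushCostSketch {
    static long writeRecords(boolean flushEachRecord) throws IOException {
        File file = File.createTempFile("flush-sketch", ".log");
        file.deleteOnExit();
        byte[] record = "a short log record\n".getBytes(StandardCharsets.UTF_8);
        long start = System.nanoTime();
        try (BufferedOutputStream out =
                 new BufferedOutputStream(new FileOutputStream(file), 256 * 1024)) {
            for (int i = 0; i < 100_000; i++) {
                out.write(record);
                if (flushEachRecord) {
                    out.flush(); // what an immediate-flush handler effectively does
                }
            }
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws IOException {
        long buffered = writeRecords(false);
        long flushed = writeRecords(true);
        System.out.println("buffered:         " + buffered / 1_000_000 + " ms");
        System.out.println("flush per record: " + flushed / 1_000_000 + " ms");
    }
}
```

On most machines the flush-per-record run is many times slower, consistent with the julFile numbers in the tables below.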
> This is on my MacBook Pro - what Ceki would call Fast CPU/Fast SSD
>
> Benchmark Mode Samples Score Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 69.546 ± 2.635 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 783.006 ± 28.138 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncAppender thrpt 10 939.605 ± 38.655 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncDisruptor thrpt 10 1446.522 ± 45.485 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncLogger thrpt 10 1269.014 ± 27.236 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1475.605 ± 74.675 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 2131.240 ± 114.184 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1499.667 ± 39.668 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackAsyncFile thrpt 10 326.969 ± 2.690 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 940.527 ± 34.090 ops/ms
>
> And this is on my old MacBook Pro - it uses a hard drive so isn’t very fast.
>
> Benchmark Mode Samples Score Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 15.722 ± 15.557 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 530.668 ± 54.193 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncAppender thrpt 10 498.620 ± 178.693 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncDisruptor thrpt 10 454.541 ± 145.505 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncLogger thrpt 10 527.784 ± 150.269 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 587.605 ± 97.769 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1966.092 ± 431.196 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 364.694 ± 34.602 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackAsyncFile thrpt 10 258.220 ± 1.936 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 560.958 ± 36.982 ops/ms
>
> Ralph
>
>> On Feb 9, 2017, at 1:39 PM, Matt Sicker <bo...@gmail.com> wrote:
>>
>> Also, there are some potential issues with logback-perf:
>>
>> * JMH is way out of date (1.11.3 versus 1.17.4)
>> * Fewer warmup iterations than we use
>>
>> Anyways, results for 32 threads (8 core environment):
>>
>> Benchmark Mode Cnt Score Error Units
>> FileAppenderBenchmark.log4j1File thrpt 10 695.774 ± 9.567 ops/ms
>> FileAppenderBenchmark.log4j2File thrpt 10 1300.091 ± 17.579 ops/ms
>> FileAppenderBenchmark.log4j2RAF thrpt 10 1365.118 ± 17.656 ops/ms
>> FileAppenderBenchmark.logbackFile thrpt 10 766.294 ± 10.121 ops/ms
>>
>>> On 9 February 2017 at 14:37, Matt Sicker <bo...@gmail.com> wrote:
>>> That value does look messed up. I'll re-run the 32 thread tests. Also, I'm not on the logback lists yet, so I'll sign up this afternoon.
>>>
>>>> On 9 February 2017 at 14:35, Apache <ra...@dslextreme.com> wrote:
>>>> What is up with that score for 32 threads? That can’t possibly be correct.
>>>>
>>>> Ralph
>>>>
>>>>> On Feb 9, 2017, at 12:45 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>
>>>>> I ran the logback-perf repo on the same AWS instance. Here's the CSV data. It appears as soon as more than one thread comes into play, log4j2 has better scores.
>>>>>
>>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",1,10,964.600470,279.139021,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",1,10,1274.682156,6.179197,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",1,10,1257.026405,32.898682,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",1,10,1363.683525,41.884725,"ops/ms"
>>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",2,10,687.304803,13.266922,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",2,10,1386.596198,207.305249,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",2,10,1579.884762,24.098318,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",2,10,953.138212,99.156775,"ops/ms"
>>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",4,10,670.442970,15.049614,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",4,10,1218.543593,18.234077,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",4,10,1309.092881,31.547936,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",4,10,845.168355,24.547390,"ops/ms"
>>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",8,10,689.805339,7.415023,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",8,10,1196.396592,19.360776,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",8,10,1319.477318,10.601260,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",8,10,816.608726,25.603234,"ops/ms"
>>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",16,10,687.623660,16.114008,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",16,10,1203.649145,8.835115,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",16,10,1266.241778,7.564414,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",16,10,789.507183,9.866592,"ops/ms"
>>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",32,10,690.252411,11.587858,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",32,10,1514185.478492,126804.168771,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",32,10,1264.049209,28.309088,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",32,10,754.828687,14.865909,"ops/ms"
>>>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",64,10,670.498518,11.147198,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",64,10,1293.301940,22.687086,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",64,10,1380.527892,14.907542,"ops/ms"
>>>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",64,10,750.528159,11.356281,"ops/ms"
>>>>>
>>>>>> On 9 February 2017 at 13:02, Apache <ra...@dslextreme.com> wrote:
>>>>>> You might try running Ceki’s benchmark project on AWS and publish those numbers here. He also asked people to publish numbers on the Logback user’s list so he can add them to his spreadsheet.
>>>>>>
>>>>>> From your numbers and the numbers I’ve been getting, it is clear to me that the SSDs in Apple’s MacBook’s are pretty awesome. From the hardware Remko is listing I’d say his machine is about as old as my other MacBook except that he has an SSD that is slightly faster than my hard drive.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>> On Feb 9, 2017, at 11:12 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>
>>>>>>> Ran on an AWS instance (CentOS 7.2), cpuinfo says 8-core Intel(R) Xeon(R) CPU E5-2666 v3 @ 2.90GHz, not super sure about all the params involved in making the instance, but here's some data (same strangeness with MMF):
>>>>>>>
>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 86.867 ± 4.502 ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 671.156 ± 7.099 ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1221.814 ± 22.130 ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1178.407 ± 960.141 ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1220.746 ± 34.421 ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 898.122 ± 8.128 ops/ms
>>>>>>>
>>>>>>>> On 9 February 2017 at 12:02, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>> Run on a MacBook Pro (Retina, 15-inch, Mid 2015) 2.5 GHz Intel Core i7. Can find out more hardware specs if needed. I also noticed that the memory mapped file starts out fast and slows down over time (somewhat seen by the error value here).
>>>>>>>>
>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 96.540 ± 7.875 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 766.286 ± 11.461 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1787.620 ± 36.695 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1506.588 ± 956.354 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1934.966 ± 50.089 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1285.066 ± 12.674 ops/ms
>>>>>>>>
>>>>>>>>> On 9 February 2017 at 11:44, Remko Popma <re...@gmail.com> wrote:
>>>>>>>>> My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit), i5-3317u CPU @ 1.70GHz (dual core with hyperthreading for 4 virtual cores), SSD hard disk:
>>>>>>>>>
>>>>>>>>> java -jar log4j-perf/target/benchmarks.jar ".*FileAppenderBenchmark.*" -f 1 -wi 10 -i 20 -t 4 -tu ms
>>>>>>>>>
>>>>>>>>> # Run complete. Total time: 00:03:58
>>>>>>>>>
>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.646 ± 0.876 ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 405.305 ± 6.596 ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 751.949 ± 16.055 ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1250.666 ± 168.757 ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 728.743 ± 23.909 ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 676.926 ± 19.518 ops/ms
>>>>>>>>>
>>>>>>>>> --------------------
>>>>>>>>> Logback config without immediateFlush=false:
>>>>>>>>>
>>>>>>>>> # Run complete. Total time: 00:03:44
>>>>>>>>>
>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.949 ± 1.220 ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 404.042 ± 8.450 ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 690.393 ± 115.537 ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1221.681 ± 82.205 ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 823.059 ± 41.512 ops/ms
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 83.352 ± 11.911 ops/ms
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <mi...@magine.com> wrote:
>>>>>>>>>> I guess that with a memory mapped file, you leave it to the OS to do the best it can, and you lose any direct control over how it is actually done.
>>>>>>>>>>
>>>>>>>>>>> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>> On my Mac Pro with the slower external SSD I now got:
>>>>>>>>>>>
>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 73.739 ± 0.740 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 683.129 ± 9.407 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 991.293 ± 193.049 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3072.250 ± 63.475 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1056.256 ± 137.673 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 784.723 ± 153.226 ops/ms
>>>>>>>>>>>
>>>>>>>>>>> and on the same machine with the faster internal SSD I got:
>>>>>>>>>>>
>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 74.661 ± 0.232 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 647.041 ± 2.994 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1333.887 ± 13.921 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3025.726 ± 210.414 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1433.620 ± 11.194 ops/ms
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1026.319 ± 13.347 ops/ms
>>>>>>>>>>>
>>>>>>>>>>> I will continue to run this on a few other configurations. I think I would also like to add the async appenders/loggers to this test so that one can see all the various options.
>>>>>>>>>>>
>>>>>>>>>>> It is really quite interesting to me to see how the memory mapped appender behaves so differently from one machine to another. I don’t know under what circumstances I would recommend using it though.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> On Feb 9, 2017, at 7:03 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> After modifying the configuration the new results on my laptop are:
>>>>>>>>>>>>
>>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 92.580 ± 3.698 ops/ms
>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 828.707 ± 55.006 ops/ms
>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1647.230 ± 125.682 ops/ms
>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1465.002 ± 1284.943 ops/ms
>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1765.340 ± 149.707 ops/ms
>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1192.594 ± 57.777 ops/ms
>>>>>>>>>>>>
>>>>>>>>>>>> I will try the other machines later and post those results.
>>>>>>>>>>>>
>>>>>>>>>>>> Ralph
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 9, 2017, at 5:22 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ceki replied on Twitter that the immediateFlush option is now a parameter of the appender, not the encoder, so it looks like the config needs to be changed and the test rerun.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Feb 9, 2017, at 3:08 AM, Remko Popma <re...@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> FYI, The write and flush methods in BufferedOutputStream are also synchronized, so we won't be able to do away with synchronization completely.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> In OutputStreamManager we synchronize multiple methods but these are nested calls. I thought reentrant synchronization had negligible overhead but I haven't measured this myself.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Sent from my iPhone
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Feb 9, 2017, at 2:31, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I’m pretty sure the problem we have is that a) we are synchronizing many methods and b) we are synchronizing more than just the write. Unfortunately, I can’t figure out how to reduce that based on how dispersed the code has gotten.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Feb 8, 2017, at 10:14 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I tried to modify FileManager to just use a BufferedOutputStream but discovered I couldn’t as the layouts now require the ByteBuffer.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Feb 8, 2017, at 12:14 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> The append method isn’t synchronized but the writeBytes method acquires a lock. His code is actually a lot simpler than ours in that it just uses a BufferedOutputStream and he only obtains the lock when he is writing to it.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>>>> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison across the logging frameworks.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>> Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf to compare your results to Ceki’s. You should also capture the CPU benchmark speed of your processor and the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Gary
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>>>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>>>>>>>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>>>>>>>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>>>>>>>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>>>>>>>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a few optimizations we can make.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Mikael Ståldal
>>>>>>>>>> Senior software developer
>>>>>>>>>>
>>>>>>>>>> Magine TV
>>>>>>>>>> mikael.staldal@magine.com
>>>>>>>>>> Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com
>>>>>>>>>>
>>>>>>>>>> Privileged and/or Confidential Information may be contained in this message. If you are not the addressee indicated in this message
>>>>>>>>>> (or responsible for delivery of the message to such a person), you may not copy or deliver this message to anyone. In such case,
>>>>>>>>>> you should destroy this message and kindly notify the sender by reply email.
Re: Logback performance improvements
Posted by Matt Sicker <bo...@gmail.com>.
Whoa, I wonder how async MMF benchmarks compare. Also, I wonder if there's
some sort of cloud service out there we can use for free to run benchmarks
on a regular basis. Might even be doable with a dedicated VM on
builds.apache.org.
On 11 February 2017 at 13:37, Apache <ra...@dslextreme.com> wrote:
> Just for fun I decided to add some more tests to the file appender
> benchmark. I’ve checked them in. Please review them to see if everything is
> configured so the tests make sense.
>
> Note that I would expect the async appenders to reduce to the speed of the
> file appender, since once the queue fills up that is what they are waiting
> on. But I didn’t set a buffer size for the disruptor or async logger tests
> so I would have expected those to be quite a bit faster than the regular
> file test.
>
> The one thing that is definitely worth noting is how truly terrible the
> JUL file appender is. I have to assume that it must be doing an immediate
> flush on every write.
>
> This is on my MacBook Pro - what Ceki would call Fast CPU/Fast SSD
>
> Benchmark Mode Samples Score Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 69.546 ± 2.635 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 783.006 ± 28.138 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncAppender thrpt 10 939.605 ± 38.655 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncDisruptor thrpt 10 1446.522 ± 45.485 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncLogger thrpt 10 1269.014 ± 27.236 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1475.605 ± 74.675 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 2131.240 ± 114.184 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1499.667 ± 39.668 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackAsyncFile thrpt 10 326.969 ± 2.690 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 940.527 ± 34.090 ops/ms
>
> And this is on my old MacBook Pro - it uses a hard drive so isn’t very
> fast.
>
> Benchmark Mode Samples Score Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 15.722 ± 15.557 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 530.668 ± 54.193 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncAppender thrpt 10 498.620 ± 178.693 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncDisruptor thrpt 10 454.541 ± 145.505 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncLogger thrpt 10 527.784 ± 150.269 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 587.605 ± 97.769 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1966.092 ± 431.196 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 364.694 ± 34.602 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackAsyncFile thrpt 10 258.220 ± 1.936 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 560.958 ± 36.982 ops/ms
>
> Ralph
>
> On Feb 9, 2017, at 1:39 PM, Matt Sicker <bo...@gmail.com> wrote:
>
> Also, there are some potential issues with logback-perf:
>
> * JMH is way out of date (1.11.3 versus 1.17.4)
> * Less warmup iterations than we do
>
> Anyways, results for 32 threads (8 core environment):
>
> Benchmark Mode Cnt Score Error Units
> FileAppenderBenchmark.log4j1File thrpt 10 695.774 ± 9.567 ops/ms
> FileAppenderBenchmark.log4j2File thrpt 10 1300.091 ± 17.579 ops/ms
> FileAppenderBenchmark.log4j2RAF thrpt 10 1365.118 ± 17.656 ops/ms
> FileAppenderBenchmark.logbackFile thrpt 10 766.294 ± 10.121 ops/ms
>
> On 9 February 2017 at 14:37, Matt Sicker <bo...@gmail.com> wrote:
>
>> That value does look messed up. I'll re-run the 32 thread tests. Also,
>> I'm not on the logback lists yet, so I'll sign up this afternoon.
>>
>> On 9 February 2017 at 14:35, Apache <ra...@dslextreme.com> wrote:
>>
>>> What is up with that score for 32 threads? That can’t possibly be
>>> correct.
>>>
>>> Ralph
>>>
>>> On Feb 9, 2017, at 12:45 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>
>>> I ran the logback-perf repo on the same AWS instance. Here's the CSV
>>> data. It appears as soon as more than one thread comes into play, log4j2
>>> has better scores.
>>>
>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",1,10,964.600470,279.139021,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",1,10,1274.682156,6.179197,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",1,10,1257.026405,32.898682,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",1,10,1363.683525,41.884725,"ops/ms"
>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",2,10,687.304803,13.266922,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",2,10,1386.596198,207.305249,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",2,10,1579.884762,24.098318,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",2,10,953.138212,99.156775,"ops/ms"
>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",4,10,670.442970,15.049614,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",4,10,1218.543593,18.234077,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",4,10,1309.092881,31.547936,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",4,10,845.168355,24.547390,"ops/ms"
>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",8,10,689.805339,7.415023,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",8,10,1196.396592,19.360776,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",8,10,1319.477318,10.601260,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",8,10,816.608726,25.603234,"ops/ms"
>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",16,10,687.623660,16.114008,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",16,10,1203.649145,8.835115,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",16,10,1266.241778,7.564414,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",16,10,789.507183,9.866592,"ops/ms"
>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",32,10,690.252411,11.587858,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",32,10,1514185.478492,126804.168771,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",32,10,1264.049209,28.309088,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",32,10,754.828687,14.865909,"ops/ms"
>>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",64,10,670.498518,11.147198,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",64,10,1293.301940,22.687086,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",64,10,1380.527892,14.907542,"ops/ms"
>>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",64,10,750.528159,11.356281,"ops/ms"
>>>
>>> On 9 February 2017 at 13:02, Apache <ra...@dslextreme.com> wrote:
>>>
>>>> You might try running Ceki’s benchmark project on AWS and publish those
>>>> numbers here. He also asked people to publish numbers on the Logback user’s
>>>> list so he can add them to his spreadsheet.
>>>>
>>>> From your numbers and the numbers I’ve been getting, it is clear to me
>>>> that the SSDs in Apple’s MacBook’s are pretty awesome. From the hardware
>>>> Remko is listing I’d say his machine is about as old as my other MacBook
>>>> except that he has an SSD that is slightly faster than my hard drive.
>>>>
>>>> Ralph
>>>>
>>>> On Feb 9, 2017, at 11:12 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>
>>>> Ran on an AWS instance (CentOS 7.2), cpuinfo says 8-core Intel(R)
>>>> Xeon(R) CPU E5-2666 v3 @ 2.90GHz, not super sure about all the params
>>>> involved in making the instance, but here's some data (same strangeness
>>>> with MMF):
>>>>
>>>> Benchmark Mode Samples Score Error Units
>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 86.867 ± 4.502 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 671.156 ± 7.099 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1221.814 ± 22.130 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1178.407 ± 960.141 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1220.746 ± 34.421 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 898.122 ± 8.128 ops/ms
>>>>
>>>> On 9 February 2017 at 12:02, Matt Sicker <bo...@gmail.com> wrote:
>>>>
>>>>> Run on a MacBook Pro (Retina, 15-inch, Mid 2015) 2.5 GHz Intel Core
>>>>> i7. Can find out more hardware specs if needed. I also noticed that the
>>>>> memory mapped file starts out fast and slows down over time (somewhat seen
>>>>> by the error value here).
>>>>>
>>>>> Benchmark Mode Samples Score Error Units
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 96.540 ± 7.875 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 766.286 ± 11.461 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1787.620 ± 36.695 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1506.588 ± 956.354 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1934.966 ± 50.089 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1285.066 ± 12.674 ops/ms
>>>>>
>>>>> On 9 February 2017 at 11:44, Remko Popma <re...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit),
>>>>>> i5-3317u CPU @ 1.70GHz (dual core with hyperthreading for 4 virtual cores),
>>>>>> SSD hard disk:
>>>>>>
>>>>>> java -jar log4j-perf/target/benchmarks.jar
>>>>>> ".*FileAppenderBenchmark.*" -f 1 -wi 10 -i 20 -t 4 -tu ms
>>>>>>
>>>>>> # Run complete. Total time: 00:03:58
>>>>>>
>>>>>> Benchmark Mode Samples Score Error Units
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.646 ± 0.876 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 405.305 ± 6.596 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 751.949 ± 16.055 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1250.666 ± 168.757 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 728.743 ± 23.909 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 676.926 ± 19.518 ops/ms
>>>>>>
>>>>>> --------------------
>>>>>> Logback config without immediateFlush=false:
>>>>>>
>>>>>> # Run complete. Total time: 00:03:44
>>>>>>
>>>>>> Benchmark Mode Samples Score Error Units
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.949 ± 1.220 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 404.042 ± 8.450 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 690.393 ± 115.537 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1221.681 ± 82.205 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 823.059 ± 41.512 ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 83.352 ± 11.911 ops/ms
>>>>>>
>>>>>>
>>>>>> On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <
>>>>>> mikael.staldal@magine.com> wrote:
>>>>>>
>>>>>>> I guess that with a memory mapped file, you leave it to the OS to do
>>>>>>> the best it can, and you lose any direct control over how it is actually
>>>>>>> done.
>>>>>>>
>>>>>>> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ra...@dslextreme.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> On my Mac Pro with the slower external SSD I now got:
>>>>>>>>
>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 73.739 ± 0.740 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 683.129 ± 9.407 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 991.293 ± 193.049 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3072.250 ± 63.475 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1056.256 ± 137.673 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 784.723 ± 153.226 ops/ms
>>>>>>>>
>>>>>>>> and on the same machine with the faster internal SSD I got:
>>>>>>>>
>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 74.661 ± 0.232 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 647.041 ± 2.994 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1333.887 ± 13.921 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3025.726 ± 210.414 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1433.620 ± 11.194 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1026.319 ± 13.347 ops/ms
>>>>>>>>
>>>>>>>> I will continue to run this on a few other configurations. I think
>>>>>>>> I would also like to add the async appenders/loggers to this test so that
>>>>>>>> one can see all the various options.
>>>>>>>>
>>>>>>>> It is really quite interesting to me to see how the memory mapped
>>>>>>>> appender behaves so differently from one machine to another. I don’t know
>>>>>>>> under what circumstances I would recommend using it though.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>>
>>>>>>>> On Feb 9, 2017, at 7:03 AM, Apache <ra...@dslextreme.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> After modifying the configuration the new results on my laptop are:
>>>>>>>>
>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 92.580 ± 3.698 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 828.707 ± 55.006 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1647.230 ± 125.682 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1465.002 ± 1284.943 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1765.340 ± 149.707 ops/ms
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1192.594 ± 57.777 ops/ms
>>>>>>>>
>>>>>>>> I will try the other machines later and post those results.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>>
>>>>>>>> On Feb 9, 2017, at 5:22 AM, Apache <ra...@dslextreme.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Ceki replied on Twitter that the immediateFlush option is now a
>>>>>>>> parameter of the appender, not the encoder, so it looks like the config
>>>>>>>> needs to be changed and the test rerun.
>>>>>>>>
>>>>>>>> Ralph
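For reference, the appender-level setting Ceki described would look roughly like this in a logback 1.2 configuration (a sketch only; the file name and pattern are invented):

```xml
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>app.log</file>
    <!-- In logback 1.2.x this is configured on the appender,
         not inside the encoder as in earlier versions. -->
    <immediateFlush>false</immediateFlush>
    <encoder>
      <pattern>%d %level %logger - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="info">
    <appender-ref ref="FILE"/>
  </root>
</configuration>
```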
>>>>>>>>
>>>>>>>> On Feb 9, 2017, at 3:08 AM, Remko Popma <re...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> FYI, The write and flush methods in BufferedOutputStream are also
>>>>>>>> synchronized, so we won't be able to do away with synchronization
>>>>>>>> completely.
>>>>>>>>
>>>>>>>> In OutputStreamManager we synchronize multiple methods but these
>>>>>>>> are nested calls. I thought reentrant synchronization had negligible
>>>>>>>> overhead but I haven't measured this myself.
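Reentrant acquisition, the nested synchronized calls Remko mentions, means a thread re-locks a monitor it already owns, which the JVM permits without deadlock. A minimal illustration (hypothetical class, not the actual OutputStreamManager):

```java
public class ReentrantSync {
    private int writes = 0;

    // Outer entry point, analogous to a manager's top-level write method.
    public synchronized void writeWithHeader(String msg) {
        write("HEADER"); // nested call re-acquires the same monitor: no deadlock
        write(msg);
    }

    public synchronized void write(String msg) {
        writes++;
    }

    public static void main(String[] args) {
        ReentrantSync m = new ReentrantSync();
        m.writeWithHeader("hello");
        System.out.println(m.writes); // prints 2
    }
}
```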
>>>>>>>>
>>>>>>>>
>>>>>>>> Sent from my iPhone
>>>>>>>>
>>>>>>>> On Feb 9, 2017, at 2:31, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>
>>>>>>>> I’m pretty sure the problem we have is that a) we are synchronizing
>>>>>>>> many methods and b) we are synchronizing more than just the write.
>>>>>>>> Unfortunately, I can’t figure out how to reduce that based on how dispersed
>>>>>>>> the code has gotten.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>> On Feb 8, 2017, at 10:14 AM, Apache <ra...@dslextreme.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> I tried to modify FileManager to just use a BufferedOutputStream
>>>>>>>> but discovered I couldn’t as the layouts now require the ByteBuffer.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>> On Feb 8, 2017, at 12:14 AM, Apache <ra...@dslextreme.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> The append method isn’t synchronized but the writeBytes method
>>>>>>>> acquires a lock. His code is actually a lot simpler than ours in that it
>>>>>>>> just uses a BufferedOutputStream and he only obtains the lock when he is
>>>>>>>> writing to it.
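The narrow-locking pattern described above, where layout and encoding happen on the appending thread and the lock is held only for the buffered write, can be sketched as follows (hypothetical appender class; all names are invented):

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class NarrowLockAppender {
    private final BufferedOutputStream out;
    private final Object lock = new Object();

    public NarrowLockAppender(BufferedOutputStream out) {
        this.out = out;
    }

    public void append(String level, String message) throws IOException {
        // Layout/encoding runs outside the lock, per appending thread.
        byte[] encoded = (level + " " + message + "\n").getBytes(StandardCharsets.UTF_8);
        // Only the actual write is serialized.
        synchronized (lock) {
            out.write(encoded);
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        NarrowLockAppender appender = new NarrowLockAppender(new BufferedOutputStream(sink));
        appender.append("INFO", "narrow locking");
        appender.out.flush();
        System.out.print(sink.toString(StandardCharsets.UTF_8.name()));
    }
}
```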
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>
>>>>>>>> I'm not sure if I'm looking in the right place, but a major
>>>>>>>> difference now between Logback's appenders and Log4j's is that Logback
>>>>>>>> isn't synchronized on the append method.
>>>>>>>>
>>>>>>>> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Is this something we can improve performance on by implementing a
>>>>>>>>> file appender based on FileChannel or AsynchronousFileChannel instead of
>>>>>>>>> OutputStream?
>>>>>>>>>
>>>>>>>>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Ceki has updated his numbers to include those reported on the
>>>>>>>>>> mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>>>>>>>
>>>>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers
>>>>>>>>>> for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> Yes, that is still the standard approach most people use and is
>>>>>>>>>> the only way to provide a head-to-head comparison against the logging
>>>>>>>>>> frameworks.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>> This is all in a synchronous appender, right? Either way, that's
>>>>>>>>>> rather interesting.
>>>>>>>>>>
>>>>>>>>>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> Someone posted numbers on the Logback user’s list that match
>>>>>>>>>>> mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat
>>>>>>>>>>> better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> I think we need some comparisons on the log4j side: file
>>>>>>>>>>> appender with 256k buffer size, random access file appender with 256k
>>>>>>>>>>> buffer size (which appears to be the default), and memory mapped file
>>>>>>>>>>> appender. It'd be cool to see how these compose with async logging enabled
>>>>>>>>>>> in both log4j and logback.
>>>>>>>>>>>
>>>>>>>>>>> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf to
>>>>>>>>>>>> compare your results to Ceki’s. You should also capture your
>>>>>>>>>>>> processor’s CPU benchmark speed and your hard drive’s speed. I used
>>>>>>>>>>>> Blackmagic Disk Speed Test on my Mac. I am capturing my results in a
>>>>>>>>>>>> Google spreadsheet. I will post the link once I have it.
>>>>>>>>>>>>
>>>>>>>>>>>> Ralph
>>>>>>>>>>>>
>>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <
>>>>>>>>>>>> garydgregory@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> If you want, I can run tests on Windows once the build works on
>>>>>>>>>>>> Windows again.
>>>>>>>>>>>>
>>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>>
>>>>>>>>>>>> Gary
>>>>>>>>>>>>
>>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard
>>>>>>>>>>>>> drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>>>
>>>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>>>>>>>
>>>>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run
>>>>>>>>>>>>> anything directly on bare metal any more.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com>
>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the
>>>>>>>>>>>>> FileAppender. See https://logback.qos.ch/news.html and
>>>>>>>>>>>>> https://docs.google.com/spreadsheets/d/1cpb5D7qny
>>>>>>>>>>>>> ye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we
>>>>>>>>>>>>> have a few optimizations we can make.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
Just for fun I decided to add some more tests to the file appender benchmark. I’ve checked them in. Please review them to see if everything is configured so the tests make sense.
Note that I would expect the async appenders to reduce to the speed of the file appender, since once the queue fills up that is what they are waiting on. But I didn’t set a buffer size for the disruptor or async logger tests so I would have expected those to be quite a bit faster than the regular file test.
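The saturation effect can be sketched with a plain bounded queue: once the downstream consumer cannot keep up, a full buffer forces producers to block (with put) or drop (with offer), so sustained throughput degenerates to the file appender's drain rate. A toy illustration (hypothetical, unrelated to the actual disruptor or AsyncAppender code):

```java
import java.util.concurrent.ArrayBlockingQueue;

public class QueueSaturation {
    public static void main(String[] args) {
        // A bounded queue standing in for an async appender's buffer.
        ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(4);

        // With no consumer draining, the first 4 offers succeed and the
        // rest fail; with put() the producer would instead block and run
        // at exactly the consumer's drain rate.
        int accepted = 0;
        for (int i = 0; i < 10; i++) {
            if (queue.offer("event-" + i)) {
                accepted++;
            }
        }
        System.out.println("accepted=" + accepted); // prints accepted=4
    }
}
```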
The one thing that is definitely worth noting is how truly terrible the JUL file appender is. I have to assume that it must be doing an immediate flush on every write.
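If JUL's FileHandler is indeed flushing on every publish, the cost is easy to demonstrate outside any logging framework. The sketch below (a hypothetical FlushCost class, not JUL code) times buffered writes with and without a per-record flush:

```java
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class FlushCost {
    static long write(OutputStream out, boolean flushEachRecord, int records) throws IOException {
        byte[] line = "2017-02-09 12:00:00 INFO  Benchmark - a typical log line\n".getBytes();
        long start = System.nanoTime();
        for (int i = 0; i < records; i++) {
            out.write(line);
            if (flushEachRecord) {
                out.flush(); // pushes the buffered bytes to the OS on every record
            }
        }
        out.flush();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws IOException {
        Path f1 = Files.createTempFile("buffered", ".log");
        Path f2 = Files.createTempFile("flushing", ".log");
        try (OutputStream buffered = new BufferedOutputStream(new FileOutputStream(f1.toFile()), 256 * 1024);
             OutputStream flushing = new BufferedOutputStream(new FileOutputStream(f2.toFile()), 256 * 1024)) {
            long batched = write(buffered, false, 100_000);
            long eager = write(flushing, true, 100_000);
            System.out.println("batched flush: " + batched / 1_000_000 + " ms");
            System.out.println("flush per record: " + eager / 1_000_000 + " ms");
        }
        Files.deleteIfExists(f1);
        Files.deleteIfExists(f2);
    }
}
```

On most machines the per-record flush run is dramatically slower, which would account for JUL's numbers above.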
This is on my MacBook Pro - what Ceki would call Fast CPU/Fast SSD
Benchmark Mode Samples Score Error Units
o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 69.546 ± 2.635 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 783.006 ± 28.138 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncAppender thrpt 10 939.605 ± 38.655 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncDisruptor thrpt 10 1446.522 ± 45.485 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncLogger thrpt 10 1269.014 ± 27.236 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1475.605 ± 74.675 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 2131.240 ± 114.184 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1499.667 ± 39.668 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.logbackAsyncFile thrpt 10 326.969 ± 2.690 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 940.527 ± 34.090 ops/ms
And this is on my old MacBook Pro - it uses a hard drive so isn’t very fast.
Benchmark Mode Samples Score Error Units
o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 15.722 ± 15.557 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 530.668 ± 54.193 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncAppender thrpt 10 498.620 ± 178.693 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncDisruptor thrpt 10 454.541 ± 145.505 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2AsyncLogger thrpt 10 527.784 ± 150.269 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 587.605 ± 97.769 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1966.092 ± 431.196 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 364.694 ± 34.602 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.logbackAsyncFile thrpt 10 258.220 ± 1.936 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 560.958 ± 36.982 ops/ms
Ralph
> On Feb 9, 2017, at 1:39 PM, Matt Sicker <bo...@gmail.com> wrote:
>
> Also, there are some potential issues with logback-perf:
>
> * JMH is way out of date (1.11.3 versus 1.17.4)
> * Less warmup iterations than we do
>
> Anyways, results for 32 threads (8 core environment):
>
> Benchmark Mode Cnt Score Error Units
> FileAppenderBenchmark.log4j1File thrpt 10 695.774 ± 9.567 ops/ms
> FileAppenderBenchmark.log4j2File thrpt 10 1300.091 ± 17.579 ops/ms
> FileAppenderBenchmark.log4j2RAF thrpt 10 1365.118 ± 17.656 ops/ms
> FileAppenderBenchmark.logbackFile thrpt 10 766.294 ± 10.121 ops/ms
>
> On 9 February 2017 at 14:37, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
> That value does look messed up. I'll re-run the 32 thread tests. Also, I'm not on the logback lists yet, so I'll sign up this afternoon.
>
> On 9 February 2017 at 14:35, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
> What is up with that score for 32 threads? That can’t possibly be correct.
>
> Ralph
>
>> On Feb 9, 2017, at 12:45 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>
>> I ran the logback-perf repo on the same AWS instance. Here's the CSV data. It appears as soon as more than one thread comes into play, log4j2 has better scores.
>>
>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",1,10,964.600470,279.139021,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",1,10,1274.682156,6.179197,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",1,10,1257.026405,32.898682,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",1,10,1363.683525,41.884725,"ops/ms"
>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",2,10,687.304803,13.266922,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",2,10,1386.596198,207.305249,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",2,10,1579.884762,24.098318,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",2,10,953.138212,99.156775,"ops/ms"
>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",4,10,670.442970,15.049614,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",4,10,1218.543593,18.234077,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",4,10,1309.092881,31.547936,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",4,10,845.168355,24.547390,"ops/ms"
>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",8,10,689.805339,7.415023,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",8,10,1196.396592,19.360776,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",8,10,1319.477318,10.601260,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",8,10,816.608726,25.603234,"ops/ms"
>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",16,10,687.623660,16.114008,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",16,10,1203.649145,8.835115,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",16,10,1266.241778,7.564414,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",16,10,789.507183,9.866592,"ops/ms"
>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",32,10,690.252411,11.587858,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",32,10,1514185.478492,126804.168771,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",32,10,1264.049209,28.309088,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",32,10,754.828687,14.865909,"ops/ms"
>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",64,10,670.498518,11.147198,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",64,10,1293.301940,22.687086,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",64,10,1380.527892,14.907542,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",64,10,750.528159,11.356281,"ops/ms"
>>
>> On 9 February 2017 at 13:02, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>> You might try running Ceki’s benchmark project on AWS and publish those numbers here. He also asked people to publish numbers on the Logback user’s list so he can add them to his spreadsheet.
>>
>> From your numbers and the numbers I’ve been getting, it is clear to me that the SSDs in Apple’s MacBooks are pretty awesome. From the hardware Remko is listing I’d say his machine is about as old as my other MacBook except that he has an SSD that is slightly faster than my hard drive.
>>
>> Ralph
>>
>>> On Feb 9, 2017, at 11:12 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>
>>> Ran on an AWS instance (CentOS 7.2), cpuinfo says 8-core Intel(R) Xeon(R) CPU E5-2666 v3 @ 2.90GHz, not super sure about all the params involved in making the instance, but here's some data (same strangeness with MMF):
>>>
>>> Benchmark Mode Samples Score Error Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 86.867 ± 4.502 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 671.156 ± 7.099 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1221.814 ± 22.130 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1178.407 ± 960.141 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1220.746 ± 34.421 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 898.122 ± 8.128 ops/ms
>>>
>>> On 9 February 2017 at 12:02, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>> Run on a MacBook Pro (Retina, 15-inch, Mid 2015) 2.5 GHz Intel Core i7. Can find out more hardware specs if needed. I also noticed that the memory mapped file starts out fast and slows down over time (somewhat seen by the error value here).
>>>
>>> Benchmark Mode Samples Score Error Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 96.540 ± 7.875 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 766.286 ± 11.461 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1787.620 ± 36.695 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1506.588 ± 956.354 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1934.966 ± 50.089 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1285.066 ± 12.674 ops/ms
>>>
>>> On 9 February 2017 at 11:44, Remko Popma <remko.popma@gmail.com <ma...@gmail.com>> wrote:
>>> My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit), i5-3317u CPU @ 1.70GHz (dual core with hyperthreading for 4 virtual cores), SSD hard disk:
>>>
>>> java -jar log4j-perf/target/benchmarks.jar ".*FileAppenderBenchmark.*" -f 1 -wi 10 -i 20 -t 4 -tu ms
>>>
>>> # Run complete. Total time: 00:03:58
>>>
>>> Benchmark Mode Samples Score Error Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.646 ± 0.876 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 405.305 ± 6.596 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 751.949 ± 16.055 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1250.666 ± 168.757 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 728.743 ± 23.909 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 676.926 ± 19.518 ops/ms
>>>
>>> --------------------
>>> Logback config without immediateFlush=false:
>>>
>>> # Run complete. Total time: 00:03:44
>>>
>>> Benchmark Mode Samples Score Error Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.949 ± 1.220 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 404.042 ± 8.450 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 690.393 ± 115.537 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1221.681 ± 82.205 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 823.059 ± 41.512 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 83.352 ± 11.911 ops/ms
>>>
>>>
>>> On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <mikael.staldal@magine.com <ma...@magine.com>> wrote:
>>> I guess that with a memory mapped file, you leave it to the OS to do the best it can, and you lose any direct control over how it is actually done.
>>>
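Mikael’s point about ceding control to the OS can be sketched roughly as follows. This is an illustrative sketch only (the class and method names are hypothetical, not log4j’s actual MemoryMappedFileManager): each append is just a buffer put with no per-event syscall, and dirty pages reach disk whenever the OS decides, unless `force()` is called explicitly.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical sketch: log events go into a memory-mapped region; the OS
// controls when the pages are actually written out (force() makes it explicit).
class MappedWriteSketch {
    static MappedByteBuffer map(Path file, int size) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // The mapping remains valid after the channel is closed.
            return ch.map(FileChannel.MapMode.READ_WRITE, 0, size);
        }
    }

    static void append(MappedByteBuffer region, String msg) {
        // No write() syscall per event -- just a memory copy into the mapping.
        region.put((msg + "\n").getBytes(StandardCharsets.UTF_8));
    }
}
```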
>>> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>> On my Mac Pro with the slower external SSD I now got:
>>>
>>> Benchmark Mode Samples Score Error Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 73.739 ± 0.740 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 683.129 ± 9.407 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 991.293 ± 193.049 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3072.250 ± 63.475 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1056.256 ± 137.673 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 784.723 ± 153.226 ops/ms
>>>
>>> and on the same machine with the faster internal SSD I got:
>>>
>>> Benchmark Mode Samples Score Error Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 74.661 ± 0.232 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 647.041 ± 2.994 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1333.887 ± 13.921 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3025.726 ± 210.414 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1433.620 ± 11.194 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1026.319 ± 13.347 ops/ms
>>>
>>> I will continue to run this on a few other configurations. I think I would also like to add the async appenders/loggers to this test so that one can see all the various options.
>>>
>>> It is really quite interesting to me to see how the memory mapped appender behaves so differently from one machine to another. I don’t know under what circumstances I would recommend using it though.
>>>
>>> Ralph
>>>
>>>
>>>> On Feb 9, 2017, at 7:03 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>
>>>> After modifying the configuration the new results on my laptop are:
>>>>
>>>> Benchmark Mode Samples Score Error Units
>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 92.580 ± 3.698 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 828.707 ± 55.006 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1647.230 ± 125.682 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1465.002 ± 1284.943 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1765.340 ± 149.707 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1192.594 ± 57.777 ops/ms
>>>>
>>>> I will try the other machines later and post those results.
>>>>
>>>> Ralph
>>>>
>>>>
>>>>> On Feb 9, 2017, at 5:22 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>
>>>>> Ceki replied on Twitter that the immediateFlush option is now a parameter of the appender, not the encoder, so it looks like the config needs to be changed and the test rerun.
>>>>>
>>>>> Ralph
>>>>>
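The change Ceki describes would look roughly like the following in the benchmark’s logback.xml. This is a hedged sketch: the file name and pattern are illustrative, and only the placement of immediateFlush (on the appender rather than the encoder, per Logback 1.2) is the point.

```xml
<appender name="FILE" class="ch.qos.logback.core.FileAppender">
  <file>target/test.log</file>
  <!-- Since Logback 1.2, immediateFlush belongs on the appender... -->
  <immediateFlush>false</immediateFlush>
  <encoder>
    <!-- ...not on the encoder, where it was configured in 1.1.x -->
    <pattern>%d %p %c{1.} [%t] %m%n</pattern>
  </encoder>
</appender>
```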
>>>>> On Feb 9, 2017, at 3:08 AM, Remko Popma <remko.popma@gmail.com <ma...@gmail.com>> wrote:
>>>>>
>>>>>> FYI, The write and flush methods in BufferedOutputStream are also synchronized, so we won't be able to do away with synchronization completely.
>>>>>>
>>>>>> In OutputStreamManager we synchronize multiple methods but these are nested calls. I thought reentrant synchronization had negligible overhead but I haven't measured this myself.
>>>>>>
>>>>>>
>>>>>> Sent from my iPhone
>>>>>>
>>>>>> On Feb 9, 2017, at 2:31, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>
>>>>>>> I’m pretty sure the problem we have is that a) we are synchronizing many methods and b) we are synchronizing more than just the write. Unfortunately, I can’t figure out how to reduce that based on how dispersed the code has gotten.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>> On Feb 8, 2017, at 10:14 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>
>>>>>>>> I tried to modify FileManager to just use a BufferedOutputStream but discovered I couldn’t as the layouts now require the ByteBuffer.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>>> On Feb 8, 2017, at 12:14 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>
>>>>>>>>> The append method isn’t synchronized but the writeBytes method acquires a lock. His code is actually a lot simpler than ours in that it just uses a BufferedOutputStream and he only obtains the lock when he is writing to it.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
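The structure Ralph describes (a plain BufferedOutputStream, with the lock held only around the write) can be sketched as below. The class name is hypothetical and this is not Logback’s actual code, just a minimal illustration of formatting outside the critical section and locking only the write.

```java
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: layout/encoding happens outside any lock;
// only the write to the shared buffered stream is synchronized.
class NarrowLockAppender {
    private final OutputStream out;
    private final Object lock = new Object();

    NarrowLockAppender(OutputStream target) {
        this.out = new BufferedOutputStream(target, 256 * 1024);
    }

    void append(String message) throws IOException {
        // Encoding work is done before the critical section is entered.
        byte[] bytes = (message + System.lineSeparator()).getBytes(StandardCharsets.UTF_8);
        synchronized (lock) { // lock held only for the write itself
            out.write(bytes);
        }
    }

    void close() throws IOException {
        synchronized (lock) {
            out.flush();
            out.close();
        }
    }
}
```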
>>>>>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>
>>>>>>>>>> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>>>>>>>>>>
>>>>>>>>>> On 6 February 2017 at 18:18, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
>>>>>>>>>>
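A minimal sketch of the FileChannel idea Matt raises, purely illustrative (the helper below is hypothetical, not a proposed log4j API): the event is encoded into a ByteBuffer and handed to the channel, which serializes concurrent writes internally, so no explicit synchronized block is needed around the write call.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of an append path based on FileChannel
// instead of an OutputStream.
class ChannelAppendSketch {
    static void append(FileChannel ch, String msg) throws IOException {
        ByteBuffer buf = ByteBuffer.wrap((msg + "\n").getBytes(StandardCharsets.UTF_8));
        while (buf.hasRemaining()) {
            ch.write(buf); // write() may be partial; loop until the buffer drains
        }
    }
}
```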
>>>>>>>>>> On 6 February 2017 at 17:50, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>
>>>>>>>>>>
>>>>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing <https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing>.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison of the logging frameworks.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>>>>>>>>>>
>>>>>>>>>>>> On 6 February 2017 at 07:54, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>> Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>>>>>
>>>>>>>>>>>> Ralph
>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 5 February 2017 at 16:06, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf <https://github.com/ceki/logback-perf> to compare your results to Ceki’s. You also should capture the CPU benchmark speed of your processor and get the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Gary
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html <https://logback.qos.ch/news.html> and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>. I suspect we have a few optimizations we can make.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>
>>>
>>>
>>>
>>>
>>> --
>>>
>>>
>>> Mikael Ståldal
>>> Senior software developer
>>>
>>> Magine TV
>>> mikael.staldal@magine.com <ma...@magine.com>
>>> Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com <http://www.magine.com/>
>>>
>>> Privileged and/or Confidential Information may be contained in this message. If you are not the addressee indicated in this message
>>> (or responsible for delivery of the message to such a person), you may not copy or deliver this message to anyone. In such case,
>>> you should destroy this message and kindly notify the sender by reply email.
>>>
>>>
>>>
>>>
>>> --
>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>
>>>
>>>
>>> --
>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>
>>
>>
>>
>> --
>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>
>
>
>
> --
> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>
>
>
> --
> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
Re: Logback performance improvements
Posted by Matt Sicker <bo...@gmail.com>.
Also, there are some potential issues with logback-perf:
* JMH is way out of date (1.11.3 versus 1.17.4)
* Fewer warmup iterations than we use
Anyways, results for 32 threads (8 core environment):
Benchmark Mode Cnt Score Error Units
FileAppenderBenchmark.log4j1File thrpt 10 695.774 ± 9.567 ops/ms
FileAppenderBenchmark.log4j2File thrpt 10 1300.091 ± 17.579 ops/ms
FileAppenderBenchmark.log4j2RAF thrpt 10 1365.118 ± 17.656 ops/ms
FileAppenderBenchmark.logbackFile thrpt 10 766.294 ± 10.121 ops/ms
On 9 February 2017 at 14:37, Matt Sicker <bo...@gmail.com> wrote:
> That value does look messed up. I'll re-run the 32 thread tests. Also, I'm
> not on the logback lists yet, so I'll sign up this afternoon.
>
> On 9 February 2017 at 14:35, Apache <ra...@dslextreme.com> wrote:
>
>> What is up with that score for 32 threads? That can’t possibly be
>> correct.
>>
>> Ralph
>>
>> On Feb 9, 2017, at 12:45 PM, Matt Sicker <bo...@gmail.com> wrote:
>>
>> I ran the logback-perf repo on the same AWS instance. Here's the CSV
>> data. It appears as soon as more than one thread comes into play, log4j2
>> has better scores.
>>
>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",1,10,964.600470,279.139021,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",1,10,1274.682156,6.179197,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",1,10,1257.026405,32.898682,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",1,10,1363.683525,41.884725,"ops/ms"
>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",2,10,687.304803,13.266922,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",2,10,1386.596198,207.305249,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",2,10,1579.884762,24.098318,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",2,10,953.138212,99.156775,"ops/ms"
>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",4,10,670.442970,15.049614,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",4,10,1218.543593,18.234077,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",4,10,1309.092881,31.547936,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",4,10,845.168355,24.547390,"ops/ms"
>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",8,10,689.805339,7.415023,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",8,10,1196.396592,19.360776,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",8,10,1319.477318,10.601260,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",8,10,816.608726,25.603234,"ops/ms"
>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",16,10,687.623660,16.114008,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",16,10,1203.649145,8.835115,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",16,10,1266.241778,7.564414,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",16,10,789.507183,9.866592,"ops/ms"
>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",32,10,690.252411,11.587858,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",32,10,1514185.478492,126804.168771,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",32,10,1264.049209,28.309088,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",32,10,754.828687,14.865909,"ops/ms"
>> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",64,10,670.498518,11.147198,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",64,10,1293.301940,22.687086,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",64,10,1380.527892,14.907542,"ops/ms"
>> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",64,10,750.528159,11.356281,"ops/ms"
>>
>> On 9 February 2017 at 13:02, Apache <ra...@dslextreme.com> wrote:
>>
>>> You might try running Ceki’s benchmark project on AWS and publish those
>>> numbers here. He also asked people to publish numbers on the Logback user’s
>>> list so he can add them to his spreadsheet.
>>>
>>> From your numbers and the numbers I’ve been getting, it is clear to me
>>> that the SSDs in Apple’s MacBooks are pretty awesome. From the hardware
>>> Remko is listing I’d say his machine is about as old as my other MacBook
>>> except that he has an SSD that is slightly faster than my hard drive.
>>>
>>> Ralph
>>>
>>> On Feb 9, 2017, at 11:12 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>
>>> Ran on an AWS instance (CentOS 7.2), cpuinfo says 8-core Intel(R)
>>> Xeon(R) CPU E5-2666 v3 @ 2.90GHz, not super sure about all the params
>>> involved in making the instance, but here's some data (same strangeness
>>> with MMF):
>>>
>>> Benchmark                                       Mode  Samples      Score     Error   Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       10     86.867 ±   4.502  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       10    671.156 ±   7.099  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       10   1221.814 ±  22.130  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       10   1178.407 ± 960.141  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       10   1220.746 ±  34.421  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       10    898.122 ±   8.128  ops/ms
>>>
>>> On 9 February 2017 at 12:02, Matt Sicker <bo...@gmail.com> wrote:
>>>
>>>> Run on a MacBook Pro (Retina, 15-inch, Mid 2015) 2.5 GHz Intel Core i7.
>>>> Can find out more hardware specs if needed. I also noticed that the memory
>>>> mapped file starts out fast and slows down over time (somewhat seen by the
>>>> error value here).
>>>>
>>>> Benchmark                                       Mode  Samples      Score     Error   Units
>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       10     96.540 ±   7.875  ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       10    766.286 ±  11.461  ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       10   1787.620 ±  36.695  ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       10   1506.588 ± 956.354  ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       10   1934.966 ±  50.089  ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       10   1285.066 ±  12.674  ops/ms
>>>>
>>>> On 9 February 2017 at 11:44, Remko Popma <re...@gmail.com> wrote:
>>>>
>>>>> My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit), i5-3317u
>>>>> CPU @ 1.70GHz (dual core with hyperthreading for 4 virtual cores), SSD hard
>>>>> disk:
>>>>>
>>>>> java -jar log4j-perf/target/benchmarks.jar
>>>>> ".*FileAppenderBenchmark.*" -f 1 -wi 10 -i 20 -t 4 -tu ms
>>>>>
>>>>> # Run complete. Total time: 00:03:58
>>>>>
>>>>> Benchmark                                       Mode  Samples      Score     Error   Units
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20     37.646 ±   0.876  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20    405.305 ±   6.596  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20    751.949 ±  16.055  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       20   1250.666 ± 168.757  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20    728.743 ±  23.909  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20    676.926 ±  19.518  ops/ms
>>>>>
>>>>> --------------------
>>>>> Logback config without immediateFlush=false:
>>>>>
>>>>> # Run complete. Total time: 00:03:44
>>>>>
>>>>> Benchmark                                       Mode  Samples      Score     Error   Units
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20     37.949 ±   1.220  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20    404.042 ±   8.450  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20    690.393 ± 115.537  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       20   1221.681 ±  82.205  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20    823.059 ±  41.512  ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20     83.352 ±  11.911  ops/ms
>>>>>
>>>>>
>>>>> On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <
>>>>> mikael.staldal@magine.com> wrote:
>>>>>
>>>>>> I guess that with a memory mapped file, you leave it to the OS to do
>>>>>> the best it can, and you lose any direct control over how it is actually
>>>>>> done.
>>>>>>
>>>>>> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ra...@dslextreme.com>
>>>>>> wrote:
>>>>>>
>>>>>>> On my Mac Pro with the slower external SSD I now got:
>>>>>>>
>>>>>>> Benchmark                                       Mode  Samples      Score     Error   Units
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       10     73.739 ±   0.740  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       10    683.129 ±   9.407  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       10    991.293 ± 193.049  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       10   3072.250 ±  63.475  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       10   1056.256 ± 137.673  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       10    784.723 ± 153.226  ops/ms
>>>>>>>
>>>>>>> and on the same machine with the faster internal SSD I got:
>>>>>>>
>>>>>>> Benchmark                                       Mode  Samples      Score     Error   Units
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       10     74.661 ±   0.232  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       10    647.041 ±   2.994  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       10   1333.887 ±  13.921  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       10   3025.726 ± 210.414  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       10   1433.620 ±  11.194  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       10   1026.319 ±  13.347  ops/ms
>>>>>>>
>>>>>>> I will continue to run this on a few other configurations. I think I
>>>>>>> would also like to add the async appenders/loggers to this test so that one
>>>>>>> can see all the various options.
>>>>>>>
>>>>>>> It is really quite interesting to me to see how the memory mapped
>>>>>>> appender behaves so differently from one machine to another. I don’t know
>>>>>>> under what circumstances I would recommend using it though.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>
>>>>>>> On Feb 9, 2017, at 7:03 AM, Apache <ra...@dslextreme.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> After modifying the configuration the new results on my laptop are:
>>>>>>>
>>>>>>> Benchmark                                       Mode  Samples       Score      Error   Units
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       10     92.580 ±    3.698  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       10    828.707 ±   55.006  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       10   1647.230 ±  125.682  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       10   1465.002 ± 1284.943  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       10   1765.340 ±  149.707  ops/ms
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       10   1192.594 ±   57.777  ops/ms
>>>>>>>
>>>>>>> I will try the other machines later and post those results.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>
>>>>>>> On Feb 9, 2017, at 5:22 AM, Apache <ra...@dslextreme.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> Ceki replied on Twitter that the immediateFlush option is now a
>>>>>>> parameter of the appender, not the encoder, so it looks like the config
>>>>>>> needs to be changed and the test rerun.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>> On Feb 9, 2017, at 3:08 AM, Remko Popma <re...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> FYI, The write and flush methods in BufferedOutputStream are also
>>>>>>> synchronized, so we won't be able to do away with synchronization
>>>>>>> completely.
>>>>>>>
>>>>>>> In OutputStreamManager we synchronize multiple methods but these are
>>>>>>> nested calls. I thought reentrant synchronization had negligible overhead
>>>>>>> but I haven't measured this myself.
>>>>>>>
>>>>>>>
>>>>>>> Sent from my iPhone
>>>>>>>
>>>>>>> On Feb 9, 2017, at 2:31, Apache <ra...@dslextreme.com> wrote:
>>>>>>>
>>>>>>> I’m pretty sure the problem we have is that a) we are synchronizing
>>>>>>> many methods and b) we are synchronizing more than just the write.
>>>>>>> Unfortunately, I can’t figure out how to reduce that based on how dispersed
>>>>>>> the code has gotten.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>> On Feb 8, 2017, at 10:14 AM, Apache <ra...@dslextreme.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> I tried to modify FileManager to just use a BufferedOutputStream but
>>>>>>> discovered I couldn’t as the layouts now require the ByteBuffer.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>> On Feb 8, 2017, at 12:14 AM, Apache <ra...@dslextreme.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> The append method isn’t synchronized but the writeBytes method
>>>>>>> acquires a lock. His code is actually a lot simpler than ours in that it
>>>>>>> just uses a BufferedOutputStream and he only obtains the lock when he is
>>>>>>> writing to it.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>
>>>>>>> I'm not sure if I'm looking in the right place, but a major
>>>>>>> difference now between Logback's appenders and Log4j's is that Logback
>>>>>>> isn't synchronized on the append method.
>>>>>>>
>>>>>>> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Is this something we can improve performance on by implementing a
>>>>>>>> file appender based on FileChannel or AsynchronousFileChannel instead of
>>>>>>>> OutputStream?
>>>>>>>>
>>>>>>>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Ceki has updated his numbers to include those reported on the
>>>>>>>>> mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>>>>>>
>>>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers
>>>>>>>>> for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Yes, that is still the standard approach most people use and is
>>>>>>>>> the only way to provide a head-to-head comparison of the logging
>>>>>>>>> frameworks.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>> This is all in a synchronous appender, right? Either way, that's
>>>>>>>>> rather interesting.
>>>>>>>>>
>>>>>>>>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Someone posted numbers on the Logback user’s list that match
>>>>>>>>>> mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat
>>>>>>>>>> better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>> I think we need some comparisons on the log4j side: file appender
>>>>>>>>>> with 256k buffer size, random access file appender with 256k buffer size
>>>>>>>>>> (which appears to be the default), and memory mapped file appender. It'd be
>>>>>>>>>> cool to see how these compose with async logging enabled in both log4j and
>>>>>>>>>> logback.
>>>>>>>>>>
>>>>>>>>>> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf to
>>>>>>>>>>> compare your results to Ceki’s. You also should capture the CPU benchmark
>>>>>>>>>>> speed of your processor and get the speed of your hard drive. I used
>>>>>>>>>>> Blackmagic speed test on my Mac. I am capturing my results in a Google
>>>>>>>>>>> spreadsheet. I will post the link once I have it.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> If you want, I can run tests on Windows once the build works on
>>>>>>>>>>> Windows again.
>>>>>>>>>>>
>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>
>>>>>>>>>>> Gary
>>>>>>>>>>>
>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard Drive
>>>>>>>>>>>> category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>>
>>>>>>>>>>>> Benchmark                                       Mode  Samples        Score       Error  Units
>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20    98187.673 ±  4935.712  ops/s
>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20   842374.496 ±  6762.712  ops/s
>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20  1853062.583 ± 67032.225  ops/s
>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20  2036011.226 ± 53208.281  ops/s
>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20   999667.438 ± 12074.003  ops/s
>>>>>>>>>>>>
>>>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run
>>>>>>>>>>>> anything directly on bare metal any more.
>>>>>>>>>>>>
>>>>>>>>>>>> Ralph
>>>>>>>>>>>>
>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the
>>>>>>>>>>>> FileAppender. See https://logback.qos.ch/news.html and
>>>>>>>>>>>> https://docs.google.com/spreadsheets/d/1cpb5D7qny
>>>>>>>>>>>> ye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we
>>>>>>>>>>>> have a few optimizations we can make.
>>>>>>>>>>>>
>>>>>>>>>>>> Ralph
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> [image: MagineTV]
>>>>>>
>>>>>> *Mikael Ståldal*
>>>>>> Senior software developer
>>>>>>
>>>>>> *Magine TV*
>>>>>> mikael.staldal@magine.com
>>>>>> Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com
>>>>>>
>>>>>> Privileged and/or Confidential Information may be contained in this
>>>>>> message. If you are not the addressee indicated in this message
>>>>>> (or responsible for delivery of the message to such a person), you
>>>>>> may not copy or deliver this message to anyone. In such case,
>>>>>> you should destroy this message and kindly notify the sender by reply
>>>>>> email.
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Matt Sicker <bo...@gmail.com>
>>>>
>>>
>>>
>>>
>>> --
>>> Matt Sicker <bo...@gmail.com>
>>>
>>>
>>>
>>
>>
>> --
>> Matt Sicker <bo...@gmail.com>
>>
>>
>>
>
>
> --
> Matt Sicker <bo...@gmail.com>
>
--
Matt Sicker <bo...@gmail.com>
Re: Logback performance improvements
Posted by Matt Sicker <bo...@gmail.com>.
That value does look messed up. I'll re-run the 32 thread tests. Also, I'm
not on the logback lists yet, so I'll sign up this afternoon.
On 9 February 2017 at 14:35, Apache <ra...@dslextreme.com> wrote:
> What is up with that score for 32 threads? That can’t possibly be correct.
>
> Ralph
>
> On Feb 9, 2017, at 12:45 PM, Matt Sicker <bo...@gmail.com> wrote:
>
> I ran the logback-perf repo on the same AWS instance. Here's the CSV data.
> It appears as soon as more than one thread comes into play, log4j2 has
> better scores.
>
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",1,10,964.600470,279.139021,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",1,10,1274.682156,6.179197,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",1,10,1257.026405,32.898682,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",1,10,1363.683525,41.884725,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",2,10,687.304803,13.266922,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",2,10,1386.596198,207.305249,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",2,10,1579.884762,24.098318,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",2,10,953.138212,99.156775,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",4,10,670.442970,15.049614,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",4,10,1218.543593,18.234077,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",4,10,1309.092881,31.547936,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",4,10,845.168355,24.547390,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",8,10,689.805339,7.415023,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",8,10,1196.396592,19.360776,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",8,10,1319.477318,10.601260,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",8,10,816.608726,25.603234,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",16,10,687.623660,16.114008,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",16,10,1203.649145,8.835115,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",16,10,1266.241778,7.564414,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",16,10,789.507183,9.866592,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",32,10,690.252411,11.587858,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",32,10,1514185.478492,126804.168771,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",32,10,1264.049209,28.309088,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",32,10,754.828687,14.865909,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",64,10,670.498518,11.147198,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",64,10,1293.301940,22.687086,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",64,10,1380.527892,14.907542,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",64,10,750.528159,11.356281,"ops/ms"
>
> On 9 February 2017 at 13:02, Apache <ra...@dslextreme.com> wrote:
>
>> You might try running Ceki’s benchmark project on AWS and publish those
>> numbers here. He also asked people to publish numbers on the Logback user’s
>> list so he can add them to his spreadsheet.
>>
>> From your numbers and the numbers I’ve been getting, it is clear to me
>> that the SSDs in Apple’s MacBooks are pretty awesome. From the hardware
>> Remko is listing I’d say his machine is about as old as my other MacBook
>> except that he has an SSD that is slightly faster than my hard drive.
>>
>> Ralph
>>
>> On Feb 9, 2017, at 11:12 AM, Matt Sicker <bo...@gmail.com> wrote:
>>
>> Ran on an AWS instance (CentOS 7.2), cpuinfo says 8-core Intel(R) Xeon(R)
>> CPU E5-2666 v3 @ 2.90GHz, not super sure about all the params involved in
>> making the instance, but here's some data (same strangeness with MMF):
>>
>> Benchmark                                       Mode  Samples     Score     Error  Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       10    86.867 ±   4.502  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       10   671.156 ±   7.099  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       10  1221.814 ±  22.130  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       10  1178.407 ± 960.141  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       10  1220.746 ±  34.421  ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       10   898.122 ±   8.128  ops/ms
>>
>> On 9 February 2017 at 12:02, Matt Sicker <bo...@gmail.com> wrote:
>>
>>> Run on a MacBook Pro (Retina, 15-inch, Mid 2015) 2.5 GHz Intel Core i7.
>>> Can find out more hardware specs if needed. I also noticed that the memory
>>> mapped file starts out fast and slows down over time (somewhat seen by the
>>> error value here).
>>>
>>> Benchmark                                       Mode  Samples     Score     Error  Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       10    96.540 ±   7.875  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       10   766.286 ±  11.461  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       10  1787.620 ±  36.695  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       10  1506.588 ± 956.354  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       10  1934.966 ±  50.089  ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       10  1285.066 ±  12.674  ops/ms
>>>
>>> On 9 February 2017 at 11:44, Remko Popma <re...@gmail.com> wrote:
>>>
>>>> My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit), i5-3317u
>>>> CPU @ 1.70GHz (dual core with hyperthreading for 4 virtual cores), SSD hard
>>>> disk:
>>>>
>>>> java -jar log4j-perf/target/benchmarks.jar ".*FileAppenderBenchmark.*"
>>>> -f 1 -wi 10 -i 20 -t 4 -tu ms
>>>>
>>>> # Run complete. Total time: 00:03:58
>>>>
>>>> Benchmark                                       Mode  Samples     Score     Error  Units
>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20    37.646 ±   0.876  ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20   405.305 ±   6.596  ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20   751.949 ±  16.055  ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       20  1250.666 ± 168.757  ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20   728.743 ±  23.909  ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20   676.926 ±  19.518  ops/ms
>>>>
>>>> --------------------
>>>> Logback config without immediateFlush=false:
>>>>
>>>> # Run complete. Total time: 00:03:44
>>>>
>>>> Benchmark                                       Mode  Samples     Score     Error  Units
>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20    37.949 ±   1.220  ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20   404.042 ±   8.450  ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20   690.393 ± 115.537  ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       20  1221.681 ±  82.205  ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20   823.059 ±  41.512  ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20    83.352 ±  11.911  ops/ms
>>>>
>>>>
>>>> On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <
>>>> mikael.staldal@magine.com> wrote:
>>>>
>>>>> I guess that with a memory mapped file, you leave it to the OS to do
>>>>> the best it can, and you lose any direct control over how it is actually
>>>>> done.
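That is essentially the trade-off: with java.nio's MappedByteBuffer, writes land in the page cache and the OS schedules the write-back on its own; force() is about the only explicit control left. A minimal, self-contained sketch of that behavior (illustrative only, not Log4j's actual MemoryMappedFileManager):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapSketch {
    /** Maps a temp file, appends one record, forces it out, and reads it back. */
    static String writeAndRead() throws IOException {
        Path log = Files.createTempFile("mmap", ".log");
        byte[] record = "hello mmap".getBytes(StandardCharsets.US_ASCII);
        try (FileChannel ch = FileChannel.open(log,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Mapping extends the file to the requested size; puts go to the
            // page cache, and the OS decides when dirty pages reach disk.
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, record.length);
            map.put(record);
            map.force(); // the one explicit flush control we retain
        }
        String back = new String(Files.readAllBytes(log), StandardCharsets.US_ASCII);
        Files.delete(log);
        return back;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(writeAndRead());
    }
}
```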
>>>>>
>>>>> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ra...@dslextreme.com>
>>>>> wrote:
>>>>>
>>>>>> On my Mac Pro with the slower external SSD I now got:
>>>>>>
>>>>>> Benchmark                                       Mode  Samples     Score     Error  Units
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       10    73.739 ±   0.740  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       10   683.129 ±   9.407  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       10   991.293 ± 193.049  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       10  3072.250 ±  63.475  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       10  1056.256 ± 137.673  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       10   784.723 ± 153.226  ops/ms
>>>>>>
>>>>>> and on the same machine with the faster internal SSD I got:
>>>>>>
>>>>>> Benchmark                                       Mode  Samples     Score     Error  Units
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       10    74.661 ±   0.232  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       10   647.041 ±   2.994  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       10  1333.887 ±  13.921  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       10  3025.726 ± 210.414  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       10  1433.620 ±  11.194  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       10  1026.319 ±  13.347  ops/ms
>>>>>>
>>>>>> I will continue to run this on a few other configurations. I think I
>>>>>> would also like to add the async appenders/loggers to this test so that one
>>>>>> can see all the various options.
>>>>>>
>>>>>> It is really quite interesting to me to see how the memory mapped
>>>>>> appender behaves so differently from one machine to another. I don’t know
>>>>>> under what circumstances I would recommend using it though.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>
>>>>>> On Feb 9, 2017, at 7:03 AM, Apache <ra...@dslextreme.com>
>>>>>> wrote:
>>>>>>
>>>>>> After modifying the configuration the new results on my laptop are:
>>>>>>
>>>>>> Benchmark                                       Mode  Samples     Score      Error  Units
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       10    92.580 ±    3.698  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       10   828.707 ±   55.006  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       10  1647.230 ±  125.682  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       10  1465.002 ± 1284.943  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       10  1765.340 ±  149.707  ops/ms
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       10  1192.594 ±   57.777  ops/ms
>>>>>>
>>>>>> I will try the other machines later and post those results.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>
>>>>>> On Feb 9, 2017, at 5:22 AM, Apache <ra...@dslextreme.com>
>>>>>> wrote:
>>>>>>
>>>>>> Ceki replied on Twitter that the immediateFlush option is now a
>>>>>> parameter of the appender, not the encoder, so it looks like the config
>>>>>> needs to be changed and the test rerun.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>> On Feb 9, 2017, at 3:08 AM, Remko Popma <re...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>> FYI, The write and flush methods in BufferedOutputStream are also
>>>>>> synchronized, so we won't be able to do away with synchronization
>>>>>> completely.
>>>>>>
>>>>>> In OutputStreamManager we synchronize multiple methods but these are
>>>>>> nested calls. I thought reentrant synchronization had negligible overhead
>>>>>> but I haven't measured this myself.
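A toy illustration of the nested-call pattern Remko describes: the outer method takes the monitor, and the inner synchronized methods re-acquire it reentrantly, which is the cheap uncontended path on modern JVMs. Class and method names are illustrative, not Log4j's actual OutputStreamManager:

```java
public class ReentrantSketch {
    private long count;

    // The outer call acquires the monitor once...
    synchronized void append(byte[] event) {
        write(event);
        flush();
    }

    // ...and these nested synchronized methods re-acquire it reentrantly
    // rather than blocking, so the overhead is the reentrancy check itself.
    synchronized void write(byte[] event) { count += event.length; }
    synchronized void flush() { /* no-op for the sketch */ }

    long bytesWritten() { return count; }

    public static void main(String[] args) {
        ReentrantSketch s = new ReentrantSketch();
        s.append(new byte[16]);
        s.append(new byte[16]);
        System.out.println(s.bytesWritten()); // 32
    }
}
```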
>>>>>>
>>>>>>
>>>>>> Sent from my iPhone
>>>>>>
>>>>>> On Feb 9, 2017, at 2:31, Apache <ra...@dslextreme.com> wrote:
>>>>>>
>>>>>> I’m pretty sure the problem we have is that a) we are synchronizing
>>>>>> many methods and b) we are synchronizing more than just the write.
>>>>>> Unfortunately, I can’t figure out how to reduce that based on how dispersed
>>>>>> the code has gotten.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>> On Feb 8, 2017, at 10:14 AM, Apache <ra...@dslextreme.com>
>>>>>> wrote:
>>>>>>
>>>>>> I tried to modify FileManager to just use a BufferedOutputStream but
>>>>>> discovered I couldn’t as the layouts now require the ByteBuffer.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>> On Feb 8, 2017, at 12:14 AM, Apache <ra...@dslextreme.com>
>>>>>> wrote:
>>>>>>
>>>>>> The append method isn’t synchronized but the writeBytes method
>>>>>> acquires a lock. His code is actually a lot simpler than ours in that it
>>>>>> just uses a BufferedOutputStream and he only obtains the lock when he is
>>>>>> writing to it.
>>>>>>
>>>>>> Ralph
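The shape Ralph describes can be sketched roughly like this: layout/encoding happens outside the lock, and the lock is held only for the buffered write. This is an illustrative sketch under those assumptions, not Logback's actual appender code, and the class name and buffer size are invented:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.locks.ReentrantLock;

public class NarrowLockAppender {
    private final ReentrantLock lock = new ReentrantLock();
    private final OutputStream out;

    NarrowLockAppender(OutputStream target) {
        this.out = new BufferedOutputStream(target, 8192);
    }

    void append(String event) throws IOException {
        // Encoding runs outside the critical section...
        byte[] bytes = (event + "\n").getBytes(StandardCharsets.UTF_8);
        // ...and the lock guards only the actual buffered write.
        lock.lock();
        try {
            out.write(bytes);
        } finally {
            lock.unlock();
        }
    }

    void close() throws IOException { out.flush(); }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        NarrowLockAppender app = new NarrowLockAppender(sink);
        app.append("event one");
        app.append("event two");
        app.close();
        System.out.print(sink.toString("UTF-8"));
    }
}
```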
>>>>>>
>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>
>>>>>> I'm not sure if I'm looking in the right place, but a major
>>>>>> difference now between Logback's appenders and Log4j's is that Logback
>>>>>> isn't synchronized on the append method.
>>>>>>
>>>>>> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>
>>>>>>> Is this something we can improve performance on by implementing a
>>>>>>> file appender based on FileChannel or AsynchronousFileChannel instead of
>>>>>>> OutputStream?
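For what it's worth, a bare-bones FileChannel-based append might look like the sketch below. It is illustrative only; whether this actually beats a locked BufferedOutputStream is exactly the open question, and the class name is invented:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChannelAppendSketch {
    /** Appends one line to a temp file via FileChannel and reads it back. */
    static String appendAndRead(String event) throws IOException {
        Path log = Files.createTempFile("channel", ".log");
        try (FileChannel ch = FileChannel.open(log,
                StandardOpenOption.WRITE, StandardOpenOption.APPEND)) {
            ByteBuffer buf = ByteBuffer.wrap((event + "\n").getBytes(StandardCharsets.UTF_8));
            // APPEND mode positions each write at the current end of file,
            // which avoids explicit position bookkeeping in the appender.
            while (buf.hasRemaining()) {
                ch.write(buf);
            }
        }
        String back = new String(Files.readAllBytes(log), StandardCharsets.UTF_8);
        Files.delete(log);
        return back;
    }

    public static void main(String[] args) throws IOException {
        System.out.print(appendAndRead("via channel"));
    }
}
```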
>>>>>>>
>>>>>>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Ceki has updated his numbers to include those reported on the
>>>>>>>> mailing list. https://docs.google.com/
>>>>>>>> spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/
>>>>>>>> edit#gid=0
>>>>>>>>
>>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers
>>>>>>>> for my two MacBooks are at https://docs.google.com/spread
>>>>>>>> sheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?u
>>>>>>>> sp=sharing.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Yes, that is still the standard approach most people use and is the
>>>>>>>> only way to provide a head-to-head comparison against the logging
>>>>>>>> frameworks.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>
>>>>>>>> This is all in a synchronous appender, right? Either way, that's
>>>>>>>> rather interesting.
>>>>>>>>
>>>>>>>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Someone posted numbers on the Logback user’s list that match
>>>>>>>>> mine. They show that Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat
>>>>>>>>> better, and 1.2-SNAPSHOT is on par with or slightly better than Log4j 2.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>> I think we need some comparisons on the log4j side: file appender
>>>>>>>>> with 256k buffer size, random access file appender with 256k buffer size
>>>>>>>>> (which appears to be the default), and memory mapped file appender. It'd be
>>>>>>>>> cool to see how these compose with async logging enabled in both log4j and
>>>>>>>>> logback.
>>>>>>>>>
>>>>>>>>> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf to
>>>>>>>>>> compare your results to Ceki’s. You should also capture the CPU benchmark
>>>>>>>>>> speed of your processor and the speed of your hard drive. I used the
>>>>>>>>>> Blackmagic speed test on my Mac. I am capturing my results in a Google
>>>>>>>>>> spreadsheet. I will post the link once I have it.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> If you want, I can run tests on Windows once the build works on
>>>>>>>>>> Windows again.
>>>>>>>>>>
>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>
>>>>>>>>>> Gary
>>>>>>>>>>
>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard Drive
>>>>>>>>>>> category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>
>>>>>>>>>>> Benchmark                                       Mode  Samples        Score       Error  Units
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20    98187.673 ±  4935.712  ops/s
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20   842374.496 ±  6762.712  ops/s
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20  1853062.583 ± 67032.225  ops/s
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20  2036011.226 ± 53208.281  ops/s
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20   999667.438 ± 12074.003  ops/s
>>>>>>>>>>>
>>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run
>>>>>>>>>>> anything directly on bare metal any more.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Ceki finally fixed some of the performance problems in the
>>>>>>>>>>> FileAppender. See https://logback.qos.ch/news.html and
>>>>>>>>>>> https://docs.google.com/spreadsheets/d/1cpb5D7qny
>>>>>>>>>>> ye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we
>>>>>>>>>>> have a few optimizations we can make.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Matt Sicker <bo...@gmail.com>
>>>
>>
>>
>>
>> --
>> Matt Sicker <bo...@gmail.com>
>>
>>
>>
>
>
> --
> Matt Sicker <bo...@gmail.com>
>
>
>
--
Matt Sicker <bo...@gmail.com>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
What is up with that score for 32 threads? That can’t possibly be correct.
Ralph
> On Feb 9, 2017, at 12:45 PM, Matt Sicker <bo...@gmail.com> wrote:
>
> I ran the logback-perf repo on the same AWS instance. Here's the CSV data. It appears as soon as more than one thread comes into play, log4j2 has better scores.
>
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",1,10,964.600470,279.139021,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",1,10,1274.682156,6.179197,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",1,10,1257.026405,32.898682,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",1,10,1363.683525,41.884725,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",2,10,687.304803,13.266922,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",2,10,1386.596198,207.305249,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",2,10,1579.884762,24.098318,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",2,10,953.138212,99.156775,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",4,10,670.442970,15.049614,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",4,10,1218.543593,18.234077,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",4,10,1309.092881,31.547936,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",4,10,845.168355,24.547390,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",8,10,689.805339,7.415023,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",8,10,1196.396592,19.360776,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",8,10,1319.477318,10.601260,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",8,10,816.608726,25.603234,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",16,10,687.623660,16.114008,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",16,10,1203.649145,8.835115,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",16,10,1266.241778,7.564414,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",16,10,789.507183,9.866592,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",32,10,690.252411,11.587858,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",32,10,1514185.478492,126804.168771,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",32,10,1264.049209,28.309088,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",32,10,754.828687,14.865909,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",64,10,670.498518,11.147198,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",64,10,1293.301940,22.687086,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",64,10,1380.527892,14.907542,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",64,10,750.528159,11.356281,"ops/ms"
>
> On 9 February 2017 at 13:02, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
> You might try running Ceki’s benchmark project on AWS and publish those numbers here. He also asked people to publish numbers on the Logback user’s list so he can add them to his spreadsheet.
>
> From your numbers and the numbers I’ve been getting, it is clear to me that the SSDs in Apple’s MacBooks are pretty awesome. From the hardware Remko is listing I’d say his machine is about as old as my other MacBook except that he has an SSD that is slightly faster than my hard drive.
>
> Ralph
>
>> On Feb 9, 2017, at 11:12 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>
>> Ran on an AWS instance (CentOS 7.2), cpuinfo says 8-core Intel(R) Xeon(R) CPU E5-2666 v3 @ 2.90GHz, not super sure about all the params involved in making the instance, but here's some data (same strangeness with MMF):
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 86.867 ± 4.502 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 671.156 ± 7.099 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1221.814 ± 22.130 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1178.407 ± 960.141 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1220.746 ± 34.421 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 898.122 ± 8.128 ops/ms
>>
>> On 9 February 2017 at 12:02, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>> Run on a MacBook Pro (Retina, 15-inch, Mid 2015) 2.5 GHz Intel Core i7. Can find out more hardware specs if needed. I also noticed that the memory mapped file starts out fast and slows down over time (somewhat seen by the error value here).
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 96.540 ± 7.875 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 766.286 ± 11.461 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1787.620 ± 36.695 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1506.588 ± 956.354 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1934.966 ± 50.089 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1285.066 ± 12.674 ops/ms
>>
>> On 9 February 2017 at 11:44, Remko Popma <remko.popma@gmail.com <ma...@gmail.com>> wrote:
>> My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit), i5-3317u CPU @ 1.70GHz (dual core with hyperthreading for 4 virtual cores), SSD hard disk:
>>
>> java -jar log4j-perf/target/benchmarks.jar ".*FileAppenderBenchmark.*" -f 1 -wi 10 -i 20 -t 4 -tu ms
>>
>> # Run complete. Total time: 00:03:58
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.646 ± 0.876 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 405.305 ± 6.596 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 751.949 ± 16.055 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1250.666 ± 168.757 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 728.743 ± 23.909 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 676.926 ± 19.518 ops/ms
>>
>> --------------------
>> Logback config without immediateFlush=false:
>>
>> # Run complete. Total time: 00:03:44
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.949 ± 1.220 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 404.042 ± 8.450 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 690.393 ± 115.537 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1221.681 ± 82.205 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 823.059 ± 41.512 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 83.352 ± 11.911 ops/ms
>>
>>
>> On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <mikael.staldal@magine.com <ma...@magine.com>> wrote:
>> I guess that with a memory mapped file, you leave it to the OS to do the best it can, and you lose any direct control over how it is actually done.
>>
>> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>> On my Mac Pro with the slower external SSD I now got:
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 73.739 ± 0.740 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 683.129 ± 9.407 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 991.293 ± 193.049 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3072.250 ± 63.475 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1056.256 ± 137.673 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 784.723 ± 153.226 ops/ms
>>
>> and on the same machine with the faster internal SSD I got:
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 74.661 ± 0.232 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 647.041 ± 2.994 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1333.887 ± 13.921 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3025.726 ± 210.414 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1433.620 ± 11.194 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1026.319 ± 13.347 ops/ms
>>
>> I will continue to run this on a few other configurations. I think I would also like to add the async appenders/loggers to this test so that one can see all the various options.
>>
>> It is really quite interesting to me to see how the memory mapped appender behaves so differently from one machine to another. I don’t know under what circumstances I would recommend using it though.
>>
>> Ralph
>>
>>
>>> On Feb 9, 2017, at 7:03 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>
>>> After modifying the configuration the new results on my laptop are:
>>>
>>> Benchmark Mode Samples Score Error Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 92.580 ± 3.698 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 828.707 ± 55.006 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1647.230 ± 125.682 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1465.002 ± 1284.943 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1765.340 ± 149.707 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1192.594 ± 57.777 ops/ms
>>>
>>> I will try the other machines later and post those results.
>>>
>>> Ralph
>>>
>>>
>>>> On Feb 9, 2017, at 5:22 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>
>>>> Ceki replied on twitter that the immediateFlush option is now a parameter of the appender, not the encoder, so it looks like the config needs to be changed and the test rerun.
>>>>
>>>> Ralph
>>>>
>>>> On Feb 9, 2017, at 3:08 AM, Remko Popma <remko.popma@gmail.com <ma...@gmail.com>> wrote:
>>>>
>>>>> FYI, The write and flush methods in BufferedOutputStream are also synchronized, so we won't be able to do away with synchronization completely.
>>>>>
>>>>> In OutputStreamManager we synchronize multiple methods but these are nested calls. I thought reentrant synchronization had negligible overhead but I haven't measured this myself.
>>>>>
>>>>>
>>>>> Sent from my iPhone
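[Editor's note: a standalone sketch (names are illustrative, not OutputStreamManager's actual code) of the nested-call pattern described above — a synchronized method calling another synchronized method on the same object re-enters the monitor it already holds, so the inner call never blocks.]

```java
public class ReentrantSyncSketch {
    private int writes;

    public synchronized void writeAndFlush(byte[] data) {
        write(data); // nested call: re-enters the lock this thread already holds
        flush();
    }

    private synchronized void write(byte[] data) {
        writes++;
    }

    private synchronized void flush() {
        // no-op; present only to mirror the write/flush call shape
    }

    public int writeCount() {
        return writes;
    }

    public static void main(String[] args) {
        ReentrantSyncSketch mgr = new ReentrantSyncSketch();
        mgr.writeAndFlush(new byte[] {1, 2, 3});
        System.out.println(mgr.writeCount()); // prints 1; no deadlock on re-entry
    }
}
```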
>>>>>
>>>>> On Feb 9, 2017, at 2:31, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>
>>>>>> I’m pretty sure the problem we have is that a) we are synchronizing many methods and b) we are synchronizing more than just the write. Unfortunately, I can’t figure out how to reduce that based on how dispersed the code has gotten.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>> On Feb 8, 2017, at 10:14 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>
>>>>>>> I tried to modify FileManager to just use a BufferedOutputStream but discovered I couldn’t as the layouts now require the ByteBuffer.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>> On Feb 8, 2017, at 12:14 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>
>>>>>>>> The append method isn’t synchronized but the writeBytes method acquires a lock. His code is actually a lot simpler than ours in that it just uses a BufferedOutputStream and he only obtains the lock when he is writing to it.
>>>>>>>>
>>>>>>>> Ralph
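[Editor's note: the pattern Ralph describes — a plain BufferedOutputStream with the lock held only around the write — can be sketched as below. This is a hypothetical illustration, not Logback's actual code: encoding happens outside the critical section, so contended threads only serialize on the buffered write itself.]

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class NarrowLockAppender {
    private final BufferedOutputStream out;
    private final Object lock = new Object();

    public NarrowLockAppender(OutputStream target) {
        this.out = new BufferedOutputStream(target, 256 * 1024);
    }

    public void append(String event) throws IOException {
        // Format and encode outside the lock...
        byte[] bytes = (event + System.lineSeparator()).getBytes(StandardCharsets.UTF_8);
        // ...and synchronize only around the actual buffered write.
        synchronized (lock) {
            out.write(bytes, 0, bytes.length);
        }
    }

    public void flush() throws IOException {
        synchronized (lock) {
            out.flush();
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        NarrowLockAppender appender = new NarrowLockAppender(sink);
        appender.append("hello");
        appender.flush();
        System.out.println(sink.toString(StandardCharsets.UTF_8).trim()); // prints "hello"
    }
}
```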
>>>>>>>>
>>>>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>
>>>>>>>>> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>>>>>>>>>
>>>>>>>>> On 6 February 2017 at 18:18, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
>>>>>>>>>
>>>>>>>>> On 6 February 2017 at 17:50, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>
>>>>>>>>>
>>>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing <https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing>.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>
>>>>>>>>>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison against the logging frameworks.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>>>>>>>>>
>>>>>>>>>>> On 6 February 2017 at 07:54, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>> Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>>>>>>>>>>
>>>>>>>>>>>> On 5 February 2017 at 16:06, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf <https://github.com/ceki/logback-perf> to compare your results to Ceki’s. You also should capture the CPU benchmark speed of your processor and the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>>>>>>
>>>>>>>>>>>> Ralph
>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>>>
>>>>>>>>>>>>> Gary
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>>>
>>>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>>>>>>>
>>>>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html <https://logback.qos.ch/news.html> and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>. I suspect we have a few optimizations we can make.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>
>>
>>
>>
>>
>> --
>>
>>
>> Mikael Ståldal
>> Senior software developer
>>
>> Magine TV
>> mikael.staldal@magine.com <ma...@magine.com>
>> Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com <http://www.magine.com/>
>>
>> Privileged and/or Confidential Information may be contained in this message. If you are not the addressee indicated in this message
>> (or responsible for delivery of the message to such a person), you may not copy or deliver this message to anyone. In such case,
>> you should destroy this message and kindly notify the sender by reply email.
>>
>>
>>
>>
>> --
>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>
>>
>>
>> --
>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>
>
>
>
> --
> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
At least Logback is now consistently performing better than Log4j 1.
Ralph
> On Feb 9, 2017, at 12:45 PM, Matt Sicker <bo...@gmail.com> wrote:
>
> I ran the logback-perf repo on the same AWS instance. Here's the CSV data. It appears as soon as more than one thread comes into play, log4j2 has better scores.
>
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",1,10,964.600470,279.139021,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",1,10,1274.682156,6.179197,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",1,10,1257.026405,32.898682,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",1,10,1363.683525,41.884725,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",2,10,687.304803,13.266922,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",2,10,1386.596198,207.305249,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",2,10,1579.884762,24.098318,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",2,10,953.138212,99.156775,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",4,10,670.442970,15.049614,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",4,10,1218.543593,18.234077,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",4,10,1309.092881,31.547936,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",4,10,845.168355,24.547390,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",8,10,689.805339,7.415023,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",8,10,1196.396592,19.360776,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",8,10,1319.477318,10.601260,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",8,10,816.608726,25.603234,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",16,10,687.623660,16.114008,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",16,10,1203.649145,8.835115,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",16,10,1266.241778,7.564414,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",16,10,789.507183,9.866592,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",32,10,690.252411,11.587858,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",32,10,1514185.478492,126804.168771,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",32,10,1264.049209,28.309088,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",32,10,754.828687,14.865909,"ops/ms"
> "Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",64,10,670.498518,11.147198,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",64,10,1293.301940,22.687086,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",64,10,1380.527892,14.907542,"ops/ms"
> "ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",64,10,750.528159,11.356281,"ops/ms"
>
> On 9 February 2017 at 13:02, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
> You might try running Ceki’s benchmark project on AWS and publish those numbers here. He also asked people to publish numbers on the Logback user’s list so he can add them to his spreadsheet.
>
> From your numbers and the numbers I’ve been getting, it is clear to me that the SSDs in Apple’s MacBooks are pretty awesome. From the hardware Remko is listing I’d say his machine is about as old as my other MacBook except that he has an SSD that is slightly faster than my hard drive.
>
> Ralph
>
>> On Feb 9, 2017, at 11:12 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>
>> Ran on an AWS instance (CentOS 7.2), cpuinfo says 8-core Intel(R) Xeon(R) CPU E5-2666 v3 @ 2.90GHz, not super sure about all the params involved in making the instance, but here's some data (same strangeness with MMF):
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 86.867 ± 4.502 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 671.156 ± 7.099 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1221.814 ± 22.130 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1178.407 ± 960.141 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1220.746 ± 34.421 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 898.122 ± 8.128 ops/ms
>>
>> On 9 February 2017 at 12:02, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>> Run on a MacBook Pro (Retina, 15-inch, Mid 2015) 2.5 GHz Intel Core i7. Can find out more hardware specs if needed. I also noticed that the memory mapped file starts out fast and slows down over time (somewhat seen by the error value here).
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 96.540 ± 7.875 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 766.286 ± 11.461 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1787.620 ± 36.695 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1506.588 ± 956.354 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1934.966 ± 50.089 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1285.066 ± 12.674 ops/ms
>>
>> On 9 February 2017 at 11:44, Remko Popma <remko.popma@gmail.com <ma...@gmail.com>> wrote:
>> My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit), i5-3317u CPU @ 1.70GHz (dual core with hyperthreading for 4 virtual cores), SSD hard disk:
>>
>> java -jar log4j-perf/target/benchmarks.jar ".*FileAppenderBenchmark.*" -f 1 -wi 10 -i 20 -t 4 -tu ms
>>
>> # Run complete. Total time: 00:03:58
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.646 ± 0.876 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 405.305 ± 6.596 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 751.949 ± 16.055 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1250.666 ± 168.757 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 728.743 ± 23.909 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 676.926 ± 19.518 ops/ms
>>
>> --------------------
>> Logback config without immediateFlush=false:
>>
>> # Run complete. Total time: 00:03:44
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.949 ± 1.220 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 404.042 ± 8.450 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 690.393 ± 115.537 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1221.681 ± 82.205 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 823.059 ± 41.512 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 83.352 ± 11.911 ops/ms
>>
>>
>> On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <mikael.staldal@magine.com <ma...@magine.com>> wrote:
>> I guess that with a memory mapped file, you leave it to the OS to do the best it can, and you lose any direct control over how it is actually done.
>>
>> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>> On my Mac Pro with the slower external SSD I now got:
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 73.739 ± 0.740 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 683.129 ± 9.407 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 991.293 ± 193.049 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3072.250 ± 63.475 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1056.256 ± 137.673 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 784.723 ± 153.226 ops/ms
>>
>> and on the same machine with the faster internal SSD I got:
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 74.661 ± 0.232 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 647.041 ± 2.994 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1333.887 ± 13.921 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3025.726 ± 210.414 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1433.620 ± 11.194 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1026.319 ± 13.347 ops/ms
>>
>> I will continue to run this on a few other configurations. I think I would also like to add the async appenders/loggers to this test so that one can see all the various options.
>>
>> It is really quite interesting to me to see how the memory mapped appender behaves so differently from one machine to another. I don’t know under what circumstances I would recommend using it though.
>>
>> Ralph
>>
>>
>>> On Feb 9, 2017, at 7:03 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>
>>> After modifying the configuration the new results on my laptop are:
>>>
>>> Benchmark Mode Samples Score Error Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 92.580 ± 3.698 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 828.707 ± 55.006 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1647.230 ± 125.682 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1465.002 ± 1284.943 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1765.340 ± 149.707 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1192.594 ± 57.777 ops/ms
>>>
>>> I will try the other machines later and post those results.
>>>
>>> Ralph
>>>
>>>
>>>> On Feb 9, 2017, at 5:22 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>
>>>> Ceki replied on twitter that the immediateFlush option is now a parameter of the appender, not the encoder, so it looks like the config needs to be changed and the test rerun.
>>>>
>>>> Ralph
>>>>
>>>> On Feb 9, 2017, at 3:08 AM, Remko Popma <remko.popma@gmail.com <ma...@gmail.com>> wrote:
>>>>
>>>>> FYI, The write and flush methods in BufferedOutputStream are also synchronized, so we won't be able to do away with synchronization completely.
>>>>>
>>>>> In OutputStreamManager we synchronize multiple methods but these are nested calls. I thought reentrant synchronization had negligible overhead but I haven't measured this myself.
>>>>>
>>>>>
>>>>> Sent from my iPhone
>>>>>
>>>>> On Feb 9, 2017, at 2:31, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>
>>>>>> I’m pretty sure the problem we have is that a) we are synchronizing many methods and b) we are synchronizing more than just the write. Unfortunately, I can’t figure out how to reduce that based on how dispersed the code has gotten.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>> On Feb 8, 2017, at 10:14 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>
>>>>>>> I tried to modify FileManager to just use a BufferedOutputStream but discovered I couldn’t as the layouts now require the ByteBuffer.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>> On Feb 8, 2017, at 12:14 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>
>>>>>>>> The append method isn’t synchronized but the writeBytes method acquires a lock. His code is actually a lot simpler than ours in that it just uses a BufferedOutputStream and he only obtains the lock when he is writing to it.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>
>>>>>>>>> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>>>>>>>>>
>>>>>>>>> On 6 February 2017 at 18:18, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
>>>>>>>>>
>>>>>>>>> On 6 February 2017 at 17:50, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>
>>>>>>>>>
>>>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing <https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing>.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>
>>>>>>>>>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison against the logging frameworks.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>>>>>>>>>
>>>>>>>>>>> On 6 February 2017 at 07:54, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>> Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>>>>>>>>>>
>>>>>>>>>>>> On 5 February 2017 at 16:06, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf <https://github.com/ceki/logback-perf> to compare your results to Ceki’s. You also should capture the CPU benchmark speed of your processor and the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>>>>>>
>>>>>>>>>>>> Ralph
>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>>>
>>>>>>>>>>>>> Gary
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>>>
>>>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>>>>>>>
>>>>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html <https://logback.qos.ch/news.html> and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>. I suspect we have a few optimizations we can make.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>
>>
>>
>>
>>
>> --
>>
>>
>> Mikael Ståldal
>> Senior software developer
>>
>> Magine TV
>> mikael.staldal@magine.com <ma...@magine.com>
>> Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com <http://www.magine.com/>
>>
>>
>>
>>
>>
>> --
>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>
>>
>>
>> --
>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>
>
>
>
> --
> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
Re: Logback performance improvements
Posted by Matt Sicker <bo...@gmail.com>.
I ran the logback-perf repo on the same AWS instance. Here's the CSV data.
It appears as soon as more than one thread comes into play, log4j2 has
better scores.
"Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",1,10,964.600470,279.139021,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",1,10,1274.682156,6.179197,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",1,10,1257.026405,32.898682,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",1,10,1363.683525,41.884725,"ops/ms"
"Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",2,10,687.304803,13.266922,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",2,10,1386.596198,207.305249,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",2,10,1579.884762,24.098318,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",2,10,953.138212,99.156775,"ops/ms"
"Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",4,10,670.442970,15.049614,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",4,10,1218.543593,18.234077,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",4,10,1309.092881,31.547936,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",4,10,845.168355,24.547390,"ops/ms"
"Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",8,10,689.805339,7.415023,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",8,10,1196.396592,19.360776,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",8,10,1319.477318,10.601260,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",8,10,816.608726,25.603234,"ops/ms"
"Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",16,10,687.623660,16.114008,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",16,10,1203.649145,8.835115,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",16,10,1266.241778,7.564414,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",16,10,789.507183,9.866592,"ops/ms"
"Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",32,10,690.252411,11.587858,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",32,10,1514185.478492,126804.168771,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",32,10,1264.049209,28.309088,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",32,10,754.828687,14.865909,"ops/ms"
"Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j1File","thrpt",64,10,670.498518,11.147198,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j2File","thrpt",64,10,1293.301940,22.687086,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.log4j2RAF","thrpt",64,10,1380.527892,14.907542,"ops/ms"
"ch.qos.logback.perf.FileAppenderBenchmark.logbackFile","thrpt",64,10,750.528159,11.356281,"ops/ms"
On 9 February 2017 at 13:02, Apache <ra...@dslextreme.com> wrote:
> You might try running Ceki’s benchmark project on AWS and publish those
> numbers here. He also asked people to publish numbers on the Logback user’s
> list so he can add them to his spreadsheet.
>
> From your numbers and the numbers I’ve been getting, it is clear to me
> that the SSDs in Apple’s MacBooks are pretty awesome. From the hardware
> Remko is listing I’d say his machine is about as old as my other MacBook
> except that he has an SSD that is slightly faster than my hard drive.
>
> Ralph
>
> On Feb 9, 2017, at 11:12 AM, Matt Sicker <bo...@gmail.com> wrote:
>
> Ran on an AWS instance (CentOS 7.2), cpuinfo says 8-core Intel(R) Xeon(R)
> CPU E5-2666 v3 @ 2.90GHz, not super sure about all the params involved in
> making the instance, but here's some data (same strangeness with MMF):
>
> Benchmark Mode Samples Score
> Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 86.867
> ± 4.502 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 671.156
> ± 7.099 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1221.814
> ± 22.130 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1178.407
> ± 960.141 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1220.746
> ± 34.421 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 898.122
> ± 8.128 ops/ms
>
> On 9 February 2017 at 12:02, Matt Sicker <bo...@gmail.com> wrote:
>
>> Run on a MacBook Pro (Retina, 15-inch, Mid 2015) 2.5 GHz Intel Core i7.
>> Can find out more hardware specs if needed. I also noticed that the memory
>> mapped file starts out fast and slows down over time (somewhat seen by the
>> error value here).
>>
>> Benchmark Mode Samples Score
>> Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10
>> 96.540 ± 7.875 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10
>> 766.286 ± 11.461 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10
>> 1787.620 ± 36.695 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10
>> 1506.588 ± 956.354 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10
>> 1934.966 ± 50.089 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10
>> 1285.066 ± 12.674 ops/ms
>>
>> On 9 February 2017 at 11:44, Remko Popma <re...@gmail.com> wrote:
>>
>>> My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit), i5-3317u
>>> CPU @ 1.70GHz (dual core with hyperthreading for 4 virtual cores), SSD hard
>>> disk:
>>>
>>> java -jar log4j-perf/target/benchmarks.jar ".*FileAppenderBenchmark.*"
>>> -f 1 -wi 10 -i 20 -t 4 -tu ms
>>>
>>> # Run complete. Total time: 00:03:58
>>>
>>> Benchmark Mode Samples
>>> Score Error Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20
>>> 37.646 ± 0.876 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20
>>> 405.305 ± 6.596 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20
>>> 751.949 ± 16.055 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20
>>> 1250.666 ± 168.757 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20
>>> 728.743 ± 23.909 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20
>>> 676.926 ± 19.518 ops/ms
>>>
>>> --------------------
>>> Logback config without immediateFlush=false:
>>>
>>> # Run complete. Total time: 00:03:44
>>>
>>> Benchmark Mode Samples
>>> Score Error Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20
>>> 37.949 ± 1.220 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20
>>> 404.042 ± 8.450 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20
>>> 690.393 ± 115.537 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20
>>> 1221.681 ± 82.205 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20
>>> 823.059 ± 41.512 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20
>>> 83.352 ± 11.911 ops/ms
>>>
>>>
>>> On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <
>>> mikael.staldal@magine.com> wrote:
>>>
>>>> I guess that with a memory mapped file, you leave it to the OS to do
>>>> the best it can, and you lose any direct control over how it is actually
>>>> done.
>>>>
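[Editorial note: a minimal sketch of the point above — with a memory-mapped file, writes are plain memory stores into a MappedByteBuffer and the OS decides when pages reach disk; force() is the only explicit sync available. This is an illustration only, not log4j2's MemoryMappedFileAppender code; all names here are made up.]

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: appending via a memory-mapped region. The put() is just a memory
// write (no syscall per event); the kernel flushes dirty pages on its own
// schedule unless force() is called explicitly.
public class MmfSketch {
    static String writeAndRead() throws IOException {
        Path path = Files.createTempFile("mmf-sketch", ".log");
        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Mapping 64 bytes extends the empty temp file to that length.
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, 64);
            byte[] msg = "hello mmf".getBytes();
            map.put(msg);   // plain memory store, no explicit I/O
            map.force();    // explicit page sync; otherwise timing is up to the OS
            return new String(Files.readAllBytes(path), 0, msg.length);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(writeAndRead());
    }
}
```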
>>>> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ra...@dslextreme.com>
>>>> wrote:
>>>>
>>>>> On my Mac Pro with the slower external SSD I now got:
>>>>>
>>>>> Benchmark Mode Samples
>>>>> Score Error Units
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10
>>>>> 73.739 ± 0.740 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10
>>>>> 683.129 ± 9.407 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10
>>>>> 991.293 ± 193.049 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10
>>>>> 3072.250 ± 63.475 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10
>>>>> 1056.256 ± 137.673 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10
>>>>> 784.723 ± 153.226 ops/ms
>>>>>
>>>>> and on the same machine with the faster internal SSD I got:
>>>>>
>>>>> Benchmark Mode Samples
>>>>> Score Error Units
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10
>>>>> 74.661 ± 0.232 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10
>>>>> 647.041 ± 2.994 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10
>>>>> 1333.887 ± 13.921 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10
>>>>> 3025.726 ± 210.414 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10
>>>>> 1433.620 ± 11.194 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10
>>>>> 1026.319 ± 13.347 ops/ms
>>>>>
>>>>> I will continue to run this on a few other configurations. I think I
>>>>> would also like to add the async appenders/loggers to this test so that one
>>>>> can see all the various options.
>>>>>
>>>>> It is really quite interesting to me to see how the memory mapped
>>>>> appender behaves so differently from one machine to another. I don’t know
>>>>> under what circumstances I would recommend using it though.
>>>>>
>>>>> Ralph
>>>>>
>>>>>
>>>>> On Feb 9, 2017, at 7:03 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>
>>>>> After modifying the configuration the new results on my laptop are:
>>>>>
>>>>> Benchmark Mode Samples
>>>>> Score Error Units
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10
>>>>> 92.580 ± 3.698 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10
>>>>> 828.707 ± 55.006 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10
>>>>> 1647.230 ± 125.682 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10
>>>>> 1465.002 ± 1284.943 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10
>>>>> 1765.340 ± 149.707 ops/ms
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10
>>>>> 1192.594 ± 57.777 ops/ms
>>>>>
>>>>> I will try the other machines later and post those results.
>>>>>
>>>>> Ralph
>>>>>
>>>>>
>>>>> On Feb 9, 2017, at 5:22 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>
>>>>> Ceki replied on twitter that the immediateFlush option is now a
>>>>> parameter of the appender, not the encoder, so it looks like the config
>>>>> needs to be changed and the test rerun.
>>>>>
>>>>> Ralph
>>>>>
>>>>> On Feb 9, 2017, at 3:08 AM, Remko Popma <re...@gmail.com> wrote:
>>>>>
>>>>> FYI, The write and flush methods in BufferedOutputStream are also
>>>>> synchronized, so we won't be able to do away with synchronization
>>>>> completely.
>>>>>
>>>>> In OutputStreamManager we synchronize multiple methods but these are
>>>>> nested calls. I thought reentrant synchronization had negligible overhead
>>>>> but I haven't measured this myself.
>>>>>
>>>>>
>>>>> Sent from my iPhone
>>>>>
>>>>> On Feb 9, 2017, at 2:31, Apache <ra...@dslextreme.com> wrote:
>>>>>
>>>>> I’m pretty sure the problem we have is that a) we are synchronizing
>>>>> many methods and b) we are synchronizing more than just the write.
>>>>> Unfortunately, I can’t figure out how to reduce that based on how dispersed
>>>>> the code has gotten.
>>>>>
>>>>> Ralph
>>>>>
>>>>> On Feb 8, 2017, at 10:14 AM, Apache <ra...@dslextreme.com>
>>>>> wrote:
>>>>>
>>>>> I tried to modify FileManager to just use a BufferedOutputStream but
>>>>> discovered I couldn’t as the layouts now require the ByteBuffer.
>>>>>
>>>>> Ralph
>>>>>
>>>>> On Feb 8, 2017, at 12:14 AM, Apache <ra...@dslextreme.com>
>>>>> wrote:
>>>>>
>>>>> The append method isn’t synchronized but the writeBytes method
>>>>> acquires a lock. His code is actually a lot simpler than ours in that it
>>>>> just uses a BufferedOutputStream and he only obtains the lock when he is
>>>>> writing to it.
>>>>>
>>>>> Ralph
>>>>>
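[Editorial note: a hypothetical sketch of the locking strategy Ralph describes — encode the event outside the lock, and hold the lock only for the buffered write. This is not Logback's actual code; class and method names are invented for illustration.]

```java
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: a file-appender skeleton that serializes only the write itself.
// Layout/encoding work happens per-thread, unsynchronized; the lock guards
// just the BufferedOutputStream mutation.
class LockOnWriteAppender {
    private final ReentrantLock lock = new ReentrantLock();
    private final BufferedOutputStream out;

    LockOnWriteAppender(OutputStream target, int bufferSize) {
        this.out = new BufferedOutputStream(target, bufferSize);
    }

    void append(String event) throws IOException {
        byte[] bytes = (event + "\n").getBytes(); // layout work, outside the lock
        lock.lock();
        try {
            out.write(bytes);                     // only the write is serialized
        } finally {
            lock.unlock();
        }
    }

    void flush() throws IOException {
        lock.lock();
        try {
            out.flush();
        } finally {
            lock.unlock();
        }
    }
}
```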
>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>
>>>>> I'm not sure if I'm looking in the right place, but a major difference
>>>>> now between Logback's appenders and Log4j's is that Logback isn't
>>>>> synchronized on the append method.
>>>>>
>>>>> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
>>>>>
>>>>>> Is this something we can improve performance on by implementing a
>>>>>> file appender based on FileChannel or AsynchronousFileChannel instead of
>>>>>> OutputStream?
>>>>>>
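[Editorial note: a rough sketch of the FileChannel idea Matt raises — encode each event into a ByteBuffer and hand it to a channel opened in append mode. The FileChannel javadoc states the class is safe for use by multiple concurrent threads, so no framework-level monitor is needed around the write. Names are hypothetical; this is not an actual log4j appender.]

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: appending through a FileChannel. In APPEND mode each write goes
// atomically to the end of the file, and the channel serializes concurrent
// relative writes internally.
class ChannelAppender implements AutoCloseable {
    private final FileChannel channel;

    ChannelAppender(Path path) throws IOException {
        this.channel = FileChannel.open(path,
                StandardOpenOption.CREATE,
                StandardOpenOption.WRITE,
                StandardOpenOption.APPEND);
    }

    void append(String event) throws IOException {
        channel.write(ByteBuffer.wrap((event + "\n").getBytes()));
    }

    @Override
    public void close() throws IOException {
        channel.close();
    }
}
```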
>>>>>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Ceki has updated his numbers to include those reported on the
>>>>>>> mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0
>>>>>>> RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>>>>
>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for
>>>>>>> my two MacBooks are at https://docs.google.com/spread
>>>>>>> sheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?u
>>>>>>> sp=sharing.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> Yes, that is still the standard approach most people use and is the
>>>>>>> only way to provide a head-to-head comparison against the logging
>>>>>>> frameworks.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>
>>>>>>> This is all in a synchronous appender, right? Either way, that's
>>>>>>> rather interesting.
>>>>>>>
>>>>>>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Someone posted numbers on the Logback user’s list that match mine.
>>>>>>>> It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and
>>>>>>>> 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>
>>>>>>>> I think we need some comparisons on the log4j side: file appender
>>>>>>>> with 256k buffer size, random access file appender with 256k buffer size
>>>>>>>> (which appears to be the default), and memory mapped file appender. It'd be
>>>>>>>> cool to see how these compose with async logging enabled in both log4j and
>>>>>>>> logback.
>>>>>>>>
>>>>>>>> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf to
>>>>>>>>> compare your results to Ceki’s. You also should capture the cpubenchmark
>>>>>>>>> speed of your processor and get the speed of your hard drive. I used
>>>>>>>>> Blackmagic speed test on my Mac. I am capturing my results in a Google
>>>>>>>>> spreadsheet. I will post the link once I have it.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> If you want, I can run tests on Windows once the build works on
>>>>>>>>> Windows again.
>>>>>>>>>
>>>>>>>>> Let me know what args/command line...
>>>>>>>>>
>>>>>>>>> Gary
>>>>>>>>>
>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive
>>>>>>>>>> category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>
>>>>>>>>>> Benchmark Mode Samples
>>>>>>>>>> Score Error Units
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20
>>>>>>>>>> 98187.673 ± 4935.712 ops/s
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20
>>>>>>>>>> 842374.496 ± 6762.712 ops/s
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20
>>>>>>>>>> 1853062.583 ± 67032.225 ops/s
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20
>>>>>>>>>> 2036011.226 ± 53208.281 ops/s
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20
>>>>>>>>>> 999667.438 ± 12074.003 ops/s
>>>>>>>>>>
>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run
>>>>>>>>>> anything directly on bare metal any more.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> Ceki finally fixed some of the performance problems in the
>>>>>>>>>> FileAppender. See https://logback.qos.ch/news.html and
>>>>>>>>>> https://docs.google.com/spreadsheets/d/1cpb5D7qny
>>>>>>>>>> ye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have
>>>>>>>>>> a few optimizations we can make.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Matt Sicker <bo...@gmail.com>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> *Mikael Ståldal*
>>>> Senior software developer
>>>>
>>>> *Magine TV*
>>>> mikael.staldal@magine.com
>>>> Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com
>>>>
>>>> Privileged and/or Confidential Information may be contained in this
>>>> message. If you are not the addressee indicated in this message
>>>> (or responsible for delivery of the message to such a person), you may
>>>> not copy or deliver this message to anyone. In such case,
>>>> you should destroy this message and kindly notify the sender by reply
>>>> email.
>>>>
>>>
>>>
>>
>>
>> --
>> Matt Sicker <bo...@gmail.com>
>>
>
>
>
> --
> Matt Sicker <bo...@gmail.com>
>
>
>
--
Matt Sicker <bo...@gmail.com>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
You might try running Ceki’s benchmark project on AWS and publish those numbers here. He also asked people to publish numbers on the Logback user’s list so he can add them to his spreadsheet.
From your numbers and the numbers I’ve been getting, it is clear to me that the SSDs in Apple’s MacBooks are pretty awesome. From the hardware Remko is listing I’d say his machine is about as old as my other MacBook except that he has an SSD that is slightly faster than my hard drive.
Ralph
> On Feb 9, 2017, at 11:12 AM, Matt Sicker <bo...@gmail.com> wrote:
>
> Ran on an AWS instance (CentOS 7.2), cpuinfo says 8-core Intel(R) Xeon(R) CPU E5-2666 v3 @ 2.90GHz, not super sure about all the params involved in making the instance, but here's some data (same strangeness with MMF):
>
> Benchmark Mode Samples Score Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 86.867 ± 4.502 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 671.156 ± 7.099 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1221.814 ± 22.130 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1178.407 ± 960.141 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1220.746 ± 34.421 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 898.122 ± 8.128 ops/ms
>
> On 9 February 2017 at 12:02, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
> Run on a MacBook Pro (Retina, 15-inch, Mid 2015) 2.5 GHz Intel Core i7. Can find out more hardware specs if needed. I also noticed that the memory mapped file starts out fast and slows down over time (somewhat seen by the error value here).
>
> Benchmark Mode Samples Score Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 96.540 ± 7.875 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 766.286 ± 11.461 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1787.620 ± 36.695 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1506.588 ± 956.354 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1934.966 ± 50.089 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1285.066 ± 12.674 ops/ms
>
> On 9 February 2017 at 11:44, Remko Popma <remko.popma@gmail.com <ma...@gmail.com>> wrote:
> My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit), i5-3317u CPU @ 1.70GHz (dual core with hyperthreading for 4 virtual cores), SSD hard disk:
>
> java -jar log4j-perf/target/benchmarks.jar ".*FileAppenderBenchmark.*" -f 1 -wi 10 -i 20 -t 4 -tu ms
>
> # Run complete. Total time: 00:03:58
>
> Benchmark Mode Samples Score Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.646 ± 0.876 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 405.305 ± 6.596 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 751.949 ± 16.055 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1250.666 ± 168.757 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 728.743 ± 23.909 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 676.926 ± 19.518 ops/ms
>
> --------------------
> Logback config without immediateFlush=false:
>
> # Run complete. Total time: 00:03:44
>
> Benchmark Mode Samples Score Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 37.949 ± 1.220 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 404.042 ± 8.450 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 690.393 ± 115.537 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20 1221.681 ± 82.205 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 823.059 ± 41.512 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 83.352 ± 11.911 ops/ms
>
>
> On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <mikael.staldal@magine.com <ma...@magine.com>> wrote:
> I guess that with a memory mapped file, you leave it to the OS to do the best it can, and you lose any direct control over how it is actually done.
>
> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
> On my Mac Pro with the slower external SSD I now got:
>
> Benchmark Mode Samples Score Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 73.739 ± 0.740 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 683.129 ± 9.407 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 991.293 ± 193.049 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3072.250 ± 63.475 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1056.256 ± 137.673 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 784.723 ± 153.226 ops/ms
>
> and on the same machine with the faster internal SSD I got:
>
> Benchmark Mode Samples Score Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 74.661 ± 0.232 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 647.041 ± 2.994 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1333.887 ± 13.921 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3025.726 ± 210.414 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1433.620 ± 11.194 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1026.319 ± 13.347 ops/ms
>
> I will continue to run this on a few other configurations. I think I would also like to add the async appenders/loggers to this test so that one can see all the various options.
>
> It is really quite interesting to me to see how the memory mapped appender behaves so differently from one machine to another. I don’t know under what circumstances I would recommend using it though.
>
> Ralph
>
>
>> On Feb 9, 2017, at 7:03 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>
>> After modifying the configuration the new results on my laptop are:
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 92.580 ± 3.698 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 828.707 ± 55.006 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1647.230 ± 125.682 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1465.002 ± 1284.943 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1765.340 ± 149.707 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1192.594 ± 57.777 ops/ms
>>
>> I will try the other machines later and post those results.
>>
>> Ralph
>>
>>
>>> On Feb 9, 2017, at 5:22 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>
>>> Ceki replied on twitter that the immediateFlush option is now a parameter of the appender, not the encoder, so it looks like the config needs to be changed and the test rerun.
>>>
>>> Ralph
>>>
>>> On Feb 9, 2017, at 3:08 AM, Remko Popma <remko.popma@gmail.com <ma...@gmail.com>> wrote:
>>>
>>>> FYI, The write and flush methods in BufferedOutputStream are also synchronized, so we won't be able to do away with synchronization completely.
>>>>
>>>> In OutputStreamManager we synchronize multiple methods but these are nested calls. I thought reentrant synchronization had negligible overhead but I haven't measured this myself.
>>>>
>>>>
>>>> Sent from my iPhone
>>>>
>>>> On Feb 9, 2017, at 2:31, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>
>>>>> I’m pretty sure the problem we have is that a) we are synchronizing many methods and b) we are synchronizing more than just the write. Unfortunately, I can’t figure out how to reduce that based on how dispersed the code has gotten.
>>>>>
>>>>> Ralph
>>>>>
>>>>>> On Feb 8, 2017, at 10:14 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>
>>>>>> I tried to modify FileManager to just use a BufferedOutputStream but discovered I couldn’t as the layouts now require the ByteBuffer.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>> On Feb 8, 2017, at 12:14 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>
>>>>>>> The append method isn’t synchronized but the writeBytes method acquires a lock. His code is actually a lot simpler than ours in that it just uses a BufferedOutputStream and he only obtains the lock when he is writing to it.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>
>>>>>>>> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>>>>>>>>
>>>>>>>> On 6 February 2017 at 18:18, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
>>>>>>>>
>>>>>>>> On 6 February 2017 at 17:50, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>
>>>>>>>>
>>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing <https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing>.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>
>>>>>>>>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison against the logging frameworks.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>
>>>>>>>>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>>>>>>>>
>>>>>>>>>> On 6 February 2017 at 07:54, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>> Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>>>>>>>>>
>>>>>>>>>>> On 5 February 2017 at 16:06, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf <https://github.com/ceki/logback-perf> to compare your results to Ceki’s. You also should capture the cpubenchmark speed of your processor and get the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>>>>>>>>
>>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>>
>>>>>>>>>>>> Gary
>>>>>>>>>>>>
>>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>>
>>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>>>>>>
>>>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>>>>>>>>
>>>>>>>>>>>> Ralph
>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html <https://logback.qos.ch/news.html> and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>. I suspect we have a few optimizations we can make.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>
>>>>>>
>>>>>
>>
>
>
>
>
> --
>
>
> Mikael Ståldal
> Senior software developer
>
> Magine TV
> mikael.staldal@magine.com <ma...@magine.com>
> Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com <http://www.magine.com/>
>
>
>
>
>
> --
> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>
>
>
> --
> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
Re: Logback performance improvements
Posted by Matt Sicker <bo...@gmail.com>.
Ran on an AWS instance (CentOS 7.2), cpuinfo says 8-core Intel(R) Xeon(R)
CPU E5-2666 v3 @ 2.90GHz, not super sure about all the params involved in
making the instance, but here's some data (same strangeness with MMF):
Benchmark Mode Samples Score
Error Units
o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 86.867 ±
4.502 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 671.156 ±
7.099 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1221.814 ±
22.130 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1178.407 ±
960.141 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1220.746 ±
34.421 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 898.122 ±
8.128 ops/ms
On 9 February 2017 at 12:02, Matt Sicker <bo...@gmail.com> wrote:
> Run on a MacBook Pro (Retina, 15-inch, Mid 2015) 2.5 GHz Intel Core i7.
> Can find out more hardware specs if needed. I also noticed that the memory
> mapped file starts out fast and slows down over time (somewhat seen by the
> error value here).
>
> Benchmark Mode Samples Score
> Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 96.540
> ± 7.875 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 766.286
> ± 11.461 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1787.620
> ± 36.695 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1506.588
> ± 956.354 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1934.966
> ± 50.089 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1285.066
> ± 12.674 ops/ms
>
> On 9 February 2017 at 11:44, Remko Popma <re...@gmail.com> wrote:
>
>> My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit), i5-3317u
>> CPU @ 1.70GHz (dual core with hyperthreading for 4 virtual cores), SSD hard
>> disk:
>>
>> java -jar log4j-perf/target/benchmarks.jar ".*FileAppenderBenchmark.*"
>> -f 1 -wi 10 -i 20 -t 4 -tu ms
>>
>> # Run complete. Total time: 00:03:58
>>
>> Benchmark Mode Samples Score
>> Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20
>> 37.646 ± 0.876 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20
>> 405.305 ± 6.596 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20
>> 751.949 ± 16.055 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20
>> 1250.666 ± 168.757 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20
>> 728.743 ± 23.909 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20
>> 676.926 ± 19.518 ops/ms
>>
>> --------------------
>> Logback config without immediateFlush=false:
>>
>> # Run complete. Total time: 00:03:44
>>
>> Benchmark Mode Samples Score
>> Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20
>> 37.949 ± 1.220 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20
>> 404.042 ± 8.450 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20
>> 690.393 ± 115.537 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 20
>> 1221.681 ± 82.205 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20
>> 823.059 ± 41.512 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20
>> 83.352 ± 11.911 ops/ms
>>
>>
>> On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <
>> mikael.staldal@magine.com> wrote:
>>
>>> I guess that with a memory mapped file, you leave it to the OS to do the
>>> best it can, and you lose any direct control over how it is actually done.
>>>
>>> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ra...@dslextreme.com>
>>> wrote:
>>>
>>>> On my Mac Pro with the slower external SSD I now got:
>>>>
>>>> Benchmark Mode Samples
>>>> Score Error Units
>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10
>>>> 73.739 ± 0.740 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10
>>>> 683.129 ± 9.407 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10
>>>> 991.293 ± 193.049 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10
>>>> 3072.250 ± 63.475 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10
>>>> 1056.256 ± 137.673 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10
>>>> 784.723 ± 153.226 ops/ms
>>>>
>>>> and on the same machine with the faster internal SSD I got:
>>>>
>>>> Benchmark Mode Samples
>>>> Score Error Units
>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10
>>>> 74.661 ± 0.232 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10
>>>> 647.041 ± 2.994 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10
>>>> 1333.887 ± 13.921 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10
>>>> 3025.726 ± 210.414 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10
>>>> 1433.620 ± 11.194 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10
>>>> 1026.319 ± 13.347 ops/ms
>>>>
>>>> I will continue to run this on a few other configurations. I think I
>>>> would also like to add the async appenders/loggers to this test so that one
>>>> can see all the various options.
>>>>
>>>> It is really quite interesting to me to see how the memory mapped
>>>> appender behaves so differently from one machine to another. I don’t know
>>>> under what circumstances I would recommend using it though.
>>>>
>>>> Ralph
>>>>
>>>>
>>>> On Feb 9, 2017, at 7:03 AM, Apache <ra...@dslextreme.com> wrote:
>>>>
>>>> After modifying the configuration the new results on my laptop are:
>>>>
>>>> Benchmark Mode Samples
>>>> Score Error Units
>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10
>>>> 92.580 ± 3.698 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10
>>>> 828.707 ± 55.006 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10
>>>> 1647.230 ± 125.682 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10
>>>> 1465.002 ± 1284.943 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10
>>>> 1765.340 ± 149.707 ops/ms
>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10
>>>> 1192.594 ± 57.777 ops/ms
>>>>
>>>> I will try the other machines later and post those results.
>>>>
>>>> Ralph
>>>>
>>>>
>>>> On Feb 9, 2017, at 5:22 AM, Apache <ra...@dslextreme.com> wrote:
>>>>
>>>> Ceki replied on twitter that the immediateFlush option is now a
>>>> parameter of the appender, not the encoder, so it looks like the config
>>>> needs to be changed and the test rerun.
>>>>
>>>> Ralph
>>>>
>>>> On Feb 9, 2017, at 3:08 AM, Remko Popma <re...@gmail.com> wrote:
>>>>
>>>> FYI, The write and flush methods in BufferedOutputStream are also
>>>> synchronized, so we won't be able to do away with synchronization
>>>> completely.
>>>>
>>>> In OutputStreamManager we synchronize multiple methods but these are
>>>> nested calls. I thought reentrant synchronization had negligible overhead
>>>> but I haven't measured this myself.
>>>>
>>>>
>>>> Sent from my iPhone
>>>>
>>>> On Feb 9, 2017, at 2:31, Apache <ra...@dslextreme.com> wrote:
>>>>
>>>> I’m pretty sure the problem we have is that a) we are synchronizing
>>>> many methods and b) we are synchronizing more than just the write.
>>>> Unfortunately, I can’t figure out how to reduce that based on how dispersed
>>>> the code has gotten.
>>>>
>>>> Ralph
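For illustration, here is a minimal sketch of the "lock only around the write" pattern described above. This is invented example code, not the actual Logback or Log4j implementation; the class and method names are hypothetical:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: format the event outside the critical section, then
// hold a single lock only for the write to a shared BufferedOutputStream.
public class MinimalLockAppender {
    private final Object lock = new Object();
    private final BufferedOutputStream out;

    public MinimalLockAppender(OutputStream target) {
        this.out = new BufferedOutputStream(target, 256 * 1024); // 256k buffer
    }

    public void append(String event) throws IOException {
        // Layout/formatting happens outside the lock.
        byte[] bytes = (event + "\n").getBytes(StandardCharsets.UTF_8);
        synchronized (lock) { // lock held only for the write itself
            out.write(bytes);
        }
    }

    public void flush() throws IOException {
        synchronized (lock) {
            out.flush();
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        MinimalLockAppender appender = new MinimalLockAppender(sink);
        appender.append("hello");
        appender.flush();
        System.out.print(sink.toString(StandardCharsets.UTF_8));
    }
}
```

Note that BufferedOutputStream.write is itself synchronized, so as pointed out above, some (reentrant) locking remains even in this form.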
>>>>
>>>> On Feb 8, 2017, at 10:14 AM, Apache <ra...@dslextreme.com> wrote:
>>>>
>>>> I tried to modify FileManager to just use a BufferedOutputStream but
>>>> discovered I couldn’t as the layouts now require the ByteBuffer.
>>>>
>>>> Ralph
>>>>
>>>> On Feb 8, 2017, at 12:14 AM, Apache <ra...@dslextreme.com> wrote:
>>>>
>>>> The append method isn’t synchronized but the writeBytes method acquires
>>>> a lock. His code is actually a lot simpler than ours in that it just uses a
>>>> BufferedOutputStream and he only obtains the lock when he is writing to it.
>>>>
>>>> Ralph
>>>>
>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>
>>>> I'm not sure if I'm looking in the right place, but a major difference
>>>> now between Logback's appenders and Log4j's is that Logback isn't
>>>> synchronized on the append method.
>>>>
>>>> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
>>>>
>>>>> Is this something we can improve performance on by implementing a file
>>>>> appender based on FileChannel or AsynchronousFileChannel instead of
>>>>> OutputStream?
>>>>>
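A minimal sketch of what a FileChannel-based appender could look like; this illustrates the idea raised above and is not an actual Log4j appender. All names are hypothetical:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical sketch: append via FileChannel instead of an OutputStream.
// FileChannel serializes concurrent writes internally, so no explicit
// synchronized block is needed around the write call.
public class ChannelAppender implements AutoCloseable {
    private final FileChannel channel;

    public ChannelAppender(Path file) throws IOException {
        this.channel = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND);
    }

    public void append(String event) throws IOException {
        ByteBuffer buf = ByteBuffer.wrap((event + "\n").getBytes(StandardCharsets.UTF_8));
        while (buf.hasRemaining()) {
            channel.write(buf); // loop until the whole buffer is written
        }
    }

    @Override
    public void close() throws IOException {
        channel.close();
    }

    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("channel-appender", ".log");
        try (ChannelAppender appender = new ChannelAppender(file)) {
            appender.append("event 1");
            appender.append("event 2");
        }
        System.out.print(Files.readString(file));
        Files.delete(file);
    }
}
```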
>>>>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com>
>>>>> wrote:
>>>>>
>>>>>> Ceki has updated his numbers to include those reported on the mailing
>>>>>> list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0
>>>>>> RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>>>
>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for
>>>>>> my two MacBooks are at https://docs.google.com/spread
>>>>>> sheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?u
>>>>>> sp=sharing.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com>
>>>>>> wrote:
>>>>>>
>>>>>> Yes, that is still the standard approach most people use and is the
>>>>>> only way to provide a head-to-head comparison against the logging
>>>>>> frameworks.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>
>>>>>> This is all in a synchronous appender, right? Either way, that's
>>>>>> rather interesting.
>>>>>>
>>>>>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Someone posted numbers on the Logback user’s list that match mine.
>>>>>>> It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and
>>>>>>> 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>
>>>>>>> I think we need some comparisons on the log4j side: file appender
>>>>>>> with 256k buffer size, random access file appender with 256k buffer size
>>>>>>> (which appears to be the default), and memory mapped file appender. It'd be
>>>>>>> cool to see how these compose with async logging enabled in both log4j and
>>>>>>> logback.
>>>>>>>
>>>>>>> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> You should run the code at https://github.com/ceki/logback-perf to
>>>>>>>> compare your results to Ceki’s. You should also capture the CPU benchmark
>>>>>>>> speed of your processor and the speed of your hard drive. I used the
>>>>>>>> Blackmagic speed test on my Mac. I am capturing my results in a Google
>>>>>>>> spreadsheet. I will post the link once I have it.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> If you want, I can run tests on Windows once the build works on
>>>>>>>> Windows again.
>>>>>>>>
>>>>>>>> Let me know what args/command line...
>>>>>>>>
>>>>>>>> Gary
>>>>>>>>
>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive
>>>>>>>>> category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>
>>>>>>>>> Benchmark Mode Samples
>>>>>>>>> Score Error Units
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20
>>>>>>>>> 98187.673 ± 4935.712 ops/s
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20
>>>>>>>>> 842374.496 ± 6762.712 ops/s
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20
>>>>>>>>> 1853062.583 ± 67032.225 ops/s
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20
>>>>>>>>> 2036011.226 ± 53208.281 ops/s
>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20
>>>>>>>>> 999667.438 ± 12074.003 ops/s
>>>>>>>>>
>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything
>>>>>>>>> directly on bare metal any more.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Ceki finally fixed some of the performance problems in the
>>>>>>>>> FileAppender. See https://logback.qos.ch/news.html and
>>>>>>>>> https://docs.google.com/spreadsheets/d/1cpb5D7qny
>>>>>>>>> ye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have
>>>>>>>>> a few optimizations we can make.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Matt Sicker <bo...@gmail.com>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Matt Sicker <bo...@gmail.com>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> *Mikael Ståldal*
>>> Senior software developer
>>>
>>> *Magine TV*
>>> mikael.staldal@magine.com
>>> Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com
>>>
>>> Privileged and/or Confidential Information may be contained in this
>>> message. If you are not the addressee indicated in this message
>>> (or responsible for delivery of the message to such a person), you may
>>> not copy or deliver this message to anyone. In such case,
>>> you should destroy this message and kindly notify the sender by reply
>>> email.
>>>
>>
>>
>
>
> --
> Matt Sicker <bo...@gmail.com>
>
--
Matt Sicker <bo...@gmail.com>
Re: Logback performance improvements
Posted by Matt Sicker <bo...@gmail.com>.
Run on a MacBook Pro (Retina, 15-inch, Mid 2015) 2.5 GHz Intel Core i7. Can
find out more hardware specs if needed. I also noticed that the memory
mapped file starts out fast and slows down over time (somewhat seen by the
error value here).
Benchmark                                       Mode  Samples     Score     Error  Units
o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       10    96.540 ±   7.875  ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       10   766.286 ±  11.461  ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       10  1787.620 ±  36.695  ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       10  1506.588 ± 956.354  ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       10  1934.966 ±  50.089  ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       10  1285.066 ±  12.674  ops/ms
On 9 February 2017 at 11:44, Remko Popma <re...@gmail.com> wrote:
> My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit), i5-3317u CPU
> @ 1.70GHz (dual core with hyperthreading for 4 virtual cores), SSD hard
> disk:
>
> java -jar log4j-perf/target/benchmarks.jar ".*FileAppenderBenchmark.*" -f
> 1 -wi 10 -i 20 -t 4 -tu ms
>
> # Run complete. Total time: 00:03:58
>
> Benchmark                                       Mode  Samples     Score     Error  Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20    37.646 ±   0.876  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20   405.305 ±   6.596  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20   751.949 ±  16.055  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       20  1250.666 ± 168.757  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20   728.743 ±  23.909  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20   676.926 ±  19.518  ops/ms
>
> --------------------
> Logback config without immediateFlush=false:
>
> # Run complete. Total time: 00:03:44
>
> Benchmark                                       Mode  Samples     Score     Error  Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20    37.949 ±   1.220  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20   404.042 ±   8.450  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20   690.393 ± 115.537  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       20  1221.681 ±  82.205  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20   823.059 ±  41.512  ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20    83.352 ±  11.911  ops/ms
>
>
> On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <mikael.staldal@magine.com
> > wrote:
>
>> I guess that with a memory mapped file, you leave it to the OS to do the
>> best it can, and you lose any direct control over how it is actually done.
>>
>> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ra...@dslextreme.com>
>> wrote:
>>
>>> On my Mac Pro with the slower external SSD I now got:
>>>
>>> Benchmark Mode Samples
>>> Score Error Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10
>>> 73.739 ± 0.740 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10
>>> 683.129 ± 9.407 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10
>>> 991.293 ± 193.049 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10
>>> 3072.250 ± 63.475 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10
>>> 1056.256 ± 137.673 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10
>>> 784.723 ± 153.226 ops/ms
>>>
>>> and on the same machine with the faster internal SSD I got:
>>>
>>> Benchmark Mode Samples
>>> Score Error Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10
>>> 74.661 ± 0.232 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10
>>> 647.041 ± 2.994 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10
>>> 1333.887 ± 13.921 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10
>>> 3025.726 ± 210.414 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10
>>> 1433.620 ± 11.194 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10
>>> 1026.319 ± 13.347 ops/ms
>>>
>>> I will continue to run this on a few other configurations. I think I
>>> would also like to add the async appenders/loggers to this test so that one
>>> can see all the various options.
>>>
>>> It is really quite interesting to me to see how the memory mapped
>>> appender behaves so differently from one machine to another. I don’t know
>>> under what circumstances I would recommend using it though.
>>>
>>> Ralph
>>>
>>>
>>> On Feb 9, 2017, at 7:03 AM, Apache <ra...@dslextreme.com> wrote:
>>>
>>> After modifying the configuration the new results on my laptop are:
>>>
>>> Benchmark Mode Samples
>>> Score Error Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10
>>> 92.580 ± 3.698 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10
>>> 828.707 ± 55.006 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10
>>> 1647.230 ± 125.682 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10
>>> 1465.002 ± 1284.943 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10
>>> 1765.340 ± 149.707 ops/ms
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10
>>> 1192.594 ± 57.777 ops/ms
>>>
>>> I will try the other machines later and post those results.
>>>
>>> Ralph
>>>
>>>
>>> On Feb 9, 2017, at 5:22 AM, Apache <ra...@dslextreme.com> wrote:
>>>
>>> Ceki replied on twitter that the immediateFlush option is now a
>>> parameter of the appender, not the encoder, so it looks like the config
>>> needs to be changed and the test rerun.
>>>
>>> Ralph
>>>
>>> On Feb 9, 2017, at 3:08 AM, Remko Popma <re...@gmail.com> wrote:
>>>
>>> FYI, The write and flush methods in BufferedOutputStream are also
>>> synchronized, so we won't be able to do away with synchronization
>>> completely.
>>>
>>> In OutputStreamManager we synchronize multiple methods but these are
>>> nested calls. I thought reentrant synchronization had negligible overhead
>>> but I haven't measured this myself.
>>>
>>>
>>> Sent from my iPhone
>>>
>>> On Feb 9, 2017, at 2:31, Apache <ra...@dslextreme.com> wrote:
>>>
>>> I’m pretty sure the problem we have is that a) we are synchronizing many
>>> methods and b) we are synchronizing more than just the write.
>>> Unfortunately, I can’t figure out how to reduce that based on how dispersed
>>> the code has gotten.
>>>
>>> Ralph
>>>
>>> On Feb 8, 2017, at 10:14 AM, Apache <ra...@dslextreme.com> wrote:
>>>
>>> I tried to modify FileManager to just use a BufferedOutputStream but
>>> discovered I couldn’t as the layouts now require the ByteBuffer.
>>>
>>> Ralph
>>>
>>> On Feb 8, 2017, at 12:14 AM, Apache <ra...@dslextreme.com> wrote:
>>>
>>> The append method isn’t synchronized but the writeBytes method acquires
>>> a lock. His code is actually a lot simpler than ours in that it just uses a
>>> BufferedOutputStream and he only obtains the lock when he is writing to it.
>>>
>>> Ralph
>>>
>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>
>>> I'm not sure if I'm looking in the right place, but a major difference
>>> now between Logback's appenders and Log4j's is that Logback isn't
>>> synchronized on the append method.
>>>
>>> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
>>>
>>>> Is this something we can improve performance on by implementing a file
>>>> appender based on FileChannel or AsynchronousFileChannel instead of
>>>> OutputStream?
>>>>
>>>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com> wrote:
>>>>
>>>>> Ceki has updated his numbers to include those reported on the mailing
>>>>> list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0
>>>>> RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>>
>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for
>>>>> my two MacBooks are at https://docs.google.com/spread
>>>>> sheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing
>>>>> .
>>>>>
>>>>> Ralph
>>>>>
>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>
>>>>> Yes, that is still the standard approach most people use and is the
>>>>> only way to provide a head-to-head comparison against the logging
>>>>> frameworks.
>>>>>
>>>>> Ralph
>>>>>
>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>
>>>>> This is all in a synchronous appender, right? Either way, that's
>>>>> rather interesting.
>>>>>
>>>>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com>
>>>>> wrote:
>>>>>
>>>>>> Someone posted numbers on the Logback user’s list that match mine.
>>>>>> It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and
>>>>>> 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>
>>>>>> I think we need some comparisons on the log4j side: file appender
>>>>>> with 256k buffer size, random access file appender with 256k buffer size
>>>>>> (which appears to be the default), and memory mapped file appender. It'd be
>>>>>> cool to see how these compose with async logging enabled in both log4j and
>>>>>> logback.
>>>>>>
>>>>>> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com>
>>>>>> wrote:
>>>>>>
>>>>>>> You should run the code at https://github.com/ceki/logback-perf to
>>>>>>> compare your results to Ceki’s. You should also capture the CPU benchmark
>>>>>>> speed of your processor and the speed of your hard drive. I used the
>>>>>>> Blackmagic speed test on my Mac. I am capturing my results in a Google
>>>>>>> spreadsheet. I will post the link once I have it.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> If you want, I can run tests on Windows once the build works on
>>>>>>> Windows again.
>>>>>>>
>>>>>>> Let me know what args/command line...
>>>>>>>
>>>>>>> Gary
>>>>>>>
>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive
>>>>>>>> category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>
>>>>>>>> Benchmark Mode Samples
>>>>>>>> Score Error Units
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20
>>>>>>>> 98187.673 ± 4935.712 ops/s
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20
>>>>>>>> 842374.496 ± 6762.712 ops/s
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20
>>>>>>>> 1853062.583 ± 67032.225 ops/s
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20
>>>>>>>> 2036011.226 ± 53208.281 ops/s
>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20
>>>>>>>> 999667.438 ± 12074.003 ops/s
>>>>>>>>
>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything
>>>>>>>> directly on bare metal any more.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Ceki finally fixed some of the performance problems in the
>>>>>>>> FileAppender. See https://logback.qos.ch/news.html and
>>>>>>>> https://docs.google.com/spreadsheets/d/1cpb5D7qny
>>>>>>>> ye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a
>>>>>>>> few optimizations we can make.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Matt Sicker <bo...@gmail.com>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Matt Sicker <bo...@gmail.com>
>>>>
>>>
>>>
>>>
>>> --
>>> Matt Sicker <bo...@gmail.com>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>> --
>>
>> *Mikael Ståldal*
>> Senior software developer
>>
>> *Magine TV*
>> mikael.staldal@magine.com
>> Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com
>>
>> Privileged and/or Confidential Information may be contained in this
>> message. If you are not the addressee indicated in this message
>> (or responsible for delivery of the message to such a person), you may
>> not copy or deliver this message to anyone. In such case,
>> you should destroy this message and kindly notify the sender by reply
>> email.
>>
>
>
--
Matt Sicker <bo...@gmail.com>
Re: Logback performance improvements
Posted by Remko Popma <re...@gmail.com>.
My results on Windows 10 64-bit laptop (java 1.8.0_51 64bit), i5-3317u CPU
@ 1.70GHz (dual core with hyperthreading for 4 virtual cores), SSD hard
disk:
java -jar log4j-perf/target/benchmarks.jar ".*FileAppenderBenchmark.*" -f 1
-wi 10 -i 20 -t 4 -tu ms
# Run complete. Total time: 00:03:58
Benchmark                                       Mode  Samples     Score     Error  Units
o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20    37.646 ±   0.876  ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20   405.305 ±   6.596  ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20   751.949 ±  16.055  ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       20  1250.666 ± 168.757  ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20   728.743 ±  23.909  ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20   676.926 ±  19.518  ops/ms
--------------------
Logback config without immediateFlush=false:
# Run complete. Total time: 00:03:44
Benchmark                                       Mode  Samples     Score     Error  Units
o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20    37.949 ±   1.220  ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20   404.042 ±   8.450  ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20   690.393 ± 115.537  ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF    thrpt       20  1221.681 ±  82.205  ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20   823.059 ±  41.512  ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20    83.352 ±  11.911  ops/ms
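For reference, the configuration change involved here: in Logback 1.2, immediateFlush is set on the appender rather than the encoder. A hedged sketch of what such a logback.xml could look like (file name and pattern are illustrative, not the exact benchmark config):

```xml
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>target/test.log</file>
    <!-- Logback 1.2: immediateFlush is a property of the appender, not the encoder -->
    <immediateFlush>false</immediateFlush>
    <encoder>
      <pattern>%d %p %c{1} [%t] %m%n</pattern>
    </encoder>
  </appender>
  <root level="info">
    <appender-ref ref="FILE"/>
  </root>
</configuration>
```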
On Fri, Feb 10, 2017 at 1:05 AM, Mikael Ståldal <mi...@magine.com>
wrote:
> I guess that with a memory mapped file, you leave it to the OS to do the
> best it can, and you lose any direct control over how it is actually done.
>
> On Thu, Feb 9, 2017 at 4:52 PM, Apache <ra...@dslextreme.com> wrote:
>
>> On my Mac Pro with the slower external SSD I now got:
>>
>> Benchmark Mode Samples Score
>> Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10
>> 73.739 ± 0.740 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10
>> 683.129 ± 9.407 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10
>> 991.293 ± 193.049 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10
>> 3072.250 ± 63.475 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10
>> 1056.256 ± 137.673 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10
>> 784.723 ± 153.226 ops/ms
>>
>> and on the same machine with the faster internal SSD I got:
>>
>> Benchmark Mode Samples Score
>> Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10
>> 74.661 ± 0.232 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10
>> 647.041 ± 2.994 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10
>> 1333.887 ± 13.921 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10
>> 3025.726 ± 210.414 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10
>> 1433.620 ± 11.194 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10
>> 1026.319 ± 13.347 ops/ms
>>
>> I will continue to run this on a few other configurations. I think I
>> would also like to add the async appenders/loggers to this test so that one
>> can see all the various options.
>>
>> It is really quite interesting to me to see how the memory mapped
>> appender behaves so differently from one machine to another. I don’t know
>> under what circumstances I would recommend using it though.
>>
>> Ralph
>>
>>
>> On Feb 9, 2017, at 7:03 AM, Apache <ra...@dslextreme.com> wrote:
>>
>> After modifying the configuration the new results on my laptop are:
>>
>> Benchmark Mode Samples
>> Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10
>> 92.580 ± 3.698 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10
>> 828.707 ± 55.006 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10
>> 1647.230 ± 125.682 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10
>> 1465.002 ± 1284.943 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10
>> 1765.340 ± 149.707 ops/ms
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10
>> 1192.594 ± 57.777 ops/ms
>>
>> I will try the other machines later and post those results.
>>
>> Ralph
>>
>>
>> On Feb 9, 2017, at 5:22 AM, Apache <ra...@dslextreme.com> wrote:
>>
>> Ceki replied on twitter that the immediateFlush option is now a parameter
>> of the appender, not the encoder, so it looks like the config needs to be
>> changed and the test rerun.
>>
>> Ralph
>>
>> On Feb 9, 2017, at 3:08 AM, Remko Popma <re...@gmail.com> wrote:
>>
>> FYI, The write and flush methods in BufferedOutputStream are also
>> synchronized, so we won't be able to do away with synchronization
>> completely.
>>
>> In OutputStreamManager we synchronize multiple methods but these are
>> nested calls. I thought reentrant synchronization had negligible overhead
>> but I haven't measured this myself.
>>
>>
>> Sent from my iPhone
>>
>> On Feb 9, 2017, at 2:31, Apache <ra...@dslextreme.com> wrote:
>>
>> I’m pretty sure the problem we have is that a) we are synchronizing many
>> methods and b) we are synchronizing more than just the write.
>> Unfortunately, I can’t figure out how to reduce that based on how dispersed
>> the code has gotten.
>>
>> Ralph
>>
>> On Feb 8, 2017, at 10:14 AM, Apache <ra...@dslextreme.com> wrote:
>>
>> I tried to modify FileManager to just use a BufferedOutputStream but
>> discovered I couldn’t as the layouts now require the ByteBuffer.
>>
>> Ralph
>>
>> On Feb 8, 2017, at 12:14 AM, Apache <ra...@dslextreme.com> wrote:
>>
>> The append method isn’t synchronized but the writeBytes method acquires a
>> lock. His code is actually a lot simpler than ours in that it just uses a
>> BufferedOutputStream and he only obtains the lock when he is writing to it.
>>
>> Ralph
>>
>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>>
>> I'm not sure if I'm looking in the right place, but a major difference
>> now between Logback's appenders and Log4j's is that Logback isn't
>> synchronized on the append method.
>>
>> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
>>
>>> Is this something we can improve performance on by implementing a file
>>> appender based on FileChannel or AsynchronousFileChannel instead of
>>> OutputStream?
>>>
>>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com> wrote:
>>>
>>>> Ceki has updated his numbers to include those reported on the mailing
>>>> list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0
>>>> RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>
>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my
>>>> two MacBooks are at https://docs.google.com/spread
>>>> sheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing
>>>> .
>>>>
>>>> Ralph
>>>>
>>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com> wrote:
>>>>
>>>> Yes, that is still the standard approach most people use and is the
>>>> only way to provide a head-to-head comparison against the logging
>>>> frameworks.
>>>>
>>>> Ralph
>>>>
>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>
>>>> This is all in a synchronous appender, right? Either way, that's rather
>>>> interesting.
>>>>
>>>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com> wrote:
>>>>
>>>>> Someone posted numbers on the Logback user’s list that match mine. It
>>>>> shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and
>>>>> 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>
>>>>> Ralph
>>>>>
>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>
>>>>> I think we need some comparisons on the log4j side: file appender with
>>>>> 256k buffer size, random access file appender with 256k buffer size (which
>>>>> appears to be the default), and memory mapped file appender. It'd be cool
>>>>> to see how these compose with async logging enabled in both log4j and
>>>>> logback.
>>>>>
>>>>> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com>
>>>>> wrote:
>>>>>
>>>>>> You should run the code at https://github.com/ceki/logback-perf to
>>>>>> compare your results to Ceki’s. You should also capture the CPU benchmark
>>>>>> speed of your processor and the speed of your hard drive. I used the
>>>>>> Blackmagic speed test on my Mac. I am capturing my results in a Google
>>>>>> spreadsheet. I will post the link once I have it.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>> If you want, I can run tests on Windows once the build works on
>>>>>> Windows again.
>>>>>>
>>>>>> Let me know what args/command line...
>>>>>>
>>>>>> Gary
>>>>>>
>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com> wrote:
>>>>>>
>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive
>>>>>>> category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>
>>>>>>> Benchmark Mode Samples
>>>>>>> Score Error Units
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20
>>>>>>> 98187.673 ± 4935.712 ops/s
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20
>>>>>>> 842374.496 ± 6762.712 ops/s
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20
>>>>>>> 1853062.583 ± 67032.225 ops/s
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20
>>>>>>> 2036011.226 ± 53208.281 ops/s
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20
>>>>>>> 999667.438 ± 12074.003 ops/s
>>>>>>>
>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything
>>>>>>> directly on bare metal any more.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> Ceki finally fixed some of the performance problems in the
>>>>>>> FileAppender. See https://logback.qos.ch/news.html and
>>>>>>> https://docs.google.com/spreadsheets/d/1cpb5D7qny
>>>>>>> ye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a
>>>>>>> few optimizations we can make.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Matt Sicker <bo...@gmail.com>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Matt Sicker <bo...@gmail.com>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Matt Sicker <bo...@gmail.com>
>>>
>>
>>
>>
>> --
>> Matt Sicker <bo...@gmail.com>
>>
>>
>>
>>
>>
>>
>>
>
>
> --
>
> *Mikael Ståldal*
> Senior software developer
>
> *Magine TV*
> mikael.staldal@magine.com
> Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com
>
> Privileged and/or Confidential Information may be contained in this
> message. If you are not the addressee indicated in this message
> (or responsible for delivery of the message to such a person), you may not
> copy or deliver this message to anyone. In such case,
> you should destroy this message and kindly notify the sender by reply
> email.
>
Re: Logback performance improvements
Posted by Mikael Ståldal <mi...@magine.com>.
I guess that with a memory mapped file, you leave it to the OS to do the
best it can, and you lose any direct control over how it is actually done.
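Mikael's point can be sketched in plain NIO. The following is a minimal, hypothetical example (it is not Log4j's MemoryMappedFileAppender) showing that a mapped write is just a memory copy into the page cache, and that `force()` is about the only explicit flush control the application retains; the class and file names are made up for illustration:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedWrite {
    // Write a message through a memory-mapped region. The put() below is a
    // plain memory copy; the OS decides when the dirty pages reach the disk
    // unless force() is called explicitly.
    static String writeAndReadBack(Path file, String msg) throws IOException {
        byte[] bytes = msg.getBytes(StandardCharsets.UTF_8);
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, bytes.length);
            buf.put(bytes);   // no syscall per event, just a memory copy
            buf.force();      // explicit flush: the one piece of direct control left
        }
        return new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("mmap-demo", ".log");
        System.out.println(writeAndReadBack(tmp, "hello mmap"));
        Files.delete(tmp);
    }
}
```

Everything else — write-back scheduling, page eviction, read-ahead — is left to the kernel, which would explain machine-to-machine variance in the MMF numbers.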
--
*Mikael Ståldal*
Senior software developer
*Magine TV*
mikael.staldal@magine.com
Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
On my Mac Pro with the slower external SSD I now got:
Benchmark Mode Samples Score Error Units
o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 73.739 ± 0.740 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 683.129 ± 9.407 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 991.293 ± 193.049 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3072.250 ± 63.475 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1056.256 ± 137.673 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 784.723 ± 153.226 ops/ms
and on the same machine with the faster internal SSD I got:
Benchmark Mode Samples Score Error Units
o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 74.661 ± 0.232 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 647.041 ± 2.994 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1333.887 ± 13.921 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 3025.726 ± 210.414 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1433.620 ± 11.194 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1026.319 ± 13.347 ops/ms
I will continue to run this on a few other configurations. I think I would also like to add the async appenders/loggers to this test so that one can see all the various options.
It is really quite interesting to me to see how the memory mapped appender behaves so differently from one machine to another. I don’t know under what circumstances I would recommend using it though.
Ralph
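The flush-per-event question running through this thread can be illustrated outside JMH. This is a toy harness (not the FileAppenderBenchmark used above, and it assumes nothing about either framework's internals) that writes the same lines through a 256k BufferedOutputStream with and without a flush per event, the way an immediateFlush-style appender would:

```java
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class FlushCost {
    // Write `events` log lines, optionally flushing after each one the way
    // an immediateFlush=true appender does. Returns elapsed nanoseconds.
    static long timeWrites(Path file, int events, boolean flushEach) throws IOException {
        long start = System.nanoTime();
        try (BufferedOutputStream out = new BufferedOutputStream(
                new FileOutputStream(file.toFile()), 256 * 1024)) {
            byte[] line = "2017-02-09 INFO hello world\n".getBytes(StandardCharsets.UTF_8);
            for (int i = 0; i < events; i++) {
                out.write(line);            // synchronized, but cheap while buffered
                if (flushEach) out.flush(); // forces a write syscall per event
            }
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws IOException {
        Path a = Files.createTempFile("buffered", ".log");
        Path b = Files.createTempFile("flushed", ".log");
        long buffered = timeWrites(a, 100_000, false);
        long flushed  = timeWrites(b, 100_000, true);
        System.out.println("buffered=" + buffered + "ns flushed=" + flushed + "ns");
        // Same bytes reach the disk either way; only the timing differs.
        System.out.println(Files.size(a) == Files.size(b));
        Files.delete(a);
        Files.delete(b);
    }
}
```

A single timed run like this is only a sketch — JMH exists precisely because warmup, dead-code elimination, and OS caching distort naive measurements — but it shows why the flush placement dominates these throughput numbers.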
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
It is there now.
Ralph
> On Feb 9, 2017, at 7:46 AM, Remko Popma <re...@gmail.com> wrote:
>
> can you push the correct config?
Re: Logback performance improvements
Posted by Remko Popma <re...@gmail.com>.
can you push the correct config?
On Thu, Feb 9, 2017 at 11:03 PM, Apache <ra...@dslextreme.com> wrote:
> After modifying the configuration the new results on my laptop are:
>
> Benchmark Mode Samples Score
> Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 92.580
> ± 3.698 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 828.707
> ± 55.006 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1647.230
> ± 125.682 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1465.002
> ± 1284.943 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1765.340
> ± 149.707 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1192.594
> ± 57.777 ops/ms
>
> I will try the other machines later and post those results.
>
> Ralph
>
>
> On Feb 9, 2017, at 5:22 AM, Apache <ra...@dslextreme.com> wrote:
>
> Ceki replied on twitter that the immediateFlush option is now a parameter
> of the appended, not the encoder, so it looks like the confit needs to be
> changed and the test rerun.
>
> Ralph
>
> On Feb 9, 2017, at 3:08 AM, Remko Popma <re...@gmail.com> wrote:
>
> FYI, The write and flush methods in BufferedOutputStream are also
> synchronized, so we won't be able to do away with synchronization
> completely.
>
> In OutputStreamManager we synchronize multiple methods but these are
> nested calls. I thought reentrant synchronization had negligible overhead
> but I haven't measured this myself.
>
>
> Sent from my iPhone
>
> On Feb 9, 2017, at 2:31, Apache <ra...@dslextreme.com> wrote:
>
> I’m pretty sure the problem we have is that a) we are synchronizing many
> methods and b) we are synchronizing more than just the write.
> Unfortunately, I can’t figure out how to reduce that based on how dispersed
> the code has gotten.
>
> Ralph
>
> On Feb 8, 2017, at 10:14 AM, Apache <ra...@dslextreme.com> wrote:
>
> I tried to modify FileManager to just use a BufferedOutputStream but
> discovered I couldn’t as the layouts now require the ByteBuffer.
>
> Ralph
>
> On Feb 8, 2017, at 12:14 AM, Apache <ra...@dslextreme.com> wrote:
>
> The append method isn’t synchronized but the writeBytes method acquires a
> lock. His code is actually a lot simpler than ours in that it just uses a
> BufferedOutputStream and he only obtains the lock when he is writing to it.
>
> Ralph
>
> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>
> I'm not sure if I'm looking in the right place, but a major difference now
> between Logback's appenders and Log4j's is that Logback isn't synchronized
> on the append method.
>
> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
>
>> Is this something we can improve performance on by implementing a file
>> appender based on FileChannel or AsynchronousFileChannel instead of
>> OutputStream?
>>
>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com> wrote:
>>
>>> Ceki has updated his numbers to include those reported on the mailing
>>> list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0
>>> RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>
>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my
>>> two MacBooks are at https://docs.google.com/spread
>>> sheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
>>>
>>> Ralph
>>>
>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com> wrote:
>>>
>>> Yes, that is still the standard approach most people use and is the only
>>> way to provide a head-to-head comparison against the logging frameworks.
>>>
>>> Ralph
>>>
>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>
>>> This is all in a synchronous appender, right? Either way, that's rather
>>> interesting.
>>>
>>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com> wrote:
>>>
>>>> Someone posted numbers on the Logback user’s list that match mine. It
>>>> shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and
>>>> 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>
>>>> Ralph
>>>>
>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>
>>>> I think we need some comparisons on the log4j side: file appender with
>>>> 256k buffer size, random access file appender with 256k buffer size (which
>>>> appears to be the default), and memory mapped file appender. It'd be cool
>>>> to see how these compose with async logging enabled in both log4j and
>>>> logback.
>>>>
>>>> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com> wrote:
>>>>
>>>>> You should run the code at https://github.com/ceki/logback-perf to
>>>>> compare your results to Ceki’s. You also should capture the cpubenchmark
>>>>> speed of your processor and get the speed of your hard drive. I used
>>>>> Blackmagic speed test on my Mac. I am capturing my results in a Google
>>>>> spreadsheet. I will post the like once I have it.
>>>>>
>>>>> Ralph
>>>>>
>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com>
>>>>> wrote:
>>>>>
>>>>> If you want, I can run tests on Windows once the build works on
>>>>> Windows again.
>>>>>
>>>>> Let me know what args/command line...
>>>>>
>>>>> Gary
>>>>>
>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com> wrote:
>>>>>
>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive
>>>>>> category. With Logback 1.10 and -t 4 now get
>>>>>>
>>>>>> Benchmark Mode Samples
>>>>>> Score Error Units
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20
>>>>>> 98187.673 ± 4935.712 ops/s
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20
>>>>>> 842374.496 ± 6762.712 ops/s
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20
>>>>>> 1853062.583 ± 67032.225 ops/s
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20
>>>>>> 2036011.226 ± 53208.281 ops/s
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20
>>>>>> 999667.438 ± 12074.003 ops/s
>>>>>>
>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything
>>>>>> directly on bare metal any more.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com>
>>>>>> wrote:
>>>>>>
>>>>>> Ceki finally fixed some of the performance problems in the
>>>>>> FileAppender. See https://logback.qos.ch/news.html and
>>>>>> https://docs.google.com/spreadsheets/d/1cpb5D7qny
>>>>>> ye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a
>>>>>> few optimizations we can make.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Matt Sicker <bo...@gmail.com>
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Matt Sicker <bo...@gmail.com>
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Matt Sicker <bo...@gmail.com>
>>
>
>
>
> --
> Matt Sicker <bo...@gmail.com>
>
>
>
>
>
>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
After modifying the configuration the new results on my laptop are:
Benchmark Mode Samples Score Error Units
o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 92.580 ± 3.698 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 828.707 ± 55.006 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1647.230 ± 125.682 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1465.002 ± 1284.943 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 1765.340 ± 149.707 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 1192.594 ± 57.777 ops/ms
I will try the other machines later and post those results.
Ralph
> On Feb 9, 2017, at 5:22 AM, Apache <ra...@dslextreme.com> wrote:
>
> Ceki replied on Twitter that the immediateFlush option is now a parameter of the appender, not the encoder, so it looks like the config needs to be changed and the test rerun.
>
> Ralph
>
> On Feb 9, 2017, at 3:08 AM, Remko Popma <remko.popma@gmail.com <ma...@gmail.com>> wrote:
>
>> FYI, The write and flush methods in BufferedOutputStream are also synchronized, so we won't be able to do away with synchronization completely.
>>
>> In OutputStreamManager we synchronize multiple methods but these are nested calls. I thought reentrant synchronization had negligible overhead but I haven't measured this myself.
>>
>>
>> Sent from my iPhone
>>
>> On Feb 9, 2017, at 2:31, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>
>>> I’m pretty sure the problem we have is that a) we are synchronizing many methods and b) we are synchronizing more than just the write. Unfortunately, I can’t figure out how to reduce that based on how dispersed the code has gotten.
>>>
>>> Ralph
>>>
>>>> On Feb 8, 2017, at 10:14 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>
>>>> I tried to modify FileManager to just use a BufferedOutputStream but discovered I couldn’t as the layouts now require the ByteBuffer.
>>>>
>>>> Ralph
>>>>
>>>>> On Feb 8, 2017, at 12:14 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>
>>>>> The append method isn’t synchronized but the writeBytes method acquires a lock. His code is actually a lot simpler than ours in that it just uses a BufferedOutputStream and he only obtains the lock when he is writing to it.
>>>>>
>>>>> Ralph
>>>>>
>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>
>>>>>> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>>>>>>
>>>>>> On 6 February 2017 at 18:18, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
>>>>>>
>>>>>> On 6 February 2017 at 17:50, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>
>>>>>>
>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing <https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing>.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>
>>>>>>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison against the logging frameworks.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>
>>>>>>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>>>>>>
>>>>>>>> On 6 February 2017 at 07:54, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>> Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>
>>>>>>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>>>>>>>
>>>>>>>>> On 5 February 2017 at 16:06, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf <https://github.com/ceki/logback-perf> to compare your results to Ceki’s. You also should capture the cpubenchmark speed of your processor and get the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>
>>>>>>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>>>>>>
>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>
>>>>>>>>>> Gary
>>>>>>>>>>
>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>
>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>>>>
>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html <https://logback.qos.ch/news.html> and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>. I suspect we have a few optimizations we can make.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>
>>>>
>>>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
Ceki replied on Twitter that the immediateFlush option is now a parameter of the appender, not the encoder, so it looks like the config needs to be changed and the test rerun.
Ralph
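If that is the change, the benchmark's Logback configuration would need to move the setting up one level. A hedged sketch of what the updated config might look like (element names assumed from Logback's usual conventions, not verified against the 1.2 documentation):

```xml
<!-- Sketch only: in Logback 1.2, immediateFlush is set on the appender,
     not on the encoder as in earlier configs. -->
<appender name="FILE" class="ch.qos.logback.core.FileAppender">
  <file>perf-test.log</file>
  <immediateFlush>false</immediateFlush>
  <encoder>
    <!-- immediateFlush would no longer belong here -->
    <pattern>%d %p %c - %m%n</pattern>
  </encoder>
</appender>
```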
> On Feb 9, 2017, at 3:08 AM, Remko Popma <re...@gmail.com> wrote:
>
> FYI, The write and flush methods in BufferedOutputStream are also synchronized, so we won't be able to do away with synchronization completely.
>
> In OutputStreamManager we synchronize multiple methods but these are nested calls. I thought reentrant synchronization had negligible overhead but I haven't measured this myself.
>
>
> Sent from my iPhone
>
>> On Feb 9, 2017, at 2:31, Apache <ra...@dslextreme.com> wrote:
>>
>> I’m pretty sure the problem we have is that a) we are synchronizing many methods and b) we are synchronizing more than just the write. Unfortunately, I can’t figure out how to reduce that based on how dispersed the code has gotten.
>>
>> Ralph
>>
>>> On Feb 8, 2017, at 10:14 AM, Apache <ra...@dslextreme.com> wrote:
>>>
>>> I tried to modify FileManager to just use a BufferedOutputStream but discovered I couldn’t as the layouts now require the ByteBuffer.
>>>
>>> Ralph
>>>
>>>> On Feb 8, 2017, at 12:14 AM, Apache <ra...@dslextreme.com> wrote:
>>>>
>>>> The append method isn’t synchronized but the writeBytes method acquires a lock. His code is actually a lot simpler than ours in that it just uses a BufferedOutputStream and he only obtains the lock when he is writing to it.
>>>>
>>>> Ralph
>>>>
>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>
>>>>> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>>>>>
>>>>> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
>>>>>> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
>>>>>>
>>>>>>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com> wrote:
>>>>>>> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>>>>
>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>
>>>>>>>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison against the logging frameworks.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>>>>>>>
>>>>>>>>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>> Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>>>>>>>>>
>>>>>>>>>>> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf to compare your results to Ceki’s. You also should capture the cpubenchmark speed of your processor and get the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>>>>>>
>>>>>>>>>>>> Ralph
>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>>>
>>>>>>>>>>>>> Gary
>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a few optimizations we can make.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Matt Sicker <bo...@gmail.com>
>>>>
>>>
>>
Re: Logback performance improvements
Posted by Remko Popma <re...@gmail.com>.
FYI, The write and flush methods in BufferedOutputStream are also synchronized, so we won't be able to do away with synchronization completely.
In OutputStreamManager we synchronize multiple methods but these are nested calls. I thought reentrant synchronization had negligible overhead but I haven't measured this myself.
Sent from my iPhone
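The nested-call pattern Remko describes can be illustrated in isolation. The sketch below is an assumption-laden stand-in, not the actual OutputStreamManager code: the inner synchronized method re-enters a monitor the calling thread already holds, which the JVM treats as a cheap reentrant acquisition.

```java
import java.io.ByteArrayOutputStream;

// Minimal illustration of reentrant synchronization (hypothetical class,
// not OutputStreamManager): write() holds the monitor and calls
// flushBuffer(), which re-acquires the same monitor reentrantly.
class SketchManager {
    private final ByteArrayOutputStream out = new ByteArrayOutputStream();

    synchronized void write(byte[] data) {
        out.write(data, 0, data.length);
        flushBuffer(); // nested call: re-enters the monitor we already hold
    }

    synchronized void flushBuffer() {
        // BufferedOutputStream.write/flush are themselves synchronized in
        // the JDK, so a real appender pays at least one more acquisition here.
    }

    synchronized int size() {
        return out.size();
    }
}
```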
> On Feb 9, 2017, at 2:31, Apache <ra...@dslextreme.com> wrote:
>
> I’m pretty sure the problem we have is that a) we are synchronizing many methods and b) we are synchronizing more than just the write. Unfortunately, I can’t figure out how to reduce that based on how dispersed the code has gotten.
>
> Ralph
>
>> On Feb 8, 2017, at 10:14 AM, Apache <ra...@dslextreme.com> wrote:
>>
>> I tried to modify FileManager to just use a BufferedOutputStream but discovered I couldn’t as the layouts now require the ByteBuffer.
>>
>> Ralph
>>
>>> On Feb 8, 2017, at 12:14 AM, Apache <ra...@dslextreme.com> wrote:
>>>
>>> The append method isn’t synchronized but the writeBytes method acquires a lock. His code is actually a lot simpler than ours in that it just uses a BufferedOutputStream and he only obtains the lock when he is writing to it.
>>>
>>> Ralph
>>>
>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>
>>>> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>>>>
>>>> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
>>>>> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
>>>>>
>>>>>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com> wrote:
>>>>>> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>>>
>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>
>>>>>>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison against the logging frameworks.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>
>>>>>>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>>>>>>
>>>>>>>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>> Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>>>>>>>>
>>>>>>>>>> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf to compare your results to Ceki’s. You also should capture the cpubenchmark speed of your processor and get the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>>>>>>>>
>>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>>
>>>>>>>>>>>> Gary
>>>>>>>>>>>>
>>>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>>>
>>>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>>>>>>>
>>>>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a few optimizations we can make.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Matt Sicker <bo...@gmail.com>
>>>>
>>>>
>>>>
>>>> --
>>>> Matt Sicker <bo...@gmail.com>
>>>
>>
>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
I’m pretty sure the problem we have is that a) we are synchronizing many methods and b) we are synchronizing more than just the write. Unfortunately, I can’t figure out how to reduce that based on how dispersed the code has gotten.
Ralph
> On Feb 8, 2017, at 10:14 AM, Apache <ra...@dslextreme.com> wrote:
>
> I tried to modify FileManager to just use a BufferedOutputStream but discovered I couldn’t as the layouts now require the ByteBuffer.
>
> Ralph
>
>> On Feb 8, 2017, at 12:14 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>
>> The append method isn’t synchronized but the writeBytes method acquires a lock. His code is actually a lot simpler than ours in that it just uses a BufferedOutputStream and he only obtains the lock when he is writing to it.
>>
>> Ralph
>>
>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>
>>> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>>>
>>> On 6 February 2017 at 18:18, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
>>>
>>> On 6 February 2017 at 17:50, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>
>>>
>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing <https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing>.
>>>
>>> Ralph
>>>
>>>> On Feb 6, 2017, at 9:33 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>
>>>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison against the logging frameworks.
>>>>
>>>> Ralph
>>>>
>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>
>>>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>>>
>>>>> On 6 February 2017 at 07:54, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>> Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>
>>>>> Ralph
>>>>>
>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>
>>>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>>>>
>>>>>> On 5 February 2017 at 16:06, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>> You should run the code at https://github.com/ceki/logback-perf <https://github.com/ceki/logback-perf> to compare your results to Ceki’s. You also should capture the cpubenchmark speed of your processor and get the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>
>>>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>>>
>>>>>>> Let me know what args/command line...
>>>>>>>
>>>>>>> Gary
>>>>>>>
>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>
>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>
>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>
>>>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html <https://logback.qos.ch/news.html> and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>. I suspect we have a few optimizations we can make.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>
>>>
>>>
>>>
>>>
>>> --
>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>
>>>
>>>
>>> --
>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>
>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
I tried to modify FileManager to just use a BufferedOutputStream but discovered I couldn’t as the layouts now require the ByteBuffer.
Ralph
> On Feb 8, 2017, at 12:14 AM, Apache <ra...@dslextreme.com> wrote:
>
> The append method isn’t synchronized but the writeBytes method acquires a lock. His code is actually a lot simpler than ours in that it just uses a BufferedOutputStream and he only obtains the lock when he is writing to it.
>
> Ralph
>
>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>
>> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>>
>> On 6 February 2017 at 18:18, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
>>
>> On 6 February 2017 at 17:50, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>
>>
>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing <https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing>.
>>
>> Ralph
>>
>>> On Feb 6, 2017, at 9:33 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>
>>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison against the logging frameworks.
>>>
>>> Ralph
>>>
>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>
>>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>>
>>>> On 6 February 2017 at 07:54, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>> Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>
>>>> Ralph
>>>>
>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>
>>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>>>
>>>>> On 5 February 2017 at 16:06, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>> You should run the code at https://github.com/ceki/logback-perf <https://github.com/ceki/logback-perf> to compare your results to Ceki’s. You also should capture the cpubenchmark speed of your processor and get the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>>
>>>>> Ralph
>>>>>
>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>>>>>>
>>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>>
>>>>>> Let me know what args/command line...
>>>>>>
>>>>>> Gary
>>>>>>
>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>
>>>>>> Benchmark Mode Samples Score Error Units
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>
>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>
>>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html <https://logback.qos.ch/news.html> and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>. I suspect we have a few optimizations we can make.
>>>>>>>
>>>>>>> Ralph
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>
>>
>>
>>
>>
>> --
>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>
>>
>>
>> --
>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
The append method isn’t synchronized, but the writeBytes method acquires a lock. His code is actually a lot simpler than ours: it just uses a BufferedOutputStream and only obtains the lock when writing to it.
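A rough sketch of that pattern (hypothetical class and method names, not Logback’s actual source) looks like this:

```java
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of "unsynchronized append, lock only around the write":
// layout/encoding happens outside the lock; only the copy into the
// shared BufferedOutputStream is serialized.
public class LockOnWriteAppender {
    private final OutputStream out;
    private final ReentrantLock lock = new ReentrantLock();

    public LockOnWriteAppender(OutputStream target) {
        this.out = new BufferedOutputStream(target, 8 * 1024);
    }

    // append() itself takes no lock...
    public void append(String event) throws IOException {
        byte[] bytes = (event + "\n").getBytes(StandardCharsets.UTF_8);
        writeBytes(bytes); // ...only writeBytes() does.
    }

    private void writeBytes(byte[] bytes) throws IOException {
        lock.lock();
        try {
            out.write(bytes);
        } finally {
            lock.unlock();
        }
    }

    public void close() throws IOException {
        out.flush();
        out.close();
    }
}
```

The point is that contention is limited to the buffered copy, not the whole append path.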
Ralph
> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>
> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>
> On 6 February 2017 at 18:18, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
>
> On 6 February 2017 at 17:50, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>
>
> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing <https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing>.
>
> Ralph
>
>> On Feb 6, 2017, at 9:33 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>
>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison across the logging frameworks.
>>
>> Ralph
>>
>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>
>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>
>>> On 6 February 2017 at 07:54, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>> Someone posted numbers on the Logback users’ list that match mine. They show that Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better, and 1.2-SNAPSHOT is on par with or slightly better than Log4j 2.
>>>
>>> Ralph
>>>
>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>
>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>>
>>>> On 5 February 2017 at 16:06, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>> You should run the code at https://github.com/ceki/logback-perf <https://github.com/ceki/logback-perf> to compare your results to Ceki’s. You should also capture the cpubenchmark speed of your processor and the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>
>>>> Ralph
>>>>
>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>>>>>
>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>
>>>>> Let me know what args/command line...
>>>>>
>>>>> Gary
>>>>>
>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>
>>>>> Benchmark Mode Samples Score Error Units
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>
>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>
>>>>> Ralph
>>>>>
>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>
>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html <https://logback.qos.ch/news.html> and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>. I suspect we have a few optimizations we can make.
>>>>>>
>>>>>> Ralph
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>
>>>
>>>
>>>
>>> --
>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>
>
>
>
>
> --
> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>
>
>
> --
> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
Re: Logback performance improvements
Posted by Remko Popma <re...@gmail.com>.
Is it possible that the memory mapped file starts resizing and remapping at this point?
Sent from my iPhone
> On Feb 8, 2017, at 15:12, Apache <ra...@dslextreme.com> wrote:
>
> I used a ThreadLocal byte buffer and wrote to the file channel, and if anything it performed slightly worse. This is probably because I had to write after every event rather than when the buffer was full; otherwise the ordering of events in the output would get messed up.
>
> I decided to throw the MemoryMappedFileAppender into the mix and I got some very strange behavior. Using Logback 1.1.10 the overall results with 4 threads were:
>
> Benchmark Mode Samples Score Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 98.886 ± 10.855 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 826.640 ± 14.219 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1861.518 ± 139.885 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1478.489 ± 970.526 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 2023.783 ± 41.284 ops/ms
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 997.495 ± 42.801 ops/ms
>
> What is strange is the actual output from the run for the MemoryMappedFileAppender. You will notice that it starts off like a bat out of hell but then bogs down terribly. I’d love to know why.
>
> # VM invoker: /Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/bin/java
> # VM options: <none>
> # Warmup: 10 iterations, 1 s each
> # Measurement: 10 iterations, 1 s each
> # Timeout: 10 min per iteration
> # Threads: 4 threads, will synchronize iterations
> # Benchmark mode: Throughput, ops/time
> # Benchmark: org.apache.logging.log4j.perf.jmh.FileAppenderBenchmark.log4j2MMF
>
> # Run progress: 50.00% complete, ETA 00:01:14
> # Fork: 1 of 1
> # Warmup Iteration 1: 2546.240 ops/ms
> # Warmup Iteration 2: 3071.504 ops/ms
> # Warmup Iteration 3: 2980.503 ops/ms
> # Warmup Iteration 4: 2709.490 ops/ms
> # Warmup Iteration 5: 2661.919 ops/ms
> # Warmup Iteration 6: 2610.875 ops/ms
> # Warmup Iteration 7: 2663.431 ops/ms
> # Warmup Iteration 8: 2675.847 ops/ms
> # Warmup Iteration 9: 2755.735 ops/ms
> # Warmup Iteration 10: 2666.353 ops/ms
> Iteration 1: 2577.419 ops/ms
> Iteration 2: 2804.161 ops/ms
> Iteration 3: 1179.059 ops/ms
> Iteration 4: 1167.719 ops/ms
> Iteration 5: 1170.686 ops/ms
> Iteration 6: 1246.053 ops/ms
> Iteration 7: 1173.593 ops/ms
> Iteration 8: 1196.317 ops/ms
> Iteration 9: 1127.199 ops/ms
> Iteration 10: 1142.684 ops/ms
>
> Ralph
>
>
>> On Feb 7, 2017, at 10:55 AM, Apache <ra...@dslextreme.com> wrote:
>>
>> True, but I would still also like to see what difference it makes using the FileChannel instead of the OutputStream.
>>
>> Ralph
>>
>>> On Feb 7, 2017, at 9:45 AM, Remko Popma <re...@gmail.com> wrote:
>>>
>>> That is all doable but it may be a good idea to test if that is really where the bottleneck is.
>>> We could try whether we get better numbers if we remove the current synchronization (ignoring any scrambled output, just for testing purposes).
>>>
>>>
>>>> On Wed, Feb 8, 2017 at 1:40 AM, Apache <ra...@dslextreme.com> wrote:
>>>> In looking at FileManager and OutputStreamManager it does have a ByteBuffer but it requires synchronization. I am thinking it might make more sense to have a ThreadLocal ByteBuffer and then pass that to FileChannel.write() so that no synchronization is required.
>>>>
>>>> Ralph
>>>>
>>>>> On Feb 7, 2017, at 9:36 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>
>>>>> Can't we use the ByteBuffers introduced in the GC-free epic? I was under the impression that the byte arrays being passed to appenders were created from a ByteBuffer at this point, though I haven't really taken a close look at this.
>>>>>
>>>>> On 7 February 2017 at 10:05, Apache <ra...@dslextreme.com> wrote:
>>>>>> A FileChannel is guaranteed to be thread safe. You can obtain a FileChannel from a FileOutputStream, so that would seem to imply that FileOutputStream might be thread-safe, but you can’t really know that without looking at the source. The problem is that FileChannel.write() takes a ByteBuffer whereas FileOutputStream.write() accepts a byte array. To be thread safe it would have to safely copy the byte array into the byte buffer to pass to the FileChannel. But FileOutputStream doesn’t use the FileChannel at all in Java 7. It calls a native method that doesn’t specify whether it is thread-safe or not, so simply removing the synchronization isn’t guaranteed to work properly.
>>>>>>
>>>>>> OTOH, RandomAccessFile doesn’t say that it is thread-safe either and we are not synchronizing writes to it.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>> On Feb 7, 2017, at 8:37 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>
>>>>>>> I looked at 1.2-SNAPSHOT and 1.1.10 and saw nothing special other than a lack of a synchronized keyword on the equivalent append method. Perhaps he figured out a simpler way to emulate locking?
>>>>>>>
>>>>>>> I've been working with async/non-blocking streaming APIs for long enough now that I can't even remember the last time I had to write an actual lock.
>>>>>>>
>>>>>>>> On 6 February 2017 at 22:02, Apache <ra...@dslextreme.com> wrote:
>>>>>>>> Logback 1.2-SNAPSHOT
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>>> On Feb 6, 2017, at 8:29 PM, Remko Popma <re...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>> Sorry what 1.2 do you mean?
>>>>>>>>>
>>>>>>>>> Sent from my iPhone
>>>>>>>>>
>>>>>>>>>> On Feb 7, 2017, at 11:06, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>
>>>>>>>>>> In 1.2? That may work for a FileOutputStream but it isn’t guaranteed to work for others.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>>>>>>>>>>>
>>>>>>>>>>> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>>> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
>>>>>>>>>>>>
>>>>>>>>>>>>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>>>>>>>>>>
>>>>>>>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison across the logging frameworks.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>> Someone posted numbers on the Logback users’ list that match mine. They show that Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better, and 1.2-SNAPSHOT is on par with or slightly better than Log4j 2.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf to compare your results to Ceki’s. You should also capture the cpubenchmark speed of your processor and the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Gary
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>>>>>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a few optimizations we can make.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Ralph
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Matt Sicker <bo...@gmail.com>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Matt Sicker <bo...@gmail.com>
>>>>
>>>
>>
>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
I used a ThreadLocal byte buffer and wrote to the file channel, and if anything it performed slightly worse. This is probably because I had to write after every event rather than when the buffer was full; otherwise the ordering of events in the output would get messed up.
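The experiment described above can be sketched along these lines (hypothetical names, not the actual patch): each thread encodes into its own buffer, which is handed to FileChannel.write() after every event. FileChannel is documented as safe for concurrent use, so no explicit lock is needed, but flushing per event is exactly what gives up the batching a shared buffer provides.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: per-thread ByteBuffer, unsynchronized writes to a shared FileChannel.
public class ThreadLocalBufferWriter implements AutoCloseable {
    private static final ThreadLocal<ByteBuffer> BUFFER =
            ThreadLocal.withInitial(() -> ByteBuffer.allocate(8 * 1024));

    private final FileChannel channel;

    public ThreadLocalBufferWriter(Path path) throws IOException {
        this.channel = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.APPEND);
    }

    public void write(String event) throws IOException {
        ByteBuffer buf = BUFFER.get();
        buf.clear();
        buf.put((event + "\n").getBytes(StandardCharsets.UTF_8));
        buf.flip();
        // Flushed after every event so records from different threads
        // never interleave mid-line; this forfeits buffering.
        while (buf.hasRemaining()) {
            channel.write(buf);
        }
    }

    @Override
    public void close() throws IOException {
        channel.close();
    }
}
```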
I decided to throw the MemoryMappedFileAppender into the mix and I got some very strange behavior. Using Logback 1.1.10 the overall results with 4 threads were:
Benchmark Mode Samples Score Error Units
o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 10 98.886 ± 10.855 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 10 826.640 ± 14.219 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 10 1861.518 ± 139.885 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2MMF thrpt 10 1478.489 ± 970.526 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 10 2023.783 ± 41.284 ops/ms
o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 10 997.495 ± 42.801 ops/ms
What is strange is the actual output from the run for the MemoryMappedFileAppender. You will notice that it starts off like a bat out of hell but then bogs down terribly. I’d love to know why.
# VM invoker: /Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home/jre/bin/java
# VM options: <none>
# Warmup: 10 iterations, 1 s each
# Measurement: 10 iterations, 1 s each
# Timeout: 10 min per iteration
# Threads: 4 threads, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: org.apache.logging.log4j.perf.jmh.FileAppenderBenchmark.log4j2MMF
# Run progress: 50.00% complete, ETA 00:01:14
# Fork: 1 of 1
# Warmup Iteration 1: 2546.240 ops/ms
# Warmup Iteration 2: 3071.504 ops/ms
# Warmup Iteration 3: 2980.503 ops/ms
# Warmup Iteration 4: 2709.490 ops/ms
# Warmup Iteration 5: 2661.919 ops/ms
# Warmup Iteration 6: 2610.875 ops/ms
# Warmup Iteration 7: 2663.431 ops/ms
# Warmup Iteration 8: 2675.847 ops/ms
# Warmup Iteration 9: 2755.735 ops/ms
# Warmup Iteration 10: 2666.353 ops/ms
Iteration 1: 2577.419 ops/ms
Iteration 2: 2804.161 ops/ms
Iteration 3: 1179.059 ops/ms
Iteration 4: 1167.719 ops/ms
Iteration 5: 1170.686 ops/ms
Iteration 6: 1246.053 ops/ms
Iteration 7: 1173.593 ops/ms
Iteration 8: 1196.317 ops/ms
Iteration 9: 1127.199 ops/ms
Iteration 10: 1142.684 ops/ms
Ralph
> On Feb 7, 2017, at 10:55 AM, Apache <ra...@dslextreme.com> wrote:
>
> True, but I would still also like to see what difference it makes using the FileChannel instead of the OutputStream.
>
> Ralph
>
>> On Feb 7, 2017, at 9:45 AM, Remko Popma <remko.popma@gmail.com <ma...@gmail.com>> wrote:
>>
>> That is all doable but it may be a good idea to test if that is really where the bottleneck is.
>> We could try whether we get better numbers if we remove the current synchronization (ignoring any scrambled output, just for testing purposes).
>>
>>
>> On Wed, Feb 8, 2017 at 1:40 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>> In looking at FileManager and OutputStreamManager it does have a ByteBuffer but it requires synchronization. I am thinking it might make more sense to have a ThreadLocal ByteBuffer and then pass that to FileChannel.write() so that no synchronization is required.
>>
>> Ralph
>>
>>> On Feb 7, 2017, at 9:36 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>
>>> Can't we use the ByteBuffers introduced in the GC-free epic? I was under the impression that the byte arrays being passed to appenders were created from a ByteBuffer at this point, though I haven't really taken a close look at this.
>>>
>>> On 7 February 2017 at 10:05, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>> A FileChannel is guaranteed to be thread safe. You can obtain a FileChannel from a FileOutputStream, so that would seem to imply that FileOutputStream might be thread-safe, but you can’t really know that without looking at the source. The problem is that FileChannel.write() takes a ByteBuffer whereas FileOutputStream.write() accepts a byte array. To be thread safe it would have to safely copy the byte array into the byte buffer to pass to the FileChannel. But FileOutputStream doesn’t use the FileChannel at all in Java 7. It calls a native method that doesn’t specify whether it is thread-safe or not, so simply removing the synchronization isn’t guaranteed to work properly.
>>>
>>> OTOH, RandomAccessFile doesn’t say that it is thread-safe either and we are not synchronizing writes to it.
>>>
>>> Ralph
>>>
>>>> On Feb 7, 2017, at 8:37 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>
>>>> I looked at 1.2-SNAPSHOT and 1.1.10 and saw nothing special other than a lack of a synchronized keyword on the equivalent append method. Perhaps he figured out a simpler way to emulate locking?
>>>>
>>>> I've been working with async/non-blocking streaming APIs for long enough now that I can't even remember the last time I had to write an actual lock.
>>>>
>>>> On 6 February 2017 at 22:02, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>> Logback 1.2-SNAPSHOT
>>>>
>>>> Ralph
>>>>
>>>>> On Feb 6, 2017, at 8:29 PM, Remko Popma <remko.popma@gmail.com <ma...@gmail.com>> wrote:
>>>>>
>>>>> Sorry what 1.2 do you mean?
>>>>>
>>>>> Sent from my iPhone
>>>>>
>>>>> On Feb 7, 2017, at 11:06, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>
>>>>>> In 1.2? That may work for a FileOutputStream but it isn’t guaranteed to work for others.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>
>>>>>>> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>>>>>>>
>>>>>>> On 6 February 2017 at 18:18, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
>>>>>>>
>>>>>>> On 6 February 2017 at 17:50, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>
>>>>>>>
>>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing <https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing>.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>
>>>>>>>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison across the logging frameworks.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>
>>>>>>>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>>>>>>>
>>>>>>>>> On 6 February 2017 at 07:54, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>> Someone posted numbers on the Logback users’ list that match mine. They show that Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better, and 1.2-SNAPSHOT is on par with or slightly better than Log4j 2.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>
>>>>>>>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>>>>>>>>
>>>>>>>>>> On 5 February 2017 at 16:06, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf <https://github.com/ceki/logback-perf> to compare your results to Ceki’s. You should also capture the cpubenchmark speed of your processor and the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>>>>>>>
>>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>>
>>>>>>>>>>> Gary
>>>>>>>>>>>
>>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>>
>>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>>>>>
>>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html <https://logback.qos.ch/news.html> and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>. I suspect we have a few optimizations we can make.
>>>>>>>>>>>>
>>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>
>>>
>>>
>>>
>>> --
>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>
>>
>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
True, but I would still also like to see what difference it makes using the FileChannel instead of the OutputStream.
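For reference, the two write paths under comparison look roughly like this (a hypothetical illustration with made-up method names; real numbers should come from the JMH benchmarks): the OutputStream path serializes callers with a monitor, while the FileChannel path relies on the channel's own thread-safety but must wrap the payload in a ByteBuffer on each call.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class WritePaths {
    // OutputStream path: external synchronization around the (buffered) stream.
    static void streamWrite(OutputStream out, byte[] payload) throws IOException {
        synchronized (out) {
            out.write(payload);
        }
    }

    // FileChannel path: no external lock, since FileChannel is specified as
    // thread-safe; the cost moves to wrapping the byte array in a ByteBuffer.
    static void channelWrite(FileChannel channel, byte[] payload) throws IOException {
        ByteBuffer buf = ByteBuffer.wrap(payload);
        while (buf.hasRemaining()) {
            channel.write(buf);
        }
    }
}
```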
Ralph
> On Feb 7, 2017, at 9:45 AM, Remko Popma <re...@gmail.com> wrote:
>
> That is all doable but it may be a good idea to test if that is really where the bottleneck is.
> We could try whether we get better numbers if we remove the current synchronization (ignoring any scrambled output, just for testing purposes).
>
>
> On Wed, Feb 8, 2017 at 1:40 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
> In looking at FileManager and OutputStreamManager it does have a ByteBuffer but it requires synchronization. I am thinking it might make more sense to have a ThreadLocal ByteBuffer and then pass that to FileChannel.write() so that no synchronization is required.
>
> Ralph
>
>> On Feb 7, 2017, at 9:36 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>
>> Can't we use the ByteBuffers introduced in the GC-free epic? I was under the impression that the byte arrays being passed to appenders were created from a ByteBuffer at this point, though I haven't really taken a close look at this.
>>
>> On 7 February 2017 at 10:05, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>> A FileChannel is guaranteed to be thread safe. You can obtain a FileChannel from a FileOutputStream, so that would seem to imply that FileOutputStream might be thread-safe, but you can’t really know that without looking at the source. The problem is that FileChannel.write() takes a ByteBuffer whereas FileOutputStream.write() accepts a byte array. To be thread safe it would have to safely copy the byte array into the byte buffer to pass to the FileChannel. But FileOutputStream doesn’t use the FileChannel at all in Java 7. It calls a native method that doesn’t specify whether it is thread-safe or not, so simply removing the synchronization isn’t guaranteed to work properly.
>>
>> OTOH, RandomAccessFile doesn’t say that it is thread-safe either and we are not synchronizing writes to it.
>>
>> Ralph
>>
>>> On Feb 7, 2017, at 8:37 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>
>>> I looked at 1.2-SNAPSHOT and 1.1.10 and saw nothing special other than a lack of a synchronized keyword on the equivalent append method. Perhaps he figured out a simpler way to emulate locking?
>>>
>>> I've been working with async/non-blocking streaming APIs for long enough now that I can't even remember the last time I had to write an actual lock.
>>>
>>> On 6 February 2017 at 22:02, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>> Logback 1.2-SNAPSHOT
>>>
>>> Ralph
>>>
>>>> On Feb 6, 2017, at 8:29 PM, Remko Popma <remko.popma@gmail.com <ma...@gmail.com>> wrote:
>>>>
>>>> Sorry what 1.2 do you mean?
>>>>
>>>> Sent from my iPhone
>>>>
>>>> On Feb 7, 2017, at 11:06, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>
>>>>> In 1.2? That may work for a FileOutputStream but it isn’t guaranteed to work for others.
>>>>>
>>>>> Ralph
>>>>>
>>>>>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>
>>>>>> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
>>>>>>
>>>>>> On 6 February 2017 at 18:18, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>> Is this something we can improve performance on by implementing a file appender based on FileChannel or AsynchronousFileChannel instead of OutputStream?
>>>>>>
>>>>>> On 6 February 2017 at 17:50, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>> Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>
>>>>>>
>>>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing <https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing>.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>>> On Feb 6, 2017, at 9:33 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>
>>>>>>> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison across the logging frameworks.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>
>>>>>>>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>>>>>>>
>>>>>>>> On 6 February 2017 at 07:54, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>> Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>>>>
>>>>>>>> Ralph
>>>>>>>>
>>>>>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>
>>>>>>>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>>>>>>>
>>>>>>>>> On 5 February 2017 at 16:06, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>> You should run the code at https://github.com/ceki/logback-perf <https://github.com/ceki/logback-perf> to compare your results to Ceki’s. You also should capture the cpubenchmark speed of your processor and get the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>>>>>>>
>>>>>>>>> Ralph
>>>>>>>>>
>>>>>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>>>>>>>>>>
>>>>>>>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>>>>>>>
>>>>>>>>>> Let me know what args/command line...
>>>>>>>>>>
>>>>>>>>>> Gary
>>>>>>>>>>
>>>>>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>>>>
>>>>>>>>>> Benchmark Mode Samples Score Error Units
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>>>>>>>
>>>>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>>>>>>>
>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html <https://logback.qos.ch/news.html> and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0 <https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0>. I suspect we have a few optimizations we can make.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>>>>
>>>
>>>
>>>
>>>
>>> --
>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>
>>
>>
>>
>> --
>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>
>
Re: Logback performance improvements
Posted by Mikael Ståldal <mi...@magine.com>.
We should try to override getBuffer()/drain()/flushBuffer() from
OutputStreamManager in FileManager and RandomAccessFileManager (just like
we do in MemoryMappedFileManager).
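The override idea above can be illustrated with a self-contained analogue. The class and method names below (BufferedManager, ChannelFileManager) mirror the hooks named in the message but are illustrative only, not the real Log4j 2 FileManager/OutputStreamManager classes:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

// Base manager exposes getBuffer()/drain()/flushBuffer() hooks; a
// file-oriented subclass overrides drain() to bypass the byte[] copy.
class BufferedManager {
    protected final ByteBuffer buffer = ByteBuffer.allocate(8 * 1024);
    protected final OutputStream out;

    BufferedManager(OutputStream out) { this.out = out; }

    ByteBuffer getBuffer() { return buffer; }

    // Default drain: copy through an intermediate byte[] (the slow path).
    void drain(ByteBuffer buf) throws IOException {
        buf.flip();
        byte[] copy = new byte[buf.remaining()];
        buf.get(copy);
        out.write(copy);
        buf.clear();
    }

    void flushBuffer() throws IOException {
        if (buffer.position() > 0) { drain(buffer); }
        out.flush();
    }
}

class ChannelFileManager extends BufferedManager {
    ChannelFileManager(OutputStream out) { super(out); }

    @Override
    void drain(ByteBuffer buf) throws IOException {
        // Overridden drain: writes the buffer's backing array directly; a
        // real implementation would hand the ByteBuffer to a FileChannel.
        buf.flip();
        out.write(buf.array(), 0, buf.limit());
        buf.clear();
    }
}

public class DrainDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        BufferedManager mgr = new ChannelFileManager(sink);
        mgr.getBuffer().put("hello".getBytes());
        mgr.flushBuffer();
        System.out.println(sink.toString());
    }
}
```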
On Tue, Feb 7, 2017 at 5:45 PM, Remko Popma <re...@gmail.com> wrote:
> That is all doable but it may be a good idea to test if that is really
> where the bottleneck is.
> We could try whether we get better numbers if we remove the current
> synchronization (ignoring any scrambled output, just for testing purposes).
>
>
> On Wed, Feb 8, 2017 at 1:40 AM, Apache <ra...@dslextreme.com> wrote:
>
>> In looking at FileManager and OutputStreamManager it does have a
>> ByteBuffer but it requires synchronization. I am thinking it might make
>> more sense to have a ThreadLocal ByteBuffer and then pass that to
>> FileChannel.write() so that no synchronization is required.
>>
>> Ralph
>>
>> On Feb 7, 2017, at 9:36 AM, Matt Sicker <bo...@gmail.com> wrote:
>>
>> Can't we use the ByteBuffers introduced in the GC-free epic? I was under
>> the impression that byte arrays being passed to appenders was created from
>> a ByteBuffer at this point, though I haven't really taken a close look at
>> this.
>>
>> On 7 February 2017 at 10:05, Apache <ra...@dslextreme.com> wrote:
>>
>>> A FileChannel is guaranteed to be thread safe. You can obtain a
>> FileChannel from a FileOutputStream, so that would seem to imply that
>>> FileOutputStream might be thread-safe, but you can’t really know that
>>> without looking at the source. The problem is that FileChannel.write()
>>> takes a ByteBuffer whereas FileOutputStream.write() accepts a byte array.
>>> To be thread safe it would have to safely copy the byte array into the byte
>>> buffer to pass to the FileChannel. But FileOutputStream doesn’t use the
>>> FileChannel at all in Java 7. It calls a native method that doesn’t specify
>>> whether it is thread-safe or not, so simply removing the synchronization
>>> isn’t guaranteed to work properly.
>>>
>>> OTOH, RandomAccessFile doesn’t say that it is thread-safe either and we
>>> are not synchronizing writes to it.
>>>
>>> Ralph
>>>
>>> On Feb 7, 2017, at 8:37 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>
>>> I looked at 1.2-SNAPSHOT and 1.1.10 and saw nothing special other than a
>>> lack of a synchronized keyword on the equivalent append method. Perhaps he
>>> figured out a simpler way to emulate locking?
>>>
>>> I've been working with async/non-blocking streaming APIs for long enough
>>> now that I can't even remember the last time I had to write an actual lock.
>>>
>>> On 6 February 2017 at 22:02, Apache <ra...@dslextreme.com> wrote:
>>>
>>>> Logback 1.2-SNAPSHOT
>>>>
>>>> Ralph
--
*Mikael Ståldal*
Senior software developer
*Magine TV*
mikael.staldal@magine.com
Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com
Re: Logback performance improvements
Posted by Remko Popma <re...@gmail.com>.
That is all doable but it may be a good idea to test if that is really
where the bottleneck is.
We could try whether we get better numbers if we remove the current
synchronization (ignoring any scrambled output, just for testing purposes).
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
In looking at FileManager and OutputStreamManager, I see that OutputStreamManager does have a ByteBuffer, but using it requires synchronization. I am thinking it might make more sense to have a ThreadLocal ByteBuffer and then pass that to FileChannel.write() so that no synchronization is required.
Ralph
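The ThreadLocal-ByteBuffer idea can be sketched with plain JDK APIs. Each thread formats into its own buffer and hands it to FileChannel.write(), which the JDK documents as thread-safe, so no explicit lock is needed. The class and method names are illustrative, not Log4j internals:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ThreadLocalBufferWriter {
    private static final int BUFFER_SIZE = 256 * 1024;
    private final FileChannel channel;
    // One buffer per thread: no shared mutable state on the format path.
    private final ThreadLocal<ByteBuffer> buffer =
            ThreadLocal.withInitial(() -> ByteBuffer.allocateDirect(BUFFER_SIZE));

    public ThreadLocalBufferWriter(Path path) throws IOException {
        channel = FileChannel.open(path, StandardOpenOption.CREATE,
                StandardOpenOption.WRITE, StandardOpenOption.APPEND);
    }

    public void write(String event) throws IOException {
        ByteBuffer buf = buffer.get();
        buf.clear();
        buf.put(event.getBytes(StandardCharsets.UTF_8));
        buf.flip();
        while (buf.hasRemaining()) {
            channel.write(buf);   // FileChannel serializes concurrent writes
        }
    }

    public void close() throws IOException { channel.close(); }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("tlbuf", ".log");
        ThreadLocalBufferWriter w = new ThreadLocalBufferWriter(tmp);
        w.write("hello\n");
        w.write("world\n");
        w.close();
        System.out.print(new String(Files.readAllBytes(tmp), StandardCharsets.UTF_8));
        Files.delete(tmp);
    }
}
```

Note the trade-off: each write() call is atomic at the channel level, but interleaving between events from different threads is still only as granular as each write.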
Re: Logback performance improvements
Posted by Matt Sicker <bo...@gmail.com>.
Can't we use the ByteBuffers introduced in the GC-free epic? I was under
the impression that the byte arrays being passed to appenders were created
from a ByteBuffer at this point, though I haven't really taken a close look
at this.
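One way the byte[]/ByteBuffer gap discussed in this thread can be bridged without a copy: ByteBuffer.wrap() creates a view over an existing array, so a byte[] produced by a layout could be handed to FileChannel.write() directly. This is a generic JDK sketch, not the actual Log4j 2 GC-free code path:

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class WrapDemo {
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("wrap", ".log");
        try (FileChannel ch = FileChannel.open(tmp,
                StandardOpenOption.WRITE, StandardOpenOption.TRUNCATE_EXISTING)) {
            byte[] event = "formatted event\n".getBytes(StandardCharsets.UTF_8);
            ByteBuffer view = ByteBuffer.wrap(event); // no copy; shares the array
            while (view.hasRemaining()) {
                ch.write(view);
            }
        }
        System.out.print(new String(Files.readAllBytes(tmp), StandardCharsets.UTF_8));
        Files.delete(tmp);
    }
}
```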
On 7 February 2017 at 10:05, Apache <ra...@dslextreme.com> wrote:
> A FileChannel is guaranteed to be thread safe. You can obtain a
> FileChannel from a FlieOutputStream, so that would seem to imply that
> FileOutputStream might be thread-safe, but you can’t really know that
> without looking at the source. The problem is that FileChannel.write()
> takes a ByteBuffer whereas FileOutputStream.write() accepts a byte array.
> To be thread safe it would have to safely copy the byte array into the byte
> buffer to pass to the FileChannel. But FileOutputStream doesn’t use the
> FileChannel at all in Java 7. It calls a native method that doesn’t specify
> whether it is thread-safe or not, so simply removing the synchronization
> isn’t guaranteed to work properly.
>
> OTOH, RandomAccessFile doesn’t say that it is thread-safe either and we
> are not synchronizing writes to it.
>
> Ralph
>
> On Feb 7, 2017, at 8:37 AM, Matt Sicker <bo...@gmail.com> wrote:
>
> I looked at 1.2-SNAPSHOT and 1.1.10 and saw nothing special other than a
> lack of a synchronized keyword on the equivalent append method. Perhaps he
> figured out a simpler way to emulate locking?
>
> I've been working with async/non-blocking streaming APIs for long enough
> now that I can't even remember the last time I had to write an actual lock.
>
> On 6 February 2017 at 22:02, Apache <ra...@dslextreme.com> wrote:
>
>> Logback 1.2-SNAPSHOT
>>
>> Ralph
>>
>> On Feb 6, 2017, at 8:29 PM, Remko Popma <re...@gmail.com> wrote:
>>
>> Sorry what 1.2 do you mean?
>>
>> Sent from my iPhone
>>
>> On Feb 7, 2017, at 11:06, Apache <ra...@dslextreme.com> wrote:
>>
>> In 1.2? That may work for a FileOutputStream but it isn’t guaranteed to
>> work for others.
>>
>> Ralph
>>
>> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>>
>> I'm not sure if I'm looking in the right place, but a major difference
>> now between Logback's appenders and Log4j's is that Logback isn't
>> synchronized on the append method.
>>
>> On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
>>
>>> Is this something we can improve performance on by implementing a file
>>> appender based on FileChannel or AsynchronousFileChannel instead of
>>> OutputStream?
>>>
>>> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com> wrote:
>>>
>>>> Ceki has updated his numbers to include those reported on the mailing
>>>> list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0
>>>> RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>>>
>>>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my
>>>> two MacBooks are at https://docs.google.com/spread
>>>> sheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing
>>>> .
>>>>
>>>> Ralph
>>>>
>>>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com> wrote:
>>>>
>>>> Yes, that is still the standard approach most people use and is the
>>>> only way to provide a head-to-head comparison against the logging
>>>> frameworks.
>>>>
>>>> Ralph
>>>>
>>>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>>>>
>>>> This is all in a synchronous appender, right? Either way, that's rather
>>>> interesting.
>>>>
>>>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com> wrote:
>>>>
>>>>> Someone posted numbers on the Logback user’s list that match mine. It
>>>>> shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and
>>>>> 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>>>
>>>>> Ralph
>>>>>
>>>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>>>
>>>>> I think we need some comparisons on the log4j side: file appender with
>>>>> 256k buffer size, random access file appender with 256k buffer size (which
>>>>> appears to be the default), and memory mapped file appender. It'd be cool
>>>>> to see how these compose with async logging enabled in both log4j and
>>>>> logback.
>>>>>
>>>>> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com>
>>>>> wrote:
>>>>>
>>>>>> You should run the code at https://github.com/ceki/logback-perf to
>>>>>> compare your results to Ceki’s. You also should capture the cpubenchmark
>>>>>> speed of your processor and get the speed of your hard drive. I used
>>>>>> Blackmagic speed test on my Mac. I am capturing my results in a Google
>>>>>> spreadsheet. I will post the link once I have it.
>>>>>>
>>>>>> Ralph
>>>>>>
>>>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>> If you want, I can run tests on Windows once the build works on
>>>>>> Windows again.
>>>>>>
>>>>>> Let me know what args/command line...
>>>>>>
>>>>>> Gary
>>>>>>
>>>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com> wrote:
>>>>>>
>>>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive
>>>>>>> category. With Logback 1.1.10 and -t 4 I now get
>>>>>>>
>>>>>>> Benchmark Mode Samples
>>>>>>> Score Error Units
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20
>>>>>>> 98187.673 ± 4935.712 ops/s
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20
>>>>>>> 842374.496 ± 6762.712 ops/s
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20
>>>>>>> 1853062.583 ± 67032.225 ops/s
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20
>>>>>>> 2036011.226 ± 53208.281 ops/s
>>>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20
>>>>>>> 999667.438 ± 12074.003 ops/s
>>>>>>>
>>>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything
>>>>>>> directly on bare metal any more.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> Ceki finally fixed some of the performance problems in the
>>>>>>> FileAppender. See https://logback.qos.ch/news.html and
>>>>>>> https://docs.google.com/spreadsheets/d/1cpb5D7qny
>>>>>>> ye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a
>>>>>>> few optimizations we can make.
>>>>>>>
>>>>>>> Ralph
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Matt Sicker <bo...@gmail.com>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Matt Sicker <bo...@gmail.com>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Matt Sicker <bo...@gmail.com>
>>>
>>
>>
>>
>> --
>> Matt Sicker <bo...@gmail.com>
>>
>>
>>
>>
>
>
> --
> Matt Sicker <bo...@gmail.com>
>
>
>
--
Matt Sicker <bo...@gmail.com>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
A FileChannel is guaranteed to be thread-safe. You can obtain a FileChannel from a FileOutputStream, which would seem to imply that FileOutputStream might be thread-safe as well, but you can’t really know that without looking at the source. The problem is that FileChannel.write() takes a ByteBuffer whereas FileOutputStream.write() accepts a byte array, so to be thread-safe the stream would have to safely copy the byte array into a ByteBuffer before passing it to the FileChannel. But in Java 7 FileOutputStream doesn’t use a FileChannel at all; it calls a native method that doesn’t specify whether it is thread-safe, so simply removing the synchronization isn’t guaranteed to work properly.
On the other hand, RandomAccessFile doesn’t claim to be thread-safe either, and we are not synchronizing writes to it.
Ralph
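[Editor's note: the copy-into-a-ByteBuffer approach described above can be sketched as follows. This is a minimal hypothetical illustration, not Log4j or Logback appender code; FileChannel is documented as safe for concurrent use, which is what the sketch relies on.]

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChannelAppender {
    private final FileChannel channel;

    public ChannelAppender(Path file) throws IOException {
        // Unlike FileOutputStream, FileChannel is documented as safe
        // for use by multiple concurrent threads.
        this.channel = FileChannel.open(file,
                StandardOpenOption.CREATE,
                StandardOpenOption.WRITE,
                StandardOpenOption.APPEND);
    }

    public void append(byte[] event) throws IOException {
        // Copy the caller's byte array into a private ByteBuffer so the
        // data handed to the channel cannot change mid-write.
        ByteBuffer buf = ByteBuffer.wrap(event.clone());
        while (buf.hasRemaining()) { // write() may not drain the buffer in one call
            channel.write(buf);
        }
    }

    public void close() throws IOException {
        channel.close();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("channel-appender", ".log");
        ChannelAppender app = new ChannelAppender(tmp);
        app.append("hello\n".getBytes());
        app.append("world\n".getBytes());
        app.close();
        System.out.println(new String(Files.readAllBytes(tmp)).equals("hello\nworld\n"));
        Files.deleteIfExists(tmp);
    }
}
```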
> On Feb 7, 2017, at 8:37 AM, Matt Sicker <bo...@gmail.com> wrote:
>
> I looked at 1.2-SNAPSHOT and 1.1.10 and saw nothing special other than a lack of a synchronized keyword on the equivalent append method. Perhaps he figured out a simpler way to emulate locking?
>
> I've been working with async/non-blocking streaming APIs for long enough now that I can't even remember the last time I had to write an actual lock.
>
Re: Logback performance improvements
Posted by Matt Sicker <bo...@gmail.com>.
I looked at 1.2-SNAPSHOT and 1.1.10 and saw nothing special other than a
lack of a synchronized keyword on the equivalent append method. Perhaps he
figured out a simpler way to emulate locking?
I've been working with async/non-blocking streaming APIs for long enough
now that I can't even remember the last time I had to write an actual lock.
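[Editor's note: the locking difference under discussion boils down to something like the hypothetical sketch below (not either framework's actual code). Guarding the whole encode-and-write sequence with one lock keeps bytes from separate events from interleaving; dropping the synchronized block shifts that burden onto the underlying stream.]

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class GuardedAppender {
    private final OutputStream out;
    private final Object lock = new Object();

    public GuardedAppender(OutputStream out) {
        this.out = out;
    }

    // The whole format-and-write sequence runs under one lock, so two
    // threads can never interleave bytes within a single event.
    public void append(String event) throws IOException {
        byte[] bytes = (event + "\n").getBytes();
        synchronized (lock) {
            out.write(bytes);
        }
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        GuardedAppender app = new GuardedAppender(sink);
        Thread[] writers = new Thread[4];
        for (int i = 0; i < writers.length; i++) {
            writers[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    try {
                        app.append("event");
                    } catch (IOException e) {
                        throw new RuntimeException(e);
                    }
                }
            });
            writers[i].start();
        }
        for (Thread t : writers) {
            t.join();
        }
        // Every event line must arrive whole: 4 threads x 1000 events.
        long lines = sink.toString().chars().filter(c -> c == '\n').count();
        System.out.println(lines == 4000);
    }
}
```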
On 6 February 2017 at 22:02, Apache <ra...@dslextreme.com> wrote:
> Logback 1.2-SNAPSHOT
>
> Ralph
--
Matt Sicker <bo...@gmail.com>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
Logback 1.2-SNAPSHOT
Ralph
> On Feb 6, 2017, at 8:29 PM, Remko Popma <re...@gmail.com> wrote:
>
> Sorry what 1.2 do you mean?
>
> Sent from my iPhone
Re: Logback performance improvements
Posted by Remko Popma <re...@gmail.com>.
Sorry what 1.2 do you mean?
Sent from my iPhone
> On Feb 7, 2017, at 11:06, Apache <ra...@dslextreme.com> wrote:
>
> In 1.2? That may work for a FileOutputStream but it isn’t guaranteed to work for others.
>
> Ralph
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
In 1.2? That may work for a FileOutputStream but it isn’t guaranteed to work for others.
Ralph
> On Feb 6, 2017, at 5:23 PM, Matt Sicker <bo...@gmail.com> wrote:
>
> I'm not sure if I'm looking in the right place, but a major difference now between Logback's appenders and Log4j's is that Logback isn't synchronized on the append method.
Re: Logback performance improvements
Posted by Matt Sicker <bo...@gmail.com>.
I'm not sure if I'm looking in the right place, but a major difference now
between Logback's appenders and Log4j's is that Logback isn't synchronized
on the append method.
On 6 February 2017 at 18:18, Matt Sicker <bo...@gmail.com> wrote:
> Is this something we can improve performance on by implementing a file
> appender based on FileChannel or AsynchronousFileChannel instead of
> OutputStream?
>
> On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com> wrote:
>
>> Ceki has updated his numbers to include those reported on the mailing
>> list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0
>> RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>>
>> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my
>> two MacBooks are at https://docs.google.com/spread
>> sheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
>>
>> Ralph
>>
>> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com> wrote:
>>
>> Yes, that is still the standard approach most people use and is the only
>> way to provide a head-to-head comparison against the logging frameworks.
>>
>> Ralph
>>
>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>>
>> This is all in a synchronous appender, right? Either way, that's rather
>> interesting.
>>
>> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com> wrote:
>>
>>> Someone posted numbers on the Logback user’s list that match mine. It
>>> shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and
>>> 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>>
>>> Ralph
>>>
>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com> wrote:
>>>
>>> I think we need some comparisons on the log4j side: file appender with
>>> 256k buffer size, random access file appender with 256k buffer size (which
>>> appears to be the default), and memory mapped file appender. It'd be cool
>>> to see how these compose with async logging enabled in both log4j and
>>> logback.
>>>
>>> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com> wrote:
>>>
>>>> You should run the code at https://github.com/ceki/logback-perf to
>>>> compare your results to Ceki’s. You also should capture the cpubenchmark
>>>> speed of your processor and get the speed of your hard drive. I used
>>>> Blackmagic speed test on my Mac. I am capturing my results in a Google
>>>> spreadsheet. I will post the like once I have it.
>>>>
>>>> Ralph
>>>>
>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com>
>>>> wrote:
>>>>
>>>> If you want, I can run tests on Windows once the build works on Windows
>>>> again.
>>>>
>>>> Let me know what args/command line...
>>>>
>>>> Gary
>>>>
>>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com> wrote:
>>>>
>>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive
>>>>> category. With Logback 1.1.10 and -t 4 I now get
>>>>>
>>>>> Benchmark                                       Mode  Samples        Score       Error  Units
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20    98187.673 ±  4935.712  ops/s
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20   842374.496 ±  6762.712  ops/s
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20  1853062.583 ± 67032.225  ops/s
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20  2036011.226 ± 53208.281  ops/s
>>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20   999667.438 ± 12074.003  ops/s
>>>>>
>>>>> I’ll have to try this on one of my VMs at work. We don’t run anything
>>>>> directly on bare metal any more.
>>>>>
>>>>> Ralph
>>>>>
>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com> wrote:
>>>>>
>>>>> Ceki finally fixed some of the performance problems in the
>>>>> FileAppender. See https://logback.qos.ch/news.html and
>>>>> https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a
>>>>> few optimizations we can make.
>>>>>
>>>>> Ralph
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> Matt Sicker <bo...@gmail.com>
>>>
>>>
>>>
>>
>>
>> --
>> Matt Sicker <bo...@gmail.com>
>>
>>
>>
>>
>
>
> --
> Matt Sicker <bo...@gmail.com>
>
--
Matt Sicker <bo...@gmail.com>
Re: Logback performance improvements
Posted by Matt Sicker <bo...@gmail.com>.
Is this something we can improve performance on by implementing a file
appender based on FileChannel or AsynchronousFileChannel instead of
OutputStream?
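Sketching that idea: below is a minimal, hypothetical write path (not Log4j or Logback API; the class name, buffering policy, and coarse synchronized locking are illustrative assumptions) showing how an appender could stage events in a direct ByteBuffer and hand full buffers to a FileChannel.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/**
 * Hypothetical sketch of a FileChannel-backed appender write path.
 * Events are staged in a direct ByteBuffer and drained to the channel
 * when the buffer fills, on flush(), and on close().
 */
public class ChannelAppenderSketch {
    private final FileChannel channel;
    private final ByteBuffer buffer;

    public ChannelAppenderSketch(Path path, int bufferSize) throws IOException {
        this.channel = FileChannel.open(path, StandardOpenOption.CREATE,
                StandardOpenOption.WRITE, StandardOpenOption.APPEND);
        this.buffer = ByteBuffer.allocateDirect(bufferSize);
    }

    /** Stage one formatted event; flush first if it does not fit. */
    public synchronized void append(String event) throws IOException {
        byte[] bytes = (event + "\n").getBytes(StandardCharsets.UTF_8);
        if (bytes.length > buffer.capacity()) {  // oversized event: bypass the buffer
            flush();
            channel.write(ByteBuffer.wrap(bytes));
            return;
        }
        if (bytes.length > buffer.remaining()) {
            flush();
        }
        buffer.put(bytes);
    }

    /** Drain the staging buffer to the underlying channel. */
    public synchronized void flush() throws IOException {
        buffer.flip();
        while (buffer.hasRemaining()) {
            channel.write(buffer);
        }
        buffer.clear();
    }

    public synchronized void close() throws IOException {
        flush();
        channel.close();
    }

    public static void main(String[] args) throws IOException {
        Path path = Files.createTempFile("appender", ".log");
        ChannelAppenderSketch app = new ChannelAppenderSketch(path, 256 * 1024);
        for (int i = 0; i < 1000; i++) {
            app.append("event " + i);
        }
        app.close();
        System.out.println(Files.readAllLines(path).size());  // prints 1000
    }
}
```

An AsynchronousFileChannel variant would additionally have to preserve the ordering of completed writes, which is part of why a synchronous FileChannel plus a large buffer is the simpler first experiment.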
On 6 February 2017 at 17:50, Apache <ra...@dslextreme.com> wrote:
> Ceki has updated his numbers to include those reported on the mailing
> list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
>
> I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my
> two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
>
> Ralph
>
> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com> wrote:
>
> Yes, that is still the standard approach most people use and is the only
> way to provide a head-to-head comparison against the logging frameworks.
>
> Ralph
>
> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>
> This is all in a synchronous appender, right? Either way, that's rather
> interesting.
>
> On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com> wrote:
>
>> Someone posted numbers on the Logback user’s list that match mine. It
>> shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and
>> 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>
>> Ralph
>>
>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com> wrote:
>>
>> I think we need some comparisons on the log4j side: file appender with
>> 256k buffer size, random access file appender with 256k buffer size (which
>> appears to be the default), and memory mapped file appender. It'd be cool
>> to see how these compose with async logging enabled in both log4j and
>> logback.
>>
>> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com> wrote:
>>
>>> You should run the code at https://github.com/ceki/logback-perf to
>>> compare your results to Ceki’s. You also should capture the cpubenchmark
>>> speed of your processor and get the speed of your hard drive. I used
>>> Blackmagic speed test on my Mac. I am capturing my results in a Google
>>> spreadsheet. I will post the link once I have it.
>>>
>>> Ralph
>>>
>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com> wrote:
>>>
>>> If you want, I can run tests on Windows once the build works on Windows
>>> again.
>>>
>>> Let me know what args/command line...
>>>
>>> Gary
>>>
>>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com> wrote:
>>>
>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive
>>>> category. With Logback 1.1.10 and -t 4 I now get
>>>>
>>>> Benchmark                                       Mode  Samples        Score       Error  Units
>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20    98187.673 ±  4935.712  ops/s
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20   842374.496 ±  6762.712  ops/s
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20  1853062.583 ± 67032.225  ops/s
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20  2036011.226 ± 53208.281  ops/s
>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20   999667.438 ± 12074.003  ops/s
>>>>
>>>> I’ll have to try this on one of my VMs at work. We don’t run anything
>>>> directly on bare metal any more.
>>>>
>>>> Ralph
>>>>
>>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com> wrote:
>>>>
>>>> Ceki finally fixed some of the performance problems in the
>>>> FileAppender. See https://logback.qos.ch/news.html and
>>>> https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a few
>>>> optimizations we can make.
>>>>
>>>> Ralph
>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> Matt Sicker <bo...@gmail.com>
>>
>>
>>
>
>
> --
> Matt Sicker <bo...@gmail.com>
>
>
>
>
--
Matt Sicker <bo...@gmail.com>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
Ceki has updated his numbers to include those reported on the mailing list. https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0
I haven’t run the tests with Logback 1.2-SNAPSHOT but my numbers for my two MacBooks are at https://docs.google.com/spreadsheets/d/1L67IhmUVvyLBWtC6iq0TMj-j0vrbKsUKWuZV0Nlqisk/edit?usp=sharing.
Ralph
> On Feb 6, 2017, at 9:33 AM, Apache <ra...@dslextreme.com> wrote:
>
> Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison against the logging frameworks.
>
> Ralph
>
>> On Feb 6, 2017, at 8:02 AM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>
>> This is all in a synchronous appender, right? Either way, that's rather interesting.
>>
>> On 6 February 2017 at 07:54, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>> Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>>
>> Ralph
>>
>>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>>
>>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>>
>>> On 5 February 2017 at 16:06, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>> You should run the code at https://github.com/ceki/logback-perf to compare your results to Ceki’s. You also should capture the cpubenchmark speed of your processor and get the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>>
>>> Ralph
>>>
>>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>>>>
>>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>>
>>>> Let me know what args/command line...
>>>>
>>>> Gary
>>>>
>>>> On Feb 5, 2017 11:58 AM, "Apache" <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>>
>>>> Benchmark Mode Samples Score Error Units
>>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>>
>>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>>
>>>> Ralph
>>>>
>>>>> On Feb 5, 2017, at 9:40 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>>
>>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a few optimizations we can make.
>>>>>
>>>>> Ralph
>>>>
>>>
>>>
>>>
>>>
>>> --
>>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>>
>>
>>
>>
>> --
>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
Yes, that is still the standard approach most people use and is the only way to provide a head-to-head comparison against the logging frameworks.
Ralph
> On Feb 6, 2017, at 8:02 AM, Matt Sicker <bo...@gmail.com> wrote:
>
> This is all in a synchronous appender, right? Either way, that's rather interesting.
>
> On 6 February 2017 at 07:54, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
> Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>
> Ralph
>
>> On Feb 5, 2017, at 3:25 PM, Matt Sicker <boards@gmail.com <ma...@gmail.com>> wrote:
>>
>> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>>
>> On 5 February 2017 at 16:06, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>> You should run the code at https://github.com/ceki/logback-perf to compare your results to Ceki’s. You also should capture the cpubenchmark speed of your processor and get the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>>
>> Ralph
>>
>>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>>>
>>> If you want, I can run tests on Windows once the build works on Windows again.
>>>
>>> Let me know what args/command line...
>>>
>>> Gary
>>>
>>> On Feb 5, 2017 11:58 AM, "Apache" <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>>
>>> Benchmark Mode Samples Score Error Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>>
>>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>>
>>> Ralph
>>>
>>>> On Feb 5, 2017, at 9:40 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>>
>>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a few optimizations we can make.
>>>>
>>>> Ralph
>>>
>>
>>
>>
>>
>> --
>> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
>
>
>
>
> --
> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
Re: Logback performance improvements
Posted by Matt Sicker <bo...@gmail.com>.
This is all in a synchronous appender, right? Either way, that's rather
interesting.
On 6 February 2017 at 07:54, Apache <ra...@dslextreme.com> wrote:
> Someone posted numbers on the Logback user’s list that match mine. It
> shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and
> 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
>
> Ralph
>
> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com> wrote:
>
> I think we need some comparisons on the log4j side: file appender with
> 256k buffer size, random access file appender with 256k buffer size (which
> appears to be the default), and memory mapped file appender. It'd be cool
> to see how these compose with async logging enabled in both log4j and
> logback.
>
> On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com> wrote:
>
>> You should run the code at https://github.com/ceki/logback-perf to
>> compare your results to Ceki’s. You also should capture the cpubenchmark
>> speed of your processor and get the speed of your hard drive. I used
>> Blackmagic speed test on my Mac. I am capturing my results in a Google
>> spreadsheet. I will post the link once I have it.
>>
>> Ralph
>>
>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com> wrote:
>>
>> If you want, I can run tests on Windows once the build works on Windows
>> again.
>>
>> Let me know what args/command line...
>>
>> Gary
>>
>> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com> wrote:
>>
>>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive
>>> category. With Logback 1.1.10 and -t 4 I now get
>>>
>>> Benchmark                                       Mode  Samples        Score       Error  Units
>>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20    98187.673 ±  4935.712  ops/s
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20   842374.496 ±  6762.712  ops/s
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20  1853062.583 ± 67032.225  ops/s
>>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20  2036011.226 ± 53208.281  ops/s
>>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20   999667.438 ± 12074.003  ops/s
>>>
>>> I’ll have to try this on one of my VMs at work. We don’t run anything
>>> directly on bare metal any more.
>>>
>>> Ralph
>>>
>>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com> wrote:
>>>
>>> Ceki finally fixed some of the performance problems in the FileAppender.
>>> See https://logback.qos.ch/news.html and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a few optimizations we can make.
>>>
>>> Ralph
>>>
>>>
>>>
>>
>
>
> --
> Matt Sicker <bo...@gmail.com>
>
>
>
--
Matt Sicker <bo...@gmail.com>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
Someone posted numbers on the Logback user’s list that match mine. It shows Logback 1.1.9 was pretty terrible, 1.1.10 is somewhat better and 1.2-SNAPSHOT is on par or slightly better than Log4j 2.
Ralph
> On Feb 5, 2017, at 3:25 PM, Matt Sicker <bo...@gmail.com> wrote:
>
> I think we need some comparisons on the log4j side: file appender with 256k buffer size, random access file appender with 256k buffer size (which appears to be the default), and memory mapped file appender. It'd be cool to see how these compose with async logging enabled in both log4j and logback.
>
> On 5 February 2017 at 16:06, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
> You should run the code at https://github.com/ceki/logback-perf to compare your results to Ceki’s. You also should capture the cpubenchmark speed of your processor and get the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
>
> Ralph
>
>> On Feb 5, 2017, at 1:48 PM, Gary Gregory <garydgregory@gmail.com <ma...@gmail.com>> wrote:
>>
>> If you want, I can run tests on Windows once the build works on Windows again.
>>
>> Let me know what args/command line...
>>
>> Gary
>>
>> On Feb 5, 2017 11:58 AM, "Apache" <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>>
>> Benchmark Mode Samples Score Error Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>>
>> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>>
>> Ralph
>>
>>> On Feb 5, 2017, at 9:40 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>>
>>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a few optimizations we can make.
>>>
>>> Ralph
>>
>
>
>
>
> --
> Matt Sicker <boards@gmail.com <ma...@gmail.com>>
Re: Logback performance improvements
Posted by Matt Sicker <bo...@gmail.com>.
I think we need some comparisons on the log4j side: file appender with 256k
buffer size, random access file appender with 256k buffer size (which
appears to be the default), and memory mapped file appender. It'd be cool
to see how these compose with async logging enabled in both log4j and
logback.
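For concreteness, the three Log4j 2 variants above might be configured along these lines (a sketch; file paths are placeholder values, sizes are in bytes, and the 256 KB values mirror the buffer sizes discussed here):

```xml
<Configuration status="warn">
  <Appenders>
    <!-- OutputStream-based appender with an explicit 256 KB buffer -->
    <File name="File" fileName="logs/app.log" bufferedIO="true" bufferSize="262144">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
    </File>
    <!-- RandomAccessFile-based appender; 256 KB is its default buffer size -->
    <RandomAccessFile name="RAF" fileName="logs/app-raf.log" bufferSize="262144">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
    </RandomAccessFile>
    <!-- Memory-mapped appender; regionLength sets the mapped region size (32 MB here) -->
    <MemoryMappedFile name="MMF" fileName="logs/app-mmf.log" regionLength="33554432">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
    </MemoryMappedFile>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="RAF"/>
    </Root>
  </Loggers>
</Configuration>
```

Async logging in either framework (Log4j 2 async loggers, Logback's AsyncAppender) could then be layered on top without changing these appender definitions.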
On 5 February 2017 at 16:06, Apache <ra...@dslextreme.com> wrote:
> You should run the code at https://github.com/ceki/logback-perf to
> compare your results to Ceki’s. You also should capture the cpubenchmark
> speed of your processor and get the speed of your hard drive. I used
> Blackmagic speed test on my Mac. I am capturing my results in a Google
> spreadsheet. I will post the link once I have it.
>
> Ralph
>
> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com> wrote:
>
> If you want, I can run tests on Windows once the build works on Windows
> again.
>
> Let me know what args/command line...
>
> Gary
>
> On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com> wrote:
>
>> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category.
>> With Logback 1.1.10 and -t 4 I now get
>>
>> Benchmark                                       Mode  Samples        Score       Error  Units
>> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20    98187.673 ±  4935.712  ops/s
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20   842374.496 ±  6762.712  ops/s
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20  1853062.583 ± 67032.225  ops/s
>> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20  2036011.226 ± 53208.281  ops/s
>> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20   999667.438 ± 12074.003  ops/s
>>
>> I’ll have to try this on one of my VMs at work. We don’t run anything
>> directly on bare metal any more.
>>
>> Ralph
>>
>> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com> wrote:
>>
>> Ceki finally fixed some of the performance problems in the FileAppender.
>> See https://logback.qos.ch/news.html and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a few optimizations we can make.
>>
>> Ralph
>>
>>
>>
>
--
Matt Sicker <bo...@gmail.com>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
You should run the code at https://github.com/ceki/logback-perf to compare your results to Ceki’s. You also should capture the cpubenchmark speed of your processor and get the speed of your hard drive. I used Blackmagic speed test on my Mac. I am capturing my results in a Google spreadsheet. I will post the link once I have it.
Ralph
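For anyone joining the thread, a typical invocation looks like the following (assuming the project's standard Maven build; the JMH flags here mirror the ones used elsewhere in this thread — 1 fork, 10 warmup and 10 measurement iterations, 4 threads, throughput in ops/ms — adjust -t to your core count):

```shell
git clone https://github.com/ceki/logback-perf
cd logback-perf
mvn package
java -jar target/benchmarks.jar ".*FileAppenderBenchmark.*" -f 1 -wi 10 -i 10 -t 4 -tu ms
```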
> On Feb 5, 2017, at 1:48 PM, Gary Gregory <ga...@gmail.com> wrote:
>
> If you want, I can run tests on Windows once the build works on Windows again.
>
> Let me know what args/command line...
>
> Gary
>
> On Feb 5, 2017 11:58 AM, "Apache" <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
>
> Benchmark Mode Samples Score Error Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile thrpt 20 98187.673 ± 4935.712 ops/s
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File thrpt 20 842374.496 ± 6762.712 ops/s
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File thrpt 20 1853062.583 ± 67032.225 ops/s
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF thrpt 20 2036011.226 ± 53208.281 ops/s
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile thrpt 20 999667.438 ± 12074.003 ops/s
>
> I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
>
> Ralph
>
>> On Feb 5, 2017, at 9:40 AM, Apache <ralph.goers@dslextreme.com <ma...@dslextreme.com>> wrote:
>>
>> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a few optimizations we can make.
>>
>> Ralph
>
Re: Logback performance improvements
Posted by Gary Gregory <ga...@gmail.com>.
If you want, I can run tests on Windows once the build works on Windows
again.
Let me know what args/command line...
Gary
On Feb 5, 2017 11:58 AM, "Apache" <ra...@dslextreme.com> wrote:
> I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category.
> With Logback 1.1.10 and -t 4 I now get
>
> Benchmark                                       Mode  Samples        Score       Error  Units
> o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20    98187.673 ±  4935.712  ops/s
> o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20   842374.496 ±  6762.712  ops/s
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20  1853062.583 ± 67032.225  ops/s
> o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20  2036011.226 ± 53208.281  ops/s
> o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20   999667.438 ± 12074.003  ops/s
>
> I’ll have to try this on one of my VMs at work. We don’t run anything
> directly on bare metal any more.
>
> Ralph
>
> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com> wrote:
>
> Ceki finally fixed some of the performance problems in the FileAppender.
> See https://logback.qos.ch/news.html and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a few optimizations we can
> make.
>
> Ralph
>
>
>
Re: Logback performance improvements
Posted by Apache <ra...@dslextreme.com>.
I guess my MacBook Pro must fit in the Slow CPU/Fast Hard drive category. With Logback 1.1.10 and -t 4 I now get
Benchmark                                       Mode  Samples        Score       Error  Units
o.a.l.l.p.j.FileAppenderBenchmark.julFile      thrpt       20    98187.673 ±  4935.712  ops/s
o.a.l.l.p.j.FileAppenderBenchmark.log4j1File   thrpt       20   842374.496 ±  6762.712  ops/s
o.a.l.l.p.j.FileAppenderBenchmark.log4j2File   thrpt       20  1853062.583 ± 67032.225  ops/s
o.a.l.l.p.j.FileAppenderBenchmark.log4j2RAF    thrpt       20  2036011.226 ± 53208.281  ops/s
o.a.l.l.p.j.FileAppenderBenchmark.logbackFile  thrpt       20   999667.438 ± 12074.003  ops/s
I’ll have to try this on one of my VMs at work. We don’t run anything directly on bare metal any more.
Ralph
> On Feb 5, 2017, at 9:40 AM, Apache <ra...@dslextreme.com> wrote:
>
> Ceki finally fixed some of the performance problems in the FileAppender. See https://logback.qos.ch/news.html and https://docs.google.com/spreadsheets/d/1cpb5D7qnyye4W0RTlHUnXedYK98catNZytYIu5D91m0/edit#gid=0. I suspect we have a few optimizations we can make.
>
> Ralph