Posted to dev@hbase.apache.org by Andrew Purtell <ap...@apache.org> on 2018/07/31 01:43:49 UTC

Small cluster YCSB comparison results (1.2.6.1, 1.3.2.1, 1.4.6, 1.5.0-SNAPSHOT)

Instance OS: Linux version 4.14.55-62.37.amzn1.x86_64
Instance type: master: c3.8xlarge, regionservers: c3.8xlarge x 5, client: c3.8xlarge
Regionserver JVM: OpenJDK Runtime Environment (build 1.8.0_172-shenandoah-b11),
64-Bit Server VM (build 25.172-b11, mixed mode)
Regionserver JVM args: -Xms48g -Xmx48g -XX:+UseShenandoahGC -XX:+AlwaysPreTouch
-XX:+UseNUMA -XX:-UseBiasedLocking -XX:+ParallelRefProcEnabled
HDFS version: Hadoop 2.7.6
YCSB client args: -threads 32 -target 50000
Init: Load 100 M rows and snapshot
Run: Delete table, clone and redeploy from snapshot, run 10 M operations
(except workload E, run 1 M operations)
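The -target 50000 flag caps the offered load per client, which makes the expected run length easy to predict. A rough sketch of that arithmetic (an illustration, not part of the original test harness), assuming exactly 10 M operations per run:

```python
# With "-target", YCSB rate-limits the client, so a run of N operations
# should take about N / target seconds when the cluster can keep up.
def expected_runtime_ms(operations: int, target_ops_per_sec: int) -> float:
    return operations / target_ops_per_sec * 1000

# 10 M ops capped at 50,000 ops/sec -> 200,000 ms, which is why the
# workload B/C/D runtimes all cluster just above 200,000 ms.
print(expected_runtime_ms(10_000_000, 50_000))  # 200000.0
```

Runs that finish well off this floor (workloads A and E) were throughput-bound rather than rate-limited.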

YCSB Workload A

                                    1.2.6.1  1.3.2.1    1.4.6  1.5.0-SNAPSHOT
[OVERALL], RunTime(ms)               269305   273797   268317          264509
[OVERALL], Throughput(ops/sec)        37133    36523    37269           37806
[READ], AverageLatency(us)              526      541      523             504
[READ], MinLatency(us)                  267      271      273             271
[READ], MaxLatency(us)                76351   398591    82239          100415
[READ], 95thPercentileLatency(us)       654      674      652             625
[READ], 99thPercentileLatency(us)       741      777      742             709
[UPDATE], AverageLatency(us)           1186     1200     1182            1178
[UPDATE], MinLatency(us)                665      717      716             679
[UPDATE], MaxLatency(us)             106943   362495   111167          119615
[UPDATE], 95thPercentileLatency(us)    1497     1505     1493            1492
[UPDATE], 99thPercentileLatency(us)    1734     1754     1740            1721
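As a cross-check of the summary lines above (assuming each run executed exactly 10 M operations), throughput is just operations divided by runtime. Note that Workload A ran well below the 50,000 ops/sec target, so it was bound by the cluster rather than the rate limiter:

```python
# Recompute Workload A throughput from the reported runtimes.
runtime_ms = {"1.2.6.1": 269305, "1.3.2.1": 273797,
              "1.4.6": 268317, "1.5.0-SNAPSHOT": 264509}
ops = 10_000_000
for version, ms in runtime_ms.items():
    # Matches the reported [OVERALL] Throughput(ops/sec) row.
    print(f"{version}: {ops / (ms / 1000):.0f} ops/sec")
```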
YCSB Workload B

                                    1.2.6.1  1.3.2.1    1.4.6  1.5.0-SNAPSHOT
[OVERALL], RunTime(ms)               200577   200575   200593          200579
[OVERALL], Throughput(ops/sec)        49856    49857    49852           49856
[READ], AverageLatency(us)              485      477      471             462
[READ], MinLatency(us)                  225      222      213             216
[READ], MaxLatency(us)                75391   156415   163455           94655
[READ], 95thPercentileLatency(us)       603      585      578             571
[READ], 99thPercentileLatency(us)       682      669      652             643
[UPDATE], AverageLatency(us)           1065     1075     1034            1045
[UPDATE], MinLatency(us)                745      760      742             742
[UPDATE], MaxLatency(us)             106687    58399   140159          130367
[UPDATE], 95thPercentileLatency(us)    1292     1307     1279            1302
[UPDATE], 99thPercentileLatency(us)    1451     1518     1426            1443
YCSB Workload C

                                    1.2.6.1  1.3.2.1    1.4.6  1.5.0-SNAPSHOT
[OVERALL], RunTime(ms)               200600   200562   200565          200572
[OVERALL], Throughput(ops/sec)        49850    49860    49859           49857
[READ], AverageLatency(us)              357      346      348             329
[READ], MinLatency(us)                  194      195      196             193
[READ], MaxLatency(us)                83135    87999   150143          116607
[READ], 95thPercentileLatency(us)       440      427      427             405
[READ], 99thPercentileLatency(us)       492      481      478             453
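Workload C is pure reads, so it isolates the read path. A small sketch (illustrative arithmetic only) of the average-latency deltas relative to 1.2.6.1:

```python
# Average read latency (us) from the Workload C table above.
avg_read_us = {"1.2.6.1": 357, "1.3.2.1": 346,
               "1.4.6": 348, "1.5.0-SNAPSHOT": 329}
baseline = avg_read_us["1.2.6.1"]
for version, us in avg_read_us.items():
    pct = (baseline - us) / baseline * 100
    print(f"{version}: {pct:+.1f}% vs 1.2.6.1")  # 1.5.0-SNAPSHOT: +7.8%
```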
YCSB Workload D

                                    1.2.6.1  1.3.2.1    1.4.6  1.5.0-SNAPSHOT
[OVERALL], RunTime(ms)               200591   200585   200587          200621
[OVERALL], Throughput(ops/sec)        49853    49854    49854           49845
[READ], AverageLatency(us)              491      479      485             475
[READ], MinLatency(us)                  212      223      228             221
[READ], MaxLatency(us)               131839   113151   190207          105343
[READ], 95thPercentileLatency(us)      1198     1202     1170            1176
[READ], 99thPercentileLatency(us)      1619     1677     1611            1630
[INSERT], AverageLatency(us)           1157     1152     1153            1134
[INSERT], MinLatency(us)                803      803      799             796
[INSERT], MaxLatency(us)             106367   111103    48927          122047
[INSERT], 95thPercentileLatency(us)    1385     1380     1409            1389
[INSERT], 99thPercentileLatency(us)    1600     1606     1621            1564
YCSB Workload E

                                    1.2.6.1  1.3.2.1    1.4.6  1.5.0-SNAPSHOT
[OVERALL], RunTime(ms)                94320    88992    92606           70698
[OVERALL], Throughput(ops/sec)        10602    11237    10798           14145
[SCAN], AverageLatency(us)             3031     2863     2962            2194
[SCAN], MinLatency(us)                  860      823      800             447
[SCAN], MaxLatency(us)               112447   332287  1025023         1029631
[SCAN], 95thPercentileLatency(us)      6263     5951     6307            5315
[SCAN], 99thPercentileLatency(us)     13215    12551    12375           13135
[INSERT], AverageLatency(us)           1982     1857     1986            2793
[INSERT], MinLatency(us)                927      917      892            1000
[INSERT], MaxLatency(us)             112447   116607   106879          125951
[INSERT], 95thPercentileLatency(us)    3081     2777     3129            4487
[INSERT], 99thPercentileLatency(us)    4479     4135     5083            6739
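Workload E ran 1 M operations (per the setup note), and it shows the scan/insert trade most clearly. A quick quantification (illustrative arithmetic only) of the 1.2.6.1 to 1.5.0-SNAPSHOT shift:

```python
# Workload E: 1 M ops, so throughput = 1e6 / runtime. The 1.5.0-SNAPSHOT
# run shows faster scans bought at the cost of slower inserts.
ops = 1_000_000
print(f"1.5.0-SNAPSHOT: {ops / (70698 / 1000):.0f} ops/sec")  # 14145

scan_delta = (3031 - 2194) / 3031 * 100    # avg scan latency, 1.2.6.1 -> 1.5.0
insert_delta = (2793 - 1982) / 1982 * 100  # avg insert latency, same pair
print(f"scans {scan_delta:.0f}% faster, inserts {insert_delta:.0f}% slower")
```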

--
Best regards,
Andrew

Re: Small cluster YCSB comparison results (1.2.6.1, 1.3.2.1, 1.4.6, 1.5.0-SNAPSHOT)

Posted by Andrew Purtell <ap...@apache.org>.
I have some old numbers from a prior experiment with a 1 TB heap. It might be
sufficient to say neither CMS nor G1 survived until the end of the test,
which was a simple LTT load ... of a billion-plus-row in-memory table on
heap in a single regionserver, but that is a detail. :-) I might have time
to retest on this same test cluster with G1.

We are not running Shenandoah in production yet. However, it seems ready for
pre-production and I am being aggressive about testing with it. Reminds me,
Red Hat just did a bulk backport into their 8u tree; I should rebuild the test
JVM. (The line between pioneer and crazy is thin, YMMV.)


On Tue, Jul 31, 2018 at 7:58 AM Mike Drob <md...@apache.org> wrote:

> Shenandoah GC is interesting. Do you have any comparisons to CMS or G1? Are
> y'all running Shenandoah in production already?


-- 
Best regards,
Andrew

Words like orphans lost among the crosstalk, meaning torn from truth's
decrepit hands
   - A23, Crosstalk

Re: Small cluster YCSB comparison results (1.2.6.1, 1.3.2.1, 1.4.6, 1.5.0-SNAPSHOT)

Posted by Mike Drob <md...@apache.org>.
Shenandoah GC is interesting. Do you have any comparisons to CMS or G1? Are
y'all running Shenandoah in production already?


Re: Small cluster YCSB comparison results (1.2.6.1, 1.3.2.1, 1.4.6, 1.5.0-SNAPSHOT)

Posted by Josh Elser <el...@apache.org>.
+1, great stuff! Thanks to you for doing this testing and sharing 
results with us all.


Re: Small cluster YCSB comparison results (1.2.6.1, 1.3.2.1, 1.4.6, 1.5.0-SNAPSHOT)

Posted by Stack <st...@duboce.net>.
Thanks Andy. Looks good.

Maybe next time add -p clientbuffering=true ?

Good on you,
S

Re: Small cluster YCSB comparison results (1.2.6.1, 1.3.2.1, 1.4.6, 1.5.0-SNAPSHOT)

Posted by Andrew Purtell <ap...@apache.org>.
A couple of notes and general observations.

Note all instances remained up for the entire duration of testing including
burn in (all tests ran on the same hardware), and HDFS volumes were built
on locally attached storage (hence C3 generation instances), so I
controlled as much as possible for system level variance.

Results are quite similar among the released 1.x versions and
1.5-SNAPSHOT. Note measurements are reported in microseconds.

I thought 1.5-SNAPSHOT might show performance regressions, but the surprise
is in the other direction: in most cases it performs better in these YCSB
scenarios than the other versions tested.

There are small general trends toward improvement, seen as reductions in
latency, with the exception of workloads B and F. Workloads B and F,
especially when run against 1.5-SNAPSHOT, may show reduced performance on
inserts/mutations in trade for improved performance on reads/scans. More
testing is needed.