Posted to notifications@james.apache.org by GitBox <gi...@apache.org> on 2022/03/24 10:49:45 UTC

[GitHub] [james-project] chibenwa commented on a change in pull request #937: JAMES-3734 Document database benchmark methodologies and base performances

chibenwa commented on a change in pull request #937:
URL: https://github.com/apache/james-project/pull/937#discussion_r834165151



##########
File path: server/apps/distributed-app/docs/modules/ROOT/pages/operate/db-benchmark.adoc
##########
@@ -0,0 +1,1261 @@
+= Distributed James Server -- Database benchmarks
+:navtitle: Database benchmarks
+
+This document provides basic performance of Distributed James' databases, benchmark methodologies as a basis for a James administrator
+can test and evaluate if his Distributed James is performing well.

Review comment:
       ```suggestion
   can test and evaluate if his Distributed James databases are performing well.
   ```

##########
File path: server/apps/distributed-app/docs/modules/ROOT/pages/operate/db-benchmark.adoc
##########
@@ -0,0 +1,1261 @@
+At Linagora, we are deploying Distributed James to provide email service to our customers. These databases are being used
+by our Distributed James:
+
+- Apache Cassandra 4 as main database
+- OpenDistro 1.13.1 as search engine
+- RabbitMQ 3.8.17 as message queue
+- OVH Swift S3 as an object storage
+
+With the above system, our email service has been operating stably with solid performance for many years.
+More concretely, our Distributed James pre-production environment can handle a load throughput of up to about 13000 requests

Review comment:
       We are at ~1000 JMAP request / sec

##########
File path: server/apps/distributed-app/docs/modules/ROOT/pages/operate/db-benchmark.adoc
##########
@@ -0,0 +1,1261 @@
+It includes:
+
+* James base performance at Linagora
+* Propose benchmark methodology and base performance for each database

Review comment:
       ```suggestion
   * Propose benchmark methodology and base performance for each database. This aims to help operators to quickly identify performance issues and compliance of their databases.
   ```

##########
File path: server/apps/distributed-app/docs/modules/ROOT/pages/operate/db-benchmark.adoc
##########
@@ -0,0 +1,1261 @@
+It includes:
+
+* James base performance at Linagora

Review comment:
       Ban `Linagora` mentions here.
   
   Speak of `sample deployment topology`.

##########
File path: server/apps/distributed-app/docs/modules/ROOT/pages/operate/db-benchmark.adoc
##########
@@ -0,0 +1,1261 @@
+With the above system, our email service has been operating stably with solid performance for many years.
+More concretely, our Distributed James pre-production environment can handle a load throughput of up to about 13000 requests
+per second, with a 99th percentile latency of 400ms.
+
+== Benchmark methodologies and base performances
+We ran some benchmarks against our databases in our pre-production environment. We share the benchmark methodologies
+and results here as a reference for evaluating your Distributed James' performance. Other evaluation methods are welcome,
+as long as your databases exhibit similar or better performance than ours; it is up to your business needs.
+If your databases show results that fall far below our baseline performance, there is a good chance that
+something is wrong with your system, and you should investigate thoroughly.
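One way to make "far from our baseline" operational is a small tolerance check. Below is a minimal sketch, assuming a simple lower-is-better metric such as latency; the metric names and the allowed factor are illustrative assumptions, not values provided by James:

```python
# Illustrative baseline check: flag measured metrics that are worse than
# an allowed factor of the baseline. Metric names and factor are examples.
def check_against_baseline(baseline, measured, allowed_factor=2.0):
    """Return metrics whose measured value exceeds baseline * allowed_factor.

    Both dicts map metric name -> value, where lower is better
    (e.g. latency in ms).
    """
    regressions = {}
    for metric, base in baseline.items():
        value = measured.get(metric)
        if value is not None and value > base * allowed_factor:
            regressions[metric] = (base, value)
    return regressions

baseline = {"p99_latency_ms": 400.0}   # from the sample deployment
measured = {"p99_latency_ms": 1100.0}  # your own benchmark run
print(check_against_baseline(baseline, measured))
# {'p99_latency_ms': (400.0, 1100.0)}
```

How tight `allowed_factor` should be is a business decision; the point is to compare runs systematically rather than by eye.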
+
+=== Benchmark Cassandra
+
+==== Benchmark methodology
+===== Benchmark tool
+
+We use the https://cassandra.apache.org/doc/latest/cassandra/tools/cassandra_stress.html[cassandra-stress tool] - Cassandra's official
+stress-testing and load-testing tool.
+
+The cassandra-stress tool is a Java-based stress testing utility for basic benchmarking and load testing of a Cassandra cluster.
+Data modeling choices can greatly affect application performance, and significant load testing over several trials is the best method for discovering issues with a particular data model. cassandra-stress is also effective for populating a cluster and stress testing CQL tables and queries. Use cassandra-stress to:
+
+- Quickly determine how a schema performs.
+- Understand how your database scales.
+- Optimize your data model and settings.
+- Determine production capacity.
+
+There are several operation types:
+
+- write-only, read-only, and mixed workloads of standard data
+- write-only and read-only workloads for counter columns
+- user configured workloads, running custom queries on custom schemas
+
+===== How to benchmark
+
+Here we are using a simple case to test and compare Cassandra performance between different setup environments.
+
+[source,yaml]
+----
+keyspace: stresscql
+
+keyspace_definition: |
+  CREATE KEYSPACE stresscql WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
+
+table: mixed_workload
+
+table_definition: |
+  CREATE TABLE mixed_workload (
+    key uuid PRIMARY KEY,
+    a blob,
+    b blob
+  )
+
+columnspec:
+  - name: a
+    size: uniform(1..10000)
+  - name: b
+    size: uniform(1..100000)
+
+insert:
+  partitions: fixed(1)
+
+queries:
+   read:
+      cql: select * from mixed_workload where key = ?
+      fields: samerow
+----
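As a quick sanity check on the workload this profile generates, the mean payload per row implied by the uniform size distributions above can be computed (assuming uniform(1..N) draws sizes uniformly over the integers 1..N):

```python
# Expected blob sizes implied by the columnspec above:
# uniform(1..N) has mean (1 + N) / 2.
def mean_uniform(lo, hi):
    return (lo + hi) / 2

mean_a = mean_uniform(1, 10_000)    # column a
mean_b = mean_uniform(1, 100_000)   # column b
mean_row = mean_a + mean_b          # ~55 KB of blob data per row
print(mean_a, mean_b, mean_row)
# 5000.5 50000.5 55001.0
```

So each inserted row carries roughly 55 KB of blob data, which is worth keeping in mind when sizing disks for the runs below.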
+
+Create the YAML file as above and copy it to a Cassandra node.
+
+Insert some sample data:
+
+[source,bash]
+----
+cassandra-stress user profile=mixed_workload.yml n=100000 "ops(insert=1)" cl=ONE -mode native cql3 user=<user> password=<password> -node <IP> -rate threads=8 -graph file=./graph_insert.xml title=Benchmark revision=insert_ONE
+----
+
+Read intensive scenario:
+
+[source,bash]
+----
+cassandra-stress user profile=mixed_workload.yml n=100000 "ops(insert=1,read=4)" cl=ONE -mode native cql3 user=<user> password=<password> -node <IP> -rate threads=8 -graph file=./graph_mixed.xml title=Benchmark revision=mixed_ONE
+----
+
+Where:
+
+- n=100000: the number of insert batches, not the number of individual insert operations.
+- rate threads=8: the number of concurrent threads. If not specified, it starts with 4 threads and increases until the server reaches a limit.
+- ops(insert=1,read=4): executes insert and read queries in a 1:4 ratio.
+- graph: exports the results as a graph in HTML format.
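The ratio arithmetic above can be sketched explicitly. This is a rough model of how ops(insert=1,read=4) splits a run of n batches; cassandra-stress's exact scheduling may differ:

```python
# Rough split of operations for ops(insert=1, read=4) over n batches.
def op_split(n, ratios):
    total = sum(ratios.values())
    return {op: n * weight // total for op, weight in ratios.items()}

print(op_split(100_000, {"insert": 1, "read": 4}))
# {'insert': 20000, 'read': 80000}
```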
+
+==== Benchmark result
+image::cassandra_stress_test_result_1.png[]
+
+image::cassandra_stress_test_result_2.png[]
+
+==== References
+https://www.datastax.com/blog/improved-cassandra-21-stress-tool-benchmark-any-schema-part-1[Datastax - Cassandra stress tool]
+
+https://www.instaclustr.com/deep-diving-cassandra-stress-part-3-using-yaml-profiles/[Deep Diving cassandra-stress – Part 3 (Using YAML Profiles)]
+
+=== Benchmark Elasticsearch
+
+==== Benchmark methodology
+
+===== Benchmark tool
+We use https://github.com/elastic/rally[EsRally] - an official Elasticsearch benchmarking tool. EsRally provides the following features:
+
+- Automatically create Elasticsearch clusters, stress tests them, and delete them.
+- Manage stress testing data and solutions by Elasticsearch version.
+- Present stress testing data in a comprehensive way, allowing you to compare and analyze the data of different stress tests and store the data on a particular Elasticsearch instance for secondary analysis.
+- Collect Java Virtual Machine (JVM) details, such as memory and garbage collection (GC) data, to locate performance problems.
+
+You can have a look at https://elasticsearch-benchmarks.elastic.co/ where Elastic officially uses EsRally to benchmark Elasticsearch and publishes the results in real time.
+
+===== How to benchmark
+Please follow the https://esrally.readthedocs.io/en/latest/quickstart.html?spm=a2c65.11461447.0.0.e26a498c3KJZNe[EsRally quickstart documentation]
+to set it up first.
+
+Let's see which tracks (simulation profiles) EsRally provides: `esrally list tracks`.
+For our James use case, we are interested in the `pmc` track: `Full-text benchmark with academic papers from PMC`.
+
+Run the below script to benchmark against your Elasticsearch cluster:
+
+[source,bash]
+----
+esrally race --pipeline=benchmark-only --track=[track-name] --target-host=[ip_node1:port_node1],[ip_node2:port_node2],[ip_node3:port_node3] --client-options="use_ssl:false,verify_certs:false,basic_auth_user:'[user]',basic_auth_password:'[password]'"
+----
+
+Where:
+
+* --pipeline=benchmark-only: benchmark against an already running cluster
+* track-name: the track you want to benchmark
+* ip:port: an Elasticsearch node's socket
+* --client-options: set your Elasticsearch authentication credentials
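For multi-node clusters, building the --target-host value by hand is error-prone; a small helper like the following can generate it (the helper itself is a hypothetical convenience, not part of EsRally):

```python
# Build the esrally --target-host value from a list of node addresses.
# Helper name and the default port are illustrative assumptions.
def target_host_option(nodes, port=9200):
    return "--target-host=" + ",".join(f"{host}:{port}" for host in nodes)

print(target_host_option(["10.0.0.1", "10.0.0.2", "10.0.0.3"]))
# --target-host=10.0.0.1:9200,10.0.0.2:9200,10.0.0.3:9200
```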
+
+==== Benchmark result

Review comment:
       ```suggestion
   ==== Sample Benchmark result
   ```

##########
File path: server/apps/distributed-app/docs/modules/ROOT/pages/operate/db-benchmark.adoc
##########
@@ -0,0 +1,1261 @@
+==== Benchmark result

Review comment:
       ```suggestion
   ==== Sample Benchmark result
   ```

##########
File path: server/apps/distributed-app/docs/modules/ROOT/pages/operate/db-benchmark.adoc
##########
@@ -0,0 +1,1261 @@
+==== Benchmark result
+===== PMC track
+
+[source]
+----
+------------------------------------------------------
+    _______             __   _____
+   / ____(_)___  ____ _/ /  / ___/_________  ________
+  / /_  / / __ \/ __ `/ /   \__ \/ ___/ __ \/ ___/ _ \
+ / __/ / / / / / /_/ / /   ___/ / /__/ /_/ / /  /  __/
+/_/   /_/_/ /_/\__,_/_/   /____/\___/\____/_/   \___/
+------------------------------------------------------
+
+|                                                         Metric |                          Task |       Value |    Unit |
+|---------------------------------------------------------------:|------------------------------:|------------:|--------:|
+|                     Cumulative indexing time of primary shards |                               |     563.427 |     min |
+|             Min cumulative indexing time across primary shards |                               |           0 |     min |
+|          Median cumulative indexing time across primary shards |                               |  0.00293333 |     min |
+|             Max cumulative indexing time across primary shards |                               |      112.04 |     min |
+|            Cumulative indexing throttle time of primary shards |                               |           0 |     min |
+|    Min cumulative indexing throttle time across primary shards |                               |           0 |     min |
+| Median cumulative indexing throttle time across primary shards |                               |           0 |     min |
+|    Max cumulative indexing throttle time across primary shards |                               |           0 |     min |
+|                        Cumulative merge time of primary shards |                               |     1134.99 |     min |
+|                       Cumulative merge count of primary shards |                               |      165181 |         |
+|                Min cumulative merge time across primary shards |                               |           0 |     min |
+|             Median cumulative merge time across primary shards |                               |  0.00188333 |     min |
+|                Max cumulative merge time across primary shards |                               |     248.347 |     min |
+|               Cumulative merge throttle time of primary shards |                               |     620.683 |     min |
+|       Min cumulative merge throttle time across primary shards |                               |           0 |     min |
+|    Median cumulative merge throttle time across primary shards |                               |           0 |     min |
+|       Max cumulative merge throttle time across primary shards |                               |     138.621 |     min |
+|                      Cumulative refresh time of primary shards |                               |      644.67 |     min |
+|                     Cumulative refresh count of primary shards |                               | 1.37405e+06 |         |
+|              Min cumulative refresh time across primary shards |                               |           0 |     min |
+|           Median cumulative refresh time across primary shards |                               |   0.0101667 |     min |
+|              Max cumulative refresh time across primary shards |                               |     147.427 |     min |
+|                        Cumulative flush time of primary shards |                               |     45.1533 |     min |
+|                       Cumulative flush count of primary shards |                               |        4084 |         |
+|                Min cumulative flush time across primary shards |                               |           0 |     min |
+|             Median cumulative flush time across primary shards |                               |      0.0005 |     min |
+|                Max cumulative flush time across primary shards |                               |     7.92482 |     min |
+|                                        Total Young Gen GC time |                               |       5.593 |       s |
+|                                       Total Young Gen GC count |                               |         320 |         |
+|                                          Total Old Gen GC time |                               |           0 |       s |
+|                                         Total Old Gen GC count |                               |           0 |         |
+|                                                     Store size |                               |     359.984 |      GB |
+|                                                  Translog size |                               | 1.33691e-05 |      GB |
+|                                         Heap used for segments |                               |     8.39256 |      MB |
+|                                       Heap used for doc values |                               |    0.444857 |      MB |
+|                                            Heap used for terms |                               |     6.57648 |      MB |
+|                                            Heap used for norms |                               |    0.882629 |      MB |
+|                                           Heap used for points |                               |           0 |      MB |
+|                                    Heap used for stored fields |                               |    0.488602 |      MB |
+|                                                  Segment count |                               |         964 |         |
+|                                                 Min Throughput |                  index-append |      734.63 |  docs/s |
+|                                                Mean Throughput |                  index-append |      763.16 |  docs/s |
+|                                              Median Throughput |                  index-append |       746.5 |  docs/s |
+|                                                 Max Throughput |                  index-append |      833.51 |  docs/s |
+|                                        50th percentile latency |                  index-append |     4738.57 |      ms |
+|                                        90th percentile latency |                  index-append |      8129.1 |      ms |
+|                                        99th percentile latency |                  index-append |     11734.5 |      ms |
+|                                       100th percentile latency |                  index-append |     14662.9 |      ms |
+|                                   50th percentile service time |                  index-append |     4738.57 |      ms |
+|                                   90th percentile service time |                  index-append |      8129.1 |      ms |
+|                                   99th percentile service time |                  index-append |     11734.5 |      ms |
+|                                  100th percentile service time |                  index-append |     14662.9 |      ms |
+|                                                     error rate |                  index-append |           0 |       % |
+|                                                 Min Throughput |                       default |       19.94 |   ops/s |
+|                                                Mean Throughput |                       default |       19.95 |   ops/s |
+|                                              Median Throughput |                       default |       19.95 |   ops/s |
+|                                                 Max Throughput |                       default |       19.96 |   ops/s |
+|                                        50th percentile latency |                       default |     23.1322 |      ms |
+|                                        90th percentile latency |                       default |     25.4129 |      ms |
+|                                        99th percentile latency |                       default |     29.1382 |      ms |
+|                                       100th percentile latency |                       default |     29.4762 |      ms |
+|                                   50th percentile service time |                       default |     21.4895 |      ms |
+|                                   90th percentile service time |                       default |      23.589 |      ms |
+|                                   99th percentile service time |                       default |     26.6134 |      ms |
+|                                  100th percentile service time |                       default |     27.9068 |      ms |
+|                                                     error rate |                       default |           0 |       % |
+|                                                 Min Throughput |                          term |       19.93 |   ops/s |
+|                                                Mean Throughput |                          term |       19.94 |   ops/s |
+|                                              Median Throughput |                          term |       19.94 |   ops/s |
+|                                                 Max Throughput |                          term |       19.95 |   ops/s |
+|                                        50th percentile latency |                          term |     31.0684 |      ms |
+|                                        90th percentile latency |                          term |     34.1419 |      ms |
+|                                        99th percentile latency |                          term |     74.7904 |      ms |
+|                                       100th percentile latency |                          term |     103.663 |      ms |
+|                                   50th percentile service time |                          term |     29.6775 |      ms |
+|                                   90th percentile service time |                          term |     32.4288 |      ms |
+|                                   99th percentile service time |                          term |      36.013 |      ms |
+|                                  100th percentile service time |                          term |     102.193 |      ms |
+|                                                     error rate |                          term |           0 |       % |
+|                                                 Min Throughput |                        phrase |       19.94 |   ops/s |
+|                                                Mean Throughput |                        phrase |       19.95 |   ops/s |
+|                                              Median Throughput |                        phrase |       19.95 |   ops/s |
+|                                                 Max Throughput |                        phrase |       19.95 |   ops/s |
+|                                        50th percentile latency |                        phrase |     23.0255 |      ms |
+|                                        90th percentile latency |                        phrase |     26.1607 |      ms |
+|                                        99th percentile latency |                        phrase |     31.2094 |      ms |
+|                                       100th percentile latency |                        phrase |     45.5012 |      ms |
+|                                   50th percentile service time |                        phrase |     21.5109 |      ms |
+|                                   90th percentile service time |                        phrase |     24.4144 |      ms |
+|                                   99th percentile service time |                        phrase |     26.1865 |      ms |
+|                                  100th percentile service time |                        phrase |     43.5122 |      ms |
+|                                                     error rate |                        phrase |           0 |       % |
+|                                                 Min Throughput | articles_monthly_agg_uncached |       19.95 |   ops/s |
+|                                                Mean Throughput | articles_monthly_agg_uncached |       19.96 |   ops/s |
+|                                              Median Throughput | articles_monthly_agg_uncached |       19.96 |   ops/s |
+|                                                 Max Throughput | articles_monthly_agg_uncached |       19.96 |   ops/s |
+|                                        50th percentile latency | articles_monthly_agg_uncached |     26.7918 |      ms |
+|                                        90th percentile latency | articles_monthly_agg_uncached |     34.1708 |      ms |
+|                                        99th percentile latency | articles_monthly_agg_uncached |     42.3661 |      ms |
+|                                       100th percentile latency | articles_monthly_agg_uncached |     43.0024 |      ms |
+|                                   50th percentile service time | articles_monthly_agg_uncached |     25.3893 |      ms |
+|                                   90th percentile service time | articles_monthly_agg_uncached |     32.3418 |      ms |
+|                                   99th percentile service time | articles_monthly_agg_uncached |     41.3612 |      ms |
+|                                  100th percentile service time | articles_monthly_agg_uncached |     42.0802 |      ms |
+|                                                     error rate | articles_monthly_agg_uncached |           0 |       % |
+|                                                 Min Throughput |   articles_monthly_agg_cached |       19.94 |   ops/s |
+|                                                Mean Throughput |   articles_monthly_agg_cached |       19.95 |   ops/s |
+|                                              Median Throughput |   articles_monthly_agg_cached |       19.95 |   ops/s |
+|                                                 Max Throughput |   articles_monthly_agg_cached |       19.96 |   ops/s |
+|                                        50th percentile latency |   articles_monthly_agg_cached |     9.63666 |      ms |
+|                                        90th percentile latency |   articles_monthly_agg_cached |      10.973 |      ms |
+|                                        99th percentile latency |   articles_monthly_agg_cached |     27.1236 |      ms |
+|                                       100th percentile latency |   articles_monthly_agg_cached |     28.7119 |      ms |
+|                                   50th percentile service time |   articles_monthly_agg_cached |     7.99763 |      ms |
+|                                   90th percentile service time |   articles_monthly_agg_cached |       8.979 |      ms |
+|                                   99th percentile service time |   articles_monthly_agg_cached |     25.7034 |      ms |
+|                                  100th percentile service time |   articles_monthly_agg_cached |     27.1026 |      ms |
+|                                                     error rate |   articles_monthly_agg_cached |           0 |       % |
+|                                                 Min Throughput |                        scroll |        5.85 | pages/s |
+|                                                Mean Throughput |                        scroll |        5.86 | pages/s |
+|                                              Median Throughput |                        scroll |        5.86 | pages/s |
+|                                                 Max Throughput |                        scroll |        5.87 | pages/s |
+|                                        50th percentile latency |                        scroll |      229970 |      ms |
+|                                        90th percentile latency |                        scroll |      319870 |      ms |
+|                                        99th percentile latency |                        scroll |      340138 |      ms |
+|                                       100th percentile latency |                        scroll |      342421 |      ms |
+|                                   50th percentile service time |                        scroll |     4269.07 |      ms |
+|                                   90th percentile service time |                        scroll |     4308.67 |      ms |
+|                                   99th percentile service time |                        scroll |     4445.16 |      ms |
+|                                  100th percentile service time |                        scroll |     4605.69 |      ms |
+|                                                     error rate |                        scroll |           0 |       % |
+
+
+----------------------------------
+[INFO] SUCCESS (took 1772 seconds)
+----------------------------------
+----
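Specific metrics can be pulled out of such an esrally summary programmatically. A minimal sketch in Python, assuming the pipe-separated table format shown above (the embedded rows are illustrative excerpts, not a full report):

```python
# Parse an esrally summary table (pipe-separated columns: Metric | Task | Value | Unit)
# and look up a single metric. Illustrative excerpt of the report above.
table = """
|                 Min Throughput |  term |   19.93 |   ops/s |
|        99th percentile latency |  term | 74.7904 |      ms |
"""

rows = []
for line in table.strip().splitlines():
    # Strip the outer pipes, then split the four cells.
    cells = [c.strip() for c in line.strip("|").split("|")]
    if len(cells) == 4:
        rows.append(cells)

# Extract the 99th percentile latency of the "term" task.
latency = next(float(v) for metric, task, v, unit in rows
               if metric == "99th percentile latency" and task == "term")
print(latency)  # 74.7904
```

This kind of extraction is convenient when comparing several race reports side by side.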
+
+===== PMC custom track
+We customized the PMC track by increasing the search throughput target to figure out our Elasticsearch cluster's limit.
+
+The result is that at 25-30 requests/s we have a 99th percentile latency of 1s.
+
+==== References
+https://www.alibabacloud.com/blog/esrally-official-stress-testing-tool-for-elasticsearch_597102[esrally: Official Stress Testing Tool for Elasticsearch]
+
+https://esrally.readthedocs.io/en/latest/adding_tracks.html[Create a custom EsRally track]
+
+https://discuss.elastic.co/t/why-the-percentile-latency-is-several-times-more-than-service-time/69630[Why the percentile latency is several times more than service time]
+
+=== Benchmark RabbitMQ
+
+==== Benchmark methodology
+
+===== Benchmark tool
+We use https://github.com/rabbitmq/rabbitmq-perf-test[rabbitmq-perf-test] tool.
+
+===== How to benchmark
+We use PerfTestMulti, which is more convenient:
+
+- The input scenario is provided in a single file
+- The output result is written to a single file, which can be visualized as a chart (graph WebUI)
+
+Run a command like below:
+
+[source,bash]
+----
+bin/runjava com.rabbitmq.perf.PerfTestMulti [scenario-file] [result-file]
+----
+
+To visualize the result, copy [result-file] to ```/html/examples/[result-file]```.
+Then start the web server to view the graph with the command:
+
+[source,bash]
+----
+bin/runjava com.rabbitmq.perf.WebServer
+----
+Then browse: http://localhost:8080/examples/sample.html
+
+==== Benchmark result
+- Scenario file:
+
+[source]
+----
+[{'name': 'consume', 'type': 'simple',
+'uri': 'amqp://james:eeN7Auquaeng@localhost:5677',
+'params':
+    [{'time-limit': 30, 'producer-count': 2, 'consumer-count': 4}]}]
+----
+
+- Result file:
+
+[source,json]
+----
+{
+  "consume": {
+    "send-bytes-rate": 0,
+    "recv-msg-rate": 4330.225080385852,
+    "avg-latency": 18975254,
+    "send-msg-rate": 455161.3183279743,
+    "recv-bytes-rate": 0,
+    "samples": [
+      {
+        "elapsed": 15086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 0,
+        "send-msg-rate": 0.06628662335940608,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 16086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 1579,
+        "max-latency": 928296,
+        "min-latency": 278765,
+        "avg-latency": 725508,
+        "send-msg-rate": 388994,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 17086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 5904,
+        "max-latency": 1975826,
+        "min-latency": 1009973,
+        "avg-latency": 1418672,
+        "send-msg-rate": 494985,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 18086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4412,
+        "max-latency": 2965961,
+        "min-latency": 2022136,
+        "avg-latency": 2511183,
+        "send-msg-rate": 531039,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 19086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4548,
+        "max-latency": 3959211,
+        "min-latency": 2997603,
+        "avg-latency": 3465769,
+        "send-msg-rate": 531832,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 20086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 3979,
+        "max-latency": 4948621,
+        "min-latency": 3989045,
+        "avg-latency": 4473077,
+        "send-msg-rate": 505607,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 21086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4744,
+        "max-latency": 5933657,
+        "min-latency": 4978287,
+        "avg-latency": 5424512,
+        "send-msg-rate": 490676,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 22086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 3885,
+        "max-latency": 6929973,
+        "min-latency": 5951582,
+        "avg-latency": 6433048,
+        "send-msg-rate": 533287,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 23086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 3595,
+        "max-latency": 7942142,
+        "min-latency": 6947270,
+        "avg-latency": 7449859,
+        "send-msg-rate": 522966,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 24086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4500,
+        "max-latency": 8952646,
+        "min-latency": 7940826,
+        "avg-latency": 8454105,
+        "send-msg-rate": 530737,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 25086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4373,
+        "max-latency": 9957532,
+        "min-latency": 8979541,
+        "avg-latency": 9455712,
+        "send-msg-rate": 523295,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 26086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 3923,
+        "max-latency": 10869605,
+        "min-latency": 9897496,
+        "avg-latency": 10375606,
+        "send-msg-rate": 509922,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 27086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4326,
+        "max-latency": 11951149,
+        "min-latency": 10900391,
+        "avg-latency": 11406887,
+        "send-msg-rate": 517988,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 28086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 3882,
+        "max-latency": 12887678,
+        "min-latency": 11897094,
+        "avg-latency": 12379830,
+        "send-msg-rate": 490748,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 29086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4049,
+        "max-latency": 13891391,
+        "min-latency": 12899145,
+        "avg-latency": 13414724,
+        "send-msg-rate": 493250,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 30086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4561,
+        "max-latency": 14894143,
+        "min-latency": 13879640,
+        "avg-latency": 14383738,
+        "send-msg-rate": 498551,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 31086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4566,
+        "max-latency": 15893550,
+        "min-latency": 14886977,
+        "avg-latency": 15394496,
+        "send-msg-rate": 520082,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 32086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4021,
+        "max-latency": 16889312,
+        "min-latency": 15855953,
+        "avg-latency": 16379683,
+        "send-msg-rate": 523804,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 33086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 3850,
+        "max-latency": 17887454,
+        "min-latency": 16849119,
+        "avg-latency": 17338426,
+        "send-msg-rate": 549140,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 34086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4431,
+        "max-latency": 18889244,
+        "min-latency": 17863745,
+        "avg-latency": 18388210,
+        "send-msg-rate": 543937,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 35086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4065,
+        "max-latency": 19871373,
+        "min-latency": 18877435,
+        "avg-latency": 19370418,
+        "send-msg-rate": 528349,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 36086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 3642,
+        "max-latency": 20883255,
+        "min-latency": 19874781,
+        "avg-latency": 20368221,
+        "send-msg-rate": 515856,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 37086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4234,
+        "max-latency": 21876908,
+        "min-latency": 20876800,
+        "avg-latency": 21335539,
+        "send-msg-rate": 529803,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 38086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 3567,
+        "max-latency": 22874280,
+        "min-latency": 21811748,
+        "avg-latency": 22332021,
+        "send-msg-rate": 538176,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 39086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 3481,
+        "max-latency": 23840582,
+        "min-latency": 22807333,
+        "avg-latency": 23366879,
+        "send-msg-rate": 543741,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 40086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4192,
+        "max-latency": 24858249,
+        "min-latency": 23811110,
+        "avg-latency": 24325777,
+        "send-msg-rate": 525104,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 41086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4620,
+        "max-latency": 25828213,
+        "min-latency": 24827856,
+        "avg-latency": 25345122,
+        "send-msg-rate": 534957,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 42086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4525,
+        "max-latency": 26847048,
+        "min-latency": 25822969,
+        "avg-latency": 26303329,
+        "send-msg-rate": 528350,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 43086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4741,
+        "max-latency": 27910610,
+        "min-latency": 26810209,
+        "avg-latency": 27297471,
+        "send-msg-rate": 552710,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 44086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4174,
+        "max-latency": 28827225,
+        "min-latency": 27785933,
+        "avg-latency": 28302852,
+        "send-msg-rate": 532727,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 45086,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4577,
+        "max-latency": 29891881,
+        "min-latency": 28788887,
+        "avg-latency": 29293319,
+        "send-msg-rate": 539277,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 46093,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4712.01588877855,
+        "max-latency": 30874699,
+        "min-latency": 29787357,
+        "avg-latency": 30329346,
+        "send-msg-rate": 640.5163853028798,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 47143,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4157.142857142857,
+        "max-latency": 31904966,
+        "min-latency": 30770442,
+        "avg-latency": 31380901,
+        "send-msg-rate": 0,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 48184,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 3768.4918347742555,
+        "max-latency": 32969370,
+        "min-latency": 31852685,
+        "avg-latency": 32385432,
+        "send-msg-rate": 0,
+        "recv-bytes-rate": 0
+      },
+      {
+        "elapsed": 49186,
+        "send-bytes-rate": 0,
+        "recv-msg-rate": 4416.167664670658,
+        "max-latency": 33953465,
+        "min-latency": 32854771,
+        "avg-latency": 33373113,
+        "send-msg-rate": 0,
+        "recv-bytes-rate": 0
+      }
+    ]
+  }
+}
+----
+
+- Key result points:
+
+|===
+|Metrics |Unit |Result
+
+|Publisher throughput (the sending rate)
+|messages / second
+|3111
+
+|Consumer throughput (the receiving rate)
+|messages / second
+|4404
+|===
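Key figures like those above can be derived from the PerfTestMulti result file programmatically. A minimal sketch in Python — the field names follow the result file format shown above, but the embedded sample values are illustrative only:

```python
import json
from statistics import mean

# Illustrative excerpt of a PerfTestMulti result file
# (same structure as the full result file above; values are made up).
result_json = """
{
  "consume": {
    "samples": [
      {"elapsed": 16086, "recv-msg-rate": 1579, "send-msg-rate": 388994},
      {"elapsed": 17086, "recv-msg-rate": 5904, "send-msg-rate": 494985},
      {"elapsed": 18086, "recv-msg-rate": 4412, "send-msg-rate": 531039}
    ]
  }
}
"""

samples = json.loads(result_json)["consume"]["samples"]
# Skip warm-up samples with a zero receive rate before averaging.
recv_rates = [s["recv-msg-rate"] for s in samples if s["recv-msg-rate"] > 0]
print(round(mean(recv_rates)))  # average consumer throughput (msg/s) -> 3965
```

Averaging over the steady-state samples rather than reading a single point gives a more robust throughput figure.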
+
+=== Benchmark S3 storage
+
+==== Benchmark methodology
+
+===== Benchmark tool
+We use https://github.com/dvassallo/s3-benchmark[s3-benchmark] tool.
+
+===== How to benchmark
+1. Make sure you set up appropriate S3 credentials with `awscli`.
+2. If you are using an S3-compatible storage from a cloud provider such as OVH or Scaleway, you would need to configure
+`awscli-plugin-endpoint`. E.g. https://docs.ovh.com/au/en/storage/getting_started_with_the_swift_S3_API/[Getting started with the OVH Swift S3 API]
+3. Install the `s3-benchmark` tool and run the command:
+
+[source,bash]
+----
+./s3-benchmark -endpoint=[endpoint] -region=[region] -bucket-name=[bucket-name] -payloads-min=[payload-min] -payloads-max=[payload-max] -threads-max=[threads-max]
+----
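For step 2, the `awscli-plugin-endpoint` configuration in `~/.aws/config` can look like the sketch below. The region and endpoint URL here are illustrative for OVH; adjust them to your own provider:

```ini
[plugins]
endpoint = awscli_plugin_endpoint

[profile default]
region = gra
s3 =
  endpoint_url = https://s3.gra.cloud.ovh.net
  signature_version = s3v4
```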
+
+==== Benchmark result

Review comment:
       ```suggestion
   ==== Sample Benchmark result
   ```

##########
File path: server/apps/distributed-app/docs/modules/ROOT/pages/operate/db-benchmark.adoc
##########
@@ -0,0 +1,1261 @@
+= Distributed James Server -- Database benchmarks
+:navtitle: Database benchmarks
+
+This document provides baseline performance figures for Distributed James' databases and benchmark methodologies, so that a James administrator
+can test and evaluate if his Distributed James databases are performing well.
+
+It includes:
+
+* James base performance at Linagora
+* Proposed benchmark methodologies and base performance for each database
+
+== James base performance at Linagora
+
+At Linagora, we are deploying Distributed James to provide email service to our customers. These databases are being used
+by our Distributed James:
+
+- Apache Cassandra 4 as main database
+- OpenDistro 1.13.1 as search engine
+- RabbitMQ 3.8.17 as message queue
+- OVH Swift S3 as an object storage
+
+With the above system, our email service has been operating stably with valuable performance for many years.
+In more detail, our Distributed James on the pre-production environment can handle a load of up to about 13000 requests
+per second with a 99th percentile latency of 400ms.
+
+== Benchmark methodologies and base performances
+We did some benchmarks of our databases on our pre-production environment. We are willing to share the benchmark methodologies
+and the results with you as a reference to evaluate your Distributed James' performance. Other evaluation methods are welcome,
+as long as your databases exhibit similar or even better performance than ours. It is up to your business needs.
+If your databases show results that fall far from our baseline performance, there's a good chance that
+there are problems with your system, and you need to check it out thoroughly.
+
+=== Benchmark Cassandra
+
+==== Benchmark methodology
+===== Benchmark tool
+
+We use https://cassandra.apache.org/doc/latest/cassandra/tools/cassandra_stress.html[cassandra-stress tool] - an official
+tool of Cassandra for stress loading tests.
+
+The cassandra-stress tool is a Java-based stress testing utility for basic benchmarking and load testing a Cassandra cluster.
+Data modeling choices can greatly affect application performance. Significant load testing over several trials is the best method for discovering issues with a particular data model.
+The cassandra-stress tool is an effective tool for populating a cluster and stress testing CQL tables and queries. Use cassandra-stress to:
+
+- Quickly determine how a schema performs.
+- Understand how your database scales.
+- Optimize your data model and settings.
+- Determine production capacity.
+
+There are several operation types:
+
+- write-only, read-only, and mixed workloads of standard data
+- write-only and read-only workloads for counter columns
+- user configured workloads, running custom queries on custom schemas
+
+===== How to benchmark
+
+Here we are using a simple case to test and compare Cassandra performance between different setup environments.
+
+[source,yaml]
+----
+keyspace: stresscql
+
+keyspace_definition: |
+  CREATE KEYSPACE stresscql WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
+
+table: mixed_workload
+
+table_definition: |
+  CREATE TABLE mixed_workload (
+    key uuid PRIMARY KEY,
+    a blob,
+    b blob
+  ) WITH COMPACT STORAGE
+
+columnspec:
+  - name: a
+    size: uniform(1..10000)
+  - name: b
+    size: uniform(1..100000)
+
+insert:
+  partitions: fixed(1)
+
+queries:
+   read:
+      cql: select * from mixed_workload where key = ?
+      fields: samerow
+----
+
+Create the YAML file as above and copy it to a Cassandra node.
+
+Insert some sample data:
+
+[source,bash]
+----
+cassandra-stress user profile=mixed_workload.yml n=100000 "ops(insert=1)" cl=ONE -mode native cql3 user=<user> password=<password> -node <IP> -rate threads=8 -graph file=./graph_insert.xml title=Benchmark revision=insert_ONE
+----
+
+Read intensive scenario:
+
+[source,bash]
+----
+cassandra-stress user profile=mixed_workload.yml n=100000 "ops(insert=1,read=4)" cl=ONE -mode native cql3 user=<user> password=<password> -node <IP> -rate threads=8 -graph file=./graph_mixed.xml title=Benchmark revision=mixed_ONE
+----
+
+Where:
+
+- n=100000: The number of insert batches, not the number of individual insert operations.
+- rate threads=8: The number of concurrent threads. If not specified, it will start with 4 threads and increase until the server reaches a limit.
+- ops(insert=1,read=4): This will execute insert and read queries in the ratio 1:4.
+- graph: Export results as a graph in HTML format.
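As a sanity check on the ratio semantics, the expected split of the n=100000 operations under ops(insert=1,read=4) can be sketched as below. Note that cassandra-stress draws operations probabilistically, so the actual counts will vary around these figures:

```python
# Expected operation split for cassandra-stress "ops(insert=1,read=4)":
# operations are drawn with weights 1 and 4, i.e. 1/5 inserts and 4/5 reads.
n = 100_000
weights = {"insert": 1, "read": 4}
total_weight = sum(weights.values())
expected = {op: n * w // total_weight for op, w in weights.items()}
print(expected)  # {'insert': 20000, 'read': 80000}
```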
+
+==== Benchmark result
+image::cassandra_stress_test_result_1.png[]
+
+image::cassandra_stress_test_result_2.png[]
+
+==== References
+https://www.datastax.com/blog/improved-cassandra-21-stress-tool-benchmark-any-schema-part-1[Datastax - Cassandra stress tool]
+
+https://www.instaclustr.com/deep-diving-cassandra-stress-part-3-using-yaml-profiles/[Deep Diving cassandra-stress – Part 3 (Using YAML Profiles)]
+
+=== Benchmark Elasticsearch
+
+==== Benchmark methodology
+
+===== Benchmark tool
+We use https://github.com/elastic/rally[EsRally] - an official Elasticsearch benchmarking tool. EsRally provides the following features:
+
+- Automatically create Elasticsearch clusters, stress tests them, and delete them.
+- Manage stress testing data and solutions by Elasticsearch version.
+- Present stress testing data in a comprehensive way, allowing you to compare and analyze the data of different stress tests and store the data on a particular Elasticsearch instance for secondary analysis.
+- Collect Java Virtual Machine (JVM) details, such as memory and garbage collection (GC) data, to locate performance problems.
+
+You can have a look at https://elasticsearch-benchmarks.elastic.co/ where Elastic officially uses esrally to test Elasticsearch's performance and publishes the results in real time.
+
+===== How to benchmark
+Please follow the https://esrally.readthedocs.io/en/latest/quickstart.html?spm=a2c65.11461447.0.0.e26a498c3KJZNe[Esrally quickstart documentation]
+to set it up first.
+
+Let's see which tracks (simulation profiles) EsRally provides: ```esrally list tracks```.
+For our James use case, we are interested in the ```pmc``` track: ```Full-text benchmark with academic papers from PMC```.
+
+Run the below script to benchmark against your Elasticsearch cluster:
+
+[source,bash]
+----
+esrally race --pipeline=benchmark-only --track=[track-name] --target-host=[ip_node1:port_node1],[ip_node2:port_node2],[ip_node3:port_node3] --client-options="use_ssl:false,verify_certs:false,basic_auth_user:'[user]',basic_auth_password:'[password]'"
+----
+
+Where:
+
+* --pipeline=benchmark-only: benchmark against a running cluster
+* track-name: the track you want to benchmark
+* ip:port: an Elasticsearch node's socket
+* --client-options: change to your Elasticsearch authentication credentials
+
+==== Benchmark result
+===== PMC track
+
+[source]
+----
+------------------------------------------------------
+    _______             __   _____
+   / ____(_)___  ____ _/ /  / ___/_________  ________
+  / /_  / / __ \/ __ `/ /   \__ \/ ___/ __ \/ ___/ _ \
+ / __/ / / / / / /_/ / /   ___/ / /__/ /_/ / /  /  __/
+/_/   /_/_/ /_/\__,_/_/   /____/\___/\____/_/   \___/
+------------------------------------------------------
+
+|                                                         Metric |                          Task |       Value |    Unit |
+|---------------------------------------------------------------:|------------------------------:|------------:|--------:|
+|                     Cumulative indexing time of primary shards |                               |     563.427 |     min |
+|             Min cumulative indexing time across primary shards |                               |           0 |     min |
+|          Median cumulative indexing time across primary shards |                               |  0.00293333 |     min |
+|             Max cumulative indexing time across primary shards |                               |      112.04 |     min |
+|            Cumulative indexing throttle time of primary shards |                               |           0 |     min |
+|    Min cumulative indexing throttle time across primary shards |                               |           0 |     min |
+| Median cumulative indexing throttle time across primary shards |                               |           0 |     min |
+|    Max cumulative indexing throttle time across primary shards |                               |           0 |     min |
+|                        Cumulative merge time of primary shards |                               |     1134.99 |     min |
+|                       Cumulative merge count of primary shards |                               |      165181 |         |
+|                Min cumulative merge time across primary shards |                               |           0 |     min |
+|             Median cumulative merge time across primary shards |                               |  0.00188333 |     min |
+|                Max cumulative merge time across primary shards |                               |     248.347 |     min |
+|               Cumulative merge throttle time of primary shards |                               |     620.683 |     min |
+|       Min cumulative merge throttle time across primary shards |                               |           0 |     min |
+|    Median cumulative merge throttle time across primary shards |                               |           0 |     min |
+|       Max cumulative merge throttle time across primary shards |                               |     138.621 |     min |
+|                      Cumulative refresh time of primary shards |                               |      644.67 |     min |
+|                     Cumulative refresh count of primary shards |                               | 1.37405e+06 |         |
+|              Min cumulative refresh time across primary shards |                               |           0 |     min |
+|           Median cumulative refresh time across primary shards |                               |   0.0101667 |     min |
+|              Max cumulative refresh time across primary shards |                               |     147.427 |     min |
+|                        Cumulative flush time of primary shards |                               |     45.1533 |     min |
+|                       Cumulative flush count of primary shards |                               |        4084 |         |
+|                Min cumulative flush time across primary shards |                               |           0 |     min |
+|             Median cumulative flush time across primary shards |                               |      0.0005 |     min |
+|                Max cumulative flush time across primary shards |                               |     7.92482 |     min |
+|                                        Total Young Gen GC time |                               |       5.593 |       s |
+|                                       Total Young Gen GC count |                               |         320 |         |
+|                                          Total Old Gen GC time |                               |           0 |       s |
+|                                         Total Old Gen GC count |                               |           0 |         |
+|                                                     Store size |                               |     359.984 |      GB |
+|                                                  Translog size |                               | 1.33691e-05 |      GB |
+|                                         Heap used for segments |                               |     8.39256 |      MB |
+|                                       Heap used for doc values |                               |    0.444857 |      MB |
+|                                            Heap used for terms |                               |     6.57648 |      MB |
+|                                            Heap used for norms |                               |    0.882629 |      MB |
+|                                           Heap used for points |                               |           0 |      MB |
+|                                    Heap used for stored fields |                               |    0.488602 |      MB |
+|                                                  Segment count |                               |         964 |         |
+|                                                 Min Throughput |                  index-append |      734.63 |  docs/s |
+|                                                Mean Throughput |                  index-append |      763.16 |  docs/s |
+|                                              Median Throughput |                  index-append |       746.5 |  docs/s |
+|                                                 Max Throughput |                  index-append |      833.51 |  docs/s |
+|                                        50th percentile latency |                  index-append |     4738.57 |      ms |
+|                                        90th percentile latency |                  index-append |      8129.1 |      ms |
+|                                        99th percentile latency |                  index-append |     11734.5 |      ms |
+|                                       100th percentile latency |                  index-append |     14662.9 |      ms |
+|                                   50th percentile service time |                  index-append |     4738.57 |      ms |
+|                                   90th percentile service time |                  index-append |      8129.1 |      ms |
+|                                   99th percentile service time |                  index-append |     11734.5 |      ms |
+|                                  100th percentile service time |                  index-append |     14662.9 |      ms |
+|                                                     error rate |                  index-append |           0 |       % |
+|                                                 Min Throughput |                       default |       19.94 |   ops/s |
+|                                                Mean Throughput |                       default |       19.95 |   ops/s |
+|                                              Median Throughput |                       default |       19.95 |   ops/s |
+|                                                 Max Throughput |                       default |       19.96 |   ops/s |
+|                                        50th percentile latency |                       default |     23.1322 |      ms |
+|                                        90th percentile latency |                       default |     25.4129 |      ms |
+|                                        99th percentile latency |                       default |     29.1382 |      ms |
+|                                       100th percentile latency |                       default |     29.4762 |      ms |
+|                                   50th percentile service time |                       default |     21.4895 |      ms |
+|                                   90th percentile service time |                       default |      23.589 |      ms |
+|                                   99th percentile service time |                       default |     26.6134 |      ms |
+|                                  100th percentile service time |                       default |     27.9068 |      ms |
+|                                                     error rate |                       default |           0 |       % |
+|                                                 Min Throughput |                          term |       19.93 |   ops/s |
+|                                                Mean Throughput |                          term |       19.94 |   ops/s |
+|                                              Median Throughput |                          term |       19.94 |   ops/s |
+|                                                 Max Throughput |                          term |       19.95 |   ops/s |
+|                                        50th percentile latency |                          term |     31.0684 |      ms |
+|                                        90th percentile latency |                          term |     34.1419 |      ms |
+|                                        99th percentile latency |                          term |     74.7904 |      ms |
+|                                       100th percentile latency |                          term |     103.663 |      ms |
+|                                   50th percentile service time |                          term |     29.6775 |      ms |
+|                                   90th percentile service time |                          term |     32.4288 |      ms |
+|                                   99th percentile service time |                          term |      36.013 |      ms |
+|                                  100th percentile service time |                          term |     102.193 |      ms |
+|                                                     error rate |                          term |           0 |       % |
+|                                                 Min Throughput |                        phrase |       19.94 |   ops/s |
+|                                                Mean Throughput |                        phrase |       19.95 |   ops/s |
+|                                              Median Throughput |                        phrase |       19.95 |   ops/s |
+|                                                 Max Throughput |                        phrase |       19.95 |   ops/s |
+|                                        50th percentile latency |                        phrase |     23.0255 |      ms |
+|                                        90th percentile latency |                        phrase |     26.1607 |      ms |
+|                                        99th percentile latency |                        phrase |     31.2094 |      ms |
+|                                       100th percentile latency |                        phrase |     45.5012 |      ms |
+|                                   50th percentile service time |                        phrase |     21.5109 |      ms |
+|                                   90th percentile service time |                        phrase |     24.4144 |      ms |
+|                                   99th percentile service time |                        phrase |     26.1865 |      ms |
+|                                  100th percentile service time |                        phrase |     43.5122 |      ms |
+|                                                     error rate |                        phrase |           0 |       % |
+|                                                 Min Throughput | articles_monthly_agg_uncached |       19.95 |   ops/s |
+|                                                Mean Throughput | articles_monthly_agg_uncached |       19.96 |   ops/s |
+|                                              Median Throughput | articles_monthly_agg_uncached |       19.96 |   ops/s |
+|                                                 Max Throughput | articles_monthly_agg_uncached |       19.96 |   ops/s |
+|                                        50th percentile latency | articles_monthly_agg_uncached |     26.7918 |      ms |
+|                                        90th percentile latency | articles_monthly_agg_uncached |     34.1708 |      ms |
+|                                        99th percentile latency | articles_monthly_agg_uncached |     42.3661 |      ms |
+|                                       100th percentile latency | articles_monthly_agg_uncached |     43.0024 |      ms |
+|                                   50th percentile service time | articles_monthly_agg_uncached |     25.3893 |      ms |
+|                                   90th percentile service time | articles_monthly_agg_uncached |     32.3418 |      ms |
+|                                   99th percentile service time | articles_monthly_agg_uncached |     41.3612 |      ms |
+|                                  100th percentile service time | articles_monthly_agg_uncached |     42.0802 |      ms |
+|                                                     error rate | articles_monthly_agg_uncached |           0 |       % |
+|                                                 Min Throughput |   articles_monthly_agg_cached |       19.94 |   ops/s |
+|                                                Mean Throughput |   articles_monthly_agg_cached |       19.95 |   ops/s |
+|                                              Median Throughput |   articles_monthly_agg_cached |       19.95 |   ops/s |
+|                                                 Max Throughput |   articles_monthly_agg_cached |       19.96 |   ops/s |
+|                                        50th percentile latency |   articles_monthly_agg_cached |     9.63666 |      ms |
+|                                        90th percentile latency |   articles_monthly_agg_cached |      10.973 |      ms |
+|                                        99th percentile latency |   articles_monthly_agg_cached |     27.1236 |      ms |
+|                                       100th percentile latency |   articles_monthly_agg_cached |     28.7119 |      ms |
+|                                   50th percentile service time |   articles_monthly_agg_cached |     7.99763 |      ms |
+|                                   90th percentile service time |   articles_monthly_agg_cached |       8.979 |      ms |
+|                                   99th percentile service time |   articles_monthly_agg_cached |     25.7034 |      ms |
+|                                  100th percentile service time |   articles_monthly_agg_cached |     27.1026 |      ms |
+|                                                     error rate |   articles_monthly_agg_cached |           0 |       % |
+|                                                 Min Throughput |                        scroll |        5.85 | pages/s |
+|                                                Mean Throughput |                        scroll |        5.86 | pages/s |
+|                                              Median Throughput |                        scroll |        5.86 | pages/s |
+|                                                 Max Throughput |                        scroll |        5.87 | pages/s |
+|                                        50th percentile latency |                        scroll |      229970 |      ms |
+|                                        90th percentile latency |                        scroll |      319870 |      ms |
+|                                        99th percentile latency |                        scroll |      340138 |      ms |
+|                                       100th percentile latency |                        scroll |      342421 |      ms |
+|                                   50th percentile service time |                        scroll |     4269.07 |      ms |
+|                                   90th percentile service time |                        scroll |     4308.67 |      ms |
+|                                   99th percentile service time |                        scroll |     4445.16 |      ms |
+|                                  100th percentile service time |                        scroll |     4605.69 |      ms |
+|                                                     error rate |                        scroll |           0 |       % |
+
+
+----------------------------------
+[INFO] SUCCESS (took 1772 seconds)
+----------------------------------
+----
+
+===== PMC custom track
+We customized the PMC track by increasing the search throughput target in order to find our Elasticsearch cluster's limit.
+
+The result is that at 25-30 requests/s we reach a 99th percentile latency of 1s.
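+
+For reference, the search throughput target of an esrally track is controlled by the `target-throughput` property of a scheduled task. An illustrative fragment (the values below are examples, not our exact track):
+
+[source,json]
+----
+{
+  "operation": "default",
+  "clients": 8,
+  "warmup-iterations": 500,
+  "iterations": 1000,
+  "target-throughput": 30
+}
+----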
+
+==== References
+https://www.alibabacloud.com/blog/esrally-official-stress-testing-tool-for-elasticsearch_597102[esrally: Official Stress Testing Tool for Elasticsearch]
+
+https://esrally.readthedocs.io/en/latest/adding_tracks.html[Create a custom EsRally track]
+
+https://discuss.elastic.co/t/why-the-percentile-latency-is-several-times-more-than-service-time/69630[Why the percentile latency is several times more than service time]
+
+=== Benchmark RabbitMQ
+
+==== Benchmark methodology
+
+===== Benchmark tool
+We use https://github.com/rabbitmq/rabbitmq-perf-test[rabbitmq-perf-test] tool.
+
+===== How to benchmark
+We use PerfTestMulti, which is more convenient:
+
+- It reads the input scenario from a single file
+- It writes the result to a single file, which can then be visualized as charts (graph WebUI)
+
+Run a command like below:
+
+[source,bash]
+----
+bin/runjava com.rabbitmq.perf.PerfTestMulti [scenario-file] [result-file]
+----
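+
+A scenario file is a JSON document describing the runs to execute. A minimal illustrative scenario (the name and parameter values here are arbitrary examples, not the ones used in our benchmarks):
+
+[source,json]
+----
+[{"name": "no-ack", "type": "simple", "interval": 10000,
+  "params": [{"time-limit": 30, "producer-count": 4, "consumer-count": 2}]}]
+----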
+
+In order to visualize the result, copy [result-file] to `/html/examples/[result-file]`.
+Start the web server to view the graph with the command:
+
+[source,bash]
+----
+bin/runjava com.rabbitmq.perf.WebServer
+----
+Then browse: http://localhost:8080/examples/sample.html
+
+==== Benchmark result

Review comment:
       ```suggestion
   ==== Sample Benchmark result
   ```




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@james.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


