Posted to commits@devlake.apache.org by zk...@apache.org on 2022/11/25 06:33:05 UTC

[incubator-devlake-website] 01/03: docs: add code review metrics

This is an automated email from the ASF dual-hosted git repository.

zky pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-devlake-website.git

commit 7dec2a208ea8f464df4056ba7809da1144396859
Author: Startrekzky <ka...@merico.dev>
AuthorDate: Fri Nov 25 04:03:06 2022 +0800

    docs: add code review metrics
---
 docs/Metrics/CodingTime.md                         |  32 --------
 docs/Metrics/CycleTime.md                          |  40 ---------
 docs/Metrics/MergeRate.md                          |  40 ---------
 docs/Metrics/PRCodingTime.md                       |  54 +++++++++++++
 docs/Metrics/PRCount.md                            |  62 +++++++++++---
 docs/Metrics/PRCycleTime.md                        |  63 +++++++++++++++
 docs/Metrics/{DeployTime.md => PRDeployTime.md}    |  31 ++++++-
 docs/Metrics/PRMergeRate.md                        |  89 +++++++++++++++++++++
 docs/Metrics/PRPickupTime.md                       |  57 +++++++++++++
 docs/Metrics/PRReviewDepth.md                      |  52 ++++++++++++
 docs/Metrics/PRReviewTime.md                       |  61 ++++++++++++++
 docs/Metrics/PRSize.md                             |  36 ++++++++-
 docs/Metrics/PRTimeToMerge.md                      |  78 ++++++++++++++++++
 docs/Metrics/PickupTime.md                         |  34 --------
 docs/Metrics/ReviewDepth.md                        |  34 --------
 docs/Metrics/ReviewTime.md                         |  39 ---------
 docs/Metrics/TimeToMerge.md                        |  36 ---------
 static/img/Metrics/PR-merge-rate-monthly.png       | Bin 0 -> 37165 bytes
 static/img/Metrics/pr-coding-time-monthly.png      | Bin 0 -> 24810 bytes
 static/img/Metrics/pr-count-monthly.png            | Bin 0 -> 24678 bytes
 static/img/Metrics/pr-cycle-time-monthly.png       | Bin 0 -> 26369 bytes
 static/img/Metrics/pr-pickup-time-monthly.png      | Bin 0 -> 28634 bytes
 static/img/Metrics/pr-review-depth-monthly.png     | Bin 0 -> 31892 bytes
 static/img/Metrics/pr-review-time-monthly.png      | Bin 0 -> 23972 bytes
 static/img/Metrics/pr-size-monthly.png             | Bin 0 -> 22729 bytes
 .../img/Metrics/pr-status-distribution-monthly.png | Bin 0 -> 26997 bytes
 static/img/Metrics/pr-time-to-merge-monthly.png    | Bin 0 -> 28717 bytes
 static/img/Metrics/pr-time-to-merge-text.png       | Bin 0 -> 14224 bytes
 28 files changed, 567 insertions(+), 271 deletions(-)

diff --git a/docs/Metrics/CodingTime.md b/docs/Metrics/CodingTime.md
deleted file mode 100644
index d80a0eb2e..000000000
--- a/docs/Metrics/CodingTime.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: "PR Coding Time"
-description: >
-  PR Coding Time
-sidebar_position: 15
----
-
-## What is this metric? 
-The time it takes from the first commit until a PR is issued. 
-
-## Why is it important?
-It is recommended that you keep every task on a workable and manageable scale for a reasonably short amount of coding time. The average coding time of most engineering teams is around 3-4 days.
-
-## Which dashboard(s) does it exist in?
-- Engineering Throughput and Cycle Time
-- Engineering Throughput and Cycle Time - Team View
-
-
-## How is it calculated?
-<b>Data Sources Required</b>
-
-This metric relies on PR/MRs collected from GitHub or GitLab.
-
-<b>Transformation Rules Required</b>
-
-N/A
-
-<b>SQL Queries</b>
-
-
-## How to improve?
-Divide coding tasks into workable and manageable pieces.
diff --git a/docs/Metrics/CycleTime.md b/docs/Metrics/CycleTime.md
deleted file mode 100644
index 5d9e67666..000000000
--- a/docs/Metrics/CycleTime.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title: "PR Cycle Time"
-description: >
-  PR Cycle Time
-sidebar_position: 14
----
-
-## What is this metric? 
-PR Cycle Time is the sum of PR Coding Time, Pickup TIme, Review Time and Deploy Time. It is the total time from the first commit to when the PR is deployed.
-
-## Why is it important?
-PR Cycle Time indicate the overall speed of the delivery progress in terms of PR. 
-
-## Which dashboard(s) does it exist in?
-- Engineering Throughput and Cycle Time
-- Engineering Throughput and Cycle Time - Team View
-
-
-## How is it calculated?
-You can define `deployment` based on your actual practice. For a full list of `deployment`'s definitions that DevLake support, please refer to [Deployment Frequency](/docs/Metrics/DeploymentFrequency.md).
-
-<b>Data Sources Required</b>
-
-This metric relies on PR/MRs collected from GitHub or GitLab.
-
-<b>Transformation Rules Required</b>
-
-N/A
-
-<b>SQL Queries</b>
-
-
-## How to improve?
-1. Divide coding tasks into workable and manageable pieces;
-2. Use DevLake's dashboards to monitor your delivery progress;
-3. Have a habit to check for hanging PRs regularly;
-4. Set up alerts for your communication tools (e.g. Slack, Lark) when new PRs are issued;
-2. Use automated tests for the initial work;
-5. Reduce PR size;
-6. Analyze the causes for long reviews.
\ No newline at end of file
diff --git a/docs/Metrics/MergeRate.md b/docs/Metrics/MergeRate.md
deleted file mode 100644
index 6ce83bcbb..000000000
--- a/docs/Metrics/MergeRate.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title: "PR Merge Rate"
-description: >
-  Pull Request Merge Rate
-sidebar_position: 20
----
-
-## What is this metric? 
-The ratio of PRs/MRs that get merged.
-
-## Why is it important?
-1. Code review metrics are process indicators to provide quick feedback on developers' code quality
-2. Promote the team to establish a unified coding specification and standardize the code review criteria
-3. Identify modules with low-quality risks in advance, optimize practices, and precipitate into reusable knowledge and tools to avoid technical debt accumulation
-
-## Which dashboard(s) does it exist in
-- Jira
-- GitHub
-- GitLab
-- Weekly Community Retro
-- Engineering Throughput and Cycle Time
-- Engineering Throughput and Cycle Time - Team View 
-
-
-## How is it calculated?
-The number of merged PRs divided by the number of all PRs in the given data range.
-
-<b>Data Sources Required</b>
-
-This metric relies on PRs/MRs collected from GitHub, GitLab or BitBucket.
-
-<b>Transformation Rules Required</b>
-
-N/A
-
-
-## How to improve?
-1. From the developer dimension, we evaluate the code quality of developers by combining the task complexity with the metrics related to the number of review passes and review rounds.
-2. From the reviewer dimension, we observe the reviewer's review style by taking into account the task complexity, the number of passes and the number of review rounds.
-3. From the project/team dimension, we combine the project phase and team task complexity to aggregate the metrics related to the number of review passes and review rounds, and identify the modules with abnormal code review process and possible quality risks.
diff --git a/docs/Metrics/PRCodingTime.md b/docs/Metrics/PRCodingTime.md
new file mode 100644
index 000000000..4d2f69e01
--- /dev/null
+++ b/docs/Metrics/PRCodingTime.md
@@ -0,0 +1,54 @@
+---
+title: "PR Coding Time"
+description: >
+  PR Coding Time
+sidebar_position: 15
+---
+
+## What is this metric? 
+The time it takes from the first commit until a PR is issued. 
+
+## Why is it important?
+It is recommended that you keep every task on a workable and manageable scale for a reasonably short amount of coding time. The average coding time of most engineering teams is around 3-4 days.
+
+## Which dashboard(s) does it exist in?
+- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+
+
+## How is it calculated?
+<b>Data Sources Required</b>
+
+This metric relies on PRs/MRs collected from GitHub, GitLab, BitBucket, Gitee or other code review tools.
+
+<b>Data Transformation Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
+The following SQL shows how to find the `coding time` of a specific PR. DevLake pre-calculates the metric and stores it in the `pull_requests` table.
+
+```
+SELECT
+  coding_timespan/60 as 'PR Coding Time(h)'
+FROM
+  pull_requests
+```
+
+
+If you want to measure the monthly trend of `PR coding time` in the screenshot below, please run the following SQL in Grafana. 
+
+![](/img/Metrics/pr-coding-time-monthly.png)
+
+```
+SELECT 
+  DATE_ADD(date(created_date), INTERVAL -DAY(date(created_date))+1 DAY) as time,
+  avg(coding_timespan)/60 as 'PR Coding Time(h)'
+FROM pull_requests
+GROUP BY 1
+ORDER BY 1
+```
+
+## How to improve?
+Divide coding tasks into workable and manageable pieces.
diff --git a/docs/Metrics/PRCount.md b/docs/Metrics/PRCount.md
index 65bad0420..367fb8be3 100644
--- a/docs/Metrics/PRCount.md
+++ b/docs/Metrics/PRCount.md
@@ -1,12 +1,12 @@
 ---
-title: "Pull Request Count"
+title: "PR Count"
 description: >
   Pull Request Count
 sidebar_position: 13
 ---
 
 ## What is this metric? 
-The number of pull requests created.
+The number of pull requests (e.g. GitHub PRs, BitBucket PRs, GitLab MRs) created.
 
 ## Why is it important?
 1. Code review metrics are process indicators to provide quick feedback on developers' code quality
@@ -14,12 +14,11 @@ The number of pull requests created.
 3. Identify modules with low-quality risks in advance, optimize practices, and precipitate into reusable knowledge and tools to avoid technical debt accumulation
 
 ## Which dashboard(s) does it exist in
-- Jira
-- GitHub
-- GitLab
-- Weekly Community Retro
-- Engineering Throughput and Cycle Time
-- Engineering Throughput and Cycle Time - Team View 
+- [GitHub](../../../livedemo/DataSources/GitHub)
+- [GitLab](../../../livedemo/DataSources/GitLab)
+- [Weekly Community Retro](../../../livedemo/OSSMaintainers/WeeklyCommunityRetro)
+- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 
 ## How is it calculated?
@@ -27,12 +26,55 @@ This metric is calculated by counting the number of PRs in the given data range.
 
 <b>Data Sources Required</b>
 
-This metric relies on PRs/MRs collected from GitHub, GitLab or BitBucket.
+This metric relies on PRs/MRs collected from GitHub, GitLab, BitBucket, Gitee or other code review tools.
 
-<b>Transformation Rules Required</b>
+<b>Data Transformation Required</b>
 
 N/A
 
+<b>SQL Queries</b>
+
+The following SQL shows how to find PRs **created** in specific repositories within a given time range.
+
+```
+select
+	count(*) as pull_request_count
+from 
+	pull_requests pr
+where
+  -- $__timeFilter will take Grafana's time range
+  $__timeFilter(created_date)
+  -- please replace the repo ids with your own, or create a '$repo_id' variable in Grafana
+	and base_repo_id in ('repo_1', 'repo_2')
+```
+
+If you want to measure the monthly trend of `PR count` in the screenshot below, please run the following SQL in Grafana.
+
+![](/img/Metrics/pr-count-monthly.png)
+
+```
+WITH _prs as(
+  SELECT
+    DATE_ADD(date(created_date), INTERVAL -DAY(date(created_date))+1 DAY) as time,
+    count(*) as pr_count
+  FROM pull_requests
+  WHERE
+    -- please replace the repo ids with your own, or create a '$repo_id' variable in Grafana
+    base_repo_id in ('repo_1', 'repo_2')
+    and $__timeFilter(created_date)
+    -- the following condition will remove the month with incomplete data
+    and created_date >= DATE_ADD(DATE_ADD($__timeFrom(), INTERVAL -DAY($__timeFrom())+1 DAY), INTERVAL +1 MONTH)
+  GROUP BY 1
+)
+
+SELECT 
+  date_format(time,'%M %Y') as month,
+  pr_count as "Pull Request Count"
+FROM _prs
+ORDER BY time
+```
+
+
 ## How to improve?
 1. From the developer dimension, we evaluate the code quality of developers by combining the task complexity with the metrics related to the number of review passes and review rounds.
 2. From the reviewer dimension, we observe the reviewer's review style by taking into account the task complexity, the number of passes and the number of review rounds.
diff --git a/docs/Metrics/PRCycleTime.md b/docs/Metrics/PRCycleTime.md
new file mode 100644
index 000000000..3b61a7e3f
--- /dev/null
+++ b/docs/Metrics/PRCycleTime.md
@@ -0,0 +1,63 @@
+---
+title: "PR Cycle Time"
+description: >
+  PR Cycle Time
+sidebar_position: 14
+---
+
+## What is this metric? 
+PR Cycle Time is the sum of PR Coding Time, Pickup Time, Review Time and Deploy Time. It is the total time from the first commit to when the PR is deployed.
+
+## Why is it important?
+PR Cycle Time indicates the overall velocity of the delivery process in terms of PRs. 
+
+## Which dashboard(s) does it exist in?
+- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+
+
+## How is it calculated?
+You can define `deployment` based on your actual practice. For a full list of the `deployment` definitions that DevLake supports, please refer to [Deployment Frequency](/docs/Metrics/DeploymentFrequency.md).
+
+<b>Data Sources Required</b>
+
+This metric relies on PRs/MRs collected from GitHub, GitLab, BitBucket, Gitee or other code review tools.
+
+<b>Data Transformation Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
+The following SQL shows how to find the `cycle time` of a specific PR. DevLake pre-calculates the metric and stores it in the `pull_requests` table.
+
+```
+SELECT
+  change_timespan/60 as 'PR Cycle Time(h)'
+FROM
+  pull_requests
+```
+
+
+If you want to measure the monthly trend of `PR cycle time` in the screenshot below, please run the following SQL in Grafana. 
+
+![](/img/Metrics/pr-cycle-time-monthly.png)
+
+```
+SELECT 
+  DATE_ADD(date(created_date), INTERVAL -DAY(date(created_date))+1 DAY) as time,
+  avg(change_timespan)/60 as 'PR Cycle Time(h)'
+FROM pull_requests
+GROUP BY 1
+ORDER BY 1
+```
+
+
+## How to improve?
+1. Divide coding tasks into workable and manageable pieces;
+2. Use DevLake's dashboards to monitor your delivery progress;
+3. Have a habit of checking for hanging PRs regularly;
+4. Set up alerts for your communication tools (e.g. Slack, Lark) when new PRs are issued;
+5. Use automated tests for the initial work;
+6. Reduce PR size;
+7. Analyze the causes for long reviews.
\ No newline at end of file
diff --git a/docs/Metrics/DeployTime.md b/docs/Metrics/PRDeployTime.md
similarity index 67%
rename from docs/Metrics/DeployTime.md
rename to docs/Metrics/PRDeployTime.md
index f0c96234e..d0de7d9aa 100644
--- a/docs/Metrics/DeployTime.md
+++ b/docs/Metrics/PRDeployTime.md
@@ -13,7 +13,8 @@ The time it takes from when a PR is merged to when it is deployed.
 2. Evaluate whether the delivery capacity matches the business phase and demand scale. Identify key bottlenecks and reasonably allocate resources.
 
 ## Which dashboard(s) does it exist in?
-
+- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 ## How is it calculated?
 `PR deploy time` is calculated by subtracting a PR's deployed_date and merged_date. Hence, we should associate PR/MRs with deployments.
@@ -29,7 +30,7 @@ under the same scope and environment (in terms of TESTING/STAGING/PRODUCTION),
 we get pr-3 connected to commit-3
 3. Now we can get pr-3's deploy time by finish_time of deployment-2 minus merge_time of pr-3.
 
-<b>Data Sources Required</b>
+<b>Data Transformation Required</b>
 
 This metric relies on two sources:
 1. PR/MRs collected from GitHub or GitLab by enabling "Code Review" under the Data Entities section.
@@ -43,5 +44,31 @@ This metric relies on two sources:
 
 N/A
 
+<b>SQL Queries</b>
+
+The following SQL shows how to find the `deploy time` of a specific PR. DevLake pre-calculates the metric and stores it in the `pull_requests` table.
+
+```
+SELECT
+  deploy_timespan/60 as 'PR Deploy Time(h)'
+FROM
+  pull_requests
+```
+
+
+If you want to measure the monthly trend of `PR deploy time` in the screenshot below, please run the following SQL in Grafana. 
+
+![](/img/Metrics/pr-cycle-time-monthly.png)
+
+```
+SELECT 
+  DATE_ADD(date(created_date), INTERVAL -DAY(date(created_date))+1 DAY) as time,
+  avg(deploy_timespan)/60 as 'PR Deploy Time(h)'
+FROM pull_requests
+GROUP BY 1
+ORDER BY 1
+```
+
 ## How to improve?
+N/A
 
diff --git a/docs/Metrics/PRMergeRate.md b/docs/Metrics/PRMergeRate.md
new file mode 100644
index 000000000..9fa6cb029
--- /dev/null
+++ b/docs/Metrics/PRMergeRate.md
@@ -0,0 +1,89 @@
+---
+title: "PR Merge Rate"
+description: >
+  Pull Request Merge Rate
+sidebar_position: 20
+---
+
+## What is this metric? 
+The ratio of PRs/MRs that get merged.
+
+## Why is it important?
+1. Code review metrics are process indicators to provide quick feedback on developers' code quality
+2. Promote the team to establish a unified coding specification and standardize the code review criteria
+3. Identify modules with low-quality risks in advance, optimize practices, and precipitate into reusable knowledge and tools to avoid technical debt accumulation
+
+## Which dashboard(s) does it exist in
+- [GitHub](../../../livedemo/DataSources/GitHub)
+- [GitLab](../../../livedemo/DataSources/GitLab)
+- [Weekly Community Retro](../../../livedemo/OSSMaintainers/WeeklyCommunityRetro)
+- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+
+
+## How is it calculated?
+The number of merged PRs divided by the number of all PRs in the given data range.
+
+<b>Data Sources Required</b>
+
+This metric relies on PRs/MRs collected from GitHub, GitLab, BitBucket, Gitee or other code review tools.
+
+<b>Data Transformation Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
+The following SQL shows how to find the PR merge rate in specific repositories within a given time range, e.g. 'repo_1' and 'repo_2'.
+
+```
+select
+	count(distinct case when merged_date is not null then id else null end)/count(*) as pr_merged_rate
+from 
+	pull_requests pr
+where
+  -- $__timeFilter will take Grafana's time range
+  $__timeFilter(created_date)
+  -- please replace the repo ids with your own, or create a '$repo_id' variable in Grafana
+	and base_repo_id in ('repo_1', 'repo_2')
+```
+
+If you want to measure the monthly trend of `PR merge rate` in the screenshot below, please run the following SQL in Grafana.
+
+![](/img/Metrics/pr-merge-rate-monthly.png)
+
+```
+SELECT
+  DATE_ADD(date(created_date), INTERVAL -DAYOFMONTH(date(created_date))+1 DAY) as time,
+  count(distinct case when merged_date is not null then id else null end)/count(*) as pr_merged_rate
+FROM pull_requests
+WHERE
+  $__timeFilter(created_date)
+  -- please replace the repo ids with your own, or create a '$repo_id' variable in Grafana
+  and base_repo_id in ('repo_1', 'repo_2')
+GROUP BY 1
+```
+
+If you want to measure the monthly trend of `PR status distribution`, please run the following SQL in Grafana.
+
+![](/img/Metrics/pr-status-distribution-monthly.png)
+
+```
+SELECT
+  DATE_ADD(date(created_date), INTERVAL -DAYOFMONTH(date(created_date))+1 DAY) as time,
+  count(distinct case when status != 'closed' then id else null end) as "PR: Open",
+  count(distinct case when status = 'closed' and merged_date is null then id else null end) as "PR: Closed without merging",
+  count(distinct case when status = 'closed' and merged_date is not null then id else null end) as "PR: Closed and merged"
+FROM pull_requests
+WHERE
+  $__timeFilter(created_date)
+  and created_date >= DATE_ADD(DATE_ADD($__timeFrom(), INTERVAL -DAY($__timeFrom())+1 DAY), INTERVAL +1 MONTH)
+  and base_repo_id in ('repo_1', 'repo_2')
+GROUP BY 1
+```
+
+
+## How to improve?
+1. From the developer dimension, we evaluate the code quality of developers by combining the task complexity with the metrics related to the number of review passes and review rounds.
+2. From the reviewer dimension, we observe the reviewer's review style by taking into account the task complexity, the number of passes and the number of review rounds.
+3. From the project/team dimension, we combine the project phase and team task complexity to aggregate the metrics related to the number of review passes and review rounds, and identify the modules with abnormal code review process and possible quality risks.
diff --git a/docs/Metrics/PRPickupTime.md b/docs/Metrics/PRPickupTime.md
new file mode 100644
index 000000000..47783f282
--- /dev/null
+++ b/docs/Metrics/PRPickupTime.md
@@ -0,0 +1,57 @@
+---
+title: "PR Pickup Time"
+description: >
+  PR Pickup Time
+sidebar_position: 16
+---
+
+## What is this metric? 
+The time it takes from when a PR is issued until the first comment is added to that PR. 
+
+## Why is it important?
+PR Pickup Time shows how engaged your team is in collaborative work by identifying the delay in picking up PRs. 
+
+## Which dashboard(s) does it exist in?
+- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+
+
+## How is it calculated?
+<b>Data Sources Required</b>
+
+This metric relies on PRs/MRs collected from GitHub, GitLab, BitBucket, Gitee or other code review tools.
+
+<b>Data Transformation Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
+The following SQL shows how to find the `pickup time` of a specific PR. DevLake pre-calculates the metric and stores it in the `pull_requests` table.
+
+```
+SELECT
+  review_lag/60 as 'PR Pickup Time(h)'
+FROM
+  pull_requests
+```
+
+
+If you want to measure the monthly trend of `PR pickup time` in the screenshot below, please run the following SQL in Grafana. 
+
+![](/img/Metrics/pr-pickup-time-monthly.png)
+
+```
+SELECT 
+  DATE_ADD(date(created_date), INTERVAL -DAY(date(created_date))+1 DAY) as time,
+  avg(review_lag)/60 as 'PR Pickup Time(h)'
+FROM pull_requests
+GROUP BY 1
+ORDER BY 1
+```
+
+
+## How to improve?
+1. Use DevLake's dashboard to monitor your delivery progress;
+2. Have a habit of checking for hanging PRs regularly;
+3. Set up alerts for your communication tools (e.g. Slack, Lark) when new PRs are issued.
diff --git a/docs/Metrics/PRReviewDepth.md b/docs/Metrics/PRReviewDepth.md
new file mode 100644
index 000000000..7c8c2cc52
--- /dev/null
+++ b/docs/Metrics/PRReviewDepth.md
@@ -0,0 +1,52 @@
+---
+title: "PR Review Depth"
+description: >
+  PR Review Depth
+sidebar_position: 21
+---
+
+## What is this metric? 
+The average number of comments of PRs in the selected time range.
+
+## Why is it important?
+PR Review Depth (in Comments per PR) is related to the quality of code review, indicating how thoroughly your team reviews PRs.
+
+## Which dashboard(s) does it exist in?
+- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+
+## How is it calculated?
+This metric is calculated by counting the total number of PR comments divided by the total number of PRs in the selected time range.
+
+<b>Data Sources Required</b>
+
+This metric relies on PRs/MRs collected from GitHub, GitLab, BitBucket, Gitee or other code review tools.
+
+<b>Data Transformation Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
+If you want to measure the monthly trend of `PR review depth` in the screenshot below, please run the following SQL in Grafana. 
+
+![](/img/Metrics/pr-review-depth-monthly.png)
+
+```
+SELECT
+  DATE_ADD(date(pr.created_date), INTERVAL -$interval(date(pr.created_date))+1 DAY) as time,
+  count(distinct prc.id)/count(pr.id) as "PR Review Depth"
+FROM 
+  pull_requests pr
+  left join pull_request_comments prc on pr.id = prc.pull_request_id
+WHERE
+  $__timeFilter(pr.created_date)
+  and pr.base_repo_id in ($repo_id)
+  and pr.merged_date is not null
+GROUP BY 1
+```
+
+
+## How to improve?
+1. Encourage multiple reviewers to review a PR;
+2. Review Depth is an indicator of how thoroughly your PRs are reviewed in general, but deeper is not always better. In some cases, spending an excessive amount of resources on reviewing PRs is also not recommended.
\ No newline at end of file
diff --git a/docs/Metrics/PRReviewTime.md b/docs/Metrics/PRReviewTime.md
new file mode 100644
index 000000000..6221d5bf7
--- /dev/null
+++ b/docs/Metrics/PRReviewTime.md
@@ -0,0 +1,61 @@
+---
+title: "PR Review Time"
+description: >
+  PR Review Time
+sidebar_position: 17
+---
+
+## What is this metric? 
+The time it takes to complete a code review of a PR before it gets merged. 
+
+## Why is it important?
+Code review should be conducted almost in real-time and usually take less than two days. Abnormally long PR Review Time may indicate one or more of the following problems:
+1. The PR size is so large that it is difficult to review.
+2. The team is too busy to review code.
+
+## Which dashboard(s) does it exist in?
+- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
+
+
+## How is it calculated?
+This metric is the time frame from when the first comment is added to a PR to when the PR is merged.
+
+<b>Data Sources Required</b>
+
+This metric relies on PRs/MRs collected from GitHub, GitLab, BitBucket, Gitee or other code review tools.
+
+<b>Data Transformation Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
+The following SQL shows how to find the `review time` of a specific PR. DevLake pre-calculates the metric and stores it in the `pull_requests` table.
+
+```
+SELECT
+  review_timespan/60 as 'PR Review Time(h)'
+FROM
+  pull_requests
+```
+
+
+If you want to measure the monthly trend of `PR review time` in the screenshot below, please run the following SQL in Grafana. 
+
+![](/img/Metrics/pr-review-time-monthly.png)
+
+```
+SELECT 
+  DATE_ADD(date(created_date), INTERVAL -DAY(date(created_date))+1 DAY) as time,
+  avg(review_timespan)/60 as 'PR Review Time(h)'
+FROM pull_requests
+GROUP BY 1
+ORDER BY 1
+```
+
+## How to improve?
+1. Use DevLake's dashboards to monitor your delivery progress;
+2. Use automated tests for the initial work;
+3. Reduce PR size;
+4. Analyze the causes for long reviews.
\ No newline at end of file
diff --git a/docs/Metrics/PRSize.md b/docs/Metrics/PRSize.md
index ab75daddb..8e898bdd4 100644
--- a/docs/Metrics/PRSize.md
+++ b/docs/Metrics/PRSize.md
@@ -12,8 +12,8 @@ The average code changes (in Lines of Code) of PRs in the selected time range.
 Small PRs can reduce the risk of introducing new bugs and increase code review quality, as problems may often be hidden in big chunks of code and difficult to identify.
 
 ## Which dashboard(s) does it exist in?
-- Engineering Throughput and Cycle Time
-- Engineering Throughput and Cycle Time - Team View
+- [Engineering Throughput and Cycle Time](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTime)
+- [Engineering Throughput and Cycle Time - Team View](../../../livedemo/EngineeringLeads/EngineeringThroughputAndCycleTimeTeamView)
 
 
 ## How is it calculated?
@@ -21,14 +21,42 @@ This metric is calculated by counting the total number of code changes (in LOC)
 
 <b>Data Sources Required</b>
 
-This metric relies on PR/MRs collected from GitHub or GitLab.
+This metric relies on PRs/MRs collected from GitHub, GitLab, BitBucket, Gitee or other code review tools.
 
-<b>Transformation Rules Required</b>
+<b>Data Transformation Required</b>
 
 N/A
 
 <b>SQL Queries</b>
 
+If you want to measure the monthly trend of `PR size` in the screenshot below, please run the following SQL in Grafana. 
+
+![](/img/Metrics/pr-size-monthly.png)
+
+```
+with _pr_commits_data as(
+  SELECT
+    DATE_ADD(date(pr.created_date), INTERVAL -$interval(date(pr.created_date))+1 DAY) as time,
+    pr.id as pr_id,
+    prc.commit_sha,
+    sum(c.additions)+sum(c.deletions) as loc
+  FROM 
+    pull_requests pr
+    left join pull_request_commits prc on pr.id = prc.pull_request_id
+    left join commits c on prc.commit_sha = c.sha
+  WHERE
+    $__timeFilter(pr.created_date)
+    and pr.base_repo_id in ($repo_id)
+  group by 1,2,3
+)
+
+SELECT 
+  time,
+  sum(loc)/count(distinct pr_id) as 'PR Size'
+FROM _pr_commits_data
+GROUP BY 1
+```
+
 
 ## How to improve?
 1. Divide coding tasks into workable and manageable pieces;
diff --git a/docs/Metrics/PRTimeToMerge.md b/docs/Metrics/PRTimeToMerge.md
new file mode 100644
index 000000000..c1bcbeeda
--- /dev/null
+++ b/docs/Metrics/PRTimeToMerge.md
@@ -0,0 +1,78 @@
+---
+title: "PR Time To Merge"
+description: >
+  PR Time To Merge
+sidebar_position: 19
+---
+
+## What is this metric? 
+The time it takes from when a PR is issued to when it is merged. Essentially, PR Time to Merge = PR Pickup Time + PR Review Time.
+
+## Why is it important?
+The delay in reviewing and waiting to review PRs has a large impact on delivery speed, while a reasonably short PR Time to Merge can indicate frictionless teamwork. Improving this metric is the key to reducing PR cycle time.
+
+## Which dashboard(s) does it exist in?
+- [GitHub](../../../livedemo/DataSources/GitHub)
+- [Weekly Community Retro](../../../livedemo/OSSMaintainers/WeeklyCommunityRetro)
+
+
+## How is it calculated?
+<b>Data Sources Required</b>
+
+This metric relies on PRs/MRs collected from GitHub, GitLab, BitBucket, Gitee or other code review tools.
+
+<b>Data Transformation Required</b>
+
+N/A
+
+<b>SQL Queries</b>
+
+The following SQL shows how to find the `mean time to merge of PRs` in specific repositories within a given time range.
+
+![](/img/Metrics/pr-time-to-merge-text.png)
+
+```
+SELECT
+	avg(TIMESTAMPDIFF(Minute,created_date,merged_date)/1440)
+FROM 
+	pull_requests
+WHERE 
+  -- $__timeFilter will take Grafana's time range
+  $__timeFilter(created_date)
+  -- please replace the repo ids with your own, or create a '$repo_id' variable in Grafana
+	and base_repo_id in ('repo_1', 'repo_2')
+	and merged_date is not null
+```
+
+If you want to measure the monthly trend of `PR time to merge` in the screenshot below, please run the following SQL in Grafana. 
+
+![](/img/Metrics/pr-time-to-merge-monthly.png)
+
+```
+with _prs as(
+  SELECT
+    DATE_ADD(date(created_date), INTERVAL -DAY(date(created_date))+1 DAY) as time,
+    avg(TIMESTAMPDIFF(Minute,created_date,merged_date)/1440) as time_to_merge
+  FROM pull_requests
+  WHERE
+    $__timeFilter(created_date)
+    -- the following condition will remove the month with incomplete data
+    and created_date >= DATE_ADD(DATE_ADD($__timeFrom(), INTERVAL -DAY($__timeFrom())+1 DAY), INTERVAL +1 MONTH)
+    -- please replace the repo ids with your own, or create a '$repo_id' variable in Grafana
+	  and base_repo_id in ('repo_1', 'repo_2')
+  GROUP BY 1
+)
+
+SELECT 
+  date_format(time,'%M %Y') as month,
+  time_to_merge as "Time to Merge"
+FROM _prs
+ORDER BY time
+```
+
+## How to improve?
+1. Use DevLake's dashboards to monitor your delivery progress;
+2. Have a habit of checking for hanging PRs regularly;
+3. Set up alerts for your communication tools (e.g. Slack, Lark) when new PRs are issued;
+4. Reduce PR size;
+5. Analyze the causes for long reviews.
diff --git a/docs/Metrics/PickupTime.md b/docs/Metrics/PickupTime.md
deleted file mode 100644
index 61033a877..000000000
--- a/docs/Metrics/PickupTime.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: "PR Pickup Time"
-description: >
-  PR Pickup Time
-sidebar_position: 16
----
-
-## What is this metric? 
-The time it takes from when a PR is issued until the first comment is added to that PR. 
-
-## Why is it important?
-PR Pickup Time shows how engaged your team is in collaborative work by identifying the delay in picking up PRs. 
-
-## Which dashboard(s) does it exist in?
-- Engineering Throughput and Cycle Time
-- Engineering Throughput and Cycle Time - Team View
-
-
-## How is it calculated?
-<b>Data Sources Required</b>
-
-This metric relies on PR/MRs collected from GitHub or GitLab.
-
-<b>Transformation Rules Required</b>
-
-N/A
-
-<b>SQL Queries</b>
-
-
-## How to improve?
-1. Use DevLake's dashboard to monitor your delivery progress;
-2. Have a habit to check for hanging PRs regularly;
-3. Set up alerts for your communication tools (e.g. Slack, Lark) when new PRs are issued.
diff --git a/docs/Metrics/ReviewDepth.md b/docs/Metrics/ReviewDepth.md
deleted file mode 100644
index b97f0e77f..000000000
--- a/docs/Metrics/ReviewDepth.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: "PR Review Depth"
-description: >
-  PR Review Depth
-sidebar_position: 21
----
-
-## What is this metric? 
-The average number of comments of PRs in the selected time range.
-
-## Why is it important?
-PR Review Depth (in Comments per RR) is related to the quality of code review, indicating how thorough your team reviews PRs.
-
-## Which dashboard(s) does it exist in?
-- Engineering Throughput and Cycle Time
-- Engineering Throughput and Cycle Time - Team View
-
-## How is it calculated?
-This metric is calculated by counting the total number of PR comments divided by the total number of PRs in the selected time range.
-
-<b>Data Sources Required</b>
-
-This metric relies on PR/MRs collected from GitHub or GitLab.
-
-<b>Transformation Rules Required</b>
-
-N/A
-
-<b>SQL Queries</b>
-
-
-## How to improve?
-1. Encourage multiple reviewers to review a PR;
-2. Review Depth is an indicator for generally how thorough your PRs are reviewed, but it does not mean the deeper the better. In some cases, spending an excessive amount of resources on reviewing PRs is also not recommended.
\ No newline at end of file
diff --git a/docs/Metrics/ReviewTime.md b/docs/Metrics/ReviewTime.md
deleted file mode 100644
index 1b7a310e1..000000000
--- a/docs/Metrics/ReviewTime.md
+++ /dev/null
@@ -1,39 +0,0 @@
----
-title: "PR Review Time"
-description: >
-  PR Review Time
-sidebar_position: 17
----
-
-## What is this metric? 
-The time it takes to complete a code review of a PR before it gets merged. 
-
-## Why is it important?
-Code review should be conducted almost in real-time and usually take less than two days. Abnormally long PR Review Time may indicate one or more of the following problems:
-1. The PR size is too large that makes it difficult to review.
-2. The team is too busy to review code.
-
-## Which dashboard(s) does it exist in?
-- Engineering Throughput and Cycle Time
-- Engineering Throughput and Cycle Time - Team View
-
-
-## How is it calculated?
-This metric is the time frame between when the first comment is added to a PR, to when the PR is merged.
-
-<b>Data Sources Required</b>
-
-This metric relies on PR/MRs collected from GitHub or GitLab.
-
-<b>Transformation Rules Required</b>
-
-N/A
-
-<b>SQL Queries</b>
-
-
-## How to improve?
-1. Use DevLake's dashboards to monitor your delivery progress;
-2. Use automated tests for the initial work;
-3. Reduce PR size;
-4. Analyze the causes for long reviews.
\ No newline at end of file
diff --git a/docs/Metrics/TimeToMerge.md b/docs/Metrics/TimeToMerge.md
deleted file mode 100644
index f080a8e56..000000000
--- a/docs/Metrics/TimeToMerge.md
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title: "PR Time To Merge"
-description: >
-  PR Time To Merge
-sidebar_position: 19
----
-
-## What is this metric? 
-The time it takes from when a PR is issued to when it is merged. Essentially, PR Time to Merge = PR Pickup Time + PR Review Time.
-
-## Why is it important?
-The delay of reviewing and waiting to review PRs has large impact on delivery speed, while reasonably short PR Time to Merge can indicate frictionless teamwork. Improving on this metric is the key to reduce PR cycle time.
-
-## Which dashboard(s) does it exist in?
-- GitHub Basic Metrics
-- Bi-weekly Community Retro
-
-
-## How is it calculated?
-<b>Data Sources Required</b>
-
-This metric relies on PR/MRs collected from GitHub or GitLab.
-
-<b>Transformation Rules Required</b>
-
-N/A
-
-<b>SQL Queries</b>
-
-
-## How to improve?
-1. Use DevLake's dashboards to monitor your delivery progress;
-2. Have a habit to check for hanging PRs regularly;
-3. Set up alerts for your communication tools (e.g. Slack, Lark) when new PRs are issued;
-4. Reduce PR size;
-5. Analyze the causes for long reviews.
diff --git a/static/img/Metrics/PR-merge-rate-monthly.png b/static/img/Metrics/PR-merge-rate-monthly.png
new file mode 100644
index 000000000..86c2d0aaa
Binary files /dev/null and b/static/img/Metrics/PR-merge-rate-monthly.png differ
diff --git a/static/img/Metrics/pr-coding-time-monthly.png b/static/img/Metrics/pr-coding-time-monthly.png
new file mode 100644
index 000000000..ac250e1fe
Binary files /dev/null and b/static/img/Metrics/pr-coding-time-monthly.png differ
diff --git a/static/img/Metrics/pr-count-monthly.png b/static/img/Metrics/pr-count-monthly.png
new file mode 100644
index 000000000..c54d2268d
Binary files /dev/null and b/static/img/Metrics/pr-count-monthly.png differ
diff --git a/static/img/Metrics/pr-cycle-time-monthly.png b/static/img/Metrics/pr-cycle-time-monthly.png
new file mode 100644
index 000000000..6d9c2a25f
Binary files /dev/null and b/static/img/Metrics/pr-cycle-time-monthly.png differ
diff --git a/static/img/Metrics/pr-pickup-time-monthly.png b/static/img/Metrics/pr-pickup-time-monthly.png
new file mode 100644
index 000000000..b3b481914
Binary files /dev/null and b/static/img/Metrics/pr-pickup-time-monthly.png differ
diff --git a/static/img/Metrics/pr-review-depth-monthly.png b/static/img/Metrics/pr-review-depth-monthly.png
new file mode 100644
index 000000000..59481ef1a
Binary files /dev/null and b/static/img/Metrics/pr-review-depth-monthly.png differ
diff --git a/static/img/Metrics/pr-review-time-monthly.png b/static/img/Metrics/pr-review-time-monthly.png
new file mode 100644
index 000000000..e0b4377b3
Binary files /dev/null and b/static/img/Metrics/pr-review-time-monthly.png differ
diff --git a/static/img/Metrics/pr-size-monthly.png b/static/img/Metrics/pr-size-monthly.png
new file mode 100644
index 000000000..c8579a6d6
Binary files /dev/null and b/static/img/Metrics/pr-size-monthly.png differ
diff --git a/static/img/Metrics/pr-status-distribution-monthly.png b/static/img/Metrics/pr-status-distribution-monthly.png
new file mode 100644
index 000000000..9a4971142
Binary files /dev/null and b/static/img/Metrics/pr-status-distribution-monthly.png differ
diff --git a/static/img/Metrics/pr-time-to-merge-monthly.png b/static/img/Metrics/pr-time-to-merge-monthly.png
new file mode 100644
index 000000000..638e3c572
Binary files /dev/null and b/static/img/Metrics/pr-time-to-merge-monthly.png differ
diff --git a/static/img/Metrics/pr-time-to-merge-text.png b/static/img/Metrics/pr-time-to-merge-text.png
new file mode 100644
index 000000000..4f7c1b8d0
Binary files /dev/null and b/static/img/Metrics/pr-time-to-merge-text.png differ